This is a continuation of the second post.

What do you need to know?

  • Some operational knowledge of the application.
  • Users:
    • An idea of how users use the application in production:
      • How many users does the application need to support at the time of deployment? After 6 months? After 12 months?
      • It is more important to plan for the peaks than for the troughs.
    • How many users will use the application concurrently?
    • How many additional users might access the application over time? A new application can attract extra users simply through novelty, and some applications have predictable spikes; a university website, for example, sees a high number of users during enrollment.
  • Where are the users located?
  • How will they connect to the application?
  • Knowledge of the application architecture: client-server configuration, database, and so on.

Prerequisites:

  1. Application stability, with major functional defects resolved: if the application is not functionally stable, functional failures will mask the issues caused by poor performance. Poorly performing SQL queries or undetected errors such as 404s can mask the real issues as well.
  2. A suitable performance testing environment.
  3. Choosing an appropriate performance testing tool.
  4. Setting realistic and appropriate performance targets.
  5. Identifying business-critical transactions.
  6. Preparing high-quality test data.
  7. Identifying server and network monitoring KPIs.
  8. Allocating enough time and resources for performance testing.

Performance Targets

These can be:

  • Service-oriented indicators: availability/uptime, concurrency, scalability and throughput, and response time.

Availability:

The application should be available to end users at all times. A successful ping to the web server does not always mean that the application is available. Also, the application may be available at modest loads but start to time out or return errors as the load increases.
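One way to make this concrete is to check availability at the application level rather than the network level. Below is a minimal sketch in Python using the requests library; the health-check URL is a placeholder, not a real endpoint:

```python
import requests

def is_available(url, timeout=5):
    """Return True only if the application answers with HTTP 200 in time.

    A successful ping or TCP connect to the web server is not enough;
    here we require an actual application-level response.
    """
    try:
        response = requests.get(url, timeout=timeout)
        return response.status_code == 200
    except requests.RequestException:
        # Timeouts and connection errors count as "not available"
        return False

if __name__ == "__main__":
    # Hypothetical endpoint, used purely for illustration
    print(is_available("https://example.com/health"))
```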

Concurrency: Refers to the number of users accessing the application simultaneously. From a tool's perspective, concurrency is the number of virtual users generated in a given time, which is not necessarily equal to the number of simultaneous users actually interacting with the application. A tool might generate 100 users, but at any given point in time only 50 might be performing operations on the application; in that case, the performance testing results are for 50 users rather than 100.

If your script involves logging in, performing some action, and logging out, not all users will be logged in to the application simultaneously. The test script has to be designed to maintain the intended concurrency, as sketched below.
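One common way to do this is to keep a fixed pool of virtual users cycling through login, action, and logout until the test ends. Here is a minimal sketch using Python threads and the requests library; the base URL, paths, and credentials are placeholders, not the actual application under test:

```python
import threading
import time
import requests

BASE_URL = "https://example.com"   # placeholder for the application under test
CONCURRENT_USERS = 50              # steady-state concurrency to maintain
TEST_DURATION_SECONDS = 300

def user_session(stop_at):
    """One virtual user: log in, perform an action, log out, repeat until the test ends."""
    session = requests.Session()
    while time.time() < stop_at:
        try:
            session.post(f"{BASE_URL}/login", data={"user": "test", "password": "test"})
            session.get(f"{BASE_URL}/dashboard")   # a business-critical transaction
            session.post(f"{BASE_URL}/logout")
        except requests.RequestException:
            pass  # a failed iteration should not kill the virtual user

def run_test():
    stop_at = time.time() + TEST_DURATION_SECONDS
    users = [threading.Thread(target=user_session, args=(stop_at,))
             for _ in range(CONCURRENT_USERS)]
    for u in users:
        u.start()
    for u in users:
        u.join()

if __name__ == "__main__":
    run_test()
```

Note that even here, only the users currently inside the loop body are truly concurrent on the application at any instant, which is exactly the gap between tool-side and application-side concurrency described above.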

Throughput: Throughput is often a more important performance target than concurrency for applications that are stateless, that is, where there is no concept of a traditional logged-in user.

The number of hits in a given time frame matters more than the number of concurrent users, and it is generally measured per second or per minute.
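As an illustration of how throughput is derived, the sketch below counts hits per second from a list of request timestamps; the sample data is made up for the example:

```python
from collections import Counter

def throughput_per_second(timestamps):
    """Given request timestamps (seconds since epoch), return hits per second.

    The result maps each whole second to the number of requests that
    arrived during it, i.e. the observed throughput for that second.
    """
    return Counter(int(ts) for ts in timestamps)

# Example: 6 requests spread over roughly 2 seconds
sample = [100.1, 100.4, 100.9, 101.2, 101.5, 101.8]
print(throughput_per_second(sample))              # Counter({100: 3, 101: 3})
print(len(sample) / (max(sample) - min(sample)))  # ~3.5 hits/second over the window
```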

A good rule of thumb is to add 10% to your anticipated go-live concurrency or throughput target; for example, if you expect 500 concurrent users at go-live, test against 550.

Response Time: Even at the target level of concurrent users, the application should respond to user requests in a timely fashion. Deciding whether a response time is acceptable depends on user perception and the application type.
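Response time targets are commonly tracked as an average plus a high percentile, since a few very slow requests can hide behind a healthy mean. A minimal sketch that summarises a list of measured response times; the numbers are illustrative, and the percentile uses a simple nearest-rank approximation:

```python
import statistics

def response_time_summary(samples_ms):
    """Summarise response times (in milliseconds) collected during a test."""
    ordered = sorted(samples_ms)
    p90_index = max(0, int(len(ordered) * 0.9) - 1)  # nearest-rank approximation
    return {
        "average_ms": statistics.mean(ordered),
        "p90_ms": ordered[p90_index],   # 90% of requests completed at or below this time
        "max_ms": ordered[-1],
    }

# Example: most requests are fast, but the slowest ones pull the tail up
print(response_time_summary([120, 130, 150, 160, 180, 200, 210, 250, 900, 1200]))
```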

Network Utilization: The bandwidth available between the server and the user is important.

In-house testing might not reveal network problems, but the picture changes when real users access the application over real networks.

The following network metrics should be measured:

Data volume: The amount of data presented to the network. High data volume, combined with bandwidth restrictions and network latency, does not make for good performance.

Data throughput: The rate at which data is presented to the network. A sudden reduction in throughput is often the first symptom of capacity problems, at which point users start receiving server timeouts.

Data error rate: A large number of network errors that require data retransmission will also hurt application performance.
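All three metrics can be derived from the same raw counters: bytes transferred, elapsed time, and packet/retransmission counts. A minimal sketch, assuming those counters come from whatever network monitoring tool is in use (the parameter names are illustrative only):

```python
def network_metrics(bytes_transferred, elapsed_seconds, packets_sent, packets_retransmitted):
    """Derive the three network KPIs from raw counters."""
    return {
        "data_volume_mb": bytes_transferred / (1024 * 1024),
        "throughput_mbps": (bytes_transferred * 8) / (elapsed_seconds * 1_000_000),
        "error_rate_pct": 100.0 * packets_retransmitted / packets_sent,
    }

# Example: 50 MB moved in 60 s with 2% of packets retransmitted
print(network_metrics(50 * 1024 * 1024, 60, 100_000, 2_000))
```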

Server Utilization:

The application will have limits on the server resources it can use. Server KPIs should be monitored while the servers are under load.

The server metrics to measure are CPU utilization, memory utilization, disk input/output, and disk space.
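These KPIs can be sampled programmatically while the load test runs. A minimal sketch using the psutil library (assuming it is installed on the machine being monitored):

```python
import time
import psutil

def sample_server_kpis(interval_seconds=5, samples=3):
    """Print CPU, memory, disk I/O, and disk space at a fixed interval."""
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=1)       # % CPU averaged over 1 second
        memory = psutil.virtual_memory().percent   # % physical memory in use
        disk_io = psutil.disk_io_counters()        # cumulative read/write bytes
        disk = psutil.disk_usage("/")              # disk space on the root volume
        print(f"cpu={cpu}% mem={memory}% "
              f"disk_read_mb={disk_io.read_bytes / 2**20:.1f} "
              f"disk_write_mb={disk_io.write_bytes / 2**20:.1f} "
              f"disk_used={disk.percent}%")
        time.sleep(interval_seconds)

if __name__ == "__main__":
    sample_server_kpis()
```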
