Let's begin by defining what performance testing is, why it is carried out, and what types of performance tests exist.
What is performance testing?
Performance testing is a type of non-functional testing aimed at evaluating the behavior of a system under load. These tests check how well a system performs when faced with a large number of simultaneous users or operations, and how quickly it responds to requests.
Why do we conduct performance testing?
Performance testing is conducted for several key reasons:
- Identifying bottlenecks in the system
- Determining the system's maximum performance capabilities
- Verifying that the system meets performance requirements
- Comparing performance across different configurations or versions of the system
- Capacity planning and ensuring the system can scale
- Detecting performance issues before production deployment
Types of performance tests
- Load Testing: this test assesses the system's behavior under expected load, simulating typical numbers of users or transactions.
- Stress Testing: stress testing examines the system's behavior under extreme load, often exceeding its normal operational limits.
- Scalability Testing: this test evaluates the system's ability to scale up or down in response to changes in load.
- Endurance Testing: also known as soak testing, this test checks how the system handles the expected load over an extended period of time.
- Capacity Testing: this test determines the maximum number of users or transactions the system can handle while maintaining acceptable performance levels.
- Spike Testing: this test evaluates the system’s response to sudden, short-term spikes in load.
Performance testing - a step-by-step process
Step 1: Client interviews
To conduct effective performance testing, it's best to start by interviewing the client. Here are some key questions you should ask:
- What are the main business goals of the performance tests?
This will help identify the most important aspects for the client and where to focus the testing efforts.
- What are the key business processes or system functionalities?
This will help pinpoint the critical areas that need to be tested.
- What are the typical behaviors or user journeys within the system?
This will enable the creation of realistic test scenarios.
- What is the expected number of concurrent users?
This will help determine the load that needs to be simulated.
- What are the peak usage periods for the system?
This will allow for testing under realistic conditions.
- What are the acceptable response times for key operations?
This will help set the success criteria for the tests.
- Are there any upcoming major changes or events that might affect system load?
This will help prepare for future requirements in testing.
- How often is the system updated, and are any major changes planned?
This will aid in planning the frequency and scope of future tests.
It's crucial to carry out performance tests on machines with identical configurations to the production environment. Avoid testing on the production system, especially when the tests might leave traces, such as database entries.
Client interview example
For the purposes of this article, let’s assume we are conducting performance tests for an online store. Here's an example of how a conversation with a client might look:
Tester: What problems are you trying to solve, or what concerns would you like to address, with these tests?
Client: Our main concern is whether our platform can handle the upcoming promotional campaign. Last year, we experienced performance issues during Black Friday, and we want to avoid that happening again.
Tester: Which elements of your platform are most frequently used by your customers?
Client: The most important operations for us are product browsing and checkout completion. These need to work smoothly, even under heavy load.
Tester: How many users are you expecting during the campaign?
Client: At peak times, we expect around 1,000 users, which is much higher than our usual 100-200 users.
Tester: What are your expectations for system response times for these key operations?
Client: Loading the product list must be under 1 second, and all order-related operations should also be completed in under 1 second.
Step 2: Identifying the backend operations
Next, we move on to identifying the backend operations on the client’s platform. How can we identify these operations? By following the paths indicated by the business. To do this, we’ll use the user interface (UI) while running the developer console.
- Open the application and follow the user paths, such as browsing products and completing a purchase.
- Open the browser’s developer console (usually accessible via F12 or right-click and select "Inspect").
- Go to the "Network" tab to monitor all network requests made by the application.
- Identify key API calls or backend operations that correspond to critical system actions, such as loading product pages or submitting orders.
For this article, I’ve created a simple application that allows you to conduct your own performance tests.
Backend Operation Identification:
Step 3: Create the load testing scenario
We already know which operations on the platform require testing, so we can proceed with creating the load testing scenario. First, we will focus on logging in 1,000 users, and then assign appropriate weights to specific operations based on the insights gained from the interview with the business team.
Scenario plan
User Login: First, we create a method that can log in 1,000 users. Each user should go through the authentication process, simulating realistic system behavior under load. It is essential to ensure that each login is performed with different user data to accurately replicate production conditions.
Assigning weights to operations
Based on the interview with the business team, we know which operations are most frequently performed by users. It is crucial to assign appropriate weights, which will help realistically reflect how the system is used:
- Browsing product listings: This operation is performed most frequently, so it is assigned the highest weight. Users often browse many products before deciding to purchase.
- Navigating to the product page: The next step is viewing the details of a selected product, which has less weight than browsing listings but remains an important element of the scenario.
- Adding to cart: This operation is performed less frequently than browsing products but is a critical step in the purchasing process.
- Finalizing the purchase: This operation will have the lowest weight because not all users make a purchase after browsing products. However, it is important to include it in the scenario because finalizing a purchase loads the system by processing payments and generating orders.
Creating the test scenario
This scenario reflects the actual behavior of users on the platform. Operations such as logging in, browsing products, adding to the cart, and finalizing the purchase will be performed in proportions determined by the assigned weights:
- Login: 1,000 users.
- Browsing products: the largest number of operations.
- Viewing product pages: fewer operations.
- Adding to the cart: even fewer operations than the above.
- Finalizing the purchase: the smallest number of operations.
Other possible scenarios
We can also consider other testing scenarios, such as user authorization performance, where many simultaneous logins and registrations are simulated. This scenario would assess how the system handles a sudden load related to user authentication, which could be useful during marketing campaigns or promotions.
By using various load testing scenarios, we can obtain a comprehensive view of the system's performance and identify potential bottlenecks in different aspects of its operation.
We can perform the tests using many popular tools such as Gatling, JMeter, Locust, or even Postman, which has recently added support for this purpose. In this article, we will focus on Locust. Let's dive straight into it.
Step 4: Set up the environment
First, launch the test application with Docker; this is the application on which you will conduct your first performance tests on your own. To do this, install Docker, then download the Docker Compose file.
Download the script and execute docker-compose up. Once the application is running, you can find its API documentation at http://localhost:5001/apidocs/#/ You should see:
The Postman collection is available here.
Now, let's move on to the Locust tool. Install Python, and then run the command pip install locust. Download the prepared test scenario and save it in a file called locustfile.py.
Before running the scenario, let's discuss its structure and functionality.
Step 5: Script review
The scenario is based on the Locust library and the random module, both of which are imported at the beginning. A constant MAX_RESPONSE_TIME is defined, setting the maximum acceptable response time to 1,000 ms (1 second).
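Based on that description, the top of the file might look like this (a minimal sketch; the exact imports depend on the downloaded script):

import random
from locust import HttpUser, TaskSet, task, between

# Maximum acceptable response time in milliseconds (1 second)
MAX_RESPONSE_TIME = 1000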
Class EcommerceTasks (TaskSet)
This class defines a set of tasks that simulate typical behaviors of an online store user.
- Method on_start(): This method simulates the user login process. A random user number is generated to create a unique username. A POST request is then sent to the /login endpoint, and the response contains an access token, which is saved for later use in subsequent requests. Each user goes through the login process at the start of their session.
- Method get_headers(): Returns headers containing the authorization token, which is used to authorize subsequent requests.
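Based on that description, the login logic might look roughly like this (a sketch: the /login endpoint and the saved token come from the script review, while the exact field names, such as access_token, and the password scheme are assumptions):

class EcommerceTasks(TaskSet):
    def on_start(self):
        # Generate a random user number so each simulated user logs in with unique data
        user_number = random.randint(1, 1_000_000)
        response = self.client.post("/login", json={
            "username": f"user_{user_number}",  # assumed field names
            "password": "test_password",
        })
        # Save the access token for authorizing subsequent requests
        self.token = response.json().get("access_token")

    def get_headers(self):
        # Headers with the authorization token, reused by all tasks
        return {"Authorization": f"Bearer {self.token}"}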
Defined Tasks
- browse_products(): Simulates browsing products. It checks the status code, response time, and the correct format of the JSON response. The products obtained from this task are saved for use in the next tasks.
- view_product(): Simulates viewing detailed information about a randomly selected product.
- add_to_cart(): Simulates adding a random product to the cart. The response contains the cart ID, which is saved for later use.
- finalize_order(): Simulates finalizing the order. After the order process is completed, the cart ID is reset, preparing the scenario for the next order.
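As an illustration, browse_products inside EcommerceTasks might look roughly like this (a sketch assuming a /products endpoint that returns a JSON list; the weight of 10 is the one discussed in the next section):

    @task(10)  # highest weight: browsing is the most frequent operation
    def browse_products(self):
        with self.client.get("/products", headers=self.get_headers(),
                             catch_response=True) as response:
            # Check the status code
            if response.status_code != 200:
                response.failure(f"Unexpected status code: {response.status_code}")
                return
            # Check the response time against the defined threshold
            if response.elapsed.total_seconds() * 1000 > MAX_RESPONSE_TIME:
                response.failure("Response time exceeded MAX_RESPONSE_TIME")
                return
            try:
                # Save the product list for use in later tasks
                self.products = response.json()
                response.success()
            except ValueError:
                response.failure("Response is not valid JSON")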
Class EcommerceUser (HttpUser)
This class defines a Locust user who performs the tasks defined in EcommerceTasks. This user has a defined wait time between tasks, ranging from 1 to 5 seconds.
Each task is assigned different weights, which simulates typical user behaviors in an online store. For example, browsing products has a weight of 10, while finalizing an order has a weight of 2, reflecting the lower probability of completing a purchase compared to browsing the product catalog.
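In code, this class can be as simple as the sketch below (the wait time of 1-5 seconds comes from the script review; the per-task weights, such as 10 for browsing and 2 for finalizing, sit on the individual @task decorators in EcommerceTasks):

class EcommerceUser(HttpUser):
    tasks = [EcommerceTasks]
    # Each simulated user waits 1-5 seconds between tasks
    wait_time = between(1, 5)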
Error Handling
The scenario monitors HTTP status codes, response times, and the correctness of JSON responses. In the event of issues, the tasks are marked as failed.
Dependencies Between Tasks
Some tasks, such as view_product and add_to_cart, depend on data obtained in previous tasks, like the product list or cart ID.
Performance Monitoring
Each task monitors the response time, checking whether it exceeds the value of MAX_RESPONSE_TIME. This allows for monitoring the key performance indicators of the system.
This well-prepared scenario allows you to verify the application by simulating realistic user behaviors and monitoring key performance metrics, such as response times and the correctness of responses.
Step 6: Begin the tests
After reviewing the script, it's time to run Locust.
- Navigate to the directory with the locustfile.py file:
In the terminal/console, navigate to the folder where the locustfile.py file is located, for example:
cd /path/to/folder
- Run Locust with a specified host:
Run the following command to start Locust, specifying the host:
locust -H http://localhost:5001
- Set test parameters:
After starting Locust, go to the following address in your browser: http://localhost:8089. There, you will be able to configure the number of users and the rate at which they will join the tests.
To simulate 1,000 users who gradually join the test at a rate of 10 users per second, enter these values in the Locust web interface and click Start.
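If you prefer to skip the web interface, the same parameters can be passed on the command line so that Locust runs headlessly (flag names may vary slightly between Locust versions):

locust -H http://localhost:5001 --headless --users 1000 --spawn-rate 10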
During the tests, we can observe how many requests are being sent to each endpoint based on the assigned weights, the current number of users, the percentage of failures, and the RPS (requests per second), which shows how many requests all users combined are sending to the system every second.
What should we pay attention to?
- Response times:
- Average, minimum, and maximum response times for each endpoint.
- Trends in response times as the load increases.
- Percentiles of response times (e.g., 90th, 95th, 99th percentile) to get insights into the worst-performing requests.
- Throughput:
- The number of requests per second (RPS) for the entire system and for individual endpoints.
- Errors:
- Number and percentage of errors encountered during the test.
- Types of errors (e.g., timeouts, 500 internal server errors, database connection failures).
- Correlation between error rate and system load, identifying if errors increase as more users are added.
- Resource utilization:
- Monitoring CPU, RAM, disk, and network usage on application and database servers.
- Resource bottlenecks that may be causing performance issues.
- Scalability:
- How well the system handles the increasing number of users/requests.
- Identifying the points at which performance starts to degrade.
When might multiple computers be needed to simulate many users?
- When scaling up: A single machine has limited resources (CPU, memory, bandwidth). Using only one computer could become a bottleneck, limiting the test’s ability to simulate a large number of users.
- When simulating more traffic: By using multiple machines, you can simulate a much larger load, which better reflects real-world usage scenarios.
Locust allows for running tests in a distributed mode, where you have one machine designated as the "master" and multiple "worker" machines to share the load. However, this is a more advanced topic, which can be explored in another article.
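As a quick taste of that mode, the master and workers are typically started like this (check the flags against your Locust version; <master-ip> is a placeholder for the master machine's address):

locust --master
locust --worker --master-host=<master-ip>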
Have questions about performance testing?
I hope the instructions above help you conduct performance tests on your own. If you run into difficulties or simply have questions about the process I’ve outlined here, feel free to contact me at hi@rst.software and I’ll do my best to help you out.