Performance Testing Tool Usage
Basic Concepts of Load Testing Tools
Load testing tools are software used to simulate high-concurrency requests, helping developers evaluate system performance under load. The Node.js ecosystem offers various load testing tools, such as artillery, k6, and autocannon. These tools can simulate thousands or even tens of thousands of users accessing a system simultaneously, testing server throughput, response time, and stability.
Why Load Testing Tools Are Needed
An application may run smoothly in a development environment but crash after deployment due to a sudden surge in traffic. Load testing tools can expose performance bottlenecks in advance, such as insufficient database connection pools, memory leaks, or CPU overload. For example, an e-commerce website might face 100 times more traffic during a promotion than usual. Load testing helps estimate the scale of server expansion required.
Comparison of Common Node.js Load Testing Tools
artillery
A Node.js-based load testing tool that supports YAML or JSON configuration for test scenarios. Suitable for complex multi-step testing workflows, such as simulating user login followed by product browsing.
# artillery script example
config:
  target: "https://api.example.com"
  phases:
    - duration: 60
      arrivalRate: 50
scenarios:
  - flow:
      - get:
          url: "/products"
      - post:
          url: "/cart"
          json:
            productId: 123
k6
Developed in Go but provides a JavaScript API, offering higher performance than pure Node.js tools. Ideal for scenarios requiring precise control over request timing.
import http from 'k6/http';
import { check, sleep } from 'k6';

export default function () {
  const res = http.get('https://test-api.example.com');
  check(res, {
    'status is 200': (r) => r.status === 200,
  });
  sleep(1);
}
autocannon
A lightweight HTTP load testing tool, suitable for quickly testing single API endpoints. It is easy to install and needs no configuration file; the command below opens 100 concurrent connections (-c) for 20 seconds (-d):
npx autocannon -c 100 -d 20 https://api.example.com/users
Designing Load Testing Scenarios
Step-by-Step Load Testing
Gradually increase the number of concurrent users to observe system performance inflection points. For example:
- Initial phase: 50 users/second for 2 minutes
- Growth phase: Increase by 20 users/second every 30 seconds
- Peak phase: Maintain 200 users/second for 5 minutes
# artillery step test configuration
phases:
  - duration: 120
    arrivalRate: 50
    name: "Warm up"
  - duration: 300
    arrivalRate: 50
    rampTo: 200
    name: "Ramp up"
  - duration: 300
    arrivalRate: 200
    name: "Sustain"
Mixed Business Scenarios
Simulate real user behavior combinations, such as:
- 30% of users browse product lists
- 40% of users search for products
- 20% of users add items to cart
- 10% of users complete payments
// k6 mixed scenario example
import http from 'k6/http';
import { group, sleep } from 'k6';

export default function () {
  group('Browse flow', function () {
    http.get('https://shop.com/products');
    sleep(Math.random() * 3);
  });
  group('Checkout flow', function () {
    http.post(
      'https://shop.com/cart',
      JSON.stringify({ productId: 456 }),
      { headers: { 'Content-Type': 'application/json' } }
    );
    sleep(1);
    http.get('https://shop.com/checkout');
  });
}
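The percentage split listed above can be implemented with a simple weighted picker. Here is a minimal sketch in plain Node.js (the flow names and the `pickFlow` helper are illustrative; inside a k6 script you would switch on the result and run the matching `group()`):

```javascript
// Pick a flow according to the traffic mix:
// 30% browse, 40% search, 20% add-to-cart, 10% checkout.
const flows = [
  { name: 'browse', weight: 0.3 },
  { name: 'search', weight: 0.4 },
  { name: 'addToCart', weight: 0.2 },
  { name: 'checkout', weight: 0.1 },
];

function pickFlow(rand = Math.random()) {
  let cumulative = 0;
  for (const flow of flows) {
    cumulative += flow.weight;
    if (rand < cumulative) return flow.name;
  }
  // Guard against floating-point drift in the cumulative sum.
  return flows[flows.length - 1].name;
}
```

Over many virtual-user iterations, the random draws converge to the configured percentages.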
Interpreting Key Performance Metrics
Response Time
- p95: 95% of requests complete within this time; the slowest 5% exceed it
- p99: 99% of requests complete within this time; the slowest 1% exceed it
- Average response time: the mean duration of all requests, which can mask long-tail latency
Error Rate
The proportion of requests that return non-2xx/3xx HTTP status codes or fail outright. A healthy system should have an error rate below 1%.
Throughput
The number of requests processed by the system per unit of time, typically measured in RPS (requests per second).
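Percentiles such as p95 and p99 are computed by sorting the observed latencies and indexing into the sorted list. A minimal sketch using the nearest-rank method (one of several common percentile definitions; load testing tools may interpolate instead):

```javascript
// Nearest-rank percentile: the smallest sample such that at least
// p% of all samples are <= it.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// Latencies in ms from a hypothetical run
const latencies = [101, 98, 250, 102, 103, 99, 105, 400, 104, 100];
console.log(percentile(latencies, 50)); // → 102
console.log(percentile(latencies, 95)); // → 400
```

Note how one slow outlier (400 ms) dominates p95 while barely moving the average, which is why percentiles are preferred over means for latency reporting.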
Practical Example: Testing an Express API
Assume a user query API:
// server.js
const express = require('express');
const app = express();

app.get('/users/:id', (req, res) => {
  // Simulate a 100 ms database query
  setTimeout(() => {
    res.json({ id: req.params.id, name: 'Test User' });
  }, 100);
});

app.listen(3000);
Testing with autocannon:
npx autocannon -c 100 -d 30 http://localhost:3000/users/123
Typical output includes:
Running 30s test @ http://localhost:3000/users/123
100 connections
┌─────────┬───────┬───────┬───────┬───────┬──────────┬─────────┬────────┐
│ Stat │ 2.5% │ 50% │ 97.5% │ 99% │ Avg │ Stdev │ Max │
├─────────┼───────┼───────┼───────┼───────┼──────────┼─────────┼────────┤
│ Latency │ 102ms │ 105ms │ 121ms │ 125ms │ 106.23ms │ 5.12ms │ 138ms │
└─────────┴───────┴───────┴───────┴───────┴──────────┴─────────┴────────┘
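With 100 open connections and an average latency around 106 ms, the expected throughput follows from Little's law (concurrency ≈ throughput × latency). A quick sanity check, assuming the numbers from the table above:

```javascript
// Little's law: L = X * W, so throughput X = connections / avgLatency.
function expectedRps(connections, avgLatencyMs) {
  return connections / (avgLatencyMs / 1000);
}

// ≈ 941 req/s for the run above; if the tool reports far less,
// the test client or the network may be the bottleneck.
console.log(Math.round(expectedRps(100, 106.23))); // → 941
```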
Advanced Technique: Testing with Authentication
Testing an API requiring JWT authentication:
// k6 test with authentication
import http from 'k6/http';
import crypto from 'k6/crypto';
import encoding from 'k6/encoding';

function generateJWT() {
  const header = JSON.stringify({ alg: 'HS256', typ: 'JWT' });
  const payload = JSON.stringify({ userId: 123, exp: Math.floor(Date.now() / 1000) + 3600 });
  const key = 'secret';
  // JWT uses unpadded base64url, not plain base64
  const unsignedToken = `${encoding.b64encode(header, 'rawurl')}.${encoding.b64encode(payload, 'rawurl')}`;
  const signature = crypto.hmac('sha256', key, unsignedToken, 'base64rawurl');
  return `${unsignedToken}.${signature}`;
}

export default function () {
  const token = generateJWT();
  const params = {
    headers: { Authorization: `Bearer ${token}` },
  };
  http.get('https://api.example.com/protected', params);
}
Troubleshooting Common Issues
Test Client Becoming a Bottleneck
- Symptom: the load-generating machine's CPU at 100% or its network bandwidth saturated
- Solution: run the load generator in distributed mode or spread it across multiple machines
Highly Variable Test Results
- Possible cause: Backend caching mechanisms or database connection pool limits
- Solution: Extend the test duration so caches warm up, and exclude the warm-up period from reported results
Connection Refused
- Checkpoints:
- Whether the server has connection limits
- OS file descriptor limits
- Firewall rules
Integrating into CI/CD Pipelines
Automating load tests in GitHub Actions:
name: Load Test
on: [push]
jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install -g artillery
      - run: artillery run test/scenarios/basic.yml
        env:
          API_URL: ${{ secrets.TEST_ENDPOINT }}
Performance Optimization Recommendations
Possible optimization directions based on load test results:
- Database level:
  - Add missing indexes
  - Optimize complex queries
  - Consider read/write separation
- Code level:
  - Avoid synchronous blocking operations
  - Use streams for large file processing
  - Implement caching strategies
- Architecture level:
  - Add load balancing
  - Introduce a CDN
  - Consider horizontal scaling
Long-Term Performance Monitoring
Load testing should not be a one-time activity. Establish continuous monitoring:
- Use Prometheus + Grafana to set up monitoring dashboards
- Set alert thresholds for key metrics
- Perform regular load tests (e.g., monthly) to compare historical data
An example Grafana dashboard should include:
- Request response time trends
- Error rate curves
- System resource (CPU/memory) usage
- Database query performance metrics