Is your database slow and crashing? Don't rush to switch to Redis! First optimize these 6 MySQL parameters.
Despite ample CPU and memory resources on the server, the database frequently encounters issues during peak business hours:
• Application logs repeatedly report “Too many connections” errors from MySQL
• Frontend requests get stuck spinning, displaying “Connection timed out”
• Database server CPU spikes instantly to 100%
The root cause is often not hardware bottlenecks, but rather MySQL's overly conservative default configurations that fail to effectively leverage modern server hardware resources.
Avoid rushing to implement heavyweight solutions like caching or database sharding. Experience shows that over 80% of database performance issues can be resolved by adjusting critical MySQL parameters and standardizing SQL writing practices. The following optimization strategies apply to MySQL 5.7/8.0, require no business code modifications, and have been validated in production environments.
I. Optimize Six Core Parameters to Unlock Hardware Potential
Before proceeding, back up your my.cnf configuration file (typically located at /etc/my.cnf or /etc/mysql/my.cnf). Restart the MySQL service after each modification:
systemctl restart mysqld
- max_connections - Eliminate “Connection Exhaustion”
Issue: The default connection limit of 151 is quickly exhausted in microservices or high-concurrency scenarios, preventing applications from obtaining database connections.
Optimization: Raise the limit so the application has enough connection headroom under peak concurrency.
Note: Each connection consumes approximately 256KB of memory. If max_connections=2000 is set, reserve about 500MB of memory. Adjust based on your server's actual memory.
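A minimal my.cnf sketch, assuming a dedicated database server; the figure 2000 is illustrative, not a measured recommendation:
[mysqld]
# Raise the ceiling from the default 151; budget roughly 256KB of memory per connection
max_connections = 2000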
- innodb_buffer_pool_size - The most critical performance accelerator
Issue: The default value is only 128MB, far too small for data and indexes to stay cached in memory. Consequently, most queries fall back to disk I/O and run slowly.
Optimization: Set it to 60%-70% of available memory to keep frequently accessed data resident in memory.
Recommendation: This is the most critical parameter; increasing it yields immediate results.
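For example, on a dedicated 16GB server (10G is an illustrative value in the 60%-70% range; lower it if other services share the host):
[mysqld]
# Keep hot data and indexes resident in memory instead of re-reading them from disk
innodb_buffer_pool_size = 10G
# The parameter is also dynamic on 5.7/8.0, so it can be resized online, e.g.:
# SET GLOBAL innodb_buffer_pool_size = 10737418240;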
- tmp_table_size & max_heap_table_size - Avoid disk-based temporary tables
Issue: Default values (typically 16M-64M) are too small. Complex operations like sorting (ORDER BY) and grouping (GROUP BY) cannot be completed in memory, forcing them to write to disk-based temporary tables, causing a sharp performance drop.
Optimization: Increase the upper limit for in-memory temporary tables.
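A sketch with illustrative values; raise the two settings together, because the effective in-memory limit is the smaller of the two:
[mysqld]
# In-memory temporary table ceiling = min(tmp_table_size, max_heap_table_size)
tmp_table_size      = 256M
max_heap_table_size = 256M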
- thread_cache_size - Reduce connection creation overhead
Issue: The default value (e.g., 9) is too low, causing frequent connection creation/destruction to consume extra CPU resources.
Optimization: Increase the thread cache pool size to allow threads to be reused after short-lived connections are closed.
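A sketch with an illustrative value; watch Threads_created (see Section III) to judge whether it is large enough:
[mysqld]
# Keep finished threads around so new connections can reuse them
thread_cache_size = 64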
- Slow Query Log - The “Microscope” for Pinpointing Performance Bottlenecks
Issue: Disabled by default, preventing identification of inefficient SQL queries.
Optimization: Enable and configure the slow query log to precisely locate problematic SQL statements.
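A sketch of a typical setup; the 1-second threshold and the log path are common starting points, not fixed requirements:
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1    # Log any statement that runs longer than 1 second
# Optional: also log statements that do not use an index
log_queries_not_using_indexes = 1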
- innodb_log_file_size - Enhance write and transaction commit efficiency
Issue: The default 48MB is too small, causing InnoDB to frequently flush redo logs and trigger I/O jitter.
Optimization: Increase the log file size to reduce checkpoint frequency, enabling smoother writes.
⚠️ Important Warning: Change this parameter only after a clean MySQL shutdown. On MySQL 5.7 and 8.0 (before 8.0.30), InnoDB detects the new size at the next startup and recreates the redo log files (ib_logfile0 and ib_logfile1) automatically; there is no need to delete them by hand. From MySQL 8.0.30 onward, this setting is superseded by innodb_redo_log_capacity, which can be changed dynamically.
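A sketch for MySQL 5.7 / 8.0 before 8.0.30; 1G is an illustrative size for a write-heavy workload:
[mysqld]
# Larger redo logs mean fewer checkpoints and smoother write bursts
innodb_log_file_size = 1G
# On MySQL 8.0.30+ use the dynamic replacement instead:
# innodb_redo_log_capacity = 2G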
II. Essential SQL Optimization Practices (Addressing the Root Cause)
Parameter optimization treats symptoms; well-crafted SQL addresses the root cause.
- Create Effective Indexes for Queries
Negative Example: A query that filters on user_id and sorts by create_time triggers a full table scan when neither column is indexed (see the sketch below).
Optimization Solution: Create a composite index that covers both the filter and the sort in one step.
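A sketch of both versions, using a hypothetical orders table with user_id and create_time columns:
-- Negative example: no usable index, so InnoDB scans the whole table and sorts the result
SELECT * FROM orders WHERE user_id = 10086 ORDER BY create_time DESC;
-- Optimization: one composite index serves both the WHERE filter and the ORDER BY
ALTER TABLE orders ADD INDEX idx_user_time (user_id, create_time);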
- Strictly prohibit using SELECT *; retrieve only the fields you need
Negative example: Transmitting unnecessary data wastes network bandwidth and memory.
Optimization approach: Query only the required fields, as shown in the sketch below.
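A sketch with the same hypothetical orders table; the column names are illustrative:
-- Negative example: fetches every column, including large ones the page never displays
SELECT * FROM orders WHERE user_id = 10086;
-- Optimization: list only the columns the application actually uses
SELECT order_id, amount, create_time FROM orders WHERE user_id = 10086;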
- Deep Paging Optimization: Replace OFFSET with WHERE
Negative Example: The larger the OFFSET value, the more data the database must scan and discard, resulting in extremely low efficiency.
Optimization Solution: Use “cursor-based pagination”: remember the ID of the last record on the previous page and filter from there (see the sketch below).
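A sketch of both approaches, again on the hypothetical orders table (assumes order_id is an auto-increment primary key):
-- Negative example: scans and throws away the first 1,000,000 rows
SELECT order_id, amount FROM orders ORDER BY order_id LIMIT 1000000, 20;
-- Optimization: cursor-based pagination, seeded with the last ID from the previous page
SELECT order_id, amount FROM orders WHERE order_id > 1000000 ORDER BY order_id LIMIT 20;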
III. Verifying and Monitoring Optimization Results
After tuning, monitoring must be performed to confirm effectiveness.
Viewing Current Status:
SHOW STATUS LIKE 'Threads_connected'; -- View current connection count
SHOW STATUS LIKE 'Threads_created';   -- View total number of threads created historically; this figure should increase more slowly after optimization
Checking Buffer Pool Hit Rate (should be > 99%):
SHOW ENGINE INNODB STATUS\G
-- Locate the "BUFFER POOL AND MEMORY" section in the output
-- Calculation logic: Hit Rate = (1 - Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests) * 100%
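To pull the two counters used in that formula directly:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- Innodb_buffer_pool_reads         = logical reads that had to go to disk
-- Innodb_buffer_pool_read_requests = total logical read requests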
Analyze slow query logs:
# Analyze using the mysqldumpslow tool
mysqldumpslow -s t /var/log/mysql/slow.log   # Sort by total query time
mysqldumpslow -s c /var/log/mysql/slow.log   # Sort by number of occurrences
IV. Recommended Configuration Template (16GB RAM Server)
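The following my.cnf is a sketch, assuming a dedicated 4-core / 16GB server running MySQL 5.7 or 8.0; every value is a starting point to be validated against your own workload, not a universal recommendation:
[mysqld]
# Connections
max_connections   = 2000
thread_cache_size = 64

# Memory
innodb_buffer_pool_size = 10G    # ~60%-70% of 16GB on a dedicated host
tmp_table_size          = 256M
max_heap_table_size     = 256M

# Redo log (MySQL 5.7 / 8.0 before 8.0.30; use innodb_redo_log_capacity on 8.0.30+)
innodb_log_file_size = 1G

# Slow query log
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1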
Summary: Three Essential Steps for Database Performance Optimization
Tune Parameters: Enable MySQL to fully leverage modern hardware (memory, CPU) – the fastest way to see results.
Write Smart, Query Smart: Follow SQL best practices (indexes, avoid SELECT *, optimize pagination) to eliminate performance risks at the source.
Continuous Monitoring: Proactively identify issues using tools like slow query logs to enable data-driven, ongoing optimization.
By following these guidelines, a properly configured 4-core, 16GB MySQL server can comfortably handle tens of millions of requests per day, with peaks in the 3,000–5,000 QPS range. Don't let outdated default configurations become a bottleneck for your business growth.

