I’ve been diving into PostgreSQL memory settings lately, looking for ways to fine-tune them and boost performance. After digging through a bunch of resources and running some tests, I found a few key settings that can make a big difference.
If you’re running PostgreSQL and want to optimize how it uses memory, here are a few important settings to pay attention to.
Key Memory Settings to Tune
Let’s break down the most important PostgreSQL memory settings you should consider adjusting.
- shared_buffers
Think of shared_buffers as PostgreSQL’s personal memory stash where it keeps frequently accessed data. The idea is to store as much of your data in memory as possible to reduce those expensive disk reads.
Recommended Setting:
Most experts suggest setting this to about 25% of your system’s total RAM.
For example, if you’ve got 16GB of RAM, setting it to around 4GB is a good starting point.
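For instance, on that 16GB machine (a minimal sketch – the data directory path is a placeholder for your own):
ALTER SYSTEM SET shared_buffers = '4GB';
-- shared_buffers is not reloadable; restart the server for it to take effect, e.g.:
-- pg_ctl restart -D /path/to/data
SHOW shared_buffers; -- verify after the restart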
- work_mem
Whenever PostgreSQL processes queries that involve sorting, joining, or grouping data, it uses work_mem. This setting controls how much memory each operation gets before it starts writing temporary files to disk.
Example:
- If work_mem = 50MB and your query has 5 sorting tasks, it could use 50MB x 5 = 250MB of memory.
- Now imagine 10 users running similar queries – that’s 250MB x 10 = 2.5GB of memory being used.
Tip:
- Set it too low, and queries might get slower due to disk usage.
- Set it too high, and you might run out of memory fast, especially with lots of users.
Sometimes, bigger work_mem values can actually slow things down because PostgreSQL has to manage large memory chunks.
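A quick way to see whether an operation is spilling at your current work_mem is to run the query under EXPLAIN (ANALYZE, BUFFERS) – a sketch here, using a table from the example later in this post:
SET work_mem = '4MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM finance_transaction ORDER BY id;
-- "Sort Method: external merge  Disk: ...kB" means the sort spilled to disk;
-- "Sort Method: quicksort  Memory: ...kB" means it fit within work_mem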
- effective_cache_size
This setting is a bit different – it doesn’t actually allocate memory. It’s an estimate you give the query planner of how much memory is available for caching data, counting both the OS file cache and shared_buffers. PostgreSQL uses this value to decide whether index scans are likely to be cheap or whether it should fall back to sequential scans.
Setting it wrong:
- Too low? PostgreSQL might avoid indexes, thinking memory is scarce.
- Too high? PostgreSQL might get too optimistic and make bad decisions.
A good rule of thumb: Set this to 50-75% of your total RAM, especially if PostgreSQL is the main workload on the machine.
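On a dedicated 16GB server, that rule of thumb works out to something like the following (effective_cache_size is reloadable, so no restart is needed):
ALTER SYSTEM SET effective_cache_size = '12GB'; -- ~75% of 16GB
SELECT pg_reload_conf();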
- maintenance_work_mem
This one is all about how much memory PostgreSQL can use for things like VACUUM, CREATE INDEX, and other maintenance tasks. Setting this value higher can speed up these operations without slowing down normal database activity.
Suggested Setting:
If you’ve got the room, bump it up to 1GB or more to make maintenance tasks fly.
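For example, you can raise it just for one session before a big index build (the index name and column here are hypothetical, purely for illustration):
SET maintenance_work_mem = '1GB';
CREATE INDEX idx_ft_billing_record ON finance_transaction (billing_record); -- hypothetical index
RESET maintenance_work_mem;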
Ways to Modify the Memory Settings
PostgreSQL allows modifying memory settings in multiple ways, depending on your requirements and risk tolerance. Here are the most effective approaches:
- Modifying Memory Settings Globally
For permanent changes across all sessions and queries, modify the PostgreSQL configuration file (postgresql.conf) or use ALTER SYSTEM:
ALTER SYSTEM SET work_mem = '500MB';
ALTER SYSTEM SET shared_buffers = '8GB';
ALTER SYSTEM SET effective_cache_size = '20GB';
SELECT pg_reload_conf(); -- Applies reloadable settings without a restart (shared_buffers still needs one)
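Not every setting is reloadable, though. You can check which changed settings are still waiting on a restart via pg_settings:
SELECT name, setting, pending_restart
FROM pg_settings
WHERE name IN ('work_mem', 'shared_buffers', 'effective_cache_size');
-- pending_restart = true means the new value won't apply until the server restarts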
- Changing Settings Per Session
For temporary adjustments that affect only the current connection:
SET work_mem = '100MB';
SET effective_cache_size = '16GB';
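Session-level changes last until the connection closes; you can verify or undo them at any time:
SHOW work_mem; -- confirm the session value
RESET work_mem; -- revert to the configured default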
My Real Example of PostgreSQL Memory Tuning
Scenario
A PostgreSQL 10 database was running on a server with 30GB of RAM, of which roughly 15GB was typically free. CSV exports of financial transactions were slow: a complex query joining four large tables was taking 33 seconds to produce results:
SELECT * FROM finance_transaction ft
INNER JOIN billing_record br ON br.id = ft.billing_record
INNER JOIN transaction_log tl ON ft.transaction_log = tl.id
INNER JOIN transaction_audit_history tah ON tah.finance_transaction = ft.id;
Table sizes:
- finance_transaction (~1 million rows)
- transaction_log (~1 million rows)
- transaction_audit_history (~4 million rows)
- billing_record (~1 million rows)
Default memory settings:
| Setting | Default Value |
| --- | --- |
| shared_buffers | 128MB |
| work_mem | 4MB |
| maintenance_work_mem | 64MB |
| effective_cache_size | 4GB |
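You can confirm what your own server is using with a quick look at pg_settings:
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem', 'effective_cache_size');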
Query Execution Plan (EXPLAIN ANALYZE)
Gather (cost=1071962.33..3055242.09 rows=6022483 width=4782) (actual time=26603.645..33465.648 rows=4847517 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Hash Join (cost=1070962.33..2451993.79 rows=1505621 width=4782) (actual time=26817.703..29038.177 rows=969503 loops=5)
Hash Cond: (ft.id = tah.finance_transaction)
-> Hash Join (cost=716707.62..1254298.44 rows=403544 width=4591) (actual time=18471.362..19081.998 rows=259931 loops=5)
Hash Cond: (br.id = ft.billing_record)
-> Parallel Seq Scan on billing_record br (cost=0.00..39488.92 rows=335492 width=3598) (actual time=0.005..42.297 rows=268393 loops=5)
-> Hash (cost=501594.96..501594.96 rows=1564453 width=993) (actual time=18468.587..18468.589 rows=1299656 loops=5)
Buckets: 4096 Batches: 512 Memory Usage: 708kB
-> Hash Join (cost=103452.20..501594.96 rows=1564453 width=993) (actual time=9094.933..15042.241 rows=1300948 loops=5)
Hash Cond: (tl.id = ft.transaction_log)
-> Seq Scan on transaction_log tl (cost=0.00..33484.53 rows=1564453 width=766) (actual time=0.019..133.507 rows=1564453 loops=5)
-> Hash (cost=46549.31..46549.31 rows=1300631 width=227) (actual time=9092.767..9092.768 rows=1300948 loops=5)
Buckets: 16384 Batches: 128 Memory Usage: 2199kB
-> Seq Scan on finance_transaction ft (cost=0.00..46549.31 rows=1300631 width=227) (actual time=0.005..143.568 rows=1300948 loops=5)
-> Hash (cost=165645.54..165645.54 rows=4852654 width=191) (actual time=8267.360..8267.360 rows=4852654 loops=5)
Buckets: 32768 Batches: 512 Memory Usage: 2059kB
-> Seq Scan on transaction_audit_history tah (cost=0.00..165645.54 rows=4852654 width=191) (actual time=0.020..789.242 rows=4852654 loops=5)
Planning time: 0.346 ms
Execution time: 33586.541 ms
Analyzing the query execution plan:
| Issue | Observation from EXPLAIN ANALYZE | Possible cause | Solution |
| --- | --- | --- | --- |
| Disk spills in hash joins | Buckets: 4096, Batches: 512, Memory Usage: 708kB | work_mem too low (default 4MB) | Increase work_mem to 300MB: ALTER SYSTEM SET work_mem = '300MB'; |
| High I/O from sequential scans | Parallel Seq Scans on billing_record, finance_transaction, and transaction_log | shared_buffers too low (default 128MB) | Increase shared_buffers to 8GB: ALTER SYSTEM SET shared_buffers = '8GB'; |
| Planner underestimates available RAM | Chooses sequential scans instead of index scans | effective_cache_size too low (default 4GB) | Increase effective_cache_size to 20GB: ALTER SYSTEM SET effective_cache_size = '20GB'; |
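Applying all three fixes from the table together (remember that shared_buffers only takes effect after a restart):
ALTER SYSTEM SET work_mem = '300MB';
ALTER SYSTEM SET shared_buffers = '8GB';
ALTER SYSTEM SET effective_cache_size = '20GB';
SELECT pg_reload_conf(); -- work_mem and effective_cache_size apply immediately; restart for shared_buffers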
After increasing memory settings, this is the EXPLAIN ANALYZE result:
Gather (cost=1071962.33..3055242.09 rows=6022483 width=4782) (actual time=16228.246..23884.145 rows=4847517 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Hash Join (cost=1070962.33..2451993.79 rows=1505621 width=4782) (actual time=17226.338..20072.376 rows=969503 loops=5)
Hash Cond: (ft.id = tah.finance_transaction)
-> Hash Join (cost=716707.62..1254298.44 rows=403544 width=4591) (actual time=14289.482..15561.150 rows=259931 loops=5)
Hash Cond: (br.id = ft.billing_record)
-> Parallel Seq Scan on billing_record br (cost=0.00..39488.92 rows=335492 width=3598) (actual time=0.005..37.096 rows=268393 loops=5)
-> Hash (cost=501594.96..501594.96 rows=1564453 width=993) (actual time=14287.210..14287.212 rows=1299656 loops=5)
Buckets: 524288 Batches: 8 Memory Usage: 46914kB
-> Hash Join (cost=103452.20..501594.96 rows=1564453 width=993) (actual time=3728.706..6986.704 rows=1300948 loops=5)
Hash Cond: (tl.id = ft.transaction_log)
-> Seq Scan on transaction_log tl (cost=0.00..33484.53 rows=1564453 width=766) (actual time=0.018..123.814 rows=1564453 loops=5)
-> Hash (cost=46549.31..46549.31 rows=1300631 width=227) (actual time=3704.309..3704.309 rows=1300948 loops=5)
Buckets: 2097152 Batches: 2 Memory Usage: 151889kB
-> Seq Scan on finance_transaction ft (cost=0.00..46549.31 rows=1300631 width=227) (actual time=0.018..139.211 rows=1300948 loops=5)
-> Hash (cost=165645.54..165645.54 rows=4852654 width=191) (actual time=2927.466..2927.466 rows=4852654 loops=5)
Buckets: 2097152 Batches: 4 Memory Usage: 246002kB
-> Seq Scan on transaction_audit_history tah (cost=0.00..165645.54 rows=4852654 width=191) (actual time=0.025..630.852 rows=4852654 loops=5)
Planning time: 0.341 ms
Execution time: 24013.565 ms
Performance Comparison: Before vs After Increasing Memory Settings
| Metric | Before (Default Settings) | After (Optimized Memory Settings) | Improvement |
| --- | --- | --- | --- |
| Total execution time | 33.58s | 24.01s | ⬇ ~28.5% faster (~9.5s saved) |
| Hash Join (ft.id = tah.finance_transaction) | 29.03s | 20.07s | ⬇ ~9s faster |
| Hash Join (br.id = ft.billing_record) | 19.08s | 15.56s | ⬇ ~3.5s faster |
| Hash batches (ft.transaction_log) | 128 batches | 2 batches | ✅ Huge reduction in disk spills |
| Hash batches (transaction_audit_history) | 512 batches | 4 batches | ✅ Reduced disk usage |
Choosing the Right Memory Settings: A Case-by-Case Approach
Tuning PostgreSQL memory settings is not a one-size-fits-all exercise: the right work_mem, shared_buffers, and effective_cache_size depend on your database size, number of concurrent users, and query patterns. A few guidelines:
- If your server has plenty of free RAM, you can allocate more memory to PostgreSQL.
- If you have many concurrent users, allocating too much memory per query can cause system instability.
- Run EXPLAIN ANALYZE on slow queries to identify disk spills and hash join inefficiencies.
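A system-wide way to spot those disk spills is the temp-file counters PostgreSQL keeps per database – a quick diagnostic sketch:
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
ORDER BY temp_bytes DESC;
-- Databases with large temp_bytes are writing sort/hash spill files: candidates for a higher work_mem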
If you want to safely test changes without affecting the entire system, apply settings per query instead of globally:
✅ Change settings for a single heavy query:
BEGIN;
SET LOCAL work_mem = '300MB'; -- SET LOCAL only takes effect inside a transaction block
SELECT * FROM finance_transaction ft
INNER JOIN billing_record br ON br.id = ft.billing_record
INNER JOIN transaction_log tl ON ft.transaction_log = tl.id
INNER JOIN transaction_audit_history tah ON tah.finance_transaction = ft.id;
COMMIT; -- work_mem reverts to its previous value once the transaction ends
Conclusion
Tuning PostgreSQL memory settings can make a huge difference in performance, helping your queries run faster and reducing unnecessary disk I/O. The key is to find the right balance based on your database load, available RAM, and how your queries behave.
Key Takeaways:
- Know your workload – Don’t just blindly increase memory settings; analyze how your queries are running first.
- Use EXPLAIN ANALYZE – This will help you spot issues like disk spills and inefficient joins.
- Tweak work_mem carefully – A higher value can make joins and sorting faster, but setting it too high can eat up all your RAM.
- Change settings at the right level – Use global settings for overall improvements, session settings for temporary tweaks, and per-query settings for really heavy operations.
- Monitor the impact – Check logs, track performance, and adjust as needed.
At the end of the day, PostgreSQL tuning is part science, part experimentation. Start small, track the results, and adjust as you go. With the right settings, your database will run smoother, faster, and way more efficiently. 🚀