Advanced technical guide for developers & system architects — indexes, queries, caching, workers, monitoring, and real-world fixes.
1. Understanding ERPNext Performance Architecture
ERPNext runs on the Frappe Framework and typically relies on a common stack: MariaDB as the primary datastore, Redis for caching and queues, Gunicorn for request handling, Nginx as a reverse proxy, and RQ workers to process background jobs. Understanding how a user request flows through this stack is the first step in identifying bottlenecks.
User → NGINX → Gunicorn → Frappe App → MariaDB
↘ Redis Cache
↘ Redis Queue (RQ Worker)
Performance problems often come from slow database queries, insufficient Redis capacity, or misconfigured workers that allow backlog to grow.
2. Profiling Slow Queries Using MariaDB Slow Query Log
The MariaDB slow query log is essential to find queries that exceed a given threshold. Start by enabling the slow query log and setting long_query_time appropriate to your environment (e.g. 1–2 seconds).
Steps to enable the slow query log
# my.cnf (or /etc/mysql/my.cnf)
slow_query_log = 1
long_query_time = 2
slow_query_log_file = /var/log/mysql/slow.log
# restart DB
sudo systemctl restart mariadb
Analyze log
mysqldumpslow /var/log/mysql/slow.log
| Finding | Meaning | Fix |
|---|---|---|
| Full table scan | No index used | Add index |
| Query took 5 sec | Bad join or aggregation | Rewrite query / add index |
| Sorting large rows | Large dataset used for ORDER BY | Use WHERE + LIMIT / index for ORDER BY column |
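The grouping that mysqldumpslow performs can be sketched in a few lines of Python: collapse literals so that structurally identical queries count as one. This is a simplified illustration for understanding the tool, not a replacement for it (the sample queries are invented):

```python
import re
from collections import Counter

def normalize(query: str) -> str:
    """Collapse literals so similar queries group together (rough sketch)."""
    q = re.sub(r"'[^']*'", "'S'", query)   # string literals -> 'S'
    q = re.sub(r"\b\d+\b", "N", q)         # numeric literals -> N
    return re.sub(r"\s+", " ", q).strip().lower()

def top_queries(log_lines, n=5):
    """Count normalized statements found in slow-log lines."""
    counts = Counter()
    for line in log_lines:
        if re.match(r"\s*(select|update|delete|insert)\b", line, re.I):
            counts[normalize(line)] += 1
    return counts.most_common(n)

# Hypothetical slow-log entries
sample = [
    "SELECT * FROM `tabGL Entry` WHERE voucher_no='SINV-001';",
    "SELECT * FROM `tabGL Entry` WHERE voucher_no='SINV-002';",
    "UPDATE `tabBin` SET actual_qty=5 WHERE name='BIN-1';",
]
print(top_queries(sample))  # the two SELECTs collapse into one pattern
```

The most frequent normalized patterns are usually the first candidates for indexing.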
3. Using Frappe’s Database Query Debugger
Frappe logs every request, and slow SQL can usually be traced back to the API call that emitted it. Inspect the bench logs to identify which requests are triggering problematic SQL; when debugging a specific ORM call, pass debug=True to frappe.db.sql to print the query together with its EXPLAIN plan.
tail -f logs/web.log logs/worker.log
# Example slow query shown in logs
SELECT * FROM `tabGL Entry` WHERE voucher_no='' ORDER BY posting_date;
Optimization: add appropriate indexes (e.g. voucher_no, posting_date) or rewrite the query to avoid SELECT *.
4. Indexing Strategy for ERPNext Performance
Indexes are the most powerful tool for speeding up SELECT queries — but they come with write-costs. Apply indexes strategically to fields that are frequently used in WHERE, JOIN, or ORDER BY clauses.
When to create an index
- Fields used in filters (e.g., customer, posting_date).
- Fields used for joins or foreign key lookups.
- Columns used in ORDER BY for large result sets.
SQL Example — Add Index
ALTER TABLE `tabSales Invoice`
ADD INDEX customer_index(customer),
ADD INDEX posting_date_index(posting_date);
Avoid adding excessive or duplicate indexes — this slows down INSERT and UPDATE operations and consumes disk space.
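One common form of duplication is a single-column index whose column is already the leading column of a composite index; MariaDB can use the composite index for both. The check can be sketched in pure Python (in practice you would read the column lists from information_schema.STATISTICS; the index names below are hypothetical):

```python
def redundant_indexes(indexes):
    """Return index names whose column list is a strict leading prefix of
    another index on the same table -- the longer index already covers them."""
    names = list(indexes)
    redundant = []
    for a in names:
        for b in names:
            if a != b and len(indexes[a]) < len(indexes[b]) \
                    and indexes[b][:len(indexes[a])] == indexes[a]:
                redundant.append(a)
                break
    return redundant

# Hypothetical indexes on `tabSales Invoice`
idx = {
    "customer_index": ["customer"],
    "customer_date_index": ["customer", "posting_date"],
    "posting_date_index": ["posting_date"],
}
print(redundant_indexes(idx))  # customer_index is covered by customer_date_index
```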
5. Optimizing ERPNext Reports (Pivot, Script, Query Reports)
Reports are common performance culprits. Prefer SQL-based reports for large datasets and ensure filters and pagination are present. Avoid loading entire tables into memory.
Best practices
- Use WHERE filters to reduce dataset size.
- Use SQL queries when aggregations are large.
- Offer LIMITed previews and pagination.
- Ensure report filter columns are indexed.
Example — Optimized Query Report
SELECT posting_date, customer, base_net_total
FROM `tabSales Invoice`
WHERE posting_date BETWEEN %(from_date)s AND %(to_date)s
ORDER BY posting_date DESC
LIMIT 500;
6. Redis Optimization for Cache & Queues
ERPNext uses multiple Redis instances: cache, queue, and socketio. Monitor Redis memory and configure eviction policies if needed. Redis being full is a common source of application slowness.
Monitor Redis
redis-cli info memory
redis-cli monitor
| Issue | Cause | Fix |
|---|---|---|
| Redis memory full | Too many cached entries | Increase memory / enable eviction policy |
| Queue backlog | Insufficient workers | Add more RQ workers |
| Notification lag | socketio overloaded | Restart socketio & increase workers |
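The `redis-cli info memory` output is line-oriented `key:value` text, so a watchdog script can parse it directly. A minimal sketch (the field names `used_memory`, `maxmemory`, and `maxmemory_policy` are real INFO fields; the 80% warning ratio is an assumption):

```python
def parse_info(raw: str) -> dict:
    """Parse 'key:value' lines from `redis-cli info memory` output."""
    out = {}
    for line in raw.splitlines():
        if ":" in line and not line.startswith("#"):
            key, _, val = line.partition(":")
            out[key.strip()] = val.strip()
    return out

def memory_pressure(raw: str, warn_ratio: float = 0.8) -> bool:
    """True when used_memory exceeds warn_ratio of maxmemory (if a limit is set)."""
    info = parse_info(raw)
    maxmem = int(info.get("maxmemory", 0))
    if maxmem == 0:        # no limit configured -> cannot compute a ratio
        return False
    return int(info["used_memory"]) / maxmem >= warn_ratio

# Sample INFO output for illustration
sample = "# Memory\nused_memory:900000\nmaxmemory:1000000\nmaxmemory_policy:allkeys-lru\n"
print(memory_pressure(sample))  # True: 90% of the limit is in use
```

Hook a check like this into a cron job or monitoring agent to get an alert before Redis starts evicting or rejecting writes.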
7. Optimizing RQ Background Workers
Background workers should handle long-running and non-blocking tasks: email sending, heavy calculations, ledger updates, large imports, and report generation. Properly sizing worker pools prevents HTTP timeouts and queue backlogs.
Config — scale workers in common_site_config.json
"background_workers": 4
For finer per-queue control in production, regenerate the supervisor configuration (bench setup supervisor) so additional worker processes are run for the default, short, and long queues.
After updating, restart the bench or container orchestration to apply changes.
bench restart
8. Database Table Cleanup & Archival Strategy
ERP systems accumulate volume quickly. Identify high-growth tables (Activity Log, Error Log, Email Queue, Versions) and either purge or archive old data to a separate historical database to keep primary schema fast.
Cleanup example
bench --site sitename execute frappe.core.doctype.log_settings.log_settings.clear_logs --args '("Error Log", 30)'
When archiving, ensure referential integrity is maintained if the archived data is needed for compliance or audits.
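The core of any purge or archival job is splitting records by a rolling retention window. A pure-Python sketch of that decision, assuming each record carries a `creation` timestamp as ERPNext log tables do (function and field names here are illustrative):

```python
from datetime import datetime, timedelta

def split_by_retention(records, days, now=None):
    """Split records into (keep, purge) lists by a rolling retention window.
    Each record is a dict with a 'creation' datetime."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    keep = [r for r in records if r["creation"] >= cutoff]
    purge = [r for r in records if r["creation"] < cutoff]
    return keep, purge

# Fixed 'now' so the example is deterministic
now = datetime(2025, 6, 1)
rows = [
    {"name": "ERR-1", "creation": datetime(2025, 1, 1)},
    {"name": "ERR-2", "creation": datetime(2025, 5, 20)},
]
keep, purge = split_by_retention(rows, days=30, now=now)
print([r["name"] for r in purge])  # ['ERR-1']
```

In a real job, the `purge` list would be copied to the archive table inside the same transaction that deletes it from the primary table.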
9. HTTP Request vs Background Job Decision
Never run heavy operations inside an HTTP request — they will cause timeouts and a poor UX. Use RQ background tasks for long jobs.
Bad examples (HTTP)
- Recalculate entire stock ledger
- Rebuild months of BOMs
- Send thousands of emails
Good practice — enqueue long jobs
frappe.enqueue("myapp.tasks.process_data", queue="long", timeout=600)
10. Frappe Caching for Speed
Use Frappe cache to store expensive computations and precomputed lists. Cache both short-lived and long-lived values appropriately and invalidate when underlying data changes.
Cache example
@frappe.whitelist()
def get_top_customers():
    data = frappe.cache().get_value("top_customers")
    if data is None:  # cache miss -> compute and store
        data = frappe.db.sql("""
            SELECT customer, SUM(base_net_total) AS total
            FROM `tabSales Invoice`
            GROUP BY customer
            ORDER BY total DESC
            LIMIT 10
        """, as_dict=True)
        # expiry keeps the entry from going permanently stale;
        # to invalidate eagerly on change, call
        # frappe.cache().delete_value("top_customers") from a doc_events hook
        frappe.cache().set_value("top_customers", data, expires_in_sec=3600)
    return data
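The invalidation side of the cache-aside pattern can be demonstrated without a running Frappe site by substituting an in-memory stand-in for frappe.cache() (all names below are illustrative):

```python
class FakeCache:
    """In-memory stand-in for frappe.cache() to illustrate cache-aside."""
    def __init__(self):
        self._store = {}
    def get_value(self, key):
        return self._store.get(key)
    def set_value(self, key, val):
        self._store[key] = val
    def delete_value(self, key):
        self._store.pop(key, None)

cache = FakeCache()
calls = {"db": 0}

def expensive_top_customers():
    calls["db"] += 1                      # pretend this is the heavy SQL
    return [("ACME", 1000), ("Globex", 900)]

def get_top_customers():
    data = cache.get_value("top_customers")
    if data is None:                      # cache miss -> compute and store
        data = expensive_top_customers()
        cache.set_value("top_customers", data)
    return data

def on_invoice_change():
    """Call from a doc_events hook so stale totals are evicted."""
    cache.delete_value("top_customers")

get_top_customers(); get_top_customers()
print(calls["db"])    # 1 -- second call served from cache
on_invoice_change()
get_top_customers()
print(calls["db"])    # 2 -- recomputed after invalidation
```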
11. Optimizing DocType Design
Poor DocType design directly impacts performance. Avoid deeply nested child tables for frequently updated entities, limit use of very large text fields for records queried often, and store binary data such as files in the file store rather than in the DB.
| Component | Recommendation |
|---|---|
| Fields | Keep reasonable — prefer splitting into linked doctypes over >200 fields on a single doctype |
| Attachments | Use File storage (S3 / object store) rather than DB blobs |
| Child tables | Use sparingly; prefer normalized tables for heavy datasets |
12. Improving Stock Ledger Performance
Stock Ledger is typically the most write-heavy and read-heavy area, especially for manufacturing and retail businesses. Prioritize indexing for item_code, warehouse, and posting_date, and minimize backdated entries where possible.
Index example
ALTER TABLE `tabStock Ledger Entry`
ADD INDEX idx_item_warehouse (item_code, warehouse),
ADD INDEX idx_posting_date (posting_date);
Batch operations when doing large updates (e.g., re-post stock by month) to avoid long-running locks.
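Month-by-month batching needs a generator of date ranges; a small sketch (the calendar arithmetic is standard, the function name is my own):

```python
from datetime import date

def month_batches(start, end):
    """Yield (batch_start, batch_end) pairs, one per calendar month,
    covering the half-open interval [start, end)."""
    cur = date(start.year, start.month, 1)
    while cur < end:
        # first day of the following month (rolls over in December)
        nxt = date(cur.year + (cur.month == 12), cur.month % 12 + 1, 1)
        yield max(cur, start), min(nxt, end)
        cur = nxt

batches = list(month_batches(date(2025, 1, 15), date(2025, 4, 1)))
print(len(batches))  # 3 monthly batches
```

Each batch then becomes one enqueued job, so a failure mid-way can be resumed from the last completed month instead of restarting the whole repost.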
13. Materialized Views (Simulated)
MariaDB does not have first-class materialized views, but you can create summary tables updated on a schedule or via hooks. This reduces repeated heavy aggregations.
Approach
- Create a summary table (e.g., sales_summary).
- Populate it daily via a scheduled task or trigger.
- Query the summary table for dashboards and reports.
# Example: scheduled task to update summary
bench execute myapp.utils.update_sales_summary
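The aggregation such an update task performs can be sketched in pure Python; this is a stand-in for the SQL upsert the real job would run, and the field names mirror Sales Invoice columns:

```python
from collections import defaultdict

def build_sales_summary(invoices):
    """Aggregate invoice rows into per (posting_date, customer) totals --
    the shape a sales_summary table would hold."""
    summary = defaultdict(float)
    for inv in invoices:
        summary[(inv["posting_date"], inv["customer"])] += inv["base_net_total"]
    return dict(summary)

rows = [
    {"posting_date": "2025-01-01", "customer": "ACME", "base_net_total": 100.0},
    {"posting_date": "2025-01-01", "customer": "ACME", "base_net_total": 50.0},
    {"posting_date": "2025-01-02", "customer": "Globex", "base_net_total": 75.0},
]
print(build_sales_summary(rows))
```

Dashboards then read this small pre-aggregated table instead of re-scanning every invoice on each page load.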
14. Load Testing ERPNext
Use tools such as Locust or JMeter to simulate user traffic. Focus on realistic use-cases: login, list view, submit document, create invoice, and heavy reports.
Locust example
from locust import HttpUser, task

class ERPUser(HttpUser):
    @task
    def load_dashboard(self):
        self.client.get("/app")
Run tests against a staging environment with similar data volume and hardware to production for reliable results.
15. SQL Query Optimization Techniques
Small SQL changes have big impacts. Avoid SELECT *, avoid functions (like DATE()) in WHERE clauses which prevent index usage, and prefer explicit column lists.
Bad vs Good
-- Bad
SELECT * FROM `tabSales Invoice`;
-- Bad (prevents index usage)
WHERE DATE(posting_date) = '2025-01-01'
-- Good
SELECT name, posting_date, base_net_total FROM `tabSales Invoice` WHERE posting_date = '2025-01-01';
16. GDPR Compliant Data Purging
Regulatory requirements may require removing or anonymizing personal data. Implement retention policies and purge tasks to remove data that both reduces DB size and ensures compliance.
Purge steps
- Define retention window per doctype.
- Create archival tables for records that must be kept but rarely accessed.
- Automate purging via scheduled jobs with audit logs.
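Where records must be retained for audit but the person's identity must not, anonymization is an alternative to deletion. A sketch using a stable one-way hash, so records remain linkable for analytics without identifying anyone (the PII field names are assumptions):

```python
import hashlib

PII_FIELDS = {"email_id", "phone", "customer_name"}  # assumed field names

def anonymize(record):
    """Replace PII fields with a truncated SHA-256 hash. The hash is
    deterministic, so the same person maps to the same token each run."""
    out = {}
    for key, val in record.items():
        if key in PII_FIELDS and val:
            out[key] = hashlib.sha256(str(val).encode()).hexdigest()[:12]
        else:
            out[key] = val
    return out

rec = {"name": "CUST-001", "customer_name": "Jane Doe", "phone": "555-0100"}
anon = anonymize(rec)
print(anon["name"], anon["customer_name"] != "Jane Doe")
```

Note that a plain hash of low-entropy data (like phone numbers) can be brute-forced; for strict GDPR erasure, prefer deletion or a keyed hash whose key is destroyed.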
17. Scheduler Optimization
Schedulers run periodic tasks; tune frequency to balance freshness and system load. Heavy tasks should run during off-peak hours.
"scheduler_events": {
"hourly": ["myapp.tasks.hourly"]
}
18. Real-Time Monitoring Dashboards
Monitoring is essential. Combine host-level (htop, iostat), Redis metrics, and DB metrics. Tools such as prometheus+grafana or simpler scripts can provide alerts for queue length, slow queries, high CPU, or memory pressure.
- htop — process-level view
- mysqltuner — DB tuning hints
- bench doctor — ERPNext health checks
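The alerting logic itself is simple once the metrics are collected: compare each reading against a threshold. A sketch of such an evaluator (the metric names and limits are assumptions to adapt to your environment):

```python
THRESHOLDS = {              # assumed alert limits
    "queue_length": 1000,
    "slow_queries_per_min": 10,
    "cpu_percent": 90,
}

def alerts(metrics):
    """Return names of metrics that breached their configured threshold."""
    return [k for k, limit in THRESHOLDS.items() if metrics.get(k, 0) > limit]

print(alerts({"queue_length": 2500, "cpu_percent": 45}))  # ['queue_length']
```

Feed this from a cron job that samples redis-cli, the slow query log, and /proc, and route non-empty results to email or chat.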
19. ERPNext Performance Issues & Solutions (Real-World)
| Issue | Example | Solution |
|---|---|---|
| Slow Sales Invoice submit | 20k monthly entries causing slow submit | Add index to filters + move heavy recalculation to background job |
| Stock repost slow | 2-year historical transactions | Repost by month batches |
| Delay in notifications | Redis socketio blocked | Increase socketio workers & monitoring |
| Queue backlog | 10k emails in queue | Increase long workers / throttle email sending |
20. Recommended Server Specs for Performance
| Users | CPU | RAM | Storage |
|---|---|---|---|
| <50 | 4 core | 8 GB | SSD 120 GB |
| 50–250 | 8 core | 16–32 GB | NVMe |
| 250–1000 | 16 core | 64 GB | NVMe RAID |
IOPS and latency matter more than raw disk size — prefer NVMe and RAID configurations for medium/large deployments.
21. Final Optimization Checklist
Database
- Enable slow query log
- Add appropriate indexes
- Optimize heavy queries
Redis
- Increase memory if required
- Monitor queue lengths
- Choose eviction policy
ERPNext
- Cache expensive results
- Push heavy ops to background
- Refactor heavy doctypes
Server
- Use SSD/NVMe
- Right-size workers
- Tune MariaDB configs