The ERP Is Dead.
Long Live the Flask App.
The real competition for ERP vendors isn't another enterprise platform. It's one developer with Flask, psycopg2, and an LLM.
What if the complexity was always optional?
Think about what an ERP actually is. A monolithic system that tries to model every business process in one giant interconnected blob — SAP, Oracle, Odoo, whatever your company got sold on. Companies spend millions implementing them, years customizing them, and still end up with workflows that don't quite fit because the software was designed for a generic business, not their business.
The result? Proprietary spaghetti configurations that nobody at the company fully owns — and that often require expensive specialist help to untangle.
There's an alternative. It's already happening. People just don't have a name for it yet.
What ERP Got Wrong
The ERP model made sense until the mid-2010s. You had one database, one system of record, one vendor — and for a long time it was a genuine improvement over spreadsheets, paper, and disconnected tools. The monolith was a real leap forward.
But the industry overfit to that model. What started as "let's centralize our data" became "let's model every possible business process in a single platform with modules for everything." And the more modules, the more customization required, the more consultants needed, the more dependency created.
The ERP trap, step by step:
- Buy the platform ($$$). License fees, implementation fees, training fees.
- Realize it doesn't quite fit your actual workflow. Hire consultants to customize it.
- Accumulate customizations. Now nobody can upgrade safely.
- The configuration becomes a black box — often only understood by whoever built it.
- Changing anything requires going back to the source. Dependency compounds.
The ERP vendor didn't trap you on purpose. But the incentive structure did its work just as effectively.
The Architecture Nobody's Naming Yet
Instead of one massive ERP, imagine this:
The Post-ERP Company
Postgres  ← shared database, single source of truth
│
├── inventory.company.com    (~1000 lines of Python)
│       tracks stock, runs reorder queries
│
├── invoicing.company.com    (~1000 lines of Python)
│       generates invoices, tracks payments
│
├── scheduling.company.com   (~1000 lines of Python)
│       manages shifts and appointments
│
└── reporting.company.com    (~1000 lines of Python)
        analytical queries across all tables
Each workflow gets its own small Flask app. Each one does exactly what that specific process needs — nothing more. The Postgres database is the shared integration layer. Everything reads and writes to the same tables, so your reporting app can always join invoicing data with scheduling data.
The database is the cathedral. The Flask apps are tents you pitch and strike as needed.
Any one of them can be rewritten in a week or two if requirements change. No vendor lock-in. No licensing fees. You own the code completely.
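To make the shared-database claim concrete, here is the kind of cross-app query the reporting app could run. The table and column names (invoices, shifts, amount, employee_name) are hypothetical stand-ins, not part of the stack described above:

```sql
-- Revenue per shift: joins the invoicing app's table with the scheduling
-- app's table. Both live in the same Postgres database, so no API calls,
-- no export/import, no integration middleware.
SELECT s.shift_date,
       s.employee_name,
       SUM(i.amount) AS revenue
FROM invoices i
JOIN shifts s ON i.shift_id = s.id
GROUP BY s.shift_date, s.employee_name
ORDER BY s.shift_date;
```

The database does the integration work that an ERP would route through its module layer.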
€0 in licensing fees. Postgres is free. Python is free. You own everything.
1-2 weeks to rewrite any app. ~1000 lines of Python is not a scary codebase.
1 person to maintain all of it. One developer who knows the domain and the schema.
A Concrete Example
Say you're building a SaaS for a dog grooming business. Almost the whole thing lives in one Postgres table, plus a small groomers lookup table:
Schema
CREATE TABLE appointments (
    id            UUID PRIMARY KEY,
    customer_name TEXT,
    dog_name      TEXT,
    dog_breed     TEXT,
    service_type  TEXT,           -- 'bath', 'haircut', 'nails', 'full_groom'
    groomer_id    UUID REFERENCES groomers(id),
    price         NUMERIC(10,2),  -- exact decimal, not a float
    scheduled_at  TIMESTAMP,
    status        TEXT,           -- 'booked', 'in_progress', 'completed', 'cancelled'
    notes         TEXT
);
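The groomer_id column references a groomers table that isn't shown. A minimal sketch of what it might look like (the columns are assumptions, not part of the original schema):

```sql
-- Hypothetical lookup table backing the groomer_id foreign key above.
CREATE TABLE groomers (
    id   UUID PRIMARY KEY,
    name TEXT NOT NULL
);
```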
The "old school" approach: build a domain model, create a view model, add a service layer, DTOs, mappers. Five files and three abstraction layers before anything hits the screen.
The post-ERP approach: point Flask at that table and render it directly.
The entire Flask route
@app.route('/appointments')
def appointments():
    with get_db() as cur:
        cur.execute("""
            SELECT * FROM appointments
            WHERE status != 'cancelled'
            ORDER BY scheduled_at
        """)
        appointments = cur.fetchall()
    return render_template('appointments.html', appointments=appointments)
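The route leans on a get_db() helper that isn't shown. One plausible shape is a context manager that hands out a cursor and takes care of commit, rollback, and cleanup. It's sketched here with stdlib sqlite3 so it runs anywhere; in the real stack you'd swap sqlite3.connect for psycopg2.connect (with cursor_factory=psycopg2.extras.RealDictCursor for dict-like rows):

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def get_db(dsn=":memory:"):
    # Real stack: conn = psycopg2.connect("dbname=grooming")
    conn = sqlite3.connect(dsn)
    conn.row_factory = sqlite3.Row  # rows indexable by column name
    try:
        yield conn.cursor()
        conn.commit()        # commit on clean exit
    except Exception:
        conn.rollback()      # roll back on any error, then re-raise
        raise
    finally:
        conn.close()
```

The `with get_db() as cur:` pattern in the route then gets transaction handling and connection cleanup for free, with no framework in between.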
Each column maps 1:1 to a UI element. No view model, no mapper, no DTO. For this use case, that's not a shortcut — it's the correct architecture. Adding layers would be pure ceremony.
Now what about the "complex" view — a daily schedule grouped by time slot, with groomer names, showing revenue per day?
The "complex" view is just SQL
SELECT
    a.scheduled_at::date AS day,
    a.scheduled_at::time AS time_slot,
    a.dog_name,
    a.service_type,
    a.price,
    g.name AS groomer,
    SUM(a.price) OVER (
        PARTITION BY a.scheduled_at::date
    ) AS daily_revenue
FROM appointments a
JOIN groomers g ON a.groomer_id = g.id
WHERE a.scheduled_at::date = %s
  AND a.status != 'cancelled'
ORDER BY a.scheduled_at;
Run it with psycopg2
cur.execute(query, [date])
rows = cur.fetchall()
# The query result IS your view model.
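If the template wants those flat rows grouped by time slot, plain itertools covers it; no view-model class needed. The rows below are hypothetical stand-ins for what the cursor would return (the query already sorts by scheduled_at, which groupby relies on):

```python
from itertools import groupby
from operator import itemgetter

def group_by_slot(rows):
    """Group already-sorted appointment rows into (time_slot, rows) pairs."""
    return [(slot, list(items))
            for slot, items in groupby(rows, key=itemgetter("time_slot"))]

# Sample rows shaped like the daily-schedule query's output.
rows = [
    {"time_slot": "09:00", "dog_name": "Rex",   "service_type": "bath"},
    {"time_slot": "09:00", "dog_name": "Bella", "service_type": "nails"},
    {"time_slot": "10:30", "dog_name": "Milo",  "service_type": "full_groom"},
]
schedule = group_by_slot(rows)
```

The template then iterates over `schedule` directly: one loop for slots, one nested loop for the appointments in each slot.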
No ORM. No domain model. No service layer. No repository pattern. Just SQL that returns exactly the shape the UI needs.
PostgreSQL is already an incredibly powerful data transformation engine. JOINs, GROUP BY, window functions, CTEs — these do what half of those intermediate layers were built to do, but faster and closer to the data.
Note the NUMERIC(10,2) for price. Exact decimal precision, out of the box. No floating point rounding errors, no JavaScript float gymnastics. Postgres handles money correctly.
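The difference is easy to demonstrate: binary floats can't represent ten cents exactly, while decimal.Decimal, the Python type psycopg2 returns for NUMERIC columns, can:

```python
from decimal import Decimal

# Three 10-cent line items in binary floating point: not exactly 30 cents.
float_total = 0.1 + 0.1 + 0.1
assert float_total != 0.3  # 0.30000000000000004

# The same sum with exact decimals, as returned for a NUMERIC(10,2) column.
decimal_total = Decimal("0.10") * 3
assert decimal_total == Decimal("0.30")
```

Keep money in NUMERIC in the database and Decimal in Python end to end, and the rounding problem never appears.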
Why the Complexity Was Always Optional
Domain-Driven Design, clean architecture, hexagonal architecture — these have genuine uses. Complex domains with intricate, constantly-changing business rules (banking, insurance, healthcare logistics) benefit from the structure. The abstraction layers earn their keep when the business logic is genuinely complex.
The problem is these patterns became cargo cult. People applied the full toolkit to apps that are basically CRUD with a couple of validation rules, and ended up with ten files and three abstraction layers to save a row to a table.
There's a structural dynamic worth noticing:
Complex architecture requires more people to maintain it. Larger teams require more coordination, more governance, more tooling. The patterns that grew dominant inside big organizations were naturally the ones that scaled with headcount — not necessarily the ones that were most effective per developer.
In a large org, nobody champions the 200-line solution. It doesn't look serious enough to justify the team around it.
This isn't about bad intentions — most developers genuinely believe in the patterns they advocate. But the incentives of large-org engineering shaped what became "best practice," and those incentives don't transfer to a small business building internal tools.
When a small team ships the same product in a few weeks with Postgres, raw queries, and Flask — and it works, it's readable, and one person can hold it in their head — that's not a shortcut. That's right-sized architecture.
Why AI Makes This Viable at Scale
The "one person, many small apps" model always made sense in theory. In practice, it ran up against a real constraint: one person can only write so much boilerplate before burning out.
AI coding tools eliminate that constraint. Not by replacing domain knowledge — you still need to understand your business processes, your data, your users. But by handling the repetitive scaffolding that made the approach feel unscalable.
What AI changes in this stack:
- Scaffolding is free. Describe the workflow in plain language. Get a working Flask route, template, and SQL query. Review, adjust, ship.
- Refactoring is cheap. When the schema changes, updating templates takes minutes with AI assistance, not a sprint.
- Domain knowledge matters more, not less. The bottleneck is no longer "can you write the code." It's "do you understand the problem well enough to describe it clearly."
- App code becomes almost disposable. The permanent asset is the Postgres schema. The Flask apps are cheap to generate, cheap to throw away, cheap to regenerate differently when requirements change.
A business owner or operations manager who knows their processes and has basic SQL literacy can now build and maintain a suite of internal workflow apps that would have previously required a team or an expensive ERP contract.
This is why Flask + psycopg2 shines in the AI era. Simple, explicit code is easier to generate and easier to verify.
The consultant's role shifts, not disappears
The most valuable consultants in a post-ERP world aren't the ones who configure vendor platforms — they're the ones who help business owners articulate their actual workflows. Map the real processes. Identify what data matters. Design a schema that reflects how the business genuinely operates. That translation from "how we work" to "how the database should be structured" is still hard, still human, and still worth paying for. The difference is: the value is in the domain understanding, not the platform dependency.
Where This Is Going
The trajectory is clear. Companies are already replacing legacy ERP systems piece by piece rather than doing big-bang migrations. The post-ERP approach is the artisanal version of what the industry is converging toward — small, focused apps sharing a database as the integration layer.
The endgame looks something like this:
Articulate the workflow
A business owner — or a consultant working with them — maps the actual process in plain language: what data exists, what decisions get made, what needs to appear on screen.
AI generates the Flask app
Schema, routes, templates, queries. A working prototype in a day or two, iterated with the business owner until it fits.
Deploy, iterate, own it
When the workflow changes, patch the app in an afternoon. No vendor ticket. No locked upgrade cycle. The knowledge stays inside the company.
Who this works best for
This vision is strongest for SMBs and internal tooling: a 50-person logistics company, a chain of clinics, a mid-size retailer managing its own operations. These are the ideal candidates — businesses with real workflows that no off-the-shelf ERP fits cleanly, but that don't need the infrastructure of a Fortune 500.
The argument gets harder at true enterprise scale: thousands of concurrent users, multi-jurisdiction regulatory compliance, complex audit trails, cross-border data residency. Not impossible, but that's a different set of constraints — and being honest about that boundary is part of building with integrity.
The people who thrive in this world won't be the ones who know the most design patterns. They'll be the ones who understand their business domain deeply and can articulate what they need clearly.
Which, ironically, is what Domain-Driven Design was supposed to be about all along — just without ten layers of Java in between.
Start Building Post-ERP
A Postgres database, a few Flask apps, and psycopg2. That's the stack. Everything else is optional.