I once participated in implementing a system as a monolith, and later handled its rewrite to microservices, to 'future-proof' the system.
The nice thing is that I have the Jira tickets for both projects, so I have actual hard proof: the microservice version absolutely didn't go smoother or take less time or fewer dev hours.
You can match up a lot of it feature by feature, and it's plainly visible that the microservice version of each feature took longer and had more bugs.
And IMO this is the best-case scenario for microservices. The 'good thing' about microservices is that once you have the interfaces, you can start coding. This makes these projects look more productive, at least initially.
But the issue is that, more often than not, the quality of the specs ranges from not great to awful. I've seen projects where Team A and Team B coded their services against wildly different interfaces, and it was only found in the final stretch that the two parts did not meet.
I'm always going to say that if you have third-party integrations where you call out to other organizations' services, that will be the thing that breaks down the most. You have to armor the heck out of it and plan for contingencies, and yes, that includes when the third party is <Famous Company Where Surely Nothing Ever Goes Wrong>.
Microservices are just a slightly more reliable version of that, since you can hassle the author as a coworker instead of via a harried FCWSNEGW support mouse.
I dream of a SQL-like engine for distributed systems where you can declaratively say "svc A uses the results of B & C, where C depends on D."
Then the engine would find the best way to resolve the graph and fetch the results. You could still add your imperative logic on top of the fetched results, but you don't concern yourself with the minutiae of resilience patterns and how to traverse the dependency graph.
It's a weird notion of a distributed object. Even in 2014, I think I would never have considered calling the methods of a distributed object directly with something like RPC; instead, I'd replicate the objects with a replication protocol and then use the replicas locally.
"The consequence of this difference is that your guidelines for APIs are different. In process calls can be fine-grained, if you want 100 product prices and availabilities, you can happily make 100 calls to your product price function and another 100 for the availabilities."
While this is true, for efficiency reasons it's often better to treat even local dispatch like it's "network" -- chasing pointers and doing things one at a time in a loop is far less efficient on a modern architecture than doing things in bulk, vectorized.
Non-uniform memory hierarchies, caches, branch predictors, SIMD, and now GPUs all tend to reward working with data in batches.
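To make the API-shape point concrete, here is a toy version of the article's "100 product prices" example, with invented names (`price_of`, `prices_of`): the same work expressed as 100 fine-grained calls versus one coarse-grained batch call.

```python
# Toy price table: price is just id * 1.5.
PRICES = {pid: pid * 1.5 for pid in range(1000)}

def price_of(pid):
    # Fine-grained API: one lookup per call. Locally this is merely
    # cache-unfriendly in a loop; remotely it would be 100 round trips.
    return PRICES[pid]

def prices_of(pids):
    # Coarse-grained "batch" API: one call, one pass over the data.
    # The same shape works for a single bulk network request.
    return [PRICES[p] for p in pids]

ids = list(range(100))
slow = [price_of(p) for p in ids]   # 100 calls
fast = prices_of(ids)               # 1 call
assert slow == fast
```

Designing the local API in batch form from the start means moving it behind a network boundary later doesn't force a redesign.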
If I were to think of a "pure" model of computation that unified remote and local, it would treat the entire machine in terms of the relational data model, not objects -- treat all data manipulation and decisions like a query.
And ideally it would have the same kind of query optimizer/planner a DBMS has, which can decide how to proceed based on the cost of the storage model, the indexes, etc., because it has a bigger picture of what the programmer is trying to accomplish.
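The stdlib `sqlite3` module gives a small taste of this: load plain in-memory data into a relational engine, state the question declaratively, and let the planner choose the access path (the table and column names below are made up for illustration).

```python
import sqlite3

# Treat local data relationally: an in-memory SQLite table of toy
# products, where price = id * 1.5.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, price REAL)")
con.executemany("INSERT INTO product VALUES (?, ?)",
                [(i, i * 1.5) for i in range(1000)])
con.execute("CREATE INDEX product_price ON product (price)")

# The question is declared, not coded: we never say *how* to scan.
rows = con.execute(
    "SELECT id FROM product WHERE price BETWEEN 30 AND 36 ORDER BY price"
).fetchall()
print([r[0] for r in rows])   # -> [20, 21, 22, 23, 24]

# EXPLAIN QUERY PLAN shows the optimizer's choice (here, the index)
# rather than anything the programmer hand-wrote.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM product WHERE price BETWEEN 30 AND 36"
).fetchall()
```

The point isn't SQLite specifically; it's that a cost-based planner with the whole picture can pick a strategy the caller never had to encode imperatively.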
99% of systems out there are not truly microservices but SOA (fat services). A microservice is something that sends emails, transforms images, encodes video, and so on. Most real services are 100x bigger than that.
Secondly, if you are not doing event sourcing from the get-go, doing distributed systems is stupid beyond imagination.
When you do event sourcing, you can do CQRS and therefore have zero need for some humongous database that scales ad infinitum and costs an arm and a leg.
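A minimal sketch of the event sourcing + CQRS split being described, with all names invented: the write side only appends events to a log, and the read side is a projection rebuilt from that log, so reads never touch a big shared database.

```python
# Append-only event log: the single source of truth (write side).
events = []

def record(kind, **data):
    # Commands only ever append; nothing is updated in place.
    events.append({"kind": kind, **data})

def balance_view(log):
    # Read side (CQRS): a projection folded from the log. It can be
    # rebuilt at any time, kept in memory, or materialized per service.
    balances = {}
    for e in log:
        if e["kind"] == "deposited":
            balances[e["acct"]] = balances.get(e["acct"], 0) + e["amount"]
        elif e["kind"] == "withdrawn":
            balances[e["acct"]] = balances.get(e["acct"], 0) - e["amount"]
    return balances

record("deposited", acct="a1", amount=100)
record("deposited", acct="a1", amount=50)
record("withdrawn", acct="a1", amount=30)

print(balance_view(events))   # -> {'a1': 120}
```

Because each consumer derives its own view from the log, adding a new read model is just writing another fold over the same events.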
AI has also changed the dynamics around this. Splitting things into smaller components now has a dev advantage, because AI tools program better with a smaller scope.
A lot of this first law was specifically coupled to how these systems often hid that distributed objects were distributed. In the past 10 years, async has become far more commonplace, and it makes the distributed boundary much less like a secret special anomaly that you wouldn't otherwise deal with and far more like just another kind of async code.
I still thoroughly want to see capnproto or capnweb emerge as the third-party handoff, so we can do distributed systems where we tell microservice-b to use the results from microservice-a to run its compute, without needing to proxy those results through ourselves. Oh to dream.
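The idea being wished for is promise pipelining. This toy sketch is NOT the real Cap'n Proto or capnweb API -- just an illustration in plain asyncio of the shape: the client hands service B a promise of A's result, so B consumes A's output directly instead of the client downloading it and re-uploading it.

```python
import asyncio

async def service_a():
    # Pretend remote call producing some data.
    await asyncio.sleep(0)
    return [1, 2, 3]

async def service_b(pending_input):
    # B receives a *promise*, not the data; it awaits A's result
    # itself, so the bytes never round-trip through the client.
    data = await pending_input
    return sum(data)

async def client():
    a_promise = asyncio.create_task(service_a())
    # One call to B carrying a reference to A's not-yet-ready result.
    return await service_b(a_promise)

print(asyncio.run(client()))   # -> 6
```

In a real pipelined RPC system, that "promise" would be a network-level capability, letting the A-to-B data flow happen server-side across machines.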
Microservices and the First Law of Distributed Objects (2014)
(martinfowler.com) | 46 points by pjmlp | 20 March 2026 | 33 comments
100% true in retrospect.