Entropy, Engineered
Constantine Mirin

About me

I am Constantine Mirin, the CEO of Postindustria. We build agentic systems that do actual work.

This site is my “hard copy.”

In my role, I see two distinct worlds: the noisy world of LinkedIn AI influencers “vibe coding” demos that break in production, and the quiet, unglamorous world of engineering systems that actually work at 3 AM when nobody is watching.

I am building this site to document the latter.

The Origin Story

I got into computer science because of Harry Harrison’s novel “The Turing Option,” which I read around 1999. The book was about the inventor of AI, and the story was set in the 2020s. It felt impossibly far away.

Well. Here we are. The 2020s arrived. The AI arrived. So many predictions from the book did too. And I somehow ended up on the engineering side of it. The science fiction became Tuesday’s standup.

What You Will Find Here

Everything here is written from the perspective of Agentic Engineering: moving beyond prompt engineering into building reliable, deterministic systems where AI agents do useful work.

  • Protocols, not Personas: Why telling an AI to “be an expert” fails, and how to engineer workflows instead.
  • The Code: Actual patterns from my open-source toolkits.
  • The Business: How these agents apply to Sales and enterprise workflows (what actually pays the bills).

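To make the “protocols, not personas” idea concrete, here is a minimal sketch of what I mean by engineering a workflow instead of a persona. Everything in it is illustrative (the names `Step` and `run_protocol` are hypothetical, and the lambdas stand in for LLM calls): instead of prompting “you are an expert,” you encode the work as explicit steps, each followed by a deterministic check that must pass before the next step runs.

```python
# Hypothetical sketch: a protocol is a list of steps, each with a
# deterministic gate. The 'run' functions here are stand-ins for LLM calls.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[str], str]      # transforms the working text
    check: Callable[[str], bool]   # deterministic gate before moving on

def run_protocol(steps: list[Step], text: str) -> str:
    """Run each step in order; fail loudly if its check does not pass."""
    for step in steps:
        text = step.run(text)
        if not step.check(text):
            raise ValueError(f"step '{step.name}' failed its check")
    return text

# A toy two-step pipeline: draft, then review.
steps = [
    Step("draft", lambda t: t + " [draft]", lambda t: len(t) > 0),
    Step("review", lambda t: t.replace("[draft]", "[reviewed]"),
         lambda t: "[reviewed]" in t),
]

print(run_protocol(steps, "release notes"))  # release notes [reviewed]
```

The point is the shape, not the code: the system’s reliability lives in the checks between steps, not in the wording of a persona prompt.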
I treat this site as a lab notebook. The polished case studies go to the corporate blog. The raw, technical, opinionated truth lives here.

On Co-Creating with AI

Yes, I use Claude and other LLMs to co-create this content. Deliberately, transparently, and — I believe — responsibly.

I hold a simple conviction: generative AI empowers people to create faster, better, and more useful things. Like any precision tool — a chainsaw versus a hand saw — it amplifies both skill and carelessness. Used poorly, it produces mess. Used well, it lets you vastly outperform what was possible before.

The shift is profound. We move attention from characters to thoughts. From syntax to solutions. From typing to steering. The code we used to write was expensive — not in dollars, but in cognitive load spent on implementation details instead of the actual problem.

Of course, this comes at a price. Joel Spolsky’s Law of Leaky Abstractions applies here too. Every abstraction eventually breaks at the edges. LLMs are no different — just with far more intermediate layers, and boundary conditions we are still discovering. Security is a real concern; I advocate for deploying anything AI-generated with the same rigor as human-written code. Probably more.

But I believe LLMs are here to stay. Our job is to learn how to use these tools to their full potential. The leaders at Anthropic and Google DeepMind say the path to AGI runs through teaching AI to write better code — so we can use AI to build better AI. I agree. And I think the same loop will play out across every domain of human work.

This site is co-created. Every piece is steered by me, reviewed by me, and published only when it says what I mean. The AI accelerates; the judgment remains mine.

When I am not building agents, I am usually underwater, on a mountain, or watching a 3D printer fail in new and educational ways.

Let’s build.


LinkedIn · GitHub · X