Oscar Isaza described his business in fifteen minutes. A $42 million LNG regasification operation. International tankers. Cryogenic containers on barges. A five-year contract with Ecopetrol. Thermal plant expansion across southwestern Colombia. Three hundred knowledge sources managed across separate platforms. And a question: could AI help his executives make faster decisions?
I let him finish. Then I asked the question.
"What's the worst realistic outcome if the AI system gets it wrong?"
The room went quiet for about two seconds. Not because the answer was hard — because it was obvious. Wrong advice about LNG logistics, plant safety, or pipeline operations could cause physical harm. Wrong analysis in a business case going to international investors could cause financial catastrophe. Wrong compliance information in an Ecopetrol submission could void the contract.
Tier 4. The entire project, every module, classified in under five minutes. The rest of our time went into what that classification means.
The Five-Minute Decision That Governs Everything
The intake phase is the shortest phase in the pipeline and the most consequential. It answers three questions:
- What are we building? (Capture the idea — who, what, why)
- How dangerous is failure? (Set the trust tier)
- Are we building new or changing existing? (Greenfield or brownfield)
That's it. No architecture discussions. No technology selection. No timeline estimates. Just these three answers, captured on a project card that becomes the source of truth for every downstream decision.
The trust tier question — "what's the worst realistic outcome if this gets it wrong?" — is asked exactly once and cannot be changed without restarting the pipeline. This sounds rigid, and it is. Deliberately. Because every downstream decision — spec depth, test rigor, deployment gates, human oversight levels — cascades from the tier. If you change the tier mid-project, you invalidate every decision you've already made.
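The cascade can be sketched as a simple lookup. This is a hypothetical illustration, not the pipeline's canonical table: the parameter names and values below are invented to show the shape of the idea, which is that the tier is set once and every downstream setting is derived from it rather than negotiated separately.

```python
# Illustrative sketch: downstream process settings derived from the trust
# tier set at intake. Tier values and parameter names are hypothetical
# examples, not a canonical list from the pipeline.
TIER_DEFAULTS = {
    1: {"spec_depth": "one-page", "test_rigor": "smoke", "oversight": "spot-check"},
    2: {"spec_depth": "standard", "test_rigor": "unit", "oversight": "review-on-publish"},
    3: {"spec_depth": "detailed", "test_rigor": "unit+integration", "oversight": "approval-gate"},
    4: {"spec_depth": "exhaustive", "test_rigor": "full-library", "oversight": "mandatory-human-review"},
}

def downstream_settings(tier: int) -> dict:
    """Every downstream decision cascades from the tier; nothing is chosen ad hoc."""
    if tier not in TIER_DEFAULTS:
        raise ValueError(f"unknown tier: {tier}")
    return TIER_DEFAULTS[tier]
```

The point of the lookup shape is the rigidity the chapter describes: changing the tier mid-project would change every derived value at once, which is exactly why a tier change restarts the pipeline.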
When Mauricio and I were building TravelOS, I skipped this discipline. I jumped straight to features — what the platform would do, what tools to integrate, how to structure the tiers. We were weeks into the build before we aligned on who the product was for. I realized later what we had made: a Frankenstein — two audiences crammed into one stack, technically correct and commercially wrong.
"What you actually want," I told Mauricio when the misalignment became clear, "is to reduce friction in the flow you already have today. Not replace it."
He confirmed. And we finally had our intake.
The intake question isn't just about risk — it's about alignment. When you force yourself to state, in one sentence, what you're building and who gets hurt if it's wrong, you remove the ambiguity that lets two smart people build two different things in parallel.
How the Intake Conversation Works
Intake is a conversation, not a form. The structure matters.
The first question — "what are we building?" — is asked to the person who owns the problem, not the person who owns the technology. Oscar Isaza wasn't a developer. He was a CEO who managed a $42 million physical infrastructure operation. His description of the problem was the most important input in the entire project. Technical people don't always have access to that level of domain clarity. The intake conversation surfaces it.
Let the person describe the problem fully before asking any follow-up questions. Most people lead with the solution they want ("I need an AI assistant for my team") when the real need is different ("I need my executives to stop making decisions from spreadsheets they don't trust"). The full description often reveals the real problem, which is frequently simpler and more tractable than the solution they proposed.
After the description, ask the tier question. Don't offer tiers as options — ask the open question and let the answer lead. "If this AI system gives someone wrong information, what's the worst realistic thing that happens?"
The critical word is realistic. Not catastrophic. Not the movie-plot scenario. The realistic worst case is almost always specific, plausible, and sobering — and it almost always reveals the tier more cleanly than any framework analysis could.
After the tier is set, ask the greenfield/brownfield question. "Is there an existing system this would change, or are we starting from scratch?" This sounds simple. It often isn't. Most organizations have existing processes, tools, or data sources that an AI system will touch — and the people closest to the process know where the landmines are. The person who manages the call center wiki knows which articles are outdated. The building administrator knows that the building's financial records are in three different Excel files, not in a single system. Getting this honest answer at intake prevents a lot of pain in discovery.
What you're looking for in the intake conversation: genuine clarity on who uses the system, what they do with it, and what happens to them when it's wrong. You're not looking for technical specifications — those come later. You're looking for the specific human reality that the spec will need to protect.
Four Intakes, Four Different Projects
I've run this conversation four times on projects that are now active. The intake conversations were different every time.
Regasificadora del Pacífico: The fastest intake I've run. Oscar described the operation, I asked the question, and the answer was immediate. Tier 4 in five minutes. The subsequent conversation was about scope — three modules, which ones to start with, what "Phase 1" could look like that delivered value without requiring the full system. The tier question took five minutes. The scope question took two hours.
Ecomm Knowledge Operating System: The intake took longer, because the risk was less obvious on the surface. A call center knowledge base. SOPs for customer service representatives. How dangerous could it be? Then we got to the specifics: this call center handles prescription medication referrals. Customers call with questions about drugs their doctors have prescribed. The representatives use the knowledge base to answer. If the knowledge base gives the representative wrong information about drug interactions, the representative might give that wrong information to the customer, who might act on it.
Tier 4. Not because the technology is sophisticated — it isn't. Because one correct answer to one question about one drug interaction might prevent one serious adverse event. That's the realistic worst case. And "one" is too many.
Edifica: This intake required the most judgment. Building management software for Colombian residential properties under Ley 675 de 2001. The consequences of errors are legal — a governance decision made incorrectly could invalidate a property owners' assembly, expose the administrator to legal liability, or produce a financial report that doesn't comply with transparency requirements. But nobody dies. There's no safety risk.
This is the tier 3/4 boundary. Financial and legal consequences, significant but not safety-critical. The eventual classification was Tier 3, with Tier 4 protocols for the specific governance modules (assembly convocations, quorum calculations) where legal invalidity was the consequence. One project, two tiers for different modules — unusual, but honest.
VZYN Labs: The easiest classification. If the marketing automation stack generates a bad blog post or misreads a competitor's website, a human catches it in review. Wasted time, maybe some embarrassment. Tier 2. The intake for VZYN took about three minutes.
The contrast is instructive. Two hours of scoping for Regasificadora because the operation was complex. Three minutes total for VZYN because the stakes were clear.
Greenfield vs. Brownfield
The second classification — greenfield or brownfield — determines whether the next phase is SPEC (for new systems) or DISCOVER (for existing systems that need changes).
Greenfield means starting from zero. No existing code, no existing behavior, no existing users. The spec describes what should exist. This is the simpler case: the primary constraint is the spec quality.
Brownfield means changing something that already works. Existing code, existing behavior, existing users with existing expectations. The next step is discovery — understanding what's there before you touch it. Then the spec describes what changes, what stays, and what must survive unchanged.
Most real projects are brownfield. Most interesting failures come from treating brownfield projects as if they were greenfield — rebuilding instead of modifying, ignoring existing behavior, breaking things that worked because you didn't know they existed.
The brownfield trap has a specific texture: the team knows what they want to add, but they don't know what they might break by adding it. A call center that wants to add an AI-assisted search to its knowledge wiki is a brownfield project — the wiki exists, the SOPs exist, the representatives have existing habits and workarounds and tribal knowledge that the AI system will need to navigate, not replace. A team that treats this as greenfield ("we'll build a new knowledge system from scratch") risks rebuilding everything that works while missing the institutional knowledge that makes the existing system useful.
The greenfield/brownfield question at intake is not about the technology. It's about who and what already depends on the system you're touching. If the answer is "nobody and nothing," you're greenfield. If the answer is "these people, in these ways," you're brownfield, and discovery comes before specification.
What the Three Questions Actually Surface
Each of the three intake questions is asking for a specific kind of clarity that the subsequent phases require. Understanding what each question surfaces — not just what it asks — makes the intake more useful.
The "what are we building?" question surfaces who the system is actually for. Not who it's nominally for. In practice, this distinction matters more than it should. A call center tool is nominally for the customer service representatives. But if the tool's success is measured by supervisor satisfaction rather than representative satisfaction, the real user in the spec should be the supervisor. The intake question "who uses this?" should be followed immediately by "who cares if it works?" — because those are sometimes different people, and the spec will need to serve both.
The question also surfaces what the system is not. "What doesn't it do?" is as important as "what does it do?" for two reasons. First, an explicit out-of-scope list prevents scope creep during build. Second, it forces the people in the room to confront the boundaries of what they're actually committing to. Saying "this does not handle payment processing" is a decision, not just a description. Making it at intake means everyone in the room heard it, agreed to it, and can hold to it when the first stakeholder asks "but what if we also needed to handle payments?"
The "how dangerous is failure?" question surfaces the real risk model, not the official one. Every organization has an official risk model — the one that appears in the project charter, the one that's been through legal review. The real risk model is what keeps the domain expert up at night. They're usually different.
The official risk model for a building management system says: financial and compliance risk. The real risk model, when you talk to an administrator who's been sued twice for governance errors: legal liability that can bankrupt a small building administration practice. These two risk models both point to the Tier 3/4 range, but the second is more honest about the stakes, and the spec that emerges from the second conversation is more protective.
The tier question is asked to the person who has experienced the risk, not just the person who manages it on paper. The difference in conversation is stark.
The "greenfield or brownfield?" question surfaces dependencies that the team doesn't know to look for. The answer "brownfield" should always be followed by "what depends on the current system?" — which usually produces a list of integrations, workflows, and user habits that the spec will need to protect.
A knowledge base migration that looks like it can be rebuilt from scratch often has fifteen years of tribal knowledge embedded in the file naming conventions, the article structure, and the way search results are ranked. None of that is in the code. All of it is in the habits of the people who use the system. The brownfield question, asked carefully, surfaces it early enough to be included in the discovery plan.
Scope at Intake
Intake captures the initial scope — what this project is trying to do and who it serves. It does not attempt to enumerate every feature. That discipline matters because the spec phase requires a complete feature list, and at intake you don't yet have the information to write a complete one.
The scope question at intake is: "What is the core problem this system must solve, and what is explicitly outside scope?" Not "what features should it have" — that's a spec question. The scope question establishes the boundary: inside the boundary is what we're building, outside is what we're not building, at least not now.
The inside-scope answer should fit in two or three sentences. For Regasificadora: "An executive research platform that synthesizes 300+ knowledge sources into fast, traceable answers to strategic questions about the LNG operation." For Ecomm: "An AI-assisted search layer over the call center knowledge base that surfaces relevant SOPs and procedures to customer service representatives." For VZYN Labs: "A marketing automation platform that runs pre-audit, campaign, and reporting playbooks for digital agencies."
The outside-scope answer is often more important. Explicitly stating what is out of scope prevents scope creep, manages stakeholder expectations, and gives the spec architect clarity about where to stop. For Ecomm: "Not a decision-making system — the system surfaces information; representatives make decisions. Not a replacement for the wiki — it adds search capability to the existing content. Not a medication dispensing or prescription system." These aren't obvious to everyone in the room. Writing them at intake prevents expensive misalignment during build.
When scope changes after intake — when a stakeholder asks "can we also have it do X?" — the question is not "is this a good idea?" The question is: does adding X change the tier? If X involves a new class of users, new data, or a new class of consequences when wrong, it may change the tier. If the tier changes, the pipeline restarts. If the tier doesn't change, X can be added to the spec in the next iteration.
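The scope-change rule described above is mechanical enough to sketch. Function names and the boolean breakdown are my own illustration, assuming the chapter's criteria (new class of users, new data, new class of consequences) are the full test:

```python
def tier_changes(new_user_class: bool, new_data: bool, new_consequence_class: bool) -> bool:
    """A post-intake feature request changes the tier if it introduces a new
    class of users, new data, or a new class of consequences when wrong."""
    return new_user_class or new_data or new_consequence_class

def handle_scope_request(changes_tier: bool) -> str:
    """The only question for a post-intake request is whether it moves the tier.
    'Is this a good idea?' is deliberately not part of the check."""
    return "restart-pipeline" if changes_tier else "add-to-spec-next-iteration"
```

Note what the sketch excludes: there is no branch for the merit of the feature. That is the chapter's point — merit is a spec-phase question; intake only guards the tier.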
What Intake Gets Wrong
Intake fails in predictable ways. Knowing the failure modes makes them avoidable.
Failure Mode 1: Skipping intake entirely. Teams jump straight to architecture discussions. "We're building a RAG system over our document repository." That's a technical decision that should follow from a tier decision. The tier determines whether RAG is appropriate, how much validation is required, and how carefully the document repository needs to be curated. Making the technical decision first inverts the stack.
Failure Mode 2: The wrong person in the room. The tier question needs to be answered by someone who understands the consequences of failure in the domain — not just in the technology. A CTO can answer "what happens to the system?" A domain expert answers "what happens to the people?" Both matter, but the domain expert's answer sets the tier.
For Regasificadora, the relevant person was Oscar — the CEO who understood the physical operations, the Ecopetrol relationship, and the investor implications. Not the IT team. Not the operations manager. The person who carried the consequence.
Failure Mode 3: Optimistic classification. Teams classify by what they hope the system will do, not what happens when it fails. "It's just providing information, so Tier 1." Information that informs a decision that affects someone's health, finances, or legal standing is not Tier 1. The system's job is to provide information. The tier is set by what happens to the person who receives it.
Failure Mode 4: Scope creep disguised as feature requests. The intake produces a project scope. Feature requests added after intake — "can we also have it do X?" — are scope changes that require a tier review. Adding a feature that changes the worst-case consequence changes the tier, and changing the tier mid-project invalidates the spec depth, test coverage, and human oversight decisions that were made at the original tier.
The intake is not a one-time document. The project card is updated when scope changes. Tier changes restart the pipeline.
The Non-Technical Stakeholder Intake
Francisco is a forty-year veteran of his industry with almost no software background. When I run the intake conversation with him, I don't talk about RAG pipelines or vector databases. I talk about what he currently does and what it costs him.
"Walk me through what you do when someone asks you that question."
He walks me through it. He opens the right folder on his desktop. He checks three documents. He cross-references with a regulation he has memorized. He makes a judgment based on experience. The whole sequence takes five minutes if he's focused, twenty if he's interrupted.
"What happens when a colleague does it instead of you?"
He pauses. "They get it right most of the time. But they don't know which documents to check, so sometimes they miss something."
Now we have the intake. The right system: surface the right documents instantly, in a sequence that matches his expert workflow. The right tier: Tier 3 or 4, depending on the consequences of a miss. The right classification: brownfield — the documents exist, the workflow exists, we're building a search layer over both.
None of those words were necessary. The intake happened through a conversation about his daily work, not a conversation about system architecture.
The non-technical stakeholder intake has a specific rhythm: describe what you do, describe what goes wrong, describe who else does it and how. This three-part description surfaces everything the technical intake questions ask for, in language that doesn't require translation. The "what goes wrong" answer is the tier question. The "who else does it" answer is the user question. The "what documents do you use" answer is the greenfield/brownfield question.
The project card that emerges from a non-technical intake looks the same as any other project card. The conversation to produce it was different. Both produce the same three answers: what, how dangerous is failure, and new or existing. The answers are what matter.
The Intake Conversation for a Regulated Industry
Regulated industries — healthcare, finance, legal, construction, food safety — have a specific intake challenge: the regulatory environment creates consequences that the domain expert understands but the AI builder may not anticipate.
When Oscar described the Regasificadora operation, he mentioned Ecopetrol as a counterparty almost in passing. To someone unfamiliar with Colombia's energy sector, "Ecopetrol contract" sounds like a business relationship. In context, it's a relationship with the state-owned company that controls Colombia's oil and gas infrastructure. Wrong compliance information in an Ecopetrol submission doesn't just end a contract — it can end a company's ability to operate in the sector.
That's not a software consequence. It's a regulatory and legal consequence that requires Tier 4 treatment, and it only becomes visible if the intake conversation surfaces it.
For regulated industry intakes, the supplementary question is: "What regulatory bodies have authority over this operation, and what happens if this system produces an output that conflicts with their requirements?"
The answer often reveals tier-setting consequences that the "worst realistic outcome" question didn't surface, because the stakeholder is thinking about operational consequences (slow decisions, wrong analysis) rather than regulatory consequences (voided licenses, compliance failures, sector bans).
Regulated industries also tend to have existing compliance obligations that constrain what the system can do — privacy regulations, data residency requirements, sector-specific prohibitions. These are discovered during this intake question and documented in the project card as explicit constraints that will govern the spec.
The intake for a regulated industry is usually the longest intake. It requires someone who understands both the domain and the regulatory environment to confirm that the tier and scope are correct. For Ecomm — prescription medication referrals — the intake included a conversation with a pharmacist to understand what the clinical consequences of various failure modes would be. The pharmacist's input was what confirmed Tier 4. Without that input, the system might have been classified at Tier 3.
The Project Card
The intake produces one artifact: a project card. It records the tier, the greenfield/brownfield classification, the initial scope, and the next action. The card lives in the project vault and becomes the reference point for every subsequent conversation.
A minimal project card looks like this:
## Project Card: Ecomm Knowledge Operating System
**Tier**: 4 (patient safety — prescription medication referral)
**Classification**: Brownfield (existing wiki, 500+ SOPs, active call center)
**Scope**: AI-assisted search layer over call center knowledge base
- Not replacing wiki — augmenting search
- Representatives make decisions; AI surfaces information
- Human review mandatory on all medical-adjacent responses
**Worst realistic outcome**: Representative receives incorrect drug interaction guidance; patient harmed.
**Next phase**: DISCOVER — map existing wiki structure, identify SOP patterns, document tribal knowledge gaps before spec begins.
**Owner**: [Spec Architect name]
**Stakeholders**: [Call center lead, QA team lead, pharmacist liaison]
A dozen lines. No fifty-page project charter. No timeline with milestones. The card records the tier decision with its reasoning — because "Tier 4" without "patient safety — prescription medication referral" is a label. With it, it's a decision that every subsequent choice can reference.
The project card also records the worst realistic outcome explicitly. Not to be morbid — to be precise. The person reading the card six months from now, deciding whether a new feature requires a tier review, needs to know what the original classification was protecting against. "Drug interaction guidance to patients" is specific. "The system could cause harm" is not.
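The card's few fields could be captured as an immutable record. This is a sketch under my own assumptions — the field names are invented, not the pipeline's schema — but the `frozen=True` flag mirrors the chapter's rule: the tier cannot be silently edited in place; a tier change means a new card and a restarted pipeline.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: a tier change requires a new card, not an edit
class ProjectCard:
    name: str
    tier: int
    tier_reason: str               # "Tier 4" alone is a label; the reason makes it a decision
    classification: str            # "greenfield" or "brownfield"
    scope: str
    out_of_scope: tuple[str, ...]  # explicit boundaries, agreed at intake
    worst_realistic_outcome: str   # specific, so future tier reviews know what was protected
    next_phase: str                # "SPEC" or "DISCOVER"

card = ProjectCard(
    name="Ecomm Knowledge Operating System",
    tier=4,
    tier_reason="patient safety — prescription medication referral",
    classification="brownfield",
    scope="AI-assisted search layer over call center knowledge base",
    out_of_scope=("wiki replacement", "decision-making", "dispensing or prescribing"),
    worst_realistic_outcome="Representative receives incorrect drug interaction guidance; patient harmed.",
    next_phase="DISCOVER",
)
```

Attempting `card.tier = 3` raises an error, which is the data-structure version of "tier changes restart the pipeline."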
What Happens After Intake
The intake produces a project card and a classification. What happens next depends on the classification.
Greenfield → SPEC. If the project is starting from scratch, the next phase is specification. The spec architect and the domain expert (or the business owner) sit down with the blank spec template. The intake answers — who the system serves, what it must do, what happens when it fails — are the foundation the spec is built on.
Brownfield → DISCOVER. If the project is changing an existing system, the next phase is discovery. Before writing a single spec requirement, the team needs to understand what exists: the structure of the current system, the behavioral contracts it already honors, the users and workflows that depend on it, the integration boundaries that constrain what can change. The spec comes after discovery, not before.
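The routing above is deterministic, so it can be stated as a two-entry table. A minimal sketch (function name is mine):

```python
def next_phase(classification: str) -> str:
    """Route the intake classification: new systems go straight to
    specification; existing systems go through discovery first."""
    routes = {"greenfield": "SPEC", "brownfield": "DISCOVER"}
    if classification not in routes:
        raise ValueError(f"expected 'greenfield' or 'brownfield', got {classification!r}")
    return routes[classification]
```

The rejection branch matters as much as the table: "sort of brownfield" is not an answer intake accepts, which is why the question is asked explicitly rather than inferred later.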
The handoff from intake to the next phase is also where the stakeholder relationship is established. The spec architect explains: "Here's what we know from intake. Here's what we need to understand before we can write the specification. Here's what we'll produce after this phase and how you'll review it." This sets expectations for the process — not just the product — and establishes the collaboration pattern that will run through the project.
The intake also sets the pace. A Tier 1 project can move from intake to spec in a single day — the spec is minimal, the tier is low, the risk of moving fast is manageable. A Tier 4 project might spend a week in discovery before the spec begins. The tier determines the pace, and the pace is set honestly at intake.
The most common mistake in the intake-to-next-phase transition is over-eagerness: the team is excited to start building, so they rush through intake and discovery to get to the "real work." The real work is intake. The real work is discovery. BUILD is the phase that ships fastest when the preceding phases were thorough.
The Intake Produces More Than a Card
The project card is the visible output of intake. The less visible output is organizational alignment — a shared understanding of what the project is, who it's for, and what failure means in the real world.
This alignment is more valuable than the card itself. The card can be lost. The alignment travels with everyone in the room. When a developer asks "should this feature show the user's contact information by default?" six weeks into the build, the answer isn't in the code and it isn't in the spec yet. But if the intake established that this is a Tier 4 system where resident privacy is explicitly subordinate to legal compliance, the developer has the frame to answer the question correctly without asking it. The intake doesn't just classify the system — it educates the team about what the system is for.
The intake conversation is often the first time all the relevant people are in the same room, talking about the same system, with shared purpose. The spec architect, the domain expert, the technical lead, and the business owner each have different models of what's being built. The intake forces those models into alignment before any code is written. That alignment is worth the hour spent on it, independent of the project card it produces.
Why This Phase Is Fast
The intake should not take a day. It should not produce a detailed document. It should take one to two hours for a complex project and twenty minutes for a simple one.
The speed is not carelessness. It's the structure of the question. "What's the worst realistic outcome?" is not a question that requires extensive research — it's a question that requires the right person in the room to answer honestly. Once they answer honestly, the tier is set. The tier is final. And the next phase can begin.
The rest of the pipeline is where the depth lives. The spec will be detailed. The discovery document will be thorough. The test library will be comprehensive. All of that depth is correctly placed: after the tier is known, which means after the intake is complete.
Fast intake. Deep everything else.
When intake returns "brownfield," the spec doesn't come next — discovery does. That's the next chapter, and it's where most projects find the bottleneck they didn't know they had.