
How to choose between advanced prompt, no-code, API and custom agent to automate a business process?

By Laurent Perello, founder of Perello Consulting -- web pioneer for over 25 years, AI operator in production since 2024. Last updated: 13 April 2026.

The first question we ask is never "which tool do you want?" It is "which class of solution fits this process?" You are looking for a tool; you should name a class first. Five classes structure AI automation today: advanced prompt, no-code workflow, direct API call, custom specialised agent, vertical SaaS. Each has its cost, its timeline, its constraints, its breaking point. If you pick the wrong class, you pay three times: to build, to maintain, to rebuild. This article offers a seven-question decision tree, a costed comparison table, four recurring anti-patterns, and a composite case applied to a 25-person SME. The preliminary hours costing is covered in article 1; the first process selection in article 2; governance risks in article 3.

Why the wrong class costs more than the right tool

The dominant reflex in 2026 is to choose a tool before naming the class. You hear about autonomous agents at a conference, you open Zapier the next day, you sign a vertical SaaS the following week. You never ask the prior question: at what level of structuring does your process belong? The CNIL, in its recommendations on AI system development, writes it unambiguously. Any production deployment must be preceded by a formalised analysis of the need and data sensitivity [1]. ANSSI requires the same sequence for security [2]. Choosing a class precedes choosing a product. This choice conditions the viability of everything you build after.

The error takes two symmetrical forms. First form: over-dimensioning. You build a custom agent to automate a task that three well-framed prompts would handle in a day. Observed cost: EUR 25 to 60k in initial development for a return that would have been achieved for a few hundred euros in licences [3]. Anthropic, in its Building effective agents publication, formulates the inverse rule: start with the simplest solution, increase complexity only when the need requires it [4]. This rule is ignored as soon as a vendor offers a spectacular demo.

Second form: under-dimensioning, or glass ceiling. You deploy a no-code workflow for 500 executions per month. Six months later, you process 12,000 executions, the platform cost explodes, latency exceeds acceptable thresholds. You then undergo a forced migration to an API integration, rebuild on the fly, with EUR 12 to 25k in excess cost compared to better-calibrated initial scoping. The Cour des comptes, in its analyses on digital transformation, documents precisely this invisible technical debt that accumulates on poorly anticipated architecture choices [5].

The rule we apply in engagements is: the chosen class must be the lightest that covers the need at 12 months, not 3 months. A class that is too heavy wastes capital. A class that is too light creates migration debt. The decision tree presented below serves exactly to avoid both pitfalls. You use it in under fifteen minutes per process, and you document it in one archived page with your decision. It is this traceability that transforms your intuition into a defensible arbitrage, carried through in the methodology that our team deploys on every audit.

The five solution classes

This section presents, for each class, its definition, use cases, cost, timeline, constraints, an anonymised example, and the conditions where we recommend or exclude it. Read them in order: the first class covers the simplest uses, the fifth the most packaged. No class is superior to the others; each has an optimal domain that you must name before choosing.

Advanced prompt -- the human pilots at 100%

Class 1 designates direct use of a generative model via a vendor interface: ChatGPT Teams, Claude Team, Copilot M365, Mistral Le Chat. The employee interacts in natural language, without an orchestrated workflow, without a programmatic call. Value rests on prompt quality, business framing, human supervision. The human remains 100% in the loop and signs the final output.

Typical use cases cover drafting, reformulation, document synthesis, first-pass legal review, meeting note preparation, brainstorming, technical translation. Cost breaks down into three lines: enterprise licence EUR 20 to 60 per user per month, initial training EUR 300 to 1,500 for a fifteen-person SME, internal referent time two to four hours per month for best-practice maintenance [6]. Production deployment takes one to five business days.

Constraints concentrate on two points. Data side: never paste named client information without a contract guaranteeing non-reuse for training, per CNIL recommendations [1] and European Data Protection Board guidance [7]. Governance side: a written policy is mandatory; without it, shadow AI takes hold by default, as documented in article 3.

Anonymised example. Accounting firm of eleven people. Claude Team deployed in 2026 for tax return synthesis and client correspondence reformulation. Monthly cost: EUR 385. Measured gain: four hours per week per employee on drafting. ROI reached at month three. No technical integration, no development.

Prefer Class 1 if you handle non-repetitive, high-variability tasks where the human signs the final output, at low volume (under 50 similar tasks per month), or with no internal technical capability. Avoid it if you have high-volume repetitive processes (beyond 200 identical tasks per month), a multi-step chain requiring orchestration, or sensitive data handled through consumer-grade accounts.

No-code and low-code -- orchestration without a developer

Class 2 designates visual platforms that orchestrate a step sequence linking applications (CRM, email, spreadsheet, storage, generative model). Main players are Zapier, Make, n8n, Power Automate. The user configures triggers, conditions, transformations, without writing code. The model is called as one step among others; the platform routes the result to the target tool.

This class covers lead qualification, email extraction, support ticket classification, automated weekly synthesis, document routing, CRM filling from forms. Cost breaks down into platform subscription (EUR 20 to 300 per month depending on volume), per-use LLM cost (EUR 10 to 200 per month), initial build (EUR 1,500 to 8,000 for a 5-to-15-step workflow), and maintenance one to three hours per month per critical workflow [8][9][10]. Production timeline is one to four weeks.

Constraints fall on three axes. Server location: Zapier and Make are hosted in the US; n8n can be self-hosted in Europe, opening sensitive data uses [10]. Expertise: a trained referent (40 hours of practice suffice) manages a catalogue of 5 to 15 workflows. Documentation: without a one-page sheet per workflow and dual knowledge ownership, debt builds within six months and the workflow becomes a black box (see risk 6).

Anonymised example. E-commerce SME of nineteen people. Make workflow qualifying inbound leads: website form -> company enrichment -> structured LLM scoring -> CRM routing -> automated first message. Cumulative monthly cost: EUR 112. Build: four days for EUR 5,200. Volume processed: 450 leads per month. Sales time saved: 18 hours per week.

Prefer Class 2 if you pilot a recurring process at medium volume (100 to 5,000 executions per month), with several applications to connect, without available developer capability, and an initial budget under EUR 10,000. Avoid it if you exceed 10,000 monthly executions, require critical latency under two seconds, conditional logic beyond twenty steps, or highly sensitive data without self-hosting option.

Direct API -- lightweight integration into the existing stack

Class 3 designates integrating an LLM API call directly into an existing application: ERP, CRM, in-house back-office. Dominant providers are Anthropic, OpenAI, Mistral, Cohere, and European sovereign players (Mistral, Dust, LightOn). The model processes a structured input and returns a structured output, typically JSON. No external orchestration: the logic stays in the calling application.

Typical use cases are structured extraction from documents (invoices, contracts), ticket classification in an in-house CRM, contextual completion in a business application, summary generation in a back-office, text scoring in an existing pipeline. Token cost ranges from EUR 30 to 1,500 per month depending on volume; for reference, Claude Sonnet is priced at approximately $3 per million input tokens and $15 per million output tokens, GPT-4o at approximately $2.50 and $10, and Mistral Large at approximately EUR 2 and EUR 6 [3][11][12]. Initial integration costs EUR 3 to 18k, covering one to five days of experienced developer time. Production timeline is two to six weeks.
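As a sanity check on those orders of magnitude, the per-million-token prices quoted above translate directly into a monthly budget. The sketch below is illustrative; the default rates mirror the Claude Sonnet figures cited, and actual vendor pricing should be checked against current rate cards:

```python
# Illustrative monthly token-cost estimate for a Class 3 integration.
# Default prices: ~$3 per million input tokens, ~$15 per million output
# tokens (the Sonnet order of magnitude quoted in the article).

def monthly_llm_cost(calls_per_month, input_tokens, output_tokens,
                     price_in_per_m=3.0, price_out_per_m=15.0):
    """Estimated monthly model cost, in the pricing currency."""
    total_in = calls_per_month * input_tokens
    total_out = calls_per_month * output_tokens
    return (total_in / 1e6) * price_in_per_m + (total_out / 1e6) * price_out_per_m

# 1,200 documents a month, ~2,000 input and ~300 output tokens each:
cost = monthly_llm_cost(1_200, 2_000, 300)
print(f"~${cost:.0f}/month")  # prints ~$13/month
```

Even at a thousand-plus calls per month, the model line alone stays small; integration, testing and monitoring dominate the Class 3 bill, which is why the build cost above is the figure to scrutinise.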

Constraints are organisational as much as technical. Data side: the API call requires contract verification (non-reuse for training); for highly sensitive data, prefer a sovereign provider or private deployment [2][7]. Expertise side: a confirmed developer is required; failing that, external services are needed. Governance side: prompt versioning in code, unit tests on outputs and error monitoring are indispensable, per AI Act article 13 on transparency [13].

Anonymised example. Industrial SME of 42 people, in-house ERP. In 2025, added an automatic extraction module on supplier invoices (OCR coupled with an LLM call for structuring). Initial development: EUR 9,000. Monthly cost: EUR 78. Volume: 1,200 invoices per month. Accounting time saved: 22 hours per week. ROI reached at month two.

Prefer Class 3 if you are enriching an existing business application, with stable predictable volume above 1,000 calls per month, a need for structured output consumed by other systems, and an identified developer capability (internal or external). Avoid it if you have neither technical capability nor services budget, if you need multi-application orchestration (prefer Class 2), or a low-volume occasional use.

Custom specialised agent -- extended autonomy

Class 4 designates agents built from objectives, which choose their action sequence, call multiple tools, iterate on their outputs and return a deliverable. Main technical building blocks are Claude Agent SDK, LangGraph, CrewAI, Temporal. The agent combines orchestration, contextual memory, feedback loop, controlled access to a tool set. Autonomy is extended but always framed by guardrails: permissions, logging, human breakpoint [4].

Typical use cases are structured document research, multi-source sourced drafting, technical pre-audit, multi-channel prospect qualification, composite document generation (due diligence, commercial dossiers), level-2 support on a document base. Initial cost is EUR 15 to 80k depending on complexity, i.e. three to twenty days of experienced developer plus an architect. Operations cost EUR 150 to 2,500 per month (model plus infrastructure). Maintenance is one to three days per month.

Production timeline is longer than for the other classes: a functional POC in two to four weeks, then hardening, supervision and guardrails in an additional two to three months before core production. AI Act article 14, on human supervision, mandates this hardening for any system involved in decisions with material effect [13]. Class 4 without governance is the most costly source of documented incidents in engagements.

Anonymised example. Strategy consulting firm of 22 people. Agent built to automate the pre-engagement research phase: review of 10 to 15 sources, structuring into a dossier, first synthesis draft. Build: EUR 38,000. Operations: EUR 620 per month. Volume: 11 engagements per month. Gain: 14 hours per engagement, i.e. 154 hours per month on consultants. Systematic human supervision on the final deliverable.

Prefer Class 4 if you pilot a high-value, multi-step, multi-tool process where each case requires adaptation, with a unit value above EUR 500 saved per execution, and a budget above EUR 20,000 over 12 months. Avoid it if you have a stable process with identical steps (prefer Class 2 or 3), a team with low AI risk maturity, or a use case not yet proven in Class 1 or 2. Building an agent before proving the need remains the top cause of waste observed.

Vertical AI SaaS -- packaged solution by domain

Class 5 designates SaaS solutions packaged by a third-party vendor, specialised by domain: Midjourney for visuals, Gamma for presentations, Copilot Sales for sales, Dust for knowledge management, Intercom Fin for support. The business subscribes, configures within offered limits, integrates via standard connectors. Zero development, business configuration.

Typical use cases are marketing visual generation, commercial presentation creation, automated level-1 customer support, contract assistance, augmented CRM, vertical tools by function (accounting, HR, legal). Subscription cost ranges from EUR 15 to 800 per user per month depending on verticality and tier. Initial setup is EUR 1,500 to 15,000 (scoping, data import, training). Technical maintenance is low (the vendor maintains), but business maintenance remains high to preserve configuration quality.

Constraints concentrate on vendor dependency. The contract must be read on data non-reuse for training (clause varies by vendor; some store and train by default). Dependency is maximal by design: clean export of entered data is rare, switching cost is high, pricing increase or ownership change risk is documented by the Cour des comptes on cloud markets [5]. A written exit plan is indispensable from signing.

Anonymised example. Marketing agency of eight people. Cumulative subscriptions to Gamma, Midjourney and Copilot Sales. Monthly cost: EUR 540. Initial setup: EUR 2,800. Gain on presentation and visual production: 30% of creative time. Estimated switching cost: EUR 18,000, identified and provisioned from signing.

Prefer Class 5 if you have a standard need on a well-covered domain, without internal build capability, with medium predictable volume, and if you explicitly accept vendor dependency. Avoid it if you operate a critical core business process, handle ultra-sensitive data without a binding contract, need deep customisation, or stack multiple SaaS threatening profitability.

The seven-question decision tree

Seven questions suffice to orient you towards the right class. Answer them in order. Each answer orients you to one or two preferred classes and excludes one or two. The class that appears most often in your answers is your priority candidate. In case of a tie, prefer the lightest class -- the one that costs least to abandon if your hypothesis proves wrong. France Num, in its AI Self-Diagnostic, structures the same logic of progressive arbitrage [14]; Bpifrance, through the Diag Data IA, offers a use-case-oriented variant [15].

Q1. What is the estimated monthly volume of your process to automate? Under 50 executions per month orients towards Class 1 and excludes Classes 3 and 4 (disproportionate fixed cost). 50 to 1,000 orients towards Class 2, with Class 1 acceptable if variability is high. 1,000 to 10,000 opens Class 2 or Class 3 depending on developer availability. Above 10,000, Class 3 or Class 4 become priorities; Class 2 should be excluded as its platform cost becomes prohibitive.

Q2. How sensitive is the data you process? Public or non-sensitive internal data: all classes open. Named client, employee, financial data: Class 1 under validated enterprise contract, Class 3 with sovereign provider or strict contract, Class 4 with controlled architecture; avoid Class 5 if it covers core business. Highly sensitive data (health, industrial secrets, defence): Class 3 on sovereign or self-hosted model, or private Class 4; consumer Classes 1, 2 and 5 are excluded, per CNIL [1] and ANSSI [2] guidance.

Q3. What is your team's technical maturity? No dev capability: Classes 1, 2, 5 (in that order). Basic capability with a trained referent: Class 2 priority, Class 5 as complement, Class 3 via services. Confirmed dev capability: Class 3 or Class 4 depending on complexity. Product team or AI architect: all classes open, Class 4 becomes priority on high unit-value processes.

Q4. What budget are you allocating over the first 12 months (build and run combined)? Under EUR 5,000: Class 1 exclusively, minimalist Class 2 on one or two workflows. EUR 5,000 to 25,000: Class 2 main, Class 3 feasible on a specific case, Class 5 as needed. EUR 25,000 to 100,000: Class 3 or Class 4 depending on process complexity. Above EUR 100,000: Class 4 or Classes 3+4+5 combination across a process portfolio.

Q5. How critical is the process to your business? Experimentation or internal support: Class 1 or 2, start light. Peripheral production (content generation, qualification, routing): Class 2 or 5. Core production (invoicing, external client support, client deliverable): Class 3 or Class 4 with demanding human supervision, as documented in article 3 on governance risks.

Q6. What is your desired timeline to production? Under two weeks: Class 1 or Class 5. Two to eight weeks: Class 2 or Class 3. Two to four months: Class 4 feasible with POC phase. Beyond four months: Class 4 with careful architecture and governance formalised from scoping.

Q7. What is the acceptable cost of failure for you? Very low (no immediate financial penalty): start in Class 1 or 2, test, then industrialise. Medium (occasional client loss possible): Class 2 or 3 with supervision and monthly review. High (litigation, sanction, significant loss): Class 3 or Class 4 with full governance, 100% human supervision for the first three months, per AI Act article 14 [13].
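The tally described above (each answer favours one or two classes, may exclude others, most votes wins, ties broken towards the lightest class) can be sketched in a few lines. This is a hypothetical, simplified encoding for illustration; the article's full per-question rules are richer:

```python
# Hypothetical tally of the seven-question tree. Each answer contributes
# (favoured_classes, excluded_classes); the wiring shown is a sketch,
# not an exhaustive encoding of the article's rules.
from collections import Counter

# "Lightness": lower means cheaper to abandon if the hypothesis is wrong.
CLASS_WEIGHT = {1: 1, 2: 2, 3: 3, 5: 4, 4: 5}

def recommend(answers):
    """answers: dict of question id -> (favoured classes, excluded classes)."""
    votes, excluded = Counter(), set()
    for favoured, barred in answers.values():
        votes.update(favoured)
        excluded.update(barred)
    # Most votes first; on a tie, the lightest class wins.
    ranked = sorted((c for c in votes if c not in excluded),
                    key=lambda c: (-votes[c], CLASS_WEIGHT[c]))
    return ranked[0]

# A case where every answer points at Class 2 (like the SME example
# later in this article):
sme = {q: ([2], []) for q in ("Q1", "Q2", "Q3", "Q4", "Q5", "Q6", "Q7")}
print(recommend(sme))  # -> 2
```

The point of writing it down, even this crudely, is the traceability argued for earlier: the archived answers are the defensible record of why a class was chosen.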

Comparative synthesis table

The table below condenses orders of magnitude observed in engagements. Each cell presents a value range, not a contractual promise; actual costs depend on context, chosen vendor and data quality. The 1-to-5 score on required maturity and vendor dependency indicates constraint intensity. The higher the vendor dependency, the more essential a written exit plan becomes, per Cour des comptes findings on digital markets [5].

| Dimension | Class 1 Advanced prompt | Class 2 No-code | Class 3 Direct API | Class 4 Custom agent | Class 5 Vertical SaaS |
| --- | --- | --- | --- | --- | --- |
| Initial cost (build) | 0.3-1.5k EUR | 1.5-8k EUR | 3-18k EUR | 15-80k EUR | 1.5-15k EUR |
| Monthly cost (run) | 20-60 EUR/user | 30-500 EUR | 30-1,500 EUR | 150-2,500 EUR | 15-800 EUR/user |
| Production timeline | 1-5 days | 1-4 weeks | 2-6 weeks | 1-4 months | 1-8 weeks |
| Required technical maturity | 1/5 | 2/5 | 4/5 | 5/5 | 1/5 |
| Vendor dependency | 2/5 | 3/5 | 2/5 | 2/5 | 5/5 |
| Optimal monthly volume | < 50 | 100-10,000 | 1,000-100,000 | 500-unlimited | variable |

Reading the table. Class 1 minimises everything except processable volume. Class 4 maximises capacity at the price of a high initial investment. Class 5 inverts the effort curve: low at entry, high at exit. Three classes (1, 3, 4) limit vendor dependency when architecture is controlled; Class 5 concentrates it by design. No choice is neutral: each row corresponds to a trade-off you must name before signing.

The four anti-patterns to avoid

Four errors concentrate most observed overruns. They are independent of vendor, sector and business size. Naming them before your project is often worth more than correcting them after.

Direct API without assumed technical debt

An internal developer integrates an LLM call into the ERP in two days, without tests, without prompt versioning, without documentation. Output is satisfactory in demo, the project is validated. Six months later, the model changes version, outputs drift, no one knows why. Remediation cost runs three to five times the initial cost. Class 3 requires a software engineering framework; without it, prefer Class 2. This is the operational application of risk 6 documented in article 3. Discipline is not optional: on Class 3, it conditions the very existence of the return on investment.
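The missing engineering framework need not be heavy. A minimal illustration of the guard this anti-pattern calls for: a versioned prompt identifier plus a schema check on the model's structured output, run against a fixed sample set in CI so a silent model or prompt change is caught before production. The names here (`validate_extraction`, the field list) are hypothetical, not from any vendor API:

```python
# Minimal regression guard on a structured LLM output. The real API call
# is out of scope; this validates whatever JSON the model returned.
import json

PROMPT_VERSION = "invoice-extract-v3"  # versioned alongside the code

REQUIRED_FIELDS = {"supplier", "date", "total_excl_tax", "vat", "currency"}

def validate_extraction(raw_json: str) -> dict:
    """Parse the model output and fail loudly on any schema drift."""
    data = json.loads(raw_json)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"{PROMPT_VERSION}: missing fields {sorted(missing)}")
    if not isinstance(data["total_excl_tax"], (int, float)):
        raise TypeError(f"{PROMPT_VERSION}: total_excl_tax must be numeric")
    return data

sample = ('{"supplier": "ACME", "date": "2026-03-02", '
          '"total_excl_tax": 1240.5, "vat": 248.1, "currency": "EUR"}')
print(validate_extraction(sample)["supplier"])  # -> ACME
```

Twenty lines like these, plus the versioned prompt they reference, are the difference between a diagnosable drift and the six-month black box described above.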

No-code for a volume that will scale

A Make workflow is built for 500 executions per month; volume reaches 12,000 in six months. Three phenomena converge. Platform cost multiplies by 8 to 15. Latency exceeds acceptable thresholds. Conditional complexity exceeds tool readability. The forced migration to Class 3 costs EUR 12 to 25k more than better-calibrated initial scoping. Practical rule: as soon as your 12-month forecast exceeds 8,000 executions per month, scope directly for Class 3. Document this threshold in your production sheet. The switch is then anticipated, planned, budgeted, rather than suffered in emergency.

Custom agent without a solid use case

"We'll build an agent to automate all of sales." Three months of development, EUR 45k spent, no specific process mapped upstream. The agent works on demo cases and fails on real ones. The first-process selection method from article 2 was not applied. The rule is clear. You only build an agent after measuring a specific process and proving it in Class 1 or 2. Unit value must justify an investment above EUR 20k. Anthropic, in Building effective agents, formulates the same prescription in different words4. Building an agent before proving the need remains the top cause of over-engineering observed in engagements.

Vertical SaaS on a critical task

A customer support SaaS is deployed on core business, without an exit plan, without history export. The vendor increases prices by 180%, changes their model, or changes ownership. The business has no negotiating leverage. This is the direct application of risk 1 documented in article 3. Class 5 suits peripheral tasks with high standardisation: visual, presentation, sales enablement. On your core process, prefer Class 3 or Class 4 with vendor abstraction, or contractually require a portability clause tested annually. The switching cost must be estimated from signing, not discovered at migration time.

Concrete case: 25-person SME automating lead qualification

B2B consulting firm, 25 employees including 4 salespeople. Six hundred inbound leads per month (forms, webinars, downloads). Today, manual qualification by an SDR: 18 hours per week, of which 40% spent on leads that turn out unqualified. The executive wants to automate the first qualification pass. We apply the decision tree question by question, then cost the deployment and operations.

| Question | SME answer | Orientation |
| --- | --- | --- |
| Q1 Volume | 600 leads/month | Class 2 |
| Q2 Sensitivity | non-sensitive professional contact | Classes 2, 3, 5 open |
| Q3 Maturity | commercial Make referent, no dev | Class 2 |
| Q4 Budget | EUR 12k over 12 months | Class 2 comfortable, Class 3 via services |
| Q5 Criticality | peripheral production | Class 2 adapted |
| Q6 Timeline | 6 weeks | Class 2 perfect |
| Q7 Failure | low (manual callback possible) | Class 2 validated |

Tree conclusion: you retain Class 2 (no-code workflow) unambiguously. No other class appears more than twice in your answers. You document your decision in one page and make it in a single scoping meeting.

Retained architecture. A web form triggers a webhook to Make. Data passes through a Clearbit (or equivalent) enrichment step, then an LLM step (Claude Sonnet) producing a 1-to-5 score on predefined criteria (company size, sector, expressed intent, language) with structured JSON output. A conditional branch routes the sequel: score >= 4 to CRM with sales alert, score 2 or 3 to an automated nurturing email sequence, score 1 to archive with monthly report. Every decision is logged, per risk 6 documented in article 3.
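In code, the conditional branch just described amounts to a few lines. This is an illustrative translation of the Make router logic, not a real Make or CRM API; it assumes the LLM step returns its 1-to-5 score as structured JSON:

```python
# Sketch of the routing step: the LLM returns {"score": 1..5, ...};
# the workflow branches on that score. Destination names are illustrative.
import json

def route_lead(llm_output: str) -> str:
    lead = json.loads(llm_output)
    score = lead["score"]
    if score >= 4:
        return "crm_with_sales_alert"   # hot lead: CRM entry + sales alert
    if score >= 2:
        return "nurturing_sequence"     # warm lead: automated email sequence
    return "archive_with_report"        # unqualified: archive, monthly report

print(route_lead('{"score": 4}'))  # -> crm_with_sales_alert
print(route_lead('{"score": 2}'))  # -> nurturing_sequence
print(route_lead('{"score": 1}'))  # -> archive_with_report
```

Keeping the branch this explicit is what makes the logged decisions auditable: every lead's path can be replayed from its score alone.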

Deployment costing. Scoping and qualification criteria: one consultant day, EUR 1,200. Make workflow build: three days, EUR 3,600. CRM integration: one day, EUR 1,200. Testing and double-run (human and AI in parallel for two weeks for calibration): 1.5 days, EUR 1,800. Documentation and referent training: 0.5 day, EUR 600. Total build: EUR 8,400.

Monthly operations costing. Make subscription: EUR 49. LLM cost (600 calls, approximately 1,200 tokens per call): EUR 22 [3]. Clearbit enrichment: EUR 80. Referent time (one hour per month supervision and adjustment): EUR 90. Total operations: EUR 241 per month, i.e. EUR 2,892 per year.

ROI measured at three months. SDR time freed: 12 hours per week, redirected to treating score-4+ leads. Conversion rate on score-4+ leads: +34% through focus effect. Rate of leads ignored due to overload: -100%. Annualised estimated gain: EUR 38,000 (SDR time plus additional conversion). ROI approximately 3.5x on year 1, calculated per the same method as article 1.
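The arithmetic behind that ROI figure, following the method from article 1 (annualised gain against build plus twelve months of run), works out as follows; the quotient lands just under 3.4x, which the article rounds to approximately 3.5x:

```python
# Year-1 ROI for the case above: annualised gain over total year-1 cost.
build = 8_400        # one-off deployment cost (EUR)
run = 241 * 12       # monthly operations x 12 months (EUR 2,892)
gain = 38_000        # annualised estimated gain (EUR)

roi = gain / (build + run)
print(f"{roi:.2f}x")  # -> 3.37x
```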

Anticipated tipping point. Beyond 2,500 leads per month, or if scoring logic requires more than twenty conditional rules, migration to Class 3 (direct API integration into CRM) becomes necessary. This threshold is documented from production. The switch will not be a surprise: it will be planned, costed, triggered by a pre-defined indicator.

What this method avoids

Four concrete drifts are prevented by disciplined application of the decision tree. Naming them helps you measure the value of explicit scoping, and compare it to the cost of the classes you would have chosen by default.

First drift: over-dimensioning. Building a EUR 45k custom agent for a task three prompts would have handled in a day. Loss avoided on a single project: EUR 30 to 50k in capital, plus six to nine months delay on effective return.

Second drift: glass ceiling. Deploying a no-code workflow on a volume that will triple in six months, and suffering a forced migration to API integration under pressure. Average observed excess cost: EUR 12 to 25k compared to better-calibrated initial scoping, excluding opportunity cost on disrupted salespeople or operators during the switch.

Third drift: invisible technical debt. An API integration built without versioning or monitoring costs three to five times its initial investment to restore eighteen months later. The Cour des comptes documents precisely this pattern on public digital projects [5]; it applies identically in SMEs, with an aggravated effect from the absence of a dedicated team.

Fourth drift: suffered vendor dependency. If you deploy a vertical SaaS without an exit plan, you absorb a EUR 18 to 60k switching cost when it becomes necessary. Prepared from signing, the same switch costs a few thousand euros. Risk 1 documented in article 3 costs the orders of magnitude observed on concrete cases.

Across the 14 audits conducted in engagements since early 2025, half of the projects previously launched by executives exhibited one of these four drifts. Average cost avoided through upfront scoping: EUR 22,000 on year 1, and four to nine months gained on effective production.

Frequently asked questions

Which class costs the least to start?

Class 1 (advanced prompt) is the cheapest to build: a few hundred euros for licences and training, production in under a week. For a recurring multi-application need, Class 2 (no-code) starts at EUR 1,500 to 8,000. Classes 3 and 4 become profitable only from sufficient volume and unit value. If you are discovering AI, start in Class 1, prove in Class 2, and only open Classes 3 or 4 after proving your need.

Can I start with SaaS then migrate to a custom agent?

Yes, provided you planned your exit from signing. The Class 5 to Class 4 move is frequent on core processes: SaaS gives you understanding and scoping time, the custom agent gives you control. Typical switching cost: EUR 18,000 to 60,000 depending on data and history volume to migrate. Estimate this cost from the start, not at migration time. Risk 1 documented in article 3 specifies the contractual clauses to require to preserve this latitude.

At what volume does no-code become insufficient?

The observed operational threshold sits between 8,000 and 10,000 executions per month. Beyond that, three phenomena converge: platform cost multiplying by 8 to 15, latency exceeding acceptable thresholds, conditional complexity exceeding tool readability [8][9]. Practical rule: if your 12-month forecast exceeds 8,000 monthly executions, scope directly for Class 3. A hot switch costs EUR 12 to 25k more than initial scoping.

Do I need a developer for an API integration?

Yes, but not necessarily in-house. If your SME has no dev capability, Class 3 remains accessible via external services: EUR 3 to 18k for initial integration, then 0.5 to 2 days per month maintenance. The key point is contractualisation: prompt versioning, technical documentation, regression tests, monitoring. Without it, your Class 3 becomes invisible debt. Outsourcing does not exempt you from governance: an internal business referent remains indispensable, per CNIL guidance on AI project management [1].

Is the custom agent always more powerful?

No. The custom agent is more autonomous, not more powerful in absolute terms. On a stable process with identical steps, you will get more reliability, more speed and less cost with a no-code workflow or API integration. The agent is justified when each execution requires adaptation (research, arbitrage, non-linear tool chaining) and unit value exceeds EUR 500. On simple repetitive tasks, you build over-engineering: more costly to build, more costly to maintain, less predictable than a workflow [4].

Which class if data is sensitive?

If you process highly sensitive data (health, industrial secrets, detailed HR data), prefer Class 3 on a European sovereign model (Mistral, Dust, LightOn) or Class 4 with private architecture. Avoid consumer Classes 1 and 2. For standard named client data, Class 1 is acceptable with an enterprise account and a non-reuse-for-training contract [1][7]. Class 5 requires a binding contract; failing that, it is disqualified on sensitive data. ANSSI provides the reference framework for integration security [2].

How long from POC to production by class?

Class 1: POC and production merged, one to five days. Class 2: POC one week, production two to four weeks. Class 3: POC two weeks, production two to six weeks. Class 4: functional POC two to four weeks, hardened production two to four months. Class 5: POC and production merged, one to eight weeks depending on setup. The production phase differs from the POC through supervision, monitoring, documentation, recovery plan. If you skip this phase on Classes 3 and 4, you commit the most costly mistake observed in engagements.

Can I mix multiple classes for the same process?

Yes, and it is often the right architecture. Typical example: Class 2 orchestrates the lead qualification workflow, Class 3 handles the structured LLM call, Class 5 sends the automated email via a specialised tool. Another: Class 4 pilots a complex engagement drawing on several Class 3 and 5 tools. Mixing classes is the advanced mode: you must clearly map each class's responsibilities and maintain unified governance. Avoid until you have mastered each class in isolation.

Make your own decision

Choosing your class means choosing your future debt. The right class is the lightest that covers your need at 12 months. The wrong class is paid three times: to build, to maintain, to rebuild. The decision tree presented here fits on one page and is defensible in your committee.

Our AI audit includes ranking the classes suited to your context, costing build and run for each, and documenting the tipping point to anticipate. It builds on the hours costing method, the first process selection and the governance and risk framework from the previous articles. The full methodological framework is available on the methodology page. Our team delivers this audit in under three weeks.

Request your AI audit ->


Sources and methodology

This article relies exclusively on first-rank public sources: French authorities (CNIL, ANSSI, France Strategie, France Num, Bpifrance, Cour des comptes, INSEE, ARCEP), European bodies (European Commission, European Data Protection Board, AI Act), international institutions (OECD), and official technical documentation from model and platform providers (Anthropic, OpenAI, Mistral, Zapier, Make, n8n). Cost, timeline and volume orders of magnitude come from engagement observations between 2024 and 2026, cross-referenced with Gartner public benchmarks and vendor sector publications. Anonymised examples are composite cases built from several real engagements, aggregated to preserve confidentiality while maintaining figure accuracy.


About the author

Laurent Perello runs Perello Consulting, an independent AI automation firm for French SMEs. After 25 years building products for the web, he now orchestrates ten AI agents that he pilots alone, with a production log published daily at perfectaiagent.xyz. He publishes his methodologies and pricing online so that every executive can decide with full information.


Orchestrator: Alpha -- Perello Consulting | 2026-04-17

Footnotes

  1. CNIL, "Recommendations on AI system development", https://www.cnil.fr/fr/intelligence-artificielle, consulted 2026-04-13.

  2. ANSSI, "Security recommendations for a generative AI system", https://cyber.gouv.fr/publications, 2024.

  3. Anthropic, Claude API documentation (Sonnet, Opus), https://docs.anthropic.com.

  4. Anthropic, "Building effective agents", https://www.anthropic.com/research/building-effective-agents.

  5. Cour des comptes, digital transformation and cloud dependency reports, https://www.ccomptes.fr.

  6. Anthropic, "Anthropic Economic Index", https://www.anthropic.com/research/economic-index.

  7. EDPB, opinions on AI models and GDPR, https://edpb.europa.eu.

  8. Zapier, product documentation and pricing, https://zapier.com/apps.

  9. Make, product documentation and execution limits, https://www.make.com.

  10. n8n, product documentation and self-host options, https://n8n.io.

  11. OpenAI, GPT-4o API pricing, https://openai.com/api/pricing.

  12. Mistral AI, Large and Small API documentation, https://docs.mistral.ai.

  13. EU Regulation 2024/1689 (AI Act), articles 13 and 14, https://eur-lex.europa.eu/eli/reg/2024/1689/oj.

  14. France Num, "AI self-diagnostic", https://www.francenum.gouv.fr/guides-et-conseils/strategie-numerique/diagnostic-numerique/autodiag-ia-evaluez-la-capacite-de.

  15. Bpifrance, "Diag Data IA", https://diag.bpifrance.fr/diag-data-ia.