
March 2026: The Infrastructure Behind AI Is Not Ready


Every organization I speak with has an AI strategy. Most have pilots running. A growing number have board presentations with impressive adoption numbers. And yet, March 2026 delivered a consistent signal across four very different dimensions: the infrastructure behind AI is not ready. Not the technology. The structures around it.

The EU admitted it cannot enforce its own AI regulation on time. Enterprises discovered that 78% of their agent pilots cannot scale to production. Boards learned that investors now expect AI oversight they have not built. And companies began using AI as a convenient explanation for workforce cuts that have little to do with actual AI performance.

Four layers. One pattern. The ambition moved faster than the architecture to support it.

Strategic takeaways:

  • Regulatory timelines are shifting because compliance infrastructure does not exist yet, not because the rules were wrong.
  • The agent scaling gap is organizational, not technological. Ownership and monitoring are the bottlenecks, not model capability.
  • Board-level AI governance is becoming a fiduciary expectation, not an optional initiative.
  • "AI-driven layoffs" require scrutiny. The narrative is getting ahead of the reality.

Layer 1: The EU Delays Its Own AI Law

On 13 March, the EU Council agreed its negotiating position on the Digital Omnibus, the legislative package that amends the AI Act's enforcement timeline.1 Five days later, on 18 March, the European Parliament's IMCO and LIBE committees adopted their joint position by 101 votes to 9.2 The plenary confirmed it on 26 March with 569 votes in favour and 45 against.3 Trilogue negotiations are now underway, targeting a final agreement by 28 April.

The core change: high-risk AI obligations for stand-alone systems shift from August 2026 to December 2027. For AI embedded in regulated products like medical devices, the new deadline is August 2028.4 That is a delay of 16 months for stand-alone systems and 24 months for embedded ones.
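The delay arithmetic can be sanity-checked with simple date math. A minimal Python sketch (`months_between` is an illustrative helper, not from any cited source):

```python
from datetime import date

def months_between(a: date, b: date) -> int:
    """Whole months from date a to date b."""
    return (b.year - a.year) * 12 + (b.month - a.month)

original = date(2026, 8, 1)                          # original AI Act deadline
print(months_between(original, date(2027, 12, 1)))   # 16 (stand-alone systems)
print(months_between(original, date(2028, 8, 1)))    # 24 (product-embedded AI)
```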

The reason is not political. It is operational. Harmonised standards for high-risk compliance do not exist yet. Conformity assessment bodies have not been appointed in most member states. National competent authorities are not fully operational.5 The regulatory infrastructure that was supposed to make the AI Act enforceable by August 2026 simply was not built in time.

For CEOs, this creates a paradox. The delay is not a reprieve. Organizations that pause their compliance programmes now will face compressed timelines later, against a backdrop of standards that are still being written. The companies that use this window to build governance architecture, document AI inventories, and establish risk classification processes will be the ones ready when enforcement arrives.

Strategic takeaways:

  • Timeline shift, not cancellation: High-risk obligations move to December 2027 (stand-alone) and August 2028 (product-embedded). The rules remain unchanged.
  • Infrastructure gap is the cause: No harmonised standards, no conformity assessment bodies, incomplete national authorities. The delay is a capacity problem.
  • Use the window: Organizations that treat this as breathing room for governance buildout will outpace those that treat it as permission to wait.
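To make "document AI inventories and establish risk classification processes" concrete, here is a minimal Python sketch. The risk tiers loosely mirror the AI Act's risk pyramid, but every name, field, and the deadline mapping here are simplified illustrative assumptions, not a compliance tool:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Simplified tiers modeled on the AI Act's risk pyramid
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"      # transparency obligations only
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    owner: str               # accountable business owner
    use_case: str
    embedded_in_regulated_product: bool
    risk_tier: RiskTier

def compliance_deadline(system: AISystem) -> str:
    """Map a system to the shifted Digital Omnibus deadlines."""
    if system.risk_tier is not RiskTier.HIGH:
        return "existing obligations apply"
    return "2028-08" if system.embedded_in_regulated_product else "2027-12"

# Hypothetical inventory entries for illustration
inventory = [
    AISystem("credit-scoring-v2", "Retail Risk", "loan decisions", False, RiskTier.HIGH),
    AISystem("triage-assistant", "Clinical Ops", "patient triage", True, RiskTier.HIGH),
]
for s in inventory:
    print(f"{s.name} ({s.owner}): high-risk deadline {compliance_deadline(s)}")
```

Even a registry this simple forces the questions that matter in the window before enforcement: who owns each system, how it is classified, and which deadline applies.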

Layer 2: The Agent Scaling Gap: 78% Pilot, 14% Production

The agentic AI narrative dominated 2025. The reality check arrived in March 2026. Data from DigitalApplied's March 2026 enterprise survey shows the picture clearly: 78% of enterprises have AI agent pilots running, but only 14% have moved any of them to production scale.6

The gap is not a technology problem. Five root causes account for 89% of scaling failures: integration complexity with legacy systems, inconsistent output quality at volume, absence of monitoring tooling, unclear organizational ownership, and insufficient domain training data.7 Every single one of these is an organizational or operational issue, not a model capability issue.

The financial services sector leads with a 21% production deployment rate. Healthcare trails at 8%, reflecting the regulatory complexity around clinical workflows.8 Across sectors, a pattern holds: agent programmes that maintain human oversight are twice as likely to achieve cost savings above 75% compared to fully autonomous setups.9

Deloitte's 2026 State of AI report adds context. Close to three-quarters of companies plan to deploy agentic AI within two years. Only 21% report having a mature governance model for agents.10 The ambition is there. The operating model is not.

This is where the "Thousand Needle Stitches" approach becomes relevant. Instead of betting on one large-scale agent deployment, organizations that run many small, measurable agent use cases, each with clear ownership, defined monitoring, and human oversight checkpoints, build the operational muscle needed to scale. The compound effect of 20 working agent processes is more valuable than one ambitious pilot that never leaves the sandbox.

Strategic takeaways:

  • The bottleneck is not the model: Legacy integration, monitoring gaps, and ownership ambiguity cause 89% of scaling failures.
  • Human oversight doubles the odds of success: Agent programmes with human checkpoints are twice as likely to achieve cost savings above 75% as fully autonomous ones.
  • Start small, compound fast: Many small agent deployments with clear governance build more production readiness than one flagship pilot.
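The "Thousand Needle Stitches" pattern can be sketched in code: many small agent use cases, each with a named owner, a monitoring trail, and a human approval checkpoint before any output is released. This is an illustrative Python sketch under assumed interfaces, not a framework recommendation:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class AgentUseCase:
    """One small, measurable agent deployment with explicit governance."""
    name: str
    owner: str                               # accountable team
    run: Callable[[str], str]                # the automated agent step
    approve: Callable[[str], bool]           # human oversight checkpoint
    log: list = field(default_factory=list)  # monitoring trail

    def execute(self, task: str) -> Optional[str]:
        draft = self.run(task)
        if self.approve(draft):              # release only on human sign-off
            self.log.append(("approved", task))
            return draft
        self.log.append(("escalated", task)) # route back to the owner
        return None

# A portfolio of many small use cases rather than one flagship pilot
portfolio = [
    AgentUseCase("invoice-triage", "Finance Ops",
                 run=lambda t: f"classified:{t}",
                 approve=lambda d: d.startswith("classified:")),
    AgentUseCase("ticket-summary", "Support",
                 run=lambda t: f"summary:{t}",
                 approve=lambda d: len(d) < 200),
]
for uc in portfolio:
    uc.execute("sample-task")
```

The point of the structure is not the code but the defaults it encodes: no use case exists without an owner, no output ships without a checkpoint, and every run leaves a record that monitoring can consume.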

Layer 3: AI Governance Hits the Proxy Season

Board-level AI oversight went from optional to expected in a single proxy cycle. Glass Lewis flagged AI governance as the defining theme of the 2026 proxy season, citing growing pressure on companies to demonstrate effective oversight, risk management, and transparency around AI.11

The Conference Board's 2026 report quantifies the CEO perspective. US CEOs most commonly identified AI as the external factor that could negatively impact their business, at 38%, ahead of political polarization at 31% and trust in government at 25%.12 Nearly three-quarters of CEOs are now their company's chief decision maker on AI, double the share from the previous year.13

Yet the governance structures lag. PwC's director survey found that only 35% of boards have integrated AI into their oversight activities.14 Among S&P 100 companies, just over half disclose board-level AI oversight, and fewer than one-third disclose both oversight and a formal AI policy.15 The gap between what investors expect and what boards deliver is widening.

Governance is moving from high-level principles to enforceable expectations: documented AI inventories, risk classifications, third-party due diligence, and model lifecycle controls.16 Boards that cannot demonstrate these structures face a new category of reputational and fiduciary risk. The 2026 proxy season will be the first real test.

Strategic takeaways:

  • Proxy season pressure: Glass Lewis and institutional investors now expect board-level AI oversight disclosure. This is not guidance. It is an expectation with voting consequences.
  • CEO ownership is rising: 73% of CEOs are their company's chief AI decision maker, up from roughly 36% a year ago. The question is whether boards are keeping pace.
  • Governance gap is measurable: Only 35% of boards have integrated AI oversight. Fewer than one-third of S&P 100 companies disclose both oversight and a formal AI policy.

Layer 4: AI Redundancy Washing: The Narrative Outpaces the Reality

Block cut 40% of its workforce in late February, more than 4,000 people, citing "intelligence tools."17 Jack Dorsey wrote in a shareholder letter that "a significantly smaller team, using the tools we're building, can do more and do it better."18 He predicted most companies would make similar cuts within a year. The market rewarded the move. Block's shares rose 24%.19

A closer look complicates the story. Block had just delivered Q4 gross profit of $2.87 billion, up 26% year-over-year.20 The cuts came from a position of financial strength, not distress. The AI framing, however, made the story palatable to investors in a way that "cost optimization" alone would not.

This pattern has a name. Harvard Business Review published research in January 2026 showing that the vast majority of AI-attributed layoffs are anticipatory, based on what companies expect AI to deliver, not what it has already delivered.21 Nearly 60% of executives surveyed said they emphasize AI's role in workforce reductions because it is viewed more favourably than financial constraints.22 TechCrunch and other outlets began using the term "AI redundancy washing" to describe the phenomenon.23

The numbers tell the broader story. An NBER working paper based on a survey of 750 US CFOs projects approximately 502,000 AI-related job losses in 2026. That is 0.4% of the US workforce and nine times the 55,000 recorded in 2025.24 It is real, but it is not the tsunami the headlines suggest. Meanwhile, 90% of organizations have frozen or reduced hiring in anticipation, and AI-related job postings have increased 340% since 2024.25

For CEOs, the diagnostic question is straightforward: are your workforce decisions driven by measured AI capability, or by the narrative around it? Cutting headcount because AI might replace roles is a bet. Restructuring because AI has demonstrably changed specific workflows is a strategy. The difference matters for execution, for retention, and for the board conversation.

Strategic takeaways:

  • Scrutinize the AI attribution: Nearly 60% of executives admit they frame cuts as AI-driven because it sounds better than financial pressure.
  • The real numbers are modest: 502,000 projected AI-related job losses in 2026 represent 0.4% of the US workforce. Significant, but not existential.
  • Capability versus narrative: Workforce restructuring based on demonstrated AI performance is strategy. Restructuring based on expected AI performance is speculation.

Conclusion

March 2026 revealed that the AI challenge has shifted. The question is no longer whether organizations adopt AI. It is whether they build the structures that make adoption work. Regulatory infrastructure, agent governance, board oversight, and honest workforce narratives: these are the four dimensions where the gap between ambition and architecture is most visible.

The organizations that close this gap will not be the ones with the most pilots or the boldest AI announcements. They will be the ones that invested in the less visible work: compliance architecture before enforcement arrives, agent monitoring before production scaling, board governance before proxy season pressure, and workforce decisions based on measured capability rather than market narrative.

The infrastructure is not ready. That is the opportunity.

Stay ahead

Subscribe to the Digitainability Brief, Mariusz Bodek's monthly executive analysis on geopolitics, AI governance, and resilience strategy.

Sources

  1. Council of the European Union, "Council agrees position to streamline rules on Artificial Intelligence," 13 March 2026.
    Available at: consilium.europa.eu
  2. European Parliament, IMCO/LIBE committees joint vote on Digital Omnibus on AI, 18 March 2026.
    Available at: europarl.europa.eu
  3. NicFab Blog, "Digital Omnibus on AI: European Parliament adopts negotiating position in plenary," 26 March 2026.
    Available at: nicfab.eu
  4. OneTrust, "How the EU Digital Omnibus Reshapes AI Act Timelines and Governance in 2026."
    Available at: onetrust.com
  5. IAPP, "European Commission misses deadline for AI Act guidance on high-risk systems."
    Available at: iapp.org
  6. DigitalApplied, "AI Agent Scaling Gap March 2026: Pilot to Production."
    Available at: digitalapplied.com
  7. DigitalApplied, "AI Agent Scaling Gap: Why 90% of Pilots Never Ship."
    Available at: digitalapplied.com
  8. Ibid.
  9. Arcade.dev, "State of AI Agents 2026: 5 Enterprise Trends."
    Available at: arcade.dev
  10. Deloitte AI Institute, "The State of AI in the Enterprise, 2026 AI Report."
    Available at: deloitte.com
  11. Governance Intelligence, "AI oversight tops Glass Lewis 2026 proxy season predictions as pressures mount."
    Available at: governance-intelligence.com
  12. The Conference Board, "AI and the C-Suite: Implications for CEO Strategy in 2026."
    Available at: conference-board.org
  13. Ibid.
  14. PwC, "2026 Corporate Governance Trends: Five Priorities for Directors."
    Available at: pwc.com
  15. RealTransparentDisclosure.com, "AI Oversight: Investor Expectations, the S&P 100 and Company-Specific Analysis," 30 March 2026.
    Available at: realtransparentdisclosure.com
  16. Harvard Law School Forum on Corporate Governance, "2026 Corporate Governance Trends to Watch," 8 February 2026.
    Available at: corpgov.law.harvard.edu
  17. CNN Business, "Block lays off nearly half its staff because of AI," 26 February 2026.
    Available at: cnn.com
  18. Fortune, "Jack Dorsey lays off 40% of Block because of AI," 27 February 2026.
    Available at: fortune.com
  19. Ibid.
  20. Ibid.
  21. Harvard Business Review, "Companies Are Laying Off Workers Because of AI's Potential, Not Its Performance," January 2026.
    Available at: hbr.org
  22. Ibid.
  23. TechCrunch, "AI layoffs or 'AI-washing'?" 1 February 2026.
    Available at: techcrunch.com
  24. Fortune, "CFOs admit privately that AI layoffs will be 9x higher this year," 24 March 2026.
    Available at: fortune.com
  25. Ibid.

Disclaimer

To be completely transparent: writing about AI while claiming not to use AI in the content generation process would be dishonest. Therefore, this article was developed with AI-assisted support for source research, quote verification, SEO optimization, and formatting. However, all core ideas, insights, and strategic perspectives are my own original thinking and reflect my personal views as the author.