Chapter 4

Infrastructure is where the race gets decided

Communications infrastructure satisfaction is the #1 predictor of AI deployment success. This variable consistently outperforms all others across every statistical method applied to this dataset (correlations, regression models, and cross-tabulations). Not the investment level. Not the AI maturity. Not the guardrail sophistication.

In AI customer communications, it all comes down to the infrastructure. And that’s where the race will get decided.


The infrastructure gap

The correlation between infrastructure satisfaction and AI deployment confidence is 0.52 – the strongest relationship across 4,656 variable pairs analyzed in this study. Put simply: How an organization feels about its communications infrastructure is a better predictor of AI success than its investment level, guardrail maturity, or how long it has been deploying AI. 

Yet despite 87% rating high-performance communications infrastructure as essential or very important, most organizations say their current provider falls short in at least one meaningful area. 42% report insufficient reliability, 37% cite limited multi-channel capabilities, and 32% mention lack of AI platform integrations. The gap between what enterprises need and what they currently have is where most of the findings in this report originate.

87%

of enterprises rate high-performance communications infrastructure as essential or very important to their AI strategy. (Sinch, 2026)

42%

of respondents report insufficient reliability for AI at scale from their current provider. (Sinch, 2026)

37%

of respondents cite limited multi-channel capability from their current provider. (Sinch, 2026)

32%

of respondents report a lack of AI platform integrations from their current provider. (Sinch, 2026)

Heavy investment, same failure rate

Enterprises invest more in trust, security, and compliance than in AI development. It’s the #1 spending category globally, selected by 75% of respondents, ahead of AI development at 63%. The industry has voted with its budgets: Governance is the priority.

But our data shows this isn’t enough. 74% of organizations that reached production have still shut down or rolled back an agent. Among the most governed programs – the ones investing most heavily in safety – the rate is 81%. 

Ironically, the area enterprises invest in most is also what blocks them most. 37% of respondents name trust, security, and governance as the single biggest barrier to AI business impact. Something structural isn’t working.

Sinch data (2026) shows trust, security, and compliance is the #1 spending category in AI programs globally. 

The cause lies below the surface

The issue isn’t how much is being spent. It’s where the money is going. 84% of engineering teams spend at least half their time building guardrails from scratch – many of which should be provided by a native platform. Every dollar going there is a dollar treating a symptom rather than the cause. 

For many of those failures, the root cause is one layer below. Our research shows that more than half of enterprises (55%) are custom-engineering the ability to preserve customer context when someone switches channels – from chat to voice, from WhatsApp to a phone call – because their platform doesn’t provide it natively. When a customer has to repeat themselves to an AI agent, they’re not experiencing a model failure. They’re experiencing the infrastructure gap directly, and it’s your brand that pays the price.
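To make the gap concrete, here is a minimal sketch of what those 55% of enterprises are custom-engineering: a shared conversation store keyed by customer ID, so an agent on any channel can pick up where the last one left off. All names and structures here are illustrative, not any specific vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationContext:
    customer_id: str
    history: list = field(default_factory=list)  # (channel, message) pairs

class ContextStore:
    """Hypothetical cross-channel context store an enterprise might hand-build."""

    def __init__(self):
        self._contexts = {}

    def record(self, customer_id: str, channel: str, message: str) -> None:
        # Append every interaction to one history shared across channels.
        ctx = self._contexts.setdefault(customer_id, ConversationContext(customer_id))
        ctx.history.append((channel, message))

    def handoff(self, customer_id: str, new_channel: str) -> list:
        # When the customer switches channels, the new agent receives the
        # full history instead of asking them to repeat themselves.
        ctx = self._contexts.get(customer_id)
        return ctx.history if ctx else []

store = ContextStore()
store.record("cust-42", "chat", "My order arrived damaged")
history = store.handoff("cust-42", "voice")  # the voice agent sees the chat history
```

When the platform provides this natively, the entire class above disappears from the enterprise's codebase, along with the burden of keeping it consistent across every new channel.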

The market is already responding

Enterprises haven’t fully articulated that diagnosis yet, but their behavior suggests they’ve felt it. 86% have had active or exploratory conversations with alternative providers in the past 12 months, and only 4% have no plans to evaluate. 

The two strongest triggers for switching providers confirm where the pain is. 91% of enterprises that have experienced a governance rollback have evaluated or are actively evaluating a new communications provider. And organizations that rate high-performance infrastructure as a top strategic concern are twice as likely to be evaluating alternatives as those that don't.

The market isn’t necessarily shopping because it’s unhappy with a vendor. It’s shopping because AI ambitions have outgrown what the current infrastructure was built to handle.

What AI leaders are evaluating on

When asked to rank criteria for evaluating new communications providers to power agentic interactions, reliability ranked first: 29% of respondents placed it at the top, ahead of global reach and compliance capability (23%) and ease of integration (18%).

Security and trust, along with multichannel capabilities, round out the top five, ahead of pricing. In fact, pricing ranks eighth out of the nine factors evaluated in the survey.

Sinch data (2026) reveals reliability and uptime is the most important factor for 29% of respondents. 

The structural cost of building without a foundation

Good security architecture doesn’t ask whether to build controls like PII masking, rate limiting, and audit logging. It asks where. The right answer is both layers: platform-native first, with engineering teams adding controls on top. That’s defense-in-depth. 

Building without a foundation means engineering teams aren't making security decisions. Instead, they're filling gaps reactively on every new agent deployment and every new channel. When a solid foundation exists, these teams can build intentional controls on top of something that holds.

That’s where the architectural question becomes a roadmap question. When platform-native controls exist, engineering leaders can see what their team is actually building and assess what adds genuine protection and what merely reconstructs infrastructure the platform should already own. That determines whether engineering teams are building toward the next capability or just keeping the lights on.
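The two-layer split can be sketched in a few lines. This is a hypothetical illustration of the defense-in-depth idea, not any vendor's implementation: a platform-native control (PII masking) runs first, and a team's own business rule sits on top of it.

```python
import re

def platform_mask_pii(text: str) -> str:
    # Platform-native control: mask email addresses before the text
    # reaches the model or the logs. (Illustrative pattern only.)
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)

def custom_refund_policy(text: str) -> str:
    # Custom control built on top: this (hypothetical) team's agents must
    # never promise refunds without human review.
    if "refund" in text.lower():
        return "A human agent will follow up about your refund request."
    return text

def guarded_reply(raw_reply: str) -> str:
    # Defense-in-depth: platform layer first, custom layer on top.
    return custom_refund_policy(platform_mask_pii(raw_reply))

print(guarded_reply("Contact me at ana@example.com"))  # -> Contact me at [EMAIL]
```

The design point is the ordering: the platform layer is always present, so the custom layer only encodes decisions unique to the business, rather than rebuilding masking, rate limiting, or audit logging for every new agent.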

“Every team needs to decide which controls belong at the platform layer and which their engineers should build on top, because the cost of building custom guardrails compounds over time, especially as the team moves through the product lifecycle. Each new agent, each new channel, each new deployment adds to the pile. Eventually, you lose the momentum you need to outperform in the market.”
Anton Efimenko SVP, Software Engineering • Sinch

What to do with this

You’ve seen the data. Now it’s time to act on it. Three questions worth taking into your next leadership review:

• How much of your engineering team’s time is going toward building guardrails from scratch versus building the next customer experience?

• If those guardrails fail in a live customer interaction, would you know before it becomes a trust problem?

• And is your communications provider built for what you’re planning to deploy in the next 12 months, or for what you shipped 18 months ago? 

The full findings of this report will help you determine what your priorities should be for the year ahead. 

The full picture lands in June

This is an early look at the findings from “The AI Production Paradox” report. The full research – including regional cuts across North America, EMEA, APAC, and Latin America, vertical analysis, and cross-persona comparisons – will publish later in Q2.

Come back in June to discover what enterprises have learned after shipping and what it actually takes to resolve the production paradox. Or drop your email below to get the findings directly in your inbox.