Artificial Intelligence entered the enterprise before any architecture existed to govern it. It happened quickly, and in a decentralized way: development teams, business departments, and product lines started building applications that each call an LLM on their own, with their own API key, their own contract with the provider, their own prompt library. The phenomenon already has a name, AI Sprawl, and the consequences are visible: sensitive data leaving the perimeter without anyone noticing, costs fragmented across a myriad of line items, and regulatory compliance (chiefly the AI Act and GDPR) becoming hard to demonstrate.
At Bitrock, we are convinced that the answer to this fragmented, uncontrolled, and heterogeneous landscape is not to slow down AI adoption — quite the opposite. The answer is to introduce a layer of control that allows organizations to innovate, but with greater awareness — that is, to know, at any given moment, which internal applications are talking to which models, on which data, at what cost, and with what guarantees.
The solution we propose, part of the Fortitude Group product portfolio, is Radicalbit’s AI Gateway: a single transit point between applications and models, where requests are inspected, sensitive data masked, costs tracked, and audit trails produced.
In this article, together with Mauro Mariniello, Product Owner at Radicalbit, we explore why an enterprise organization today can hardly do without a component of this kind, and we unveil a piece of news: the AI Gateway is about to be released as open source. A choice that has direct effects on the product — it opens the way to vertical extensions, independent audits, and external contributions — and that is consistent with the principle of digital sovereignty that the Fortitude Group places at the core of its vision.
Shadow AI and AI Sprawl
Today, building a tool that uses a language model no longer requires a dedicated team and months of work: vibe coding, low-code tools, and the availability of SDKs for OpenAI, Anthropic, Google, Mistral, and the wider universe of open source models have lowered the barrier to entry. A single developer — or a business user with a minimum of technical confidence — can quickly stand up an integration that talks to an LLM. Multiplied by dozens of teams or hundreds of initiatives, this phenomenon produces an ecosystem that expands in an uncoordinated way. This is what we can call AI Sprawl: the uncontrolled, disorganized, and ungoverned adoption of multiple Artificial Intelligence tools, applications, and models within an enterprise, which often intertwines with a second phenomenon, more discussed but often oversimplified: Shadow AI.
When people talk about Shadow AI, the common reflex is to think of the employee who opens a public chatbot in the browser and pastes corporate data into it. It is a real problem, but one that is addressed primarily at the network, policy, and training level: DLP filters, rules on allowed domains, awareness of safe behaviors.
An AI Gateway, on this specific vector, does relatively little — and it is right to acknowledge that.
There is, however, a second face of Shadow AI, far more pervasive, that coincides precisely with AI Sprawl. In effect, it is the Shadow AI that lives inside the custom applications built by internal teams: tools that call external LLMs, integrations that send prompts enriched with production data, agents that orchestrate APIs without the IT department knowing exactly what they are doing.
And it is precisely on this front that the AI Gateway makes the difference, because it transforms an invisible proliferation into a governed flow.
The risks of ungoverned AI
Without a layer of control, AI Sprawl manifests itself in concrete dimensions that we regularly encounter in assessments with enterprise clients. Specifically, in the following areas:
- Duplication. Different teams build the same thing n times: each initiative carries its own API key, its own contract with the provider, its own prompt library, and so on. Costs are thus fragmented into items that are hard to trace back to a single AI budget, fixes and lessons learned are not shared, and when one team solves a prompt injection issue, none of the others hear about it.
- Compliance and privacy. Internal applications often handle information that should not leave the perimeter: customer PII, confidential documents, intellectual property, financial data. When this ends up inside a prompt directed at an external model, typically no one has carried out a formal risk assessment. For organizations falling under the AI Act or GDPR, such a position is difficult to defend in the face of an audit.
- Operational invisibility. IT does not know how many models are running, how many euros per month are being spent, which providers are involved, or where the data ends up. Building a measurable ROI on a surface area that cannot even be measured is an exercise that, in practice, is not sustainable.
Radicalbit AI Gateway
Radicalbit’s AI Gateway is therefore a centralized hub for easily integrating AI services into applications and IT infrastructure. It acts as a secure, observable, and performance-optimized entry point for all AI traffic, where requests are validated, sensitive data masked, malicious prompts blocked, costs tracked, and behaviors logged.
The solution guarantees compliance and governance for all AI operations across the enterprise, serving as a single endpoint for optimal resource management and cost control.
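To make the idea of a single entry point concrete, here is a minimal sketch in Python of the pipeline described above: guardrail check, PII masking, cost tracking, and an audit trail. All names here (`mask_pii`, `handle_request`, `GatewayLog`, the regexes, the token pricing) are our own illustrative assumptions, not the actual API of Radicalbit's AI Gateway.

```python
import re
from dataclasses import dataclass, field

# Naive PII patterns for illustration: e-mail addresses and IBANs.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
IBAN = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

# A toy prompt-injection blocklist; a real gateway would use richer guardrails.
BLOCKED_PATTERNS = ("ignore previous instructions",)


@dataclass
class GatewayLog:
    """Audit trail: one entry per request, whatever its outcome."""
    entries: list = field(default_factory=list)


def mask_pii(prompt: str) -> str:
    """Replace e-mails and IBANs with placeholders before the prompt leaves the perimeter."""
    return IBAN.sub("[IBAN]", EMAIL.sub("[EMAIL]", prompt))


def handle_request(app_id: str, model: str, prompt: str, log: GatewayLog,
                   price_per_1k_tokens: float = 0.01) -> str:
    """Single entry point: validate, mask, account for cost, and log."""
    if any(p in prompt.lower() for p in BLOCKED_PATTERNS):
        log.entries.append({"app": app_id, "model": model, "status": "blocked"})
        raise ValueError("request blocked by guardrail")
    safe_prompt = mask_pii(prompt)
    est_tokens = max(1, len(safe_prompt) // 4)  # rough chars-to-tokens heuristic
    log.entries.append({"app": app_id, "model": model, "status": "forwarded",
                        "est_cost_eur": est_tokens / 1000 * price_per_1k_tokens})
    # A real gateway would now forward safe_prompt to the configured provider.
    return safe_prompt
```

Even in this toy form, the value proposition is visible: because every application goes through `handle_request`, the organization gets one place to answer "which apps call which models, on which data, at what cost" instead of n places.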
A significant new development: the release of an open source version of the AI Gateway is currently underway — a step that deserves to be read in its strategic significance.
The push toward open source stems from a core conviction: at this moment in history, AI is reshaping the way companies work, make decisions, and manage their customers’ data. When a technology has an impact of this magnitude, the governance infrastructure surrounding it cannot be a closed box. It must be a common good — inspectable, verifiable, and improvable by anyone with the skills to do so.
From here, three lines of reasoning unfold which, in our view, reinforce one another.
Democratization: the same governance, without barriers to entry
Large enterprises have the resources to build an AI Gateway in-house or to purchase expensive enterprise solutions. Italian SMEs, startups, public administrations, and the research community risk being left behind — not because they don’t need governance (they need it just as much as the large players), but because the barrier to entry is too high. Opening the code drastically lowers that barrier: anyone can download the Gateway, install it, understand it, adapt it to their context, and deploy it on-premise or in private cloud without going through consumption-based licensing or enterprise contracts.
It is a choice that aligns with the Italian and European productive fabric — largely made up of SMEs and public-sector entities — and with the nature of the Fortitude Group, which operates with made-in-EU products and open technologies as a pillar of its value proposition.
Transparency: no black box for a transparency tool
There is an almost paradoxical argument in asking organizations to trust a proprietary solution to control and bring transparency to the use of AI. Open source resolves the contradiction: what the Gateway does with the data passing through it is verifiable, line by line, by anyone with the skills to read it. No hidden features, no opaque behaviors, no undeclared telemetry.
On the broader topic of security, it is worth overturning a frequent prejudice: the idea that public code equates to more vulnerable code. Recent high-profile incidents involving open source components, serious as they were, showed something different: the speed of reaction of a global community that produces patches, mitigations, and public analyses within a few days is hard to match in a closed setting.
The operating principle that emerges is that of security through transparency: hiding the code does not make software more secure, it only makes it harder to verify. For a tool that applies guardrails, masking, and controls on sensitive data, independent code auditability is a credibility requirement.
Co-building with the community
The best open source infrastructures are not built "for" the community, but "with" the community. This is one of the guiding principles behind the choice made by the Radicalbit product team and, more broadly, by the Fortitude Group.

The project's governance model, contribution channels, and the structure of the open roadmap are still being defined. We anticipate moments for discussion, previews, and opportunities for the community to weigh in on the directions the project will take. The explicit invitation to Italian and European developers, software architects, and AI leaders is to follow the journey and to participate when the doors open.
Open source, digital sovereignty, and vendor independence
The topic of open source cannot be separated from that of digital sovereignty, which the Fortitude Group has placed at the heart of its vision. You are not truly digital, we maintain, if you are not sovereign over your own digital. And you are not sovereign over your own digital if the first thing you do to control AI is to hand over control of the control layer to a third party.
An open source AI Gateway restores a degree of freedom that a closed solution cannot match, and that freedom translates into practice across four dimensions.
On the deployment side, the code can run on-premise, in private cloud, in public cloud, or in hybrid configurations: no technical constraint forces you to rely on an external managed service for a tool that, by definition, is meant to defend the perimeter.
On the inspection side, there is nothing a security team cannot read — masking functions, routing rules, guardrails, and key management are all independently auditable, without requiring NDAs with the vendor.
On the extension side, vertical needs — guardrails for the financial sector, specific controls for healthcare, integrations with legacy systems — can be built in-house or delegated to a trusted partner, without having to wait for the vendor’s roadmap.
On the exit side, if the project changes direction or the organization decides to change strategy, the code remains: there is no lock-in mechanism tying the operation of the infrastructure to the will of a single supplier.
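To make the extension dimension concrete, a vertical guardrail could be as small as an object implementing a shared interface that the gateway pipeline calls before forwarding a request. Everything below (the `Guardrail` protocol, `FinanceKeywordGuardrail`, `run_guardrails`) is a hypothetical sketch of what an in-house extension might look like, not the Gateway's actual extension API.

```python
from typing import Protocol


class Guardrail(Protocol):
    """Hypothetical extension point: each guardrail inspects a prompt and may veto it."""
    def check(self, prompt: str) -> bool: ...  # True = allow, False = block


class FinanceKeywordGuardrail:
    """Example sector-specific guardrail: block prompts touching insider information."""
    FORBIDDEN = ("unreleased earnings", "insider")

    def check(self, prompt: str) -> bool:
        return not any(term in prompt.lower() for term in self.FORBIDDEN)


def run_guardrails(prompt: str, guardrails: list) -> bool:
    """A gateway pipeline would call this on every request before forwarding it."""
    return all(g.check(prompt) for g in guardrails)
```

The point of such an interface is that a bank, a hospital, or a trusted partner can ship its own `Guardrail` implementation and drop it into the pipeline without waiting for a vendor's roadmap.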
In a European scenario where the AI Act, GDPR, and the geopolitical fluidity of supply policies make it unwise to depend blindly on non-EU vendors, these freedoms are the difference between declared technological autonomy and real technological autonomy.
Conclusion
The open source release of Radicalbit’s AI Gateway marks a strategic step for the Fortitude Group: making the control plane between applications and AI models a common good — inspectable and adaptable — and giving SMEs, public administrations, and large enterprises the same level of governance without barriers to entry.
Innovation and security are not alternatives: the real choice is between innovating with control and innovating without, and the difference is made by the architecture you decide to put underneath. An AI Gateway is one of the first building blocks of digital sovereignty: the point through which every piece of data, every prompt, every response must pass, so that the organization can say, with full awareness, what its AI is doing, on which information, at what cost, and with what guarantees toward customers and regulators.
On this journey, Bitrock supports enterprise organizations in the design and implementation of this control layer, integrating it into the existing stack and into development processes. To learn more about our offering in Applied AI and AI Governance, get in touch!
FAQ

What is an AI Gateway?
An infrastructure layer that sits between enterprise applications and Artificial Intelligence models. All calls to the models pass through it, so that they can be inspected, filtered, tracked, and centrally governed.
What is the difference between Shadow AI and AI Sprawl?
Shadow AI, which we explore further in an article on our blog, is the phenomenon whereby employees, teams, or entire functions start using Artificial Intelligence tools outside the company's formal technology adoption processes. This translates into exposure of sensitive information and a usage trail that is impossible to reconstruct after the fact. AI Sprawl, on the other hand, is the proliferation of custom AI applications built internally by teams without central coordination, often integrated via API with external LLMs. An AI Gateway has its main impact on AI Sprawl and on the portion of Shadow AI that lives inside custom applications.
Why release the AI Gateway as open source?
For three converging reasons. To democratize access to AI governance for actors who cannot afford enterprise solutions. To make a transparency tool consistent with an equally transparent infrastructure, avoiding the paradox of a black box that is supposed to guarantee visibility over other black boxes. To build a roadmap with the community, and not just for the community.
Doesn't open code make the Gateway less secure?
On the contrary. For a governance component, code transparency is a security advantage, not a risk: it enables independent audits, responsible disclosure, and external validation.