AI in Power: What Diella Reveals About Technology, Security, and Decision-Making – Part 2

In Part 1 of this series, we looked at the shockwave Albania sent through the world by appointing Diella, an AI system, to its cabinet. That article was about symbolism: what it means to see a machine sworn in as a minister, and the leadership questions it raises around trust, accountability, and legitimacy.

But symbolism only takes us so far. The real test isn’t the headlines or the political theatre but the technology itself. What happens when an algorithm sits at the decision-making table? Can a system like Diella deliver cleaner, faster, more transparent outcomes, or does it simply swap old problems for new ones: hidden bias, data manipulation, hacking, and vulnerabilities we don’t yet fully understand? These are not abstract questions. They matter for governments and for every organisation now considering how far to extend AI into decision-making.

To unpack these issues, we obtained insights from Harsha Rao, Head of the newly launched Regenesys School of Artificial Intelligence (SAI). His view is clear: while AI can already be a powerful assistant in procurement, it is nowhere near ready to take full authority without strong human oversight and a robust governance framework.

The Capabilities and Boundaries of AI

Diella’s story didn’t begin in parliament. Before her appointment, she was already active on e-Albania, the country’s digital services portal, helping citizens retrieve documents, complete forms, and navigate bureaucracy. These tasks play directly to AI’s strengths: spotting patterns in massive datasets, applying rules with consistency, and processing information at speeds no human could match.

According to Rao, AI can score bids, cross-check compliance, and detect anomalies with remarkable precision. However, he states that there is a crucial distinction between AI as a decision-support tool and AI as a decision-making authority. The former works. The latter is premature.

This distinction sits at the heart of what Rao calls the Three-Layer Governance Architecture. In this approach, AI is kept to the structured, repeatable tasks: scoring bids, verifying compliance, and raising red flags. Humans remain responsible for the areas that require judgement, context, and ethical reasoning, such as political trade-offs or strategic decisions. Above both lies the accountability layer: independent auditors with the authority and expertise to ensure the system stays transparent, fair, and free from hidden bias.
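
To make the three-layer split concrete, here is a minimal sketch in Python of how such a pipeline might be wired. Everything in it – the bid fields, the scoring weights, the escalation threshold – is an illustrative assumption, not a description of Diella's actual implementation.

from dataclasses import dataclass

@dataclass
class ProcurementBid:
    vendor: str
    price: float
    compliance_ok: bool
    past_delivery_rate: float   # share of past contracts delivered on time (0-1)

def score_bid(bid: ProcurementBid, budget: float) -> float:
    # Layer 1 (automation): structured, repeatable scoring.
    if not bid.compliance_ok:
        return 0.0
    price_score = max(0.0, 1.0 - bid.price / budget)
    return 0.6 * bid.past_delivery_rate + 0.4 * price_score

def route_decision(bid: ProcurementBid, budget: float, audit_log: list) -> str:
    # Layer 3 (accountability): every score is written to an auditable log.
    score = score_bid(bid, budget)
    audit_log.append({"vendor": bid.vendor, "score": round(score, 3)})
    # Layer 2 (human judgement): failures and high-value awards are escalated.
    if score == 0.0 or bid.price > 0.9 * budget:
        return "escalate to human reviewer"
    return f"recommend award (score {score:.2f}), pending human sign-off"

The point of the sketch is the routing, not the arithmetic: the algorithm never issues a final award on its own, and everything it does leaves a record for the audit layer.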

This approach reflects the direction of global regulation. The EU AI Act classifies systems like Diella’s as “high-risk,” requiring not only technical documentation but ongoing risk management, human oversight, and clear transparency mechanisms. The global consensus is clear: AI can assist, but it should not decide alone.

Superhuman Strengths and Human Gaps

Diella brings capabilities far beyond what a human official could manage. Unlike people, an AI does not tire. The thousandth tender receives the same scrutiny as the first. It can analyse hundreds of variables at once – financial track records, compliance histories, delivery timelines – with perfect recall. Properly designed, it avoids many of the cognitive biases that distort human decisions, such as recency bias or the halo effect. And unlike human reasoning, AI leaves a complete digital trail, enabling every step of a decision to be retraced.

But these strengths coexist with glaring gaps. AI cannot grasp nuance, interpret context, or sense when the spirit of a rule matters more than its letter. It cannot negotiate with stakeholders, recognise shifting dynamics in a room, or weigh competing values when efficiency and equity collide. Most of all, it falters in moments of novelty. Models trained on yesterday’s data break down when confronted with unprecedented scenarios.

Rao calls this the Asymmetric Capabilities Framework: AI excels when scale, consistency, and auditability are needed, but it falls short where wisdom, ethics, and human judgement are essential. The solution, he argues, is Hybrid Intelligence – a model where machines amplify human capacity, while humans remain the custodians of context and values.

This conclusion is echoed in wider research. The Stanford AI Index 2024 found that while AI outperforms humans in structured tasks such as pattern recognition, it consistently underperforms in areas requiring common sense, adaptability, or moral reasoning. These limitations aren’t technical flaws; they are reminders of the irreplaceable qualities humans bring to governance.

The Security Dimension

If Diella is the gatekeeper of Albania’s procurement system, she is also its most tempting target. As Rao warns, centralising procurement authority in a single algorithm doesn’t reduce risk; it multiplies it.

The threats are serious. AI systems can be manipulated through adversarial inputs, where bids are crafted to trick the algorithm into skewed outcomes. They are also vulnerable to data poisoning: if the training data is biased or corrupted, the system can amplify corruption instead of eliminating it, a risk highlighted in Nature’s research on data poisoning. Supply-chain risks add another layer: because Diella runs on Microsoft Azure and OpenAI’s models, any compromise upstream could cascade downstream into Albania’s procurement system, as warned in the Microsoft Digital Defense Report.
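
To illustrate just one narrow defence against manipulated inputs, the sketch below flags bids whose quoted prices fall far outside the historical range for comparable tenders before they ever reach a scoring model. The field names and the three-standard-deviation threshold are assumptions for illustration; a screen like this is no substitute for the broader safeguards discussed next.

import statistics

def flag_adversarial_bids(new_bids: list[dict], historical_prices: list[float]) -> list[str]:
    # Return vendors whose quoted price is a statistical outlier relative to
    # past comparable tenders, and so warrants manual inspection.
    mean = statistics.mean(historical_prices)
    sd = statistics.stdev(historical_prices)
    return [b["vendor"] for b in new_bids if abs(b["price"] - mean) > 3 * sd]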

The School of AI proposes a Six Pillar Security Framework: relentless red-team testing, blockchain-based audit trails, independent algorithm auditors, explainability standards, multiple-system checks, and a human override or “kill switch.” Without these safeguards, Diella represents not just an innovation but also a significant point of vulnerability.
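
As a rough illustration of two of those pillars – the tamper-evident audit trail and the human override – the sketch below chains each recorded decision to the previous one with a hash, so any later edit breaks the chain, and refuses to record anything once a human has halted the system. The class and its methods are assumptions for illustration, not the School of AI’s specification; a production system would also anchor entries in an external ledger.

import hashlib, json, time

class AuditTrail:
    def __init__(self):
        self.entries = []
        self.halted = False   # human override: no decisions recorded once halted

    def record(self, decision: dict) -> None:
        if self.halted:
            raise RuntimeError("System halted by human override")
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps({"decision": decision, "prev": prev_hash,
                              "ts": time.time()}, sort_keys=True)
        self.entries.append({"payload": payload,
                             "hash": hashlib.sha256(payload.encode()).hexdigest()})

    def verify(self) -> bool:
        # Re-hash every entry and check the chain links; any tampering shows up here.
        prev = "genesis"
        for e in self.entries:
            data = json.loads(e["payload"])
            if data["prev"] != prev or e["hash"] != hashlib.sha256(e["payload"].encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

    def human_override(self) -> None:
        self.halted = True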

The Neutrality Myth

One of Diella’s strongest selling points is that she cannot be bribed. But as Rao points out, that does not make her neutral.

He frames this as the Embedded Power Thesis: every algorithm encodes the fingerprints of its creators. Historical procurement data may already reflect patterns of favouritism; train an AI on it and those patterns are reproduced at scale. Even technical choices – which data points matter, or what metric to optimise – are value judgements. Optimising for efficiency, for instance, can sideline priorities like sustainability or fair labour unless explicitly built into the system. Cultural bias also creeps in, with English-trained models privileging polished submissions over smaller, local firms with less refined proposals.
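
One way to surface this risk before it is baked into a model is to measure the historical record directly. The sketch below computes each vendor group’s share of past awards and flags any group that dominates the data a model would be trained on; the group labels and the threshold are illustrative assumptions, not part of Diella’s design.

from collections import Counter

def award_share_by_group(awards: list[dict]) -> dict[str, float]:
    # Share of past contract awards per vendor group (e.g. "local" vs "international").
    counts = Counter(a["vendor_group"] for a in awards)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_dominant_groups(shares: dict[str, float], threshold: float = 0.7) -> list[str]:
    # Groups that dominate the historical data; a model trained on it will reproduce this pattern.
    return [g for g, s in shares.items() if s > threshold]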

The danger, Rao warns, is “statistical legitimacy”: because outputs are algorithmic, they appear objective. But when challenged, accountability becomes blurred: the system points to the data, the data to the design, and no single person is responsible. Neutrality, in this sense, is not the absence of bias – it is bias disguised.

These concerns are not abstract. A Politico analysis noted that Albania has offered few details on how Diella’s decisions can be audited or appealed, raising fears that inequities may be concealed rather than resolved.

From Trust to Proof

If there is one principle that must guide AI in governance, it is this: do not ask for trust, provide proof.

Rao calls this earned transparency. Every decision must be explained in plain language: why one vendor won, why another lost, and how the result might have shifted under different circumstances. Vendors should have formal rights of appeal, with human experts reviewing contested cases. And before being given authority, systems like Diella should be tested in parallel with human decision-makers, with their past outputs regularly audited for bias or error.
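
As a sketch of what such a plain-language record might look like in practice, the snippet below packages a reason, a simple counterfactual, and a pointer to the appeal route alongside each award. The field names and scoring inputs are assumptions for illustration, not Diella’s actual interface.

def explain_award(winner: dict, runner_up: dict) -> dict:
    # Build a human-readable record of why one vendor won and what would have changed the result.
    margin = winner["score"] - runner_up["score"]
    return {
        "decision": f"{winner['vendor']} awarded the contract",
        "reason": (f"{winner['vendor']} scored {winner['score']:.2f} against "
                   f"{runner_up['vendor']}'s {runner_up['score']:.2f}, mainly on "
                   f"{winner['strongest_criterion']}"),
        "counterfactual": (f"{runner_up['vendor']} would have won had its score "
                           f"improved by more than {margin:.2f} points"),
        "appeal": "Vendors may lodge a formal appeal; contested cases are reviewed by a human panel",
    }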

Transparency, however, is not just documentation. Rao argues for multi-stakeholder oversight councils that include civil society, business associations, and citizen representatives. Failures must be disclosed quickly and visibly, with clear corrective actions. Trust is not built on promises of perfection but on visible honesty in repair.

These principles align with the EU AI Act, which requires high-risk systems to be explainable, auditable, and contestable. Governments can no longer rely on “trust us” when delegating authority to machines – they must design systems that prove themselves, decision by decision.

The Road Ahead

Looking forward, Rao argues that the adoption of AI in government is inevitable, driven by seven converging forces. Advances in verifiable AI will reduce the trust gap by allowing systems to prove their reasoning cryptographically. Fiscal pressures will push governments to automate, while regulatory clarity, paradoxically, will accelerate adoption by lowering institutional risk. Competitive pressure will ensure no country wants to appear technologically behind.

At the same time, corruption networks will adapt, investing in adversarial tactics of their own. This will trigger an arms race of AI versus counter-AI. Generative AI will bring new capabilities, from drafting tenders to modelling scenarios, while crises, whether economic or political, will act as catalysts for rapid adoption.

But Rao is clear: two futures are possible. In one, governments rush in without safeguards, leading to high-profile failures that erode confidence. In the other, adoption is steady, guided by regulation and best practice that matures by 2030. The deciding factor will not be the brilliance of the algorithms, but the resilience of the institutions built around them.

Ultimately, the question is not whether AI will be used in governance – that is certain – but how societies build the rules, institutions, and ethical boundaries around it. Algorithms may calculate, but it is politics that decides what counts as fair, legitimate, or corrupt.

In Part 3 of this series, we will turn to those political and ethical frontiers. What does it mean for democracy when a machine holds ministerial power? Can AI truly root out corruption, or does it risk becoming a new shield for unaccountable authority? And how do leaders ensure that efficiency never comes at the expense of justice, transparency, and human dignity?

Author

Neo | Content Writer, Regenesys Business School | Dip Media Practices

Neo is a Content Writer at Regenesys Education with a passion for crafting engaging, purpose-driven content. She contributes to various Regenesys platforms, including the RegInsights blog and Regenesys Business World Magazine, focusing on leadership, education, and personal development. With a background in marketing communications, Neo brings creativity, strategy, and a strong sense of purpose to her work. Outside of the office, she’s committed to using her voice to advocate for education, wellness, and opportunities for neurodivergent individuals.
