{"id":169132,"date":"2025-10-02T19:09:47","date_gmt":"2025-10-02T13:39:47","guid":{"rendered":"https:\/\/www.regenesys.net\/reginsights\/?p=169132"},"modified":"2025-11-04T16:48:41","modified_gmt":"2025-11-04T11:18:41","slug":"ai-in-power-what-diella-reveals-about-technology-security-and-decision-making-part-2","status":"publish","type":"post","link":"https:\/\/www.regenesys.net\/reginsights\/ai-in-power-what-diella-reveals-about-technology-security-and-decision-making-part-2","title":{"rendered":"AI in Power: What Diella Reveals About Technology, Security, and Decision-Making \u2013 Part 2"},"content":{"rendered":"\n
In Part 1<\/a> of this series, we looked at the shockwave Albania sent through the world by appointing Diella, an AI system, to its cabinet. That article was about symbolism: what it means to see a machine sworn in as a minister, and the leadership questions it raises around trust, accountability, and legitimacy.<\/p>\n\n\n\n But symbolism only takes us so far. The real test isn\u2019t the headlines or the political theatre but the technology itself. What happens when an algorithm sits at the decision-making table? Can a system like Diella deliver cleaner, faster, more transparent outcomes, or does it simply swap old problems for new ones: hidden bias, data manipulation, hacking, and vulnerabilities we don\u2019t yet fully understand? These are not abstract questions. They matter for governments and for every organisation now considering how far to extend AI into decision-making.<\/p>\n\n\n\n To unpack these issues, we obtained insights from Harsha Rao, Head of the newly launched Regenesys School of Artificial Intelligence (SAI)<\/a>. His view is clear: while AI can already be a powerful assistant in procurement, it is nowhere near ready to take full authority without strong human oversight and a robust governance framework.<\/p>\n\n\n\n Diella\u2019s story didn\u2019t begin in parliament. Before her appointment, she was already active on e-Albania<\/a>, the country\u2019s digital services portal, helping citizens retrieve documents, complete forms, and navigate bureaucracy. These tasks play directly to AI\u2019s strengths: spotting patterns in massive datasets, applying rules with consistency, and processing information at speeds no human could match.<\/p>\n\n\n\n According to Rao, AI can score bids, cross-check compliance, and detect anomalies with remarkable precision. However, he states that there is a crucial distinction between AI as a decision-support tool and AI as a decision-making authority. The former works. 
The latter is premature.<\/p>\n\n\n\n This distinction sits at the heart of what Rao calls the Three-Layer Governance Architecture. In this approach, AI is kept to structured, repeatable tasks: scoring bids, verifying compliance, and flagging irregularities. Humans remain responsible for the areas that require judgement, context, and ethical reasoning, such as political trade-offs or strategic decisions. Above both lies the accountability layer: independent auditors with the authority and expertise to ensure the system stays transparent, fair, and free from hidden bias.<\/p>\n\n\n\n This approach reflects the direction of global regulation. The EU AI Act<\/a> classifies systems like Diella\u2019s as \u201chigh-risk,\u201d requiring not only technical documentation but ongoing risk management, human oversight, and clear transparency mechanisms. The global consensus is clear: AI can assist, but it should not decide alone.<\/p>\n\n\n\n Diella brings capabilities far beyond what a human official could manage. Unlike people, an AI does not tire. The thousandth tender receives the same scrutiny as the first. It can analyse hundreds of variables at once \u2013 financial track records, compliance histories, delivery timelines \u2013 with perfect recall. Properly designed, it avoids many of the cognitive biases that distort human decisions, such as recency bias or the halo effect. And unlike human reasoning, AI leaves a complete digital trail, enabling every step of a decision to be retraced.<\/p>\n\n\n\n But these strengths coexist with glaring gaps. AI cannot grasp nuance, interpret context, or sense when the spirit of a rule matters more than its letter. It cannot negotiate with stakeholders, recognise shifting dynamics in a room, or weigh competing values when efficiency and equity collide. Most of all, it falters in moments of novelty. 
Models trained on yesterday\u2019s data break down when confronted with unprecedented scenarios.<\/p>\n\n\n\n Rao calls this the Asymmetric Capabilities Framework<\/em>: AI excels when scale, consistency, and auditability are needed, but it falls short where wisdom, ethics, and human judgement are essential. The solution, he argues, is Hybrid Intelligence<\/em> \u2013 a model where machines amplify human capacity, while humans remain the custodians of context and values.<\/p>\n\n\n\n This conclusion is echoed in wider research. The Stanford AI Index 2024<\/a> found that while AI outperforms humans in structured tasks such as pattern recognition, it consistently underperforms in areas requiring common sense, adaptability, or moral reasoning. These limitations aren\u2019t technical flaws; they are reminders of the irreplaceable qualities humans bring to governance.<\/p>\n\n\n\n If Diella is the gatekeeper of Albania\u2019s procurement system, she is also its most tempting target. As Rao warns, centralising procurement authority in a single algorithm doesn\u2019t reduce risk; it multiplies it.<\/p>\n\n\n\n The threats are serious. AI systems can be manipulated through adversarial inputs, where bids are crafted to trick the algorithm into skewed outcomes. They are also vulnerable to data poisoning: if the training data is biased or corrupted, the system can amplify corruption instead of eliminating it, a risk highlighted in Nature\u2019s research on data poisoning<\/a>. 
Supply-chain risks add another layer: because Diella runs on Microsoft Azure and OpenAI\u2019s models, any compromise upstream could cascade downstream into Albania\u2019s procurement system, as warned in the Microsoft Digital Defense Report<\/a>.<\/p>\n\n\n\n The School of AI proposes a Six-Pillar Security Framework<\/em>: relentless red-team testing, blockchain-based audit trails, independent algorithm auditors, explainability standards, multiple-system checks, and a human override or \u201ckill switch.\u201d Without these safeguards, Diella represents not just an innovation but also a significant point of vulnerability.<\/p>\n\n\n\n One of Diella\u2019s strongest selling points is that she cannot be bribed. But as Rao points out, that does not make her neutral.<\/p>\n\n\n\n He frames this as the Embedded Power Thesis<\/em>: every algorithm encodes the fingerprints of its creators. Historical procurement data may already reflect patterns of favouritism; train an AI on it and those patterns are reproduced at scale. Even technical choices \u2013 which data points matter, or what metric to optimise \u2013 are value judgements. Optimising for efficiency, for instance, can sideline priorities like sustainability or fair labour unless explicitly built into the system. Cultural bias also creeps in, with English-trained models privileging polished submissions over smaller, local firms with less refined proposals.<\/p>\n\n\n\n The danger, Rao warns, is \u201cstatistical legitimacy\u201d: because outputs are algorithmic, they appear objective. But when challenged, accountability becomes blurred: the system points to the data, the data to the design, and no single person is responsible. Neutrality, in this sense, is not the absence of bias \u2013 it is bias disguised.<\/p>\n\n\n\nThe Capabilities and Boundaries of AI<\/h2>\n\n\n\n
<\/figure>\n\n\n\nSuperhuman Strengths and Human Gaps<\/h2>\n\n\n\n
<\/figure>\n\n\n\nThe Security Dimension<\/h2>\n\n\n\n
<\/figure>\n\n\n\nThe Neutrality Myth<\/h2>\n\n\n\n
<\/figure>\n\n\n\n