South Africa’s now-withdrawn draft artificial intelligence policy should be corrected, republished and used to regain momentum rather than becoming another casualty of government embarrassment, say AI experts, including some of those snubbed in the drafting of the document now caught up in a fictitious-source scandal.
Dirk Brand, a legal consultant and part-time lecturer at Stellenbosch University’s School of Public Leadership, who has published research on responsible AI in government, tells Currency that the draft’s now-notorious fake references were a serious failure, but not a reason to abandon the policy process.
“Somebody tried to finish the document and then put it through an LLM and came up with something which they didn’t check,” he says, comparing the incident to cases in which lawyers internationally have been criticised by courts for submitting AI-generated material containing non-existent legal authorities.
Brand’s assessment comes after DA communications minister Solly Malatsi withdrew the Draft National Artificial Intelligence Policy this week following confirmation that its reference list contained fictitious sources apparently generated by AI.
The draft, gazetted for public comment in April after Cabinet approval, had been intended to position South Africa as a continental leader in AI innovation while dealing with ethical, social and economic risks. It proposed new institutions including a National AI Commission, an AI Ethics Board and an AI Regulatory Authority, and incentives including tax breaks, grants and subsidies to encourage private-sector collaboration.
Schadenfreude
The snafu has drawn howls of schadenfreude from DA critics, civil society and academics snubbed in the drafting of the policy, including a group of about 70 leading South African AI experts who had been working with the Department of Science, Technology and Innovation (DSTI) on an AI policy, according to MyBroadband.
Asked whether the likely sequence was that the authors wrote the policy first and then asked AI to supply academic support for its propositions, Brand said: “That could well be an explanation of what happened.”
“In many cases, people play around with a tool like GPT because it’s accessible,” says Brand, but they may not understand data quality, false outputs and the need to check results. “So that’s extremely risky.”
The irony is especially sharp: a policy intended to regulate and guide the use of AI in South Africa was itself damaged by what appears to have been careless AI use.
Brand’s criticism, however, is measured. He says the substance of the policy was “quite balanced” and “quite useful”, even though the fake references were unfortunate.
Institutional sprawl
His main policy criticism is institutional sprawl. The draft proposed several new AI-related bodies, and Brand says: “There are perhaps too many institutions that they want to create.”
Some functions, he argues, could be combined. He also says the role of the Information Regulator was not sufficiently examined in the draft, particularly given the overlap between AI governance, data protection and existing privacy law.
The draft made a deliberate choice in favour of a sectoral approach rather than a broad horizontal AI law like the European Union’s model. South Africa, he says, could in principle make either model work, depending on how existing legal mechanisms are used.
Brand also accepts the criticism that the draft did not sufficiently project a positive national vision for AI.
That concern overlaps with a sharply worded open letter by technology investor Stafford Masie, published by TechCentral before the withdrawal. Masie argued that the draft was fundamentally misordered because it proposed multiple governance bodies before South Africa had committed serious resources to compute infrastructure, energy, incentives or the basic ecosystem required for AI companies to build locally.
Masie’s central line was blunt: “South Africa will not regulate its way into the AI economy. It must build its way into it.”
‘Get on with it’
Masie’s argument is that governance without an ecosystem to govern produces bureaucracy, not AI capacity. South African AI start-ups, he said, struggle less because of ethical ambiguity than because of the lack of affordable GPU time, venture funding and infrastructure, and because other countries already offer clearer incentives, compute credits and co-investment.
Masie also warned that South Africa was missing a time-limited opportunity created by its improved electricity position and global demand for AI data-centre capacity.
Asked what government should do now, Brand says: “The next step would be to issue it afresh, but the correct version.”
He believes disciplinary action should be considered against those involved in producing the final document because the error was “unacceptable”. But he also stresses that the public attention created by the mistake could be turned to the state’s advantage.
“The mistake created a lot of attention,” he says. “Use that to benefit the process and publish the correct report as soon as possible and get on with it.”
The citation scandal has exposed exactly the problem the policy was supposed to address: AI is already being used inside institutions without adequate rules, competence or verification. But it has also given South Africa a rare chance to improve the draft in public, with sharper academic input, a more disciplined institutional design and a clearer emphasis on building the AI economy rather than merely supervising its absence.
The risk is not only that South Africa produces a bad AI policy. It is that, after the embarrassment, the country produces no usable AI policy at all.
Top image: Photo by Gallo Images/Frennie Shivambu/Rawpixel/Currency collage
