The Gates Foundation’s “AI initiative” is drawing scrutiny, and criticism, from a variety of quarters. Now a trio of academics has offered its take on the controversial push into using AI to supposedly advance “global health.” What seems to have prompted this particular reaction – authored by researchers from the University of Vermont, Oxford University, and the University of Cape Town – was an announcement in early August, says author Dee Rankovic.
The Gates Foundation at that time let the world know that it was embarking on a new scheme, worth $5 million, set to bankroll 48 projects tasked with implementing AI large language models (LLMs) “in low-income and middle-income countries to improve the livelihood and well-being of communities globally.”
Every time – and it has been many times now – that the Foundation chooses to present itself as the “benefactor” of “low or middle income countries” (i.e., underdeveloped ones with little recourse to protect themselves from many things, including Bill Gates’ apparent “savior” complex), it leaves observers critical of the organization and its founder’s “experiments” – and feeling somewhat, if not deeply, ill at ease.
But feelings are one thing and scientific facts, hopefully, another. The paper, the gist of which is available in an article, asks the question: is the Gates Foundation trying to “leapfrog global health inequalities?”
Well, as they would say in the American south – is a frog’s… anatomy watertight?
But in scientific language: the initiative announced on August 9 is highly likely yet another Gates project that makes all the right promises – improving the lives and well-being of people around the world, particularly the poor or those verging on poverty (and therefore obviously extra vulnerable, particularly to questionable “altruism”) – while the results might turn out very different.
The study does not mince words here. From a related article:
“There are at least three reasons to believe that the unfettered imposition of these tools into already fragile and fragmented healthcare delivery systems risks doing far more harm than good.”
The research then breaks this down, starting with the very nature of “AI,” i.e., machine learning. “If you feed biased or low-quality data into a machine that supposedly ‘learns,’ out comes the reproduction thereof, perhaps even worse than before,” is how the authors put it.
So then – if we are to believe, as many scholars and activists do, that “the world and its governing political economy is structurally racist,” what could be expected as the outcome of “AI” learning from that particular huge dataset?
And then there is another reason “to oppose the careless deployment of AI in global health,” according to the paper: “the near complete absence of real, democratic regulation and control – an issue that is applicable to global health more broadly.”
You wouldn’t necessarily expect scientists to cut this deep, but here they are: “At the end of the day, the hard, sharp edges of capital, command and control are in the hands of a very few entities and individuals, notably including the conflictingly interested Microsoft corporation itself, which has invested more than US$10 billion in OpenAI.”
How do you say, “mic drop” – in sciencespeak?