The Silicon Curtain Descends on SB 1047

By MIKE MAGEE

Whether you’re talking health, environment, technology or politics, the common denominator these days appears to be information.  And the injection of AI, not surprisingly, has managed to reinforce our worst fears about information overload and misinformation. As the “godfather of AI”, Geoffrey Hinton, confessed as he left Google after a decade of leading their AI effort, “It is hard to see how you can prevent the bad actors from using AI for bad things.”

Hinton is a 75-year-old British expatriate who has been around the world. In 1972, as a graduate student at the University of Edinburgh, he began working with the neural networks that are today the foundation of AI. Mathematics and computer science were his life, but they co-existed alongside a well-evolved social conscience, which led him to abandon a 1980s post at Carnegie Mellon rather than accept Pentagon funding with a possible endpoint that included “robotic soldiers.”

Four years later, in 2013, he was comfortably resettled at the University of Toronto, where he managed to create a computer neural network able to teach itself image identification by analyzing data over and over again. That caught Google’s eye and made Hinton $44 million richer overnight. It also won him the Turing Award, the “Nobel Prize of Computing,” in 2018. But on May 1, 2023, he unceremoniously quit over a range of safety concerns.

He didn’t go quietly. Hinton went on to take the lead in signing a public statement by scientists that read, “We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure.” The statement was part of an effort to encourage Governor Newsom of California to sign SB 1047, which the California Legislature had passed to codify regulations the industry had already pledged to pursue voluntarily. They failed, but more on that in a moment.

At the time of his resignation from Google, Hinton didn’t mince words. In an interview with the BBC, he described generative AI as “quite scary…This is just a kind of worst-case scenario, kind of a nightmare scenario.”

Hinton has a knack for explaining complex mathematical and computer concepts in simple terms.

As he said to the BBC in 2023, “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have. We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world. And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
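To make that comparison concrete, here is a toy sketch (my illustration, not Hinton’s or Google’s actual code): two copies of the same tiny one-parameter model start from identical weights, learn from different examples, and then pool what they learned by averaging their weights. Real systems share gradients or weights across thousands of copies at enormous scale; the averaging step below is only a crude stand-in for that machinery.

```python
# Toy illustration of Hinton's point: digital "copies" of the same model can
# learn separately and then share what they learned by merging their weights.
# Deliberately simplified; real large-scale training synchronizes gradients
# or weights across many copies continuously.

def train_step(weight, x, target, lr=0.02):
    """One gradient-descent step for the one-parameter model y = weight * x,
    using squared error as the loss."""
    prediction = weight * x
    gradient = 2 * (prediction - target) * x
    return weight - lr * gradient

# Both copies start from the identical set of weights.
shared_start = 0.0
copy_a = copy_b = shared_start

# Each copy learns separately from its own examples of the hidden rule y = 3x.
for x, y in [(1, 3), (2, 6)]:
    copy_a = train_step(copy_a, x, y)
for x, y in [(3, 9), (4, 12)]:
    copy_b = train_step(copy_b, x, y)

# "Sharing knowledge instantly": merge the copies by averaging their weights,
# so each copy now benefits from examples it never saw itself.
merged = (copy_a + copy_b) / 2
print(f"copy A: {copy_a:.2f}, copy B: {copy_b:.2f}, merged: {merged:.2f}")
```

Two humans cannot merge what they know this way, which is Hinton’s point about why 10,000 digital copies can collectively know far more than any one person.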

Hinton’s report card in 2023 placed humans ahead of machines, but not by much. “Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”

This week, Gov. Gavin Newsom sided with venture capitalists and industry powerhouses, and against Hinton and his colleagues, declining to sign the AI safety legislation, SB 1047. His official statement read, “I do not believe this is the best approach to protecting the public.” Most believe his chief concern was losing the support and presence of the information technology corporations (32 of the world’s 50 largest AI companies are based in California) to another state should the regulatory environment become hostile.

Still, Newsom, along with everyone else, knows the clock is ticking as generative AI grows more capable of reasoning, and potentially of sentience, day by day. Guardrails are a given, and will eventually likely resemble the European Union’s A.I. Act, with its mandated transparency platform.

That emphasis on transparency and guardrails has now popularized the term “Silicon Curtain” and drawn the attention of world experts in human communication like Yuval Noah Harari, author of the 2011 classic “Sapiens,” which has sold 25 million copies. In his newest book, Nexus, Harari makes a strong case that the true difference between the democracy of Biden/Harris and the dictatorship that appears to be the destination of choice for Trump is “how they handle information.”

According to Harari, one form of governance favors “transparent information networks” and self-correcting “conversations and mutuality,” while the other focuses on “controlling data” and undermining its “truth value,” preferring subjects who exhibit “blind, disenfranchised subservience.”

And AI? According to Harari, democratic societies retain the capacity to control the dark side of AI, but they cannot allow tech companies and elite financiers to regulate themselves. Harari sees a “Silicon Curtain” fast descending, and a near future in which humans are outpaced and shut out by the algorithms we have created and unwittingly released.

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside America’s Medical Industrial Complex. (Grove/2020)