Cookie Consent by Free Privacy Policy Generator website
top of page
Search
Writer's pictureGeopolitics.Λsia

Global AI Safety Summit Fans the Flames of Already Scorching Attention on AI Governance

Updated: Jan 18

As the AI Safety Summit unfolds at Bletchley Park, hosted by the UK government on November 1st and 2nd, the world watches intently. It's worth noting that just a day before the summit, U.S. President Joe Biden and Vice President Kamala Harris jointly announced an executive order focused on Safe, Secure, and Trustworthy Artificial Intelligence. The order mandates that the development of new AI models be reported to the government for oversight, marking a push by the U.S. for global leadership in AI governance.



King Charles's Speech on AI Safety, [source]



Adding gravitas to this already significant event was an unexpected speech by King Charles. As previously summarized, the King underscored the monumental scale of AI, called for urgent international collaboration, and stressed collective responsibility in addressing both the technology's risks and rewards. The rest of this column will delve into global movements in the AI industry and noteworthy AI research findings.



The King's Speech


When a member of the British royal family takes the podium at a tech summit, eyebrows inevitably raise. However, King Charles’ surprise address at the inaugural global AI summit hosted by the UK wasn't merely ceremonial. His remarks carried a clear message: artificial intelligence (AI) is a discovery as transformative as fire or the wheel, and we must grapple with its risks as aggressively as its opportunities.


During his speech broadcasting at Bletchley Park, often considered the birthplace of modern computing, King Charles offered a clarion call. He urged global leaders and stakeholders to approach the issue with "urgency, unity, and collective strength." Drawing parallels between the international approach to climate change and the requisite global tack for AI governance, the King emphasized the shared responsibility to mitigate risks.




This isn't mere palace talk; it has material consequences for the technology sector and public governance. King Charles’ endorsement amplifies the growing sentiment that AI, despite its potential to cure diseases or make strides in clean energy, could also present existential risks that demand immediate action.


His comments come at a time when the summit itself, featuring representatives from 27 countries, has agreed upon the "Bletchley declaration on AI safety." This new accord, hailed as a "landmark achievement" by Technology Secretary Michelle Donelan, adds a layer of bureaucratic imperative to the global conversation on AI.



A Crucible for Contested Visions of AI's Future


Amid a feverish debate within the AI research community, the UK's AI Safety Summit couldn't be timelier. On one side of the intellectual ring, you have figures like DeepMind co-founder Shane Legg, who in an interview with tech podcaster Dwarkesh Patel, reasserted his belief that humanity has a 50-50 shot at achieving artificial general intelligence (AGI). This stance harks back to his blog post from late 2011. Contrarily, AI luminaries like Yann Lecun, head of AI research at Meta (formerly Facebook), argue that AI's "causal inference" hasn't yet matched reality. Lecun contends that AGI remains a distant dream, and that open sourcing AI is the most effective way to ensure safety.





The summit is not a talking shop for academics alone. It has drawn a high-caliber roster including Chinese researchers—whose contributions to AI have lately been both prolific and impactful—as well as industry magnates like Elon Musk and political figures such as U.S. Vice President Kamala Harris.

Speaking of Harris, she is slated to present new initiatives on AI safety. As part of the Biden administration's efforts to lead in AI regulation, she will announce the establishment of a new AI Safety Institute. This institute will be tasked with developing evaluations known as "red teaming" to assess the risks of AI systems. Harris is also expected to release draft regulations governing federal workers' use of AI, a move with potentially broad ramifications for Silicon Valley.


It's clear that the Biden administration is eager to assert its regulatory prowess. While other legislatures—most notably the European Union—are working quickly to craft their AI bills, the U.S. is leveraging this summit to showcase its own vision for AI's future.


What we're witnessing here is more than a clash of titans; it's a crucial dialogue shaping the path of one of the most transformative technologies of our time. The stakes? Nothing less than how we'll navigate the myriad opportunities and existential risks that AI brings to the global stage.






As governments worldwide increasingly advocate for the integration of AI into various facets of public affairs, Thailand is no exception. Today, Thai Prime Minister Srettha Tavisin took to his X account (the platform that has supplanted Twitter) to tease discussions he's had with both Google and Microsoft. The topic? A Memorandum of Understanding (MOU) aimed at fostering AI usage in Thailand, with a particular focus on upskilling and reskilling initiatives. The formal announcement is slated for this month's APEC meeting.



AI Industrial and Research Community Movement


DeepMind's AlphaFold and the Quantum Leap in Drug Discovery


DeepMind's recent update to AlphaFold 2 has turned heads in the scientific community. The software can now predict 3D structures for nearly all cataloged molecules in biology, including DNA, RNA, and small molecules. This technological advancement offers far more than mere incremental change—it propels our understanding of complex biological systems like protein interactions and cell signaling pathways into a new paradigm. In doing so, AlphaFold catalyzes accelerated progress in disease research, synthetic biology, and drug development.






Shiksha Copilot: Microsoft's Answer to Personalized Education in India


Microsoft Research is working with teachers in India on an AI teaching assistant named Shiksha Copilot, which is designed to generate personalized lesson plans. The tool ingests a myriad of content types and aligns them with curriculum goals, effectively reducing teachers' daily planning time from an arduous 60-90 minutes to a mere minute and a half. While it’s early days, Shiksha seems poised to redefine the teaching paradigm, moving us from a one-size-fits-all model to a future of personalized education.


President Biden's AI Directive: A Step Forward, Yet a Step Short


President Biden's newly issued regulations on AI systems may constitute the first of their kind, but they don't go the full mile. While the directive does establish testing requirements for advanced AI models and addresses anti-discrimination measures, critics argue that it lacks robust enforcement mechanisms and fails to deal with pressing issues like disinformation.


OpenAI's ChatGPT: The Swiss Army Knife of AI Tools


OpenAI's ChatGPT is becoming an increasingly versatile tool. A recent update allows users to toggle between browsing, DALL-E 3, and data analysis features—all within a single interface. This comes just before OpenAI’s highly anticipated Dev Day, where further major announcements are expected. While the update is trickling out to premium users, code spelunkers have noted references to a mysterious "GPT-4 Magic Create," fueling speculation about what might come next.


Boston Dynamics' Spot: The Gift of Gab


In a new twist, Boston Dynamics has equipped its four-legged robot, Spot, with a voice enabled by ChatGPT. The robot can now provide tours of its facility, responding to questions and narrating the experience in various personalities and accents. This represents another stride in making AI not just functional, but also engaging and relatable.


Advancements in Offline Reinforcement Learning


Ruizhe Shi and team have introduced a framework called Language Models for Motion Control (LaMo), aiming to optimize offline RL in scenarios where data collection is limited. The method demonstrates superior performance, particularly in sparse-reward tasks, and offers a promising avenue for further development in the RL domain.


The Ongoing Battle of Neural Network Backbones


Micah Goldblum and colleagues provide a comprehensive benchmarking analysis of pretrained models in computer vision. Their study, aptly named "Battle of the Backbones," guides practitioners in making informed choices among a plethora of models. While vision transformers and self-supervised learning are making inroads, the study finds that traditional convolutional neural networks still hold their ground in most tasks.



SIU's First AI Training Class


By any metric, the recent debut of our AI training course was nothing short of a triumph. Designed to provide an immersive, offline experience, the curriculum enabled attendees to grapple hands-on with cutting-edge AI technologies, from ChatGPT, PaLM2, Google Bard, Claude2 to LLaMa2. Far from a lecture-heavy slog, the course was engineered to foster dynamic conversations and shared insights among its participants.





One particularly compelling facet was the exposure to multiple AI engines, including both GPT-3.5 and GPT-4. This feature offered attendees a comparative lens through which to examine various systems, dissecting their capabilities to discern their individual strengths and limitations.


Throughout the course, an intriguing pattern emerged—the inconsistency in AI outputs when subjected to identical prompts. This underlined the imperative for nuanced approaches such as prompt engineering and the triangulation of outputs across multiple AI platforms, thereby mitigating pitfalls like hallucinatory responses.


The course didn't just offer technical skill-building; it also acted as a nexus for the local community of professionals with a vested interest in emerging technologies. Post-course survey results spoke volumes: a unanimous recommendation rate and over 90% of attendees reporting a marked increase in their preparedness to integrate AI tools into their professional lives.


Given this initial success, our vision for the program's future is expansive. The overwhelmingly positive response underscores a burgeoning appetite for practical AI education, a hunger exacerbated by the breakneck speed of technological advancements. Moving forward, we intend to weave in themes from the emerging field of "Computational Social Science," formalized just last year, into our course offerings. As in-person events regain momentum, we are perfectly positioned to bridge a critical gap, equipping technically oriented professionals with a well-rounded skill set that marries AI proficiencies with traditional social science perspectives. In an era steered increasingly by artificial intelligence, this fusion of disciplines doesn't just prepare today's leaders and executives—it makes them.






Yorumlar


bottom of page