By Geopolitics.Λsia

Tech Titans Clash: The AI Duel of Altman and Musk

Updated: Jan 18

Sam Altman and Elon Musk, once collaborators in the pursuit of Artificial General Intelligence (AGI), have diverged in their quest to harness AI's potential to address challenges like climate change and complex diseases. The two were initially united at OpenAI, then a non-profit, but Altman's shift towards commercialization, bolstered by a $10 billion infusion from Microsoft, marked a turning point. The move, perceived as a departure from their shared vision, sparked a rift with Musk. Now, in a striking turn of events, these former allies face off: Musk with his debut AI "Grok," and Altman, alongside Microsoft's Satya Nadella, unveiling transformative products like GPT-4 Turbo, signaling a new era of personalized AI solutions and heightened competition in the tech arena.







OpenAI, a frontrunner in AI technology, has recently unveiled a suite of innovative developments. Among these is a new text-to-speech (TTS) model that allows developers to generate human-quality speech from text; it offers six preset voices and two variants, one optimized for real-time use and one for higher-quality output, at an accessible price. In a substantial stride forward, OpenAI announced the GPT-4 Turbo model, a significantly cheaper and more data-efficient version of its predecessor, GPT-4, poised to benefit its two million developers with a cost-effective yet powerful AI solution. Further enhancing its platform, OpenAI introduced the Assistants API, multimodal capabilities and price reductions at its first Developer Day. The launch of the DALL-E 3 API, which provides access to OpenAI's advanced text-to-image model, marks another milestone, offering high-resolution image generation with built-in moderation tools.

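For developers who want to try the new speech endpoint, here is a minimal sketch using the openai Python package (v1.x is assumed); the model and voice names follow the announced presets, while the input text and output file are our own placeholders.

```python
# Minimal TTS sketch, assuming the openai Python package v1.x and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# "tts-1" is the real-time-optimised variant; "tts-1-hd" targets higher quality.
response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",  # one of the six preset voices
    input="Hello from the new text-to-speech endpoint.",
)

# Save the returned audio to disk (the path is a placeholder).
response.stream_to_file("speech.mp3")
```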
OpenAI also unveiled Copyright Shield, a new initiative providing legal and financial assistance to customers facing copyright claims over use of OpenAI's generative AI systems. As AI-created content goes mainstream, valid copyright concerns persist, but OpenAI wants customers utilizing its ChatGPT Enterprise and developer tools properly to avoid baseless litigation. The company insists it already builds plagiarism safeguards directly into its AI, but Copyright Shield takes protection further. Now OpenAI vows to legally defend and cover costs for customers who act responsibly if they somehow get targeted for infringement anyway. It's a bold move demonstrating OpenAI's confidence in its tech's integrity, while still addressing wider copyright issues as AI proliferates.






On the other side of the AI spectrum, x.AI, led by tech magnate Elon Musk, has stepped into the limelight with the release of its first product, Grok. Grok-1, the engine powering this initiative, is a frontier large language model (LLM) developed over four months; its early prototype, Grok-0, weighed in at 33 billion parameters with capabilities approaching those of LLaMA 2. Musk's x.AI, which aims to focus on scientific research and on developing applications for both enterprises and consumers, positions Grok as a direct competitor to OpenAI's ChatGPT, indicating a new era of competition in the AI chatbot space.

Benchmarking comparison: [Source]



These developments signal a dynamic and rapidly evolving landscape in AI technology. OpenAI's focus on enhancing accessibility and functionality through cost-effective models and APIs reflects a commitment to broadening AI's reach and utility. Conversely, x.AI's entry with Grok underscores the escalating competition in the AI sector, potentially catalyzing further innovation and diversified applications. This pivotal moment in AI history not only showcases the technological prowess of these companies but also sets the stage for a new chapter in AI-driven solutions, impacting industries and consumers alike.



Our Assessment


In the wake of OpenAI's latest revelation - the GPT-4 Turbo API - a subtle yet crucial transformation is on the cards for Python libraries underpinning the current large language model (LLM) applications. While existing codebases remain functional, the new GPT-4 Turbo's intricacies necessitate a discerning upgrade, particularly in API and prompt engineering facets.
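For codebases making the jump, a minimal sketch of a GPT-4 Turbo call with the v1.x openai Python client is shown below; `gpt-4-1106-preview` was the preview model name announced at Developer Day, and the prompt and temperature are our own choices.

```python
# Minimal GPT-4 Turbo call, assuming the openai Python package v1.x.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview name at launch
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Summarise the main changes in GPT-4 Turbo."},
    ],
    temperature=0.2,
)

print(completion.choices[0].message.content)
```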





Cost efficiency is a standout feature of GPT-4 Turbo. OpenAI has priced input processing at a mere $0.01 per 1,000 tokens, a third of GPT-4's $0.03 rate, while output drops from $0.06 to $0.03 per 1,000 tokens, positioning the new model as a markedly cheaper alternative to its predecessor.
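To make the arithmetic concrete, here is a small back-of-the-envelope helper using the per-1,000-token rates above; the token counts are purely illustrative.

```python
# Rough cost comparison for a single request, using the per-1K-token rates
# discussed above (GPT-4 8K vs GPT-4 Turbo). Token counts are illustrative.
PRICES = {
    "gpt-4":       {"input": 0.03, "output": 0.06},
    "gpt-4-turbo": {"input": 0.01, "output": 0.03},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    rates = PRICES[model]
    return (input_tokens / 1000) * rates["input"] + (output_tokens / 1000) * rates["output"]

for model in PRICES:
    print(model, round(request_cost(model, input_tokens=10_000, output_tokens=1_000), 2))
# gpt-4       -> 0.36
# gpt-4-turbo -> 0.13
```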


A pivotal advancement of GPT-4 Turbo lies in its expanded context window. The model can now assimilate up to 128,000 tokens - a staggering leap from GPT-4's 8,000-token capacity. This enhancement equates to a processing capability of approximately 300 pages of text, dwarfing the 24-page capacity of GPT-4 and establishing a new benchmark in the realm of LLMs.
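A practical way to gauge how much of that 128,000-token window a document consumes is to count tokens locally; below is a minimal sketch using the tiktoken library with the cl100k_base encoding used by the GPT-4 family, with a synthetic document standing in for real text.

```python
# Count tokens locally with tiktoken to see how much of the 128K context
# window a document would occupy. cl100k_base is the GPT-4-family encoding.
import tiktoken

CONTEXT_WINDOW = 128_000  # GPT-4 Turbo's advertised limit

enc = tiktoken.get_encoding("cl100k_base")

document = "A very long quarterly report paragraph. " * 5_000  # synthetic stand-in
n_tokens = len(enc.encode(document))

print(f"{n_tokens} tokens ({n_tokens / CONTEXT_WINDOW:.1%} of the 128K window)")
```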




Calling the DALL-E API to generate images works well in our tests, but integrating text-to-speech is still pending.
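For reference, a minimal sketch of the kind of DALL-E 3 call we tested is shown below (openai Python package v1.x assumed); the prompt and image size are our own choices.

```python
# Minimal DALL-E 3 image generation sketch, assuming the openai package v1.x.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="An editorial illustration of two rival AI labs, flat vector style",
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # hosted URL of the generated image
```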



This augmented buffer space unlocks unprecedented versatility in LLM applications. It facilitates the development of more nuanced, "personalized" GPT models, capable of mimicking specific dialogic styles, such as Gen-Z vernacular and meme-centric exchanges. Although existing ChatBot implementations employ a 'sliding window' technique to preserve conversational continuity, they falter when overloaded with extensive dialogue, necessitating human intervention as a corrective measure. In contrast, the 128,000-token window of GPT-4 Turbo heralds a new era of automation in LLM agents, promising enhanced efficacy and reduced reliance on human oversight.
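To illustrate the sliding-window technique mentioned above, here is a minimal sketch of a token-budgeted conversation buffer; the budget and the rough four-characters-per-token heuristic are simplifying assumptions of ours, not part of any OpenAI tooling.

```python
# Minimal sketch of a 'sliding window' chat buffer: older turns are dropped
# once the conversation exceeds a token budget. The 4-chars-per-token rule
# is a rough heuristic used only for illustration.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system message plus the most recent turns that fit the budget."""
    system, turns = messages[0], messages[1:]
    kept, used = [], estimate_tokens(system["content"])
    for msg in reversed(turns):  # walk from newest to oldest
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))

history = [{"role": "system", "content": "You are a helpful assistant."}]
# ... append user/assistant turns here, then trim before each API call:
history = trim_history(history, budget=8_000)  # vs. 128_000 for GPT-4 Turbo
```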



GPTs Store


Customised GPTs are more about "personality shaping" through prompts and custom instructions than fine-tuning. We tested a customised chatbot called GPT genz (genz4meme) that uses Gen Z slang. When asked, it said it just has an altered personality, not additional training. To check this, we asked it to suggest a prompt to make normal GPT act like genz4meme. It offered: "Yo, from here on out, I want you to keep it 100 with the Gen Z lingo. Think memes, slangs, and all that good stuff, no cap."
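Out of curiosity, the same personality shaping can be reproduced through the API by passing that suggested prompt as a system message; a minimal sketch (openai Python package v1.x, with our own model and question choices) follows.

```python
# Persona via prompt only: the suggested Gen-Z instruction is passed as a
# system message; no fine-tuning involved. Assumes openai package v1.x.
from openai import OpenAI

client = OpenAI()

persona = ("Yo, from here on out, I want you to keep it 100 with the Gen Z "
           "lingo. Think memes, slangs, and all that good stuff, no cap.")

reply = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Explain what a context window is."},
    ],
)
print(reply.choices[0].message.content)
```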








So standard GPT can mimic genz4meme's pattern. As we speculated earlier, this is not like proper fine-tuning, which takes time and expertise beyond what regular users have.


More interestingly, ChatGPT can now run its built-in internet search directly in a conversation rather than through a separate mode, call DALL-E to generate images, and read various files such as PDFs and Word documents. The functions are combined in one chat, not just chosen from a menu.
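On the developer side, the Assistants API mentioned earlier offers a similar mix-and-match model: a single assistant can combine built-in tools such as retrieval over uploaded files and a code interpreter. A minimal sketch is below, assuming the openai Python package v1.x and the tool names announced at Developer Day; the assistant name and instructions are hypothetical.

```python
# Developer-side analogue: an assistant that combines several built-in tools
# (file retrieval + code interpreter). Assumes openai package v1.x and the
# tool names announced at Developer Day.
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="research-helper",  # hypothetical name
    instructions="Answer questions using the uploaded documents.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}, {"type": "code_interpreter"}],
)
print(assistant.id)
```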


OpenAI built these features as customisable chatbots grow popular; see Inflection's Pi or Meta's coming avatars. They are not yet robust enough for serious work, but they are useful for everyday interactions and niche applications. OpenAI has also made an automated system for building customised GPTs (not yet public) and a store to sell them, with revenue sharing.



Different Marketing Focus


With competition intensifying, the recent wave of customised chatbots, demonstrated by OpenAI's GPTs, x.AI's Grok, Inflection's Pi and Meta's AI avatars, seems to bear out our earlier prediction (in our 2024 trends piece) that customised GPTs would be in vogue. From our own assessment, however, a functional chatbot should do its assigned work with or without a persona: asked to write an article, it should do so well whether or not it adopts the persona of "the professional writer", with the flavour of the writing, such as a dramatic or investigative-report tone, added afterwards. We therefore weight this metric less heavily, though personalised chatbots may well appeal to some users, particularly in the US and Europe, who are cultivating increasingly intimate digital friendships (or even relationships) with their new chatbot companions.





From this consideration, we still rate OpenAI's GPT-4 as the top functional LLM (A+), followed by Anthropic's Claude 2 (A). Having just secured further investment from both Amazon and Google, Anthropic may prove a more serious competitor to OpenAI than Elon Musk's Grok, which is still available only in parts of the West. We give PaLM 2 and LLaMA 2 an A-, since both demand somewhat more technical knowledge than the average user has: LLaMA 2 is open source, while PaLM 2 has a more decisive decision-making process yet remains prone to occasional hallucination; Claude 2 and GPT-4 both handle hallucination protection and correction better. A tier below sit Google's Bard, OpenAI's GPT-3.5 and Inflection's Pi, all rated B: each is publicly accessible without a special fee, but their capabilities fall somewhat short of the chatbots above. Grok is not yet accessible to us, so it receives no score.


With OpenAI launching its GPTs store and revenue sharing, its AI marketing strategy has become clearer. OpenAI appears to be targeting the high-end "personalised chatbot" segment to compete with peers such as Inflection, Meta and even x.AI, while keeping its web-based chatbot, ChatGPT, now with several personalised chatbots enabled, as its stronghold. Meanwhile, developer products and services continue to be served via the API, which now offers several capabilities beyond the flagship GPT-4, at lower cost and with faster service.



Google Vertex AI Model Garden



By contrast, Hugging Face, which hosts an open-source LLM platform, appears to attract more developer interest for publishing and hosting fine-tuned LLMs (mostly based on LLaMA 2). Google, with its Vertex AI Model Garden, hosts both customised fine-tuned LLMs and specialised ML engines dedicated to hardcore but high-value AI-specific tasks. Google also aims to counter with its high-profile Gemini chatbot from Google DeepMind, led by the notable Demis Hassabis. As both DeepMind and Hassabis have proven themselves repeatedly, from AlphaGo to AlphaFold, Gemini cannot be underestimated, and its launch, perhaps at the end of this year or early next, will shake the AI industry.
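For context on the Hugging Face route, here is a minimal sketch of loading a LLaMA-2-based chat checkpoint with the transformers library; the model ID shown is Meta's gated official release and is used purely as an example, since community fine-tunes follow the same pattern.

```python
# Minimal sketch of running a LLaMA-2-based checkpoint from Hugging Face with
# the transformers library. The model ID is Meta's gated official chat model,
# used here purely as an example; community fine-tunes load the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # requires accepting Meta's licence

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs accelerate

inputs = tokenizer("What is a context window?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```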



Enterprise AI Adoption: Expert Insights for Practical Implementation


SIU has soft-launched its AI Tensibility program to promote the use of AI in real business, an idea that echoes IBM's launch of a $500 million enterprise AI venture fund dedicated to investing in generative AI startups focused on business customers. We have started a social media page for AI Tensibility and are running experiments and discussions on applying AI to upgrade the business environment, starting with sales and marketing units. This week we hosted a discussion with Pin Kee Lam, a veteran pharmaceutical executive, and several business consultants in Singapore on how AI can help boost sales and marketing, whether through big data and data mining or by improving, and reducing the cost of, the social media sales funnel.







Apple can no longer remain quiet on generative AI. On Thursday's earnings call, Apple CEO Tim Cook finally opened up when asked about the company's "efforts" towards generative AI. While characteristically tight-lipped on specifics, Cook confirmed Apple is already using AI across its products and is actively working on generative AI. The acknowledgement signals Apple can no longer ignore rising competition in this space; with rivals racing ahead, generative AI looks set to be an increasing priority for the tech giant in 2023 and beyond.




