Supremacy: AI, ChatGPT, and the Race That Will Change the World
2025/11/21 05:32

Writer:

Parmy Olson is a British technology journalist and author, widely recognized for her sharp reporting on the intersection of innovation, corporate power, and society. She is currently a technology columnist for Bloomberg Opinion, where she writes extensively about artificial intelligence, social media, and the challenges of regulating fast-moving technologies. Prior to joining Bloomberg, Olson worked at The Wall Street Journal and spent nearly a decade at Forbes, where she served as bureau chief in both London and San Francisco, covering Silicon Valley’s culture and the rise of global tech giants.

 

Olson is the author of two acclaimed nonfiction books. Her first, We Are Anonymous (2012), is an investigative account of the hacktivist collective Anonymous and its offshoot LulzSec. The book received widespread praise for its gripping narrative and insider-level detail, cementing Olson’s reputation as a reporter who can make complex, opaque subcultures accessible to general readers.

 

Her second book, Supremacy: AI, ChatGPT, and the Race That Will Change the World (2024), examines the fierce competition between OpenAI and DeepMind in the race to build artificial general intelligence (AGI). Through detailed reporting and vivid storytelling, she contrasts the leadership styles of Sam Altman and Demis Hassabis, while also highlighting the risks of unchecked AI development—bias, surveillance, and the absence of robust governance. Supremacy was awarded the 2024 Financial Times and Schroders Business Book of the Year, affirming Olson’s standing as one of the most insightful writers on the social and political consequences of AI.

 

Beyond her books, Olson has broken major stories, including coverage of Facebook’s $19 billion acquisition of WhatsApp, which landed her a Forbes cover story and recognition from SABEW’s Best in Business Awards. With a style that blends narrative flair and investigative rigor, Olson continues to shape public understanding of how powerful technologies are transforming the world.

Story:

Supremacy: AI, ChatGPT, and the Race That Will Change the World (2024) by Bloomberg journalist Parmy Olson is a nonfiction account of the global race to dominate artificial intelligence, particularly generative AI. The book, which won the Financial Times Business Book of the Year Award, traces the rivalry between two tech giants: OpenAI, led by Sam Altman, and DeepMind, led by Demis Hassabis. At stake is the pursuit of artificial general intelligence (AGI), a technology with the potential to reshape economies, politics, and human life.

 

Olson presents a dual narrative of Altman and Hassabis, highlighting their contrasting backgrounds and philosophies. Altman, shaped by Silicon Valley’s entrepreneurial culture, approaches AI with a venture-capital mindset, emphasizing scale, speed, and market impact. Hassabis, a prodigy in neuroscience and gaming from London, views AI as a scientific and ethical challenge, prioritizing careful progress over commercial ambition. Their stories form a human lens through which the broader technological arms race is explored.

 

The book also examines the origins of the Transformer model at Google—an invention that enabled today’s large language models. Olson notes the irony that Google initially applied this breakthrough mostly to advertising optimization, while competitors used it to leap ahead in generative AI, underscoring the tension between innovation and corporate conservatism.

 

Beyond the corporate drama, Olson raises urgent questions about the societal risks of unchecked AI development: algorithmic bias, surveillance, safety trade-offs in pursuit of profit, and the absence of robust governance. She argues that humanity is at a pivotal moment: establishing rules and safeguards now could ensure AI serves the public good, but failure to act could entrench power imbalances and unleash destabilizing consequences.

 

Ultimately, Supremacy is both a gripping story of competition and a cautionary call for responsible stewardship of one of the most transformative technologies of our time.

Conclusion: The story of how two men dreamed of building superintelligent machines, became rivals, and were ultimately drawn into the competition among the major tech companies.

1. Founding & Origins

 

DeepMind: Founded in 2010 in London by Demis Hassabis, Shane Legg, and Mustafa Suleyman. It was acquired by Google in 2014 (later under Alphabet). From the start, DeepMind positioned itself as a research-first AI lab, focused on using machine learning to solve science and health challenges, not just commercial tasks.

 

OpenAI: Founded in 2015 in San Francisco by Sam Altman, Elon Musk, Greg Brockman, and others. Initially a nonprofit, it later transitioned into a “capped-profit” model to attract funding while maintaining a mission to build artificial general intelligence (AGI) that benefits humanity.

 

2. Mission & Philosophy

 

DeepMind: Emphasizes scientific discovery and safety. Its vision is to build AI that can advance science and solve global challenges (e.g., protein folding, energy efficiency). DeepMind tends to publish research widely and stresses careful progress.

 

OpenAI: Focuses on scaling AI quickly and responsibly. It aims to ensure AGI is developed safely but also accessible to the public, rather than controlled by a few corporations. Its work is more product-oriented, seen in ChatGPT and GPT models, which are widely deployed.

 

3. Breakthroughs & Achievements

 

DeepMind: Famous for AlphaGo, which defeated the world champion in Go; AlphaFold, which solved protein-folding predictions; and cutting-edge reinforcement learning systems. Its impact is strongest in scientific research and AI theory.

 

OpenAI: Known for GPT language models, DALL·E, and ChatGPT, which brought generative AI into the mainstream. OpenAI’s breakthroughs are more visible to the public and have influenced industries, education, and communication worldwide.

Aspect | DeepMind | OpenAI
Focus | Scientific research, AI for long-term breakthroughs | Practical AI applications, widely accessible tools
Notable Achievements | AlphaGo (beat the world champion in Go), AlphaFold (protein structure prediction) | GPT series, ChatGPT, DALL·E
Strengths | Strong in foundational research, scientific impact | Strong in products, user adoption, and real-world influence
Approach | More research lab–style, publishing in top journals | More product- and deployment-oriented, with APIs and apps
Global Impact | Transformative in science and academia | Transformative in everyday life, business, and industry

👉 Summary:

·     If you care about scientific breakthroughs, DeepMind stands out more.

·     If you care about practical use and global visibility, OpenAI is currently more outstanding.

#Three types of people most at risk of being replaced by AI:

1.  Those without initiative – people who only follow instructions and cannot define goals or break down problems.

2.  Those who don’t ask questions – people who fail to interact meaningfully with AI. The key is not just giving commands but asking high-quality questions.

3.  Those unwilling to learn – people who treat AI as a simple answer machine without providing meaningful input.

#The Amish are a Christian community in the U.S., mainly in Pennsylvania, Ohio, and Indiana, known for their simple lifestyle and rejection of modern technology. They live around farming, craftsmanship, and cooperation, using horses and hand tools instead of cars and electricity. Education ends at eighth grade, focusing on practical skills. Community support is central, with neighbors helping in farming and building. Religious services are held in homes, emphasizing simplicity and faith. Though strict, their life brings joy through tradition, family, and spirituality, showing that true happiness can come from values and community rather than technology.

#OpenAI and Microsoft

On December 11, 2024, Apple integrated ChatGPT into iOS 18.2, enhancing the functionality of Siri and writing tools.

#Altman was raised a vegetarian and came out as gay at the age of seventeen. He dated Loopt co-founder Nick Sivo for nine years; they broke up in 2012, shortly after the company was acquired. Altman and software engineer Oliver Mulherin were married in a Jewish ceremony in January 2024. They live in the Russian Hill neighborhood of San Francisco and often spend their weekends in Napa, California. He has a personal interest in AI, nuclear energy, and nuclear technology, and has invested $375 million of his own money in nuclear technology startups.

#Copilot is a great tool that helps programmers write new code faster and work on existing code.

#Why has so much money gone to engineers tinkering on larger AI systems on the pretext of making them safer in the future, and so little to researchers trying to scrutinize them today? The answer partly comes down to the way Silicon Valley became fixated on the most efficient way to do good and the ideas spread by a small group of philosophers at Oxford University, England.

#In the name of helping humanity over the long term, Bankman-Fried was willing to spend up to $8 billion to buy in alongside Musk and stand on the same pedestal as the world’s richest man, framing it as an act of effective altruism.

#Altman openly agreed with senators’ concerns about AI manipulating citizens, violating privacy, and worsening online attention battles. He stressed his willingness to cooperate with Washington, even echoing their frustration with online platforms.

The hearing was seen as a masterclass in defusing political grandstanding, with one senator even suggesting Altman should lead U.S. AI regulation—a role he politely declined. Soon after, Altman toured Europe, meeting top leaders and lobbying to soften the upcoming EU AI Act, partly succeeding.

OpenAI needed regulators to allow continued scaling of large models while keeping training methods secret.

 

#DeepMind and Google

Key Points:

·     DeepMind once saw itself as morally and technically superior in AI.

·     Its reputation declined after failures in the health sector.

·     It shut down its “Applied AI” division, moving away from real-world problem solving.

·     Its research has focused on simulations (games, proteins).

·     OpenAI’s internet-based approach produced more powerful AI, making DeepMind seem shortsighted.

·     Internal doubts arose: “Life isn’t a Rubik’s Cube—you can’t just solve it.”

DeepMind (Google DeepMind) is a British artificial intelligence company, founded in 2010 and acquired by Google in 2014.

#Hassabis was born in North London to a Greek-Cypriot father and a Singaporean-Chinese mother. A chess prodigy, he earned the title of chess master at the age of 13. Among players under the age of 14, Demis’s rating of 2300 was second only to Judit Polgár’s 2335. He only began learning Go at the age of 19.

After the release of ChatGPT, DeepMind was forced to throw itself into building an even better version for Google. Hassabis had taken control of the newly merged Google DeepMind and started overseeing the development of a large language model called Gemini, an AI assistant that used techniques from AlphaGo to excel at strategy and planning. Gemini could process text, “see” images, and reason, which meant it was more capable than Bard, which Google had rushed out and which had been making embarrassing mistakes. But the company was so desperate to get ahead of OpenAI and Microsoft that it also rushed Gemini out just before Christmas 2023 and exaggerated its abilities.

Difference between OpenAI & DeepMind

#As Hassabis was folded more deeply into the bowels of a Big Tech firm, Altman took OpenAI in an even more commercial direction. Altman confirmed that OpenAI was working on GPT-5 and also raising more money. High training costs meant the company was still in the red but on a reasonable course to turn a profit.

※Future problems

#Some predict that in the next few years, 90% of the text and images on the internet will no longer be created by humans; most of the content we see will be generated by artificial intelligence.

#The long-term consequences are difficult to predict. Some economists say that rather than creating wealth for everyone, powerful AI systems could exacerbate inequality. They could also widen the cognitive gap between rich and poor. A popular view among technologists is that when artificial general intelligence truly arrives, it will not exist as an independent intelligent entity but rather as an extension of our minds through neural interfaces. Elon Musk’s brain-computer interface company, Neuralink, is at the forefront of this research, and Musk hopes to one day implant these brain chips in billions of people.

📘 Summary of Supremacy by Parmy Olson

👩‍💼 About the Author

Parmy Olson is a seasoned technology journalist known for her investigative reporting and clear-eyed analysis of Silicon Valley culture. In Supremacy, she brings a journalist’s rigor and a storyteller’s touch to the race for Artificial General Intelligence (AGI), focusing on the people, companies, and ideologies shaping this transformative technology.

Summary of Supremacy (Parmy Olson)

A clear, narrative account of the race to build Artificial General Intelligence (AGI), Supremacy follows the people, firms, and philosophies shaping today’s most powerful AI projects. The book centers on DeepMind (led by Demis Hassabis) and OpenAI (led by Sam Altman), tracing how their differing motivations—scientific discovery versus broad deployment and commercial scale—produced competing strategies, cultures, and risks. Olson chronicles personalities, funding struggles, technical breakthroughs (AlphaGo, AlphaFold, transformers), corporate moves (DeepMind → Google; OpenAI → Microsoft partnership), and the ethical governance efforts that arose in response to real harms and concentrated power.

Key characters and motives

  • Demis Hassabis — chess prodigy, neuroscientist, and game designer who sees AGI as a scientific and quasi‑spiritual quest to explain mind and nature.
  • Sam Altman — organizer and builder who sees AGI as an opportunity to reshape society and generate broad economic change.
  • Mustafa Suleyman, Shane Legg, Ilya Sutskever, Elon Musk, Jaan Tallinn, Nick Bostrom, Eliezer Yudkowsky — contribute ideas, funding, warnings, and institutional pressure, representing a spectrum from techno‑optimism to deep existential caution.
  • Big tech (Google, Microsoft, Meta) — supply scale, data, and compute that accelerate model development while concentrating power.

Technical and institutional timeline (short)

  • Early AI research and fringe AGI ideas moved into mainstream efforts when compute, data, and new architectures (deep learning, recurrent neural networks, transformers, reinforcement learning) scaled.
  • DeepMind emphasized games, simulations, and scientific discovery (AlphaGo, AlphaFold; AlphaFold 2 earned Hassabis a share of the 2024 Nobel Prize in Chemistry).
  • OpenAI emphasized large models and deployment, popularizing generative language and agentic systems.
  • Both approaches produced tools with beneficial uses and harms, prompting internal ethics boards, governance experiments, and external debate.

 

🧠 Two Founders, Two Paths: Hassabis vs Altman

  • Demis Hassabis grew up in London, a chess prodigy and video game designer turned neuroscientist. His personality is deeply scholastic—introspective, methodical, and driven by a desire to understand intelligence itself, a quest he treats as almost spiritual. He built DeepMind with a research-first culture, favoring PhDs and scientific rigor.
  • Sam Altman, raised in the Midwest, is a networker and entrepreneur. He is charismatic and ambitious, and thrives on building connections and scaling ideas. OpenAI reflects his ethos: fast-moving, engineer-driven, and focused on deployment and impact.

Their contrasting personalities shaped their companies:

DeepMind (Hassabis) | OpenAI (Altman)
Scholarly, cautious | Entrepreneurial, bold
PhD-heavy culture | Engineer-heavy culture
Focus on understanding intelligence | Focus on building useful tools
Emphasis on games, science | Emphasis on language, agents

🎯 Why Build AGI? Philosophical Divergence

  • Hassabis sees AGI as a scientific and almost spiritual quest—to decode the nature of intelligence and solve deep problems like protein folding and climate modeling.
  • Altman views AGI as a tool to reshape society, redistribute wealth, and accelerate progress. His vision is more utilitarian and expansive, with a focus on broad deployment and economic transformation.

Both claim altruistic motives, but Olson probes the tension between idealism and power: who gets to decide what “beneficial” AI looks like?

 

🏛️ The Rise of DeepMind and OpenAI

  • DeepMind began in London in 2010, funded by tech investors and thinkers like Elon Musk and Peter Thiel. In 2014, Google acquired DeepMind, promising autonomy and ethical oversight—but tensions emerged over data use and governance. The lab later gained fame with AlphaGo and AlphaFold, showing AI could master games and science.
  • OpenAI launched in 2015 as a nonprofit, with a mission to build safe AGI for humanity. It later pivoted to a “capped-profit” model to attract funding and scale. Microsoft invested billions, integrating OpenAI’s models into its products.

Defining the Capped-Profit Model

At its core, the capped-profit or capped-return model is an innovative approach to organizational structure that occupies the middle ground between traditional nonprofits and for-profit corporations. A capped-profit company can earn profits and attract external investment, as in a for-profit enterprise; however, the profits or returns to investors are limited or “capped” at a predetermined level. Beyond this cap, any additional profits or surpluses are not distributed to investors or shareholders, but instead are channeled toward broader mission-driven goals, public benefit, or reinvestment.
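To make the mechanics concrete, here is a minimal Python sketch with hypothetical numbers (OpenAI’s cap for its earliest investors was reported to be around 100x, but the figures below are purely illustrative) showing how a capped return splits a payout between an investor and the mission-holding entity:

```python
def split_proceeds(invested: float, cap_multiple: float, total_payout: float):
    """Split a payout between a capped investor and the mission-holding entity.

    invested      -- capital the investor put in
    cap_multiple  -- maximum return multiple the investor may receive (the "cap")
    total_payout  -- profits attributable to that investment
    Returns (investor_share, mission_share).
    """
    cap = invested * cap_multiple                    # the most the investor can ever receive
    investor_share = min(total_payout, cap)          # investor is paid up to the cap
    mission_share = total_payout - investor_share    # everything above the cap goes to the mission
    return investor_share, mission_share

# Hypothetical example: a $10M investment with a 100x cap and $1.5B of attributable profit.
investor, mission = split_proceeds(10e6, 100, 1.5e9)
print(f"Investor receives ${investor:,.0f}; mission receives ${mission:,.0f}")
```

In this toy example the investor’s return tops out at $1 billion (100x the $10 million invested), and the remaining $500 million flows to the mission rather than to shareholders.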

 

Both companies now sit inside tech giants—Google and Microsoft—raising questions about independence, transparency, and the concentration of power.

🌍 Utopia vs Dystopia: The Stakes of AGI

Olson explores two competing visions:

  • Utopian: AGI solves climate change, cures disease, and frees humans from drudgery.
  • Dystopian: AGI amplifies surveillance, misinformation, job loss, and even existential risk if misaligned systems pursue harmful goals.

She doesn’t offer easy answers but emphasizes the urgency of governance, transparency, and public engagement. The book invites readers to ask: who benefits, who decides, and what values guide this technology?

🔍 Six Key Takeaways

  1. AGI moved from fringe idea to corporate priority due to breakthroughs in deep learning, data, and compute.
  2. Hassabis and Altman represent two philosophical poles: scientific discovery vs societal transformation.
  3. DeepMind and OpenAI reflect their founders’ personalities, shaping culture, pace, and priorities.
  4. Big tech partnerships (Google, Microsoft) provide scale but raise concerns about centralization and control.
  5. AI already brings benefits and harms—governance efforts lag behind technical progress.
  6. The future could be utopian or dystopian, depending on choices made now about ethics, regulation, and public oversight.

🗣️Discussion questions

1.      The author seems worried about AI’s future. Do you share her concerns, or do you see AI as more helpful than harmful?

I share her concerns. AI is useful now, but its rapid growth may outpace regulation and ethical oversight. Without transparency and public participation, misuse or concentration of power is a real risk. For now, though, the benefits outweigh the harms.

2.      What are some ways AI helps you in daily life—like voice assistants, smart devices, or online recommendations?

·  Recommendation systems: suggesting music, videos, or shopping items.

·  Smart assistants: reminders, alarms, and quick searches.

·  Translation tools: making communication across languages easier.

·  Work support: drafting, summarizing, and generating ideas.

 

3.      Do you know the difference between AI and IoT (Internet of Things)?
    • AI is about machines learning and making decisions.
    • IoT is about devices connected to the internet, like smart thermostats or fitness trackers.

AI is about learning and decision-making (e.g., ChatGPT, image recognition).

IoT is about connecting devices (e.g., smartwatches, smart thermostats). IoT provides the data; AI interprets and acts on it, as in the sketch below.
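A minimal Python sketch of that split (the device, readings, and thresholds are all hypothetical): the first function stands in for an IoT sensor that supplies data, the second for the AI/decision layer that interprets the reading and acts on it.

```python
import random

def read_temperature_sensor() -> float:
    """IoT layer: a hypothetical connected thermostat reporting room temperature in Celsius."""
    return random.uniform(15.0, 30.0)

def decide_hvac_action(temp_c: float, target_c: float = 22.0) -> str:
    """Decision layer: interpret the reading and choose an action.
    A real system might use a learned model of household preferences;
    a simple rule stands in for it here."""
    if temp_c < target_c - 1.0:
        return "heat"
    if temp_c > target_c + 1.0:
        return "cool"
    return "hold"

# The IoT device provides the data; the decision layer interprets and acts on it.
reading = read_temperature_sensor()
print(f"Sensor reports {reading:.1f} degrees C -> action: {decide_hvac_action(reading)}")
```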

 

4.      Do you think companies like Google and Microsoft should be the ones controlling powerful AI? Why or why not?

        I don’t think they should control it entirely. Big companies have the resources to push innovation, but too much concentration leads to monopoly and lack of transparency. Ideally, governments, academia, civil society, and industry should share responsibility.

5.       The book mentions that some people see AGI as a spiritual or philosophical quest. Do you think technology can help us understand life’s big questions?

Technology can give us new perspectives—for example, simulating the brain to explore what “intelligence” means. But ultimate questions of meaning and purpose go beyond technology, touching philosophy, spirituality, and personal values.

 

6.       If AI becomes smarter than humans, what should we do to stay safe? Should there be rules or limits?

·  International agreements (like nuclear treaties).

·  Transparency and auditing of models and decision processes.

·  Mandatory safety tests and risk assessments.

·  Emergency off-switch mechanisms (“kill switch”).

 

7.       Would you want to talk to an AI that mimics a loved one who passed away, like the chatbot mentioned in the book? Why or why not?

Personally, no. While it might provide comfort, it risks creating dependence on a simulation and delaying the natural grieving process. AI can imitate language, but it can’t replace a person’s soul or essence.

8.     If you had to take one small action after this meeting, what would it be—learn more, change a product you use, or talk to someone about AI?

I’d choose to learn more about AI. Understanding its principles and risks is the best way to make informed choices about how to use it and how to discuss it with others.

Conclusion:

1.      Right people, right project, no more, no raw dog

2.      Bold vision, security-enhancing settings, create different algorithms to get there

3.      Holistic design

4.      Software, hardware, holistic design, partnership

5.      Supercomputer

6.      High performance computing technology

7.      No company has all the solutions

8.      TSMC style: open ecosystem, open model, onshore infrastructure, semiconductor leadership

9.      Different kinds of leaders; education is helpful; learn to define the difficult problem and find the solution

10.  Whatever you do, choose the hardest problem. - Lisa Su

11.  Learn the most from the biggest problem. - Lisa Su

12.  Political science is more difficult than the AI problem

13.  Roadmap, collaboration, holistic, productive

14.  Setbacks: be sad for a short time, after that, keep going

15.  I’m A-, everyone is different

Building the Future: The Power of Vision, Collaboration, and Resilience

 

In today’s rapidly changing technological landscape, success requires more than just intelligence or resources — it demands the right people on the right projects. Innovation thrives when individuals are carefully matched to challenges that align with their strengths, creating teams that are focused, disciplined, and purpose-driven. No more chaos, no unnecessary risks — just clarity, direction, and collaboration.

 

Bold Vision, Secure Foundations

 

A bold vision is only as strong as the foundation it stands on. To shape the future, we must design security-enhancing environments and create new algorithms that push the limits of possibility. High-performance computing, supercomputers, and advanced technologies are not merely tools — they are the engines of progress, enabling discoveries that once seemed unreachable.

 

Holistic Design and Open Collaboration

 

True innovation does not happen in isolation. A holistic design philosophy unites software, hardware, and human creativity into one seamless system. This approach mirrors the TSMC model: an open ecosystem where partnerships, transparency, and local infrastructure come together to build global leadership. No single company holds all the solutions — progress emerges when industries collaborate and share knowledge.

 

Leadership Through Learning

 

The world needs different kinds of leaders — those who are not afraid to tackle difficult problems and who understand that education is a lifelong process. As AMD CEO Lisa Su once said, “Whatever you do, choose the hardest problem,” and “You learn the most from the biggest challenges.” Great leadership is not about avoiding failure, but about embracing complexity and finding the courage to move forward after every setback.

 

The Human Element

 

Technology alone is not enough. Behind every algorithm and every chip lies the human spirit — curious, emotional, and resilient. Even in moments of disappointment, we are allowed to be sad, but only for a short time. Then, we rise, continue, and keep building. Because everyone is different — “I’m an A-, and that’s okay.” Our diversity of thought and experience is what fuels creativity and drives progress.

 

Conclusion: A Roadmap for the Future

 

The roadmap forward demands collaboration, holistic thinking, and productivity. It asks us to merge bold vision with secure, open systems; to bridge the gap between hardware and software; and to embrace both human and technological growth.

 

And perhaps, in doing so, we’ll come to realize that while AI may be complex, political science — understanding people — is even harder. Yet it’s precisely that human challenge that makes innovation worth pursuing.

The values of TSMC are integrity, commitment, innovation, and partnership.
TSMC is not only the chip of Taiwan, but also the heart of the world — built on integrity, commitment, innovation, and partnership.
If friendship brings fear, it is no longer true friendship.
Over the past 30 years, we have learned through challenges to engage with the world in pursuit of greater global security.

Review of October Book Club Meeting by our consultant Clive

                                                                                    

Parmy Olson’s Supremacy offers a sharp, fast-paced look at the power struggle shaping artificial intelligence. Shannon was a wonderful leader who stayed up past her bedtime to lead us in this fascinating topic. Focusing on Sam Altman of OpenAI and Demis Hassabis of DeepMind, Olson turns a dense technological story into a very human one. The book explores how lofty ambitions to create safe, ethical AI collide with money, politics, and ego. What begins as a tale of idealism gradually becomes a study in compromise: Altman’s partnership with Microsoft and Hassabis’s absorption into Google show how even visionary founders are pulled toward profit and scale.

Shannon led us with clarity and energy, translating complex issues like training data, computing power, and AI ethics into vivid scenes and interviews. The book’s strength lies in its storytelling; the rivalry feels personal yet symbolic of a larger question: whether innovation can remain moral under capitalist pressure. We appreciate that Lydia joined us from Shanghai, and our small group in Kaohsiung, including Emma, Lily and Angela. Some of us at the meeting found the challenges of AI both exciting and scary, but we all accept that this is the world we are in. Florence made the very wise observation that doing this book was timely. It was amazing to find out that Shannon and Lily subscribe to and use AI in their daily lives. Angela mentioned that her husband speaks to ChatGPT like it is a friend. During the discussion, our group split on some issues, especially regarding the future of the world with AI. One of the areas worth discussing is the subtitle’s claim, “the race that will change the world.” Does the future truly depend on these few companies? We were split on this topic. Perhaps Olson simply gave these companies mythic stature? The consensus was that Supremacy is less about winners than about what’s lost when ethics trail ambition.

Overall, this is a great book for its accessibility, relevance, and depth of character. It sparked one of our liveliest discussions this year, proof that Olson has written not just a tech chronicle, but a mirror for how modern power operates.

 

Related reading:

https://youtu.be/aZs3MgrkZv4?si=mwpppIfuikWVs_Rt

https://youtu.be/qi3TySfAk64?si=jMYzMj3b2E8QUVLn

https://youtu.be/A26OIgzNR34?si=zwhYTYhKSa8J79DD

https://youtu.be/wYKvePtJUkY?si=M64KBfiOegacMssY

https://youtu.be/uQgPV3Z7bmc?si=lPTdGXOtPCRC1DBv

https://youtu.be/WOduUGVHNWc?si=xFLZaRURbds_SOPN

https://youtu.be/RtH1oepOfe4?si=h7qNqc8RKajXuq1q

https://youtu.be/qbhVnkTAeaI?si=c6HPLOuABixR4qoC
