Democracies must maintain the lead in AI

By Adnan Mahar · December 5, 2024


Dario Amodei has worked in the world’s most advanced artificial intelligence labs: at Google, OpenAI, and now Anthropic. At OpenAI, Amodei drove the company’s core research strategy, building its GPT class of models for five years, until 2021 — a year before the launch of ChatGPT.

After quitting over differences about the future of the technology, he founded Anthropic, the AI start-up now known for its industry-leading chatbot, Claude.

Anthropic was valued at just over $18bn earlier this year and, last month, Amazon invested $4bn, taking its total to $8bn — its biggest-ever venture capital commitment. Amazon is working to embed Anthropic’s Claude models into the next-generation version of its Alexa speaker.

Amodei, who co-founded Anthropic with his sister Daniela, came to artificial intelligence from biophysics, and is known for observing the so-called scaling laws — the phenomenon whereby AI software gets dramatically better with more data and computing power.

In this conversation with the FT’s Madhumita Murgia, he speaks about new products, the concentration of power in the industry, and why an “entente” strategy is central to building responsible AI.

Madhumita Murgia: I want to kick off by talking about your essay, Machines of Loving Grace, which describes in great depth the ways in which AI could be beneficial to society. Why choose to outline these upsides in this detail right now?

Dario Amodei: In a sense, it shouldn’t be new, because this dichotomy between the risks of AI and the benefits of AI has been playing out in the world for the last two or three years. No one is more tired of it than me. On the risk side . . . I’ve tried to be specific. On the benefits side, it’s very motivated by techno-optimism, right? You’ll see these Twitter posts with developers talking about “build, build, build” and they’ll post these pictures of gleaming cities. But there’s been a real lack of concreteness about the positive benefits.

MM: There are a lot of assumptions when people talk about the upsides. Do you feel that there was a bit of fatigue from people . . . never being (told) what that could actually look like?

DA: Yeah, the upside is explained either in very vague, emotive terms or in really extreme ones. The whole singularity discourse is . . . “We’re all going to upload ourselves to the cloud and whatever problem you have, of course, AI will instantly solve it”. I think it is too extreme and it lacks texture.

Can we actually envision a world that is good, that people want to live in? And what are the specific things that will get better? And what are the challenges around them? If we look at things like cancer and Alzheimer’s, there’s nothing magic about them. There’s an incredible amount of complexity, but AI specialises in complexity. It’s not going to happen all at once. But — bit by bit — we’re going to unravel this complexity that we couldn’t deal with before.

MM: What drew you to the areas that you did pick, like biology, neuroscience, economic development and work?

DA: I looked at the places that could make the most difference to human life. For me, that really pointed to biology and economic development. There are huge parts of the world where these inventions that we’ve developed in the developed world haven’t yet propagated. I wanted to target what immediately occurred to me as some of the biggest predictors and determinants of how good life is for humans.

MM: In an ideal world, what would you like to spend Anthropic’s time on in 2025?

DA: Two things: one would be mechanistic interpretability, looking inside the models to open the black box and understand what’s inside them. I think that’s the most exciting area of AI research right now, and perhaps the most societally important.

And the second would be applications of AI to biology. One reason that I went from biological science to AI is I looked at the problems of biology and . . . they seemed almost beyond human scale, almost beyond human comprehension — not that they were intellectually too difficult, but there was just too much information, too much complexity.

It is my hope, like some other people in the field — I think Demis Hassabis is driven in this way too — to use AI to solve the problems of science and particularly biology, in order to make human life better.

Anthropic is working with pharmaceutical companies and biotech start-ups (but) it’s very much at the “how can we apply Claude models right now?” level. I hope we start in 2025 to really work on the more blue-sky, long-term ambitious version of that — both with companies and with researchers and academics. 
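
On the first of those two directions, a concrete if heavily simplified picture may help: mechanistic interpretability starts by instrumenting a trained network so its intermediate activations can be read out and studied. Below is a toy sketch in PyTorch using a forward hook. It is illustrative only, not Anthropic's actual tooling, which applies far more sophisticated techniques to frontier-scale models.

```python
# Toy sketch of "looking inside the model": capture a layer's hidden
# activations with a forward hook so they can be inspected offline.
# Illustrative only; this is not Anthropic's interpretability tooling.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)

captured = {}

def save_activations(module, inputs, output):
    # Stash the post-ReLU activations for later analysis.
    captured["hidden"] = output.detach()

model[1].register_forward_hook(save_activations)

x = torch.randn(8, 16)   # a batch of 8 toy inputs
logits = model(x)

print(captured["hidden"].shape)                       # torch.Size([8, 32])
print((captured["hidden"] > 0).float().mean(dim=0))   # per-unit firing rates
```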

Dario Amodei on stage last year during TechCrunch Disrupt in San Francisco © Kimberly White/Getty Images for TechCrunch

MM: You’ve been instrumental in pushing forward the frontiers of AI technology. It’s been five months since Sonnet 3.5, your last major model, came out. Are people using it in new ways compared to some of the older models?

DA: I’ll give an example in the field of coding. I’ve seen a lot of users who are very strong coders, including some of the most talented people within Anthropic who have said previous models weren’t useful to (them) at all. They’re working on some hard problem, something very difficult and technical, and they never felt that previous models actually saved them time.

It’s just like if you’re working with another human: if they don’t have enough of the skill that you have, then collaborating with them may not be useful. But I saw a big change in the number of extremely talented researchers, programmers, employees . . . for whom Sonnet 3.5 was the first time that the models were actually helpful to them.

Another thing I would point to is Artifacts: a tool on the consumer side of Claude. (With it,) you can do iterative development. You can have this back-and-forth where you tell the model: “Make a video game for me where the main character looks like this, and the environment looks like this”. And, then, it’ll make it. (But) you can go back and talk to it and say: “I don’t think my main character looks right. He looks like Mario. I want him to look more like Luigi.” Again, it shows the collaborative development between you and the AI system.

MM: Has this led to revenue streams or business models you’re excited about? Do you think there are new products that you can envision coming out of it, based on these new capabilities?

DA: Yes. While we have a consumer product, the majority of Anthropic’s business has come from selling our model to other businesses, via an API on which they build these products. So I think our general position in the ecosystem has been that we’re enabling other companies to build these amazing products and we’ve seen lots of things that have been built.

For example, last month, we released a capability called “Computer Use” to developers. Developers can build on top of this capability: you can tell it, “book me a reservation at this restaurant” or “plan a trip for this day”, and the model will just directly use your computer. It’ll look at the screen. It’ll click the mouse at various positions. And it will type in things using the keyboard.

It’s not a physical robot, but it’s able to type in . . . automate and control your computer for you. Within a few days of our releasing it, people had built versions that control an iPhone screen, an Android screen, Linux and Mac.
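
For developers, Computer Use is exposed as a beta tool in Anthropic's API. The sketch below follows the identifiers Anthropic published at the October 2024 launch; treat the model string, tool type and beta flag as assumptions that may since have changed.

```python
# Hedged sketch of calling the Computer Use beta. The model name, tool
# type and beta flag are the ones published at launch (October 2024)
# and are assumptions here; check current docs before relying on them.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",   # virtual screen, mouse and keyboard
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Book me a reservation at this restaurant."}],
    betas=["computer-use-2024-10-22"],
)

# The reply contains tool_use blocks (take a screenshot, click here,
# type this); the developer's own harness executes them and loops.
print(response.content)
```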

MM: Is that something you would release as its own product? The word being thrown around everywhere these days is “agent”. You could have your own version of that, right?

DA: Yes, I can imagine us directly making a product that would do this. I actually think the most challenging thing about AI agents is making sure they’re safe, reliable and predictable. It’s one thing when you talk to a chatbot, right? It can say the wrong thing. It might offend someone. It might misinform someone. Of course, we should take those things seriously. But making sure that the models do exactly what we want them to do becomes much more highlighted when we start to work with agents.

MM: What are some of the challenges?

DA: As a thought experiment, just imagine I have this agent and I say: “Do some research for me on the internet, form a hypothesis, and then go and buy some materials to build (some)thing, or make some trades executing my trading strategy.” Once the models are doing things out there in the world for several hours, it opens up the possibility that they could do things I didn’t want them to do.

Maybe they’re changing the settings on my computer in some way. Maybe they’re representing me when they talk to someone and they’re saying something that I wouldn’t endorse at all. Maybe they’re taking some action on another set of servers. Maybe they’re even doing something malicious.

So, the wildness and unpredictability needs to be tamed. And we’ve made a lot of progress with that. It’s using the same methods that we use to control the safety of our ordinary systems, but the level of predictability you need is substantially higher.

I know this is what’s holding it up. It’s not the capabilities of the model. It’s getting to the point where we’re assured that we can release something like this with confidence and it will reliably do what people want it to do; when people can actually have trust in the system.

Once we get to that point, then we’ll release these systems.

MM: Yes, the stakes are a lot higher when it moves from it telling you something you can act on, versus acting on something for you.

DA: Do you want to let a gremlin loose in the internals of your computer to just change random things? You might never know what changed those things. To be clear, I think all these problems are solvable. But these are the practical challenges we face when we design systems like this.

MM: So when do you think we get to a point of enough predictability and mundanity with these agents that you’d be able to put something out?

DA: This is an early product. Its level of reliability is not all that high. Don’t trust it with critical tasks. I think we’ll make a lot of progress towards that by 2025. So I would predict that there will be products in 2025 that do roughly this, but it’s not a binary. There will always still be tasks that you don’t quite trust an AI system to do because it’s not smart enough or not autonomous enough or not reliable enough.

I’d like us to get to the point where you can just give the AI system a task for a few hours — similar to a task you might give to a human intern or an employee. Every once in a while, it comes back to you, it asks for clarification, and then it completes the task. If I want to have a virtual employee, where I say go off for several hours, do all this research, write up this report — think of a management consultant or a programmer — people (must have) confidence that it’ll actually do what you said it would do, and not some crazy other thing.
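
That working pattern (delegate a task, pause to ask for clarification, return a result) is essentially a supervised agent loop with a step budget. Here is a minimal sketch; every helper in it is hypothetical, standing in for real model calls and tool execution rather than any actual Anthropic API.

```python
# Minimal sketch of the "virtual employee" loop described above: the
# agent works autonomously, pauses to ask for clarification, and is
# capped by a step budget. All helpers are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class StepResult:
    done: bool
    question: str | None   # set when the agent needs the user's input
    output: str

def run_step(task: str, history: list[str]) -> StepResult:
    # Hypothetical: a model call plus tool execution would go here.
    return StepResult(done=True, question=None, output=f"report on: {task}")

def run_agent(task: str, max_steps: int = 50) -> str:
    history: list[str] = []
    for _ in range(max_steps):         # the budget tames the "wildness"
        step = run_step(task, history)
        if step.question:              # come back and ask, like an intern
            history.append("user: " + input(step.question + " "))
            continue
        history.append(step.output)
        if step.done:
            return step.output
    raise RuntimeError("step budget exhausted before the task finished")

print(run_agent("research X and write up a report"))
```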

MM: There’s been talk recently about how these capabilities are perhaps plateauing, and we’re starting to see limits to the current techniques, in what is known as the “scaling law”. Are you seeing evidence of this, and looking at alternative ways in which to scale up intelligence in these models?

DA: I’ve been in this field for 10 years and I’ve been following the scaling laws for most of that period. I think the thing we’re seeing is in many ways pretty ordinary and has happened many times during the history of the field. It’s just that, because the field is a bigger deal with more economic consequences, more people are paying attention to it (now), and very much over-interpreting very ambiguous data.

If we go back to the history, the scaling laws don’t say that anytime you train a larger model, it does better. The scaling laws say that if you scale up models with the model size in proportion to the data, if all the engineering processes work well in training the models, if the quality of the data remains constant, as you scale it up, (then) . . . the models will continue to get better and better.
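
For reference, one widely cited empirical form of these laws, from Hoffmann et al. (2022), writes model loss as a power law in both parameter count N and training tokens D; it is offered here as illustration, not as notation Amodei uses:

```latex
% Illustrative scaling-law form (Hoffmann et al., 2022):
% L is loss, N parameters, D training tokens; E, A, B, \alpha, \beta are fitted.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Growing N and D in proportion, with data quality held constant, keeps both error terms shrinking together, which is why the conditions he lists matter.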

MM: And this, as you say, isn’t a mathematical constant, right?

DA: It’s an observed phenomenon and nothing I’ve seen gives any evidence whatsoever against this phenomenon. We’ve seen nothing to refute the pattern that we’ve seen over the last few years.

What I have seen (is) cases where, because something wasn’t scaled up in quite the right way the first time, it would appear as though things were levelling off. There were four or five other times at which this happened.

MM: So in the current moment, when you’re looking at your training runs of your current models, are there any limitations?

DA: I’ve talked many times about synthetic data. As we run out of natural data, we start to increase the amount of synthetic data. So, for example, AlphaGo Zero (a version of Google DeepMind’s Go-playing software) was trained with synthetic data. Then there are also reasoning methods, where you teach the model to self-reflect. So there are a number of ways to get around the data wall.
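
The self-play idea is easy to see in miniature: the system manufactures its own labelled training data by playing against itself. The toy below does this for tic-tac-toe with random play. AlphaGo Zero, of course, paired self-play with search and a neural network, so this illustrates only the data-generation pattern:

```python
# Toy illustration of self-play as a synthetic-data source: each game
# of random tic-tac-toe yields (board state, eventual outcome) pairs,
# with no human data required. Illustrative of the pattern only.
import random

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game():
    board, player, states = ["."] * 9, "X", []
    while winner(board) is None and "." in board:
        move = random.choice([i for i, s in enumerate(board) if s == "."])
        board[move] = player
        states.append("".join(board))
        player = "O" if player == "X" else "X"
    outcome = winner(board) or "draw"
    return [(state, outcome) for state in states]  # label states with the result

dataset = [pair for _ in range(1000) for pair in self_play_game()]
print(len(dataset), dataset[0])
```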

MM: When we talk about scaling, the big requirement is cost. Costs seem to be rising steeply. How does a company like Anthropic survive when the costs are going up like that? Where is this money coming from over the next year or so?

DA: I think people continue to understand the value and the potential of this technology. So I’m quite confident that some of the large players that have funded us and others, as well as the investment ecosystem, will support this.

And revenue is growing very fast. I think the math for this works. I’m pretty confident that the level of, say, $10bn — in terms of the cost of the models — is something that an Anthropic will be able to afford.

In terms of profitability, this is one thing that a number of folks have gotten wrong. People often look at how much you spent and how much revenue you made in a given year. But it’s actually more enlightening to look at a particular model.

Let’s just take a hypothetical company. Let’s say you train a model in 2023. The model costs $100mn. Then, in 2024, that model generates, say, $300mn of revenue. Then, in 2024, you train the next model, which costs $1bn. And that model isn’t done yet, or it gets released near the end of 2024. Then, of course, it doesn’t generate revenue until 2025.

So, if you ask “is the company profitable in 2024?”, well, you made $300mn and you spent $1bn, so it doesn’t look profitable. If you ask “was each model profitable?”, well, the 2023 model cost $100mn and generated several hundred million in revenue. So, the 2023 model is a profitable proposition.

These numbers are not Anthropic numbers. But what I’m saying here is: the cost of the models is going up, but the revenue of each model is going up and there’s a mismatch in time because models are deployed substantially later than they’re trained.
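
The arithmetic in that hypothetical is worth making explicit. Using only the interview’s own numbers (explicitly not Anthropic’s), the year view and the model view give opposite answers:

```python
# The interview's hypothetical, explicitly not Anthropic's numbers:
# year-level accounting looks unprofitable, model-level does not.
train_2023 = 100       # $mn to train the 2023 model
revenue_2024 = 300     # $mn the 2023 model earns during 2024
train_2024 = 1000      # $mn to train the 2024 model (earns nothing until 2025)

# Per-year view: 2024 mixes the old model's revenue with new training spend.
print("2024 P&L ($mn):", revenue_2024 - train_2024)        # -700, looks unprofitable

# Per-model view: judge each model against its own lifetime revenue.
print("2023 model P&L ($mn):", revenue_2024 - train_2023)  # +200, profitable
```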

MM: Do you think it’s possible for a company like Anthropic to do this without a hyperscaler (like Amazon or Google)? And do you worry about their concentrating power, since start-ups building LLMs can’t actually work without their funding, without their infrastructure?

DA: I think the deals with hyperscalers have made a lot of sense for both sides (as) investment is a way to bring the future into the present. What we mainly need to buy with that money is chips. And both the company and the hyperscaler are going to deploy the products on clouds, which are also run by hyperscalers. So it makes economic sense.

I’m certainly worried about the influence of the hyperscalers, but we’re very careful in how we do our deals.

The things that are important to Anthropic are, for example, our responsible scaling policy, which is basically: when your models’ capabilities get to a certain level, you have to measure those capabilities and put safeguards in place if they are going to be used.

In every deal we’ve ever made with a hyperscaler, it has to bind the hyperscaler, when they deploy our technology, to the rules of our scaling policy. It doesn’t matter what surface we’re deploying the model on. They have to go through the testing and monitoring that our responsible scaling policy calls for.

Another thing is our long-term benefit trust. It’s a body that ultimately has oversight over Anthropic. It has the hard power to appoint many of Anthropic’s board seats. Meanwhile, hyperscalers are not represented on Anthropic’s board. So ultimate control over the company remains in the hands of the long-term benefit trust, a body of financially disinterested actors with final authority over Anthropic.

MM: Do you think it’s viable for an LLM-building company today to continue to hold the power in terms of the products it produces and the impact they have on people, without an Amazon or Google or a Microsoft?

The Anthropic website and mobile phone app © AP

DA: I think it’s economically viable to do it while maintaining control over the company. And while maintaining your values. I think doing it requires a large amount of resources to come from somewhere. That can be from a hyperscaler. That could, in theory, be from the venture capital system. That could even be from a government.

We’ve seen some cases, for better or worse, (in which) individuals like Elon Musk are taking their large private wealth and using that. I do think (that) to build these very large foundation models requires some very large source of capital, but there are many different possible sources of capital. And I think it’s possible to do it while staying in line with your values.

MM: You recently signed a deal with the US Department of Defense. Was that partly a funding decision?

DA: No, it absolutely was not a funding decision. Deploying things with governments, at the procurement stage? Anyone who’s starting up a company will tell you that, if you want to get revenue quickly, that’s just about the worst way to do it.

We’re actually doing that because it’s a decision in line with our values. I think it’s very important that democracies maintain the lead in this technology and that they’re properly equipped with resources to make sure that they can’t be dominated or pushed around by autocracies.

One worry I have is, while the US and its allies may be ahead of other countries in the fundamental development of this technology, our adversaries — like China or Russia — may be better at deploying what they have to their own governments. I wouldn’t do this if it were just a matter of revenue. It’s something I actually believe . . . is central to our mission. 

MM: You wrote about this “entente strategy”, with a coalition of democracies building AI. Is it part of your responsibility as an AI company to play a role in advancing those values as part of the (military) ecosystem?

DA: Yes, I think so. Now, it’s important to do it carefully. I don’t want a world in which AIs are used indiscriminately in military and intelligence settings. As with any other deployment of the technology — maybe even more so — there need to be strict guardrails on how the technology is deployed.

Our view as always is we’re not dogmatically against or for something. The position that we should never use AI in defence and intelligence settings doesn’t make sense to me. The position that we should go gangbusters and use it to make anything we want — up to and including doomsday weapons — that’s obviously just as crazy. We’re trying to seek the middle ground, to do things responsibly.

MM: Looking ahead to artificial general intelligence, or superintelligent AI, how do you envision those systems? Do we need new ideas to make the next breakthroughs? Or will it be iterative?

DA: I think innovation is going to coexist with this industrial scaling up. Getting to very powerful AI, I don’t think there’s one point. We’re going to get more and more capable systems over time. My view is that we’re basically on the right track and unlikely to be more than a few years away. And, yeah, it’s going to be continuous, but fast.

This transcript has been edited for brevity and clarity


