Claude 3.5's Coding Revenue Surges 10x: Winning Over Cursor and Striking Back at OpenAI

xxn · Views: 17,356 · 2024-12-15 14:00:40 · Comments: 0

Anthropic is the rival that keeps OpenAI's top executives up at night.

AI programming used to be one of OpenAI's strong points and a major reason why millions of users subscribed to ChatGPT. However, in July of this year, the promising start-up Cursor, which had already received an $8 million investment from OpenAI, made an unambiguous choice to switch its AI programming assistant's default model from GPT to Claude.

Cursor co-founder Aman Sanger even praised Anthropic on Lex Fridman's podcast in October, saying that thanks to its better understanding of user needs, Claude 3.5 Sonnet can be considered the current "best" model for programming.

What's more, OpenAI found in benchmark tests in the early fall that its own models had been severely outperformed by Anthropic's on automated programming tasks. All of this alarmed OpenAI's leadership.

With Anthropic's success in programming, that momentum is rapidly converting into commercial results: over the past three months, the company's annualized revenue from software development and code generation has grown tenfold.

To counter this trend, OpenAI has begun to urgently improve the coding abilities of its own models.

OpenAI needs to be vigilant not only of Anthropic, but also of Google, which has just released Gemini 2.0, and xAI, which has built what it calls the world's most powerful supercomputer.

Nevertheless, OpenAI, founded five years before Anthropic, still holds a significant revenue advantage: this year it is expected to earn about $4 billion, roughly five times Anthropic's revenue.

In terms of scale, OpenAI far surpasses Anthropic: it has raised about $20 billion in funding and is valued as high as $157 billion, while Anthropic has raised about $11 billion at a valuation of roughly $18 billion.

Financially, OpenAI also has an edge: the revenue share it pays its cloud provider, Microsoft, is lower than the share Anthropic pays Amazon.

Since developing and operating AI technology is extremely expensive, both OpenAI and Anthropic are burning through billions of dollars, and both will need to keep raising money for the foreseeable future. OpenAI, moreover, plans to develop its own data-center chips and other hardware in order to reduce its dependence on external suppliers.

Throughout its development, Anthropic has maintained one form of self-restraint: a strong emphasis on safety. In this context, safety means preventing models from making serious mistakes or taking actions potentially harmful to human life, such as helping to develop biological weapons or carry out nuclear attacks. (Providing models to the US military apparently does not count.)

All seven of Anthropic's co-founders, including CEO Dario Amodei, previously worked at OpenAI; they left at the end of 2020 over concerns about AI safety.

According to Amodei, the company had already built an AI chatbot by the summer of 2022, but chose to continue safety testing rather than release it hastily.

In November 2022, OpenAI released ChatGPT, which caused a sensation in the industry and among the public. Four months later, Anthropic released Claude.

Recently, Anthropic has grown bolder in challenging the giant OpenAI.

In October of this year, after several of OpenAI's top executives, including CTO Mira Murati, left the company, Anthropic put up an advertisement for Claude at San Francisco International Airport mocking OpenAI: "A choice without drama."

Anthropic is also more decisive about releasing experimental features. In October, despite acknowledging potential security risks on its blog, the company released a groundbreaking capability called "Computer Use," which lets the Claude model operate a computer like a human: viewing the screen, moving the cursor, clicking buttons, and typing text.

This move drew ridicule from OpenAI's leadership, who at a recent meeting criticized Anthropic for contradicting its own AI-safety principles.

Another grudge behind the two companies' split: project disputes and a private team

In fact, the enmity between Anthropic and OpenAI is far more complex than is publicly known. The seeds of conflict were sown among the founders long before the split over differing AI-safety philosophies.

As OpenAI's vice president of research, Dario Amodei led the development of the GPT-2 and GPT-3 models. He also joined researchers from OpenAI, Google DeepMind, and elsewhere to co-author a groundbreaking paper on reinforcement learning from human feedback (RLHF).

Paper URL: https://arxiv.org/pdf/1706.03741

This breakthrough technology significantly advanced conversational AI, enabling humans to directly participate in optimizing and improving AI models.
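The core idea behind that line of work can be sketched simply: human labelers compare two model outputs and pick the one they prefer, and a reward model is trained so that preferred outputs receive higher scores. A minimal illustration of the pairwise (Bradley-Terry style) preference loss commonly used for this, with function names of our own invention rather than from the paper:

```python
import math

def preference_probability(r_a: float, r_b: float) -> float:
    """Probability that output A is preferred over output B,
    given scalar reward-model scores r_a and r_b (a sigmoid of the margin)."""
    return 1.0 / (1.0 + math.exp(r_b - r_a))

def preference_loss(r_preferred: float, r_rejected: float) -> float:
    """Negative log-likelihood of the human's recorded choice; minimizing
    this pushes the reward model to score preferred outputs higher."""
    return -math.log(preference_probability(r_preferred, r_rejected))

# Equal scores: the model is indifferent, so the loss is ln 2
print(round(preference_loss(0.0, 0.0), 4))  # 0.6931
# A preferred output already scored higher yields a much smaller loss
print(round(preference_loss(2.0, 0.0), 4))  # 0.1269
```

In practice the scores come from a neural network and this loss is minimized over many labeled comparisons; the trained reward model then guides reinforcement learning of the chat model itself.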

It has been reported that during their tenure, Dario and Daniela Amodei had serious disagreements with other executives, particularly with Sam Altman and Greg Brockman over project leadership and safety issues.

Left: Dario Amodei; Right: Daniela Amodei (Dario's sister and Anthropic's President)

In early 2019, a seemingly routine project proposal became a flashpoint for the coming split. Greg Brockman had been leading an AI project that could play the online multiplayer game Dota 2, and he asked to join Dario Amodei's GPT model team.

That model later became the foundational technology for best-selling products such as ChatGPT. Surprisingly, the Amodei siblings flatly vetoed Brockman's request to join the project, explaining to other staff that he had a reputation for being difficult to work with and frequently modified code without prior communication.

The conflict only escalated from there. According to a former OpenAI employee, in the months before they left, the rift between the Amodei siblings and the rest of OpenAI deepened.

Dario Amodei even created a private Slack channel that only specific researchers could join, excluding Altman, Brockman, and other senior members of the company.

This nearly cracked the team apart and foreshadowed the eventual split over fundamental differences in AI-safety philosophy. In November of this year, Dario Amodei put it pointedly on a podcast: "If you have your own vision for achieving a goal, you should go and pursue it. Trying to persuade others to change their minds is extremely inefficient."

