GPT-3 is the latest achievement of OpenAI, an artificial intelligence research laboratory. It is the largest language model ever built, and it has sparked a wave of discussion about how artificial intelligence will rapidly reshape many industries.
Far fewer people, however, discuss how GPT-3 has changed OpenAI itself. In the process of creating the most successful natural language processing system ever, OpenAI has gradually evolved from a non-profit artificial intelligence laboratory into a company that sells artificial intelligence services.
The laboratory is caught between two conflicting goals: on the one hand, developing profitable artificial intelligence services; on the other, pursuing human-level artificial intelligence that benefits everyone. Striking a balance between the two has become OpenAI’s peculiar mission.
Changes in OpenAI’s structure
In March 2019, OpenAI announced that it would transform from a non-profit laboratory into a “capped-profit” company. This model opened the way for raising funds from investors and large technology companies, with the condition that investors’ returns are capped at 100 times the amount invested (a cap generous enough to remain very attractive).
Why the structural change? In a notice on its website, the company said the move was aimed at “rapidly increasing our investment in computing power and talent” to cover the expenditure required to achieve its mission.
“Computing power and talent” are the key words here.
The cost of talent and the cost of computing power are the two core challenges of artificial intelligence research. The talent pool for research of the kind OpenAI conducts is very limited, and given the growing interest in commercializing artificial intelligence, large technology companies compete fiercely to recruit AI researchers for their own projects. This has triggered an arms race in which each tech giant offers ever-higher salaries and perks to attract AI researchers.
Google and Facebook hired away two of the three pioneers of deep learning, Geoffrey Hinton and Yann LeCun. The well-respected AI expert Ian Goodfellow (inventor of the generative adversarial network, or GAN) works for Apple, and another AI standout, Andrej Karpathy, works for Tesla.
OpenAI remains strongly committed to scientific research, but with most AI talent drawn to companies that pay well, a non-profit AI laboratory finds it increasingly difficult to fill its vacancies unless it can offer a comparable level of compensation. According to a 2018 report in the New York Times, only a few OpenAI researchers earn more than $1 million a year, while DeepMind, another AI research laboratory, paid its 700 employees more than $483 million in salary in 2018.
Artificial neural networks are the main component of deep learning algorithms, and their demand for computing power is the other main driver of rising AI costs. Before it can perform a real task, a neural network must be trained on a large number of examples, a process that requires expensive computing resources. In recent years, OpenAI has taken on several very expensive AI projects, including a robotic hand that solves the Rubik’s Cube, a system that defeated Dota 2 champions, and teams of AI agents that played 5 million games of hide-and-seek in different roles.
It is estimated that training GPT-3 cost at least US$4.6 million. And training a deep learning model is not a one-shot process: repeated trials, unexpected errors, and hyperparameter tuning can multiply the cost several times over.
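A rough back-of-envelope calculation shows where a figure of this magnitude comes from. The parameter and token counts below are the ones reported for GPT-3; the GPU throughput and hourly price are illustrative assumptions, not quoted figures:

```python
# Back-of-envelope estimate of GPT-3's training cost.
# Parameter and token counts are from the GPT-3 paper; the sustained GPU
# throughput and hourly cloud price below are illustrative assumptions.

params = 175e9                  # GPT-3 parameters
tokens = 300e9                  # training tokens
flops = 6 * params * tokens     # ~6 FLOPs per parameter per token (common rule of thumb)

gpu_flops_per_s = 28e12         # assumed sustained throughput of one cloud GPU
gpu_hourly_cost = 1.5           # assumed price per GPU-hour, in USD

gpu_hours = flops / (gpu_flops_per_s * 3600)
cost = gpu_hours * gpu_hourly_cost

print(f"{flops:.2e} FLOPs, ~{gpu_hours:,.0f} GPU-hours, ~${cost:,.0f}")
# → 3.15e+23 FLOPs, ~3,125,000 GPU-hours, ~$4,687,500
```

Under these assumed numbers the estimate lands in the same millions-of-dollars range as the $4.6 million figure, and it makes clear why every additional training run is so expensive.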
OpenAI is not the first artificial intelligence research laboratory to adopt a commercial model. Facing a similar dilemma, DeepMind accepted Google’s $650 million acquisition offer in 2014.
Changes in OpenAI leadership
Under Sam Altman, one of its co-founders, OpenAI began marketing itself to investors. Altman resigned as president of the highly regarded startup accelerator Y Combinator to become CEO of OpenAI.
Before Altman, Greg Brockman was the public face of the organization. Brockman is OpenAI’s co-founder and CTO, and an experienced scientist and engineer.
In the world of technology investment, reputation and product-management ability count for more than scientific genius, and Altman is exactly the kind of person investors trust with their money. At Y Combinator, he helped fund and grow many successful startups, including Airbnb and Dropbox.
In an interview with the well-known technology media TechCrunch in May 2019, Altman said: “We have never made any money, and we currently have no plans to make money. Maybe one day we will make money, but we don’t know how to do it.”
But that has not stopped investors from pouring money into OpenAI. Microsoft, betting that Altman will somehow find a way to make the investment pay off, provided the company with $1 billion in July 2019.
Changes in OpenAI’s mission
However, there is a fundamental conflict between technology investors and a scientific research laboratory like OpenAI.
OpenAI’s stated mission is to ensure that it can “build safe artificial general intelligence (AGI) and share this technology with the world to benefit all of humanity”.
But experts estimate it will take at least decades to reach the lofty goal of AGI, and the patience of technology investors rarely lasts that long. If an investment cannot be repaid within a few years, they lose interest. One need only look at the famous Boston Dynamics: although its robot videos spread virally on YouTube, the company itself has changed hands several times.
So, how can OpenAI gain the favor of funders while maintaining AGI research?
“OpenAI is developing a sequence of increasingly powerful AI technologies, which requires a lot of capital for computational power. The most obvious way to cover costs is to build a product, but that would mean changing our focus. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them,” OpenAI wrote in the blog post announcing Microsoft’s investment.
Related Links:
https://openai.com/blog/microsoft/
But there are clear signs that OpenAI has at least partially become a product company.
Commercial version of GPT-3
In May 2020, Microsoft announced that, in partnership with OpenAI and exclusively for OpenAI’s use, it had built one of the five most powerful supercomputers in the world, allowing Microsoft to fully tap OpenAI’s talent to create what Altman calls “our dream system.” The supercomputer helps OpenAI train its deep learning models, and will also serve other customers of the Microsoft Azure cloud computing platform.
Less than two weeks later, OpenAI published the GPT-3 paper on arXiv, the preprint server. Unlike its predecessor GPT-2, GPT-3 was not released to the public. OpenAI instead chose a commercial release: developers can purchase access to GPT-3 through an application programming interface (API).
OpenAI announced the API on June 11, with early access granted to a limited set of developers.
This makes GPT-3 very similar to Microsoft Cognitive Services, a black-box AI cloud platform that offers developers computer vision, natural language processing, and other AI capabilities through APIs, without exposing the details of the models running behind them.
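The black-box pattern described above can be sketched as follows. A client sends a prompt and a few generation parameters over HTTP and receives generated text back, with no access to the model’s weights or internals. The endpoint path and fields mirror the general shape of OpenAI’s completions API at launch, but treat the exact details here as illustrative:

```python
# Sketch of the black-box API access pattern: the model stays on the
# provider's servers; the client only exchanges prompts and completions.
# Endpoint and field names are illustrative, based on the API's launch-era shape.
import json

def build_completion_request(prompt, api_key, max_tokens=64, temperature=0.7):
    """Assemble the HTTP request a client would POST to a completions API."""
    return {
        "url": "https://api.openai.com/v1/engines/davinci/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": temperature,
        }),
    }

req = build_completion_request("Write a haiku about autumn.", api_key="sk-...")
# A real call would then be e.g.:
#   requests.post(req["url"], headers=req["headers"], data=req["body"])
```

Nothing about the model itself (architecture, weights, training data) crosses the wire, which is precisely what distinguishes this commercial release from publishing the model openly.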
This will help OpenAI pay back at least part of Microsoft’s investment, and Microsoft in turn benefits from the partnership by integrating the technology more deeply with products such as Bing, Office 365, Outlook.com, and Teams.
The commercial release of GPT-3 brings OpenAI one step closer to being an artificial intelligence product company, and one step further from its original non-profit research intentions.
Downplaying artificial intelligence warnings
After developing GPT-2, the OpenAI team decided not to release it to the public, citing concerns about “malicious applications of the technology” such as spam and fake news. It instead adopted a staged approach: smaller versions of the model were released and evaluated first, before the full-size model was made available.
Although the author believed at the time that a well-performing language model would not by itself cause a proliferation of fake news, he supported thinking through a technology’s possible consequences before releasing it.
GPT-3 is more than two orders of magnitude larger than GPT-2 (175 billion parameters versus 1.5 billion). One of the key issues with deep learning language models is coherence over long spans: as the generated text grows longer, the model begins to lose the thread. Experiments have shown that larger neural networks usually stay coherent for longer, which means the potential for misuse is much greater in GPT-3 than in GPT-2.
But this time, OpenAI did not sound the alarm about GPT-3 becoming a weapon for spam machines and fake news. On the contrary, OpenAI’s executives tried to downplay warnings about it. In July, Sam Altman played down the GPT-3 hype in a tweet.
Most of Altman’s comments are fair, because artificial intelligence still has a long way to go before it reaches human-level intelligence. Many GPT-3 experiments show that, despite its fascinating progress, the language model still struggles with some basic tasks that reflect real intelligence.
Nevertheless, Altman’s remarks read like a company executive assuring investors that everything is under control.
OpenAI as a product company
GPT-3 has drawn wide acclaim from the technical community since its release. Many developers and entrepreneurs have posted content generated automatically by GPT-3: poems, memes, tweets, and website mockups.
A developer even managed to use GPT-3 to generate Python code to build a deep learning model.
GPT-3 has clear advantages and may become a turning point for the artificial intelligence business. One of the main limitations of deep learning is that it produces narrow AI systems: a model performs well on the specific task it was trained for, but does not generalize to other domains. To build a new deep learning application, you must either train a model from scratch or use transfer learning to fine-tune a pre-trained model’s parameters for the new task.
This limitation has hindered the development of AI services as a platform. Although GPT-3 still counts as narrow AI in the strict sense, it has shown that it can perform many tasks without task-specific training examples (so-called zero-shot or few-shot learning). This means it can adapt to new applications without any re-tuning of its parameters.
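The mechanism behind this is "in-context learning": instead of fine-tuning the model's weights for each new task, the task is demonstrated inside the prompt itself. A minimal sketch, with a hypothetical translation task and made-up example pairs:

```python
# Few-shot "in-context learning": the task is taught through demonstrations
# placed in the prompt, with no change to the model's parameters.
# The translation format and example pairs here are purely illustrative.

def few_shot_prompt(examples, query):
    """Build a prompt that teaches the task purely through demonstrations."""
    lines = [f"English: {en}\nFrench: {fr}" for en, fr in examples]
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

demos = [("cheese", "fromage"), ("dog", "chien")]
prompt = few_shot_prompt(demos, "cat")
print(prompt)
# The same model, with unchanged parameters, can be pointed at translation,
# summarization, or Q&A simply by swapping the demonstrations in the prompt.
```

Because switching tasks is just a matter of changing the prompt, a single hosted model can serve many different applications at once, which is exactly what makes a platform business possible.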
This capability has spawned many ideas for building new services on top of the model. Debuild.co, for example, uses GPT-3 to create web applications.
Augrented, a company that helps tenants research potential landlords, is exploring the use of GPT-3 to write legal notices or other simple English statements to help tenants defend their rights.
OthersideAI is also using GPT-3 to provide users with creative tools.
GPT-3 may eventually become a new platform on which new businesses and ecosystems are built. That would be a success for Altman, but it also makes OpenAI a product and services company, which is a very different thing from publishing an open-source AI model and letting developers do their own things with it.
At this stage, OpenAI must meet customer needs, scale its infrastructure, and handle compliance issues. As startups come to depend on the model for their survival, OpenAI must also cope with the specific challenges of running a deep learning business. It still has many problems to solve, such as mitigating harmful biases and dealing with model drift, and these are extremely costly tasks for a deep learning model with 175 billion parameters.
At the same time, OpenAI needs to figure out how to solve these problems while maintaining profitability.
Although Altman is a very successful entrepreneur, he cannot do it alone. As OpenAI moves further into product management, it will need more help from Microsoft.
OpenAI already relies on Microsoft’s cloud infrastructure to train and run its models, but in the near future it may need the tech giant’s help with other matters too: legal compliance, customer support, security and privacy, product scaling, and so on.
The future of OpenAI
(Image: OpenAI headquarters in San Francisco)
OpenAI’s story illustrates the challenges facing scientific AI research. The prevailing belief at present is that bigger deep learning models yield more capable AI systems, which means research laboratories need large sums of money to attract talent and to train their ever-larger deep learning models.
For now, the only parties willing to provide that money are the large technology companies, and investors expect a return, which forces research laboratories to divert some of their resources toward profitable products. In the end, a large company may absorb the laboratory entirely into its own business goals.
We have already seen this pattern after Google’s acquisition of DeepMind: the AI laboratory must split its resources between an AGI research arm and an “applied AI” arm dedicated to building profitable products, and the operation has still not broken even.
As for OpenAI, it is still doing well. But the deeper it sinks into commercializing AI services, the harder it becomes to hold on to its original intentions. Will it keep its research into human-level intelligence transparent and open source, or will it drift toward building commercial products and guarding its research as trade secrets and intellectual property? Will it stay “people-first,” or will satisfying its investors (and possible future owners) become its main concern?
Time will prove everything.