OpenAI: Was the Shift to Closed Source Justified?

Written By: Braeden Cullen

The debate over the democratization of AI technologies has become increasingly pertinent in recent years. As AI development continues to accelerate, the two sides of this debate continue to clash over whether these advances should be made available to the general public or restricted to a select few.

OpenAI

OpenAI was the brainchild of Elon Musk, Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba, and John Schulman. The aptly named nonprofit was founded with the goal of “build[ing] value for everyone rather than shareholders” and stated that “[our] papers, blog posts, or code, and our patents (if any) will be shared with the world” (OpenAI). OpenAI was quickly thrust into the spotlight as researchers from around the globe chipped in to help deliver on the promise of AI for all, and the organization promptly made headlines by releasing machine learning environments and training challenges that became popular among newcomers to the field.

Despite this excellent start, just one year into the nonprofit’s existence, OpenAI’s leadership determined that maintaining nonprofit status was financially unattainable. Access to immense computational resources is critical for staying at the forefront of AI development, and OpenAI ultimately could not keep up with competitors who were generating ever-greater revenue (Hao). Without the approval of many of its employees, OpenAI quietly updated its core values with clauses that directly contradicted its original mission statement, transitioned from a nonprofit to a for-profit organization, and quickly racked up high-profile investments from the likes of Microsoft (Hao). OpenAI had strayed far from its original promise to “stay free from financial obligations” and angered many of its original supporters, who were met with this harsh reality. Recently, as developments such as the GPT language models have once again thrust OpenAI into the spotlight, the organization has become increasingly closed-source, opting to share little to none of the code behind these remarkable models.

GPT

Generative Pre-trained Transformer (GPT) models were developed by OpenAI and quickly took the natural language processing (NLP) world by storm. These models were able to perform complex NLP tasks such as text summarization and question answering without any supervised training (Gershgorn), achieving performance comparable to state-of-the-art NLP models trained in a supervised fashion. This distinction is important because supervised models are severely limited in their long-term potential: labeling the training data fed into the algorithm is a costly, time-consuming process that an unsupervised model avoids entirely. The goal of the GPT models was to create a generalized model capable of a wide range of tasks after being trained on a large unlabeled dataset. Another key advantage of this approach was its zero-shot performance, in which a model is evaluated on a variety of tasks without being tailored to any specific one (Brown et al.). GPT-2 was very similar to its predecessor but was trained on a much larger dataset, which improved performance and further proved that GPT-based models were remarkably effective. The third iteration, aptly named GPT-3, demonstrated an even greater leap in performance over GPT-1 and GPT-2: it was trained on a larger dataset than GPT-2 and contained over 100 times more parameters than its predecessor. The result was a model that kickstarted the conversation over the ethical concerns of a generalized model like GPT-3 and the potential consequences if the model were accessed by individuals with malicious intent.
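Because GPT-2’s weights were released openly, anyone can reproduce this zero-shot behavior firsthand. Below is a minimal sketch of zero-shot question answering using the public GPT-2 checkpoint via the Hugging Face transformers library; the library and the prompt framing are choices of this illustration, not OpenAI’s original code.

```python
# Minimal zero-shot sketch: the openly released GPT-2 weights, loaded
# through Hugging Face's `transformers` library ("gpt2" is the public
# 124M-parameter checkpoint). No task-specific fine-tuning is done;
# the prompt alone frames the question-answering task.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Q: What is the capital of France?\nA:"
result = generator(prompt, max_new_tokens=10, do_sample=False)
print(result[0]["generated_text"])
```

The same loop could swap in a summarization or translation prompt without touching the model at all, which is exactly the kind of generality the GPT papers describe.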

The Shift to Closed Source: Was it Justified?

The code behind both GPT-1 and GPT-2 has been officially released by OpenAI and is available on GitHub for any developer to use and improve upon. The same cannot be said for GPT-3. Rather than deliver on the original promise in its mission statement that “[our code] will be shared with the world” (OpenAI), OpenAI decided not to release the source code for GPT-3, instead offering the model as a paid service. This service is unreasonably expensive, which effectively makes it accessible only to large corporations that can stomach the cost. OpenAI claims that the shift away from open source was driven by concerns about the technology making its way into the hands of groups with malicious intentions: “Any socially harmful activity that relies on generating text could be augmented by powerful language models. Examples include misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting” (OpenAI). The model’s ability to produce written work indistinguishable from human writing could indeed be abused by bad actors. With these safety concerns in mind, along with OpenAI’s rising operating costs, the decision to make GPT-3 proprietary should have been expected, but it is not justified. Refusing to release the source code behind the model only stifles innovation by preventing talented developers from improving on the work done by the OpenAI team. Open-sourcing advanced models is a key step that must be taken to prevent large corporate entities from having exclusive control over these powerful models. According to OpenAI’s initial mission statement, “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible” (OpenAI). OpenAI understood the threat that unequal access to AI could bring, and its original mission statement directly reflected this mentality. It understood the dangers that come with making a model proprietary, but ultimately shed this sentiment once it realized how profitable the model could be.
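The practical gap between the two access models is easy to see in code. The sketch below contrasts them: the open GPT-2 weights run locally via the Hugging Face transformers library, while GPT-3 is reachable only through OpenAI’s metered API, shown here with the openai Python client roughly as it looked during the GPT-3 beta. The “davinci” engine name and the API-key environment variable are assumptions of this illustration.

```python
# Contrast sketch: open weights vs. a closed, hosted service.
import os

# Open: GPT-2's weights and code are public, so the model runs on
# your own hardware and can be inspected, fine-tuned, or modified.
from transformers import pipeline

local_model = pipeline("text-generation", model="gpt2")
print(local_model("The future of open AI research",
                  max_new_tokens=20)[0]["generated_text"])

# Closed: GPT-3 is only a remote endpoint. No weights, no source:
# just metered completions billed per token (pre-1.0 `openai` client).
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # paid access required
response = openai.Completion.create(
    engine="davinci",  # a GPT-3 base engine name from the beta era
    prompt="The future of open AI research",
    max_tokens=20,
)
print(response.choices[0].text)
```

A developer can study, retrain, or extend the first model; with the second, improvement is possible only on OpenAI’s terms.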

Conclusion

OpenAI is no longer “open.” It has become just another privatized research organization funded and influenced by large corporations, and keeping “open” in the name is simply misleading. Making state-of-the-art AI models available to the general public is a crucial step toward ensuring that humanity as a whole can benefit from AI technologies. OpenAI’s decision to make the GPT-3 model proprietary must be reversed if the organization wants to deliver on its original promise to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate a financial return” (OpenAI).

References and Sources 

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., . . . Amodei, D. (2020, July 22). Language models are few-shot learners. Retrieved February 03, 2021, from https://arxiv.org/abs/2005.14165

Shree, P. (2020, November 10). The journey of OpenAI GPT models. Retrieved February 03, 2021, from https://medium.com/walmartglobaltech/the-journey-of-open-ai-gpt-models-32d95b7b7fb2

Gershgorn, D. (2020, August 20). GPT-3 is an amazing research tool. But OpenAI isn’t sharing the code. Retrieved February 03, 2021, from https://onezero.medium.com/gpt-3-is-an-amazing-research-tool-openai-isnt-sharing-the-code-d048ba39bbfd

Hao, K. (2020, July 14). The messy, secretive reality behind OpenAI’s bid to save the world. Retrieved February 03, 2021, from https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/
