GPT models, like GPT-3 and GPT-4, have garnered significant attention for their impressive ability to generate coherent, human-like text. However, they have also drawn criticism and objections. In this article, we will discuss the seven most common objections to GPT models and offer rebuttals to each.
Objection 1: Bias In, Bias Out
One of the most significant concerns with GPT models is the potential for them to reflect and perpetuate biases present in the data used to train them. For example, if the training data contains a disproportionate amount of text from a particular demographic group, the model may struggle to understand or generate text related to other groups. This can lead to biased outputs and reinforce existing inequalities.
Rebuttal: While it is true that GPT models can reflect and perpetuate biases present in their training data, there are approaches that can mitigate this issue. For example, researchers can use data augmentation techniques to balance the training data and ensure that the model is exposed to a diverse range of perspectives. Additionally, there are efforts underway to develop methods for identifying and mitigating bias in GPT models. These include techniques such as debiasing and adversarial training, which aim to reduce the impact of biased data on the model's output. In the best cases, models built with this kind of attention to bias may even outperform human judgment; even so, high-stakes automated vetting tools like resume screeners carry real risk and should be implemented with care.
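To make the data-balancing idea concrete, here is a minimal sketch (a hypothetical illustration, not any vendor's actual pipeline) that oversamples underrepresented groups in a labeled corpus until every group is equally represented:

```python
import random
from collections import defaultdict

def oversample_balance(examples, seed=0):
    """Balance a labeled corpus by oversampling underrepresented groups.

    `examples` is a list of (text, group) pairs; smaller groups are
    sampled with replacement until all groups match the largest one.
    A toy sketch of one data-augmentation strategy against skewed data.
    """
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for text, group in examples:
        by_group[group].append((text, group))
    target = max(len(items) for items in by_group.values())
    balanced = []
    for group, items in by_group.items():
        balanced.extend(items)
        balanced.extend(rng.choices(items, k=target - len(items)))
    return balanced

# Made-up corpus: group "X" has three examples, group "Y" only one.
corpus = [("a", "X"), ("b", "X"), ("c", "X"), ("d", "Y")]
balanced = oversample_balance(corpus)
counts = {g: sum(1 for _, gg in balanced if gg == g) for g in ("X", "Y")}
print(counts)  # both groups now contribute 3 examples
```

Real debiasing work is far more involved (group labels are rarely this clean), but the resampling intuition is the same.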
Objection 2: Lack of Transparency
Another objection to GPT models is the lack of transparency in how they arrive at their outputs. Because these models are complex and difficult to interpret, it can be challenging to understand how they generate their text. Critics argue we need a better understanding of this "black box": an explanation of how a GPT model arrived at an answer, especially when that answer is deemed harmful.
Rebuttal: While it is true that GPT models can be complex and difficult to interpret, efforts are being made to improve transparency and interpretability. For example, researchers have developed methods for visualizing the attention patterns of GPT models, which can provide insight into how the model is processing and generating text. Additionally, there are ongoing efforts to develop more transparent and interpretable AI models, including GPT models.
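To give a sense of what those "attention patterns" are, here is a minimal sketch of the scaled dot-product attention weights that visualization tools plot as heatmaps over input tokens. The vectors below are made up for illustration; real models use learned, high-dimensional embeddings across many layers and heads.

```python
import math

def attention_weights(query, keys):
    """Scaled dot-product attention for one query over a set of keys:
    dot each key with the query, scale by sqrt(dimension), then
    softmax into weights that sum to 1. These per-token weights are
    what attention-visualization tools render as heatmaps."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 2-d embeddings for three input tokens.
keys = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
query = [1.0, 0.1]
weights = attention_weights(query, keys)
print([round(w, 2) for w in weights])  # heaviest weight on the best-aligned key
```

Inspecting which tokens receive the most weight is one of the few windows researchers currently have into the model's processing.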
Objection 3: Misuse
There is a risk that GPT models can be misused or abused for malicious purposes, such as generating fake news or impersonating individuals online. Critics argue that the power these tools give bad actors to multiply their negative impact on society a hundredfold should make us halt their development.
Rebuttal: While it is true that GPT models can be misused or abused, this is not a fundamental flaw in the technology itself. Rather, it is a concern that applies to any technology that can be used for both good and bad purposes. There are ongoing efforts to identify and mitigate the misuse of GPT models, including techniques for detecting fake news and identifying instances of impersonation. If anything, this should increase the imperative for social impact organizations to learn how to fight fire with fire.
Objection 4: Dependence on Technology
Some critics argue that the use of GPT models can lead to a dependence on technology and a lack of human creativity and innovation. This may be especially true for a rising generation of writers in school or early career stages.
Rebuttal: While it is true that there is a risk of dependence on technology, this is not a flaw specific to GPT models. Rather, it is a concern that applies to any technology designed to automate or augment human tasks. Additionally, there is evidence to suggest that the use of GPT models can actually stimulate human creativity and innovation by providing new tools and approaches for generating and exploring ideas. It should also be noted that similar critiques were leveled at the calculator when it was introduced.
Objection 5: Environmental Impact
The energy consumption of GPT models can vary depending on a number of factors, such as the size of the model, the hardware used to train and run the model, and the specific task the model is being used for. However, it is generally accepted that GPT models require a significant amount of computing power and energy.
For example, OpenAI's GPT-3 model, which has 175 billion parameters, reportedly required around 3.2 gigawatt-hours of energy to train (note that energy, not power, is the right unit here), a figure that has been compared to the electricity consumption of over 400,000 homes in the United States. Training a model at this scale also produces a large amount of carbon emissions, which contribute to climate change.
Rebuttal: While it is true that the energy consumption and carbon emissions associated with GPT models are a concern, efforts are being made to develop more energy-efficient approaches to training and running these models. For example, researchers are exploring techniques like model distillation, which can reduce the energy consumption of GPT models by compressing them into smaller, more efficient models. Additionally, there are ongoing efforts to develop more sustainable computing technologies, including renewable energy sources and more efficient hardware.
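As a rough sketch of how model distillation works: a small "student" model is trained to match the temperature-softened output distribution of a large "teacher," so most of the teacher's behavior survives in a far cheaper model. The toy loss below (illustrative only, with made-up logits) is the KL divergence at the heart of that objective.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution; a higher
    temperature softens the distribution, exposing more of the
    teacher's 'dark knowledge' about near-miss answers."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions: the core training signal in knowledge distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits over a 3-token vocabulary.
teacher = [4.0, 1.0, 0.2]
student = [3.5, 1.2, 0.1]
print(round(distillation_loss(teacher, student), 4))  # small but nonzero gap
```

Minimizing this gap lets the student run on a fraction of the teacher's compute, which is exactly the energy win the rebuttal points to.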
Objection 6: Exploitative Labor Practices
Critics point to exploitative labor practices in the outsourced moderation work behind data cleanup. In January 2023, Time Magazine and other sources reported ("OpenAI Used Kenyan Workers," Jan. 2023) that OpenAI used the outsourcing firm Sama, a B Corp that hires workers in places like Kenya. The net pay for removing violent and disturbing content from the training set was reported to be around $2 per hour.
Rebuttal: As a B Corp, Sama claims to have lifted over 50,000 people out of poverty in places like Nairobi, where the national average wage is $1.29 per hour. Beyond this, other companies like Facebook have relied on Sama's services to keep user content safe. In fact, Sama stated it would cancel its Facebook (Meta) contract due to alleged union-busting tactics.
Objection 7: Generic Outputs
Initially, what appears to be magical output of original content is actually just fancy autocomplete. In a YouTube critique of AI, comedian Adam Conover argues that claims about what this AI can do are driven by companies' need to pump up their share prices rather than by reality. The risk here is that organizations assume they are getting something unique, but when they publish the content publicly, it ends up looking generic and much like everyone else's.
AI-detection tools like GPTzero.me and CauseWriter can quickly flag such text using perplexity scores.
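Perplexity here is just the exponentiated average "surprise" of each token under a language model: text the model finds highly predictable (as it often finds its own output) scores low, while quirkier human writing scores higher. A toy sketch with made-up per-token probabilities (not how any specific detector is implemented):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability a
    language model assigned to each observed token. Predictable text
    scores low; surprising text scores high."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical per-token probabilities a model might assign to two texts.
generic_text = [0.9, 0.8, 0.85, 0.9]   # every token is highly predictable
human_text   = [0.6, 0.1, 0.4, 0.05]   # more surprising word choices
print(perplexity(generic_text) < perplexity(human_text))  # True
```

Low perplexity alone is not proof of AI authorship, which is why detectors typically combine it with other signals such as burstiness.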
Rebuttal: Whole Whale has framed this as the 'Grey Jacket Problem,' and we think it is real. There is a level of learning that staff and organizations need to invest in before simply using off-the-shelf AI tools. An AI prompt architect/engineer mindset will be needed for organizations to produce genuinely unique outputs.
A Final Thought…
In conclusion, while there are certainly objections to the use of GPT models, many of these objections can be addressed through ongoing research and development. It is important to recognize that GPT models, like any technology, are not without their limitations and risks. However, by working to mitigate these risks and improve the technology, GPT models have the potential to provide significant benefits to a wide range of applications and industries.
One of the most exciting aspects of GPT models is their ability to automate and augment human tasks in a variety of domains, including language translation, content creation, and customer service. By providing new tools and approaches for generating and processing text, GPT models can help to streamline and improve many of the tasks that we currently rely on humans to perform.
However, it is important to approach the use of GPT models with a critical eye, and to be aware of the potential risks and limitations associated with the technology. By doing so, we can work to ensure that GPT models are developed and deployed in a responsible and ethical manner, and that they provide maximum benefit to society as a whole.
For nonprofit organizations in particular, GPT models can provide powerful new tools for generating and processing text, from creating compelling content to providing personalized customer service. By staying up-to-date on the latest developments in GPT technology, nonprofits can leverage these tools to achieve their mission and make a positive impact on the world. At the same time, they can work to ensure that the use of GPT models is guided by ethical principles and a commitment to social responsibility.