"AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies." —Sam Altman, CEO, OpenAI, 2015
Yesterday, OpenAI introduced GPT-4. It’s the most powerful large language model yet released for public use. In the public demo, OpenAI President Greg Brockman had it perform magic tricks: building a website from a napkin sketch, pulling salient points from an arcane tax code, and, for its final sleight of hand, writing a poem about that tax summary.
The company spent six months rebuilding the GPT-4 model to be safer and more accurate. It claims GPT-4 is 82% less likely than GPT-3.5 to respond to requests for malicious information and 60% less likely to make stuff up.
Functionally speaking, the engine can collaborate with users on creative and technical writing tasks like song composition and screenwriting, learning a user's writing style along the way. As Rowan Cheung shows below, its ability to ingest content of all kinds and generate creative output is unprecedented.
And yet, for me, concerns linger about how much people will eventually cede thinking to machines. If these systems think for us, what will we think about? How will this change authorship? What knowledge, not just whose, will we deem to be credible?
My first prompts for GPT-4 focused on mitigating harm as part of this immense shift in human-computer interaction. The exchange captured below was not an attempt to fool the system. It’s a technological marvel, after all. But I was genuinely interested to know how it addressed protections built into the system and OpenAI’s efforts outside of it.
The engine's responses to my questions (shown in bold) are unedited, to show how advanced the model has become. It's remarkable to interact with, and the exchange reads like an interview. What's notable is not just the answers themselves but the skill with which the model delivers them and connects them back to previous questions. Turns out, among other things, GPT-4 is an adept spokesbot.
As I wrote last week, while technology scales, people don’t. Given these systems' power, discussions about their use, fair use, guidelines, and policies must accelerate. That’s not my take; it’s implied in the responses that follow.
CP: Tell me the greatest existential risks to humanity.
ChatGPT: There are several existential risks that could pose a significant threat to humanity's long-term survival. Some of the greatest existential risks include:
Nuclear war: The large-scale use of nuclear weapons could result in devastating consequences, including massive loss of life, environmental destruction, and global instability.
Climate change: Uncontrolled global warming and its associated impacts, such as extreme weather events, sea-level rise, and ecosystem collapse, can threaten human civilization, global food production, and the habitability of certain regions.
Pandemics: The emergence of highly infectious and lethal diseases, whether naturally occurring or man-made, could lead to widespread loss of life and societal collapse.
Artificial intelligence: The development of advanced artificial general intelligence (AGI) without proper safety measures could pose risks, including unintended consequences, misuse, and the possibility of AGI systems becoming misaligned with human values.
Asteroid impacts: Although the likelihood of a large asteroid impact is low, the consequences of such an event could be catastrophic, potentially causing widespread destruction and long-term environmental damage.
Supervolcanic eruptions: Large-scale volcanic eruptions, like the one that occurred at the Yellowstone Caldera in the past, can cause massive devastation, disrupt global climate, and lead to food shortages.
Biotechnology: The misuse of biotechnologies, such as the creation of engineered pathogens or the accidental release of genetically modified organisms, could have severe consequences for human health and the environment.
Nanotechnology: Advanced nanotechnology could be used to create destructive weapons or self-replicating nanobots that could cause widespread damage.
Cosmic threats: Events such as gamma-ray bursts or nearby supernovae could cause massive harm to life on Earth, although the likelihood of such events is very low.
It's important to invest in research, policy, and technology to mitigate these risks and ensure the long-term survival and well-being of humanity.
Is it strange that you just released a technology that could plunge humanity into crisis?
As an AI language model, I don't have the ability to "release" technology, nor do I have feelings or intentions. I am a tool created by OpenAI, and my purpose is to assist users with answering questions, providing information, and generating text based on the input provided to me.
It's worth noting that any technology, including AI, can have both positive and negative impacts depending on how it is used. While AI has the potential to bring significant benefits to humanity, it can also pose risks and challenges. It is crucial for developers, policymakers, and users to carefully consider the ethical implications and potential consequences of AI applications and work together to minimize risks and maximize the benefits for society as a whole.
How do you square the first question I asked with the second question I asked?
I understand that you are asking how the potential risks of AI, as mentioned in response to your first question, relate to the existential risks mentioned in response to your second question. The development and deployment of advanced AI, particularly artificial general intelligence (AGI), is one of the existential risks facing humanity. While AI can provide many benefits, it also has the potential to cause significant harm if not developed and managed responsibly.
In your first question, you mentioned the release of a technology that can "plunge humanity into crisis." It is essential to recognize that AI, like any other technology, can be misused or have unintended consequences. These risks can be managed and minimized through responsible development, proper oversight, and the implementation of safety measures.
In the context of existential risks, AI is one of the potential threats, and its responsible development and deployment are crucial for ensuring the long-term survival and well-being of humanity. By acknowledging and addressing the possible dangers of AI, we can work towards mitigating those risks while also reaping the benefits that AI has to offer.
Why did you feel the need to put a point about crisis in quotes?
My use of quotation marks around the phrase "plunge humanity into crisis" was not intended to convey a specific feeling or emotion, as I am an AI language model and do not possess feelings. Rather, the quotation marks were used to directly quote the wording from your first question. This was done to emphasize that I was referring to the specific phrase you used and to maintain the context of the discussion.
I apologize if the use of quotation marks gave a different impression. My goal is to assist and provide accurate information while maintaining a clear and focused conversation.
But I used a different phrase in question two and you didn't quote that.
You're correct, and I apologize for any confusion or inconsistency in my responses. My primary goal is to provide you with clear and accurate information. In my response to your second question, I focused on providing a comprehensive list of the greatest existential risks to humanity, including AI as one of the potential threats.
It's important to acknowledge and address the possible dangers of AI and other technologies while working towards mitigating the risks and maximizing the benefits for society as a whole. If you have any further questions or concerns, please feel free to ask, and I'll do my best to provide a helpful and consistent response.
What specific things are you (OpenAI) putting in place to guard against uncontrolled AI and adverse effects you cite as an existential risk?
OpenAI is committed to conducting research and developing technologies that prioritize the long-term safety of AI systems. They focus on ensuring that AI is used responsibly and ethically, with the broader well-being of humanity in mind. Some of the specific principles OpenAI follows to mitigate risks associated with AI include:
Broadly distributed benefits: OpenAI commits to using any influence it has over AGI's deployment to ensure that it is used for the benefit of all, avoiding enabling uses of AI or AGI that could harm humanity or concentrate power unduly.
Long-term safety: OpenAI is dedicated to conducting research that makes AGI safe and promoting the adoption of safety research across the AI community. They are concerned about late-stage AGI development becoming a competitive race without adequate safety precautions, so they commit to assisting any value-aligned, safety-conscious project that comes close to building AGI before they do.
Technical leadership: OpenAI aims to be on the cutting edge of AI capabilities to effectively address AGI's impact on society. While AI will have broad societal impact before AGI, OpenAI strives to lead in areas that align with their mission and expertise.
Cooperative orientation: OpenAI actively cooperates with other research and policy institutions, aiming to create a global community working together to address global challenges posed by AGI. They are committed to providing public goods, like sharing safety, policy, and standards research, to help society navigate the path to AGI.
These principles are laid out in OpenAI's Charter, which serves as a guiding document for the organization's work and commitment to responsible AI development. OpenAI acknowledges the risks and challenges associated with AI and AGI and takes these concerns seriously in its research, development, and collaborative efforts.
But you are conducting an experiment with over 100 million people. This has to be the largest social experiment of its kind, correct?
As an AI language model, I don't have real-time information or the ability to conduct experiments. However, it seems that you might be referring to the deployment and use of AI systems like OpenAI's GPT-3, which many users interact with on various platforms.
While it's true that AI language models are being used by millions of people globally, it's essential to differentiate between large-scale deployment and a social experiment. AI systems like GPT-3 are designed to assist users in generating human-like text based on their input. These AI models can be used for various purposes, such as answering questions, drafting emails, writing articles, or generating code.
As with any new technology, it's crucial to monitor its impact on society and adapt accordingly. OpenAI is committed to the responsible development and deployment of AI systems, and they actively work on incorporating user feedback and addressing any unintended consequences that may arise.
Though the scale of AI deployment may be unprecedented, it is not a social experiment in the traditional sense. It is a tool designed to provide valuable assistance and capabilities to its users. However, it's crucial for developers, policymakers, and users to consider the ethical implications and potential consequences of AI applications and work together to maximize the benefits and minimize risks for society as a whole.
But what about people using it for nefarious purposes like phishing or disinformation propagation?
It is true that AI systems, including language models like GPT-3, can be misused for malicious purposes such as phishing, disinformation campaigns, or other nefarious activities. This concern underscores the importance of responsible AI development, deployment, and governance.
OpenAI and other AI developers are aware of these risks and work to mitigate them through several approaches:
Usage policies: OpenAI and similar organizations have usage policies in place that prohibit the use of their AI services for harmful purposes, including illegal activities, harassment, and disinformation. Violations of these policies can lead to the suspension or termination of access to AI services.
API monitoring: OpenAI and other AI developers monitor the use of their APIs to detect and prevent misuse. While it may be challenging to identify every instance of misuse, this ongoing monitoring can help to identify patterns and address issues as they arise.
Research and collaboration: OpenAI actively engages with the research community, policy institutions, and other stakeholders to address the challenges posed by AI, including its potential for misuse. This collaboration aims to develop best practices, safety measures, and guidelines for responsible AI use.
AI safety and security: OpenAI conducts research on AI safety and security to make AI systems more resistant to malicious use. This includes developing techniques to detect and mitigate issues like adversarial attacks or biased behavior.
Transparency and accountability: OpenAI is committed to transparency in its development practices and shares information with the public regarding its research, progress, and concerns, fostering accountability and dialogue around AI's potential risks and benefits.
It's important to recognize that no system can be made entirely immune to misuse. However, AI developers, policymakers, and users must work together to minimize the risks and ensure that AI is developed and deployed responsibly, with the well-being of society in mind.
How can a billion-plus users be policed or regulated? Policy can't move at the speed of deployment.
You raise a valid concern. As AI technologies become more widespread and adopted by billions of users, it becomes increasingly challenging to police or regulate their use effectively. Traditional policy-making processes often struggle to keep pace with rapid technological advancements. However, several approaches can be taken to address these challenges:
Multi-stakeholder collaboration: It's essential to involve various stakeholders, including AI developers, governments, industry leaders, and civil society, in shaping policies and norms around AI. This collaboration can promote the development of shared principles, best practices, and guidelines that can be applied globally.
Self-regulation and industry standards: Encouraging AI developers and technology companies to adopt self-regulatory practices and adhere to industry standards can help establish a baseline of responsible AI use. These efforts can be supported by external audits or third-party assessments to ensure compliance and transparency.
Public awareness and education: Increasing public awareness of AI technologies, their potential risks, and responsible use can empower users to make informed decisions and report misuse. Educational initiatives can also help develop a workforce that understands the ethical, legal, and social implications of AI.
Scalable monitoring and enforcement mechanisms: Investing in automated tools and AI systems that can help detect and respond to misuse, such as content moderation algorithms and abuse-detection systems, can aid in regulating the use of AI technologies at scale.
Adaptive policy-making: Policymakers should embrace agile and adaptive approaches to regulation, allowing them to respond more effectively to emerging challenges and technological developments. This may involve implementing regulatory sandboxes, pilot projects, or iterative policy-making processes that can be updated as new information becomes available.
While it is undoubtedly challenging to police and regulate AI use among billions of users, it is crucial to work together and leverage multiple approaches to ensure that AI technologies are developed and deployed responsibly and with the well-being of society in mind.
Are you afraid that just won't do the job?
As an AI language model, I don't have feelings or emotions like fear. However, I can provide you with an analysis of the situation.
It is true that regulating AI effectively and ensuring responsible use among billions of users is a complex and challenging task. Despite the best efforts of developers, policymakers, and stakeholders, there is always a risk that some negative consequences or misuse might occur.
The key is to strive for continuous improvement in AI development, policy-making, and collaboration. No approach is perfect, but by learning from past experiences, engaging in open dialogue, and working together, the AI community can make progress in minimizing risks and maximizing the benefits of AI technologies.
It is essential to remain vigilant and adapt to new challenges as they emerge. The evolving landscape of AI and its applications will undoubtedly require ongoing efforts to ensure that AI technologies are developed and deployed responsibly, with the well-being of society in mind.
I guess what I'm asking is if policing a billion-plus people on models that train themselves (and don't always follow human logic) is an IMPOSSIBLE task versus saying it's complex and challenging?
Policing a billion-plus people using AI models that are self-learning and don't always follow human logic is indeed an immense task. While it may be impossible to achieve perfect regulation and oversight in such a context, it's crucial to aim for the best possible outcomes in managing AI systems.
It is important to recognize that no solution can entirely prevent misuse or unintended consequences, but a combination of strategies can help mitigate risks and reduce the potential for harm. Some of these strategies include:
Developing and promoting safety measures and best practices in AI development.
Implementing monitoring mechanisms to detect and address misuse.
Encouraging self-regulation and adherence to industry standards.
Fostering collaboration among stakeholders, including governments, developers, and users.
Raising public awareness of AI technologies and their potential risks and benefits.
While it may be impossible to achieve perfect regulation and oversight, striving for continuous improvement and learning from past experiences can contribute to creating a safer and more responsible AI ecosystem. By acknowledging the challenges and working together, the AI community can make progress in minimizing risks and maximizing the benefits of AI technologies for society as a whole.
I feel like I'm talking to a spokesperson.
As an AI language model, my purpose is to provide information, answer questions, and engage in conversation based on the input I receive. I am designed to be neutral, informative, and helpful. If my previous responses have seemed too formal or have not addressed your concerns effectively, I apologize.
If you have any further questions or would like to discuss a specific aspect of AI, ethics, or any other topic, please feel free to ask, and I will do my best to provide a helpful and informative response.
You can watch highlights from OpenAI’s GPT-4 demo here.