OpenAI’s Approach to AI Safety

How will AI safety practices evolve now that OpenAI CEO Sam Altman has testified before the US Senate about the concerns around the new technology?



Image by Author

 

You may or may not have seen the videos of OpenAI CEO Sam Altman testifying before a US Senate committee on Tuesday, May 16th. If you haven’t, Sam Altman called on US lawmakers to regulate artificial intelligence (AI), testifying about the concerns and possible pitfalls of the new technology.

Since the release of ChatGPT, the market has been flooded with large language models and other AI models. Over the past few months, governments have been holding conversations about regulating AI and keeping it safe for society. The EU has been pushing its AI Act, and other regions are following suit.

Sam Altman has consistently addressed the ethical issues and concerns around the use of AI and has pushed for more regulation. At the hearing, he said:

 

"I think if this technology goes wrong, it can go quite wrong...we want to be vocal about that. We want to work with the government to prevent that from happening."

 

OpenAI’s Safety Commitment

 

OpenAI has stood by its commitment to keep AI safe and beneficial. They recognise that their tools, such as ChatGPT, have improved productivity, creativity, and the overall working experience for many. However, safety remains one of their major priorities.

So how exactly is OpenAI ensuring that their AI models are safe?

 

Rigorous Testing

 

Before any AI system is released, OpenAI puts it through rigorous testing, bringing in external experts and continuously looking for ways to improve the system. They use techniques such as reinforcement learning from human feedback (RLHF) to improve the model’s behaviour, which in turn helps them build better safety and monitoring systems.
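
To make the RLHF idea a little more concrete, here is a minimal, hypothetical sketch of the reward-modelling step, written in PyTorch: a small model is trained to score a human-preferred response above a rejected one using a pairwise ranking loss. This is a toy illustration with made-up embeddings, not OpenAI’s actual implementation.

```python
# Toy sketch of RLHF's reward-modelling step -- illustrative only,
# not OpenAI's implementation.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a (toy) response embedding to a scalar reward score."""
    def __init__(self, embed_dim: int = 16):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scorer(x).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for embeddings of the response a human labeller preferred
# and the one they rejected (in practice these come from the LLM itself).
chosen = torch.randn(8, 16)
rejected = torch.randn(8, 16)

for step in range(100):
    # Pairwise ranking loss: push the chosen response's score above the rejected one's.
    loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# In a full RLHF pipeline, the trained reward model's scores would then drive a
# policy-optimisation step (commonly PPO) that nudges the language model towards
# responses humans prefer.
```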

OpenAI spent more than six months ensuring their latest model, GPT-4, was safe before releasing it to the public. 

 

Real-World Use

 

There’s no better test than real-world use. It’s all well and good creating a new system in the lab and trying to prevent every possible risk, but you can’t account for all of them there; eventually the system has to be released to the public for real-world use.

Unfortunately, with AI systems you cannot limit, or even predict, how the public will use them - to their benefit or to abuse them. OpenAI releases AI systems with several safeguards in place, and as they broaden the group of people who can access their systems, they continue to make improvements.

The API available to developers has also allowed OpenAI to monitor for potential misuse and to build mitigations based on what it finds. OpenAI believes that society should have a significant say in how AI continues to develop.
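
As a rough illustration of what misuse monitoring can look like on the developer side, here is a short sketch that screens user input with OpenAI’s publicly documented moderation endpoint before passing it on to a model. It assumes the openai Python SDK (v1.x) and an OPENAI_API_KEY environment variable; it is a simplified example, not a description of OpenAI’s internal tooling.

```python
# Illustrative sketch: screening user input with OpenAI's moderation endpoint
# before sending it to a model. Assumes the openai Python SDK (v1.x) and an
# OPENAI_API_KEY environment variable; details may differ from OpenAI's own tooling.
from openai import OpenAI

client = OpenAI()

def is_allowed(text: str) -> bool:
    """Return False if the moderation endpoint flags the text."""
    response = client.moderations.create(input=text)
    result = response.results[0]
    if result.flagged:
        # Log what was flagged so mitigations can be improved over time.
        print("Blocked input; flagged categories:", result.categories)
        return False
    return True

if is_allowed("How do I bake a loaf of bread?"):
    print("Input passed moderation; safe to send to the model.")
```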

 

Protecting Children

 

One of the major focuses of AI safety for OpenAI is protecting children. OpenAI requires users to be 18+, or 13+ with parental consent, and is currently looking into verification options. They have stated that they do not permit their technology to be used to generate any form of hateful, violent, or adult content. 

They have also put more tools and methods in place to protect children. For example, when users attempt to upload known Child Sexual Abuse Material to their image tools, OpenAI uses Thorn’s Safer to detect, review, and report it to the National Center for Missing and Exploited Children.

To ensure children benefit from tools such as ChatGPT, OpenAI has teamed up with Khan Academy to build an AI-powered assistant that acts as a virtual tutor for students and a classroom assistant for teachers.

 

Privacy

 

Large language models are trained on a variety of publicly available sources, which some people believe raises privacy concerns. OpenAI has stated that:

 

“We don’t use data for selling our services, advertising, or building profiles of people—we use data to make our models more helpful for people.”

 

Their aim is for tools like ChatGPT to learn about the world, not about private individuals. To support this, OpenAI removes personal information from the training dataset where feasible, fine-tunes its models to decline requests for individuals’ personal information, and responds to requests from individuals to have their personal information deleted from OpenAI’s systems.
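
As a purely illustrative sketch of what “removing personal information” can mean in practice, here is a toy Python scrubber that redacts email addresses and phone-number-like strings with regular expressions. OpenAI’s actual data pipeline is not public and is certainly far more sophisticated than this.

```python
# Illustrative only: a toy regex-based scrubber showing the general idea of
# removing personal information from training text. Not OpenAI's pipeline.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_personal_info(text: str) -> str:
    """Replace email addresses and phone-number-like strings with placeholders."""
    text = EMAIL_RE.sub("[EMAIL REMOVED]", text)
    text = PHONE_RE.sub("[PHONE REMOVED]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
print(scrub_personal_info(sample))
# -> "Contact Jane at [EMAIL REMOVED] or [PHONE REMOVED]."
```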

 

Accuracy

 

User feedback on tools such as ChatGPT allows OpenAI to flag outputs that are factually incorrect and use that feedback as a main source of data for improving accuracy. Improving factual accuracy is high on their list, and GPT-4 is 40% more likely to produce factual content than GPT-3.5.

 

Wrapping it up

 

With OpenAI laying out its approach to AI safety, and CEO Sam Altman addressing the potential issues with AI systems and urging the government to put regulations in place, this is a start to addressing AI safety concerns.

Getting there will take more time, resources, and learning from the most capable models on the market. OpenAI waited over six months to deploy GPT-4; however, they have stated that ensuring safety can take even longer.

What do you think will happen from now on?

If you would like to watch the hearing of OpenAI’s Sam Altman from Tuesday, May 16th, you can do so here: ChatGPT Chief Sam Altman Testifies Before Congress on AI.

 
 
Nisha Arya is a Data Scientist, Freelance Technical Writer and Community Manager at KDnuggets. She is particularly interested in providing Data Science career advice, tutorials, and theory-based knowledge around Data Science. She also wishes to explore the different ways Artificial Intelligence can benefit the longevity of human life. A keen learner, seeking to broaden her tech knowledge and writing skills, whilst helping guide others.