OpenAI Usage Policy Now Allows Military Use of Its AI

Jan 15, 2024


OpenAI has quietly updated its usage policy, which now allows the use of its AI applications for “military and warfare” purposes. The change was first noted in a report by The Intercept, which pointed out that OpenAI’s previous usage policy prohibited using its AI applications for any activity that could cause harm, including “weapons development” and “military and warfare.”

The company updated its usage policy page on January 10. The policy still prohibits using its AI to harm oneself or others, as well as for “weapons development,” but the wording related to “military and warfare” has been removed.

Some experts are worried about this change to OpenAI’s usage policy, especially as AI is already being used by the Israeli military to select bombing targets in Palestinian territory. Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst, told The Intercept: “The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement.”

OpenAI on the Usage Policy Change

An OpenAI spokesperson said the company wanted to create a set of universal principles that are easy to remember and apply, especially as its tools are now widely used by everyday users who can also create their own customized versions of ChatGPT, called “GPTs.” OpenAI launched its GPT Store, which lets users share and explore different GPTs, on January 10.

Another OpenAI spokesperson told Business Insider that some national security use cases are consistent with the company’s vision and that the policy change was partly motivated by them. For instance, OpenAI is already working with the Defense Advanced Research Projects Agency (DARPA) to develop new cybersecurity tools to protect open-source software essential for critical infrastructure and industry. The spokesperson added, “It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.”

Conclusion

This is an interesting change to the OpenAI usage policy. It is common for tech companies to revise their policies, and OpenAI also recently launched the GPT Store, so it needed to update its terms of use anyway. The usage policy still states that you cannot use its AI to harm others or to develop weapons; only the phrase “military and warfare” has been removed. The military also engages in activities such as research, infrastructure, and documentation that go well beyond warfare and weapons development, so removing the word “military” from the policy makes sense.

Read More

AI Regulation Body Should be imposed just like Nuclear Energy: Bill Gates’ Podcast with Sam Altman

What is the hot selling AI Powered device Rabbit R1
