The introduction of OpenAI’s latest generative tool, Sora, has ignited lively debate across the technology world in recent days, drawing excitement from supporters and apprehension from detractors.
Sora, OpenAI’s new text-to-video tool, represents a significant step toward artificial general intelligence (AGI). The model can generate convincing videos up to one minute long from still images or short text prompts. It has a deep understanding of language and can create compelling characters that express vivid emotions. However, it may struggle to accurately simulate “the physics of a complex scene” and may not always distinguish between cause and effect.
OpenAI is not the first to enter this space: Google and smaller companies such as Runway already offer AI tools that turn text into video. For now, access to Sora has been limited to a small group of artists and “red teamers,” who are probing the model for problems such as bias, hateful content, and misinformation. If OpenAI decides to release the model publicly, it also faces the risk of copyright-infringement lawsuits.
The company has already been sued several times over the data used to train its ChatGPT language model and its DALL-E image model. In the most prominent case, The New York Times is suing both OpenAI and its partner, Microsoft, over what it alleges is improper use of its news content. OpenAI says the model was trained only on licensed and publicly available videos, and that it has a built-in system that rejects any text prompts that violate its rules.
Text-to-video technology poses harms beyond the spread of misinformation. As one user on the social network X put it, “The future of porn just changed forever.” Beyond these immediate effects, OpenAI frames Sora as a major step toward AGI, one that will help build models capable of understanding and simulating the real world.