On February 16, OpenAI unveiled its Sora model, which can generate realistic videos from a few short text prompts.
One solution currently on the table is to require text-to-video AI to carry a watermark, either as a visible mark in the video itself or as a signature embedded in the video's metadata; the drawback is that both kinds of mark can be removed, as sketched in the code below.
Sora has also shaken the film and television industry. After seeing what the model can do, Hollywood actors, producers and studio owners quickly put a series of studio expansion plans on hold, including the design and construction of 12 sound stages.
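To illustrate the metadata approach mentioned above, the sketch below uses ffmpeg to stamp a provenance note into a video container and then strip it out again, which is exactly the weakness critics point to. The file names and label text are placeholders, and this is not any vendor's actual watermarking scheme; real provenance standards such as C2PA rely on cryptographic signatures rather than a plain comment field.

```python
import subprocess

# Sketch: write a provenance note into an MP4's container metadata with ffmpeg.
# "generated.mp4" and the comment text are illustrative placeholders.
subprocess.run(
    [
        "ffmpeg", "-i", "generated.mp4",
        "-metadata", "comment=AI-generated: text-to-video model",
        "-codec", "copy",  # copy the streams unchanged; only container metadata is added
        "tagged.mp4",
    ],
    check=True,
)

# The weakness noted in the article: the same tool removes the tag just as easily.
subprocess.run(
    [
        "ffmpeg", "-i", "tagged.mp4",
        "-map_metadata", "-1",  # drop all global metadata
        "-codec", "copy",
        "untagged.mp4",
    ],
    check=True,
)
```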
One of the most important uses of text-to-video models is communicating specialist research findings to a lay audience, since the models can visualize fairly complex concepts.
Another potential use is in healthcare, where text-to-video AI could stand in for human doctors in conversations with patients.
Text-to-video AI models could also help researchers work through massive amounts of data, such as the output of large scientific projects like CERN, the European particle-physics laboratory near Geneva, Switzerland.
Generative AI can handle some routine research tasks, such as screening code, as well as more complex work, such as taking in data and making predictions.
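In practice, the "input data and ask for predictions" workflow can be as simple as passing a small table to a general-purpose model through an API. The sketch below uses the OpenAI Python SDK with an invented toy dataset, model choice and prompt wording; it is an illustration of the pattern, not a prescribed research method.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A toy data table; in a real project this would come from an experiment or simulation.
measurements = """
temperature_C,reaction_yield_pct
20,41
30,48
40,57
50,63
"""

# Ask the model to extrapolate from the data and explain itself.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a careful data analyst."},
        {
            "role": "user",
            "content": (
                f"Given these measurements:\n{measurements}\n"
                "Predict the reaction yield at 60 C and briefly explain your reasoning."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```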