In particular, the artificial intelligence couldn’t seem to get hands right: in one image it produced something more like a paw than a hand, in another a hand with four fingers instead of five.
Many considered this fortunate, because it made it possible to tell beyond doubt whether an image was real or artificially generated. Less than a year and a half later, that assessment is not merely outdated – it reads almost like a tale from ancient times, because AI applications have already advanced much, much further.
OpenAI, the company that made artificial intelligence usable for everyone with ChatGPT (its generative AI application for text) and Dall-E (its application for images), has now taken the next step. With Sora, it has developed a program that generates videos from prompts (descriptive text input).
And these are not just any videos. The images are deceptively realistic – even when they show a man sitting on clouds or a monkey playing chess.
The program has been accessible to selected users since mid-February. It is only a matter of time before Sora becomes available to everyone (for a fee). This will change video production – just as text and photo production are already changing, step by step.
In an interview with the OÖN, Johannes Brandstetter, AI researcher at the JKU, says he is convinced that the hype surrounding artificial intelligence will be followed by a real AI revolution. That is a reason to be happy, but also a cause for concern: “All of these applications are in the hands of US corporations. We have no insight into or access to the data and models.” From a European perspective, he says, this is a situation that urgently needs to change.
Mandatory labeling required
Brandstetter also advocates for mandatory labeling of AI-created content. If a new technology finds its way into people’s everyday lives, one thing is crucial: “You have to know what you’re dealing with.” The danger that a large number of fake videos will appear cannot be dismissed – especially in politically turbulent times.
Logical errors still occasionally appear in the videos. In one example, a person takes a bite of a cookie – a moment later, the cookie is whole again. Anyone who believes such errors will make the videos easy to expose should remember the days, not so long ago, when we were still laughing at AI-generated hands.

David William is a talented author who has made a name for himself in the world of writing. He is a professional author who writes on a wide range of topics, from general interest to opinion news. David is currently working as a writer at 24 hours worlds where he brings his unique perspective and in-depth research to his articles, making them both informative and engaging.