Lights, Ready the Smart Software, Now Hit Enter
June 11, 2025
Just a dinobaby and no AI: How horrible an approach?
I like snappy quotes. Here’s a good one from “You Are Not Prepared for This Terrifying New Wave of AI-Generated Videos.” The write up says:
I don’t mean to be alarmist, but I do think it’s time to start assuming everything you see online is fake.
I like the categorical affirmative. I like the “alarmist.” I particularly like “fake.”
The article explains:
Something happened this week that only made me more pessimistic about the future of truth on the internet. During this week’s Google I/O event, Google unveiled Veo 3, its latest AI video model. Like other competitive models out there, Veo 3 can generate highly realistic sequences, which Google showed off throughout the presentation. Sure, not great, but also, nothing really new there. But Veo 3 isn’t just capable of generating video that might trick your eye into thinking it’s real: Veo 3 can also generate audio to go alongside the video. That includes sound effects, but also dialogue—lip-synced dialogue.
If the Google-type synths are good enough and cheap, I wonder how many budding film directors will note the capabilities and think about their magnum opus on smart software dollies. Cough up a credit card, and for $250 per month, imagine what videos Google may allow you to make. My hunch is that Mother Google will block certain topics, themes, and “treatments.” (How easy would it be for a Google-type service to weaponize videos about the news, social movements, and recalcitrant advertisers?)
The write up worries gently as well, stating:
We’re in scary territory now. Today, it’s demos of musicians and streamers. Tomorrow, it’s a politician saying something they didn’t; a suspect committing the crime they’re accused of; a “reporter” feeding you lies through the “news.” I hope this is as good as the technology gets. I hope AI companies run out of training data to improve their models, and that governments take some action to regulate this technology. But seeing as the Republicans in the United States passed a bill that included a ban on state-enforced AI regulations for ten years, I’m pretty pessimistic on that latter point. In all likelihood, this tech is going to get better, with zero guardrails to ensure it advances safely. I’m left wondering how many of those politicians who voted yes on that bill watched an AI-generated video on their phone this week and thought nothing of it.
My view is that several questions may warrant some noodling by a humanoid or possibly an “ethical” smart software system; for example:
- Can AI detectors spot and flag AI-generated video? Ignoring or missing them may have interesting social knock-on effects.
- Will a Google-type outfit ignore videos that praise an advertiser whose products are problematic? (Health and medical videos? Who defines “problematic”?)
- Will firms with video generating technology self-regulate or just do what yields revenue? (Producers of adult content may have some clever ideas, and many of these professionals are willing to innovate.)
Net net: When will synth videos win an Oscar?
Stephen E Arnold, June 11, 2025