I look forward to a day when capabilities like this are trivial and boring to the average person. When my phone (locally) will be able to generate a fully voice-acted 24-episode anime series on a whim for a meme with my group chat. It's astounding what we can do now, yet it will be completely ignorable before we know it, which is equally wild.
Some of the shots are impressive, but… even among these hand-picked examples there's plenty of unnatural movement. And it seems like it was trained on the most hyperactive subset of TikTok, as it apparently can't hold a scene for more than 5 seconds.
> I wouldn’t be surprised if in 5 years all content is generated on the fly. You say something, and a 5-second video plays in response.
live mode means content stops being fixed assets and starts becoming ephemeral responses
video turns into output stream, not uploads
voice prompt is the new swipe
what they're doing isn’t pushing a format shift, they’re testing runtime content systems
on backend they’re compressing model infra with comet and tilting up llms that run cheaper and faster
that combo means they can serve gen content at scale without needing to batch or cache
if that holds, feed stops being a scroll and becomes a render loop
nothing about this is about media anymore, it’s turning the app into a low-latency model host disguised as video platform
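The "render loop" idea above can be sketched in a few lines. This is a purely hypothetical toy, assuming nothing about any real backend: `FeedLoop`, `generate_clip`, and the signal format are all made up for illustration, with a hash standing in for the video model call.

```python
import hashlib

def generate_clip(prompt: str) -> str:
    """Stand-in for a video model call: derive a fake clip id from the prompt."""
    return "clip-" + hashlib.sha1(prompt.encode()).hexdigest()[:8]

class FeedLoop:
    """A feed that renders responses on the fly instead of serving stored assets."""

    def __init__(self):
        self.preferences: list[str] = []  # learned from watch/skip signals

    def on_event(self, signal: str) -> str:
        # each swipe or voice prompt updates the preference state ...
        self.preferences.append(signal)
        # ... and the next item is generated from recent signals, not looked up
        prompt = " ".join(self.preferences[-3:])
        return generate_clip(prompt)

feed = FeedLoop()
first = feed.on_event("liked: cat videos")
second = feed.on_event("skipped: sports")
```

The point of the toy is the control flow: there is no asset store to query, only a generator keyed off mutable per-user state, which is what would make batching and caching awkward at scale.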
really cool, but where's the sound? i'd expect they'd have built in a sound model, since this looks like SOTA for video. Veo 3 is great for video, but the audio's what knocks it out of the park.
There is something in motion-heavy videos that makes me nauseous/sick to my stomach. The last time I felt this was with the first Sora release. It's not as bad as Sora, but it's there. Veo 3 didn't give me these feelings, or maybe I haven't seen its motion-heavy samples.
Does anyone else feel the same looking at motion-heavy samples of Seedance?
Has the realism of AI already caught up to that of animated CGI movies?
I assume that an expert in CGI can point out obvious flaws in these outputs. But I wonder if it is possible to fix those details by prompting it to change only specific segments.
There is also the question of how much compute/money they are spending per second of output, compared to a high-budget Hollywood CGI.
Like every AI launch demo I've ever seen, the results are unbelievably high quality, but if you take a second to read the prompts they never quite match. Here basically every single example is ignoring a portion of the prompt; sometimes the camera directions, sometimes the atmospheric description, sometimes making up very distinct elements that were not mentioned at all. People talk about "AI slop" because these models are really good when you just want "content" and you don't really care exactly what it looks like, but if you are trying to produce something specific, which you are in every real-world use case I can think of, it is very frustrating and often impossible to get there.
Decent 1080p quality. Not bluray level, but getting close. Definitely ahead of every other video generator.
Video production just got a lot cheaper and requires very few skills. This is basically destroying the creative video production industry (ads, product videography, youtube content of all kinds) and probably VFX industry as well.
People are already far too easily convinced of conspiracy theories. Shit like Pizzagate or whatever is only going to get more common once bad actors start saying, "and look, here's the video proof!"
And we've already got Tiktok and Youtube Shorts just pumping the dopamine centers in the brain for short form content. Generating shit you like dynamically is going to be an addictive nightmare. The moment it gets monetized we're going to see the equivalent of slot machines pumped at us from every channel -- flashing lights and emotional tugs to get us to part with our valuable money or attention.
And that's to say nothing of the impact these tools have on artists and creative people or the costs to train and deploy these tools.
We're already seeing it today. The amount of 'footage' of LA right now showing some sort of war zone that is clearly AI-generated, but being consumed as if it were real, is staggering.
I feel so bad for the next generation, who will never have watched man-made movies; they won't be able to tell whether something is junk because there will be no baseline.
Only light-skinned people in the video examples.
Ethnic diversity and accuracy used to be a problem with past models. I wonder how this model handles prompts probing at that.
Seedance 1.0
(seed.bytedance.com) | 214 points by matallo | 12 June 2025 | 113 comments
Comments
As you scroll, it learns what you like and generates more videos.