
I've tested pretty much every AI video tool that's come out in the last year.
None of them pulled me down the rabbit hole like Seedance 2.0.
ByteDance (yes, TikTok's parent company) just dropped their next-gen AI video model and the internet is losing it.
Elon Musk saw a comparison between Seedance 2.0 and Google's Genie 3 and commented: "It is happening fast."
I'll show you exactly why everyone's freaking out. But first, let's catch up on AI this week.
Tools of the Week
An AI browser extension that auto-extracts security questionnaire questions and answers them using your existing reports, policies, and tech stack, right inside your Chrome tab. Trusted by Bland, Wisprflow, Greptile and 1000+ companies for SOC 2, ISO 27001, and GDPR compliance.
An AI-native workspace designed specifically for architecture teams to streamline design workflows, documentation, and collaboration. It reflects the growing trend of vertical AI tools built for specialized professions rather than general-purpose chat interfaces.
An AI video creation and editing platform designed to help teams generate, edit, and publish videos quickly from scripts, prompts, or existing footage. It reflects the growing shift toward end-to-end AI video workspaces that combine scripting, editing, and production in a single tool.
What's the Deal?
Seedance 2.0 is the first model that made me feel like I was directing.
And it's truly multimodal.
You can feed it up to 9 images, 3 video clips, and 3 audio files, all at once.
The model treats each input as a reference for something specific: style, motion, camera work, rhythm, character appearance.
How it works
The killer feature is what ByteDance calls "All-Round Reference."
Upload a reference video, and Seedance extracts the camera movements, the pacing, and the transitions.
Then you tell it to apply all of that to completely different characters and settings.
How to use it
Seedance 2.0 is still gradually rolling out, and getting access feels like a gold rush.
I managed to get in, and people have shared plenty of other ways to get access.
Meanwhile, let’s run the tests.
The Tests
Within hours of going live, people had generated Tom Cruise fighting Brad Pitt on a rooftop, cats fighting Godzilla with a pan, and more.
The motion is smoother than anything I've seen from Sora or Runway.
Test 1: Fluids

The milk texture is super consistent.
I had to take a chai break after generating this video.
Test 2: Motion

Took the OG Kolkata tram and gave it a cyberpunk twist.
The scene, lighting, and audio were all self-directed by Seedance.
Test 3: Creativity

The prompt for this one is interesting. I wanted to push its creative boundaries and see how far it could go.
I am very impressed. It's so much better than the SOTA video models to date.
The textures and details are strikingly accurate.
Part 2 of this test
I went one step further and threw in another overcomplicated prompt about Vikram Betal.
It couldn’t keep up. Still, I don’t think the result is bad.

Test 4: Anime scenes

All weebs must have seen the Kakashi vs. Gojo videos on YouTube by now.
But where those fan edits always had lacklustre animation, it's pitch perfect here.
Another classic. The zoom-in shots, the facial expressions, and the character details are perfect.
What’s missing is the action, and honestly the feel that you’d otherwise experience when watching the show.
Perhaps this is the soulless quality of AI video generation that people keep complaining about.

Test 5: I saved Jack

An absolute cliché, but it was fun generating this.
The dialogue is right, even the modified version of it makes sense, and the facial expressions and the setting are more or less nailed down.
The output is nice but it goofs up at the end with the unreal neck twist.
My Prompts
I found some hacky prompting tricks to get these videos generated.
My take
Easily the best video model I've tested.
China keeps winning.
The AI video race just got a new frontrunner, and it came from the same company that owns TikTok.
Every tool treats video gen as a slot machine. Type prompt, pull lever, pray.
Seedance treats it like a creative brief.
You bring references, you direct, you iterate.
Also the copyright storm is already here.
OpenAI went through this exact cycle with Sora and ended up doing licensing deals with Disney.
ByteDance will face even more heat given the TikTok politics.
But the genie is out of the bottle.
Access is still messy.
Chinese phone numbers, Douyin accounts, queues thousands deep.
That won't last.
When this opens up internationally, the early adopters who've already figured out the reference-based workflow will have a serious head start.
Hit reply: Did you try Seedance? Share your wildest output with me.
Until next time,
Vaibhav 🤝
If you read till here, you might find this interesting
#AD1
Better input, better output
Voice-first prompts capture details you forget to type. Wispr Flow turns speech into clean prompts you can paste into your AI tools for faster, more useful results. Try Wispr Flow for AI.
#AD2
Learn how to make every AI investment count.
Successful AI transformation starts with deeply understanding your organization’s most critical use cases. We recommend this practical guide from You.com that walks through a proven framework to identify, prioritize, and document high-value AI opportunities.
In this AI Use Case Discovery Guide, you’ll learn how to:
Map internal workflows and customer journeys to pinpoint where AI can drive measurable ROI
Ask the right questions when it comes to AI use cases
Align cross-functional teams and stakeholders for a unified, scalable approach