TechFlow News: On February 10, according to JIN10 Data, tech blogger Tim reported that ByteDance's AI video-generation model Seedance 2.0 had been trained extensively on videos from his channel, "Film Hurricane," enabling it to generate videos featuring Tim's voice even without explicit prompting. In response, the Seedance 2.0 team announced it would temporarily suspend support for using real-person footage as subject reference material.

A reporter learned that an operations staff member posted the following in the "Ji Meng" creator community: "During its closed beta period, Seedance 2.0 has garnered far more attention than anticipated — thank you all for your valuable feedback. To ensure a healthy and sustainable creative environment, we are urgently optimizing the product based on user input. Currently, uploading real-person images or videos as subject references is temporarily unsupported." The post added that the platform deeply understands that respecting others' rights defines the boundary of creativity, and that after these adjustments, Seedance 2.0 will officially launch in a more refined form.

As of press time, ByteDance's official team had not issued any public response.