Ready-to-use Colab Notebook & Physical Causality Performance Tests
Hi everyone, and thanks to the Netflix team for open-sourcing this amazing model!
I noticed that setting up the environment and getting the quadmasks recognized properly can be a bit tricky. To help the community, I've created a repository containing a fully working, ready-to-run Google Colab Notebook for the VOID model.
I also ran several performance tests focusing specifically on the model's physical causal reasoning capabilities (e.g., removing large obstacles from dynamic scenes, falling objects, and fluid dynamics).
You can check out the inference results and use the Colab notebook here:
🔗 https://github.com/ErenAta16/Netflix-Void-Model-Performance-Tests
The repo includes:
A streamlined Colab pipeline (bypassing common file-naming and setup errors).
Inference results for "Ducky Float" (fluid dynamic recalculation).
Inference results for "The Lime" (gravity/falling-object interaction).
A custom real-world test: removing a massive Ice Cream Van blocking a walkway, to evaluate background reconstruction.
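For anyone hitting the file-naming errors the notebook works around: the core of the fix is just making sure each quadmask file shares the stem of its source frame. Here is a minimal illustrative helper; the directory layout, `_quadmask` suffix, and PNG extension are my assumptions, not necessarily the repo's actual convention.

```python
# Hypothetical helper illustrating the kind of file-naming fix the Colab
# pipeline automates: rename each mask so it matches its frame's stem plus
# a suffix. Names/suffix here are assumptions, not VOID's documented spec.
from pathlib import Path


def normalize_quadmask_names(frames_dir: str, masks_dir: str,
                             suffix: str = "_quadmask") -> list[tuple[str, str]]:
    """Pair frames and masks by sorted order and rename masks to match.

    Returns a list of (old_name, new_name) tuples for the files renamed.
    Assumes one mask per frame; extra files on either side are ignored.
    """
    frames = sorted(Path(frames_dir).glob("*.png"))
    masks = sorted(Path(masks_dir).glob("*.png"))
    renamed = []
    for frame, mask in zip(frames, masks):
        target = mask.with_name(f"{frame.stem}{suffix}.png")
        if mask.name != target.name:
            mask.rename(target)
            renamed.append((mask.name, target.name))
    return renamed
```

So `frames/frame_0001.png` plus an arbitrarily named mask would yield `masks/frame_0001_quadmask.png`. Adjust the suffix and glob pattern to whatever your checkpoint actually expects.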
Hope this helps anyone looking to quickly test the model. Feedback and contributions are welcome!
What's your review of the model? Do you think it's a viable VFX tool, or just a piece of crap Netflix dumped because it has no value?
Hey, thanks for the question! It's a fair point, but after testing VOID, I can honestly say it's far from a piece of crap. It brings real value, especially in physical causal reasoning.
If you check the tests in the repo, it reconstructs environments surprisingly well—like maintaining water ripples after an object is removed (Ducky test) or handling contact points and underlying background structures (Lime and Ice Cream Van tests). It's not a magic 1-click VFX tool yet, but its interaction-aware quadmask approach makes it a solid foundation.
However, speaking from my professional background in data science and model fine-tuning, its biggest bottleneck right now is the dataset. The training data feels a bit thin, which directly limits its performance ceiling in complex scenarios.
To address this, I'm currently preparing a fine-tuning pipeline: we are building a new dataset with a completely different methodology to push its physical reasoning capabilities further. I've already shared the first prototype of this approach with the VOID team and am waiting for their feedback.
So, it's definitely a viable tool with a lot of room to grow.