Rule: OpenAI Video to Video (AI Interpolator OpenAI)
Base data:
Summary:
The OpenAI Video to Video rule takes any video under 20 minutes together with a prompt describing which parts should be cut out.
NOTE that this is experimental! It works by splitting the video into keyframes, placing them on a grid with timestamps, and using Whisper to transcribe the audio with timestamps. It then uses both of those inputs as context for your prompt. It is as cheap as it can be, but it might still cost you noticeable money.
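The two preprocessing steps described above can be sketched as follows. This is an illustrative outline, not the module's actual code: the exact FFmpeg invocation and grid layout are internal to AI Interpolator OpenAI, and the function names here are hypothetical. The `select='eq(pict_type,I)'` filter is a common way to extract only keyframes, and the OpenAI transcriptions endpoint with `response_format="verbose_json"` returns segment timestamps.

```python
def keyframe_command(video_path: str, out_pattern: str) -> list[str]:
    """Build an FFmpeg command that extracts only keyframes (I-frames).

    Hypothetical sketch of the keyframe-extraction step; run the result
    with subprocess.run() on a server that has FFmpeg installed.
    """
    return [
        "ffmpeg", "-i", video_path,
        # keep only intra-coded (key) frames
        "-vf", "select='eq(pict_type,I)'",
        # emit one image per selected frame, not per source frame
        "-vsync", "vfr",
        out_pattern,  # e.g. "frames/%04d.jpg"
    ]


def transcribe_with_timestamps(audio_path: str):
    """Transcribe audio with segment timestamps via the OpenAI API.

    Requires the `openai` package and an OPENAI_API_KEY in the environment.
    """
    from openai import OpenAI
    client = OpenAI()
    with open(audio_path, "rb") as f:
        return client.audio.transcriptions.create(
            model="whisper-1",
            file=f,
            response_format="verbose_json",  # includes per-segment timestamps
        )
```

Both outputs, the timestamped frame grid and the timestamped transcript, are then passed to the model as context alongside your prompt.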
Module needed:
AI Interpolator OpenAI
Field types to populate:
- File
Base Fields types to use as context:
- File
Extra Requirements:
You need a paid OpenAI account with API access.
Requires FFmpeg to be installed on the server.
Requires the file field to be configured to allow video formats that FFmpeg supports.
Prompting tips:
Note that the model only sees the first image of a scene cut or the next keyframe. This means it is not possible to prompt for certain motions that only show up in a single frame in between.
You can combine multiple instructions in one prompt, for example: "First cut out the quote where Bill Gates talks about the 486 processor and then the video of Steve Ballmer dancing".
You can also ask for either a single video or multiple clips. If you ask for clips, the target video field has to be configured as a multi-value field.
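Once the model has answered with start and end timestamps, the clips themselves can be produced with FFmpeg. The following is a hedged sketch of how such a cut could be made, not the module's actual implementation; `clip_command` is a hypothetical helper. Using `-ss` before `-i` with stream copy (`-c copy`) is fast because nothing is re-encoded, and it cuts at keyframe boundaries, which matches the keyframe granularity noted above.

```python
def clip_command(src: str, start: float, end: float, dest: str) -> list[str]:
    """Build an FFmpeg command that copies out one clip from src.

    start/end are seconds; the cut lands on the nearest keyframe
    because the streams are copied rather than re-encoded.
    """
    return [
        "ffmpeg",
        "-ss", f"{start:.3f}",        # seek to the clip start
        "-i", src,
        "-t", f"{end - start:.3f}",   # clip duration
        "-c", "copy",                 # stream copy: fast, no re-encode
        dest,
    ]


# One multi-value file field item per requested clip (illustrative data):
clips = [(12.0, 18.5), (42.25, 51.0)]
commands = [
    clip_command("input.mp4", start, end, f"clip_{i}.mp4")
    for i, (start, end) in enumerate(clips)
]
```

Each command in `commands` could then be run with `subprocess.run()`, producing one file per item of the multi-value field.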
Extra Settings:
Cutting Prompt
This prompt is used instead of the normal prompt. Install the Token module to make it dynamic.
Extra Advanced Settings:
None
Possible example use cases:
- Search a video for a specific moment or topic.
- Edit a video.