Most of the older videos that use real footage are based on clips from pexels.com, which I heavily edit and cut in KDEnlive and in Pixilang.
The videos with graphic visualisations that react to sound are programmed in Pixilang and then cut and edited in KDEnlive on Linux (some older ones in the Alight Motion app on Android).
The footage for the last two videos was made with AI on Linux.
The first was made in the Automatic1111 Stable Diffusion web UI using the AnimateDiff model,
and the second in ComfyUI using the Wan2.1 model, which is very new and looks surprisingly realistic.
The AI can only render a few frames / a few seconds at a time, and the resulting footage was cut and edited in KDEnlive.
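(For anyone curious what the AnimateDiff step looks like outside the web UI: here is a rough Python sketch using the Hugging Face diffusers library. This is just an illustration, I actually worked in the Automatic1111 web UI, and the model names below are example placeholders, not the ones I used.)

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Example/placeholder model names, not the ones I actually used.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
)
# Scheduler set up roughly like in the diffusers AnimateDiff examples.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, beta_schedule="linear", clip_sample=False
)
pipe.enable_model_cpu_offload()  # helps on GPUs with little VRAM

# Only a short clip (here 16 frames) fits into memory at once.
result = pipe(
    prompt="white chalk drawing on a blackboard, abstract landscape",
    negative_prompt="color, photo, blurry",
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=7.5,
)
export_to_gif(result.frames[0], "clip.gif")
```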
The black-and-white style of the first AI video was intended to look like it was drawn with white chalk, but this style didn't work well with AnimateDiff.
The style was trained with kohya_ss (DreamBooth) on Linux from images I created in GIMP.
The blurry/dreamy smear effect in many videos is done in KDEnlive (the vertigo effect), but I also programmed a similar effect in Pixilang, which is used in the older videos.
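If you want to build something like that yourself, the core idea of such a smear effect is a frame-feedback loop: each new frame is blended over a faded copy of the previous output. Here is a minimal Python/NumPy sketch of that idea, not my actual Pixilang code; the frame source is just a placeholder:

```python
import numpy as np

def smear(frames, feedback=0.85):
    """Blend each new frame over a faded copy of the previous output frame.

    frames: iterable of float32 RGB arrays in [0, 1], all the same shape.
    feedback: how much of the previous output survives into the next frame.
    """
    acc = None
    for frame in frames:
        if acc is None:
            acc = frame.astype(np.float32).copy()
        else:
            # keep `feedback` of the old image, mix in the new frame
            acc = feedback * acc + (1.0 - feedback) * frame
        yield np.clip(acc, 0.0, 1.0)

# Example call with random noise frames, just to show the shapes involved;
# in a real pipeline each output frame would be written to a video file.
if __name__ == "__main__":
    noise = np.random.rand(30, 120, 160, 3).astype(np.float32)
    for out in smear(noise):
        print(out.shape)
```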
Most of the older videos were created on a Raspberry Pi computer (a Raspberry Pi 400), since I could not afford a better computer for some years, but now I have a new gaming laptop with 6 GB of VRAM. For AI video that's still not enough... (which I didn't know at the time).
Most of the music is done on an MPC One, but I also use SunVox on Linux from time to time.
SunVox and Pixilang are from the same programmer; very cool programs, I think.
EDIT: The singing voice on some recent tracks was made with SynthV on Linux (free trial version).
I don't like it that much, but it's interesting that the voice sounds very Japanese (although it should be English), which I like because it makes it a bit hybrid/unreal.