I've been making a few video recordings of my bicycle rides and playing around with FFMPEG for video processing. I've been wanting to use it to make kaleidoscopic effects, but the syntax of the filter chains had so far eluded me. I decided it was time to try to understand the syntax rather than blindly experimenting in frustration.
For some reason I'm somewhat taken by mirroring effects on videos - probably a symptom of my inexperience with video. One of the ideas I wanted to try was to mirror the top two thirds of the left-hand side of a video, and the bottom third of the right-hand side. The FFMPEG documentation for filtering looks a bit intimidating at first, but with a little effort it turns out not to be quite so bad.
On my way to mirroring the LHS of the top two thirds of a video and the RHS of the bottom third, after a few stabs in the near-dark, I decided to simplify the task at hand to slicing the bottom third of the video off and recombining it - if I could not tell that any slicing had taken place, then all was well. As evidence that processing has actually taken place, however, the command below horizontally flips the bottom third.
ffplay -i test.mov \
  -vf "split [tmp1][tmp2]; \
       [tmp1] crop=iw:(ih/3)*2:0:0, pad=0:ih+ih/2 [top]; \
       [tmp2] crop=iw:ih/3:0:(ih/3)*2, hflip [bottom]; \
       [top][bottom] overlay=0:(H/3)*2"
The split filter creates two streams from the single video source, naming them [tmp1] and [tmp2]. A stream must have exactly one destination; it cannot have none, nor more than one, hence the split. [tmp1] is then cropped using the crop filter, which takes the form crop=width:height:top-left-x:top-left-y. The variable iw means the input width of [tmp1], and ih its input height. Padding is then added to the cropped frames so they have the same vertical dimension as the uncropped source, and the resulting stream is named [top]. [tmp2] is also cropped, then horizontally flipped, and the result named [bottom]. Lastly, [bottom] is overlaid onto the bottom third of [top] - this is why the padding was added to [top], so there was somewhere to overlay [bottom] onto. The overlay filter does not use the same naming convention for input dimensions: here, H is the input height of the first input, [top].
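To convince myself the numbers work out, here's a quick sketch with a hypothetical 1080-pixel-high source (the height is my assumption; any height divisible by 3 behaves the same way). The key point is that inside pad, ih refers to the already-cropped height, so ih+ih/2 restores the original height:

```shell
ih=1080                              # hypothetical source height
cropped=$((ih / 3 * 2))              # crop keeps the top two thirds: 720
padded=$((cropped + cropped / 2))    # pad=0:ih+ih/2, where ih is now 720
echo "cropped=$cropped padded=$padded"   # cropped=720 padded=1080
```

The padded height lands back at 1080, exactly the uncropped source height, which is what leaves room to overlay the bottom third.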
Once I was satisfied I had the slicing and placement correct, I moved on to mirroring the left half of the top and the right half of the bottom. The command to do that is below. For my own benefit I've ignored the fact that there are shorter ways of writing these commands. You may also have noticed I'm using the ffplay command and not ffmpeg: that way I could see immediately what effect the processing was having, and the processing was not intensive enough to be too slow to perform during playback.
ffplay -i test.mov \
  -vf "split [tmp1][tmp2]; \
       [tmp1] crop=iw:(ih/3)*2:0:0, pad=0:ih+ih/2 [top]; \
       [tmp2] crop=iw:ih/3:0:(ih/3)*2 [bottom]; \
       \
       [top] split [tl][tmp3]; \
       [tmp3] crop=iw/2:ih:0:0, pad=iw*2:ih:iw:0, hflip [tr]; \
       [tl][tr] overlay=W/2 [mirrortop]; \
       \
       [bottom] hflip, split [bl][tmp4]; \
       [tmp4] crop=iw/2:ih:0:0, pad=iw*2:ih:iw:0, hflip [br]; \
       [bl][br] overlay=W/2 [mirrorbottom]; \
       \
       [mirrortop][mirrorbottom] overlay=0:(H/3)*2"
I didn't find the end result of the above command particularly appealing, so I moved on to two axes of rotational reflection and then four axes of rotational reflection (if that's the correct term for it). Somewhere along the way I ended up with the following code, which uses rotation to introduce something vaguely reminiscent of kaleidoscopic effects (neglecting the number of reflections that take place inside a real kaleidoscope), with the end result shown by the video at the top of this post.
screen ffmpeg -i source.mp4 \
  -vf "lutrgb=r=val*1.633587:g=val*1.642:b=val*0.80578, \
       hue=H=2*PI*t/5:s=sin(2*PI*t/4)+1, \
       rotate=2*PI*t/19, \
       crop=iw:ih/2:0:0 [top]; \
       \
       [top] split [tl][tmp1]; \
       [tmp1] crop=iw/2:ih:0:0, pad=iw*2:ih:iw:0, hflip [tr]; \
       [tl][tr] overlay=W/2 [tmp2]; \
       \
       [tmp2] split [tmp3][tmp4]; \
       [tmp3] pad=0:ih*2 [mirrortop]; \
       [tmp4] vflip [mirrorbottom]; \
       \
       [mirrortop][mirrorbottom] overlay=0:H/2, rotate=7.5*sin(2*PI*t/20)" \
  -af "aecho=1.0 : 0.34 : 260|265|270 : 0.25|0.25|0.250, \
       aecho=1.0 : 0.34 : 160|165|170 : 0.25|0.25|0.250, \
       aphaser=0.75 : 0.75 : 4.85 : 0.75 : 0.1, \
       aecho=1.0 : 0.34 : 460|465|470 : 0.25|0.25|0.250, \
       aecho=1.0 : 0.34 : 660|665|670 : 0.25|0.25|0.250, \
       compand=.3|.3:1|1:-90/-60|-60/-40|-40/-30|-20/-20:6:4:-90:0.2" \
  -strict -2 heppy-babs.mp4

youtube-upload --client-secrets=client_secret_3244811245-vct2i33a0je5ohcvh39rn212imt41eej.apps.googleusercontent.com.json \
  --title="Heppy Babs" \
  --description="Command line video processing with FFMPEG" \
  --category="Film & Animation" \
  --tags="ffmpeg, linux, mirror" \
  heppy-babs.mp4
As well as the aforementioned rotation, I've also added colour adjustment and some basic audio effects. If you're familiar with FFMPEG, you'll also notice I've neglected to state anything explicit about the video codec to encode the output file with - that might explain why YouTube decided not to offer anything better than 360p.
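Had I wanted to give YouTube something better to work with, explicitly choosing a codec and quality level would likely have helped. A sketch of what that might look like (the same -vf/-af chains would slot in unchanged; libx264, the preset, and the CRF/bitrate values are my assumptions here, not settings I actually ran):

```shell
# Sketch: explicitly select the video and audio codecs rather than
# relying on ffmpeg's defaults. Lower CRF = higher quality; 18 is
# near-transparent for H.264 and an assumed value, not tested here.
ffmpeg -i source.mp4 \
  -c:v libx264 -preset slow -crf 18 \
  -c:a aac -b:a 192k \
  heppy-babs-hq.mp4
```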
The last thing I want to mention is the screen command, which I use as an extra layer of protection against forgetting that there is processing taking place and closing the terminal or logging out - I'm in the habit of closing windows I don't use and sometimes misjudge. If I do either of these things while running ffmpeg under a screen session, screen ensures I do not potentially lose several hours' worth or more of processing time. The command continues running and I can simply re-attach to the screen session should I need to see what is going on. This is also why I use the command line tool youtube-upload to upload the video; low bandwidth meant this took an hour or so to upload... and I like to pretend I know what I'm doing.
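For anyone who hasn't used screen before, the workflow above can be sketched roughly as follows (the session name "encode" is my own choice):

```shell
# Start a named screen session; this opens a shell running inside screen.
screen -S encode
# ...run the long ffmpeg command in that shell...
# Detach with Ctrl-a d: the encode keeps running even if the terminal
# window is closed or you log out.

# Later, from any terminal:
screen -ls          # list running sessions to find it again
screen -r encode    # re-attach to check on progress
```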
rackaid screen tutorial (got me started with screen)
youtube-upload on GitHub (view README.md for instructions on getting started)