This week we use Darktable's Command Line Interface, Bash, Perl, and ImageMagick to create a smooth timelapse from a webcam image I found on the internet.
We walk through the process together. First, we find a webcam on the internet to provide our images. We set up a timer to download them every five minutes and archive them in a directory. As we download, we process each image by setting up a generalized edit in Darktable and applying it with darktable-cli.
Once we have enough to make a movie, we deflicker and average the frames with a moving window to create a smoother experience. The result is a smooth, corrected timelapse you can create from any sequence of JPGs or PNGs.
#!/bin/zsh
url='http://www.esrl.noaa.gov/gmd/webdata/mlo/webcam/northcam.jpg'
xmp='maunaloa/maunaloa.xmp'
dir='maunaloa'
TZ='HST' date | read time
TZ='HST' date +%s | read seconds

# DOWNLOAD WEBCAM IMAGE
wget -t6 --random-wait -O /tmp/image.jpg "$url"

# PROCESS WITH DARKTABLE
rm /tmp/image_dt*
/opt/darktable/bin/darktable-cli /tmp/image.jpg "$xmp" --width 1920 --height 1080 --upscale true /tmp/image_dt.jpg

# ARCHIVE WITH DATE STAMP
convert /tmp/image_dt.jpg -pointsize 36 -fill "#FFFFFF" -draw "text 1400,1050 '$time'" ~/$dir/$seconds.jpg
#!/bin/zsh
num_images=2000
slow=8
frame_window=16
source_dir='/home/harry/int/maunaloa'
mkdir final &> /dev/null
cat /dev/null > frames

# COPY FILES
ls "$source_dir" | tail -n $num_images | while read file; do
    cp "$source_dir/$file" ./
done

# DEFLICKER
/home/harry/bin/timelapse-deflicker.pl -w 15 -p 2

# BUILD FRAME LIST
ls Deflickered | while read file; do
    seq 1 $slow | while read i; do
        echo Deflickered/$file >> frames
    done
done

wc -l frames | cut -d' ' -f1 | read num_frames
typeset -Z6 c

# CREATE FRAMES
seq 1 $num_frames | while read c; do
    convert `head -n $frame_window frames` -average final/$c.jpg
    sed -i '1d' frames
    echo -ne "\e[0K\r$c / $num_frames"
done

# MAKE MOVIE
avconv -f image2 -r 30 -i final/%06d.jpg -aspect 16:9 -b:v 15000k -y video.avi &> /dev/null
Complete Show Text
This is Harry with another edition of Weekly Edit.
What you're looking at here is a timelapse that I made using images from a webcam on the CO2 observatory on Mauna Loa on the Big Island of Hawaii.
This is our first significant snowfall of the winter.
This camera faces Mauna Kea, where all the telescopes are.
This image is taken from the webcam every five minutes.
It's real jerky, and there are a lot of changes in brightness.
I've got some techniques I use when I make timelapses of images where there's a lot of time in between the images.
I've worked on this to smooth images from the webcams overlooking our volcano too, where you only get an image every five minutes.
I use a Perl script called Timelapse Deflicker, which is amazing.
It does a great job.
I'll have that in the show notes.
I also use a way of averaging frames together, combining those two techniques to give a more smooth look to the timelapse and make it easier to understand what's happening because there's so much time in between each frame.
This episode will be pretty technical.
We'll do a lot of Bash scripting.
I made my scripts as simple as possible, so even people who aren't familiar with Bash scripting should be able to follow along to some extent.
I'll be using Darktable's Command Line Interface (cli), which lets us make changes in a script, using Darktable's engine.
I've broken up what we're doing into two different scripts.
The first one is called get_images. In this script I do three things. One, I find which image I want to get.
It can be any webcam image.
There are webcams on resorts and government ones all over the place.
It's a lot of fun to find and play with them.
Then, there's a sidecar .xmp file.
We'll create that and move it over here.
Then, I have a working directory.
First thing, we'll find this image.
I'll show you where I got this URL, then we'll download this image.
I'll show you how this wget works.
Then we'll create this sidecar file so that the image looks the way we want it with Darktable.
Then we'll go back into this script.
So, first, we'll find this.
We've got these Mauna Kea Weather Cams.
If we look at the webcams up here, here's the one we want.
It looks at Mauna Kea from the CO2 observatory.
We want the north one.
This is my URL right here.
If you drill down on a site, you can usually find a single image like this or something that you can grab.
It's usually a JPG.
I'll copy and paste this.
It goes right here.
I just have my URL as a variable.
The reason I use single quotes everywhere is, if I want to have a space in my directory name, or if there's an odd character in my URL, then I just single-quote it and I don't have to worry about it.
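As a quick illustration (this URL and directory name are made up), single quotes keep the shell from interpreting characters like spaces, '&', and '?', and double-quoting the variable preserves them again when it's used:

```shell
# Single quotes: the shell stores the string exactly as written,
# even with spaces, '&', and '?' in it.
url='http://example.com/cam image.jpg?id=1&size=big'
dir='my photos'

# Double quotes on expansion: no word-splitting, no globbing.
echo "$url"
# http://example.com/cam image.jpg?id=1&size=big
echo "$dir"
# my photos
```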
The way this wget works is, '-t' is the number of tries. It says, 'if I can't get the image, should I try again?' and I say, 'yeah, give it a few tries.' '--random-wait' is really nice. It makes wget wait a random amount of time between requests. If you happen to hit a server that may think you're a bot if you try more than once, this random-wait will help with that.
Most of the time it doesn't matter at all.
'-O' is just the name of the output file.
I'm going to store these images in the tmp directory because I'm not going to keep them.
I'll create this image, and I'm going to save it in the tmp directory too.
So, I get my image with the wget.
I modify it with Darktable.
Then, I save it AGAIN in the temporary image directory.
Then, I'll grab that image, and use ImageMagick's 'convert' command to add a time-date stamp and name it in a way that I can archive it.
Next thing we'll do is download this image and modify it with Darktable.
Then we'll grab the XMP file and use it on every image that we download.
So, let's get the image with wget.
Open another terminal.
Here's our URL.
Here's our wget.
Okay, we got the image and stored it.
Here we are in tmp directory.
Here's our image.
We've got some fisheye with this lens.
I don't mind that too much, but I really want the tower to be straight.
I'll look in the Lens Correction Module to find something to de-fisheye it.
I've already messed with a bunch of options here in the past.
There isn't anything that matches it exactly; it's a small webcam.
But, it looks a lot like what I would get with a wide angle on my RX100, so I just grabbed the RX100.
If we go to 10mm, it straightens it out quite a bit.
You can play with things like this.
Here's 8.8mm. You lose a little more on top, but it does make it a little straighter.
That looks a little better than the 10mm.
Here's the 12mm.
It doesn't quite de-fisheye it enough, so let's go with that 8.8mm. Now, I can do the rest of the correction in Crop and Rotate.
If you'll notice, Lens Correction occurs before Crop and Rotate.
Here we are with Crop and Rotate.
My horizon is level, so I'll use Keystoning to correct this.
Vertical Keystoning: I pull it towards where I want it to be vertical.
That might be a little too much, but we'll see.
And I click Okay.
That looks pretty good.
Now, I'll be working as a video in 16:9, so I'm setting my aspect to 16:9 now, and I'll go ahead and crop it now.
That looks good.
There we go.
Now our tower is a little straighter and I think our horizon looks okay too.
Now, this image goes 24 hours a day, so the lighting will vary considerably, even though it has an auto-gain feature on the camera.
So, I'll make changes that aren't going to be as affected by different lighting conditions and will give me positive effects either way.
I'm a little limited on what I can do because I'm not just editing a single image.
One of the first things I'll do is use the Shadows and Highlights Module because that will correct itself to allow me to have good effects under a variety of conditions.
I'll soften with the Bilateral Filter.
I'll change my Radius to right about there.
That looks good.
And my Compression.
There we go.
Now, let's see: Shadows.
We're a little strong.
And Highlights: give us a little more.
Alright, there's our before and there's our after.
It normalizes the image a bit and should give good results regardless of the conditions.
A lot of low-quality images have pixelation in areas that have smooth tonal changes, like in here.
I can use a Gaussian Blur based on the L Channel because, see, it shows up more in the lighter parts than in the darker parts.
So, I'll use a Parametric Mask and select the lighter parts of the image where the pixelation is occurring.
Then in those areas, I'll apply a small Gaussian Blur.
I'll start with zero and slowly bring it up until this pixelation gets better.
That's significantly better, and it's just 1.80 pixel radius.
Now, we're losing some detail in this tower and on the building.
If I increase my Mask Blur a little, will that help? That made it worse.
Well, we'll see if we can pick that up again later on.
If not, we'll come back and affect this.
I guess one of the things we can do is just not use it full force.
There we go.
Now I'd like to do another pass with the Lowpass Module to add some contour shading to the clouds and buildings.
So, I'll turn my Saturation all the way off and bring my Radius down until I can see some kind of contours in the buildings and in the clouds.
See, I can see some in the clouds here and I'm getting shapes in the buildings.
So, that's good.
Then I'll apply that with the Softlight Blend Mode and bring that down a little.
This is with it off, and this is with it on.
Right around there, I think that livens up the image a bit, especially in the clouds.
It gives me more range.
Now, I'd like to add a little color to the image.
I like to use the Subtract Method to add color and a little contrast at the same time.
I always use a Parametric Mask based on the L Channel with my Subtract Method.
Now, if I do that, though, I stand the risk of making things too noisy because I've got all this color noise.
Well, here's what it looks like if I add it with the Tone Curve.
We'll take a Snapshot of this so we can compare it.
We'll try to do the same thing, but with a little bit of a Blur and see what that looks like.
So, let's do a Radius -- Oh, that looks good, around 4 pixels.
Okay, and I'll do the same thing: Parametric Mask based on the L Channel and Subtract Blend Mode.
And bring that up a little bit.
Right about there, I think.
Okay, so we've added a similar amount of color, but see: we've got a lot of fringing, and we've got color fringing around these features when we added it with the Tone Curve.
Now, we lose a little detail, but I think that's okay, because this is going to be a video.
Look how much better our colors are behaving when we add it with the Lowpass Filter and the Gaussian Blur.
This is more of the effect I'm looking for, for this video.
As a matter of fact, I could probably make it a little smaller and still have a good smoothing effect.
There we go.
There is 3 pixels.
Now I want to bring out these features and some of the details.
I'll use a Highpass Filter.
I'll adjust Sharpness and Contrast Boost to bring out the features I want without getting halos.
I think this is good.
I'm looking for this tower and for these rocks and stuff, but I don't want to get strong halos.
This should be okay, because I'll apply it with the Softlight Blend Mode.
I'll use the L Mask again.
I don't want as much of the effect on the darker parts of the image.
That brings out a little more detail in this tower and knocks down some of the fringing that we had around the edges.
Here's where we started and here's where we ended.
It's a softer image with better colors and a lot less noise and smoother transitions.
We'll take this sidecar XMP file and re-name it and save it.
I'll save it in this maunaloa directory: cp /tmp/image.jpg.xmp maunaloa/maunaloa.xmp -- that's the sidecar file, okay? Now we have our XMP file.
Every time I save something with this Darktable Command Line Interface, it will increment the name of the file so it doesn't overwrite it.
So, in my code here, I do an 'rm' on the last one that was saved before I save a new one so it doesn't increment it.
All this code does here is: it calls Darktable cli with the image file.
Then it applies this XMP file that we stored right here, upscales to the width we set (upscaling is a new feature in Darktable -- love that!), and then saves it with this name right here.
When it's all done doing that...
--well, here, let's try it...
And there it is.
That's the one we did with the Command Line Interface.
After we do that, I use ImageMagick's 'convert' to grab that file we just created and draw the time in the corner.
You set the point size and the color before you execute the 'draw' command with Convert.
So, first you give it the parameters, which are your default parameters, and then I just '-draw' text at this position.
This is my horizontal and my vertical axes.
And what I'm drawing is the time.
I got the time just by calling 'date'.
This TZ='HST' sets the time zone.
I'm in Hawaii, so HST.
The reason I'm setting the time zone when I call 'date' is, if I'm running this code locally or on a cloud server, I want to have the same time zone either way.
Then, it just saves it in the directory, this directory I called maunaloa.
That's where we'll save all our files.
A naming convention I'm using: I just call 'date' for the number of seconds since the epoch and give it a .jpg extension. So, if we run this script, there should be a file in the maunaloa directory, named with the number of seconds since the epoch, that is all processed for us.
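For example, the two date calls look like this. The epoch count is time-zone independent, so only the human-readable stamp changes with TZ:

```shell
TZ='HST' date        # human-readable Hawaii time for the overlay
TZ='HST' date +%s    # epoch seconds for the archive file name
TZ='UTC' date +%s    # same number: the epoch doesn't depend on TZ
```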
Let's do that.
There, it downloads the image, processes it with Darktable, and now we should have an image in our directory that's named with the seconds since the epoch.
There we are.
We've got our time in the corner and there's our processed image.
It's all straightened up; it's got a little extra color and a little extra punch; we got rid of some of the pixelation.
Now we can build up an archive of these images by adding that script to a cron job.
Let's look at that.
Every 5 minutes we run our command.
That's it; that's all we have to do.
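The crontab entry looks something like this (the script path is an assumption; point it at wherever you saved your get_images script):

```shell
# m  h  dom mon dow  command
*/5  *  *   *   *    /home/harry/bin/get_images.zsh
```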
Now that our script's been running for a while, we should have a whole bunch of images.
There they are.
So, let's look at this script that is going to create our movie.
This script has five parts.
It copies over the right number of files, because we only want to make the movie be maybe the last 12 hours or so.
That gets set from 'num_images' right here. So, for instance, we have an image every five minutes, so that's 12 images an hour.
In 12 hours, that would be 144 images.
So, it copies over from our source directory, which was maunaloa --that's where we put all those images from our cron job -- and just picks 144 of them and moves them into our directory where we're going to be working.
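A nice property of the epoch-seconds naming convention: since the file names are all the same number of digits in this era, plain ls sorts them chronologically, so tail really does grab the newest ones. A tiny sketch with made-up names:

```shell
# Four made-up archive names, in the order ls would sort them;
# tail -n 2 picks the two most recent.
printf '%s\n' 1455000000.jpg 1455000300.jpg 1455000600.jpg 1455000900.jpg |
    tail -n 2
# 1455000600.jpg
# 1455000900.jpg
```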
We'll call it 'timelapse'. Then we call this really cool Perl script right here.
This Perl script normalizes all the images on their L Channel.
It uses a moving window.
That's this '-w', and I set it to 15 frames.
You can set it to less or more; it gives you nice results.
'-p 2' just means to do two passes.
You can set it to one pass or two.
I find that two passes gives nice results.
It puts all the de-flickered images -- and it will work with either JPGs or PNGs -- into a directory that it creates called Deflickered. The third part of this script goes through and builds a frame list.
Let me show you what I mean by that.
If I've got a frame, and it's five minutes until the next frame, it's going to look really jerky.
So, I'll create a frame that will be this moving window.
It will start here and take a little bit of Frame 1 and Frame 2; and then a little bit of Frame 1, Frame 2, and Frame 3; and none of Frame 1 but some of Frame 2 and Frame 3.
It keeps moving like that, giving me nice smooth transitions in between my frames.
To do that, I create a file I call 'frames'.
All I do is put the name of the file in there a certain number of times.
Like, here: I said slow=4. Okay, so it will do 4 of each file.
So, file 1 it will do 4 times; file 2 it will do 4 times; file 3 it will do 4 times...
Then we're going to have a window.
This one is an 8-image wide window.
That window will take the first 8 file names in this text file, do an average on those, ditch the top one, do an average on the next 8, ditch the top one, do an average on the next 8, and I'll show you what happens.
We'll go through it.
So, that's this part here, where it ditches the top one after doing an average.
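The frame-list bookkeeping can be demonstrated with plain text and no images at all. Here slow=2 and a 4-name window keep the output short; the real script does exactly this with bigger numbers and a convert in the middle:

```shell
slow=2
frame_window=4
rm -f frames

# BUILD FRAME LIST: each frame name repeated $slow times.
for file in f1.jpg f2.jpg f3.jpg; do
    for i in $(seq 1 $slow); do
        echo "$file" >> frames
    done
done

# First window: the top $frame_window names would be averaged together.
head -n $frame_window frames
# prints f1.jpg, f1.jpg, f2.jpg, f2.jpg (one per line)

# Ditch the top line and the window slides forward by one.
sed -i '1d' frames
head -n $frame_window frames
# prints f1.jpg, f2.jpg, f2.jpg, f3.jpg (one per line)
```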
Then, the last thing is just to make a movie.
Let's go through it one at a time.
We'll put an exit in here so we can see what it looks like after it runs.
We'll work in this timelapse directory.
What's there? Oh, a bunch of JPGs.
Let's get rid of those.
There we go.
Let's call our script: we_timelapse/make_movie.zsh. It runs, and now we have 144 images copied over.
That brought us here.
So, now we'll go down one more and run the Deflicker program.
Add an exit there.
Now it's running the Deflicker program.
Let's see what's in our Deflicker directory.
There are all our images.
They've all been normalized.
That was quick.
The next step is to build this frame list, so we'll put an exit here and see what that frame list looks like when it's done being built.
Alright, here we are.
As you can see, the first image occurs four times, then the second image occurs four times, and the third image occurs four times.
So, we'll take these first eight images and average them together.
Then we'll ditch this top line, and we'll take these eight images and average them together.
See, there's one of this one, four of this one, and three of this one.
Then we'll ditch this line and do these eight images.
We'll keep going like that all the way down this file to the end.
So, that's where we create the frames right here.
See, it says '-average'. Now, this is really important: this 'typeset -Z6' here. The reason it's important is that avconv will only make a movie if the frames are sequentially numbered and zero-padded. So, I've got this '%06d'. That's a 6-digit zero-padded number, and it has to be sequential or it will throw an error. So, I start my c with this 'typeset' command, which says 'no matter what value c has, zero-pad it to six digits'. Then 'seq 1 $num_frames': all I do is look at how many lines there are in this file 'frames' that we just looked at here.
We just looked at 'frames' and it had 576 lines.
So, this 'num_frames' will be 576.
So sequence from 1 to 576 and then 'while read c' so it starts at 1, then goes to 2, then goes to 3; it loops through until it hits 576.
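The padding itself is easy to check. typeset -Z6 is zsh-specific; printf '%06d' is the portable way to produce the same six-digit names, so this sketch uses printf:

```shell
# In zsh: typeset -Z6 c; c=1; echo $c   prints 000001.
# Portable equivalent with printf:
for n in 1 42 576; do
    printf 'final/%06d.jpg\n' "$n"
done
# final/000001.jpg
# final/000042.jpg
# final/000576.jpg
```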
The first 'convert', which is ImageMagick's convert command, says 'give me the first 8 frames' because this 'frame_window' was set to 8 up here.
'Give me the first 8 file names, average them together, and store them in this directory called final with the value of c.' So, c starts at 1, which will be 000001 because we gave it six digits, and then .jpg. Then, this sed command right here, '-i 1d', says 'pop the first line and ditch it from this file frames'. So, every time we run through this, 'frames' is one line smaller.
This 'echo' here just gives us a nice little status update while it's running so we can see where it's at.
Then, all we do is make a movie.
The way we make a movie is 'avconv'. The '-f' is 'what format?', and 'image2' is what you want to use when you make a movie from still images.
The frame rate: 30 frames/second.
And '-i' is the input, and 'final' -- this is that final directory here with all the images in it. That '%06d' says it's a six-digit zero-padded number with a .jpg extension. I've got the aspect ratio here, but I don't think it's required. I set my video bitrate to 15000k; my output is video.avi, and that's just the name of the file.
I send my status messages to /dev/null in case it has a problem with one of the JPG files or something, so I don't have to see a bunch of junk on the screen.
Let's run the whole thing from start to finish and see how it goes.
Here we go.
Creating our inter-frames here that are eight images wide.
They're all averages.
576 Frames at 30 frames/second should be just shy of 20 seconds.
And, let's see what our movie looks like.
Here comes the sun at 6:00.
There are some clouds.
And a rainbow.
And snow on our mountains.
Okay, let's take a minute to look at that Perl script we used to do smoothing.
It's written by Vangelis Tasoulas.
I sure appreciate this package.
All you have to do is take this file and put it in the same directory that you're working in, and then just run it.
And it goes.
It works with JPGs and PNGs.
I will link to the file in the show notes.
Well, thanks for watching another week's episode of Weekly Edit.
I wanted to show you my website here.
It's at weeklyedit.com. There are all the recent posts here, and there are resources.
All my scripts are listed here.
The scripts for today's episode are all here, including the Perl script for the L Channel normalization.
And my Lab Color Reference Chart is here.
You can make playlists based on Topics or Modules.
I want to show you Shoot with Harry.
You can come to Hawaii and go shooting with me.
We'll do some processing and just have fun.
Here's our Shoot With Harry page.
It's got a calendar showing all sorts of stuff that's happening on the island.
It makes it easy to decide when to come because you can see all the upcoming events.
We also have a Patreon campaign.
We have three people who have subscribed already, which is fantastic.
You can go to my Patreon page and pledge to support some amount per month.
This encourages and supports my wife and me in continuing with all of what we do.
Thank you everybody for watching these videos, for contributing your opinions and comments, for asking questions and giving me suggestions.
I really appreciate you.
Have a great week.
I'll see you here next week.