Sunday, May 17, 2009

Trials with RecordMyDesktop

I've been playing with recordmydesktop lately in order to put some suitable screencasts together for demonstrating various Linux-related activities to university students next week.

Unfortunately, both Fedora 11 and Ubuntu Jaunty have issues with RecordMyDesktop -- the current version (0.3.8.1) is much better than the last version I tried (0.3.5) in terms of both video and audio quality, but the more-than-occasional



Broken pipe: Overrun occurred.



errors cause the audio to be clipped rather badly.

The default settings also make the video speed up and finish well before the audio, so I set about trying to find the optimum settings to make the best encode I could.

First, if you are using a 3D Window Manager (such as Compiz), please use the --on-the-fly-encoding and --full-shots options, or everything from opening new windows to redrawing your background will end up corrupted.

If you do not know which Window Manager you are running, you can check the Visual Effects tab of the Appearance Preferences window (System / Preferences / Appearance on Ubuntu).



If the selected option is the first one (None), you are not running a 3D Window Manager; if it is either Normal or Extra, you are.
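
If you'd rather check from a terminal, the wmctrl utility (it lives in the wmctrl package, which may not be installed by default) can report the running window manager's name -- it should print something like Metacity or compiz:

wmctrl -m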

Now that we've figured out whether we need the options for 3D Window Management, it's time to move on to the audio. In its default configuration, the program will clip and/or drop audio because of the overrun mentioned above, but you can reduce that by a fair proportion by either:


  1. Using pasuspender to launch the program. This particular solution was suggested by an Ubuntu developer on a reported bug about RecordMyDesktop that looks very similar to the particular issue we're covering here.

  2. Configuring an .asoundrc file to use PulseAudio for all ALSA based audio by default.


The latter is done by creating a new file called .asoundrc in your home directory and adding the following code:



pcm.!default {
    type pulse
}

ctl.!default {
    type pulse
}



Save the file, exit, log out of your session and login again.
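
If you'd like to sanity-check that ALSA's default device is now routed through PulseAudio, playing one of the stock test sounds through the default device is a quick way to do it (the path below is where alsa-utils puts the sample on my systems; yours may differ):

aplay /usr/share/sounds/alsa/Front_Center.wav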

This alone reduced the dropped audio from 7 occasions in one minute's worth of recorded footage to 3.

The former is done by adding pasuspender -- to the beginning of the command line, like so:
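
For example (the recordmydesktop options here are just placeholders; use whichever ones you settle on below):

pasuspender -- recordmydesktop --fps 20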

On my laptops, using pasuspender made no real difference to the problem, which is why we'll prefer the .asoundrc solution for the remainder of this discussion.

Another thing that helps lower the frequency of audio dropouts is to increase the buffer size used by RecordMyDesktop. The default is 4k; raising that to 16k helps dramatically, but going any higher doesn't seem to offer much more of an improvement. I tried 32k (32769) and 64k (65538) when testing, though I'm only recording two channels on an internal microphone, so your mileage may vary.

You can do this by adding the --buffer-size 16384 option to the command line.

Update (2009-05-17): at the highest quality audio and video settings, 16384 can still cause dropouts, so 65538 is now preferred.

Next, if you're using onboard audio (like Altec Lansing laptop audio) you may see:



Playback frequency 22050Hz is not available...
Using 44100Hz instead.
Recording on device hw:0,0 is set to:
2 channels at 44100Hz



The Altec Lansing audio, as well as several other Intel based sound cards that use the snd-hda-intel driver, will resample 44100Hz input to 48000Hz, hence the message.

To fix that, pass --freq 48000, which allows the audio to be recorded without any extra resampling.
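
If you want to confirm your card really does capture cleanly at 48000Hz before recording a full session, arecord (also from alsa-utils) can grab a few seconds to a throwaway file; hw:0,0 matches the device named in the message above, and test.wav is just a scratch filename:

arecord -D hw:0,0 -f S16_LE -r 48000 -c 2 -d 5 test.wav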

Now, if you've used the pasuspender solution above, or your audio seems to be correctly synced, you can move on to the next section.

If, on the other hand, you still see Buffer Overrun errors, your audio is out of sync, or you used the .asoundrc solution above, you'll need to pass an additional audio-related option on the command line:

-device plughw:0,0

Using plughw allows ALSA's and PulseAudio's internals to handle all of the resampling and endian-conversion work automatically, which reduces a painfully large ALSA configuration that would differ for every soundcard on the market to a single change for our purposes.

The 0,0 (i.e. use the first detected virtual sound device) should be fine for almost everyone working on a standard configuration. If you've got extra audio hardware (a second sound card or a USB audio device, for instance), you may have more than one plughw device to choose from -- choosing which one is beyond the scope of this article though.
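
If you're not sure what's available, arecord will enumerate every capture device on the system; the card and device numbers it prints map directly onto the plughw:card,device syntax:

arecord -l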

Now we can move onto improving the video quality.

Next, it helps to set the framerate. I've seen numerous articles on the web about this, suggesting everything from 10fps to 90fps to get a clear picture.

I tried 10, 12, 20, 25, 29, 50 and 60 (these seem to be the ones mentioned most often in a four page Google search on the subject). 10 works if you want to keep the filesize low, but the playback seems too jerky; 60 doesn't increase the filesize, but has the unwanted side-effect of the video finishing well before the audio when you play it back.

The optimum setting I found was --fps 20.

Finally, because I wanted to re-encode these for different purposes, I prefer to do the initial encoding at the highest quality possible.

However, the difference between -v_quality 1 and -v_quality 63 was minimal at best at my 1280x800 resolution.

Somewhat obviously though, the -v_bitrate option makes A LOT of difference, so I bumped that to the highest number available (2000000).

The recommended command line(s)



I've provided both a high and a low quality setting for those who are interested.

The final command line that works for me, providing the highest quality and lowest distortion with no dropouts, is:



recordmydesktop --on-the-fly-encoding -v_quality 63 -v_bitrate 2000000 -s_quality 10 --full-shots --fps 20 --freq 48000 --buffer-size 65538 -device plughw:0,0



For a smaller filesize or for encoding on older or busier machines, with quality suitable for downloading or streaming, you can use:



recordmydesktop --quick-subsampling --on-the-fly-encoding -v_quality 10 -v_bitrate 50000 -s_quality 1 --full-shots --fps 10 --freq 48000 --buffer-size 16384 -device plughw:0,0



Without the --on-the-fly-encoding option, the encoding is done after the video capture has stopped, which results in a smaller filesize at the expense of a long wait while your original footage is encoded. You can offset this somewhat by adding the --quick-subsampling option to your command line, which saves CPU time by discarding extra pixels when calculating the chroma values during the encoding process.

note: If you are using a 2D Window Manager (such as Metacity), you may omit the --full-shots option, which halves the filesize on my Ubuntu Jaunty Jackalope install.

Why No Examples?



Google doesn't allow Theora based videos to be uploaded to Blogger. I had two videos recorded of 10 seconds each; the low quality one (using the settings above) clocked in at 220 kilobytes, and the high quality one was 1.3 megabytes.

When El Goog does decide to allow Theora based videos on Blogger, I'll post them -- but converting them to .mp4s just to show you all what they looked like seemed rather purpose-defeating :)

Tuesday, May 12, 2009

Converting Videos to be Nokia N9x Compatible using FFMPEG

This is being posted here so I have something easy to look up when I need to do it again: using FFMPEG 0.5.x, you can easily convert any playable movie to a Nokia N9x (N95, N96) compatible MP4 format file with the line:



(ffmpeg | ffmpeg.exe) -y -i [input file].[extension] -f mp4 -vcodec libx264 -level 21 -s 320x240 -b 768k -bt 768k -bufsize 2M -maxrate 2M -g 250 -trellis 1 -refs 3 -keyint_min 25 -acodec libfaac -b 128k [output filename].mp4



This converts any playable video to a compatible (-level 21), correctly sized (-s 320x240) MP4 video file with AAC audio (-acodec libfaac -b 128k) that doesn't crash the RealPlayer version on phones in Australia/New Zealand, because the buffer sizes and bitrate stay within its limits (-b 768k -bt 768k -bufsize 2M -maxrate 2M).
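
As a concrete example, here's what an actual run might look like on Linux (holiday.avi and holiday-n95.mp4 are hypothetical names; substitute your own input and output files):

ffmpeg -y -i holiday.avi -f mp4 -vcodec libx264 -level 21 -s 320x240 -b 768k -bt 768k -bufsize 2M -maxrate 2M -g 250 -trellis 1 -refs 3 -keyint_min 25 -acodec libfaac -b 128k holiday-n95.mp4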

The other options are entirely optional, but -g 250 and -keyint_min 25 are recommended if you have a PAL based input stream and would like to be able to fast-forward and rewind your video using the funkey buttons on the N96.

Thursday, May 7, 2009

Automatic Vista Speedup with a D-Link DIR-625

note: This article is intended for a technical audience -- you should use caution when modifying a production system -- caveat emptor.

This probably works with the other DIR-6xx models that D-Link sells too. Recently, oldfeeb had an issue where his girlfriend's computer (Vista Home Premium) stopped talking wirelessly to the new DIR-625 he'd bought.

He uses Ubuntu, so he saw no difference, but the symptoms on Vista were Windows Mail and Internet Explorer refusing to browse, or suffering overly long timeouts.

First up, update Vista for all the recent fixes and reboot -- no luck.

Noticing the cordless phone handset on the wall, I thought: change the wifi channel on the router to something that doesn't conflict with 2.4GHz cordless phones. Moved it from channel 5 to channel 11, check.

Rebooted Vista: slightly better signal, but the network is still slow. Google works now, but only after 30-odd seconds; her mail times out and heavier webpages die a quiet death.

So, using a tip that I found courtesy of Australian Personal Computer magazine some years ago and a bit of Googling, we switched off Vista's 'Autotuning of TCP parameters'.

To do this, you'll first need to open a command prompt as the Administrative User -- which involves:


  1. Go to Start / Run

  2. Type "cmd.exe" into the box provided.

  3. Hold down the Control and Shift keys together -- then -- press [ENTER].


Now, type the following at the prompt:


netsh interface tcp set global autotuninglevel=disabled
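
To check that the change took effect, or to restore the default behaviour later, the same netsh context provides:

netsh interface tcp show global
netsh interface tcp set global autotuninglevel=normal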


Restarted IE7, fired up Google, works flawlessly.

"OK, I thought -- put the Wireless Channel on the router back to 5 (the default)."

We did, rebooted Vista -- Stone, Cold, Silence.

Switched it back to 11, rebooted Vista -- everything's peachy again.

So, sometimes it's not just Vista's fault. I guess if you're going to buy a wireless router, you should check which channel ranges are valid for the country you're using the hardware in before blaming the workstations directly.

Friday, May 1, 2009

Youtube Costs Google.

Stumbled across this article by David Silversmith this morning, which makes interesting reading in the year of the economic downturn.

I looked at the leaks that David lists and wondered two things:

How many of those 375 visitors actually look at the content? Sites like Facebook and LiveJournal have the ability to embed Youtube content, so people find a video they like, embed it, and every subsequent pageview sends a request to YT to begin streaming the content (in order to show the thumbnail image).

Now, Facebook has the News Feed, the Highlights and Sponsored Clips. More than once a day these contain (for me, anyway, with 100 or so friends) at least one YT link, 99% of which I'll never actually play, but every one of them generates an HTTP request anyway.

Is Youtube simply Google's loss leader? Google does many other things which are less bandwidth and resource intensive than Youtube, such as Blogger, GMail, Google Scholar, Google Checkout and, to a lesser extent, Google Earth, all of which share the same advertising strategy (AdWords) at similar pricing scales. It strikes me that Google may take a bath on Youtube in order to draw people into creating a Google Account, thereby gaining access to its other services and presenting Google with many, many more advertising opportunities it would not otherwise have had.

A thought along similar lines was: "Would Youtube have been this popular if Google hadn't acquired it?" Reading David's piece further, he talks about content acquisition costing money, which is true, but I'd suggest an ever growing amount of the content Google sends to viewers will be commercial over the next 12-24 months, as the economic downturn bites over and over and causes people to cancel luxury items like subscription television, possibly turning to Youtube to pick up highlights of events as they happen instead.

Google has also done a lot for Youtube that I doubt it could have done on its own: everything from striking deals for content, to providing the bandwidth and storage space to sustain the "1 second attention span" internet generation, to adding features like the "High Definition" button to the site itself.

"With Popularity Comes Expense" is a phrase I came up with nearly 15 years ago to define the ability of the "modern" internet to actually exist -- at the time I was working in the pioneering phase of Internet Advertising, back before web based video and when only the technically elite and businesses with vast IT budgets had pages at all.

Nice to see it still rings true today.

How does Google stop the bleeding?

My guess is, it doesn't have to.