Wednesday, December 31, 2008

Something Completely Different

A completely non-technical post to end the year on -- highlighting two albums I'm almost sure nobody reading this has ever heard, but which are definitely worth a listen.

Lazyboy TV - Lazyboy


Personally, I wouldn't have thought I'd like an album that is more "spoken word" than pop -- but this is a great album that deals with everything from homelessness to drug use to oddities you'll find in the news, all combined with an infectious beat that keeps it played over and over again.

Buy This Album from Amazon


Angles - Dan le Sac versus Scroobius Pip


This one was completely out of left field. I overheard the track Tommy C in a bar and then hung around to listen to the rest of the album -- it's kind of like the Lazyboy album, but with a sound that makes me think of what would happen if you threw electro and dubstep in a blender and put themes like youth suicide, fads and the evangelism of pop music over the top.

Buy This Album from Amazon

Friday, December 26, 2008

Automounting SAMBA Shares

note: This article is intended for a technical audience -- you should use caution when modifying a production system -- caveat emptor.

This is more of a collation of other people's posts, with some additions of my own for performance-related issues.

Basically, I manage a bunch of WD MyBook Network Drive (World Edition) boxes for various people -- typically, these are hooked up via SMB shares to various types of Linux install for redundant network backups over the LAN.

After various hacks, from mounting as part of a cronjob to modifying /etc/rc.local -- I decided to attempt automatic fstab mounting over the Christmas break and figured I'd document my findings here.

First off, to automatically mount the filesystems on the MyBook -- you need to add lines similar to the following to your /etc/fstab file.


//[SMB SHARE]/[sharename] /media/[mountpoint] cifs credentials=/root/.smbcredentials,rw,iocharset=utf8,uid=[username],gid=[groupname],file_mode=0664,dir_mode=0775 0 0


(note: Blogger has wordwrapped this post, but this should be one line when copied to your /etc/fstab file.)

Where:

  • SMB SHARE -- Is the NetBIOS name or IP address of the MyBook.

  • sharename -- Is the name of the share you need to mount (personally, I like to create shares based on the usernames of the people using the box).

  • credentials=/root/.smbcredentials -- Is a plaintext file containing the username and password of the user you have created on the MyBook.

  • rw -- Specifies that access to the share should be read-write.

  • iocharset=utf8 -- Specifies that all files written to or read from the device should use the UTF-8 character set.

  • uid=[username],gid=[groupname] -- Specifies the username and groupname on the local Linux machine that should own the mounted files.

  • file_mode=0664,dir_mode=0775 -- Specifies the octal permissions of the files and directories written on the MyBook.

  • 0 0 -- Means the filesystem will not be dumped and fsck will not attempt to check it under any circumstances, which is always advisable when mounting SMB shares.


After you have edited your /etc/fstab file, you need to make your credentials file -- this file specifies the name and password of the user on the MyBook.

This file needs two lines, with a trailing blank line -- and should usually be placed in the /root or /etc/samba directory and have 0600 permissions.

An example of this file is:


username=winuser
password=winpassword


Save this file and change its permissions, then alter your /etc/fstab file to point to its location.
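For example, assuming you placed the file at /root/.smbcredentials (as in the fstab line above), something like the following locks it down to root only:

chown root:root /root/.smbcredentials
chmod 0600 /root/.smbcredentials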

Once you've done this, you should be able to have your shares automatically mounted by your Linux box (after either rebooting or running mount -a as root).

One particular quirk you might find using this method is unmounting errors -- these occur because the shutdown routine (by default, on machines running NetworkManager to control network resources) shuts down the network devices before unmounting any mounted network shares (i.e. what we're trying to achieve here).

These errors usually halt the shutdown of your machine (often meaning you have to power off using the power button, which can damage your filesystem).

To fix this, you can run the following as root:


ln -s /etc/init.d/umountnfs.sh /etc/rc0.d/K15umountnfs.sh
ln -s /etc/init.d/umountnfs.sh /etc/rc6.d/K15umountnfs.sh


These links alter your system so the network-attached shares are unmounted before NetworkManager has a chance to shut down the network devices.

Friday, December 19, 2008

Unattended Password Creation Failing?

Earlier, I was asked if I had any good solutions for scripting a user account generator in bash -- when I asked what they had already, I received:

useradd -n -g users -p [password] -s /bin/false [username]


I asked what the problem seemed to be, and the response was that the password didn't seem to work -- if they used 'passwd' interactively, it'd work -- but unattended, it failed.

Having a look at the useradd manual, we see:


-p -- The encrypted password, as returned by crypt(3).


After trying various combinations - I thought about OpenSSL. What if I gave it the password and got it to do the crypt work first, then fed the encrypted string to useradd?

Something like:



#!/bin/bash
clear="[password]"
# Use command substitution so $crypt holds the encrypted string,
# not the literal command text:
crypt=$(openssl passwd -crypt "$clear")

useradd -n -g users -p "$crypt" -s /bin/false [username]

exit $?



Cut and paste the above into a script and you instantly have a semi-automated way of adding generated (or defined) passwords to your machine, all with the help of OpenSSL.
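If you'd rather generate the password than define it, a minimal sketch (using a hypothetical account name of newuser) might look like the following -- note that classic crypt(3) only uses the first eight characters of the password:

#!/bin/bash
# Generate a random password, hash it with crypt(3), then create the account:
clear=$(openssl rand -base64 12)
crypt=$(openssl passwd -crypt "$clear")

useradd -n -g users -p "$crypt" -s /bin/false newuser

echo "Password for newuser: $clear"
exit $?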

Friday, December 12, 2008

Interesting Spin on Proposed Internet Filtering

Oh, December seems to be rant month.

Some Swinburne University students asked me for a little more clarification on why the Australian Labor Government's proposed filtering idea is a bad one.

I'll be writing my concerns up at length and back-posting them here as they are finished. However, because tomorrow is protest day, I'll post a couple of interesting links you may wish to read:



  • The ACMA's Report on Closed-Environment Filtering for 2008 : Basically suggests 'The filters were better than the last time we tested them in 2005, because they filter SSL-based traffic now, but they could still degrade network performance by between 2 and 87 percent and still have a 23 to 40 percent chance of false-positive filtering.'


  • Telstra Says No To Filtering - The Australian : When Australia's largest carrier decides it can't participate because of 'customer management issues' (possibly due to them moving a large chunk of their support staff, who'd receive the brunt of the complaints, offshore earlier in the week) -- it says something about the way this particular idea will affect everyone in the country, in one way or another.


  • How To Easily Bypass Australia's Internet Filters (for free) - Sydney Morning Herald : Explains to the technical neophyte how to use VPN software and other proxy methods to bypass the filter, in a worst-case scenario.


  • Labor's Mandatory ISP Internet Blocking Plan - Electronic Frontiers Australia : Analysis of the ACMA proposal, discussion of why this proposal affects everything from online commerce to the civil liberties of Australian citizens, and a well-reasoned argument for why parents should filter their children's use themselves, backed by a locally installed filter on the computers in the home -- If you haven't read this, you certainly should.

Wednesday, December 10, 2008

Apple Sued Over iPhone Performance Issues

I stumbled across this article on Wired a few days ago. US-centric as it is, the same drop-outs seem to happen here in Australia whenever the phone switches from 3G to the older GSM network.

Two bars on 3G on my Nokia E66 or N73 versus four bars on the iPhone, less than three inches away from each other while the phones are idle -- yet the Nokias complete the calls and the iPhones drop out.

Back in August, Optus (SingTel) offered 'Goodwill Credits' to users who suffered woeful network performance following the launch of the iPhone, which they're not doing now -- yet the latest firmware update doesn't seem to make a scrap of difference.

Perhaps that's why I saw a paper-printed advert in my local Optus dealership that said 'iPhones available for pre-paid plans, $799 AUD for 8GB, $899 for 16GB - while stocks last.'

Does anyone else have similar issues with providers here, or is this just another case of 'never buy G1 hardware' coupled with 'if it isn't broken, don't replace it?'

Thursday, December 4, 2008

Regarding 'The Free World', 'The Internet' and You.

It's not often that politics gets my back up enough that I feel the need to post about it here, but while watching Question Time in the Senate last night, the topic of the great firewall of Australia came up -- again.

For International Readers: This isn't exactly new -- compulsory ISP-level filtering was trialled in closed quarters in 1999 and 2001; however, those trials were opt-out and focused on guarding against underage illicit content.

For Local Readers: There's a protest going on in the state capitals for the weekend of the 13th and 14th of December, if you care -- you should be there.

For everyone: The EFA has a very well-researched document on why filtering of this type is a flawed exercise, which you should read.

This time around, there are at least two lists -- one banning underage illicit content and the other banning 'undesirable content'.

Couple this with the fact that both of these lists are privately built, without public consultation -- and the fact that there's no ability to opt out.

"Canberra, We may have a problem."

(oh, and the rest of the world is laughing at you, just by the by.)

Anyway...

Conroy was discussing the porn and blocked-keyword sites that make up the ACMA list for the trial run (beginning on December 24th), when Senator Bernardi said:

I note that the minister failed miserably to answer that question, which was specifically about the number of people needed for a trial to be credible. I also note that in the expression of interest documents the second stream of the trial includes a filtering of other unwanted content. I ask the minister:
Has this unwanted content been identified, and by whom?


Senator Conroy came up with an interesting number regarding blocked sites that I'd not heard before:

The list could contain 10,000 [potential sites].
When you look around the world at Interpol, the FBI, Europol and other law enforcement agencies and you look at the size of the lists that they are actually using at the moment, 1,300 would not be sufficient to cover the URLs that we would have supplied to us with the purpose of blocking.



(Quotes from Australian Government Senate Hansard - 03/12/2008) (any emphasis mine)

I wonder: supplied by whom?

Given the ACMA already has a link for people to report prohibited content, one wonders whether the government plans on listing those reported sites verbatim alongside the ones supplied by law enforcement, or at least vetting them to ensure rogue parties aren't submitting them for their own ends.

Thursday, November 20, 2008

Repairing an Apache 2.2 Modules Installation.

note: This article is intended for a technical audience -- you should use caution when modifying a production system -- caveat emptor.

Sometimes, Ubuntu (and before it, Debian) drives me up the wall.

Recently, an apache2.2-common upgrade saw fit to blow away my /etc/apache2/mods-enabled directory but not recreate the defaults, so I was left with an empty directory and a server that wouldn't restart due to various errors that looked similar to:


"Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration"?"


Looking around on the web, this doesn't appear to be a new issue, but it doesn't appear to be terribly well answered either -- all I knew was that I had a box that needed SSL, PHP and Expiry Headers and, well, that's it -- aside from the basic functionality.

My first test was to purge the package and re-install it, which did give me back the functionality I wanted, together with a bunch of other modules I didn't need -- I proceeded to leave it and go to bed, only to be greeted with a message from the hosting provider telling me I'd overblown my shared hosting's RAM quota for the day and that my account was temporarily suspended.

So, I removed the directory and started working piece by piece to put things back together until both WordPress and osCommerce booted up and ran correctly.

note: You can also do this with Apache's handy a2enmod program (see the example below), but I'm a purist -- so I'm going to do it here via 'ln -s'.
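For reference, a rough a2enmod equivalent of the symlinks used throughout this post (run as root on a Debian/Ubuntu-style layout) would be something like:

a2enmod authz_default
a2enmod authz_host
a2enmod dir
a2enmod alias
a2enmod mime
a2enmod asis
a2enmod env
/etc/init.d/apache2 restart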

First, to obtain basic functionality, you'll need to log in as root (or sudo to root) and issue the following commands, depending on which error your Apache 2.2 installation gives you.

If you see: "Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration?", run:


ln -s /etc/apache2/mods-available/authz_default.load /etc/apache2/mods-enabled/authz_default.load
ln -s /etc/apache2/mods-available/authz_host.load /etc/apache2/mods-enabled/authz_host.load



If you see: "Invalid command 'DirectoryIndex', perhaps misspelled or defined by a module not included in the server configuration?", run:


ln -s /etc/apache2/mods-available/dir.load /etc/apache2/mods-enabled/dir.load


If you see: "Invalid command 'Alias', perhaps misspelled or defined by a module not included in the server configuration?", run:


ln -s /etc/apache2/mods-available/alias.conf /etc/apache2/mods-enabled/alias.conf
ln -s /etc/apache2/mods-available/alias.load /etc/apache2/mods-enabled/alias.load



If you see: "Invalid command 'AddType', perhaps misspelled or defined by a module not included in the server configuration?", run:


ln -s /etc/apache2/mods-available/mime.conf /etc/apache2/mods-enabled/mime.conf
ln -s /etc/apache2/mods-available/mime.load /etc/apache2/mods-enabled/mime.load



If you don't see any of those, but WordPress 2.6.x (or 2.7.x) will not let you log in (i.e. you can install it and you see the admin screen, but you get 'permission denied', 'forbidden' or a directory index rather than your admin dashboard), then try running:


ln -s /etc/apache2/mods-available/asis.load /etc/apache2/mods-enabled/asis.load


and restart Apache.

update [12-12-2008]: WordPress 2.7 will fail to install unless you have the 'env' module enabled, so you may also need to run:


ln -s /etc/apache2/mods-available/env.load /etc/apache2/mods-enabled/env.load


Once you've restarted, your Apache installation should basically work -- to add advanced functionality, you should visit the Apache 2.2 Modules Documentation pages and adapt the lines above to suit the function you need.

Tuesday, November 4, 2008

Signs Of The Future.

Since Luis Villa first mentioned it, I've been following the election blog at Princeton -- I don't read many political blogs anymore, but it has to be one of the better ones on my list; the posts are impartial and the graphs and charts are clear and concise.

I'm not an American, so I can't vote -- but having dealt with Americans on a near-daily basis for the last eight years, I'd like to think there'll be enough people with enough power to enact some serious change in the political landscape, for the people, for the country and for the world.

It'll be interesting to watch the live graphs on the Princeton site over the next few hours.

update [05/11/2008 16:15 GMT+11]: Looks like Obama Wins -- the people have made their choice.

Monday, October 13, 2008

Smart-ISP Configuration for Postfix.

note: This is another 'no-brainer, don't forget this again' type of post, but I thought I'd put it here in case anyone else can make use of it.

Recently, I was told that a client needed to forward all their corporate mail via their local ISP and that I should set that up for them using their existing, internal-only mail handling Postfix server.

The key additions to their /etc/postfix/main.cf file (aside from configuring SASL, which you should do anyway, and setting the relayhost parameter correctly) were:


+++ mydestination =
+++ local_recipient_maps =
+++ local_transport = error: no local delivery service


and:


+++ myorigin = the outbound domain name you need.


Then, you need to comment out (with a #) the local transport method in your /etc/postfix/master.cf file.
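For completeness, a minimal sketch of the relayhost and SASL side mentioned above -- the hostname, port and map path here are placeholders, not values from the client's setup:

# /etc/postfix/main.cf -- relay all outbound mail via the ISP, authenticating with SASL:
relayhost = [mail.example-isp.net]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous

The local transport entry to comment out in /etc/postfix/master.cf typically looks like:

#local     unix  -       n       n       -       -       local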

Remember that, brain!

Friday, October 10, 2008

Clickjacking, 1999 Called...

Betanews isn't something I read often anymore, but this article intrigued me.

It's amazing that after nearly 10 years of active development on the web, standards and the rest, the best idea people can come up with for preventing clickjacking is using security=restricted to break frames (aka frame-busting code).

Using mod_security, you can at least write filtering rules that eliminate iframes and other annoying content at the server level; for Firefox users, NoScript does an excellent job of handling it at the desktop level (after, of course, enabling the "Forbid IFRAME" option).
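As a very rough sketch (not a production ruleset), a mod_security 2.x rule along these lines could catch responses containing iframes -- the message text is just an example, and you'd want to scope it carefully since plenty of legitimate pages still use iframes:

# Inspect response bodies, then deny any response containing an iframe tag:
SecResponseBodyAccess On
SecRule RESPONSE_BODY "<iframe" "phase:4,t:lowercase,deny,log,msg:'iframe found in response body'"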

For Internet Explorer users, I don't know what to tell you -- something tells me that's why the Internet remains as it is.

I often wonder if the whole IFRAME tag will be removed from HTML 5.0 -- with all the forward-thinking ideas that CSS 2.1 and 3.0 bring to the table, the only people who seem to use frames on new pages now are those wishing to exploit them.

Saturday, October 4, 2008

VSFTPd Configuration for Non-Interactive Users

note: This article is intended for a technical audience -- you should use extreme caution when modifying a production system -- caveat emptor.

This is one of those 'it's not broken until you trip over it' things that I wish I'd known about earlier -- in posting it here, I'm hoping other people can find it without wasting the amount of time I did.

So, I had reason to set up a new VPS box with the LAMP stack, SSH/SCP and VSFTPd for FTP use -- my usual command looks something like:

useradd -n --gid www-data -s /bin/false [username]


In a nutshell, this:



  • Does not create a new group specific to the user.

  • Makes the user's primary group www-data (useful for users being able to access / upload / modify their own web code).

  • Sets the default shell to /bin/false, which does not allow the account to log in interactively.



However, when you use an FTP client, it bails out with a 530 error, telling the user either their username or password is wrong.

So, the next step is to reset their password and try again -- nope, no difference.

Try changing the shell from /bin/false to /bin/bash with chsh though:

chsh -s /bin/bash [username]


... and everything works correctly.

We don't want to use /bin/bash though, because (amongst other things) it allows interactive logins via SSH or the console, which poses a security risk for the other users of the box.

Ubuntu & Red Hat (and probably a whole bunch of others) include the nologin command, which does exactly what we want (prints a message that the user cannot log in, and exits).

OK:

chsh -s /sbin/nologin [username]


Try FTP.

It still errors.

As it turns out, the reason it errors is that /sbin/nologin is not included in the /etc/shells file that VSFTPd and other system daemons use to determine whether your shell is valid.

Turns out, it's a very easy thing to fix -- simply:

echo /sbin/nologin >> /etc/shells


note: Some distributions ship this as /usr/sbin/nologin -- you may wish to run whereis nologin first to determine where your copy is.

Try FTP again.

:)

Thursday, October 2, 2008

Disabling Control-Alt-Delete on Linux Servers

A colleague rebooted a server unintentionally by having the keyboard plugged into the wrong machine (or, more correctly, by typing on a keyboard not connected to the box he was looking at).

After the users swore at him a little, he rang for advice -- luckily, in these days of event-driven machines, on a recent Linux distribution (RHEL 5 / Ubuntu 8.04) disabling C-A-D on critical machines is easy.

First, open /etc/event.d/control-alt-delete.

Now, change the line:


--- exec /sbin/shutdown -r now "Control-Alt-Delete pressed"


To:


+++ exec /bin/echo "Control-Alt-Delete has been disabled"


Save the file and restart the machine; the next time you press Control-Alt-Delete, you'll get a nice message saying things were disabled, rather than a bunch of angry users.

Saturday, August 30, 2008

Bandwidth: Limits, Speeds, Standards?

Christopher Blizzard recently wrote about Comcast (a major US ISP) imposing bandwidth limits on their customers -- unfortunately, something those of us in Australia know all too well.

My initial thoughts, included a comment:


I don’t think it’s ‘that’s a huge pile of angry non-americans’ any more that it is ‘the rest of us, would like to welcome you, the americans — to needing to put up with what the rest of the world deals with on a daily basis’ or, consumerism.


After pondering on it for a while longer, especially in relation to local users -- I have to wonder:



  • If the average cost of bandwidth in Australia is $50 AUD for 5GB of quota, which at 1.5Mbit/s equates to roughly 7.5 hours of sustained download time -- what does an average household actually do with their internet the rest of the time?


  • Comparing the three ISPs I deal with on a daily basis:


  • Telstra ran a TV commercial recently where they demonstrated a 'BigPond Connected Home' with 2 adults, 1 adult child, 2 children and a dog using the internet at the same time -- one streaming TV, one viewing Facebook, one browsing eBay and one booking travel.

    Presuming the family only paid the 'average' fee, $59.95 at 1.5M gives you 600M -- yes, that's megabytes (not a typo) -- with excess usage charged at $0.15 a megabyte.

    Looking at the list of Telstra unmetered sites, Facebook isn't in the list, YouTube isn't, and neither is eBay -- which begs the question: how much is the average family's internet bill per month?



  • SingTel/Optus comes out marginally better, providing a 15GB plan for $59.95 AUD a month with no excess data charges -- but a throttled speed of 128kbps once you've hit your limit, and no apparent unmetered content.



  • Internode comes out better still, providing a 25GB plan at ADSL2 speeds for $59.95 with no excess usage charge (albeit a slower throttled speed of 64kbps), plus an extensive unmetered mirror of online games and free software, as well as a variety of uncapped site access.




In the last week, I'd been using my Playstation 3 at home a fair amount -- and decided to do a bit of research into how much bandwidth the average game uses, by running a few demos with my DSL modem plugged directly into the PS3, thus removing the rest of my LAN's traffic from the equation.



  1. Call Of Duty 4, 6-man multiplayer with a 15 minute time-limit used 23M of data the first time, and 24M of data the second time (the only difference was the map we used).



  2. Grand Theft Auto 4, 12-man multiplayer with a 20 minute time limit used 40M of data each time.



  3. The new demo for EA Sports NHL '09 came out on the Australian PS3 Network -- which I promptly grabbed as a 'normal use case' test for the average household. At 1102M, on its own that accounts for $169.33 AUD of excess usage charges at Telstra, or nearly a tenth of the available bandwidth at Optus, using the above numbers.



After returning my LAN to the equation, I then stumbled over the following article, which talked about the USA lagging Japan in terms of aggregate bandwidth for the medium-to-long term, and thought: sure, Japan can reach the Japanese at 63M, possibly the Japanese eBay or Amazon too, but what's the speed like from there to Europe, or the USA?

I understand that it actually costs a lot of money for anything to get to Australia, and that the cost of cables, backchannel links and maintenance is prohibitive.

I also understand that Telstra is still in a position where they can play overlord to the communications network in this country, and that a mentality of 'we built the links, you paid for them, now we profit from them' is the standard -- but that's the case with any 'shareholder-concerned' business.

Maybe things would have been different (better?) if the government had forced Telstra to hand over control of the physical line infrastructure to AUSTEL 15-20 years ago, rather than dissolving AUSTEL, forming the ACA (and later ACMA) and creating yet another mid-range government department, while allowing the sole telecoms provider to become a complete juggernaut.

They didn't -- and now we the consumers are paying and will continue to pay until either:

a) the government regulates in consumers' favour across the country (rather than concentrating on 'the bush')

or

b) the cows come home (especially with the G9 facing resistance from the ACCC over their Fibre-To-The-Node proposal, and Telstra pulling the telecoms version of an all-in, over-the-top raise by launching their own proposal for the rebuilding and revamping of Australia's telecommunications infrastructure).

According to Chris's other comments, at least the pricing situation here isn't as bad as the one in South Africa, where the equivalent of $140 AUD buys you 10GB -- but that's for a userbase of around 10,000 users, not 16.6 million.

The next few years will be interesting.

Saturday, August 23, 2008

Playstation 3, Streaming, Formats ... Ick.

I've been working away at a media streaming project for the last week or two -- mainly because running between the machine that holds most of my web-content and the TV becomes tiresome, especially when you've got most of the content in .mkv or .mp4 formats that don't fit more than an episode or two on a DVD-R.

I've tried TwonkyVision, which was widely recommended as the best solution for streaming with transcoding from external sources -- but is purchase-ware -- as well as both FUPPES and MediaTomb from the world of OSS.

Quite frankly, I came away quite unimpressed.

Not because any of the software is bad, or particularly difficult to set up (albeit you need a newer version of FFMPEG to get any reasonable quality from the HDTV transcodes, and that creates Debian pain)...

More because the Playstation doesn't do half the things you'd expect of it.

Granted, it's behind the XBox360 juggernaut in terms of market time -- but if you're going to do format compatibility, please -- SONY, make a decent job of it.

First, I tried connecting the three media servers using the default instructions for each, enabled uPnP and a default multicast route on the boxes that held the data -- of course, while that meant my laptops could see the share, the PS3 didn't.

Further investigation concluded it's nigh on impossible to get the PS3 (40GB, running firmware 2.42) to talk to a media server via the wireless interface.

15M of CAT5 later and a re-configuration of the PS3 to use the wired interface, we had liftoff -- well, nearly: the PS3 had dropouts whenever it tried to look up directories on the share; it'd start searching, get through the first 20 or 30 entries and stop.

More poking ensued -- turns out the PS3 needed an explicit route to the box hosting the media. Easily fixed, but in no manual; it only triggered in my brain because the box doing the serving (running Red Hat Enterprise Linux 4.x) was triggering source-route notifications in my logfiles.

Now the PS3 saw the media -- as a variety of MPEG-2s and Unknown Data.

Back to the interschnitzel, to find MediaTomb and FUPPES both have 'a workaround' to make the PS3 see DivX files -- except neither actually works on PS3s with 2.4x firmware 'out of the box' (a phrase that is becoming my new favourite annoyance).

For MediaTomb this means adding:


<map from="avi" to="video/x-divx"/>
<map from="divx" to="video/x-divx"/>


For FUPPES this means:


<file ext="avi">
<type>VIDEO_ITEM</type>
<mime_type>video/x-divx</mime_type>
</file>


... to your configurations.

A 'reboot and re-import' later, the PS3 saw a bunch of MPEG-2 and DivX files. Good.

Except half of them wouldn't play. Bad.

In fact, the PS3 seems pickier about which DivX/XviD files it'll play than the documentation suggests -- the exact same file with the exact same settings encoded in XviD 1.10 and 1.12 plays differently on the PS3: the 1.10 file is called 'Corrupted Data', but the 1.12 copy plays normally, albeit with audio skew caused by lag.

Then there are MPEG-2s that won't play if they are in a TS container, but will if the stream is copied to a PS container first.

Matroska (MKV) based H.264 files won't play at all either -- having a platform that says it supports 'new media' and not having a Matroska muxer/parser is... er, strange.

My personal favourite, though, is that transcoding anything high-definition fails using 'chunked' encoding because the 'buffer' size (I set 512k) is too large.

Luckily, that's an easy fix for FUPPES. For MediaTomb it's not straightforward: you need to change the 'fill' size to work around it... but again, that's not documented anywhere either.

The working settings I came up with (which need to be added to each transcoding section where you're converting H.264 -- .mkv or .mp4 mostly, unless you watch high-definition pornography, in which case I'll leave it as an exercise to the reader) were:

MediaTomb:


<buffer size="20971520" chunk-size="524288" fill-size="10485760"/>


FUPPES:


<http_encoding>stream</http_encoding>


In the end, after recompiling both platforms, I settled on MediaTomb -- although I now have both RHEL 4 and Ubuntu 8.04 packages for both platforms, built from their respective RCSes as of 20080818, so I can switch easily if I want to change.

There'll be a forthcoming post on how to configure the transcoder scripts and FFMPEG, sometime when I've got hours up my sleeve to document it with sane reasoning and screenshots -- I've got it going now, but if I wrote it up today, it'd look ranty and like I was SONY-bashing.

I agree with this guy. The XMB is nice, and the changing colour for the seasons is a nice touch too, but if the PS3 doesn't increase the titles available via the Playstation Network (in Australia we don't have a Madden '09 demo yet, for example -- nor do we have Castlevania, but we do have a bunch of music videos and some streaming from various trade events, for all the use they are), then it'll be behind the X360 for a while to come.

However, if they don't fix the format capabilities -- probably by this holiday season, then they'll be behind the X360 for a long, long, long time to come.

Really, a Matroska demuxer, support for all three main H.264/AAC profiles, MPEG2-TS and -PS support, and a differentiation between DivX (DX50/DX60) and XviD (XVID) would be nice -- at least so I don't have to transcode the latter to view something the PS3 is natively capable of viewing in the first place.

If we got that, plus maybe a Dirac and Vorbis implementation (I mean, does anyone use ATRAC?) -- the Playstation 3, to quote a great movie reference... "would become more powerful than you could possibly imagine."

... but at the moment, it's a games platform, with a swanky front-end.

If SONY think it's anything more than that, they owe me my weekend back.

Tuesday, August 19, 2008

Totem Packages Available (see: Totem, GStreamer & nVidia Graphics Cards)

Playing with a possible fix for these bugs in Totem.

There are packages for Ubuntu Hardy here -- they're the same as the release ones except for one rather messy hack that shifts the Hue plus-or-minus 90 from whatever position Totem starts in.

It's basically the code from here cleaned up a tad and dropped in as a patch.

They seem to work for me, using the GStreamer pipelines I presented in the last post.

Not a clean solution, but I can play a full playlist of videos without the colour skewing once -- which is better than upstream can do at present.

Friday, August 15, 2008

Bluetooth's "Operation Not Supported By Backend" Message

This morning, while trying to move some files between my mobile phone and my Ubuntu 8.04.1 machine, I was greeted with an "Operation Not Supported By Backend" message containing the address of my phone, and the drag between Nautilus windows was terminated.

Turns out, it is because the GVFS backend for Bluetooth doesn't support the device -- and using the older, GNOME-VFS way isn't supported either.

The 'Send-To Bluetooth' option (right-clicking the files you want to send and selecting your mobile) works as per normal and successfully transferred 25M of files to my phone in under a minute.

Monday, August 11, 2008

Mozilla, SSL & the 'non-optimum' Security Warning

A number of people have been blogging about the state of the SSL Certificate Security Warning since the release of Firefox 3.0.

I must admit, personally I don't mind the dialog that pops up -- it scares the everyday user into thinking twice before sending their data to Nigeria by accident.

It is actually far more awkward to import the various extra root certificates into the various operating environments, than it is to do certificate exemptions on a site by site basis.

I found the report that Federico linked to slightly disturbing -- if 58% of certificates are indeed invalid, expired or otherwise bad, that's a hell of a lot of users that are experiencing an all-too-confusing dialog box far too often.

(On that note: If you have an expired certificate, you should really get it renewed -- especially with a commercial signer, after all -- you've built a reputation with that certificate, you shouldn't have customers turning away because that little yellow bar they've been used to becomes a scary looking error message.)

I like CACert myself; I use it for things regularly and I've configured several e-commerce installations to use certificates from it, after going through the somewhat painful verification process to get a two-year certificate instead of a three-month one.

For commercial stuff though, CACert isn't really practical -- especially considering very few operating environments include its root certificates by default.

For semi-commercial stuff, there's no middle ground: there's either commercial CAs, homebrew, or nothing.

For personal use, there's GNU Privacy Guard -- a much better, but less Microsoft-supported way of confirming you really are who you say you are.

I've often thought about the issue in my business, where I see all sorts of certificates on a week-to-week basis -- and often need to handle the case of 'a user complained my certificate was invalid, I bought it and gave it to you, so you must have broken it.'

The thing I haven't been able to come up with yet, is the solution:


  • For big corporates, there's Verisign or Thawte, which is prohibitively expensive for a single user at home.

  • For SMEs, there's second-tier signers like Comodo, GoDaddy or Network Solutions, where ~$100 USD p/year gets you a certificate that works 'most of the time'.

  • For Free Software developers and other personal use, there's basically CACert, or doing it yourself -- neither of which is supported by anything remotely mainstream without doing a hell of a lot of legwork yourself.



Maybe Mozilla themselves, or Google, could do something to help the situation by running a CA that works in parallel with the other services they provide -- but how would that be any less work than rubber-stamping CACert?

Well, even though the principle is the same, if Google did it, it'd probably be supported everywhere -- but as of now, CACert are still running the gauntlet with Mozilla, and will probably have a much more difficult task getting past Microsoft and Apple accordingly.

Wednesday, August 6, 2008

Autodesk Backburner & VMWare Clones

If you've installed Autodesk's Backburner product within VMWare (part of 3D Studio 7/8/9/2008) and have trouble getting the Backburner server to start because the "UDP interface is not valid in this context", the solution is two-fold:

First, power off your VM.
Then, edit the .vmx file for the failing VM and change:



ethernet0.generatedAddressOffset = "0"



To:



ethernet0.generatedAddressOffset = "1"



Power up the VM and log in to Windows -- using Windows Explorer, navigate to the C:\Program Files\Autodesk\Backburner\Network directory.


Delete the backburner.xml file.

Now when you restart the Backburner server application, a new configuration file will be created with the new GUID and SID of the VMWare instance and it should start up and run normally.

Tuesday, August 5, 2008

Encoding Videos for your PS3 using GStreamer

Following up my previous post, I've been playing more with HDTV content on my machines -- and spending more time re-coding content for my PS3 so I can watch it from the comfort of my lounge.

Using Avidemux is nice, but occasionally it's nicer to stick it in a terminal, with relatively low overheads, and use the GStreamer framework to do the same job.

So, here are two examples of how to use the CLI to generate content playable on the PS3.

For Standard Definition (SDTV) content -- you can use:



gst-launch-0.10 filesrc location="input.avi" ! decodebin name="decode" decode. ! queue ! ffmpegcolorspace ! ffenc_mpeg4 bitrate=999999 gop-size=24 qmin=2 qmax=31 flags=0x00000010 ! avimux name=mux ! filesink location="output.avi" decode. ! queue ! audioconvert ! lame name=enc vbr=0 ! queue ! mux.



As of the 2.42 firmware, the PS3 has issues playing VBR audio -- so you have to explicitly turn it off (setting it to CBR audio) using the vbr=0 option to lame.

For High Definition (HDTV) content in .MKV format, the x264 encoder works better, giving a higher-quality re-encode at the expense of a larger output file (obviously, if you've downloaded an HDTV file that's already been encoded in the .avi format, you'd want to use the SDTV command above, because the .avi file would already have significant quality loss over the original source material).

In HDTV, we can also use AAC audio to generate 4.1 channel audio -- as opposed to recoding to MP3.



gst-launch-0.10 filesrc location="input.avi" ! decodebin name="decode" decode. ! queue ! ffmpegcolorspace ! x264enc ! ffmux_mp4 name=mux ! filesink location="output.avi" decode. ! queue ! audioconvert ! faac name=enc ! queue ! mux.



Where input.avi is the path to your existing media source and output.avi is the path and file that you'd like to save.

important note: Yes, the full stop (after the mux.) in both cases is intentional -- if you take it out, GStreamer will complain about the pipeline.

Monday, August 4, 2008

Totem, GStreamer & nVidia Graphics Cards

I've been bitten by these bugs fairly often on my HP DV6000 laptop -- and with nVidia claiming it's nothing to do with them, I decided to do a little investigation.

Turns out, Totem seems to reset the video settings after each video has been played.

If the quality sliders in Totem are dead center for all four settings (Saturation, Contrast, Hue and Brightness) ...



... the video displays with a bluish tinge unless you use the following GStreamer Video Output pipeline:


ffmpegcolorspace ! video/x-raw-yuv,format=(fourcc)YV12 ! videobalance contrast=1 brightness=0 hue=-1 saturation=1 ! autovideosink


If the colour settings slider for Hue is at the far left (as has been suggested as a solution by several people), the following pipeline works:


ffmpegcolorspace ! video/x-raw-yuv,format=(fourcc)YV12 ! videobalance contrast=1 brightness=0 hue=0 saturation=1 ! autovideosink




However, regardless of which pipeline one chooses, Totem seems to reset itself each time, seemingly trying to adapt to the optimum setting, which means the first video you play will display correctly, but following videos will be blue.

At this point, I'm not really sure how to fix it -- but nVidia suggest it isn't their problem and Totem should fix it.

The interesting thing about that is that if I take a screenshot of a playing video, the screenshot is the correct colour, all the time.

Saturday, July 19, 2008

Tweaking XDG Settings on Ubuntu

More of a 'remember this for later' post, but the content took over an hour to figure out, even though it was remarkably simple.

XDG allows you to alter the default directories for a number of commonly used locations found on a user's desktop.

These settings are stored in $HOME/.config/user-dirs.dirs; you can change, for example, Videos to be My Videos (making it more like Windows), or hide the Templates directory by changing it to .config/nautilus/Templates (which you'll need to create first).
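As a minimal sketch, the two changes mentioned above would look something like this in $HOME/.config/user-dirs.dirs (the paths are just the examples from this post):

XDG_VIDEOS_DIR="$HOME/My Videos"
XDG_TEMPLATES_DIR="$HOME/.config/nautilus/Templates"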

However, after editing this file and logging out and in again, the GNOME panel and file chooser may show duplicates of these directories (once for the old directory, once for the new one).

The solution is to remove the $HOME/.gtk-bookmarks file, then log out and log back in -- the file gets regenerated at login if it doesn't exist, and it reads the contents of your $HOME/.config/user-dirs.dirs file in order to get the correct locations.

Thursday, July 10, 2008

Encoding Videos for your PS3 with Avidemux

Sometimes, the interschnitzel is helpful -- sometimes, it leaves you without hair while you try every conceivable setting to make things work.

My better half wanted to watch some IPTV we'd downloaded on the big TV (standard def), which is hooked up to my Playstation 3. IPTV comes down as .AVI for SDTV and .MKV for HDTV stuff, but the PS3 decided all of that was 'Unsupported Data'.

Checking the list of supported video files (please Sony, add .MKV to your list), we found that AVI files should be supported.

For those interested, there's a more detailed examination of what AVI files (DivX or XviD) the PS3 will play, over here.

After some searching, we fired up Avidemux (2.4.2, Linux -- but the Windows version would work too, if you didn't have anything else available) and played with the settings -- the best set we came up with was:

Video: MPEG-4 ASP (lavc)
Audio: MP2 (lavc)
Format: AVI



To make the whole process a little easier, I also went into Edit/Preferences/Automation and switched on Automatically build VBR map, Automatically rebuild index and Automatically remove packed bitstream (as all the AVI files come as "AVI, pack VOP", which needs to be unpacked before you can change the output format type).



Now, save your file -- put it on a USB stick or DVD and watch TV on your PS3 :)

note: if you're using DVD-RWs to watch movies from (my personal preference), the Sony DVD-RWs (gold, in green cases) work flawlessly and seem much more reliable than the TDK or Imation ones (my 40GB PS3 has yet to successfully read an Imation DVD, for example).

important note about USB sticks: After some head-scratching over why videos wouldn't play from USB sticks, I discovered the USB device should be FAT32-formatted (the default on SanDisk drives; if you've got access to a Linux box, mkfs.vfat will do the job). Then, on the device, you need to create a directory called "VIDEO" (all caps, without the quotes) and put your videos in that directory -- then the PS3 will see them properly.
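If you do need to reformat a stick under Linux, a rough sketch (assuming the stick appears as /dev/sdb1 and isn't currently mounted -- double-check the device name, as this wipes it) would be:

# Format the USB stick as FAT32, then create the VIDEO directory the PS3 expects:
sudo mkfs.vfat -F 32 /dev/sdb1
sudo mount /dev/sdb1 /mnt
sudo mkdir /mnt/VIDEO
sudo umount /mnt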

Wednesday, July 2, 2008

Fixing Printer Scaling on Ubuntu 8.04 (bug: 217151)

During a recent rollout of "Ubuntu on the Desktop" I ran foul of this bug in Epiphany.

The symptoms, as the bug describes, are that the headers and footers of a document are printed, but no body text -- we tested this on various printers and came out with the same result.

To fix it, we applied a hack at login time that manually set people's print scale preference back to (roughly) 100%, by editing the ~/.gnome2/epiphany/print-settings.ini file and changing:

--- Scale=1

to

+++ Scale=99

(note: I never use 100, because it makes badly designed websites -- like online banking sites -- sometimes print outer borders on a second page; 99 fixes that with no user-discernible difference to the print quality)

It fixed the problem, but hacks are never elegant, break easily and are not good to apply over a 400-workstation installation, so this morning my task was to track it down and fix it.

Turns out, it's an innocuous little bastard in the Epiphany Gecko code that caused it.

In the embed/mozilla/GeckoPrintService.cpp file, there's a line (around 737-739) that reads:

" gtk_print_settings_set_scale (aGtkSettings, 1.0); "

But the documentation for gtk_print_settings_set_scale() states:

" scale : the scale in percent "

So, the solution turned out to be changing:



gtk_print_settings_set_scale (aGtkSettings, 1.0);



To:



gtk_print_settings_set_scale (aGtkSettings, 99.0);



The bug has now been noted, with a patch that fixes the issue here.

Friday, June 27, 2008

Interesting Viewpoint

I wandered across this article this morning and thought it deserved posting, especially in the light of similar Microsoft sentiments in recent months.

While Microsoft can't possibly 'buy out' Open Source as a whole, it'll be interesting to see whether this means Microsoft starts interacting with Open Source in a better light to ensure more interoperability between systems (possibly the only way to achieve continued market share), whether they'll make better-quality end products, or systems to deploy those products on, or whether they'll continue to do exactly what they've done in the past.

One thing is nearly certain: within the next five years, Microsoft will need to do something a) special or b) underhanded to continue to operate in the market as they have done -- the number of end-users coming to me and other ISVs since the advent of Microsoft's Vista product line and asking about Linux on the desktop is increasing, and that can only be a promising sign for the future.

The time for lip-servicing the FOSS market is over.

Monday, June 16, 2008

Using mod_security 2.5.x with Apache 2.x

note: This article is intended for a technical audience -- you should use extreme caution when modifying a production system, as your data will be nearly impossible to recover if you use this command incorrectly -- caveat emptor.

There are a lot of posts about how to set up the mod_security module for Apache, but few on how to configure it -- hopefully people find this post useful in doing just that.

Before we start, I'm assuming you've already installed mod_security 2.1.3 or 2.5.x (Red Hat/CentOS packages are here, Ubuntu/Debian ones are here, and an OpenSuSE howto is here).

I'm also assuming you've made a copy of the core rules that come supplied with the package and put them in the /etc/modsecurity directory.

note: If your distribution of choice doesn't ship the core rules with the packages, you can download those from here.

Now, to make a decent configuration.

First, move (don't copy, or the default configuration may override any environment-specific changes you make) the /etc/modsecurity/modsecurity_crs_10_config.conf file to /etc/apache2/conf.d/mod_security.

Open the newly moved /etc/apache2/conf.d/mod_security file and edit the following parameters (the edited values should end up looking like the example after this list):


  • SecResponseBodyLimit -- Because the default configuration doesn't check binary files, you may wish to reduce this to 256K, so change this value to 262144.

  • SecAuditLog -- The default configuration saves the logfiles relative to the configuration file directory; under most modern Linux/BSD distributions, the apache or www-data account already has rights to the /var/log/apache2 directory, so you can safely change this to /var/log/apache2/modsec_audit.log.

  • SecDebugLog -- Using the same rationale, you can change this to /var/log/apache2/modsec_debug.log.
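Put together, a minimal sketch of the edited values (paths as suggested above) looks like:

SecResponseBodyLimit 262144
SecAuditLog /var/log/apache2/modsec_audit.log
SecDebugLog /var/log/apache2/modsec_debug.log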


At this point, you should save your file and restart your Apache 2.x server in order to ensure your configuration works. If you run:

cat /var/log/apache2/error.log | grep "ModSecurity"


You should see the string:

"[Fri Jun 13 23:23:23 2008] [notice] ModSecurity for Apache/2.5.5 (http://www.modsecurity.org/) configured."


Which means we can proceed to add our rules to the configuration. To do this, open your configuration file again and add the following line to the bottom:

Include /etc/modsecurity/rules/*_crs_*.conf


This will add the core rules to your configuration. Once again, you can restart your server and the changes will take effect.

Sunday, June 15, 2008

Transmission, MIME-types, Java?

I'm posting this here for two reasons: a) the solution works but sounds like overkill, and b) I couldn't find any information on the interschnitzel directly -- it just happened to be a solution for another application that worked in this case too.

After updating Transmission (my favourite bittorrent client) on my Ubuntu 8.04 desktop machine, I found that torrent files I downloaded weren't appearing in my client, but they weren't appearing on my desktop either.

Sometimes they worked if I downloaded the .torrent file more than once, but more often than not, they didn't.

Looking through all the usual suspects (MIME types in Nautilus, etc.) proved unsatisfying, so I had a look in my .xsession-errors file (using tail -f ~/.xsession-errors from a terminal while downloading a file) and found:


GCJ PLUGIN: thread 0x816c470: NP_GetMIMEDescription
GCJ PLUGIN: thread 0x816c470: NP_GetMIMEDescription return
GCJ PLUGIN: thread 0x816c470: NP_GetValue
GCJ PLUGIN: thread 0x816c470: NP_GetValue: returning plugin name.
GCJ PLUGIN: thread 0x816c470: NP_GetValue return
GCJ PLUGIN: thread 0x816c470: NP_GetValue
GCJ PLUGIN: thread 0x816c470: NP_GetValue: returning plugin description.
GCJ PLUGIN: thread 0x816c470: NP_GetValue return


GCJ?

Ah, the Open Java Variation of Sun's Java Runtime.

Turns out, if you have the Ubuntu Multiverse and Universe repositories configured and you install the ubuntu-restricted-extras package (the one that gives you multimedia encoding and playback capabilities, Java and Flash Player, amongst other things), it installs the OpenJDK Java (openjdk-6-jre) variant and not the Sun one.

To install the Sun Java Runtime from Multiverse, run the following from a terminal window:

sudo apt-get install sun-java6-bin


Then, to make Ubuntu use it, instead of the OpenJDK code, type:

sudo update-java-alternatives -s java-6-sun


Then restart Transmission & your chosen Web Browser and try downloading your .torrent files again.

Friday, June 13, 2008

Using GnuPG Agent on the Console

After hunting around on the internet, I couldn't find a definitive answer on how to use GnuPG's agent from a remote console (it's easy if you have X installed, and even easier than that if you use the awesome Seahorse application with GNOME -- but pretty awkward if you have neither of the above).

I think I've come up with a usable solution that uses gpg-agent's --write-env-file option and does some minimal extra checking to make sure the agent doesn't get killed accidentally, as well as correctly exporting the GPG_TTY variable so applications like mutt and the package-builder tools on the console get the key handling right.

In your .bash_profile file, you need to add the following code near the end:


# Invoke GnuPG-Agent the first time we login.
# If it exists, use this:
if test -f $HOME/.gpg-agent-info && \
   kill -0 `cut -d: -f 2 $HOME/.gpg-agent-info` 2>/dev/null; then
    GPG_AGENT_INFO=`cat $HOME/.gpg-agent-info | cut -c 16-`
    GPG_TTY=`tty`
    export GPG_TTY
    export GPG_AGENT_INFO
else
    # Otherwise, it either hasn't been started, or was killed:
    eval `gpg-agent --daemon --no-grab --write-env-file $HOME/.gpg-agent-info`
    GPG_TTY=`tty`
    export GPG_TTY
    export GPG_AGENT_INFO
fi


Save the file, then log out and log back in, and you should find gpg-agent has been started correctly.

note: We use the "| cut -c 16-" in the first section in order to remove the duplicated GPG_AGENT_INFO= string from the output, which otherwise causes errors like:


gpg-agent: can't connect to `/home/paul/.gnupg/S.gpg-agent': No such file or directory
gpg-agent: can't connect to the agent: invalid value

Sunday, June 8, 2008

Fixing Font Resolution In Epiphany

After recently scheduling a large client upgrade from Ubuntu 6.06 to Ubuntu 8.04, a number of people started complaining that fonts looked bad in the web browser (Epiphany).

Although they all had video cards (nVidia) and monitors (Samsung SyncMasters) in common, a bit of Googling indicated an underlying software issue.

Fortunately, this is easy to fix.

Open a web browser and type about:config into the location bar, which should take you to the configuration screen -- consisting of a filter box and a larger portion containing all of the relevant tweakable parameters.

In the filter textbox, type layout.css.dpi -- it should be a Default, Integer value (that is, not bold) that looks like the screen below:



Now, if you right-click the layout.css.dpi entry and select Modify, you should get a textbox in the center of the screen; simply use the keyboard or mouse to select and remove the -1 and enter either 70 (if you have a resolution lower than 1024x768) or 92 (resolutions of 1024x768 or higher) instead, then press OK to return to the configuration screen.

At this point, the screen should look like the one below:



Now you can close the browser; the next time you restart it, your fonts should look smoother and more readable.

edit: If you find that you can't edit the text-box on an OpenSUSE 11.0RC1 or Ubuntu Hardy 8.04 installation, try forcefully closing Epiphany by typing the following command in a terminal window:

killall -9 epiphany-browser && rm ~/.gnome2/epiphany/mozilla/epiphany/!lock


and then try to modify it again.

Wednesday, June 4, 2008

Updated Unofficial GStreamer FFMPEG Plugin Packages for Ubuntu 8.04

As a follow-up to my recent post about the Fluendo codecs, I decided to take a look at the GStreamer FFMPEG plugin -- and because a new release had just appeared with a lot of new fixes, I built packages for it.

The packages are here; they're based around the recently released 0.10.4 release and have been built using the recommended upstream build of FFMPEG's libavcodec and libavformat code, rather than the (much) older code that ships by default in the Ubuntu 8.04 release.

These should be drop-in replacements for anyone using the current packages in the Ubuntu Multiverse repository.

Monday, June 2, 2008

Updated Unofficial Rhythmbox Packages for Ubuntu 8.04

I've been playing about over the last week integrating a bunch of useful patches into my preferred music player, Rhythmbox.

The packages are here; they're based around SVN revision 5710 and have a number of extra patches that haven't made it into the tree yet, including:

Bug 528814: RB should use podcast date and time when transferring to iPod (useful if you put your podcasts in playlists)
Bug 529873: The artdisplay plugin should be able to supply metadata (useful if you have coverart embedded in tracks and you'd like it transferred to the iPod)
Bug 345975: Show album covers embedded in files e.g. mp3 ID3 tags (very, very useful if you have art embedded in music from iTunes or some other tagging program)
Bug 140020: Song skips when position is moved maximum right (fixes a big bugbear I've had for some time, where clicking forward or back within the first second or two after fast-forwarding/rewinding with the slider causes Rhythmbox to skip the next track).

If you have been looking to try a newer Rhythmbox on your Ubuntu 8.04 installation, or have an iPod that you'd like to get more use out of, you might want to give these a go.

Monday, May 26, 2008

GStreamer Codec Install Gone Bad, Part Two?

Tonight, while tracking down an issue with another video file that wouldn't play in Ubuntu 8.04 but would in anything else, I discovered yet another quirk of the Fluendo codec megabundle -- this time with the H.264 codec.

Attempting to play a movie on a fresh installation of Ubuntu 8.04, in Totem with the fluendo codecs installed brought up a black screen with a properties dialog that looked like:



By fresh, I mean fresh: this box was a demo to show a new user how easy multimedia was (after successfully demonstrating how easy the printer was to set up using CUPS). It has only the standard installation of Ubuntu -- no multiverse, no universe -- in fact, the only thing we'd done (aside from installing the foomatic printer drivers and configuring ufw to do basic firewalling) was install the binary codecs in the .gstreamer-0.10/plugins/ directory, as directed by the instructions.

By new user, I mean someone who is planning on updating from Windows 98 to something else for the first time, has enough computing power to run a non-Microsoft-based operating environment but not enough to run Vista, and has heard enough people rabbit on about how good it could be to finally bite the bullet.

So, having hit the wall, I fired up Totem from the command line using totem --gst-debug-level=2 (which outputs minimal debug spew to the console) and found:


** Message: Error: File "/home/paul/.gstreamer-0.10/plugins/libmch264dec.so.7.4.0.20778" could not be used.
fluh264dec.c(414): gst_fluh264dec_setup (): /play/decodebin1/fluh264dec0:
Could not open module: libstdc++.so.5: cannot open shared object file: No such file or directory


After hunting around, I installed the libstdc++5 package from the universe repository and restarted Totem -- it now happily displays the file correctly, albeit with a major visual issue when you use the arrow keys to rewind the movie, but that's a story for another day.
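
For anyone hitting the same error, the fix boils down to a single package install (assuming the universe repository is enabled on your machine):


sudo apt-get install libstdc++5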



Unfortunately, there's no documentation on the website, or in the tarball, that tells you about needing to install older, relatively unsupported libraries to get something to play -- hence this post.

I just hope the next version is compiled against something slightly more recent, so it works out of the box in future.

Friday, May 23, 2008

Customising the GNOME Screenshot Application

Circa GNOME 2.20, the screenshot application was simplified -- it now just takes a screenshot of your entire desktop, with borders, and asks you to save it somewhere.

What happens if you'd like to take a screenshot of a specific window, or remove the borders, or add a drop-shadow, or do any sort of resizing?

You can make a custom launcher for it. Luckily, none of the backend settings were removed -- the user interface was just tidied up.

First, right-click on your desktop and select 'Create Launcher' from the menu, which should take you to the configuration screen -- consisting of four boxes you will need to fill in.

In the Type drop-down box, make sure Application is selected.

In the Name text-box, enter the name of the launcher -- Take Screenshot is a good choice, but you can call this anything you want.

In the Command text-box, you need to add the program with the options you need; you can select from:

--delay=X -- Delays the screenshot for X seconds after you double-click the link, very handy if you need to take shots of a menu bar or dialogue.
--window -- Takes a shot of the current window instead of the entire desktop.
--border-effect=X -- Takes an unmodified screenshot if you use 'none', adds a 2 pixel drop-shadow if you use 'shadow' or a 1 pixel black border if you use 'border'.

The Comment text-box is optional -- it lets you add a longer description that is shown when you highlight the launcher on the desktop. Because of the options we've used above, we'll call it: Takes Screenshot of the current window after 10 seconds.
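
For reference, the command that matches that description would be something like the line below -- i'm assuming the binary is gnome-screenshot, which is what this era of GNOME ships:


gnome-screenshot --window --delay=10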

When you are finished, your launcher screen should look like the one below:



Click on the OK button when you're happy with your settings -- and the launcher should appear somewhere on your desktop.

Saturday, May 17, 2008

OpenSSL Vulnerability for Ubuntu 6.06 LTS

Had a phone-call earlier about this, as well as a few e-mails since DSA 1571-1 appeared, so i thought i'd post this here and kill multiple birds with one stone.

Servers that run Ubuntu 6.06 LTS in its default configuration (or its LAMP configuration) are not vulnerable to the OpenSSL problem, because they are running OpenSSL 0.9.7, not 0.9.8c-1, which is the first version to exhibit the bug.
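
If you're not sure which version a particular box is actually running, a quick check from the command line is:


openssl version
dpkg -l 'libssl*' | grep ^ii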

Systems which are running any of the following releases are vulnerable to this bug:


  • Ubuntu 7.04 (Feisty)

  • Ubuntu 7.10 (Gutsy)

  • Ubuntu 8.04 LTS (Hardy)

  • Ubuntu “Intrepid Ibex” (development): libssl <= 0.9.8g-8

  • Debian 4.0 (etch) (see corresponding Debian security advisory)



This does not mean that you shouldn't check your users' keys -- if you've got users who use affected versions of Debian or Ubuntu (above), you should use the dowkd.pl script available here (GPG key) with the user option to scan your servers for users who have potentially compromised keys.

You can scan the local server using:

perl dowkd.pl host localhost

... and local users with keys using:

perl dowkd.pl user

If you see something like:

/home/[username]/.ssh/id_dsa.pub:1: weak key

You should re-generate keys for that user, using:

ssh-keygen -t [rsa/dsa] -b [1024/2048/4096]

... depending on your individual security needs.
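
As a concrete example, regenerating a standard 2048-bit RSA key for an affected user would look something like the line below -- remember that the new public key in ~/.ssh/id_rsa.pub then needs to be redistributed to any machine that lists the old one in authorized_keys:


ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa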

note: If you see:

/home/[username]/.ssh/id_dsa.pub:1: 2048 bits DSA key not recommended

You are not necessarily vulnerable -- there's nothing wrong with using 2048-bit DSA keys; longer key lengths provide better security at the cost of decreased performance.

note II: Using a blank passphrase for your SSH key is strongly discouraged -- if a would-be intruder can just press the ENTER key to enter your machine, what security does the key provide?

(The most-used analogy I have for passphrase-less SSH keys is: 'A key with a passphrase is like a door with a lock -- without one, it's just a door.')

Saturday, May 3, 2008

Blank Text Area in Zen-Cart?

Have you ever been editing a product description, or a category title in Zen-Cart and had the field go blank?

Since updating a bunch of older 1.2.x stores to 1.3.x, I have -- and customers don't like data entry at the best of times, let alone when you have to tell them to edit it down to less than 200 characters and input it all again, rather than just editing out the excess characters.

So i've spent the day tracking down a solution, which happens to be in the JavaScript counters for the various products.

If we look at /includes/modules/pages/product_info/jscript_textarea_counter.js

Lines 8-10 state:



if (excesschars > 0) {
field.value = field.value.substring(0, excesschars);
alert("Error:\n\n- You are only allowed to enter up to"+maxchars+" characters.");



If you change that to read:



if (excesschars > 0) {
field.value = field.value.substring(0, maxchars);
alert("Error:\n\n- You are only allowed to enter up to"+maxchars+" characters.");



You'll find you get the error rather than the field going blank -- and because it gives the user the ability to edit what they've typed, it's a much better solution to the problem.

note: If you use "Free Shipping" for any of your products, you should also apply this fix to the /includes/modules/pages/product_free_shipping_info/jscript_textarea_counter.js file (the files are identical).
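
If you'd rather not open an editor twice, the same change can be made to both files in one go -- a quick sketch run from the store's root directory, assuming a stock Zen-Cart layout and that you've taken backups first:


sed -i 's/substring(0, excesschars)/substring(0, maxchars)/' \
  includes/modules/pages/product_info/jscript_textarea_counter.js \
  includes/modules/pages/product_free_shipping_info/jscript_textarea_counter.js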

Monday, April 14, 2008

Transparent Proxying with Shorewall, SQUID and Privoxy on Ubuntu

note: This article is intended for a technical audience -- and while it seems to work for the two systems i've tried it on since, you should use caution when modifying a production system -- caveat emptor.

A 'remember these for later brain' post, mainly because I always forget the order when I haven't done it for a while.

note: you can do this with any distribution -- Ubuntu Server works and I use that, but you can substitute the names of your distribution's packages instead and still get much the same result.

First, having installed Ubuntu Server, you'll need the universe repository, as privoxy isn't officially blessed by the Ubuntu maintainers.
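
If universe isn't already enabled, uncomment (or add) a line like the following in /etc/apt/sources.list -- the mirror shown is just the main Ubuntu archive, substitute your local one:


deb http://archive.ubuntu.com/ubuntu/ hardy universe


... and then refresh the package lists as root with:


apt-get update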

Next, you'll need to install the various packages you'll need to make this fly -- as root, run:


apt-get install squid privoxy shorewall


After that's done, you'll want to make SQUID work first -- using Ubuntu 8.04, this entails opening the default SQUID configuration at /etc/squid/squid.conf and:



  1. Adding transparent to the port configuration by changing:

    --- http_port 3128
    +++ http_port 8080 transparent




  2. Adding your {W/L}AN block to the ACL list by adding:

    +++ acl local_network src 192.168.0.0/24

    ... and enabling it in the http_access list by adding:

    +++ http_access allow local_network




Save the configuration and exit the editor.
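
Put together, the additions to /etc/squid/squid.conf from the steps above should end up looking something like this (using the example 192.168.0.0/24 network):


http_port 8080 transparent
acl local_network src 192.168.0.0/24
http_access allow local_network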

Now restart SQUID with: /etc/init.d/squid restart to have your changes take effect.

At this point, configure a browser to use a manual proxy on the server's IP address on port 8080 and make sure it actually works. If you receive an error that talks about Access Control Lists, check that you used the right network mask on the local_network line you added above.
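
If you'd rather test from the command line, something like the following (substituting your server's real IP address for 192.168.0.1) should fetch a page through the new proxy:


http_proxy=http://192.168.0.1:8080/ wget -q -O /dev/null http://www.google.com/ && echo "proxy works"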

If that works, we can move on to the Privoxy part of things. Privoxy, for those who are unaware, is one of the best 'web-crud(tm)' filters i've ever had the pleasure of using -- it was originally built from the Internet Junkbuster (IJB), but now has many more features, is stable on pretty much any platform for a wide variety of users and protects your privacy too.

By far, the easiest way to configure Privoxy is via the web-interface, but the Ubuntu package disables that by default, so before we hook it up to SQUID, we should enable that.

Open the /etc/privoxy/config file and make the following changes:



  1. Enable editing of actions via the web interface by changing:

    --- enable-edit-actions 0
    +++ enable-edit-actions 1




Save the configuration and exit the editor.

Now restart Privoxy with: /etc/init.d/privoxy restart to have your changes take effect.
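
To confirm Privoxy has come back up and is listening on its default port (8118, which we'll point SQUID at in a moment), you can check with:


netstat -tlnp | grep 8118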

Now, we need to hook Privoxy up to our proxy as the default parent cache -- you'll need to open the SQUID configuration file again and make the following adjustments:

note: If you haven't done so already, it's a good idea at this point to make a backup copy of your /etc/squid/squid.conf file before making these changes.



  1. Adding Privoxy as a cache peer by changing:

    +++ cache_peer 127.0.0.1 parent 8118 0 no-query no-delay no-digest no-netdb-exchange


    note: because Privoxy cannot influence any of SQUID's cache settings, setting no-query no-delay no-digest no-netdb-exchange as options for the peer cache lessens the delay between Privoxy filtering the transaction and SQUID caching it by up to a second on slower hardware (for example, a Pentium 4 1.2GHz machine with 1GB of memory).



  2. Telling SQUID to always send traffic from the firewall directly to the internet by changing:

    +++ always_direct allow localhost




  3. Telling SQUID to never send traffic from the local LAN (thereby forcing users to use the Privoxy/SQUID cache) directly to the internet by changing:

    +++ never_direct allow local_network




Save the configuration and exit the editor.
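
For reference, the three lines added in this pass should look something like this once they're in /etc/squid/squid.conf:


cache_peer 127.0.0.1 parent 8118 0 no-query no-delay no-digest no-netdb-exchange
always_direct allow localhost
never_direct allow local_network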

Now restart SQUID with: /etc/init.d/squid restart to have your changes take effect.
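
If SQUID refuses to come back up after these changes, you can ask it to check the configuration file for syntax errors before digging any deeper:


squid -k parse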

Finally we need to add the rules for transparently using our proxy to our Shorewall Firewall Configuration -- you'll need to open the Shorewall rules configuration file and make the following alteration:

note: For the purposes of simplicity, i'm assuming that your WAN interface is eth0, your LAN interface is eth1, your LAN IP range is 192.168.0.0/24 and your Shorewall configuration is already complete (for some reason I still don't fully understand, the Debian and Ubuntu packages don't ship with a default configuration file, so if you don't see a /etc/shorewall/rules file, you'll need to download one, or grab a prefabricated copy from the /usr/share/doc/shorewall directory, and set it up first).



  1. Adding a redirection to our transparent proxy by changing:

    +++ REDIRECT     loc     8080     tcp     www     -
    +++ ACCEPT     $FW     net     tcp     www




note: the minus (-) with the trailing space at the end of the redirect line is important -- it means the rule will ignore the source port when working out the request and force any already-established request to continue to use the proxy.

Save the configuration and exit the editor.

Now restart the Shorewall Firewall with: /etc/init.d/shorewall restart -- then fire up a browser, make sure it is not configured to use a proxy (i.e. it uses a direct connection to the internet) and browse to your heart's content.
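
If pages load but you want to double-check that LAN traffic is really being redirected, you can look for the rule Shorewall generates in the NAT table -- you should see port 80 (www) traffic being sent to port 8080:


iptables -t nat -L -n -v | grep 8080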

Thursday, April 10, 2008

Australian'ising Epiphany's Keyword Search

As part of the upgrading process to the new Ubuntu LTS release -- I found my default search engine had been reset to http://www.google.co.uk rather than http://www.google.com.au in Epiphany.

Everything else about Epiphany's default setup is Australian -- the language defaults are en-au and the languages are set correctly -- but the keyword search isn't.

Fortunately, this is easy to change.

First, fire up your browser and type about:config into the location bar, which should take you to the configuration screen -- consisting of a filter box and a larger portion containing all of the relevant tweakable parameters.

In the filter textbox, type: keyword.URL -- it should be a User Set, String value (in bold) that looks like the screen below:



Now, if you right-click the bold text and select modify, a textbox should appear in the center of the screen. Simply use the keyboard or mouse to select and remove the .co.uk section of the highlighted URL, enter .com.au instead -- and press OK to return to the configuration menu.

At this point, the screen should look like the one below:



Now just restart Epiphany and do a search -- the browser should now (correctly) take you to Google Australia's search.