Tuesday, March 25, 2008

Recovering your Password Database in Epiphany 2.20.x on Ubuntu

Having just upgraded my laptop to the recently released beta of Ubuntu's next Long Term Support release (Hardy Heron -- aka 8.04), I started up my preferred browser (Epiphany) and discovered that my saved web password database was empty.

After a bit of web hunting, I discovered that the new Firefox 3.0 betas use a new version of the signons file that Mozilla/Firefox/Epiphany use to store saved passwords.

Looking in my .gnome2/epiphany/mozilla/epiphany directory, I noticed a signons2.txt file and a new, blank signons3.txt file.

I also noticed that the datestamp on the key3.db file had been updated to today's date.

First, I tried renaming the signons2.txt file to signons3.txt and restarting Epiphany -- as expected, the datestamp of signons3.txt was now the same as key3.db's.

Going into the Personal Data / Passwords menu in Epiphany now gave me access to some passwords, but not all -- plus there were some duplicate entries.

So, I tried deleting both the key3.db and signons3.txt files, copying backup versions of both (with an older datestamp) into my Epiphany profile directory, and then renaming signons2.txt to signons3.txt.

I restarted Epiphany and, checking from a terminal window, noticed both files had been updated to the current time.

This time, going back to the Personal Data / Passwords menu in Epiphany gave me access to all my old password information.

So, in a nutshell -- to move from Ubuntu Gutsy's Epiphany to Ubuntu Hardy's seamlessly, you should use a three-step process (sketched as shell commands after the list):

* Back up your .gnome2/epiphany directory before doing your upgrade.

* After doing your upgrade, copy the key3.db and signons2.txt files from your backup into your updated .gnome2/epiphany/mozilla/epiphany directory.

* Before using Epiphany for the first time, copy your signons2.txt file to a new file called signons3.txt.
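
As a rough sketch in shell terms (hedged: this assumes the default profile location used in this post and a backup directory of your own choosing):

# 1. Before the upgrade: back up the whole profile
cp -a ~/.gnome2/epiphany ~/epiphany-backup

# 2. After the upgrade: restore the two files
cd ~/.gnome2/epiphany/mozilla/epiphany
cp ~/epiphany-backup/mozilla/epiphany/key3.db .
cp ~/epiphany-backup/mozilla/epiphany/signons2.txt .

# 3. Before starting Epiphany again: create the new-format file
cp signons2.txt signons3.txt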

note: this issue has also been reported to the Ubuntu Bug Tracker as #180205.

Sunday, March 23, 2008

WD's 'My Book' Product & Linux

note: This article is intended for a technical audience -- you should use extreme caution when modifying a production system, as your data will be nearly impossible to recover if you use this command incorrectly -- caveat emptor.

Today I picked up one of the new 'essential edition' 750 GB USB external hard drives -- a very nice piece of kit. It has the volume of a book and fits on the bookshelf (provided one runs the power supply and related cords down the back of said bookshelf first), just as advertised on the box.

Making it work under Linux isn't hard either.

It comes with a FAT32 (vfat) filesystem by default and includes Windows and MacOSX versions of Google Desktop, Skype and a few other things.

The procedure here was run on OpenSuSE 10.3, but any semi-recent Linux distribution will do -- it uses one command, mke2fs, which has been present in every Linux distribution since the dark ages:

First, unmount the drive (either from the right-click desktop option, or from the command line).

Second, open a terminal and sudo to root.

Third, format the drive -- I use ext3 filesystems, so my command line was:

mke2fs -j -L "My Book" -m 1 -O dir_index,sparse_super -T largefile /dev/sdX1

Where sdX is the WD device -- I bought two of them and they were sdd and sde respectively. (note: this is almost never /dev/hda or /dev/sda -- please make sure you're writing to the correct device before pressing ENTER; type once, look twice!)
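
If you're not sure which device node the drive was given, a quick way to double-check before formatting (assuming standard tools; sdd is one of my devices, yours may differ):

dmesg | tail                 # shows which sdX the kernel just assigned to the drive
fdisk -l /dev/sdd            # confirm the size and partition table match the new drive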

As for options, I used the following to get the most storage and speed from my new drives:

-j -- Adds an ext3 journal to the filesystem.
-L "My Book" -- Adds a label to the new drive. (By default the drive mounts out of the box with a label of 'My Book', so I use that, but you could call yours anything as long as it is less than 16 characters -- just remember to put it in double quotes on the command line.)
-m 1 -- Reserves 1% of the total drive space (about 7.5 GB here) as blocks for the root user, which is handy to have when the drive dies and one needs to run recovery tools to get the data back.
-O dir_index,sparse_super -- Two options, separated by a comma -- one (dir_index) to speed up lookups in large directories (say, one holding the contents of your digital camera or music collection) and the other (sparse_super) to gain a few extra megabytes of usable space by not creating as many superblock backups (on a 750 GB drive, this option creates 24 rather than 80-odd, due to the sheer size).
-T largefile -- Creates a filesystem suited to medium and large files (a few megabytes or more per file) rather than one suited to a standard home directory (lots of files of a few kilobytes each, intermixed with some larger ones).

Once you have pressed ENTER, mke2fs will format your drive using the parameters specified on the command line, then return you to the prompt.
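
To confirm the options took effect, you can dump the new superblock -- a quick hedged aside (tune2fs ships with e2fsprogs; substitute your own device node):

tune2fs -l /dev/sdd1 | grep -E 'volume name|features'    # shows the label and the dir_index/sparse_super flags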

Fourth, you will need to give your user account permission to write to the drive (at the moment, only root can do this) -- as root, type:

chown -R username:users /path/to/mount/point

Where username is your username (mine's paul, yours probably isn't) and /path/to/mount/point is wherever the newly formatted drive is mounted.

Finally, remove the USB cable and plug it back in -- if you're running a recent Linux distribution (2006 onwards) it should re-appear on your desktop as 'My Book' with an unreadable directory called lost+found.

Now you can use your drive as normal, with a native filesystem, access rights and all the rest.

Thursday, March 20, 2008

Repairing Zen-Cart's use of HTML Characters (umlauts, cyrillic, etc)

One of the bigger alterations I've made to the Zen-Cart code recently has been a large update of its country and state zone files (so instead of typing their state and city into text fields, users can select them from a drop-down box that only shows entries for their particular country).

Unfortunately, in 1.3.7 and 1.3.8 the code for handling HTML entities is slightly broken, so users from Nordic or Germanic countries see:

Baden-W&uuml;rttemberg rather than Baden-Württemberg

in places like the Shipping Information or the Account Creation code.

The solution I've been using is to edit the files that reference the zen_js_zone_list code and correctly sanitise the output -- this is done in two places (one for admin, one for userland).

Firstly, edit the admin/includes/functions/html_output.php file and around line 188 change:


--- return $output_string;
+++ return html_entity_decode($output_string);


Secondly, edit the includes/functions/functions_general.php file and around line 1448 change:


--- return $output_string;
+++ return html_entity_decode($output_string);


You should now have accented characters in the places they should be.
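
As a quick sanity check of what the change actually does, here's a hedged one-liner (it assumes the PHP command-line binary is installed):

php -r 'echo html_entity_decode("Baden-W&uuml;rttemberg"), "\n";'    # prints: Baden-Württemberg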

Wednesday, March 19, 2008

Improving Browser Speeds in Epiphany

Mainly a 'remember these for later, brain' post -- but people might still get a fairly good speed improvement out of these tweaks with Xulrunner or Gecko.

updated (10-04-2008): added screenshots -- so it matches the other HOWTOs here :)

Remove the initial rendering delay

Firstly, open Epiphany (all of these work for Firefox as well, mind you) and type about:config into your location bar.

You should see a screen that looks like the one below:

[screenshot]

First, Gecko (the Mozilla backend) uses a delay before painting any new page on-screen -- you can remove this delay by right-clicking anywhere in the main window and creating a New Integer key.

In the textbox that appears, name the new key "nglayout.initialpaint.delay" and set its value to "0".

Disabling IPv6

Next, if you don't use IPv6, it can add delays to every page you view -- you can turn it off in Epiphany by toggling the "network.dns.disableIPv6" key to true.

First, find the "network.dns.disableIPv6" key in the filter; your screen should look similar to:

[screenshot]

Right-click the key and press the Toggle button -- your screen should now look like:

[screenshot]

Disabling Prefetching


Prefetching -- otherwise known as the ability to cache the next page you might want to read -- can actually slow your browsing down if you're on a larger network, behind a proxy, or just browsing a site on a slow server. SSL pages (such as those on online banking sites) are also much snappier if you turn this off, although I've yet to prove why that is true, or whether it's just a side-effect of something else.

Anyway, to turn prefetching off, find the "network.prefetch-next" key in the filter; your screen should look similar to:

[screenshot]

Right-click the key and press the Toggle button -- your screen should now look like:

[screenshot]

Tuning Pipelining


Pipelining refers to the ability to use one HTTP connection to obtain multiple pieces of data -- reducing the load on the servers you are browsing and maximising the amount of data your connection can receive.

There's a LOT of discussion about pipelines and their use -- I've seen pages that tell people to increase their pipeline count from 4 to 30, 60 or even 100 -- of course, telling the server you can fit more data into one connection than you can possibly receive is annoying for server administrators ...

Why?

Imagine you've got a server that hosts something reasonably popular, and you've got one client that doesn't support pipelining, one that does using the default, and one that's been tuned to receive 100 requests at once ...

The server knows basically nothing about what an end-user can support when it is asked to build a new connection, so in this case it would allocate enough resources to serve 105 requests at once.

Even though the average broadband (512k ADSL) connection can only make effective use of about 5 of those at the receiver's end.

Considering the maximum number of actual connections to a particular server is in the range of 4 to 16, and that HTTP requests remain open between your client and the server for 300 seconds (both of which we'll tune later in this article) -- requesting an excessive amount of data per connection is just considered poor etiquette.

So, let's tune our pipelining to a value that makes sense from an available-bandwidth point of view and doesn't annoy every network administrator on the planet.

To do this, we need to make sure pipelining is turned on, then we need to set two values -- one for connections created normally and one for those created by a proxy.

First, find the "network.http.pipelining" key in the filter; your screen should look similar to:

[screenshot]

Right-click the key and press the Toggle button.

Next, repeat the process for the network.http.proxy.pipelining key.

Finally, we need to define a value -- this basically depends on the type of site you're browsing, but it's considered poor form to push this value into large double digits. I'm using 8 on all my boxes; 12 works quite well too on my 3 megabit connection, but I personally wouldn't recommend anything higher than that. When you're done, your screen should now look like:

[screenshot]

Tuning Per-Server Connection Limits

Now that we've tuned the pipelined connections, we need to increase the number of connections that your browser can open to a server before needing to queue new ones -- this speeds up sites that host their static content on different servers to their dynamic content, as well as making more efficient use of prefetching.

To do this, find the "network.http.max-persistent-connections-per-server" key in the filter; your screen should look similar to:

[screenshot]

The default in Firefox 2.x (xulrunner 1.8) is 2; it was recently increased to 6 for Firefox 3.x (xulrunner 1.9).

Personally, I've found 4 to be a reasonable number on my lower-end machines (those below a 1.5GHz P4); I use 8 everywhere else, because it matches my pipelining use.

Increasing this number beyond 10 actually reduces Epiphany's performance considerably, introducing significant delays after the fourth or so page I access at once -- possibly because the maximum total number of connections Gecko opens at once is 24 (or 32, if you're using a newer Firefox 3.x build).

Right-click the key and press the Modify button, then enter the number of connections you'd like -- remembering the guidelines listed above.

Your screen should now look like:

[screenshot]

Tuning Keepalive Timeouts

HTTP keepalives play an important role in how long upstream servers keep connections open to a single client. Being able to hold a connection open for a client is incredibly useful, but having your client tell the server to hold it for 300 seconds (5 minutes) ties up valuable resources on internet servers, as well as valuable RAM within your browser -- basically, on the off chance you might go back to that site within the 5 minute window.

In my experience, 60 seconds (1 minute) works just as well, and is far friendlier to the internet at large.

Before anyone asks (someone over my shoulder just did) -- this is not the same thing as a website timing your session out due to inactivity (like an online shop or a bank) -- this is the lower-level server infrastructure of the interschnitzel, not the website itself.

To change the keepalives to something more friendly:

Find the "network.http.keep-alive.timeout" key in the filter; your screen should look similar to:

[screenshot]

Right-click the key and press the Modify button, then enter the value in seconds that you'd like -- personally, I wouldn't recommend going below 30 seconds (or the pipelining improvements we made above become moot), but 30, 60, 90 and 120 are all reasonably acceptable values.

When you're done, your screen should now look like:

[screenshot]

At this point, you might be inclined to say 'that's fast enough' -- but there's one other rendering tweak which makes Epiphany much snappier ...

Tuning Page Re-rendering Delay


These tweaks control how often Gecko/Xulrunner refreshes the page as data is being retrieved. The default is off, which makes the backend re-render the page as each piece of data arrives.

If you're not behind a caching proxy, these particular tweaks will reduce the time it takes to render a page whose content comes from a number of different sources (for example, YouTube or the Australian Stock Exchange), as well as remove the hideous delay the non-free Flash Player causes when it is embedded into a page containing .swf content.

In order to make this work, we have to add two new entries to our configuration -- right-click anywhere in the main window and create a New Boolean key.

In the textbox that appears, name the new key "content.notify.ontimer" and set its value to true.

Repeat the process, except this time -- make a New Integer key.

In the textbox that appears, name the new key "content.notify.backoffcount" and set its value to "5".

After pressing OK, your screen should look similar to the one below:

[screenshot]

Skip To The End...

Don't want to read the explanations? Just want to cut-and-paste some recommendations? If so, make sure Epiphany is completely shut down first (or the preferences file will be overwritten with the defaults when you eventually close it).
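
It's also worth taking a copy of the file first, so you can roll back if you dislike the results (same path as below):

cp ~/.gnome2/epiphany/mozilla/epiphany/prefs.js ~/prefs.js.backup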

Open the .gnome2/epiphany/mozilla/epiphany/prefs.js file with your favourite text editor and add:



user_pref("content.notify.backoffcount", 5);
user_pref("content.notify.ontimer", true);
user_pref("nglayout.initialpaint.delay", 0);
user_pref("network.dns.disableIPv6", true);
user_pref("network.http.keep-alive.timeout", 60);
user_pref("network.http.max-persistent-connections-per-server", 8);
user_pref("network.http.pipelining", true);
user_pref("network.http.proxy.pipelining", true);
user_pref("network.http.pipelining.maxrequests", 8);
user_pref("network.prefetch-next", false);



Save your file and re-open your browser for a much, much speedier experience.

Other Advanced Tweaks

note: These particular tweaks should be used by a technical audience only -- they may adversely affect your browsing speed on low-end machines, increase Epiphany's memory usage, or cause some machines to crash. Use caution when modifying a production system -- caveat emptor.

These two tweaks control the amount of time (in microseconds) that it takes the backend to return control to the rendering engine after the user has moved the mouse, used the keyboard or done some other "high-latency" task that pauses the rendering engine in favour of user interaction.

They both depend on the key "content.notify.ontimer" being available and enabled, which you should already have if you have followed the rest of this guide.

user_pref("content.notify.interval", 750000);
user_pref("content.switch.threshold", 750000);

This means the browser will wait a little under 1 second (0.75 seconds) before returning processing to the renderer, rather than the default 1 second.

On much slower machines, you may want to raise them slightly above 1 second -- which gives the backend more time for the user interface at the expense of rendering time. A friend with an older Pentium III on a 256k ADSL connection gets better performance from:

user_pref("content.notify.interval", 1250000);
user_pref("content.switch.threshold", 1250000);

This causes the browser to wait an extra quarter of a second for the user to stop doing anything before handing everything back to the rendering engine -- it also means they can view an extra few pages before Epiphany starts becoming sluggish.

The End...

Tuesday, March 18, 2008

Context-Sensitive Menus in GNOME ...

While wandering the interschnitzel today, I passed planet.gnome and noticed an entry about something I've given a bit of thought to over the years ...

Lucas wonders why GNOME uses context-sensitive menus for all devices and locations.

While I agree in theory -- if Nautilus (and other GNOME components) start removing these context-sensitive actions, consider the use-case for USB sticks (CF cards, etc.) that are mounted read-only, have been mounted by the system (so you don't have rights), or have run out of room.

Personally, I find the greyed-out icons for Cut/Copy/Paste actions a big timesaver -- when I do a copy and paste, I can instantly tell whether I can 'do' that, because the icons are there, but context-disabled if I cannot.

If GNOME chooses to remove those because of 'clutter', the next notification one gets is a 'cannot write to XXX' *after* they've attempted such an action once, twice, three times.

Care should be taken not to be too overzealous with this sort of thing, in my opinion.

Anyone else agree, disagree, other? Voice your comment on the bug.

Wednesday, March 12, 2008

Adventures with GARNOME 2.22.0

Just to see if it would work, I attempted to compile the platform and desktop directories from the upcoming GARNOME 2.22.0 release of GNOME on both OpenSuSE 10.3 and Slackware 12.

Needless to say, the experience was better than with previous releases -- a few minor caveats for those who get stuck:

OpenSuSE will not build the platform/gnome-vfs directory without the libtasn1-dev libraries from Factory being installed first. libtasn1 is a spinoff of the gnupg libraries, but it doesn't seem to be included in the official repositories -- you can use the OpenSuSE package search to find a version suitable for your machine.

Slackware gets much further (Slackware 10, in contrast, never got very far) -- a common build mistake the uninitiated may trip on is that some required includes (like limits.h) live in the kernel-headers package.

A full Slackware 12 build toolchain -- as required to build GARNOME -- is:

binutils, gcc, gcc-c++, gcc-g++, diffutils, patch, flex, bison, gawk, m4, make & kernel-headers


edit i: A new dependency list (DEPS-LIST) detailing all the packages required has been sent to the current GARNOME maintainers -- hopefully it'll arrive for GARNOME 2.22 (fingers are crossed)

Friday, March 7, 2008

Securely deleting the contents of a USB stick

note: This article is intended for a technical audience -- you should use extreme caution when modifying a production system, as your data will be nearly impossible to recover if you use this command incorrectly -- caveat emptor.

When discarding, selling or lending a USB memory stick to someone, you probably want to ensure there's no critical or sensitive data left on it. So here's another handy one-liner to purge data from a USB memory stick, for when you don't have access to specific wiping programs such as shred or wipe on your platform.

This uses the 'dd' command, so it can be used on most systems with a random entropy generator and the coreutils/fileutils/base-utils package installed:


dd if=/dev/urandom of=/dev/[X] bs=512 conv=fsync oflag=direct


Where [X] is the device you wish to wipe. (note: this is almost never /dev/hda or /dev/sda -- please make sure you're writing to the correct device before pressing ENTER; type once, look twice!)

This will write random data from the entropy buffer to your USB stick, 512 bytes at a time, using direct I/O -- oflag=direct bypasses the system's caches so each block goes straight to the device, while conv=fsync forces a final flush to disk before dd exits.
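
A full wipe can take a long while on a big stick. As an aside (GNU dd only), you can ask a running dd for a progress report by sending it SIGUSR1 from a second terminal:

kill -USR1 $(pidof dd)    # dd prints its current record counts and throughput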

When it is finished, you should see something along the lines of:

dd: writing to `/dev/sdd1': No space left on device
10490446+0 records in
10490445+0 records out
1971107432 bytes (1.9 GB) copied, 2097.36 seconds, 1.0 MB/s
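
If you'd like to spot-check that the old contents really are gone, dumping the first sector should now show nothing but random bytes -- a hedged aside using od from coreutils (substitute your own device):

head -c 512 /dev/sdd1 | od -Ax -tx1z | head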


Now you can go off and put a filesystem on your drive, as normal.

Thursday, March 6, 2008

Converting FLAC to MP3 with GStreamer

A handy one-line command to convert a FLAC file into an MP3 suitable for use with iTunes when you've only got access to the command line:

gst-launch filesrc location="song_name.flac" ! flacdec ! lame ! filesink location="song_name.mp3"

I personally like CBR 320kbps MP3s, so my particular command line looks like:

gst-launch filesrc location="song_name.flac" ! flacdec ! lame vbr=0 bitrate=320 ! filesink location="song_name.mp3"


You'll need gst-launch (or gst-launch-0.8 / gst-launch-0.10 if you're using Debian or Ubuntu) and the relevant codecs (in this case, flac and lame).
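
If you have a whole album to convert, a small shell loop works nicely -- a hedged sketch reusing the same pipeline as above:

for f in *.flac; do
    gst-launch filesrc location="$f" ! flacdec ! lame vbr=0 bitrate=320 ! filesink location="${f%.flac}.mp3"
done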

Tuesday, March 4, 2008

Adventures with a D-Link DWL-G123

I've spent the last few hours attempting to get a newly purchased D-Link USB Wireless Adapter working on one of my older desktop boxes that had a dead ISA NIC.

(Yep, not a PCI NIC, but an ISA one)

So, after purchasing a USB daughterboard with 4 ports on it and plugging in the USB adapter -- there were no lights.

A quick search on the web indicated that the card might be Atheros-based, but the Madwifi site didn't yield any good answers.

Next problem: the box in question had a half-complete Ubuntu 7.04 install (upgraded from 6.10) on it and no working NIC.

The easier solution to that -- having found that the CDRW drive in the box doesn't like DVDRW discs -- was putting the Ubuntu 7.10 installation ISO on a USB key and mounting it via loopback using:

mount -t iso9660 -o loop /media/disk/ubuntu-7.10-alternate-i386.iso /media/cdrom

Then adding:

deb file:///media/cdrom gutsy main restricted universe

to the /etc/apt/sources.list file, then running a standard apt-get update && apt-get -f dist-upgrade got things underway.

After an hour or so, the upgrade had completed and prompted me to reboot -- after I did, it bombed out to the command line and told me that /dev/sda1 had been used by another device or driver, while the kernel spewed device-mapper lines across the screen about devices it couldn't locate.

Returning to the internet on another machine, I found this bug post that looked similar -- sure enough, removing EVMS with apt-get remove evms and rebooting did the trick.

At this point, I had a desktop -- but no Internet.

So I put the driver CD into the machine and ran:

ndiswrapper -i /media/cdrom/Drivers/2KXP/NetA5AGU.inf

Which returns:

installing neta5agu ...
forcing parameter MapRegisters from 256 to 64
forcing parameter MapRegisters from 256 to 64
forcing parameter MapRegisters from 256 to 64
forcing parameter MapRegisters from 256 to 64
forcing parameter MapRegisters from 256 to 64
forcing parameter MapRegisters from 256 to 64
forcing parameter MapRegisters from 256 to 64
forcing parameter MapRegisters from 256 to 64


Then I ran ndiswrapper -l to make sure the driver was correctly loaded, which returned:

neta5agu : driver installed
device (2001:3A03) present


Finally, I loaded the ndiswrapper module with modprobe ndiswrapper

... and the lights come on -- and clicking on the network-manager icon finds my network and prompts me for my password.

But that's as far as it gets: the lower green dot stays on for a good 90 seconds, then goes back to showing disconnected.

OK, off to get the newest drivers -- which are located here.

Extract the drivers (use the 2KXP drivers and not the Vista ones, which are incompatible with ndiswrapper), then remove the old drivers and install the new ones.

Use modprobe -r ndiswrapper && ndiswrapper -r neta5agu to remove the drivers.

Then use the instructions above to install the new drivers, replacing /media/cdrom with the path you extracted your downloaded drivers to.

But ...

Network Manager still didn't want to connect -- I selected my network, entered my password and waited, but it never connected to my WLAN.

In a fit of desperation, I decided to backport the version of ndiswrapper that is in the upcoming version of Ubuntu to 7.10.

You can get those from here.

Then I installed those using dpkg -i ndiswrapper*.deb

Rebooted.

Then re-inserted the ndiswrapper module using modprobe.

... both lights came on ...

Logged into my machine.

... crossed my fingers while the green dot sat with the grey one ...

... then the green one ...

... then the strength bar! ...

At this point, I had networking, could use Epiphany and all.

So to finish up, I built the configuration for ndiswrapper using ndiswrapper -m, then added ndiswrapper to the /etc/modules file so that the module would be loaded automatically on startup.
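
In shell terms, that finishing step is just the following (run as root -- a small recap of the commands named above):

ndiswrapper -m                      # writes the modprobe alias configuration for the driver
echo ndiswrapper >> /etc/modules    # ensures the module is loaded at each boot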

I rebooted again, just to make sure everything remained in a working fashion -- everything came up normally, complete with the strength bar in the NetworkManager applet.

It's all good.

In summary, if you use the latest driver from D-Link with Ubuntu Hardy (8.0x) it might "just work" -- but if you're using any of the earlier Ubuntu releases and are having trouble getting your card working, I hope the instructions above work for you too.