Author Archive

Vim: highlight repeated properties of property file

While refactoring some localization property files I found that many messages and properties were repeated. This might be due to the lack of a naming convention for the strings or maybe because the properties were not sorted alphabetically.

For example, in one of the files the string Upload was repeated in three different places:

BUTTON_UPLOAD=Upload
UPLOAD=Upload
VIDEO_UPLOAD=Upload...

After normalizing the file I ended up with:

UPLOAD=Upload
UPLOAD=Upload
UPLOAD=Upload...

Before deleting the duplicated properties I wanted to check whether the string they had was the same in all occurrences. With that in mind, I decided to write a little Vim function that highlights all the duplicated property names. To use it, add the following to your .vimrc file:

function! HighlightRepeatedProps() range
  let propCounts = {}
  let lineNum = a:firstline
  while lineNum <= a:lastline
    let lineText = getline(lineNum)
    if lineText != ""
      let propName = matchstr(lineText, "^[^=]*")
      let propCounts[propName] =
            \ (has_key(propCounts, propName) ? propCounts[propName] : 0) + 1
    endif
    let lineNum = lineNum + 1
  endwhile
  exe 'syn clear Repeat'
  for propName in keys(propCounts)
    if propCounts[propName] >= 2
      exe 'syn match Repeat "^' . escape(propName, '".\^$*[]') . '=.*$"'
    endif
  endfor
endfunction

command! -range=% HighlightRepeatedProps <line1>,<line2>call HighlightRepeatedProps()

After saving your changes to .vimrc, open your .properties file and type :HighlightRepeatedProps. If there are any repeated property names, Vim will highlight them in a different colour.

Useful, right?
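If you just want a quick check from the shell before opening Vim, the same duplicates can be listed with a short pipeline. A minimal sketch; the file name and contents here are made up for the example:

```shell
# Create a sample .properties file to test against (hypothetical name)
cat > /tmp/messages.properties <<'EOF'
UPLOAD=Upload
UPLOAD=Upload
UPLOAD=Upload...
DELETE=Delete
EOF

# Print the property names (everything before '=') that occur more than once
cut -d= -f1 /tmp/messages.properties | sort | uniq -d
```

Here `uniq -d` prints only the lines that appear more than once in the sorted list, so each duplicated property name is reported exactly once.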


From Mercurial to Git and from GoogleCode to GitHub

2012/10/10

Some time ago we decided to switch the VCS of our code repository at Google Code from SVN to Mercurial, the only DVCS alternative offered by Google Code (at the time). This system has far more power than non-distributed systems and gave us more freedom to develop on different branches and to merge all the work. But this decision was made mainly to keep the current repository location rather than by preference, since we prefer to work with Git.

Lately, though, we have been having some problems with Mercurial and that, added to the difficulty of exporting and importing this kind of repository on other code hosting sites, led us to change yet again, this time to Git and GitHub.

Here are the steps we took in order to do so.

First, make some folders to store the tools and the repositories

cd ~
mkdir repository_conversion
cd repository_conversion
mkdir gitprojectname

Next, make a clone of the mercurial repository in your local machine.

hg clone https://code.google.com/p/projectname

This will make a folder projectname with the contents of the repository. Then, download Fast-Export, a tool that converts mercurial repositories into git repositories.

git clone http://repo.or.cz/w/fast-export.git

Before going any further, you should know that Git is more restrictive with the username format of the person doing a commit. Mercurial lets you commit using partial or different username information for the same set of credentials. For example, if you have a committer called John Doe you might find Mercurial commits with the following aliases:

johndoe@dcb55125-116f-0410-8251-c326c5fbc55d
johndoe@gmail.com
johndoe
John Doe <johndoe@gmail.com>

The correct commit format is the last one (User Name <email@youremail.org>), so you should map the wrong aliases to the correct format before converting the repository. To do this, you first need to get the list of all the people that have made a commit in your repository. For that purpose, we can use either the hg log command or the churn extension.

hg log --template "{author}\n" | sort | uniq -c | sort -nr

If you want to use the churn extension instead, you must enable it first in the Mercurial configuration file. You can enable it system-wide by editing the /etc/mercurial/hgrc file (or just for your repository by editing ~/repository_conversion/projectname/.hg/hgrc) and adding the following to it:

[extensions]
hgext.churn =

Then you can call it like this:

hg churn --template "{author}"

These commands output a list of committers sorted by number of commits. You can copy that list (minus the commit counts) to a text file called authors.map to do the user mapping. Following our previous example, you would map John Doe’s aliases like this:

johndoe@dcb55125-116f-0410-8251-c326c5fbc55d=John Doe <johndoe@gmail.com>
johndoe@gmail.com=John Doe <johndoe@gmail.com>
johndoe=John Doe <johndoe@gmail.com>
John Doe <johndoe@gmail.com>=John Doe <johndoe@gmail.com>
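If the committer list is long, you can mechanically turn the raw `uniq -c` output into an authors.map skeleton and then fill in the right-hand sides by hand. A small sketch; the counts and file paths are made up:

```shell
# Simulated output of: hg log --template "{author}\n" | sort | uniq -c | sort -nr
cat > /tmp/committers.txt <<'EOF'
     42 johndoe@gmail.com
     17 johndoe
      3 johndoe@dcb55125-116f-0410-8251-c326c5fbc55d
EOF

# Strip the leading commit counts and append '=' so each line becomes
# 'alias=' waiting for the mapped identity
sed 's/^ *[0-9]* //; s/$/=/' /tmp/committers.txt > /tmp/authors.map
cat /tmp/authors.map
```

Each resulting line like `johndoe=` then only needs the canonical `John Doe <johndoe@gmail.com>` appended after the equals sign.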

Once you are done mapping the users, go to the git repository folder, init a git repository and call the conversion script:

cd ~/repository_conversion/gitprojectname
git init
../fast-export/hg-fast-export.sh -r ../projectname/ -A ../authors.map

Depending on the number of commits and branches, this process may take a while. Once the conversion is finished, you can check whether all the committers were correctly mapped by typing:

cd ~/repository_conversion/gitprojectname
git shortlog -nse --all

So, with that we have a fully converted Git repository. Now we have to upload it to a repository hosting site, such as GitHub. GitHub has very good and detailed guides on how to set up your repositories, so I will just assume I signed up on the site as johndoe@gmail.com and give you the commands you need to push the repository without going into much detail.

Create a set of SSH keys to be able to push your changes:

cd ~/.ssh
mkdir key_backup
cp id_rsa* key_backup
rm id_rsa*
ssh-keygen -t rsa -C "johndoe@gmail.com"

Copy the contents of your id_rsa.pub file (exactly as they are, without adding or removing anything) into your SSH key management area and create a repository named gitprojectname using the web interface.

Before starting to commit and push things in your repository, you should configure the commit user of Git (the user and email that appears in the commit’s metadata). If you haven’t done so by now, you can set your system-wide git user like this:

git config --global user.email "johndoe@gmail.com"
git config --global user.name "John Doe"

If you prefer to use this user only for a particular repository:

cd ~/repository_conversion/gitprojectname
git config --local user.email "johndoe@gmail.com"
git config --local user.name "John Doe"
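As a quick sanity check, you can confirm that a --local value is the one the repository will actually use. A throwaway sketch; the path and name are made up:

```shell
# Create a disposable repository and set only a local user.name in it
rm -rf /tmp/cfg_demo
git init -q /tmp/cfg_demo
git -C /tmp/cfg_demo config --local user.name "John Doe"

# Reading the value back inside the repository returns the local setting,
# which overrides any --global value
git -C /tmp/cfg_demo config user.name
```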

Finally, push your local repository to the remote site (you can also do this via SSH using the keyset you generated in the previous step and pointing to a URL that looks like git@github.com:johndoe/gitprojectname.git):

cd ~/repository_conversion/gitprojectname
git remote add origin https://github.com/johndoe/gitprojectname.git
git push -u origin master

If you have several branches and want to upload them all, you can try replacing the last step with git push origin --all.
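To convince yourself that --all pushes every branch, here is a sketch using a local bare repository as a stand-in for GitHub (every path and name below is made up):

```shell
# A bare repository plays the role of the remote
rm -rf /tmp/push_demo
mkdir -p /tmp/push_demo
cd /tmp/push_demo
git init -q --bare remote.git

# A working repository with two branches
git init -q work
cd work
git config user.email "johndoe@gmail.com"
git config user.name "John Doe"
echo demo > file.txt
git add file.txt
git commit -qm "first commit"
git branch develop                      # second branch besides the default one

git remote add origin /tmp/push_demo/remote.git
git push -q origin --all                # pushes the default branch AND develop

# Both branches are now on the "remote"
git -C /tmp/push_demo/remote.git branch
```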

Hope that was helpful.

Reset nondetachable USB devices on your laptop

2012/01/17

I’ve had problems with Ubuntu and my laptop’s integrated webcam for quite some time. Because of these problems, a couple of developers started working on some alternative drivers, but the project seems to be abandoned right now (7 months without commits).

With no appropriate drivers, the device displays odd colours and randomly hangs (especially when using Flash Player), leaving the power LED on. This is very unpleasant because it gives you the impression that somebody might be spying on you.

So I googled a bit searching for a way to reset the devices that I can’t detach, and found this great post by Alan Stern in which he gives us a piece of code to do just that.

cd
mkdir usbreset
cd usbreset

Copy the code into usbreset.c:

/* usbreset -- send a USB port reset to a USB device */

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/ioctl.h>

#include <linux/usbdevice_fs.h>

int main(int argc, char **argv)
{
    const char *filename;
    int fd;
    int rc;

    if (argc != 2) {
        fprintf(stderr, "Usage: usbreset device-filename\n");
        return 1;
    }
    filename = argv[1];

    fd = open(filename, O_WRONLY);
    if (fd < 0) {
        perror("Error opening output file");
        return 1;
    }

    printf("Resetting USB device %s\n", filename);
    rc = ioctl(fd, USBDEVFS_RESET, 0);
    if (rc < 0) {
        perror("Error in ioctl");
        return 1;
    }
    printf("Reset successful\n");

    close(fd);
    return 0;
}

Then build it:

cc usbreset.c -o usbreset
chmod +x usbreset

Now we have to find out which bus and device our webcam is attached to:

lsusb
Bus 002 Device 005: ID xxxx:xxxx (...) Webcam

Last, call usbreset with the path of the device as a parameter:

sudo ./usbreset /dev/bus/usb/002/005

And that’s it. Bus and Device might change after the reset but the ID won’t. So, we note down that ID and make a little sh script to avoid the lsusb step (put it in the same place as the usbreset binary). Let’s call it usbreset.sh (for originality’s sake):

#!/bin/sh
ID='xxxx:xxxx'
MATCHES=$(lsusb | sed -n 's/Bus \([0-9]*\) Device \([0-9]*\): ID '$ID'.*/\/dev\/bus\/usb\/\1\/\2/p')
if [ -z "${MATCHES}" ]; then
 echo "No match found"
else
 sudo ./usbreset $MATCHES
fi

And now we can reset our webcam by simply calling ./usbreset.sh.
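To see what the sed expression in the script actually produces, here it is run against a canned lsusb line (the vendor:product ID and product name are made up for the example):

```shell
# A fake lsusb line standing in for real output
LINE='Bus 002 Device 005: ID 04f2:b008 Chicony Electronics Webcam'

# Capture groups 1 and 2 (bus and device numbers) are rewritten into the
# /dev/bus/usb/<bus>/<device> path that usbreset expects
DEVPATH=$(echo "$LINE" | sed -n 's/Bus \([0-9]*\) Device \([0-9]*\): ID 04f2:b008.*/\/dev\/bus\/usb\/\1\/\2/p')
echo "$DEVPATH"    # → /dev/bus/usb/002/005
```

Because of the `-n` flag and the `p` command, sed prints only on a successful substitution, which is why the script can treat empty output as "no match found".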

USBReset source code:
http://marc.info/?l=linux-usb&m=121459435621262&w=2

Build latest ffmpeg from source

2012/01/11

I use ffmpeg a lot in my work because I need to process lots of multimedia resources programmatically (without human intervention). The prebuilt binaries of ffmpeg usually suffice for your average encoding/decoding tasks (if, due to your particular needs, you lack certain proprietary codecs you can always grab a more codec-rich build such as the one Medibuntu offers). But sometimes you need advanced features such as filters (overlays, scaling, padding…) and, since filters are a constantly evolving feature, it is interesting to know how to build ffmpeg from source.

Removing old stuff and solving dependencies

First, you need to install git (if you don’t already have it):

sudo apt-get install git

Next, uninstall any previous ffmpeg builds from your system (if you’re building ffmpeg with x264 support like I’m going to do, uninstall x264 as well):

sudo apt-get remove ffmpeg x264 libx264-dev
sudo apt-get autoremove

Now we need to install a bunch of dependencies. This list may vary depending on the ffmpeg configuration you want to use. Don’t worry too much if you forget some codec or dependency at this point; ffmpeg will tell you if something’s missing in the configuration step.

In my case, I wanted as many codecs as I could remember available to ffmpeg so as you can see the dependency list is quite long:

sudo apt-get install build-essential git-core checkinstall yasm texi2html \
     libfaac-dev libjack-jackd2-dev libmp3lame-dev libopencore-amrnb-dev \
     libopencore-amrwb-dev libsdl1.2-dev libtheora-dev libva-dev libvdpau-dev \
     libvorbis-dev libvpx-dev libx11-dev libxfixes-dev libxvidcore-dev \
     zlib1g-dev librtmp-dev libgsm0710-dev libgsm0710mux-dev libgsm1-dev \
     libgsmme-dev libschroedinger-dev libspeechd-dev libspeex-dev \
     libspeexdsp-dev libspeex-ocaml-dev libdc1394-22-dev

Ok, if you read the dependency list (did you, really?) you’ll have noticed that x264 isn’t among the installed packages. That’s because I’ll also be building x264 from source, since the prebuilt binaries (I’m talking about the ones in Ubuntu’s repository) seem to be too old for the ffmpeg we’re about to build.

Building x264 as a shared library

First, clone x264’s git repository to grab the latest version of the code. Then, use the --enable-shared flag when configuring to build it as a shared library; otherwise ffmpeg won’t be able to use it.

cd
git clone git://git.videolan.org/x264.git
cd x264
./configure --enable-shared
make
sudo make install

Optionally, you can use checkinstall to build a .deb package and thus make the binaries redistributable:

sudo checkinstall --pkgname=libx264 \
    --pkgversion="2:0.$(grep X264_BUILD x264.h -m1 | \
    cut -d' ' -f3).$(git rev-list HEAD | wc -l)+git$(git rev-list HEAD -n 1 | \
    head -c 7)" --backup=no --deldoc=yes \
    --fstrans=no --default
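That long --pkgversion expression only assembles a version string out of three pieces. Here it is unrolled with made-up values so you can see the shape of the result:

```shell
# The three components (values faked for illustration):
X264_BUILD=120       # normally: grep X264_BUILD x264.h -m1 | cut -d' ' -f3
COMMIT_COUNT=2204    # normally: git rev-list HEAD | wc -l
SHORT_HASH=0c7dab9   # normally: git rev-list HEAD -n 1 | head -c 7

# Assembled exactly like the --pkgversion argument above
VERSION="2:0.${X264_BUILD}.${COMMIT_COUNT}+git${SHORT_HASH}"
echo "$VERSION"      # → 2:0.120.2204+git0c7dab9
```

The `2:` epoch prefix makes the package version sort higher than the distribution’s own libx264 packages, so apt won’t "upgrade" over your custom build.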

Well, now that we’ve got all the libraries we need it’s time to build our customized ffmpeg.

Building ffmpeg

Since January 2011 ffmpeg no longer uses svn to host its code; you should keep this in mind when you read other ffmpeg tutorials (they may be outdated).

cd
git clone git://source.ffmpeg.org/ffmpeg.git
cd ffmpeg
./configure --enable-avfilter --enable-vdpau --enable-bzlib \
    --enable-libgsm --enable-libschroedinger --enable-libspeex \
    --enable-pthreads --enable-zlib --enable-libvpx \
    --disable-stripping --enable-runtime-cpudetect \
    --enable-vaapi --enable-swscale --enable-libdc1394 \
    --enable-shared --disable-static --enable-librtmp \
    --enable-gpl --enable-version3 --enable-nonfree \
    --enable-postproc --enable-libfaac --enable-libmp3lame \
    --enable-libopencore-amrnb --enable-libopencore-amrwb \
    --enable-libtheora --enable-libvorbis --enable-libx264 \
    --enable-libxvid --enable-x11grab --enable-filter=movie
make
sudo make install

Building ffmpeg takes quite some time, so be patient. When everything’s done, call ffmpeg without parameters to see if it works.

In my case it didn’t, so I had to use strace to find out what was wrong.

Fixing runtime problems

sudo strace ffmpeg

This reveals the following:

access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
(...)
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)

It seems ffmpeg is trying to access two files that don’t exist. I’ll create them and see if that works.

sudo touch /etc/ld.so.nohwcap
sudo touch /etc/ld.so.preload

And… that actually worked!

Now you have a fully functional customized ffmpeg build. Congratulations.

If you want to know more about the latest features and examples of ffmpeg filters, please check out the libavfilter documentation.

Generate ADDRESSBOOK type QR Codes

Recently I had to design some business cards for a computer science research group. So I decided to add a small touch of innovation by using a QR code that stored all the contact info.

Plain text QRs are good as they are, but they weren’t enough for my purposes, so after researching the issue a bit I found out that barcode scanning apps can also identify QRs encoded with the vCard notation, and thus store the information in addressbook fashion.

So the first thing I did was have a look at the vCard 3.0 specification’s notation. There are also other addressbook syntaxes out there, but vCard is probably the one that offers the most options.

Here’s what Julius Caesar’s contact info would look like if written in vCard syntax:

BEGIN:VCARD
VERSION:3.0
N:Caesar Augustus;Gaius Julius;
FN:Gaius Julius Caesar Augustus
TITLE:CEO/Emperor
TEL;TYPE=WORK;VOICE:+555 946017
TEL;TYPE=WORK;CELL:+555 678658
EMAIL;TYPE=WORK:caesar.rules@gmail.es
ADR;TYPE=INTL,POSTAL,WORK:;;Velitrae Ox Head avenue, 1;Rome;Augusta;14567;Italy
URL;TYPE=WORK:http://www.thosewhoareabouttodiesaluteyou.com
END:VCARD

After writing the vCard it’s time to generate a QR with the encoded information. To do so, you can use one of the many available online QR code generation tools, such as Google’s Chart API’s Wizard. This is what Julius Caesar’s ADDRESSBOOK type QR code would look like.


When you scan the QR code (using a smartphone camera via a barcode scanning app) it will show all the contact info and ask whether you want to do one of the following:

Add contact, Show map, Call number, Send email

If you save the contact you’ll see there are a few bugs in retrieving the vCard info. The address is treated as a whole instead of being split into postal code, location…

The first phone number in the vCard is treated as if it were the cellphone number, even if you specify VOICE and not CELL.

So it’s a very promising way to add contacts, but it still has some support issues.
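If you have to produce many cards, the vCard itself can be assembled in a shell script before encoding. A minimal sketch reusing the example data above; the final encoding step assumes the qrencode CLI is installed, so it is left as a comment:

```shell
# Contact data as variables (values from the made-up example above)
FULL_NAME='Gaius Julius Caesar Augustus'
WORK_EMAIL='caesar.rules@gmail.es'

# Write a minimal vCard 3.0 file
cat > /tmp/caesar.vcf <<EOF
BEGIN:VCARD
VERSION:3.0
FN:$FULL_NAME
EMAIL;TYPE=WORK:$WORK_EMAIL
END:VCARD
EOF

cat /tmp/caesar.vcf

# Hypothetical next step, if qrencode is available:
# qrencode -o caesar.png < /tmp/caesar.vcf
```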

Batch processing of images using Photoshop

Sometimes you have a bunch of images or photos and you want to apply the same filters or changes to all of them: resizing, rotating, cropping or increasing the contrast of the image. So why waste time doing this manually with each image when you can automate all the work with a batch process and tell Photoshop to apply these changes to all the files in a folder with just one click?

Photoshop’s batch processes are defined in a similar fashion as Microsoft Office’s macros. You tell Photoshop to start recording your actions so that the application knows which steps to follow, and when you’re done you just push stop to end the thing.

Let’s say I want a batch process to resize the photos of last night’s party, because my friend’s brand new camera takes 14 Mpx pictures and I’m happier with a 5 Mpx size.

Open one of the aforementioned photos using Photoshop. Go to the Window menu and select Actions. A small window should appear.

In that window click on the Create new action button (the one beside the trash can), give a meaningful name to your custom batch process (Photo resizing) and store it under the Custom category. From this point on, until you click the stop button in the Actions window, all the actions you perform on the image will be recorded.

We want to resize the image, so we choose Image and then Image size… There we enter the desired size and tick the Constrain proportions option. Finally, we choose File and Save as… (in this step you can save the file in a different format from the original or even change the compression ratio applied to your JPG images) and we’re done.

Once you’ve recorded all the actions of your batch process, click on the stop button. If you click on your custom batch process you can review the sequence of actions recorded in the process and modify them if you wish.

Now that we have our custom batch process how do we apply it?

Go to File and then choose Automate > Batch… A new window should appear. There you must select the action set you will be using and the batch process you want to repeat. Choose the folder in which you left those weighty party photos and another folder for the resized results. Click OK and relax while Photoshop saves you lots of time in resizing tasks.