Vim: highlight repeated properties of property file

While refactoring some localization property files I found that many messages and properties were repeated. This might be due to the lack of a naming convention for the strings, or perhaps because the properties were not sorted alphabetically, which made duplicates hard to spot.

For example, in one of the files the string Upload was repeated in three different places:

BUTTON_UPLOAD=Upload
UPLOAD=Upload
VIDEO_UPLOAD=Upload...

After normalizing the file I ended up with:

UPLOAD=Upload
UPLOAD=Upload
UPLOAD=Upload...

Before deleting the duplicated properties I wanted to check that the string they had was the same in all occurrences. With that in mind, I decided to write a little Vim function that highlights all the duplicated property names. To use it, add the following to your .vimrc file:

function! HighlightRepeatedProps() range
  " Count how many times each property name appears in the range
  let propCounts = {}
  let lineNum = a:firstline
  while lineNum <= a:lastline
    let lineText = getline(lineNum)
    if lineText != ""
      let propName = matchstr(lineText, "^[^=]*")
      let propCounts[propName] = (has_key(propCounts, propName) ?
            \ propCounts[propName] : 0) + 1
    endif
    let lineNum = lineNum + 1
  endwhile
  " Highlight every property name that appears at least twice
  exe 'syn clear Repeat'
  for propName in keys(propCounts)
    if propCounts[propName] >= 2
      exe 'syn match Repeat "^' . escape(propName, '".\^$*[]') . '=.*$"'
    endif
  endfor
endfunction

command! -range=% HighlightRepeatedProps <line1>,<line2>call HighlightRepeatedProps()

After saving your changes to .vimrc, open your .properties file and type :HighlightRepeatedProps. If there are any repeated property names, Vim will highlight them in a different colour.
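
If you want to cross-check the duplicates outside Vim, a shell one-liner does a similar job. This is just a sketch; messages.properties is a placeholder filename:

```shell
# Print every property name (the part before '=') that appears more than once
cut -d= -f1 messages.properties | sort | uniq -d
```

With the normalized file from the example above, this would print UPLOAD once, no matter how many times it is repeated.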

Useful, right?


From Mercurial to Git and from GoogleCode to GitHub

2012/10/10

Some time ago we decided to change the VCS of our code repository at Google Code from SVN to Mercurial, the only DVCS alternative offered by Google Code at the time. A distributed system is far more powerful than a non-distributed one and gave us more freedom to develop on different branches and to merge all the work. But this decision was made mainly to keep the current repository location rather than by preference, since we prefer to work with Git.

Lately, though, we have been having some problems with Mercurial, and that, added to the difficulty of exporting and importing this kind of repository on other code hosting sites, led us to change yet again, this time to Git and GitHub.

Here are the steps we took in order to do so.

First, make some folders to store the tools and the repositories

cd ~
mkdir repository_conversion
cd repository_conversion
mkdir gitprojectname

Next, make a clone of the Mercurial repository on your local machine.

hg clone https://code.google.com/p/projectname

This will make a folder projectname with the contents of the repository. Then, download Fast-Export, a tool that converts Mercurial repositories into Git repositories.

git clone http://repo.or.cz/w/fast-export.git

Before going any further, you should know that Git is more restrictive with the username format of the person doing a commit. Mercurial lets you commit using partial or different username information for the same set of credentials. For example, if you have a committer called John Doe you might find Mercurial commits with the following aliases:

johndoe@dcb55125-116f-0410-8251-c326c5fbc55d
johndoe@gmail.com
johndoe
John Doe <johndoe@gmail.com>

The correct commit format is the last one (User Name <email@youremail.org>), so you should map the wrong aliases to the correct format before converting the repository. To do this, you first need to get the list of all the people that have made commits in your repository. For that purpose, we can use either the hg log command or the churn extension.

hg log --template "{author}\n" | sort | uniq -c | sort -nr
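
Once you have saved that committer list (stripped of the count column) to a file, you can turn it into a mapping skeleton and then fill in the right-hand sides by hand. The filename committers.txt is an assumption for illustration:

```shell
# Turn each committer alias into an "alias=alias" line to be edited by hand
sed 's/.*/&=&/' committers.txt > authors.map
```

After running this, each right-hand side still has to be replaced with the canonical User Name <email> identity.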

If you want to use the churn extension instead, you must enable it first in the Mercurial configuration file. You can enable it system-wide by editing the /etc/mercurial/hgrc file (or just for your repository by editing ~/repository_conversion/projectname/.hg/hgrc) and adding the following to it:

[extensions]
hgext.churn =

Then you can call it like this:

hg churn --template "{author}"

These commands output a list of committers sorted by number of commits. You can copy that list to a text file called authors.map (in ~/repository_conversion, so the conversion script can find it) to do the user mapping. Following our previous example, you would map John Doe’s aliases like this:

johndoe@dcb55125-116f-0410-8251-c326c5fbc55d=John Doe <johndoe@gmail.com>
johndoe@gmail.com=John Doe <johndoe@gmail.com>
johndoe=John Doe <johndoe@gmail.com>
John Doe <johndoe@gmail.com>=John Doe <johndoe@gmail.com>
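
Before running the conversion it can be worth sanity-checking the map file: every right-hand side should follow the Name <email> format. A small check of that invariant with awk (just a sketch, not part of the conversion tooling):

```shell
# Flag any mapping whose right-hand side lacks a <user@host> part
awk -F= '$NF !~ /<.+@.+>/ {print "bad mapping: " $0}' authors.map
```

If the command prints nothing, every alias maps to a well-formed identity.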

Once you are done mapping the users, go to the Git repository folder, initialize a Git repository and call the conversion script:

cd ~/repository_conversion/gitprojectname
git init
../fast-export/hg-fast-export.sh -r ../projectname/ -A ../authors.map

Depending on the number of commits and branches this process may take a while. Once the conversion is finished, you can check whether all the committers were correctly mapped by typing:

cd ~/repository_conversion/gitprojectname
git shortlog -nse --all
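
Another quick way to list the distinct identities left in the converted history, using only stock Git:

```shell
# Print each unique "Author Name <email>" identity across all branches
git log --all --format='%an <%ae>' | sort -u
```

If the mapping worked, every line should be one of the canonical identities from authors.map.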

So, with that we have a fully converted Git repository. Now we have to upload it to a repository hosting site, say GitHub. GitHub has very good and detailed guides on how to set up your repositories, so I will just assume I signed up on the site as johndoe@gmail.com and list the commands you need in order to push the repository, without going into much detail.

Create a set of SSH keys to be able to push your changes:

cd ~/.ssh
mkdir key_backup
cp id_rsa* key_backup
rm id_rsa*
ssh-keygen -t rsa -C "johndoe@gmail.com"

Copy the contents of your id_rsa.pub file (exactly as they are, without adding or removing anything) into your SSH key management area on GitHub, and create a repository named gitprojectname using the web interface.

Before starting to commit and push things in your repository, you should configure the commit user of Git (the user and email that appears in the commit’s metadata). If you haven’t done so by now, you can set your system-wide git user like this:

git config --global user.email "johndoe@gmail.com"
git config --global user.name "John Doe"

If you prefer to use this user only for a particular repository:

cd ~/repository_conversion/gitprojectname
git config --local user.email "johndoe@gmail.com"
git config --local user.name "John Doe"

Finally, push your local repository to the remote site (you can also do this via SSH, using the keyset you generated in the previous step and pointing to a URL that looks like git@github.com:johndoe/gitprojectname.git):

cd ~/repository_conversion/gitprojectname
git remote add origin https://github.com/johndoe/gitprojectname.git
git push -u origin master

If you have several branches and want to upload them all you can try replacing the last step with git push --all.
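
For example, to push every local branch and then all tags to the origin remote added above:

```shell
# Push all local branches, then all tags, to origin
git push --all origin
git push --tags origin
```

This way nothing from the converted Mercurial history is left behind on your machine.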

Hope that was helpful.

Rescue data from disk in Ubuntu

I was asked to audit a disk in order to look for deleted files. For this task, I used Ubuntu Rescue Remix, an Ubuntu live CD customized with lots of applications for data recovery and forensics. Using a live CD for data rescue is really useful, as you won’t be writing any data to the disk, and therefore you won’t overwrite anything you want to rescue. What’s more, with a live CD you’ll be able to boot the computer even if the disk is physically damaged.

Once you’ve loaded the live CD, you’ll be ready to start typing Linux commands as usual. However, if you are not using an English keyboard, you’ll probably want to change the keyboard layout. Just type loadkeys followed by your keyboard’s layout code. For example, if you have a Spanish keyboard, execute:

loadkeys es

Remember that these commands must be run with root privileges, so prepend sudo to any command if the system complains about permissions.

You may want to store the image on a remote server, so let’s use Samba to mount a remote folder:

apt-get install smbfs
mkdir /mnt/recovery
smbmount //SERVERIP/recovery /mnt/recovery/ -o user=sambausername
cd /mnt/recovery

Finally, we create the image using ddrescue. Remember that you will need at least as much room as the capacity of the disk you want to rescue.

ddrescue --no-split /dev/sda image_file log_file

If the disk is damaged you might get better results running successive passes.

sudo ddrescue -r 3 -C /dev/sda image_file log_file

Once you have the image done, you can use Autopsy to recover any data from the disk.

Categories: Uncategorized Tags:

Turn off your laptop and leave the server working

If you usually connect to servers via SSH, you have probably had to wait for a time-consuming task to finish before you could close the console and, therefore, your computer. However, there is at least one way of executing the needed commands on the server and going home: the screen command will help you with that.

The first thing you have to do is log into the SSH server. That’s easy; you know how to do it:

ssh user@mydomain.com

Once you are in, install screen if you don’t have it yet. As easy as this for an Ubuntu server:

sudo apt-get install screen

Now that you have everything you need, execute screen:

screen

This will open another session in the same terminal.

Perform any task you need now. For example, upload a large file to a remote FTP server:

sftp user@myftpserver.com
sftp>put a_big_file.tar.gz
Uploading a_big_file.tar.gz to somewhere in your FTP server very slowly
a_big_file.tar.gz 1% 5KB 1.4KB/s 00:05 ETA

That’s going to take a while and you have to leave now, so it’s time to detach the session. Press on your keyboard:

Ctrl + a

and then, to definitely detach the session, press:

d

The server will keep on uploading the file, but now you can close the SSH connection and turn off your computer.

Tomorrow, when you arrive at the office, you might want to know whether the task was finished correctly. Connect to the server and run:

screen -r

This will resume your previous screen session, or list the sessions available for resuming if there is more than one.

Reset nondetachable USB devices on your laptop

2012/01/17

I’ve had problems with Ubuntu and my laptop’s integrated webcam for quite some time. Because of these problems, a couple of developers started working on some alternative drivers, but the project seems to be abandoned right now (7 months without commits).

With no appropriate drivers, the device displays odd colours and randomly hangs (especially when using Flash Player), leaving the power LED on. This is very unpleasant, because it gives you the impression that somebody might be spying on you.

So I googled a bit searching for a way to reset the devices that I can’t detach, and found this great post by Alan Stern in which he gives us a piece of code to do just that.

cd
mkdir usbreset
cd usbreset

Copy the code into usbreset.c:

/* usbreset -- send a USB port reset to a USB device */

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/ioctl.h>

#include <linux/usbdevice_fs.h>

int main(int argc, char **argv)
{
    const char *filename;
    int fd;
    int rc;

    if (argc != 2) {
        fprintf(stderr, "Usage: usbreset device-filename\n");
        return 1;
    }
    filename = argv[1];

    fd = open(filename, O_WRONLY);
    if (fd < 0) {
        perror("Error opening output file");
        return 1;
    }

    printf("Resetting USB device %s\n", filename);
    rc = ioctl(fd, USBDEVFS_RESET, 0);
    if (rc < 0) {
        perror("Error in ioctl");
        return 1;
    }
    printf("Reset successful\n");

    close(fd);
    return 0;
}

Then build it:

cc usbreset.c -o usbreset
chmod +x usbreset

Now we have to know which is the bus and device our webcam is attached to:

lsusb
Bus 002 Device 005: ID xxxx:xxxx (...) Webcam

Last, call usbreset with the path of the device as a parameter:

sudo ./usbreset /dev/bus/usb/002/005

And that’s it. Bus and Device might change after the reset but ID won’t. So, we note down that ID and make a little sh script to avoid the lsusb step (put it in the same place as the usbreset binary). Let’s call it usbreset.sh (for originality’s sake):

#!/bin/sh
# Replace xxxx:xxxx with your device's ID as shown by lsusb
ID='xxxx:xxxx'
MATCHES=$(lsusb | sed -n 's/Bus \([0-9]*\) Device \([0-9]*\): ID '$ID'.*/\/dev\/bus\/usb\/\1\/\2/p')
if [ -z "${MATCHES}" ]; then
    echo "No match found"
else
    sudo ./usbreset ${MATCHES}
fi

And now we can reset our webcam by simply calling ./usbreset.sh.
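
To see what the sed expression in the script actually extracts, you can feed it a sample lsusb line. The ID and the bus/device numbers below are made up for illustration:

```shell
# The sed pattern turns an lsusb line into the matching /dev/bus/usb path
ID='1234:abcd'
echo "Bus 002 Device 005: ID 1234:abcd Example Webcam" | \
  sed -n 's/Bus \([0-9]*\) Device \([0-9]*\): ID '$ID'.*/\/dev\/bus\/usb\/\1\/\2/p'
# prints /dev/bus/usb/002/005
```

The two capture groups pick up the bus and device numbers, which is exactly what usbreset needs as its argument.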

USBReset source code:
http://marc.info/?l=linux-usb&m=121459435621262&w=2

Build latest ffmpeg from source

2012/01/11

I use ffmpeg a lot in my work because I need to process lots of multimedia resources programmatically (without human intervention). The prebuilt binaries of ffmpeg usually suffice for your average encoding/decoding tasks (if your particular needs require certain proprietary codecs, you can always grab a more codec-rich build such as the one Medibuntu offers). But sometimes you need advanced features such as filters (overlays, scaling, padding…), and since filters are a constantly evolving feature, it is interesting to know how to build ffmpeg from source.

Removing old stuff and solving dependencies

First, you need to install git (if you don’t already have it):

sudo apt-get install git

Next, uninstall any previous ffmpeg builds from your system (if you’re building ffmpeg with x264 support like I’m going to do, uninstall x264 as well):

sudo apt-get remove ffmpeg x264 libx264-dev
sudo apt-get autoremove

Now we need to install a bunch of dependencies. This list may vary depending on the ffmpeg configuration you want to use. Don’t worry too much if you forget some codec or dependency at this point; ffmpeg will tell you if something’s missing in the configuration step.

In my case, I wanted as many codecs as I could remember available to ffmpeg so as you can see the dependency list is quite long:

sudo apt-get install build-essential git-core checkinstall yasm texi2html \
     libfaac-dev libjack-jackd2-dev libmp3lame-dev libopencore-amrnb-dev \
     libopencore-amrwb-dev libsdl1.2-dev libtheora-dev libva-dev libvdpau-dev \
     libvorbis-dev libvpx-dev libx11-dev libxfixes-dev libxvidcore-dev \
     zlib1g-dev librtmp-dev libgsm0710-dev libgsm0710mux-dev libgsm1-dev \
     libgsmme-dev libschroedinger-dev libspeechd-dev libspeex-dev \
     libspeexdsp-dev libspeex-ocaml-dev libdc1394-22-dev

OK, if you read the dependency list (did you, really?) you’ll have noticed that x264 isn’t among the installed packages. The reason is that I’ll also be building x264 from source, because the prebuilt binaries (I’m talking about the ones in Ubuntu’s repository) seem to be too old for the ffmpeg we’re about to build.

Building x264 as a shared library

First, clone x264’s git repository to grab the latest version of the code. Then, use the --enable-shared flag when configuring to build it as a shared library; otherwise ffmpeg won’t be able to use it.

cd
git clone git://git.videolan.org/x264.git
cd x264
./configure --enable-shared
make
sudo make install

Optionally, you can use checkinstall to build a .deb package and thus make the binaries redistributable:

sudo checkinstall --pkgname=libx264 \
    --pkgversion="2:0.$(grep X264_BUILD x264.h -m1 | \
    cut -d' ' -f3).$(git rev-list HEAD | wc -l)+git$(git rev-list HEAD -n 1 | \
    head -c 7)" --backup=no --deldoc=yes \
    --fstrans=no --default

Well, now that we’ve got all the libraries we need it’s time to build our customized ffmpeg.

Building ffmpeg

Since January 2011 ffmpeg no longer uses SVN to host its code; you should keep this in mind when you read other ffmpeg tutorials (they may be outdated).

cd
git clone git://source.ffmpeg.org/ffmpeg.git
cd ffmpeg
./configure --enable-avfilter --enable-vdpau --enable-bzlib \
    --enable-libgsm --enable-libschroedinger --enable-libspeex \
    --enable-pthreads --enable-zlib --enable-libvpx \
    --disable-stripping --enable-runtime-cpudetect \
    --enable-vaapi --enable-swscale --enable-libdc1394 \
    --enable-shared --disable-static --enable-librtmp \
    --enable-gpl --enable-version3 --enable-nonfree \
    --enable-postproc --enable-libfaac --enable-libmp3lame \
    --enable-libopencore-amrnb --enable-libopencore-amrwb \
    --enable-libtheora --enable-libvorbis --enable-libx264 \
    --enable-libxvid --enable-x11grab --enable-filter=movie
make
sudo make install

Building ffmpeg takes quite some time, so be patient. When everything’s done, call ffmpeg without parameters to see if it works.

In my case it didn’t, so I had to use strace to find out what was wrong.

Fixing runtime problems

sudo strace ffmpeg

That reveals the following:

access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
(...)
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)

It seems ffmpeg is trying to access two files that don’t exist. I’ll create them and see if that works.

sudo touch /etc/ld.so.nohwcap
sudo touch /etc/ld.so.preload

And… that actually worked!

Now you have a fully functional customized ffmpeg build. Congratulations.

If you want to know more about the latest features and examples of ffmpeg filters, please check out the libavfilter documentation.

Color picker for mobile devices with Flex 4.5

2012/01/02

I don’t have much experience with Flex. I’ve just participated in the Babelium Project, a web application for language practice developed in Flex, and now I’ve been asked to port an Adobe AIR desktop application to Android devices. In this last job, I came across some problems replacing the unsupported MX components with Spark components, and some of them were a pain in the neck.

The ColorPicker was a tricky one to substitute. I found this SparkColorPicker, but I couldn’t make it work correctly on a tablet PC (I told you, I’m far from being an expert). The problem seems to be in the ComboBox it extends, as combo boxes and drop-down lists are discouraged in AIR for mobile development. When clicking the button, the DropDown was opened and immediately closed, making it impossible to choose a color.

After several attempts to fix the issue, I decided to implement my own color picker using the Callout class.

You can see the source code in this github repository:
https://github.com/blizarazu/ColorPickerCallout
