Guake Terminal Improvement for Multi-Monitor Setups


Guake Terminal

Guake is a top-down “Quake-style” terminal. I use it daily on the Xfce desktop. The only drawback: Guake doesn’t behave the way I want on a multi-monitor setup. There the terminal always opens on the primary (left) monitor, but for many people, including me, the left monitor is the small laptop display. Therefore many people prefer to open the terminal on the secondary (right) monitor. If you search for “Guake multi-monitor” you can find many patches to achieve this behavior.

For me it is not enough that the terminal always opens on the right monitor. I want it to open on the currently active monitor, the one that contains the mouse pointer. Luckily Guake is written in Python, which makes it easy to patch without recompiling and repackaging. Together with the patches already available on the Internet and a short look at the GTK documentation I found a solution. To always show the terminal on the currently active monitor, edit /usr/bin/guake and replace the method get_final_window_rect(self) with the following code:

    def get_final_window_rect(self):
        """Gets the final size of the main window of guake. The height
        is the window_height property, the width is hardcoded to the
        full monitor width and the horizontal alignment is given by
        window_halignment.
        """
        screen = self.window.get_screen()
        height = self.client.get_int(KEY('/general/window_height'))
        width = 100
        halignment = self.client.get_int(KEY('/general/window_halignment'))

        # use the monitor which currently contains the mouse pointer
        # instead of always using the first monitor
        x, y, mods = screen.get_root_window().get_pointer()
        monitor = screen.get_monitor_at_point(x, y)
        window_rect = screen.get_monitor_geometry(monitor)
        monitor_x = window_rect.x          # left edge of the active monitor
        total_width = window_rect.width    # full width of the active monitor
        window_rect.height = window_rect.height * height / 100
        window_rect.width = window_rect.width * width / 100

        # position the window inside the active monitor by adding the
        # monitor offset to the alignment-relative position; this also
        # works for more than two monitors
        if window_rect.width < total_width:
            if halignment == ALIGN_CENTER:
                window_rect.x = monitor_x + (total_width - window_rect.width) / 2
            elif halignment == ALIGN_LEFT:
                window_rect.x = monitor_x
            elif halignment == ALIGN_RIGHT:
                window_rect.x = monitor_x + total_width - window_rect.width
        window_rect.y = 0
        return window_rect

This patch is based on Guake 0.4.4. The current stable version is already at 0.8.4 and no longer contains the method shown above. Still, version 0.4.4 is what ships with the current Debian stable release (Jessie), so I thought the patch might be useful for more people than just me.

The Next Generation of Code Hosting Platforms


Source Code, CC BY-SA 2.0 by Christiaan Colen

The last few weeks there have been a lot of rumors about GitHub. GitHub is a code hosting platform which tries to make it as easy as possible to develop software and collaborate with people. GitHub’s main achievement is probably that it moved the social part of software development to a completely new level. As more and more Free Software initiatives started using GitHub, it became really easy to contribute a bug fix or a new feature to a third-party library or application you use. With a few clicks you can create a fork, add your changes and send them back to the original project as a pull request. You don’t need to create a new account or learn the tools used by the project; everybody is on the same platform and you can contribute immediately. In many cases this improves collaboration between projects a lot. The ability to easily mention developers of other projects in a pull request or issue also improves the social interactions between developers and makes collaboration across different projects the default.

Those are the good parts of GitHub, but there are also bad parts. GitHub is completely proprietary, which makes it impossible to fix or improve things yourself or to run your own instance. Benjamin Mako Hill already argued in 2010 why this is a problem and why Free Software needs free tools. More and more people seem to realize that this can create serious problems, and a large group of active and influential GitHub users sent a letter to GitHub which ends with:

“Hopefully none of these are a surprise to you as we’ve told you them before. We’ve waited years now for progress on any of them. If GitHub were open source itself, we would be implementing these things ourselves as a community — we’re very good at that!”

I can’t stress this argument enough. The Free Software community is a community of people who are used to doing things themselves, not just consuming. If we use a third-party library and find a bug or need a feature, we don’t just complain; we look at the code, try to fix it and send a patch upstream. We could do the same for the tools we use. But we need to be able to do it. They have to be Free Software.

Now a lot of rumors and discussions have evolved around the news that GitHub is undergoing a full-blown overhaul as execs and employees depart. Some people even predict that this will be the end of GitHub.

Wait for it. Three months from now, GitHub introduces "features" no-one wants or needs. 12 months from now, the exodus.

— Pieter Hintjens (@hintjens) February 7, 2016

It seems that many people underestimated the lock-in effect of hosting platforms such as GitHub for a long time. Now they start to realize that it might be easy to export the git repository, but what about the issue tracker, the wiki, the CI integration, all the social interaction and collaboration between projects, all the useful scripts written against the GitHub API? You can’t clone all this easily and move on.
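To make the lock-in concrete: the git history is one `git clone` away, but the issues live only behind the GitHub API. A rough Python sketch of rescuing them might look like this (owner and repo are placeholders; the v3 endpoint and `per_page` parameter come from GitHub’s public API documentation, not from this post):

```python
"""Sketch: export a repository's issues from the GitHub API (v3).

Assumptions: anonymous access, 100 issues per page; a real export
would also need comments, labels and milestones.
"""
import json
import urllib.request

API = "https://api.github.com"


def issues_url(owner, repo, page):
    """Build the paginated issues URL (state=all keeps closed issues)."""
    return "%s/repos/%s/%s/issues?state=all&per_page=100&page=%d" % (
        API, owner, repo, page)


def export_issues(owner, repo):
    """Download all issue pages and return them as one list of dicts."""
    issues, page = [], 1
    while True:
        with urllib.request.urlopen(issues_url(owner, repo, page)) as resp:
            batch = json.load(resp)
        if not batch:
            break
        issues.extend(batch)
        page += 1
    return issues
```

Dumping the returned list with `json.dump` leaves you with a local issues backup, but the wiki, the CI hooks and the cross-project mentions still stay behind.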

I don’t want to go deeper into the discussion about what’s going on at GitHub and what will happen next. There are plenty of articles and discussions about it; you can read some of them by following the links in this post.

At the moment the ESLint project is discussing the option of moving away from GitHub, and by reading the comments you can get an idea of the lock-in effect I’m talking about. With the growing dissatisfaction, and with people realizing that they are sitting in a “golden cage”, I have the feeling that we might have an opportunity to think about the next generation of code hosting platforms and what they should look like.

Some of you may remember how Git, the tool used as the underlying technology of GitHub, came into existence. Ironically, Git was born for reasons quite similar to those for which the next generation of source code hosting platforms might arise. Before Git, the Linux kernel developer community used BitKeeper, a proprietary source control management system. The developers decided to use it because, from a technical point of view, BitKeeper was so much better than what they had until then, mainly SVN and CVS. The developers enjoyed the tool and didn’t think about the problems such a dependency could create. At some point the copyright holder of BitKeeper withdrew gratis use of the product after claiming that Andrew Tridgell had reverse-engineered the BitKeeper protocols. The Linux kernel community had to move on, and Linus Torvalds wrote Git.

Back to the next generation of source code hosting and collaboration platforms. It is easy to find Free Software to run your own git repository, an issue tracker and a wiki. But in 2016 I think this is no longer enough. As described before, the crucial part is to connect software initiatives and developers and to make the interaction between them as easy as possible. That’s why traditional code hosting platforms such as Savannah are no longer a real option for many projects. I think the next generation code hosting platform needs to work in a decentralized way. Every project should be able to either host its own platform or choose a provider freely without losing the connection to other software initiatives and developers. This development, from proprietary and centralized solutions to centralized Free Software solutions to federated Free Software solutions, is something we have already seen in the area of social networks and cloud services. Maybe it is worth looking at what they have achieved and how they did it.

To make the same transition happen for code hosting platforms we need implementations based on Free Software, Open Standards and protocols which enable this kind of federation. The good news is that we already have most of the pieces. Git by itself is already a distributed revision control system and doesn’t need a central server for collaboration. What’s missing is a nice web interface to glue all these parts together: an issue tracker, a wiki, good integration with Free Software CI tools, good APIs and of course Git. This would enable us to fork projects across servers, send pull requests, interact with other developers and comment on issues no matter whether they are on the same server or not. Chances are high that we will find a suitable protocol by looking at the large number of federated social networks. By choosing an existing protocol of an established federated social network we could even provide tight integration with traditional social networks, which could bring additional benefits beyond what we have today. The hard part will be to pull all this together. Will it happen? I don’t know. But I hope that after we have seen the rise and fall of SourceForge, Google Code and maybe at some point GitHub, we will move on to create something more sustainable instead of building the next data silo and waiting until it fails again.

Integrate ToDo.txt into Claws Mail


I have used Claws Mail for many years now. I like to call it “the mutt mail client for people who prefer a graphical user interface”. Like Mutt, Claws is really powerful and allows you to adjust it exactly to your needs. During the last year I began to enjoy managing my open tasks with ToDo.txt, a powerful but still simple way to manage tasks based on plain text files. This allows me not only to manage my tasks on my computer but also to keep them in sync with my mobile devices. But there is one thing I always missed: often a task starts with an email conversation, and I always wanted to be able to easily turn a mail into a task in a way that the task links back to the original mail conversation. Finally I found some time to make it happen, and this is the result:

To integrate ToDo.txt into Claws Mail I wrote the Python program mail2todotxt.py. You pass the path of the mail you want to add as a parameter. By default the program creates a ToDo.txt task which looks like this:


<task_creation_date> <subject_of_the_mail> <link_to_the_mail>

Additionally you can call the program with the parameter “-i” to switch to interactive mode. The program will then ask you for a task description and use it instead of the mail subject. If you don’t enter a description the program falls back to the mail subject. To use the interactive mode you need to install the GTK 3 Python bindings.
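The conversion itself is simple enough to sketch. The following is a minimal Python re-implementation of the format shown above, not the original mail2todotxt.py; in particular the claws-mail:// link scheme is a placeholder for whatever link format the real script writes:

```python
"""Sketch of the mail-to-task conversion (not the original script).

Assumption: the task links back to the mail via its Message-ID, using
a placeholder "claws-mail://" scheme.
"""
import email
import time


def mail_to_task(raw_mail, today=None):
    """Build '<creation_date> <subject> <link>' from a raw RFC 822 mail."""
    msg = email.message_from_string(raw_mail)
    subject = msg.get("Subject", "(no subject)").strip()
    msg_id = msg.get("Message-ID", "").strip("<> \n")
    date = today or time.strftime("%Y-%m-%d")
    return "%s %s claws-mail://%s" % (date, subject, msg_id)
```

The interactive mode then only has to swap the subject for whatever description the user types before the line is appended to todo.txt.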

To call this program directly from Claws Mail go to Configuration->Actions and create an action which executes the following command:


/path_to_mail2todotxt/mail2todotxt.py -i %f &

Just skip the -i parameter if you always want to use the subject as the task description. Now you can execute the program for the selected mail via Tools->Actions-><The_name_you_chose_for_the_action>. Additionally you can add a shortcut if you wish; e.g. I use “Ctrl-t” to create a new task.

Now that I’m able to turn a mail into a ToDo.txt item I also want to go back to the mail while looking at my open tasks. Therefore I use the “open” action from Sebastian Heinlein, which I extended with a handler for Claws Mail links. After you have added this action to your ~/.todo.action.d you can start Claws Mail and jump directly to the referred mail by typing:


t open <task_number_which_refers_to_a_mail>

The original version of the “open” action can be found at Gitorious. The modified version, which you need to open the Claws Mail links, can be found here.
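The handler side can also be sketched. This is an illustration, not Sebastian Heinlein’s script: it assumes the task line carries a claws-mail:// token (a placeholder scheme) and that `claws-mail --select` accepts the stripped reference, which you should check against your Claws Mail version:

```python
"""Sketch of a todo.txt "open" handler for mail links (illustration
only; the claws-mail:// scheme is a placeholder)."""
import re
import subprocess

LINK = re.compile(r"claws-mail://(\S+)")


def extract_mail_ref(task_line):
    """Return the mail reference stored in a task line, or None."""
    match = LINK.search(task_line)
    return match.group(1) if match else None


def open_task_mail(task_line):
    """Jump to the referred mail in a running Claws Mail instance."""
    ref = extract_mail_ref(task_line)
    if ref is None:
        raise ValueError("task does not reference a mail")
    # assumption: `claws-mail --select <ref>` selects the message
    subprocess.call(["claws-mail", "--select", ref])
```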

The ownCloud Public Link Creator


ownCloud Share Link Creator - Context Menu

Holiday season is the perfect time to work on some items from your personal ToDo list. ownCloud 6 introduced a public REST-style Share-API which allows you to call various share operations from external applications. Since I started working on the Share-API I have thought about having a simple shell script in my file manager to automatically upload a file and generate a public link for it… Here it is!

I wrote a script which can be integrated into the Thunar file manager as a “custom action”. The script may also work with other file managers which provide similar possibilities, e.g. Nautilus, but so far I have tested and used it with Thunar only. If you try the script with a different file manager I would be happy to hear about your experience.

ownCloud Share Link Creator - File Upload

If you configure the “custom action” in Thunar, make sure to pass the paths of all selected files to the program using the “%F” parameter; the program expects absolute paths. In the “Appearance and Conditions” tab you can activate all file types and directories. Once the custom action is configured you can execute the program from the right-click context menu; it works for all file types and also for directories. The script first uploads the files/directories to your ownCloud and afterwards generates a public link to access them. The link is copied directly to your clipboard, and additionally a dialog informs you about the URL. If you upload a single file or directory, it is created directly below the default target folder defined in the shell script. If you select multiple files, the program groups them together in a directory named after the current timestamp.
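For the curious, the upload-then-share flow can be sketched in a few lines. This is a Python re-sketch, not the shell script itself: server URL, credentials and target folder are placeholders, while the endpoints (WebDAV at remote.php/webdav/, OCS Share-API with shareType=3 for public links) are the documented ownCloud 6 interfaces:

```python
"""Sketch of the upload-then-share flow (placeholder credentials)."""
import base64
import os
import time
import urllib.parse
import urllib.request

SERVER = "https://cloud.example.org"  # placeholder server
AUTH = "Basic " + base64.b64encode(b"user:password").decode()  # placeholder
TARGET = "/instant-uploads"           # default target folder


def remote_dir(paths, now=None):
    """A single selection goes directly below TARGET; multiple files
    are grouped in a directory named after the current timestamp."""
    if len(paths) == 1:
        return TARGET
    stamp = time.strftime("%Y-%m-%d_%H%M%S", now or time.localtime())
    return "%s/%s" % (TARGET, stamp)


def _request(method, url, data=None):
    req = urllib.request.Request(url, data=data, method=method)
    req.add_header("Authorization", AUTH)
    return urllib.request.urlopen(req)


def upload_and_share(paths):
    """Upload the files via WebDAV, then request a public link.

    A real script would also create the timestamp directory first
    (MKCOL) and read the link from the OCS XML response.
    """
    folder = remote_dir(paths)
    for path in paths:
        with open(path, "rb") as fd:
            _request("PUT", "%s/remote.php/webdav%s/%s" % (
                SERVER, folder, os.path.basename(path)), fd.read())
    # share the folder for multiple files, otherwise the single file
    share_path = folder if len(paths) > 1 else \
        "%s/%s" % (folder, os.path.basename(paths[0]))
    body = urllib.parse.urlencode({"path": share_path, "shareType": 3})
    return _request("POST", "%s/ocs/v1.php/apps/files_sharing/api/v1/shares"
                    % SERVER, body.encode())
```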

The program already does almost everything I want. As already said, it can upload multiple files and even directories. One thing I want to add in the future is the ability to detect an ownCloud sync folder on the desktop. If the user selects a file in the sync folder, the script should skip the upload and create the share link directly.

Edit: In the meantime I got feedback that the script also works nicely with Dolphin, Nautilus and Nemo.

My Backup Solution


For a long time I made backups of my home partition by hand, running rdiff-backup from time to time. But as you can imagine, this approach doesn’t produce regular and reliable backups.

I couldn’t put this task into a simple cronjob for two reasons. First, I use encrypted hard disks, and my backup disk is connected via USB and not always on. So before a backup starts I have to turn on my backup disk and make sure that my home partition and my backup disk are decrypted and mounted. Second, I don’t want the backup to happen during my regular work. In my experience such processes often start at the most annoying moments.

So I decided that I need a semi-automatic backup which runs during shutdown. The result is this small script, which I put in /etc/rc0.d/K05backup.sh:

#!/bin/bash
 
currentTime=`date +%s`
timeUntilNextBackup=604800                 # 604800sec = 1week
startBackup=false
 
# check if it's time for the next backup
if [ -f /var/log/nextBackup.log ]; then
    nextBackupTime=`cat /var/log/nextBackup.log`
    if [ $(($currentTime - $nextBackupTime)) -gt 0 ]; then
        startBackup=true                       #time for the next backup
    fi
else
    startBackup=true
fi
 
if [ "$startBackup" = true ]; then
    echo "It's time for another Backup!"
    echo "Don't forget to switch on your backup hard disk before you start!"
    repeat=true
    while $repeat; do
        echo -n "Start backup procedure now? (y)es or (n)o? "
        read char
        case $char in
            [yY] )
                if [ ! -d /home/schiesbn ]; then
                    echo "encrypted HOME partition has to be mounted..."
                    cryptsetup luksOpen /dev/sda6 secureHome
                    mount /dev/mapper/secureHome /home
                fi
                echo "encrypted BACKUP partition has to be mounted..."
                cryptsetup luksOpen /dev/sdd1 secureBackup
                mount /dev/mapper/secureBackup /mnt/backup
                echo "Starting Backup...";
                rdiff-backup --print-statistics /home/schiesbn /mnt/backup
                echo "umount backup disk..."
                umount /mnt/backup
                cryptsetup luksClose secureBackup
                # calculate the time for the next backup and write it to the log
                nextBackup=$(($currentTime + $timeUntilNextBackup))
                echo $nextBackup > /var/log/nextBackup.log
                echo "DONE."
                sleep 10   #give me some time to look at the backup statistics
                repeat=false;;
            [nN] )
                repeat=false;;
        esac
    done
fi

If the last backup is older than one week the script asks me whether I want to do another backup. Then I can decide to postpone it or start it right away. If I decide to start the backup procedure I get the opportunity to decrypt my backup and home partitions before rdiff-backup starts. After that I can leave the room and be sure that the computer will shut down once the backup has finished.

Until now this is the best, most reliable, least annoying and most automated solution I could find.

Jabber Mail Notification


I always struggled to find the right mail notification applet for my desktop. Furthermore I always stumbled over the question: why do I have to ask the mail server at fixed intervals “Do I have a new e-mail?” Wouldn’t it be better if the mail server notified me when a new e-mail arrives?
This is probably a new form of the good old question “mailing list vs. bulletin board”, or more generally: do I have to fetch the information, or does the information come to me? Personally I have always preferred the information to come to me instead of hunting around for it.

Thinking about this question I realized that notification through Jabber would be perfect, and the open XMPP protocol virtually invites one to do such things.

The idea was born. The first step was to find an easy-to-use XMPP implementation for a scripting language like Python, Ruby or PHP. In the end I found a quite nice and easy-to-use PHP library. While searching for such a library I also found this guidance (German only), borrowed some code from it, and my solution was born:

<?php
// The script gets the input either as an argument, from a REQUEST-variable or from stdin
// If you use it within procmail you will get the input through stdin 
if (isset($argv[1])) {
    $msg = $argv[1];
} elseif (isset($_REQUEST['msg'])) {
    $msg = urldecode($_REQUEST['msg']);
} else {
    // open stdin. Only read the first 4096 character, this should be enough to match
    // the FROM- and  SUBJECT-header
    $stdin = fopen('php://stdin', 'r');
    $msg   = fread($stdin, 4096);
 
    if (empty($msg)) {
        $msg = "empty";
    } else {
        // Get FROM and SUBJECT headers
        preg_match('@From:(.*)@i', $msg, $from);
        preg_match('@Subject:(.*)@i', $msg, $subject);
        $msg = "\n" . $from[0] . "\n" . $subject[0] . "\n";
    }
}
 
// now init xmpp and get the notification out
include 'XMPPHP/XMPP.php';
 
$conn = new XMPPHP_XMPP('schiessle.org', 5222, 'user', 'password', 'xmpphp', 'schiessle.org', $printlog=false, $loglevel=XMPPHP_Log::LEVEL_INFO);
 
try {
    $conn->connect();
    $conn->processUntil('session_start');
    $conn->presence();
    $conn->message('me@jabber.server.org', $msg);
    $conn->disconnect();
} catch(XMPPHP_Exception $e) {
    die($e->getMessage());
}
?>

Now I just had to tell procmail to pipe the mails through the PHP script. If you want to get notified about all mails you can simply put these lines at the top of your procmail rules (or maybe at least behind the spam filter rules 😉 ):

:0c
|php /home/schiessle/bin/mailnotification.php

I want to get notified only about some specific mails so I extended my procmail rules in this way:

:0
* ^(To:|Cc:).*foo@bar-mailinglist.org
{
    :0c
    |php /home/schiessle/bin/mailnotification.php

    :0
    .bar-list/
}

That’s it! All in all it was quite easy to get e-mail notifications through Jabber. Now I don’t have to search for the right applet, configure it, etc. All I have to do is start my Jabber client and I will get notified about new mails no matter which desktop or computer I’m using.