Wiedi in Wonderland

Profiling Django with DTrace and cProfile

Mon 18 November 2019

Django is a fantastic framework, not least because it includes everything needed to quickly create web apps. But developers should not be the only ones benefiting from this. The app should also be fast for its users.

The official documentation has a chapter about performance and optimization with good advice. In this article I want to build on that and show tools and methods I've used in the past to reduce page load time.

Measure & collect data

Performance benchmarking and profiling are essential to any optimization work. Blindly applying optimizations could add complexity to the code base and maybe even make things worse.

We need performance data to know which parts to focus on and to validate that any changes have the desired effect.

django-debug-toolbar

The django-debug-toolbar is easy to use and has a nice interface. It shows how much time is spent on each SQL query, offers a quick button to get EXPLAIN output for that query, and presents a few other interesting details. The template-profiler is an extra panel that adds profiling data about the template rendering process.

There are, however, a few drawbacks to the django-debug-toolbar. Because of how it integrates into the site it only makes sense to use it in a development environment where DEBUG = True. It also comes with a considerable performance penalty of its own.
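For completeness, a minimal sketch of the usual setup in a development settings.py and urls.py, assuming the package was installed with pip (the __debug__ prefix is just the conventional choice):

# settings.py (development only)
DEBUG = True
INSTALLED_APPS += ['debug_toolbar']
MIDDLEWARE = ['debug_toolbar.middleware.DebugToolbarMiddleware'] + MIDDLEWARE
INTERNAL_IPS = ['127.0.0.1']  # the toolbar is only shown to these IPs

# urls.py
from django.urls import include, path
urlpatterns += [path('__debug__/', include('debug_toolbar.urls'))]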

DTrace

DTrace doesn't have these limitations. It can be used on production services and gives insight well beyond just the Python part of the project. You can look deep into the database, the Python interpreter, the webserver and the operating system to get a complete picture of where the time is spent.

Instead of a pretty browser UI this happens in the CLI. DTrace scripts are written in an AWK-like syntax. There is also a collection of useful scripts in the dtracetools package. When using the Joyent pkgsrc repos it can be installed with:

pkgin install dtracetools

One of the useful scripts in this package is dtrace-mysql_query_monitor.d, which shows all MySQL queries:

Who                  Database             Query                                    QC Time(ms)
kumquat@localhost    kumquat              set autocommit=0                         N  0        
kumquat@localhost    kumquat              set autocommit=1                         N  0        
kumquat@localhost    kumquat              SELECT `django_session`.`session_key`, `django_session`.`session_data`, `django_session`.`expire_date` FROM `django_session` WHERE (`django_session`.`session_key` = 'w4ty3oznpqesvoieh64me1pvdfwjhr2k' AND `django_session`.`expire_date` > '2019-11-18 13:04: N  0        
kumquat@localhost    kumquat              SELECT `auth_user`.`id`, `auth_user`.`password`, `auth_user`.`last_login`, `auth_user`.`is_superuser`, `auth_user`.`username`, `auth_user`.`first_name`, `auth_user`.`last_name`, `auth_user`.`email`, `auth_user`.`is_staff`, `auth_user`.`is_active`, `auth_u Y  0        
kumquat@localhost    kumquat              SELECT `cron_cronjob`.`id`, `cron_cronjob`.`when`, `cron_cronjob`.`command` FROM `cron_cronjob` Y  0        
...

To do something similar for PostgreSQL:

dtrace -n '
#pragma D option quiet
#pragma D option switchrate=10hz
#pragma D option strsize=2048

dtrace:::BEGIN {
 printf("%-9s %-80s\n", "Time(ms)", "Query");
}

postgres*::query-start {
  start = timestamp;
}

postgres*::query-done {
  printf("%-9d %-80s\n\n", ((timestamp - start) / 1000 / 1000), copyinstr(arg0));
}
'

Which will look like this:

Time(ms)  Query                                                                           
7         SELECT "auth_user"."id", "auth_user"."password", "auth_user"."last_login", "auth_user"."is_superuser", "auth_user"."username", "auth_user"."first_name", "auth_user"."last_name", "auth_user"."email", "auth_user"."is_staff", "auth_user"."is_active", "auth_user"."date_joined" FROM "auth_user" WHERE "auth_user"."username" = 'wiedi'
...

To look into the Python process itself there are a few very useful dtrace-py_* scripts in the dtracetools package. For example dtrace-py_cputime.d shows the number of calls to a function as well as the inclusive and exclusive CPU time:

Count,
   FILE                 TYPE       NAME                                COUNT
...
   base.py              func       render                               1431
   sre_parse.py         func       get                                  1607
   base.py              func       render_annotated                     1621
   functional.py        func       <genexpr>                            1768
   base.py              func       resolve                              1888
   sre_parse.py         func       __getitem__                          2011
   sre_parse.py         func       __next                               2104
   related.py           func       <genexpr>                            2324
   __init__.py          func       <genexpr>                            3974
   regex_helper.py      func       next_char                            9033
   -                    total      -                                  113741

Exclusive function on-CPU times (us),
   FILE                 TYPE       NAME                                TOTAL
...
   base.py              func       _resolve_lookup                     22070
   base.py              func       resolve                             22810
   base.py              func       render                              22997
   related.py           func       foreign_related_fields              23543
   functional.py        func       wrapper                             25928
   defaulttags.py       func       render                              26218
   base.py              func       __init__                            33303
   sre_parse.py         func       _parse                              42869
   regex_helper.py      func       next_char                           44579
   regex_helper.py      func       normalize                           71313
   -                    total      -                                 1809937

Inclusive function on-CPU times (us),
   FILE                 TYPE       NAME                                TOTAL
...
   wsgi.py              func       __call__                          1790427
   sync.py              func       handle_request                    1804334
   sync.py              func       handle                            1806034
   sync.py              func       accept                            1806870
   loader_tags.py       func       render                            2452085
   base.py              func       _render                           2886611
   base.py              func       render_annotated                  4563513
   base.py              func       render                            6018042
   deprecation.py       func       __call__                         12147994
   exception.py         func       inner                            13873367

In this case we see that a fair bit of time is spent on regular expressions, probably related to URL routing.

cProfile

The standard Python library comes with cProfile, which collects precise timings of function calls. Together with the Django test client this can be used to automate performance testing.

Automating the performance data collection step as much as possible allows for quick iterations. For a recent project I created a dedicated manage.py command to profile the most important URLs. It looked similar to this:

from django.core.management.base import BaseCommand
from django.test import Client
from django.contrib.auth.models import User
import io
import pstats
import cProfile


def profile_url(url):
    c = Client()
    c.force_login(User.objects.first())

    # profile a single request, following redirects
    pr = cProfile.Profile()
    pr.enable()
    r = c.get(url, follow=True)
    pr.disable()

    assert r.status_code == 200

    # print the 35 most expensive entries, sorted by cumulative time
    s = io.StringIO()
    pstats.Stats(pr, stream=s).sort_stats('cumulative').print_stats(35)
    print(s.getvalue())


class Command(BaseCommand):
    help = 'run profiling functions'

    def handle(self, *args, **options):
        profile_url("/")
        profile_url("/contacts/")
        profile_url("/events/")
        profile_url("/search/?q=info&type=all")

Instead of just printing the statistics they can also be saved to disk with pr.dump_stats(fn). This allows further processing with flameprof to create FlameGraphs.

Django flamegraph
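To produce a graph like the one above, a sketch (the file names here are placeholders, assuming flameprof was installed with pip):

# in profile_url(), dump instead of (or in addition to) printing:
pr.dump_stats('profile_home.prof')

# then render the FlameGraph from the shell:
#   flameprof profile_home.prof > profile_home.svg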

%timeit

Another handy utility from the standard library is timeit. You'll often find examples like this:

$ python -m timeit '"-".join(str(n) for n in range(100))'
10000 loops, best of 5: 30.2 usec per loop

This is useful when experimenting with small statements.

To take this one step further, I recommend installing IPython, which will transform the Django manage.py shell into a very powerful development environment.

Besides tab-completion and a thousand other features you'll have the %timeit magic.

In [1]: from app.models import *
In [2]: e = Events.objects.first()
In [3]: %timeit e.some_model_method()
703 ns ± 7.05 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

Optimize

Once you know which parts of your project are the slowest you can start improving them. Usually the most time is spent on database queries, followed by template rendering.

Although every project might need different optimizations, there are some common patterns.

Prefetch related objects

When you display a list of objects in your template and access some field of a related object, each access triggers an additional database query. This can easily add up and result in huge numbers of queries for a single request.

When you know which related fields you will need, you can tell Django to fetch them in a more efficient way. The two important methods are select_related() and prefetch_related().

While select_related() works by using a SQL JOIN, prefetch_related() issues one additional query per lookup. Both are easy to use, require nearly no modifications to existing code and can result in huge improvements.
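As a sketch, assuming a hypothetical Event model with a venue foreign key and an attendees many-to-many field:

# N+1 pattern: one query for the list plus one per event for .venue
for event in Event.objects.all():
    print(event.venue.name)

# select_related() pulls in venue with a JOIN (ForeignKey, OneToOne)
for event in Event.objects.select_related('venue'):
    print(event.venue.name)

# prefetch_related() runs one extra query per lookup (ManyToMany, reverse FK)
for event in Event.objects.prefetch_related('attendees'):
    names = [a.username for a in event.attendees.all()]  # served from the prefetch cache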

Indexes

Another easy-to-apply performance tweak is to make sure you have the right database indexes. Whenever you use a field to filter, and in some cases for order_by, you should consider whether it needs an index. Creating an index is as easy as adding db_index=True to your model field, then creating and running the resulting migration. Be sure to validate the improvement with SQL EXPLAIN.
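Sticking with the hypothetical Event model, indexing a field that is frequently filtered on could look like this:

from django.db import models

class Event(models.Model):
    title = models.CharField(max_length=200)
    starts_at = models.DateTimeField(db_index=True)  # often used in filter()/order_by()

Then ./manage.py makemigrations and ./manage.py migrate create the index.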

Cache

Caching is a huge topic and there are many ways to improve Django performance with caching. Depending on the environment and performance characteristics, the place, duration and layer where a cache is used will differ.

The Django cache framework is an easy way to leverage Memcached at various layers. The @cached_property decorator is often helpful for fat model methods.
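A small sketch of both, again with the hypothetical Event model (the cache key and five minute timeout are arbitrary):

from django.core.cache import cache
from django.db import models
from django.utils import timezone
from django.utils.functional import cached_property


class Event(models.Model):
    starts_at = models.DateTimeField()
    attendees = models.ManyToManyField('auth.User')

    @cached_property
    def attendee_count(self):
        # computed once, then stored on the instance for its lifetime
        return self.attendees.count()


def upcoming_events():
    events = cache.get('upcoming-events')
    if events is None:
        events = list(Event.objects.filter(starts_at__gte=timezone.now()))
        cache.set('upcoming-events', events, 5 * 60)
    return events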

Precalculate

Some calculations just take too long for the usual time budget of an HTTP request. In these cases I've found it useful to precalculate the needed data in a background process. This can be done with a task queue like Celery, or with a little less complexity by just having a manage.py command that either runs long as a service or is called as a cronjob.
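A minimal sketch of the cronjob variant, using the cache as the hand-off point (build_report() and the cache key are placeholders):

from django.core.cache import cache
from django.core.management.base import BaseCommand

from app.reports import build_report  # placeholder for the slow calculation


class Command(BaseCommand):
    help = 'precalculate expensive report data'

    def handle(self, *args, **options):
        # timeout None: keep the value until the next cronjob run replaces it
        cache.set('report-data', build_report(), None)

The views then only need a cheap cache.get('report-data').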

Beyond Django

Beyond these common cases there are many further ways to optimize web projects. Changing the database schema by denormalizing might improve some queries. Other techniques will depend heavily on the circumstances of the project.

There are usually also plenty of opportunities to optimize up the stack as well as below it. Measure performance data in the browser, and the time spent inside Django will often turn out to be only a small part of the total. With that new data you can start working on DOM rendering, CSS and JS, reducing request sizes for images, or better network routing.

Looking at lower levels can also have huge benefits. Even small improvements there can result in big performance gains simply because these parts run so often.

A recent example was a gain of ~120ms per request achieved by changing how the Python interpreter was compiled. The CPython version tested had the retpoline mitigation enabled. This was an isolated internal service where the threat model did not require it. So simply compiling without -mindirect-branch=thunk-inline -mfunction-return=thunk-inline -mindirect-branch-register resulted in a large performance boost.

If you have a web project in need of some performance optimization feel free to reach out!

More than nice: changing CPU scheduling priority inside SmartOS LX-Zones

Sat 09 November 2019

The illumos kernel supports different process scheduling classes. The most interesting ones are:

  • RT: Real-time
  • FX: Fixed
  • FSS: Fair Share

The default class is FSS, which dynamically adjusts the process priority based on things like the nice level and how much CPU time was used recently. FX can be used to assign a fixed priority. With RT a process can be assigned a defined time quantum. Since RT processes have higher priority than system processes, special care must be taken.

You can learn more about those in priocntl(2) and the old but still correct OpenSolaris System Administration Guide.

In most cases the default FSS (Fair Share) scheduling class will work very well and be fair, so the need to make changes here is very rare. Sometimes, however, you might want to run very timing-critical applications alongside CPU-intensive, but otherwise uncritical, processes (e.g. cron jobs).

In native SmartOS/illumos zones the scheduling parameters are changed with priocntl(1). For LX-Zones we need to hack a few things into place first.

First of all we need to grant that zone the proc_priocntl privilege, because tuning the CPU scheduling parameters is not something every user should be allowed to do:

vmadm update 874520a9-708e-41d8-86a8-f46c7a8cdf27 limit_priv=default,proc_priocntl

Inside the zone two symlinks are required to place the native tools into the Linux environment:

ln -s /native/usr/bin/priocntl /usr/bin/
ln -s /native/usr/lib/class /usr/lib/class

After that, setting the priority works like this:

priocntl -s -c FX -p 60 -m 60 -i pid 30586

This changes the priority for pid 30586 to 60.

Parallel git and cvs workflow

Wed 08 July 2015

This describes a workflow for using git to develop on pkgsrc. As pkgsrc upstream uses cvs you'll need two parallel copies of the tree: one using git and one using cvs. To get changes from git into cvs it uses git-cvsexportcommit. Git has many benefits like local commits, easy branching, atomic commits across multiple files, etc. I also like to prepare my commit messages and review them before setting them in stone in the eternal history.

Environment

To set up a basic working environment I use a configuration like the following.

Add to ~/.bash_profile:

export CVSEDITOR=$EDITOR
export CVSROOT=$LOGNAME@cvs.NetBSD.org:/cvsroot
export CVS_RSH=ssh

Add to ~/.cvsrc:

# recommended CVS configuration file from the pkgsrc guide
cvs -q -z2
checkout -P
update -dP
diff -upN
rdiff -u
release -d

CVS

get your cvs tree:

mkdir ~/cvs ~/tmp
cd ~/cvs
cvs checkout pkgsrc

GIT

check out Jörg's git conversion:

cd ~/tmp
git clone https://github.com/jsonn/pkgsrc.git

bootstrap

the git tree will be your working tree:

cd ~/tmp/pkgsrc/bootstrap
./bootstrap --unprivileged

add to ~/.bash_profile:

export PATH="~/pkg/bin:~/pkg/sbin:$PATH"

create your change

as an example let's do a simple update of nano:

cd ~/tmp/pkgsrc/editors/nano

make changes and test them:

$EDITOR Makefile
bmake package
...

stage your changes and commit:

git add Makefile distinfo PLIST
git commit

you now get to review your change as one entity, including the commit message:

git show -1
commit 3b861ca6563af68aa3c175ebae151d3870c9b5d2
Author: Sebastian Wiedenroth <wiedi@frubar.net>
Date:   Wed Jul 8 22:59:39 2015 +0200

    Update nano to 2.4.2

    2015.07.05 - GNU nano 2.4.1 "Portorož" is released.  This release
            includes several fixes, including the ability to resize
            when in modes other than the main editing window,
            proper displaying of invalid UTF-8 bytes, new syntax
            definitions for Elisp, Guile, and PostgreSQL, and
            better display of shortcuts in the help menu and file
            browser.  Thanks for your patience and using nano!

diff --git a/editors/nano/Makefile b/editors/nano/Makefile
index 62fb2c1..c2a4b12 100644
--- a/editors/nano/Makefile
+++ b/editors/nano/Makefile
@@ -1,10 +1,10 @@
 # $NetBSD: Makefile,v 1.46 2015/06/05 01:32:38 wiedi Exp $

-DISTNAME=  nano-2.4.1
+DISTNAME=  nano-2.4.2
 CATEGORIES=    editors
 MASTER_SITES=  http://www.nano-editor.org/dist/v2.4/

-MAINTAINER=    pkgsrc-users@NetBSD.org
+MAINTAINER=    wiedi@frubar.net
 HOMEPAGE=  http://www.nano-editor.org/
 COMMENT=   Small and friendly text editor (a free replacement for Pico)
 LICENSE=       gnu-gpl-v3
diff --git a/editors/nano/PLIST b/editors/nano/PLIST
index 7d4f944..1ecfdd4 100644
--- a/editors/nano/PLIST
+++ b/editors/nano/PLIST
@@ -1,4 +1,4 @@
-@comment $NetBSD: PLIST,v 1.18 2015/06/05 01:32:38 wiedi Exp $
+@comment $NetBSD$
 bin/nano
 bin/rnano
 info/nano.info
@@ -50,10 +50,12 @@ share/nano/cmake.nanorc
 share/nano/css.nanorc
 share/nano/debian.nanorc
 share/nano/default.nanorc
+share/nano/elisp.nanorc
 share/nano/fortran.nanorc
 share/nano/gentoo.nanorc
 share/nano/go.nanorc
 share/nano/groff.nanorc
+share/nano/guile.nanorc
 share/nano/html.nanorc
 share/nano/java.nanorc
 share/nano/javascript.nanorc
@@ -70,6 +72,7 @@ share/nano/patch.nanorc
 share/nano/perl.nanorc
 share/nano/php.nanorc
 share/nano/po.nanorc
+share/nano/postgresql.nanorc
 share/nano/pov.nanorc
 share/nano/python.nanorc
 share/nano/ruby.nanorc
diff --git a/editors/nano/distinfo b/editors/nano/distinfo
index 6b372a5..3420686 100644
--- a/editors/nano/distinfo
+++ b/editors/nano/distinfo
@@ -1,6 +1,6 @@
 $NetBSD: distinfo,v 1.20 2015/06/05 01:32:38 wiedi Exp $

-SHA1 (nano-2.4.1.tar.gz) = 422958cb700cc8cedc9a6b5ec00bf968c0fa875e
-RMD160 (nano-2.4.1.tar.gz) = 84bd54e50b5e8c6457d983dc7ef730b5a0303bf8
-Size (nano-2.4.1.tar.gz) = 1890805 bytes
+SHA1 (nano-2.4.2.tar.gz) = bcf2bb3fcc04874cb38c52cfd8feebce61dd5e0a
+RMD160 (nano-2.4.2.tar.gz) = 6a3d0569740c223230af6ae88f8ef0797402c4c2
+Size (nano-2.4.2.tar.gz) = 1898633 bytes
 SHA1 (patch-configure) = 3a63b02a39000d5a15087739648b82e999d14f56

You can take this diff and apply it to different systems for testing. This is very easy with git.

Once you are happy you can commit to CVS.

committing

cd ~/tmp/pkgsrc/
git cvsexportcommit -w ~/cvs/pkgsrc/ -pcv 3b861ca6563af68aa3c175ebae151d3870c9b5d2

This will commit (-c) if the change with the id 3b861ca6563af68aa3c175ebae151d3870c9b5d2 applies cleanly (-p for paranoid).

Keeping CHANGES and TODO up to date:

cd ~/cvs/pkgsrc/editors/nano
bmake changes-entry
cd ../../doc
cvs diff
cvs commit CHANGES-2015

The reason not to prepare this in the git commit is that the CHANGES file is updated very frequently and would conflict with the strict settings used by cvsexportcommit.

keeping both trees updated

before starting a new change update your git tree:

git pull

As you now have git you can also rebase easily onto more recent changes from upstream.

Before committing, update your cvs tree:

cvs update -d

Building illumos-gate on OmniOS

Thu 02 July 2015

Illumos-gate is the source repository of illumos. It is comparable to the Linux kernel in that you usually don't install from illumos-gate directly but use one of the distributions like SmartOS, OmniOS or OpenIndiana.

Unlike Linux, the stable interface is not at the syscall layer but at libc. So it kinda makes sense that the repository also contains some userland software like libc and the core utilities that make up the operating system.

Illumos has strict requirements for its build environment. Currently you need a specific version of gcc. In the past the only distribution where you could do a clean build of unmodified illumos-gate was OpenIndiana. Thanks to Dan McDonald it recently also became possible to build on OmniOS.

For detailed information see the "How to build illumos" page in the wiki. If you are starting out with OpenIndiana Ryan Zezeski has very helpful instructions. These are the steps I used on OmniOS.

Install dependencies

First start out by making sure you have all required software installed:

sudo pkg install -v \
 pkg:/developer/astdev \
 pkg:/developer/build/make \
 pkg:/developer/build/onbld \
 pkg:/developer/gcc44 \
 pkg:/developer/sunstudio12.1 \
 pkg:/developer/gnu-binutils \
 pkg:/developer/java/jdk \
 pkg:/developer/lexer/flex \
 pkg:/developer/object-file \
 pkg:/developer/parser/bison \
 pkg:/developer/versioning/mercurial \
 pkg:/developer/versioning/git \
 pkg:/developer/library/lint \
 pkg:/library/glib2 \
 pkg:/library/libxml2 \
 pkg:/library/libxslt \
 pkg:/library/nspr/header-nspr \
 pkg:/library/perl-5/xml-parser \
 pkg:/library/security/trousers \
 pkg:/runtime/perl \
 pkg:/runtime/perl-64 \
 pkg:/runtime/perl/module/sun-solaris \
 pkg:/system/library/math \
 pkg:/system/library/install \
 pkg:/system/library/dbus \
 pkg:/system/library/libdbus \
 pkg:/system/library/libdbus-glib \
 pkg:/system/library/mozilla-nss/header-nss \
 pkg:/system/header \
 pkg:/system/management/snmp/net-snmp \
 pkg:/text/gnu-gettext \
 pkg:/library/python-2/python-extra-26

Get the code

Next clone the repository:

cd ~
git clone git://github.com/illumos/illumos-gate.git
cd illumos-gate

Closed source binaries

Some closed source binaries are still required:

wget -c \
 https://download.joyent.com/pub/build/illumos/on-closed-bins.i386.tar.bz2 \
 https://download.joyent.com/pub/build/illumos/on-closed-bins-nd.i386.tar.bz2
tar xjvpf on-closed-bins.i386.tar.bz2
tar xjvpf on-closed-bins-nd.i386.tar.bz2

Setup illumos.sh environment file

The build environment is configured in a script that is passed to nightly.sh as an argument. Start out by copying the example:

cp usr/src/tools/env/illumos.sh .

Then adjust it to your needs. My diff looks like this:

--- usr/src/tools/env/illumos.sh        Sat May 30 16:58:22 2015
+++ illumos.sh  Mon Jun  1 14:18:16 2015
@@ -58,10 +58,10 @@

 # This is a variable for the rest of the script - GATE doesn't matter to
 # nightly itself
-export GATE='testws'
+export GATE='illumos-gate'

 # CODEMGR_WS - where is your workspace at (or what should nightly name it)
-export CODEMGR_WS="$HOME/ws/$GATE"
+export CODEMGR_WS="$HOME/$GATE"

 # Maximum number of dmake jobs.  The recommended number is 2 + NCPUS,
 # where NCPUS is the number of logical CPUs on your build system.
@@ -206,7 +206,9 @@
 # exists to make it easier to test new versions of the compiler.
 export BUILD_TOOLS='/opt'
 #export ONBLD_TOOLS='/opt/onbld'
-export SPRO_ROOT='/opt/SUNWspro'
+
+# Help OmniOS find lint
+export SPRO_ROOT='/opt'
 export SPRO_VROOT="$SPRO_ROOT"

 # This goes along with lint - it is a series of the form "A [y|n]" which
@@ -230,15 +232,26 @@

 # Comment this out to disable support for IPP printing, i.e. if you
 # don't want to bother providing the Apache headers this needs.
-export ENABLE_IPP_PRINTING=
+export ENABLE_IPP_PRINTING='#'

 # Comment this out to disable support for SMB printing, i.e. if you
 # don't want to bother providing the CUPS headers this needs.
-export ENABLE_SMB_PRINTING=
+export ENABLE_SMB_PRINTING='#'

 # If your distro uses certain versions of Perl, make sure either Makefile.master
 # contains your new defaults OR your .env file sets them.
 # These are how you would override for building on OmniOS r151012, for example.
-#export PERL_VERSION=5.16.1
-#export PERL_ARCH=i86pc-solaris-thread-multi-64int
-#export PERL_PKGVERS=-5161
+export PERL_VERSION=5.16.1
+export PERL_ARCH=i86pc-solaris-thread-multi-64int
+export PERL_PKGVERS=-5161
+
+# OmniOS places GCC 4.4.4 differently.
+export GCC_ROOT=/opt/gcc-4.4.4/
+
+export ONNV_BUILDNUM=151014
+
+# GCC only
+export CW_NO_SHADOW=1
+export ONLY_LINT_DEFS=-I${SPRO_ROOT}/sunstudio12.1/prod/include/lint
+export __GNUC=""

Also copy nightly.sh and make it executable:

cp usr/src/tools/scripts/nightly.sh .
chmod +x nightly.sh

I am not sure why this is needed:

sudo ln -s /opt/sunstudio12.1/ /opt/SUNWspro

Build

Now all you need to start a build is:

./nightly.sh illumos.sh

Developing

If all went well you now have a working development environment and have already done your first build. Next you can start making changes. The usual workflow is documented in the excellent illumos developer's guide.

For code review illumos also has a reviewboard, which for a few months now can be used instead of webrev.

Finally there is also a chapter on getting code upstream.

Installing OmniOS

Wed 01 July 2015

OmniOS is one of the illumos distributions. Created by OmniTI, it is primarily designed as a server operating system and comes with a minimal set of software installed by default.

Installation is straightforward and configuration is well documented. So in general you should not need this article. But as I was using VMware Fusion I hit a few issues that might be worth mentioning for others. The rest is mostly so I have something to copy & paste in the future.

The first issue you might run into using VMware is that you need to add an empty floppy disk to your vm or else installation will hang.

installing OmniOS in VMware

The second issue took me a bit longer to figure out. There was a bug in VMware Fusion that led to kernel panics. An update of Fusion solved this, so make sure you're using the latest version.

Configure Network

The installer, like the system in general, is very minimal. Network configuration is left for the admin:

ipadm create-if e1000g0
ipadm create-addr -T dhcp e1000g0/v4
echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
cp /etc/nsswitch.conf{,.bak}
cp /etc/nsswitch.{dns,conf}

Create a user

Add yourself a user to work with:

useradd -s /usr/bin/bash -d /export/home/wiedi wiedi
mkdir /export/home/wiedi
chown wiedi:other /export/home/wiedi
passwd wiedi
vim /etc/sudoers

Probably you also want to add your ssh key to ~/.ssh/authorized_keys.

To get a nicer bash prompt and all the tools you need, add this to your ~/.bash_profile:

export PATH=$PATH:/opt/omni/bin/
export PS1='\u@\h \w: '

Update

pkg install pkg:/package/pkg
pkg update -v

Nano

More non-core packages can be found in the "managed services" repository. To install nano use:

sudo pkg set-publisher -g http://pkg.omniti.com/omniti-ms/ ms.omniti.com
sudo pkg install nano

Take half a pull request and keep the author

Thu 25 September 2014

So someone contributes a fantastic bugfix to your project on github and you're happy. But there's a problem: besides the bugfix there are other changes too that you might not want to merge.

So what you do is start cherry-picking the good stuff. On Stackoverflow there is actually a great answer on how to cherry-pick only some of the changes from one commit. Sadly, once you do it as in that answer you become the author of that new commit.

Someone put in the effort to write a fix for your project, so proper attribution is important. The solution is git commit -c <commit> which reuses the log message and the authorship information (including the timestamp) when creating the commit.

So the complete thing looks something like this:

git cherry-pick -n <commit> # get your patch, but don't commit (-n = --no-commit)
git reset                   # unstage the changes from the cherry-picked commit
git add -p                  # make all your choices (add the changes you do want)
git commit -c <commit>      # make the commit and keep the author

Updates

Tue 01 June 2010

I haven't blogged in a while, but that doesn't mean nothing happened.
So here are some quick updates. ;-)

libmaia:
Since the first release of libmaia 2 years ago a lot of people have mailed me patches. This is really great, thanks for that! To make it easier to contribute, the source is now (in addition to subversion) also available on github. So if you want to add a new feature or fix a bug you can simply fork it. Github also offers issue tracking, which was requested a few times, since managing bug reports in blog comments of course isn't ideal.

XChannel:
In February we upgraded our ircd and are now reachable over IPv6! Connectivity is provided by SkyLime ;-)

Freamware:
The Freamware jabber server is now also IPv6 ready. And we've switched our server software to ejabberd.

University:
We had an awesome Python workshop and a nice Linux Install Party. This semester I'm working on a cluster management project which involves Django and "something" with the semantic web and linked data. Great stuff!

Barcamp Bodensee:
This weekend drscream, Maex and I will be in Konstanz for the 2.0 version of the Barcamp Bodensee.

Good night! :D

delicious tags

Sun 30 August 2009

If you already sort your bookmarks online, you can of course mash them up into a tag cloud: *pretty*
wied0r delicious tags
(via drscream)

memosaic

Sat 29 August 2009

lelith found something colorful to portray herself with:
wiedimosaic
So that's me then ;)

ALIX Router

Mon 29 June 2009

My new router has now been running for a good week.
The first thought was to buy a Soekris board, but because of the price the ALIX 2d13 won out.
The Varia-Store offers inexpensive complete packages including a case and everything else you need.

The ALIX has an AMD Geode at 500 MHz, three Ethernet ports and boots from CF.

Because routers quickly get lonely on their own, drscream and Boris bought one too. We then installed Voyage Linux, a Debian derivative, on the CF card.
That was quick and easy.
Voyage is tailored specifically for embedded x86 devices like these.
It has kernel patches to drive the LEDs and sensors, and the system comes sensibly preconfigured.

So the CF card doesn't wear out right away, /var/log etc. are mounted as tmpfs and synced via an init script.
The ISC dhcpd keeps its lease file in /var/lib/dhcp3/ though, so it's a good idea to add that directory to VOYAGE_SYNC_DIRS in /etc/default/voyage-util.

An OpenVPN client and Quagga for OSPF also run on it.

Today Boris also attached a WiFi card via miniPCI.
What else can be built with it remains to be seen. In any case it's a nice device - very uncomplicated.