Showing posts with label planet-debian. Show all posts

Thursday, March 7, 2019

Exclude files from being exported into the zip/tar source archives on github.com

GitHub.com (and probably GitLab too) provides various ways to export the contents of a Git branch or tag as Zip or Tar archives. When you create a release, these tar-/zipballs are automatically created and added to it. I often find archives that contain a lot of files not useful to the end user, like .github directories, Git-related files (.gitignore, .gitattributes) or CI configuration (.travis.yml, .appveyor.yml). Sometimes they also contain directories (e.g. for test files) that upstream keeps in Git but that are not needed in the source distribution. But there is an easy way to keep these files out of the automatically created source archives and keep the latter clean: the export-ignore attribute in the .gitattributes file:

# don't export the github-pages source
/docs export-ignore
# export some other irrelevant directories
/foo export-ignore
# don't export the files necessary for CI
Gemfile export-ignore
.appveyor.yml export-ignore
.travis.yml export-ignore
# ignore Git related files
.gitattributes export-ignore
.gitignore export-ignore
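To check which files will actually end up in an exported archive, one can list the contents of git archive locally, since it honors the same export-ignore attributes. Below is a small self-contained sketch using a throwaway repository with made-up file names; only Git itself is assumed:

```shell
# Build a throwaway repository with one real file, one CI file and a
# .gitattributes marking the CI file and itself as export-ignore:
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo 'content' > README
echo 'ci config' > .travis.yml
printf '.travis.yml export-ignore\n.gitattributes export-ignore\n' > .gitattributes
git add .
git -c user.email=you@example.org -c user.name=You commit -q -m 'initial'
# List what would end up in the exported archive; the ignored files
# should be missing from the listing:
listing=$(git archive HEAD | tar -t)
echo "$listing"
cd /
rm -rf "$tmp"
```

Only README should appear in the listing; .travis.yml and .gitattributes are kept out of the archive.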

Sunday, February 24, 2019

Jekyll and GitHub pages: access the download URL (aka browser_download_url) for an asset of your latest release via site.github

Add the download URL of an asset of your latest release

A frequent question is how to get the download URL for an asset (e.g. a setup file) of the latest release of a project. In my case I provide an executable, whose name includes the version number, together with the source as Zip and Tar archives. Others provide versioned source tarballs or executables, which differ from the Git repository source tarballs.

project-X.Y.Z-setup.exe
project-X.Y.Z-src.tar.gz

Now, to get the download URL(s) for the asset(s) using the GitHub API, one can fetch and process this URL (replacing USER and PROJECT with the GitHub user account and project name accordingly):

https://api.github.com/repos/USER/PROJECT/releases/latest

Note that the asset's download URL is provided by the browser_download_url field of the entries in the assets list:

{
  ...
  "assets": [
    {
      ...
      "browser_download_url": "...",
      ...
    }
  ]
}
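Processing this with jq might look like the following sketch. The payload below is a made-up sample with the same structure; for the real thing you would pipe the output of `curl -s https://api.github.com/repos/USER/PROJECT/releases/latest` into jq instead:

```shell
# Made-up sample of the relevant part of the "latest release" answer:
payload='{"tag_name":"v1.2.3","assets":[{"name":"project-1.2.3-setup.exe","browser_download_url":"https://example.org/project-1.2.3-setup.exe"}]}'
# Extract every asset's download URL (here: just one):
url=$(printf '%s' "$payload" | jq -r '.assets[].browser_download_url')
echo "$url"
```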

The content provided by the API is also available to Jekyll sites hosted on GitHub pages via the site.github namespace. You can easily check all the content of this namespace using this approach (somewhere in your code):

{{ site.github | inspect }}

You'll find that you can even access detailed author and project information. Now, to get the download URL of my asset, I just access the first list entry like this:

{{ site.github.latest_release.assets[0].browser_download_url }}

or this approach (less typing):

{% assign release = site.github.latest_release %}
{{ release.assets[0].browser_download_url }}

I use this to create structured data in JSON-LD for a software application. I can even access the file size, the creation and publication date of my asset. The following shows the JSON-LD snippet I add to one of my GitHub project pages (I replaced fixed content with dots):

{% assign release = site.github.latest_release %}
{
  "@context": "http://schema.org/",
  "@type": "SoftwareApplication",
  "name": "...",
  "softwareVersion": "{{ release.tag_name | strip | remove: 'v' }}",
  "alternateName": [
    "...",
    "{{ release.name }}"
  ],
  "description": "...",
  "applicationCategory": "...",
  "inLanguage": ["..", ".."],
  "operatingSystem": [
    "...",
    "..."
  ],
  "downloadUrl": "{{ release.assets[0].browser_download_url }}",
  "fileSize": "{{ release.assets[0].size | divided_by: 1024 }}",
  "releaseNotes": "{{ release.html_url }}",
  "license": "...",
  "url": "{{ site.github.repository_url }}",
  "datePublished": "{{ release.published_at }}",
  "dateCreated": "{{ release.created_at }}",
  "author":    {%- include json/person.json -%},
  "publisher": {%- include json/publisher.json -%}
}

If there is more than one asset (the GitHub repository source tarball and zipball are not assets), one probably has to use a more flexible approach than accessing the first list entry via assets[0] as shown above. If there are several assets and the asset file name is built the same way for every release but includes the version number (see the file name examples at the beginning of this post), another approach might be used. One can process:

{{ site.github.latest_release.tag_name }}

and create the download URL like this

{{ site.github.releases_url }}/download/latest/foo-{{ site.github.latest_release.tag_name | strip | remove: 'v' }}-setup.exe
{{ site.github.releases_url }}/download/latest/foo-{{ site.github.latest_release.tag_name | strip | remove: 'v' }}-src.tar.gz

Because it is common to tag the version as vX.Y.Z the leading v is removed from the version tag in the examples above.

Using the approach above, one can even loop over site.github.releases and create a changelog/news page automatically for all releases! Maybe you can share your ideas about the suggested approaches on my GIST page.

Monday, April 30, 2018

Re-enabling right click functionality for my Thinkpad touchpad

I have a Lenovo Thinkpad Yoga 11e running Debian Sid. The touchpad has a left and a right click area at the bottom. For some reason, the right click ability recently stopped working. I have not yet found the reason, but I was able to fix it by adding options to /usr/share/X11/xorg.conf.d/70-synaptics.conf; the resulting section is shown below.


Section "InputClass"
        Identifier "Default clickpad buttons"
        MatchDriver "synaptics"
        Option "ClickPad" "true"
        Option "EmulateMidButtonTime" "0"
        Option "SoftButtonAreas" "50% 0 82% 0 0 0 0 0"
        Option "SecondarySoftButtonAreas" "58% 0 0 15% 42% 58% 0 15%"
EndSection
Edit

Nope, it stopped working again, and both bottom areas act as a left click. Since left clicking works, I guess the touchpad is physically OK. So I have no idea what's going on :(

Edit 2

Thanks to Cole Robinson and the link he provided I found the reason and a fix. GNOME 3.28 uses the clickfinger behaviour by default now. By setting the click method from the 'fingers' (clickfinger) method back to 'areas', either using gsettings or gnome-tweaks, the right click ability is back after rebooting the system.
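For reference, the gsettings variant might look like this (assuming the usual GNOME touchpad schema; check with `gsettings list-keys` and adjust if your GNOME version differs):

```shell
# Switch the click method from the clickfinger default back to areas:
gsettings set org.gnome.desktop.peripherals.touchpad click-method areas
# Verify the current value:
gsettings get org.gnome.desktop.peripherals.touchpad click-method
```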

PS: With the new default clickfinger method, the right and middle clicks are emulated using two- and three-finger taps.

Monday, February 5, 2018

Get your namespace_id(s) for salsa.debian.org

In addition to Christoph Berg's script to import packages to salsa.debian.org, I'd like to provide some more ways to determine the namespace_id parameter you'll need.

Say GROUP is the name (or part of the name) of the group and TOKEN is a personal access token you created. Then this will work much faster:

curl --request GET -d "search=GROUP" -s https://salsa.debian.org/api/v4/groups | jq '.[].id'

If the group is not public, you need to add your token:

curl [..] --header "PRIVATE-TOKEN: TOKEN" [..]

The command might return several IDs if the search term matches several groups. In this case, you might want to look at the raw output without piping it to jq, or look at the result of ...

https://salsa.debian.org/api/v4/groups/?search=GROUP

... in a browser. Interestingly, the latter doesn't show any output in the browser if you are not part of the group, but it does provide the necessary information via curl. Bug or feature?

Another way that works is to look at the output of ...

https://salsa.debian.org/api/v4/namespaces

... when you are logged in or ...

curl --request GET --header "PRIVATE-TOKEN: TOKEN" https://salsa.debian.org/api/v4/namespaces

This has the advantage of also getting the namespace_id of your personal repository, e.g. for importing projects hosted on https://people.debian.org/~USER/ or https://anonscm.debian.org/cgit/users/USER/.
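Assuming the namespaces answer follows the usual GitLab API structure (objects with id and kind fields), one could filter out the personal namespace with jq. The payload below is a made-up sample of such an answer:

```shell
# Made-up sample of a /namespaces answer; in reality this would come from
# curl --header "PRIVATE-TOKEN: TOKEN" https://salsa.debian.org/api/v4/namespaces
payload='[{"id":42,"kind":"user","path":"dleidert"},{"id":7,"kind":"group","path":"somegroup"}]'
# Select the namespace of kind "user" (the personal one) and print its id:
id=$(printf '%s' "$payload" | jq '.[] | select(.kind == "user") | .id')
echo "$id"
```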

Wednesday, January 31, 2018

Migrating the debichem group subversion repository to Git - Part 1: svn-all-fast-export basics

With the deprecation of alioth.debian.org, the subversion service hosted there will be shut down too. According to lintian, the estimated date is May 1st, 2018, and there are currently more than 1500 source packages affected. In the debichem group we've used the subversion service since 2006. Our repository contains around 7500 commits by around 20 different alioth user accounts and the packaging history of around 70 to 80 packages, including packaging attempts. I've spent the last days preparing the Git migration, comparing different tools, checking the created repositories and testing ways to automate the process as much as possible. The resulting scripts can currently be found here.

Of course I began as described at the Debian Wiki. But following this guide, using git-svn and converting the tags with the script supplied under the rubric Convert remote tags and branches to local ones gave me really weird results. The tags were pointing to the wrong commit IDs. I thought that git-svn was to blame and reported this as bug #887881. In the following mail exchange, Andreas Kaesorg explained to me that the issue is caused by so-called mixed-revision tags in our repository, as shown in the following example:

$ svn log -v -r7405
------------------------------------------------------------------------
r7405 | dleidert | 2018-01-17 18:14:57 +0100 (Wed, 17 Jan 2018) | 1 line
Changed paths:
   A /tags/shelxle/1.0.888-1 (from /unstable/shelxle:7396)
   R /tags/shelxle/1.0.888-1/debian/changelog (from /unstable/shelxle/debian/changelog:7404)
   R /tags/shelxle/1.0.888-1/debian/control (from /unstable/shelxle/debian/control:7403)
   D /tags/shelxle/1.0.888-1/debian/patches/qt5.patch
   R /tags/shelxle/1.0.888-1/debian/patches/series (from /unstable/shelxle/debian/patches/series:7402)
   R /tags/shelxle/1.0.888-1/debian/rules (from /unstable/shelxle/debian/rules:7403)

[svn-buildpackage] Tagging shelxle 1.0.888-1
------------------------------------------------------------------------

Looking into the git log, the tags determined by git-svn are really not in their right place in the history, even before running the script to convert the branches into real Git tags. So IMHO git-svn is not able to cope with this kind of situation. Because it also cannot handle our branch model, where we use /branch/package/, I began to look for different tools and found svn-all-fast-export, a tool created (by KDE?) to convert even large subversion repositories based on a ruleset. My attempt using this tool was so successful (not to speak of how fast it is) that I want to describe it in more detail. Maybe it will prove useful for others as well, and it won't hurt to give some more information about this poorly documented tool :)

Step 1: Setting up a local subversion mirror

First I suggest setting up a local copy of the subversion repository to migrate that is kept in sync with the remote repository. This can be achieved using the svnsync command. There are several howtos for this, so I won't describe this step in detail here. Please check out this guide. In my case I have such a copy in /srv/svn/debichem.
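For completeness, here is a minimal svnsync sketch. To keep it self-contained it mirrors one freshly created local repository into another; for a real migration, replace the source URL with the remote repository you actually want to mirror:

```shell
set -e
tmp=$(mktemp -d)
svnadmin create "$tmp/source"   # stands in for the remote repository
svnadmin create "$tmp/mirror"
# svnsync needs to set revision properties on the mirror, which requires
# an accepting pre-revprop-change hook:
printf '#!/bin/sh\nexit 0\n' > "$tmp/mirror/hooks/pre-revprop-change"
chmod +x "$tmp/mirror/hooks/pre-revprop-change"
# Initialize the mirror (destination first, then source) and sync it:
svnsync init "file://$tmp/mirror" "file://$tmp/source"
svnsync sync "file://$tmp/mirror"
```

Re-running `svnsync sync` later keeps the mirror up to date.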

Step 2: Creating the identity map

svn-all-fast-export needs at least two files to work. One is the so called identity map. This file contains the mapping between subversion user IDs (login names) and the (Git) committer info, like real name and mail address. The format is the same as used by git-svn:

loginname = author name <mail address>

e.g.

dleidert = Daniel Leidert <dleidert@debian.org>

The list of subversion user IDs can be obtained the same way as described in the Wiki:

svn log SVN_URL | awk -F'|' '/^r[0-9]+/ { print $2 }' | sort -u

Just replace the placeholder SVN_URL with your subversion URL. Here is the complete file for the debichem group.
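To avoid typing the identity map by hand, one could generate a skeleton from that user ID list and fill in the real names and addresses afterwards. In the sketch below the svn log pipeline is simulated with sample login names, and the FIXME entries are placeholders to be replaced manually:

```shell
out=$(mktemp)
# Stand-in for: svn log SVN_URL | awk -F'|' '/^r[0-9]+/ { print $2 }' | sort -u
printf 'dleidert\nsomeuser\n' |
while read -r login; do
    # One identity-map line per login, to be edited by hand afterwards:
    printf '%s = FIXME <FIXME@example.org>\n' "$login"
done > "$out"
cat "$out"
```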

Step 3: Creating the rules

The most important input is the second file, which contains the processing rules. There is really not much documentation out there, so when in doubt one has to read the source file src/ruleparser.cpp. I'll describe what I've found out. If you are impatient, here is my result so far.

The basic rules are:

create repository REPOSITORY
...
end repository

and

match PATTERN
...
end match

The first rule creates a bare Git repository with the name you've chosen (represented above by REPOSITORY). It can have one child element, the repository description, which is put into the repository's description file. AFAIK no other elements are allowed here. So in the case of e.g. ShelXle, the rule might look like this:

create repository shelxle
description packaging of ShelXle, a graphical user interface for SHELXL
end repository

You'll have to create every repository before you can put something into it; otherwise svn-all-fast-export will exit with an error. JFTR: it won't complain if you create a repository but don't put anything into it. You will just end up with an empty Git repository.

Now the second type of rule is the most important one. Based on regular expression match patterns (represented above by PATTERN), one can define actions, including the possibility to limit these actions to repositories, branches and revisions. The patterns are applied in their order of appearance. Thus if a matching pattern is found, other patterns that also match but appear later in the rules file won't apply! So a special rule should always be put above a general rule. The patterns seem to be of type QRegExp and resemble basic Perl regular expressions, including e.g. capturing, backreferences and lookahead capabilities. For a multi-package subversion repository with standard layout (that is /PACKAGE/{trunk,tags,branches}/), clean naming and clean subversion history, the rules could be:

match /([^/]+)/trunk/
  repository \1
  branch master
end match

match /([^/]+)/tags/([^/]+)/
  repository \1
  branch refs/tags/debian/\2
  annotated true
end match

match /([^/]+)/branches/([^/]+)/
  repository \1
  branch \2
end match

The first rule captures the (source) package name from the path and puts it into the backreference \1. It applies to the trunk directory history and will put everything it finds there into the repository named after the directory - here we simply use the backreference \1 for that name - and there into the master branch. Note that svn-all-fast-export will error out if it tries to access a repository which has not been created, so make sure all repositories are created with the create repository rule as shown. The second rule captures the (source) package name from the path too and puts it into the backreference \1. But in backreference \2 it further captures (and applies to) all the tag directories under the /tags/ directory. Usually these have a Debian package version as their name. With the branch statement as shown in this rule, the tags, which are really just branches in subversion, are automatically converted to annotated Git tags (another advantage of svn-all-fast-export over git-svn). Without the annotated statement, the tags created would be lightweight tags. The tag name (here: debian/VERSION) is determined via backreference \2. The third rule is almost the same, except that everything found in the matching path will be pushed into a Git branch named after the top-level directory captured from the subversion path.

Now in an ideal world this might be enough and the actual conversion can be done. The command should only be executed in an empty directory. I'll assume that the identity map is called authors, that the rules file is called rules and that both are in the parent directory. I'll also assume that the local subversion mirror of the packaging repository is at /srv/svn/mymirror. So ...

svn-all-fast-export --stats --identity-map=../authors --rules=../rules /srv/svn/mymirror

... will create one or more bare Git repositories (depending on your rules file) in the current directory. After the command succeeded, you can test the results ...

git -C REPOSITORY/ --bare show-ref
git -C REPOSITORY/ --bare log --all --graph

... and you will find your repository's description (if you added one to the rules file) in REPOSITORY/description:

cat REPOSITORY/description

Please note that not all Debian version strings are well-formed Git reference names and therefore need fixing. There might also be gaps in the Git history log. Or maybe the command didn't even succeed, or complained (without you noticing it), or you ended up with an empty repository although the matching rules applied. I encountered all of these issues and I'll describe the causes and fixes in the next blog article.

But if everything went well (you have no history gaps, the tags are in their right place within the linearized history and the repository looks fine) and you can and want to proceed, you might want to skip to the next step.

In the debichem group we used a different layout: the packaging directories were under /{unstable,experimental,wheezy,lenny,non-free}/PACKAGE/. This means /unstable/PACKAGE/ and /non-free/PACKAGE/ are the trunk directories and the others are branches. The tags are under /tags/PACKAGE/. And packages that are yet to be uploaded are located under /wnpp/PACKAGE/. With this layout, the basic rules are:

# trunk handling
# e.g. /unstable/espresso/
# e.g. /non-free/molden/
match /(?:unstable|non-free)/([^/]+)/
  repository \1
  branch master
end match

# handling wnpp
# e.g. /wnpp/osra/
match /(wnpp)/([^/]+)/
  repository \2
  branch \1
end match

# branch handling
# e.g. /wheezy/espresso/
match /(lenny|wheezy|experimental)/([^/]+)/
  repository \2
  branch \1
end match

# tags handling
# e.g. /tags/espresso/VERSION/
match /tags/([^/]+)/([^/]+)/
  repository \1
  annotated true
  branch refs/tags/debian/\2
  substitute branch s/~/_/
  substitute branch s/:/_/
end match

In the first rule, there is a non-capturing expression (?: ... ), which simply means that the rule applies to both /unstable/ and /non-free/. Thus the backreference \1 refers to the second part of the path, the package directory name. The contents found are pushed to the master branch. In the second rule, the contents of the wnpp directory are not pushed to master, but instead to a branch called wnpp. This was necessary because of overlaps between the /unstable/ and /wnpp/ history, and it already shows that the repository's history makes things complicated. In the third rule, the first backreference \1 determines the branch (note the capturing expression, in contrast to the first rule) and the second backreference \2 the package repository to act on. The last rule is similar, but now \1 determines the package repository and \2 the tag name (the Debian package version) based on the matching path. The example also shows another issue, which I'd like to explain in more detail in the next article: some characters we use in Debian package versions, e.g. the tilde sign and the colon, are not allowed within Git tag names and must therefore be substituted, which is done by the substitute branch EXPRESSION instructions.

Step 4: Cleaning the bare repository

The tool's documentation suggests running ...

git -C REPOSITORY/ repack -a -d -f

... before you upload this bare repository to another location. But Stuart Prescott told me on the debichem list that this might not be enough and might still leave some garbage behind. I'm not experienced enough to judge here, but his suggestion is to clone the repository, either creating a bare clone or cloning and initializing a new bare one. I used the first approach:

git clone --bare REPOSITORY/ REPOSITORY.git
git -C REPOSITORY.git/ repack -a -d -f

Please note that this won't copy the repository's description file. You'll have to copy it manually if you want to keep it. The resulting bare repository can be uploaded (e.g. to git.debian.org as a personal repository):

cp REPOSITORY/description REPOSITORY.git/description 
touch REPOSITORY.git/git-daemon-export-ok
rsync -avz REPOSITORY.git git.debian.org:~/public_git/

Or you clone the repository, add a remote origin and push everything there. It is even possible to use the GitLab API at salsa.debian.org to create a project and push there. I'll save the latter for another post. If you are in a hurry, you'll find a script here.

Sunday, January 14, 2018

Make 'bts' (devscripts) accept TLS connection to mail server with self signed certificate

My mail server runs with a self-signed certificate. So bts, configured like this ...

BTS_SMTP_HOST=mail.wgdd.de:587
BTS_SMTP_AUTH_USERNAME='user'
BTS_SMTP_AUTH_PASSWORD='pass'

...lately refused to send mails with this error:

bts: failed to open SMTP connection to mail.wgdd.de:587
(SSL connect attempt failed error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed)

After searching a bit, I found a way to fix this locally without turning off server certificate verification. The fix belongs in the send_mail() function. When calling the Net::SMTPS->new() constructor, it is possible to add the fingerprint of my self-signed certificate via the SSL_fingerprint option:

if (have_smtps) {
    $smtp = Net::SMTPS->new($host, Port => $port,
        Hello => $smtphelo, doSSL => 'starttls',
        SSL_fingerprint => 'sha1$hex-fingerprint')
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
} else {
    $smtp = Net::SMTP->new($host, Port => $port, Hello => $smtphelo)
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
}
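To obtain the fingerprint in the first place, the usual approach is openssl. The s_client line below shows how one would query a live mail server (commented out, since it needs network access); the runnable part demonstrates the same x509 step on a freshly generated self-signed certificate. The sha1$hex-fingerprint form above suggests using the hex digest from this output:

```shell
# For a live server (needs network access, hence commented out):
#   openssl s_client -connect mail.wgdd.de:587 -starttls smtp </dev/null 2>/dev/null |
#       openssl x509 -noout -fingerprint -sha1
# The same x509 step, demonstrated on a freshly generated self-signed
# certificate so the snippet runs without a mail server:
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj '/CN=mail.example.org' \
    -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
openssl x509 -in "$tmp/cert.pem" -noout -fingerprint -sha1
```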

Pretty happy to be able to use the bts command again.

Tuesday, October 31, 2017

Troubleshoot

I have no idea what these errors mean. $searchengine and manual pages didn't reveal anything.

That's the first one. It occurs during boot time. Might be a bug somewhere, recently introduced in Debian Sid.

kernel: [ ... ] cgroup: cgroup2: unknown option "nsdelegate"

And that's the second one. It simply occurred. No real issue with NFS though.

kernel: [ ... ] nfs: RPC call returned error 22
kernel: [ ... ] NFS: state manager: check lease failed on NFSv4 server XXX.XXX.XXX.XXX with error 5

Any explanation is appreciated.

Sunday, March 29, 2015

Prevent suspend/hibernate if system is remotely backed up via rdiff-backup

I usually use rdiff-backup to back up several of my systems. One is a workstation which goes to sleep after being idle for a while. Now, having a user logged in running rdiff-backup (or rsync, rsnapshot etc. for that matter) won't prevent the system from being put to sleep, and naturally this happens before the backup is complete. So some time ago I was looking for a solution and received a suggestion to use a script in /etc/pm/sleep.d/. I had to modify the script a bit, because the query result was always true. So this is my solution in /etc/pm/sleep.d/01_prevent_sleep_on_backup now:

#!/bin/sh

. "${PM_FUNCTIONS}"

command_exists rdiff-backup || exit $NA

case "$1" in
        hibernate|suspend)
                if ps cax | grep -q rdiff-backup
                then
                        exit 1
                fi
        ;;
esac

exit 0

Currently testing ...

Update

The above works with pm-utils; but it fails with systemd. Seems I have to move and modify the script for my system.

Update 2

It doesn't work. In short: exit 1 doesn't prevent systemd from suspending. I can see that the script itself produces the results I want, so the logic is correct. But I cannot find a way to tell systemd to stop the suspend. Shouldn't it be doing this automatically in a case where a remote user is logged in and runs a command?
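One systemd-native idea (my own assumption, not something from the original setup): instead of hooking into the sleep path at all, the backup job could take a sleep inhibitor lock via systemd-inhibit, which blocks suspend for as long as the wrapped command runs. This would require starting the backup through the wrapper, e.g. on the workstation itself (the paths below are placeholders):

```shell
# Hold a sleep inhibitor lock while the backup runs; the lock is
# released automatically when rdiff-backup exits:
systemd-inhibit --what=sleep --who=backup --why="rdiff-backup is running" \
    rdiff-backup /etc /srv/backup/etc
```

`systemd-inhibit --list` shows the active inhibitor locks while the command runs.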

Update 3

There is also a related bug report.

Friday, February 13, 2015

Motion picture capturing: Debian + motion + Logitech C910 - part II

In my recent attempt to set up a motion detection camera I was disappointed that my camera, which should be able to record with 30 fps in 720p mode, only reached 10 fps using the software motion. Now I've got a bit further. This seems to be an issue with the format used by motion. I've checked the output of v4l2-ctl ...

$ v4l2-ctl -d /dev/video1 --list-formats-ext
[..]
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YUYV'
	Name        : YUV 4:2:2 (YUYV)
[..]
		Size: Discrete 1280x720
			Interval: Discrete 0.100s (10.000 fps)
			Interval: Discrete 0.133s (7.500 fps)
			Interval: Discrete 0.200s (5.000 fps)
[..]

	Index       : 1
	Type        : Video Capture
	Pixel Format: 'MJPG' (compressed)
	Name        : MJPEG
[..]
		Size: Discrete 1280x720
			Interval: Discrete 0.033s (30.000 fps)
			Interval: Discrete 0.042s (24.000 fps)
			Interval: Discrete 0.050s (20.000 fps)
			Interval: Discrete 0.067s (15.000 fps)
			Interval: Discrete 0.100s (10.000 fps)
			Interval: Discrete 0.133s (7.500 fps)
			Interval: Discrete 0.200s (5.000 fps)
[..]

... and motion:

$ motion
[..]
[1] [NTC] [VID] v4l2_set_pix_format: Config palette index 17 (YU12) doesn't work.
[1] [NTC] [VID] v4l2_set_pix_format: Supported palettes:
[1] [NTC] [VID] v4l2_set_pix_format: (0) YUYV (YUV 4:2:2 (YUYV))
[1] [NTC] [VID] v4l2_set_pix_format: 0 - YUV 4:2:2 (YUYV) (compressed : 0) (0x56595559)
[1] [NTC] [VID] v4l2_set_pix_format: (1) MJPG (MJPEG)
[1] [NTC] [VID] v4l2_set_pix_format: 1 - MJPEG (compressed : 1) (0x47504a4d)
[1] [NTC] [VID] v4l2_set_pix_format Selected palette YUYV
[1] [NTC] [VID] v4l2_do_set_pix_format: Testing palette YUYV (1280x720)
[1] [NTC] [VID] v4l2_do_set_pix_format: Using palette YUYV (1280x720) bytesperlines 2560 sizeimage 1843200 colorspace 00000008
[..]

Ok, so both formats, YUYV and MJPG, are supported and recognized, and I can choose either via the v4l2_palette configuration variable, citing motion.conf:

# v4l2_palette allows to choose preferable palette to be use by motion
# to capture from those supported by your videodevice. (default: 17)
# E.g. if your videodevice supports both V4L2_PIX_FMT_SBGGR8 and
# V4L2_PIX_FMT_MJPEG then motion will by default use V4L2_PIX_FMT_MJPEG.
# Setting v4l2_palette to 2 forces motion to use V4L2_PIX_FMT_SBGGR8
# instead.
#
# Values :
# V4L2_PIX_FMT_SN9C10X : 0  'S910'
# V4L2_PIX_FMT_SBGGR16 : 1  'BYR2'
# V4L2_PIX_FMT_SBGGR8  : 2  'BA81'
# V4L2_PIX_FMT_SPCA561 : 3  'S561'
# V4L2_PIX_FMT_SGBRG8  : 4  'GBRG'
# V4L2_PIX_FMT_SGRBG8  : 5  'GRBG'
# V4L2_PIX_FMT_PAC207  : 6  'P207'
# V4L2_PIX_FMT_PJPG    : 7  'PJPG'
# V4L2_PIX_FMT_MJPEG   : 8  'MJPEG'
# V4L2_PIX_FMT_JPEG    : 9  'JPEG'
# V4L2_PIX_FMT_RGB24   : 10 'RGB3'
# V4L2_PIX_FMT_SPCA501 : 11 'S501'
# V4L2_PIX_FMT_SPCA505 : 12 'S505'
# V4L2_PIX_FMT_SPCA508 : 13 'S508'
# V4L2_PIX_FMT_UYVY    : 14 'UYVY'
# V4L2_PIX_FMT_YUYV    : 15 'YUYV'
# V4L2_PIX_FMT_YUV422P : 16 '422P'
# V4L2_PIX_FMT_YUV420  : 17 'YU12'
#
v4l2_palette 17

Now motion uses YUYV as the default mode, as shown by its output. So it seems all I have to do is choose MJPEG in my motion.conf:

v4l2_palette 8

Testing again ...

$ motion
[..]
[1] [NTC] [VID] vid_v4lx_start: Using V4L2
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 25 (ret 0 ) 
Corrupt JPEG data: 5 extraneous bytes before marker 0xd6
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 14 (ret 0 ) 
Corrupt JPEG data: 1 extraneous bytes before marker 0xd5
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 36 (ret 0 ) 
Corrupt JPEG data: 3 extraneous bytes before marker 0xd2
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 58 (ret 0 ) 
Corrupt JPEG data: 1 extraneous bytes before marker 0xd7
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 80 (ret 0 ) 
Corrupt JPEG data: 4 extraneous bytes before marker 0xd7
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
[1] [ERR] [ALL] motion_init: Error capturing first image
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 16 items
Corrupt JPEG data: 4 extraneous bytes before marker 0xd1
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 11 extraneous bytes before marker 0xd1
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 3 extraneous bytes before marker 0xd4
[1] [CRT] [VID] mjpegtoyuv420p: Corrupt image ... continue
Corrupt JPEG data: 7 extraneous bytes before marker 0xd1
[..]

... and another issue turns up :( The output above goes on and on and there is no video capturing. According to $searchengine, the above happens to a lot of people. I found one often-suggested fix: pre-loading v4l2convert.so from libv4l-0:

$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l2convert.so motion

But the problem persists and I'm out of ideas :( So at the moment it looks like I cannot use the MJPEG format and won't get 30 fps at 1280x720 pixels. While writing this, I then discovered a solution by good old trial and error: leaving the v4l2_palette variable at its default value 17 (YU12) and pre-loading v4l2convert.so makes motion use YU12, and the framerate at least rises to 24 fps:

$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l2convert.so motion
[..]
[1] [NTC] [VID] v4l2_do_set_pix_format: Testing palette YU12 (1280x720)
[1] [NTC] [VID] v4l2_do_set_pix_format: Using palette YU12 (1280x720) bytesperlines 1280 sizeimage 1382400 colorspace 00000008
[..]
[1] [NTC] [EVT] event_new_video FPS 24
[..]

Finally! :) The results are nice. It might even be a good idea to limit the framerate a bit, e.g. to 20. So here is a tested configuration for the Logitech C910 running at a resolution of 1280x720 pixels:

v4l2_palette 17
width 1280
height 720
framerate 20
minimum_frame_time 0
pre_capture 10 # 0.5 seconds pre-recording
post_capture 50 # 2.5 seconds post-recording
auto_brightness on
ffmpeg_variable_bitrate 2 # best quality

All this made me curious which framerate is possible at a resolution of 1920x1080 pixels and how the results look. Although I get 24 fps here too, the resulting movie suffers from jumps every few frames. So here I got pretty good results with a more conservative setting. When increasing the framerate (tested up to 15 fps with good results), pre_capture needed to be decreased accordingly, to values between 1 and 3, to minimize jumps:

v4l2_palette 17
width 1920
height 1080
framerate 12
minimum_frame_time 0
pre_capture 6 # 0.5 seconds pre-recording
post_capture 30 # 2.5 seconds post-recording
auto_brightness on
ffmpeg_variable_bitrate 2 # best quality

Both configurations lead to satisfying results. Of course the latter will easily fill your hard drive :)

TODO

I guess the results can be optimized further by playing around with ffmpeg_bps and ffmpeg_variable_bitrate. Maybe then it is possible to record without jumps at higher framerates too(?). I also didn't test the various norm settings (PAL, NTSC, etc.).

Thursday, February 12, 2015

Motion picture capturing: Debian + motion + Logitech C910

Woodpecker near the window

Winter time is a good time for some nature observation. Yesterday I had a woodpecker (picture) in front of my kitchen window. During recent weeks there were long-tailed tits, a wren and other rarely seen birds. So I thought it might be a good idea to capture some of these events :) I still own a Logitech C910 USB camera which allows HD video capturing up to 1080p. So I checked the web for software that would start video capturing in case of motion detection and found motion, which is already available to Debian users. So I gave it a try. I tested all available resolutions of the camera together with the capturing results. I found that the resulting framerate of both the live stream and the captured video depends heavily on the resolution and a few configuration options. Below is a summary of my tests and the results I've achieved so far.

Logitech C910 HD camera

Just a bit of data regarding the camera. AFAIK it allows for smooth video streams up to 720p.

$ dmesg
[..]
usb 7-3: new high-speed USB device number 5 using ehci-pci
usb 7-3: New USB device found, idVendor=046d, idProduct=0821
usb 7-3: New USB device strings: Mfr=0, Product=0, SerialNumber=1
usb 7-3: SerialNumber: 91CF80A0
usb 7-3: current rate 0 is different from the runtime rate 16000
usb 7-3: current rate 0 is different from the runtime rate 32000
uvcvideo: Found UVC 1.00 device  (046d:0821)
input: UVC Camera (046d:0821) as /devices/pci0000:00/0000:00:1a.7/usb7/7-3/7-3:1.2/input/input17

$ lsusb
[..]
Bus 007 Device 005: ID 046d:0821 Logitech, Inc. HD Webcam C910
[..]

$ v4l2-ctl -V -d /dev/video1 
Format Video Capture:
Width/Height  : 1280/720
Pixel Format  : 'YUYV'
Field         : None
Bytes per Line: 2560
Size Image    : 1843200
Colorspace    : SRGB

Also the uvcvideo kernel module is loaded and the user in question is part of the video group.

Installation and start

Installation of the software is as easy as always:

apt-get install motion

It is possible to run the software as a service. But for testing, I copied /etc/motion/motion.conf to ~/.motion/motion.conf, fixed its permissions (you cannot copy the file as a regular user - it's not world-readable) and disabled the daemon mode.

daemon off
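The copy-and-fix steps can be sketched as a small helper (the function is my own; on the real system the copy itself has to be done with root privileges, since the packaged config is not world-readable):

```shell
# Copy a motion config into a per-user directory, make it readable and
# switch off daemon mode in the copy.
setup_motion_conf() {   # $1 = packaged config, $2 = per-user config dir
    mkdir -p "$2"
    cp "$1" "$2/motion.conf"
    chmod 644 "$2/motion.conf"
    sed -i 's/^daemon on/daemon off/' "$2/motion.conf"
}
# on the real system (copy via sudo, then chown to your user):
#   setup_motion_conf /etc/motion/motion.conf ~/.motion
```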

Note that in my case the correct device is /dev/video1, because the laptop has a built-in camera, which is /dev/video0. Also the target directory should be writable by my user:

videodevice /dev/video1
target_dir ~/Videos

Then running motion from the command line ...

$ motion
[..]
[0] [NTC] [ALL] motion_startup: Motion 3.2.12+git20140228 Started
[..]
[1] [NTC] [ALL] motion_init: Thread 1 started , motion detection Enabled
[0] [NTC] [ALL] main: Thread 1 is device: /dev/video1 input -1
[1] [NTC] [VID] v4l2_get_capability: 
------------------------
cap.driver: "uvcvideo"
cap.card: "UVC Camera (046d:0821)"
cap.bus_info: "usb-0000:00:1a.7-1"
cap.capabilities=0x84000001
------------------------
[1] [NTC] [VID] v4l2_get_capability: - VIDEO_CAPTURE
[1] [NTC] [VID] v4l2_get_capability: - STREAMING
[1] [NTC] [VID] v4l2_select_input: name = "Camera 1", type 0x00000002, status 00000000
[1] [NTC] [VID] v4l2_select_input: - CAMERA
[..]
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items

... will begin to capture motion detection events and also output a live stream. CTRL+C will stop it again.

Live stream

The live stream is available by pointing the browser to localhost:8081. However, the stream seems to run at only 1 fps (frame per second), and indeed it does. The stream quality improves with this configuration:

stream_motion on
stream_maxrate 100

The first option causes the stream to run at only one fps as long as there is no motion detection event; otherwise the framerate increases to its maximum value, which is either the one given by stream_maxrate or the camera limit. The quality of the stream picture can be improved a bit further by increasing the stream_quality value. Because I need neither the stream nor the control feed, I disabled both:

stream_port 0
webcontrol_port 0

Picture capturing

By default there is video and picture capturing if a motion event is detected. I'm not interested in these pictures, so I turned them off:

output_pictures off

FYI: if you want good picture quality, the value of quality should probably be increased.

Video capturing

This is the really interesting part :) Of course, if I want to "shoot" some birds (with the camera), a small image of, say, 320x240 pixels is not enough. The camera allows for a capture resolution of up to 1920x1080 pixels (1080p) and is advertised for smooth video streams up to 720p (1280x720 pixels). So I tried the following resolutions: 320x240, 640x480, 800x600, 640x360 (360p), 1280x720 (720p) and 1920x1080 (1080p). These are easily configured via the width and height variables. For example, the following configures motion for 1280x720 pixels (720p):

width 1280
height 720

The result was really disappointing. No event is captured at more than 20 fps. At higher resolutions the framerate drops even further, and at the highest resolution of 1920x1080 pixels it is only two(!) fps. Also, every created video runs much too fast, and even faster when the framerate variable is increased. Of course its default value of 2 (fps) is not enough for smooth videos. AFAIK the C910 can run at 30 fps at 1280x720 pixels, so increasing the value of framerate, the maximum framerate recorded, is a must. (If you want to test yourself, check the log output for the value following event_new_video FPS.)

The solution to videos running too fast, however, is to increase the pre_capture value, the number of pre-captured (buffered) pictures from before motion was detected. Even small values like 3 to 5 result in a distinct improvement, though increasing the value further didn't have any effect. So the values below should get almost the most out of the camera and result in videos at normal speed.

framerate 100
pre_capture 5

Videos at 1280x720 pixels are still captured at only 10 fps and I don't know why. Running guvcview, the same camera allows for 30 fps at this resolution (even 60 fps at lower resolutions). However, even if the framerate could be higher, the resulting video runs smoothly. Still the quality is just moderate (or, to be honest, still disappointing): it looks "pixelated" and only static pictures are sharp. It took me a while to fix this too, because I first thought the reason was the camera or missing hardware support. It is not :) The reason is that ffmpeg is configured to produce a moderate(?)-quality video. The relevant variables are ffmpeg_bps and ffmpeg_variable_bitrate. I got the best results by just changing the latter:

ffmpeg_variable_bitrate 2

Finally the resulting video quality is promising. I'll start with this configuration when setting up an observation camera for the bird feeding ground.

There is one more tweak for me. I got even better results by enabling the auto_brightness feature.

auto_brightness on

Complete configuration

So the complete configuration looks like this (only the options changed from the original config file are shown):

daemon off
videodevice /dev/video1
width 1280
height 720
framerate 100
auto_brightness on
ffmpeg_variable_bitrate 2
target_dir /home/user/Videos
stream_port 0 #8081
stream_motion on
stream_maxrate 100
webcontrol_port 0 #8080

Links

Continue with part II ...

Setting up a network buildd with pbuilder ... continued

Last year I described my setup of a local network buildd using pbuilder, ccache, inoticoming and NFS. One then-still-open goal was to support different Debian releases. This is especially necessary for backports of e.g. bluefish. The rules for contributing backports require the uploaded package to include all changelog entries since the last version on debian-backports, or since stable if it's the first backported version. Therefore one needs to know the last version in e.g. wheezy-backports. Because I'm not typing the build command myself (the source package only gets uploaded and inoticoming starts the build process), I was looking for a way to automatically retrieve that version and add the relevant -vX.Y-Z switch to dpkg-buildpackage.

The solution I found requires aptitude and a sources.list entry for the relevant release. If you are only interested in the solution, just jump to the end :)

I'm going to add the version switch to the DEBBUILDOPTS variable of pbuilder. In my setup I have a common (shared) snippet called /etc/pbuilderrc.inc and one configuration file per release and architecture, say /etc/pbuilderrc.amd64.stable. Now the first already contains ...

DEBBUILDOPTS="-us -uc -j2"

... and DEBBUILDOPTS can be extended in the latter:

DEBBUILDOPTS+="..."

Because the config file is parsed pretty early in the process, the package name has not yet been assigned to any variable. The last argument to pbuilder is the .dsc file. So I use the last argument and parse the file to retrieve the source package name.

cat ${@: -1} | grep -e ^Source | awk -F\  '{ print $2 }'

The solution above works because pbuilder is a Bash script; otherwise it may need some tweaking. I use the source package name because it is unique and there is just one :) Now with this name I check for all versions in wheezy* and stable* and sort them. Aptitude sorts from low to high, so the last line should contain the highest version. This covers both cases - whether a backport already exists or not:

aptitude versions -F '%p' --show-package-names=never --group-by=none --sort=version \
   "?narrow(?source-package(^PACKAGE\$), ?or(?archive(^wheezy.*), ?archive(^stable.*)))" |\
   tail -n 1 | sed -e 's#~bpo.*$##g'

The sed part is necessary because otherwise dpkg-genchanges will add a superfluous changelog entry (the last one of the previous upload). To make things easier, I assign the name and version to variables. So this is the complete solution:

[..]
MYPACKAGE="`cat ${@: -1} | grep -e ^Source | awk -F\  '{ print $2 }'`"
MYBPOVERS="`aptitude versions -F '%p' --show-package-names=never --group-by=none --sort=version "?narrow(?source-package(^$MYPACKAGE\$), ?or(?archive(^wheezy.*), ?archive(^stable.*)))" | tail -n 1 | sed -e 's#~bpo.*$##g'`"

log "I: Package is $MYPACKAGE and last stable/bpo version is $MYBPOVERS"

DEBBUILDOPTS+=" -v$MYBPOVERS"
[..]
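Both text-processing steps can be sanity-checked with hypothetical input (note that sort -V only approximates aptitude's Debian version sort for this illustration):

```shell
# 1) source package name from a (hypothetical) .dsc header
printf 'Format: 3.0 (quilt)\nSource: bluefish\nBinary: bluefish\n' \
    | grep -e '^Source' | awk '{ print $2 }'
# prints: bluefish

# 2) highest version, with the ~bpoNN suffix stripped
printf '2.2.3-4\n2.2.6-1~bpo70+1\n' | sort -V | tail -n 1 | sed -e 's#~bpo.*$##g'
# prints: 2.2.6-1
```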

Examples

I've recently built a new bluefish backport. The last backport version is 2.2.6-1~bpo70+1. There is also the stable version 2.2.3-4. So the version I need is 2.2.6-1 (2.2.6-1~bpo70+1 < 2.2.6-1!!). Checking the log, it works:

I: Package is bluefish and last stable/bpo version is 2.2.6-1

A different example is rsync. I recently rebuilt it locally for a stable system (I wanted to make use of the --chown switch). There is no backport yet, so the version I (would) need is 3.0.9-4. Checking the logs again, this works too:

I: Package is rsync and last stable/bpo version is 3.0.9-4

Feedback appreciated ...

Links

Wednesday, February 11, 2015

Blogger RSS feed and category URLs with combined labels/tags

Run a blog on blogger.com? Maybe made it bilingual? Maybe blog on different topics? Wondering what the URL for an RSS feed for e.g. two labels looks like? Asking how to see all articles matching two tags (labels)? Finding a keyword under one or more labels? Many things are possible. I'll show a few examples below. Maybe this is even interesting for the planet Debian folks. I happen to blog mostly in English about Debian topics. But sometimes I also want to post something in German only (e.g. about German tax software). It is discouraged to put the latter on planet-debian. Instead it can be published in the language-specific planet feed. So instead of adding new tags, one could easily combine two labels: the one for the language of the feed and the one for Debian-related posts (e.g. debian+english or debian+german). Therefore this post goes to the Debian planet.

Search for combined labels/tags

Say I want to view all postings related to the topics FOO and BAR. Then it is:

http://domain.tld/search/label/FOO+BAR OR
http://domain.tld/search/?q=label:FOO+label:BAR

Be aware that labels are case sensitive and that more labels can be added. The examples below will show all postings related to the topics debian and n54l and xbmc:

http://www.wgdd.de/search/label/debian+n54l+xbmc
http://www.wgdd.de/search/?q=label:debian+label:n54l+label:xbmc

It is also possible to search for all posts related to the topics FOO or BAR:

http://domain.tld/search/?q=label:FOO|label:BAR

Say for example, you want to see all postings related to the topics logitech or toshiba, then it is:

http://www.wgdd.de/search/?q=label:logitech|label:toshiba

Feed URLs

To get back to the first example lets say, the feed shall contain all posts related to the topics FOO and BAR. Then it is:

http://domain.tld/feeds/posts/default/-/FOO/BAR/ OR
http://domain.tld/feeds/posts/default?q=label:FOO+label:BAR

Respectively, to get a feed of all posts related to either of those topics, use:

http://domain.tld/feeds/posts/default/?q=label:FOO|label:BAR

Feeds for the example topics shown above would then be:

http://www.wgdd.de/feeds/posts/default/-/debian/n54l/xbmc/
http://www.wgdd.de/feeds/posts/default?q=label:debian+label:n54l+label:xbmc
http://www.wgdd.de/feeds/posts/default?q=label:logitech|label:toshiba

Coming back to planet Debian, below is a solution for a multi-lingual planet contribution (if both planets existed):

http://www.wgdd.de/feeds/posts/default?q=label:planet-debian+label:english
http://www.wgdd.de/feeds/posts/default?q=label:planet-debian+label:german
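The AND pattern above can be captured in a tiny helper; this is a sketch of my own (the function name and the shell approach are not part of Blogger):

```shell
# Build a Blogger feed URL for an AND-combination of labels.
feed_url() {
    local domain=$1; shift
    local q=""
    for label in "$@"; do
        q="${q:+$q+}label:$label"   # join the labels with '+'
    done
    echo "http://$domain/feeds/posts/default?q=$q"
}

feed_url www.wgdd.de planet-debian english
# prints: http://www.wgdd.de/feeds/posts/default?q=label:planet-debian+label:english
```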

Advanced ...

Much more is possible. I'll just show two more examples. AND and OR can be combined ...

http://www.wgdd.de/feeds/posts/default?q=label:debian+(label:logitech|label:toshiba)

... and a keyword search can be added too:

http://www.wgdd.de/feeds/posts/default?q=stick+(label:debian+(label:logitech|label:toshiba))

Links

Sunday, February 8, 2015

Correct keyboard layout (QWERTZ) with Logitech K400 attached to an Android HDMI stick

I've bought an HDMI stick with Android 4.4.2 to connect a TV to the local network (streaming, DLNA) and the internet (video portals etc.). The stick comes with no input device whatsoever. So I decided on a wireless Logitech K400r keyboard. Unfortunately the keyboard layout used by Android seems to be American (QWERTY), although the language and location settings are set to German. So I neither get umlauts nor special characters right, nor do the media keys of the K400 work. But there is an easy way to make all this work. I've recently seen some people suggesting to install an app called External Keyboard Helper to fix things, which comes in a demo and a pro version, the latter paid. Fortunately I didn't need either of these for the Logitech keyboard, plus I got all media keys of this keyboard to work in just three steps without any cost :)

Step 1

First I went to the Google Play Store and installed the Logitech Keyboard Plus app by Logitech Europe S.A. (See? No external stuff and without costs. Ignore anything that's written in its description about Bluetooth.)

App Logitech Keyboard Plus shown at Google Play Store

Step 2

Then I opened the settings section and went to Settings > Language and Input (Einstellungen > Sprache und Eingabe). Under KEYBOARD & INPUT METHOD (TASTATUR & EINGABEMETHODEN) I activated the entry called Logitech keyboard (Logitech Tastatur) as shown below.

Activate the entry "Logitech keyboard" under Settings > Language and Input

Step 3

Now a click on the field above that is called Default (Standard) and (here) defaults to Deutsch - Android-Tastatur (AOSP) ...

Click on the field "Default"

... opened a popup called Choose input method and there I've chosen Logitech keyboard (Logitech Tastatur) as shown below and confirmed it.

Activate the Logitech keyboard on the popup called "Choose input method"

The default now is the Logitech keyboard:

The default keyboard now is the Logitech keyboard

Finally ...

... it's done. The keyboard now behaves as expected and the media/function keys work too. It is, however, not possible to adjust or configure anything, because that presumes a confirmed Bluetooth pairing between the keyboard and the Android device, and the K400 is not a Bluetooth device. It would probably be a good idea for Logitech to make this app usable for all available Logitech keyboards, even USB and Unifying(TM) ones. Update: As of now it is possible to cancel the Bluetooth pairing attempt without leaving the application. So this app can in fact be used for USB- and Unifying(TM)-based Logitech keyboards.

Friday, December 12, 2014

Issues with Server4You vServer running Debian Stable (Wheezy)

I recently acquired a vServer hosted by Server4You and decided to install a Debian Wheezy image. Usually I boot any device in backup mode and first install a fresh Debian copy using debootstrap over the provided image, to have a clean system. In this case I did not, and I came across a few glitches I want to talk about. So hopefully, if you are running the same system image, this saves you some time figuring out why the h*ll some things don't work as expected :)

Cron jobs not running

I installed unattended-upgrades and adjusted all configuration files to enable unattended upgrades. But I never received any mail about an update, although, looking at the system, I saw updates waiting. I checked with

# run-parts --list /etc/cron.daily

and apt was not listed, although /etc/cron.daily/apt was there. After spending some time figuring out what was going on, I found the rather simple cause: several scripts were missing the executable bit and thus did not run. So it seems that, for whatever reason, the image authors have tampered with file permissions - and of course not by using dpkg-statoverride :( It was easy to fix the file permissions for everything below /etc/cron*, but it still leaves a very bad feeling that there are more files that have been tampered with! I'm not speaking about customizations - those are easy to find using debsums. I'm speaking about file permissions and ownership.
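Finding the affected scripts can be scripted; this is a sketch (the helper name is mine, and on the real system it would be pointed at /etc/cron.daily and friends):

```shell
# List regular files in a directory that lack any executable bit -
# run-parts silently skips exactly these.
list_nonexec() {
    find "$1" -maxdepth 1 -type f ! -perm /111
}
# e.g. list_nonexec /etc/cron.daily, then chmod 755 each listed script
```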

Now there seems to be no easy way to check for changed permissions or ownership. The only solution I found is to get a list of all packages installed on the system, install them into a chroot environment and collect all permission and ownership information from this fresh system. Then compare the file permissions/ownership of the installed system against this list. Not fun.
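The comparison itself is at least easy to script; a sketch (the helper name and paths are assumptions, and a real run would also want to exclude /proc, /sys and the like):

```shell
# Record mode, owner, group and path for every file under a tree,
# sorted by path, so two trees can be diffed against each other.
perm_list() {
    find "$1" -xdev -printf '%m %u %g /%P\n' | sort -k4
}
# perm_list /path/to/fresh-chroot > fresh.list
# perm_list /                     > installed.list
# diff fresh.list installed.list
```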

init from testing / upstart on hold

Today I discovered that apt-get wanted to install the init package. Of course I was curious why unattended-upgrades hadn't already done so. It turns out init is only in testing/unstable and essential there. I purged it, but apt-get kept bugging me to install this package. I really began to wonder what was going on here, because this is a plain stable system:

  • no sources listed for backports, volatile, multimedia etc.
  • sources listed for testing and unstable
  • only packages from stable/stable-updates installed
  • sets APT::Default-Release "stable";

First I checked with aptitude:

# aptitude why init
Unable to find a reason to install init.

Ok, so why:

# apt-get dist-upgrade -u
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
  init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]? 

JFTR: I see a stable system bugging me to install systemd for no obvious reason. The issue might be similar! I'm still investigating. (not reproducible anymore)

Now I tried to debug this:

# apt-get -o  Debug::pkgProblemResolver="true" dist-upgrade -u
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Calculating upgrade... Starting
Starting 2
Investigating (0) upstart [ amd64 ] < 1.6.1-1 | 1.11-5 > ( admin )
Broken upstart:amd64 Conflicts on sysvinit [ amd64 ] < none -> 2.88dsf-41+deb7u1 | 2.88dsf-58 > ( admin )
  Conflicts//Breaks against version 2.88dsf-58 for sysvinit but that is not InstVer, ignoring
  Considering sysvinit:amd64 5102 as a solution to upstart:amd64 10102
  Added sysvinit:amd64 to the remove list
  Fixing upstart:amd64 via keep of sysvinit:amd64
Done
Done
The following NEW packages will be installed:
  init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]? 

Eh, upstart?

# apt-cache policy upstart
upstart:
  Installed: 1.6.1-1
  Candidate: 1.6.1-1
  Version table:
     1.11-5 0
        500 http://ftp.de.debian.org/debian/ testing/main amd64 Packages
        500 http://ftp.de.debian.org/debian/ sid/main amd64 Packages
 *** 1.6.1-1 0
        990 http://ftp.de.debian.org/debian/ stable/main amd64 Packages
        100 /var/lib/dpkg/status
# dpkg -l upstart
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                          Version             Architecture        Description
+++-=============================-===================-===================-===============================================================
hi  upstart                       1.6.1-1             amd64               event-based init daemon

Ok, at least one package is on hold. This is another questionable customization, but in this case easy to fix. But I still don't understand apt-get's behaviour and how it differs from aptitude's. Can someone please enlighten me?
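Held packages can also be spotted and released via dpkg's selection list; a sketch (the printf sample stands in for real `dpkg --get-selections` output, and releasing the hold needs root):

```shell
# Filter held packages out of dpkg's selection list. On the real system:
#   dpkg --get-selections | awk '$2 == "hold" { print $1 }'
# and release a hold with:
#   echo "upstart install" | dpkg --set-selections   (as root)
printf 'rsync\tinstall\nupstart\thold\n' | awk '$2 == "hold" { print $1 }'
# prints: upstart
```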

Customized files

This isn't really an issue, but just for completeness: several files have been customized. debsums easily shows which ones:

# debsums -ac
I don't have the original list anymore - please check yourself