tag:blogger.com,1999:blog-41314193247266722812024-03-05T18:55:22.727+01:00[erfahrungen, meinungen, halluzinationen]ein kleines, privates blogDaniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.comBlogger14125tag:blogger.com,1999:blog-4131419324726672281.post-69931836040426458982019-03-07T13:52:00.001+01:002019-03-07T13:52:48.180+01:00Exclude files from being exported into the zip/tar source archives on github.com<div lang="en">
<p><a href="https://github.com">GitHub.com</a> (and probably <a href="https://gitlab.com">GitLab</a> too) provides various ways to export the Git branch contents or tags and releases as Zip- or Tar-archives. When creating a release, these tar-/zipballs are automatically created and added to the release. I often find archives which contain a lot of files not useful to the end user, like <tt>.github</tt> directories, Git (<tt>.gitignore</tt>, <tt>.gitattributes</tt>) or CI-related files (<tt>.travis.yml</tt>, <tt>.appveyor.yml</tt>). Sometimes they also contain directories (e.g. for test files) which upstream keeps in Git, but which are not needed in the source distribution. But there is an easy way to keep these files out of the automatically created source archives and keep the latter clean, by using the <code>export-ignore</code> attribute in the <tt>.gitattributes</tt> file:</p>
<pre>
# don't export the github-pages source
/docs export-ignore
# export some other irrelevant directories
/foo export-ignore
# don't export the files necessary for CI
Gemfile export-ignore
.appveyor.yml export-ignore
.travis.yml export-ignore
# ignore Git related files
.gitattributes export-ignore
.gitignore export-ignore
</pre>
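<p>Whether the attributes work as intended can be checked locally, since <code>git archive</code> honours <code>export-ignore</code> the same way the GitHub archives do. A minimal sketch using a throwaway repository (all file and directory names below are made up for illustration):</p>

```shell
# Create a throwaway repository to demonstrate export-ignore
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email "you@example.com" && git config user.name "Example"
mkdir docs && echo "pages" > docs/index.md
echo "real content" > README.md
printf '/docs export-ignore\n.gitattributes export-ignore\n' > .gitattributes
git add -A && git commit -qm "initial"
# The generated archive contains README.md, but neither docs/ nor .gitattributes
git archive --format=tar HEAD | tar -tf -
```

The same holds for <code>git archive --format=zip</code>, so what you see here is what the downloadable zipball will contain.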
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com0tag:blogger.com,1999:blog-4131419324726672281.post-35398265329504713802019-02-24T17:00:00.002+01:002019-02-24T17:23:54.988+01:00Jekyll and GitHub pages: access the download URL (aka browser_download_url) for an asset of your latest release via site.github<div lang="en">
<h1>Add the download URL of an asset of your latest release</h1>
<p>A question that comes up often is how to get the download URL for an asset (e.g. a setup file) of the latest release of a project. In my case I provide an executable, which includes the version number in its name, together with the source as ZIP- and Tarball-archive. Others provide versioned source tarballs or executables, which differ from the Git repository source tarballs.</p>
<pre>
project-X.Y.Z-setup.exe
project-X.Y.Z-src.tar.gz
</pre>
<p>Now to get the download URL(s) for the asset(s) using the <a href="https://developer.github.com/v3/">GitHub API</a>, one can query and process this URL (replacing <var>USER</var> and <var>PROJECT</var> with the GitHub user account and project name accordingly):</p>
<pre><a href="https://developer.github.com/v3/repos/releases/#get-the-latest-release">https://api.github.com/repos/USER/PROJECT/releases/latest</a></pre>
<p>Note that the asset's download URL is provided by the <code>browser_download_url</code> object in the <code>assets</code> objects list:</p>
<pre>
{
...
"assets": [
{
...
"browser_download_url": "...",
...
}
]
}
</pre>
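<p>On the command line, <code>jq</code> can pull the URL(s) out of that response. The snippet below runs on a canned, shortened response of the shape shown above; a real reply would come from the <samp>releases/latest</samp> endpoint, and all values are made-up examples:</p>

```shell
# Canned example response; a real one comes from
#   https://api.github.com/repos/USER/PROJECT/releases/latest
response='{
  "tag_name": "v1.2.3",
  "assets": [
    { "name": "project-1.2.3-setup.exe",
      "browser_download_url": "https://github.com/USER/PROJECT/releases/download/v1.2.3/project-1.2.3-setup.exe" }
  ]
}'
# -r prints the raw string without JSON quoting
echo "$response" | jq -r '.assets[].browser_download_url'
```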
<p>The content provided by the API is <strong>also</strong> available to <a href="https://jekyllrb.com/">Jekyll sites</a> hosted on <a href="https://pages.github.com/">GitHub pages</a> via the <a href="https://help.github.com/en/articles/repository-metadata-on-github-pages"><code>site.github</code></a> namespace. You can easily check all the content of this namespace using this approach (somewhere in your code):</p>
<pre>{{ site.github | inspect }}</pre>
<p>You'll find that you can even access detailed author and project information. Now to get the download URL of my asset, I just access the first list entry using this:</p>
<pre>{{ site.github.latest_release.assets[0].browser_download_url }}</pre>
<p>or this approach (less typing):</p>
<pre>
{% assign release = site.github.latest_release %}
{{ release.assets[0].browser_download_url }}
</pre>
<p>I use this to create structured data in <a href="https://json-ld.org">JSON-LD</a> for a software application. I can even access the file size, the creation and publication date of my asset. The following shows the <a href="https://raw.githubusercontent.com/dleidert/bde-lock/master/docs/_includes/json/softwareapplication.json">JSON-LD snippet</a> I add to one of my GitHub project pages (I replaced fixed content with dots):</p>
<pre>
{% assign release = site.github.latest_release %}
{
"@context": "http://schema.org/",
"@type": "SoftwareApplication",
"name": "...",
"softwareVersion": "{{ release.tag_name | strip | remove: 'v' }}",
"alternateName": [
"...",
"{{ release.name }}"
],
"description": "...",
"applicationCategory": "...",
"inLanguage": ["..", ".."],
"operatingSystem": [
"...",
"..."
],
"downloadUrl": "{{ release.assets[0].browser_download_url }}",
"fileSize": "{{ release.assets[0].size | divided_by: 1024 }}",
"releaseNotes": "{{ release.html_url }}",
"license": "...",
"url": "{{ site.github.repository_url }}",
"datePublished": "{{ release.published_at }}",
"dateCreated": "{{ release.created_at }}",
"author": {%- include json/person.json -%},
"publisher": {%- include json/publisher.json -%}
}
</pre>
<p>If there is more than one asset (the GitHub repository source tarball and zipball are not assets), one probably needs a more flexible approach than accessing the first list entry via <code>assets[0]</code> as shown above. If there are several assets and the asset file name is constructed the same way for every release, but includes the version number (see the file name examples from the beginning of this post), there is another approach that might be used. One can process:</p>
<pre>
{{ site.github.latest_release.tag_name }}
</pre>
<p>and create the download URL like this</p>
<pre>
{{ site.github.releases_url }}/download/{{ site.github.latest_release.tag_name }}/foo-{{ site.github.latest_release.tag_name | strip | remove: 'v' }}-setup.exe
{{ site.github.releases_url }}/download/{{ site.github.latest_release.tag_name }}/foo-{{ site.github.latest_release.tag_name | strip | remove: 'v' }}-src.tar.gz
</pre>
<p>Because it is common to tag the version as <code>vX.Y.Z</code> the leading <code>v</code> is removed from the version tag in the examples above.</p>
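<p>If the file names are not predictable, one can instead loop over the assets and pick the one whose name matches a known suffix. This is only a sketch using standard Liquid filters; the <samp>-setup.exe</samp> suffix is taken from the example file names at the beginning of this post:</p>

```liquid
{% assign release = site.github.latest_release %}
{% for asset in release.assets %}
  {% if asset.name contains "-setup.exe" %}
    {{ asset.browser_download_url }}
  {% endif %}
{% endfor %}
```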
<p>Using the approach above one can even loop over <code>site.github.releases</code> and create a changelog/news page automatically for all releases! Maybe you can share <strong>your ideas</strong> about the suggested approaches on <a href="https://gist.github.com/dleidert/99a8e6ee3a879a7ed1f160c5dd07c13d">my Gist page</a>.</p>
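<p>A minimal sketch of such a changelog page, relying only on fields that the <code>site.github</code> metadata provides for each release:</p>

```liquid
<ul>
{% for release in site.github.releases %}
  <li>
    <a href="{{ release.html_url }}">{{ release.name }}</a>
    ({{ release.published_at | date: "%Y-%m-%d" }})
  </li>
{% endfor %}
</ul>
```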
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com0tag:blogger.com,1999:blog-4131419324726672281.post-59060156340947882932018-04-30T19:35:00.001+02:002018-05-01T11:17:47.314+02:00Re-enabling right click functionality for my Thinkpad touchpad<div lang="en">
<p>I have a Lenovo Thinkpad Yoga 11e running Debian Sid. The touchpad has a left and a right click area at the bottom. For some reason, the right click ability recently stopped working. I have <del>not yet found the reason, but I was able to fix it by adding the emphasized lines in <tt>/usr/share/X11/xorg.conf.d/70-synaptics.conf</tt></del>.</p>
<pre><del>
Section "InputClass"
Identifier "Default clickpad buttons"
MatchDriver "synaptics"
<strong>Option "ClickPad" "true"
Option "EmulateMidButtonTime" "0"</strong>
Option "SoftButtonAreas" "50% 0 82% 0 0 0 0 0"
Option "SecondarySoftButtonAreas" "58% 0 0 15% 42% 58% 0 15%"
EndSection</del>
</pre>
<h5>Edit</h5>
<p>Nope. Stopped working again and both bottom areas act as left click. Since it still works as a left click, I guess the touchpad is physically OK. So I have no idea what's going on :(</p>
<h5>Edit 2</h5>
<p>Thanks to <a href="#comment-holder">Cole Robinson</a> and the <a href="http://who-t.blogspot.com/2018/04/gnome-328-uses-clickfinger-behaviour-by.html">link</a> he provided I found the reason and a fix. GNOME 3.28 uses the clickfinger behaviour by default now. By setting the click method from the <code>'fingers'</code> (clickfinger) method back to <code>'areas'</code>, either using <tt>gsettings</tt> or <tt>gnome-tweaks</tt>, the right click ability is back <strong>after rebooting the system</strong>.</p>
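<p>For reference, this is the <tt>gsettings</tt> invocation that switches the click method back; the schema and key below exist in GNOME 3.28, and the command has to be run as the desktop user:</p>

```shell
# Switch the touchpad click method from 'fingers' (clickfinger) back to 'areas'
gsettings set org.gnome.desktop.peripherals.touchpad click-method 'areas'
# Verify the new value
gsettings get org.gnome.desktop.peripherals.touchpad click-method
```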
<p>PS: With the new default clickfinger method, the right- and middle click are emulated using a two- and three-finger tap.</p>
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com3tag:blogger.com,1999:blog-4131419324726672281.post-42079900442567199502018-02-05T18:56:00.002+01:002018-02-05T18:57:32.316+01:00Get your namespace_id(s) for salsa.debian.org<div lang="en">
<p>In addition to <a href="http://www.df7cb.de/blog/2017/Salsa_batch_import.html">Christoph Berg's script</a> to import packages to <a href="http://salsa.debian.org">salsa.debian.org</a>, I'd like to provide some more ways to determine the <code>namespace_id</code> parameter you'll need.</p>
<p>Say <var>GROUP</var> is the group's name (or a part of it) and <var>TOKEN</var> is <a href="https://salsa.debian.org/profile/personal_access_tokens">a personal access token you created</a>. Then this query works much faster:</p>
<pre>curl --request GET -d "search=<var>GROUP</var>" -s https://salsa.debian.org/api/v4/<a href="https://salsa.debian.org/help/api/groups.md">groups</a> | jq '.[].id'</pre>
<p>If the group is not public, you need to add your token:</p>
<pre>curl [..] --header "PRIVATE-TOKEN: <var>TOKEN</var>" [..]</pre>
<p>The command might return several IDs if the search term matches several groups. In this case, you might want to look at the raw output without piping it to <code>jq</code>, or look at the result of ...</p>
<pre><a href="https://salsa.debian.org/api/v4/groups/?search=GROUP">https://salsa.debian.org/api/v4/groups/?search=<var>GROUP</var></a></pre>
<p>... in a browser. Interestingly, the latter doesn't provide any output in the browser if you are not part of the group, but it provides the necessary information using <code>curl</code>. Bug or feature?</p>
<p>Another way is to look at the output of ...</p>
<pre><a href="https://salsa.debian.org/api/v4/namespaces">https://salsa.debian.org/api/v4/namespaces</a></pre>
<p>... when you are logged in or ...</p>
<pre>curl --request GET --header "PRIVATE-TOKEN: <var>TOKEN</var>" https://salsa.debian.org/api/v4/<a href="https://salsa.debian.org/help/api/namespaces.md">namespaces</a></pre>
<p>This has the advantage of also getting the <code>namespace_id</code> for your personal repository, e.g. for importing projects hosted at <a href="https://people.debian.org/">https://people.debian.org/~<var>USER</var>/</a> or <a href="https://anonscm.debian.org/cgit/users/">https://anonscm.debian.org/cgit/users/<var>USER</var>/</a>.</p>
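<p>If you only need your personal namespace ID from that output, <code>jq</code> can filter on the <code>kind</code> field, which GitLab sets to <samp>user</samp> for personal namespaces. The JSON below is a made-up sample of the response shape, not real salsa.debian.org data:</p>

```shell
# Made-up sample of a /api/v4/namespaces response
namespaces='[
  { "id": 2100, "path": "debichem-team", "kind": "group" },
  { "id": 1042, "path": "dleidert",      "kind": "user"  }
]'
# Select only the personal (user) namespace and print its id
echo "$namespaces" | jq '.[] | select(.kind == "user") | .id'
```

In practice the input would of course come from the authenticated <code>curl</code> call shown above.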
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com0tag:blogger.com,1999:blog-4131419324726672281.post-62083399611182544082018-01-31T13:24:00.000+01:002018-01-31T13:24:46.103+01:00Migrating the debichem group subversion repository to Git - Part 1: svn-all-fast-export basics<div lang="en">
<p>With the <a href="https://wiki.debian.org/Alioth#Deprecation_of_Alioth">deprecation of alioth.debian.org</a> the subversion service hosted there will be shut down too. <a href="https://lintian.debian.org/tags/vcs-deprecated-in-debian-infrastructure.html">According to lintian</a> the estimated date is May 1st 2018 and there are currently more than 1500 source packages affected. In the <a href="https://debichem.alioth.debian.org/">debichem group</a> we've used the subversion service since 2006. Our repository contains around 7500 commits made by around 20 different alioth user accounts and the packaging history of around 70 to 80 packages, including packaging attempts. I've spent the last few days preparing the Git migration, comparing different tools, checking the created repositories and testing possibilities to automate the process as much as possible. The resulting scripts can currently be found <a href="https://anonscm.debian.org/viewvc/debichem/tools/svn2git/">here</a>.</p>
<p>Of course I began as described at the <a href="https://wiki.debian.org/de/Alioth/Git#Convert_a_SVN_Alioth_repository_to_Git">Debian Wiki</a>. But following this guide, using <code>git-svn</code> and converting the tags with the script supplied under the rubric <cite>Convert remote tags and branches to local one</cite>, gave me really weird results: the tags were pointing to the wrong commit IDs. I thought that <tt>git-svn</tt> was to blame and reported this as <a href="https://bugs.debian.org/887881">bug #887881</a>. In the following mail exchange Andreas Kaesorg explained to me that the issue is caused by so-called mixed-revision tags in our repository, as shown in the following example:</p>
<pre title="svn log -v -r7405">
$ svn log -v -r7405
------------------------------------------------------------------------
r7405 | dleidert | 2018-01-17 18:14:57 +0100 (Mi, 17. Jan 2018) | 1 Zeile
Geänderte Pfade:
A /tags/shelxle/1.0.888-1 (von /unstable/shelxle:7396)
R /tags/shelxle/1.0.888-1/debian/changelog (von /unstable/shelxle/debian/changelog:7404)
R /tags/shelxle/1.0.888-1/debian/control (von /unstable/shelxle/debian/control:7403)
D /tags/shelxle/1.0.888-1/debian/patches/qt5.patch
R /tags/shelxle/1.0.888-1/debian/patches/series (von /unstable/shelxle/debian/patches/series:7402)
R /tags/shelxle/1.0.888-1/debian/rules (von /unstable/shelxle/debian/rules:7403)
[svn-buildpackage] Tagging shelxle 1.0.888-1
------------------------------------------------------------------------
</pre>
<p>Looking into the git log, the tags determined by <kbd>git-svn</kbd> are really not in their right place in the history line, even before running the script to convert the branches into real Git tags. So IMHO <kbd>git-svn</kbd> is not able to cope with this kind of situation. Because it also cannot handle our branch model, where we use <samp>/<var>branch</var>/<var>package</var>/</samp>, I began to look for different tools and found <a href="https://github.com/svn-all-fast-export/svn2git"><kbd>svn-all-fast-export</kbd></a>, a tool created (by KDE?) to convert even large subversion repositories based on a ruleset. My attempt using this tool was so successful (not to speak of how fast it is) that I want to describe it in more detail. Maybe it will prove useful for others as well, and it won't hurt to give some more information about this poorly documented tool :)</p>
<h5 id="step1">Step 1: Setting up a local subversion mirror</h5>
<p>First I suggest setting up a local copy of the subversion repository to migrate that is kept in sync with the remote repository. This can be achieved using the <kbd>svnsync</kbd> command. There are several howtos for this, so I won't describe this step here. Please check out <a href="http://www.microhowto.info/howto/mirror_a_subversion_repository.html">this guide</a>. In my case I have such a copy in <samp>/srv/svn/debichem</samp>.</p>
<h5 id="step2">Step 2: Creating the identity map</h5>
<p><kbd>svn-all-fast-export</kbd> needs at least two files to work. One is the so-called <quote>identity map</quote>. This file contains the mapping between subversion user IDs (login names) and the (Git) committer info, like real name and mail address. The format is the same as used by <kbd>git-svn</kbd>:</p>
<pre>loginname = author name &lt;mail address&gt;</pre>
<p>e.g.</p>
<pre>dleidert = Daniel Leidert &lt;dleidert@debian.org&gt;</pre>
<p>The list of subversion user IDs can be obtained the same way as <a href="https://wiki.debian.org/de/Alioth/Git#Create_the_author_file">described in the Wiki</a>:</p>
<pre>svn log <var>SVN_URL</var> | awk -F'|' '/^r[0-9]+/ { print $2 }' | sort -u</pre>
<p>Just replace the placeholder <var>SVN_URL</var> with your subversion URL. <a href="https://anonscm.debian.org/viewvc/debichem/tools/svn2git/authors.txt?view=co&content-type=text%2Fplain">Here</a> is the complete file for the debichem group.</p>
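<p>To illustrate what the pipeline does, here it is run on two canned <kbd>svn log</kbd> entries (the first username is mine; the second one is made up):</p>

```shell
# Canned `svn log` output; only the revision header lines matter here
log='------------------------------------------------------------------------
r7405 | dleidert | 2018-01-17 18:14:57 +0100 (Wed, 17 Jan 2018) | 1 line

[svn-buildpackage] Tagging shelxle 1.0.888-1
------------------------------------------------------------------------
r7396 | jdoe | 2018-01-15 10:00:00 +0100 (Mon, 15 Jan 2018) | 1 line
------------------------------------------------------------------------'
# Header lines start with rNNNN; field 2 (pipe-separated) is the username
printf '%s\n' "$log" | awk -F'|' '/^r[0-9]+/ { print $2 }' | sort -u
```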
<h5 id="step3">Step 3: Creating the rules</h5>
<p>The most important thing is the second file, which contains the processing rules. There is really not much documentation out there. So when in doubt, one has to read the source file <a href="https://raw.githubusercontent.com/svn-all-fast-export/svn2git/master/src/ruleparser.cpp">src/ruleparser.cpp</a>. I'll describe what I've already found out. If you are impatient, <a href="https://anonscm.debian.org/viewvc/debichem/tools/svn2git/debichem.rules?view=co&content-type=text%2Fplain">here</a> is my result so far.</p>
<p>The basic rules are:</p>
<pre>
create repository <var>REPOSITORY</var>
...
end repository
</pre>
<p>and</p>
<pre>
match <var>PATTERN</var>
...
end match
</pre>
<p>The first rule creates a bare Git repository with the name you've chosen (above represented by <var>REPOSITORY</var>). It can have one child element: the repository description, which is put into the repository's <samp>description</samp> file. There are AFAIK no other elements allowed here. So in the case of e.g. ShelXle the rule might look like this:</p>
<pre>
create repository shelxle
description packaging of ShelXle, a graphical user interface for SHELXL
end repository
</pre>
<p>You'll have to create every repository before you can put something into it; otherwise <kbd>svn-all-fast-export</kbd> will exit with an error. JFTR: it won't complain if you create a repository but don't put anything into it. You will just end up with an empty Git repository.</p>
<p>Now the second type of rule is the most important one. Based on regular expression match patterns (above represented by <var>PATTERN</var>), one can define actions, including the possibility to limit these actions to repositories, branches and revisions. <strong>The patterns are applied in their order of appearance. Thus if a matching pattern is found, other patterns that also match but appear later in the rules file won't apply!</strong> So a special rule should always be put above a general rule. The patterns that can be used seem to be of type <a href="http://doc.qt.io/qt-5/qregexp.html#introduction">QRegExp</a> and support basic Perl-style regular expression features, e.g. capturing groups, backreferences and lookaheads. For a multi-package subversion repository with <quote>standard layout</quote> (that is <samp>/<var>PACKAGE</var>/{trunk,tags,branches}/</samp>), clean naming and clean subversion history, the rules could be:</p>
<pre>
match /([^/]+)/trunk/
repository \1
branch master
end match
match /([^/]+)/tags/([^/]+)/
repository \1
branch refs/tags/debian/\2
annotated true
end match
match /([^/]+)/branches/([^/]+)/
repository \1
branch \2
end match
</pre>
<p>The first rule captures the (source) package name from the path and puts it into the backreference <code>\1</code>. It applies to the trunk directory history and will put everything it finds there into the repository named after the directory - here we simply use the backreference <code>\1</code> to that name - and there into the master branch. Note that <kbd>svn-all-fast-export</kbd> will error out if it tries to access a repository which has not been created, so make sure all repositories are created as shown with the <code>create repository</code> rule. The second rule captures the (source) package name from the path too and puts it into the backreference <code>\1</code>. But in backreference <code>\2</code> it further captures (and applies to) all the tag directories under the <samp>/tags/</samp> directory. Usually these have a Debian package version as their name. With the branch statement as shown in this rule, the tags, which are really just branches in subversion, are automatically converted to <a href="https://git-scm.com/book/en/v2/Git-Basics-Tagging#_annotated_tags">annotated</a> Git tags (another advantage of <kbd>svn-all-fast-export</kbd> over <kbd>git-svn</kbd>). Without the <code>annotated</code> statement, the tags created will be <a href="https://git-scm.com/book/en/v2/Git-Basics-Tagging#_lightweight_tags">lightweight tags</a>. The tag name (here: <samp>debian/<var>VERSION</var></samp>) is determined via backreference <code>\2</code>. The third rule is almost the same, except that everything found in the matching path will be pushed into a Git branch named after the top-level directory captured from the subversion path.</p>
<p>Now in an ideal world, this might be enough and the actual conversion can be done. The command should only be executed in an empty directory. I'll assume that the identity map is called <samp>authors.txt</samp> and the rules file is called <samp>debichem.rules</samp> and that both are in the parent directory. I'll also assume that the local subversion mirror of the packaging repository is at <samp>/srv/svn/mymirror</samp>. So ...</p>
<pre>svn-all-fast-export --stats --identity-map=../authors.txt --rules=../debichem.rules /srv/svn/mymirror</pre>
<p>... will create one or more <strong>bare</strong> Git repositories (depending on your rules file) in the current directory. After the command succeeded, you can test the results ...</p>
<pre>
git -C <var>REPOSITORY</var>/ --bare show-ref
git -C <var>REPOSITORY</var>/ --bare log --all --graph
</pre>
<p>... and you will find your repository's description (if you added one to the rules file) in <samp><var>REPOSITORY</var>/description</samp>:</p>
<pre>cat <var>REPOSITORY</var>/description</pre>
<p><strong>Please note that not all of the Debian version strings are <a href="https://git-scm.com/docs/git-check-ref-format">well-formed Git reference names</a> and therefore need fixing. There might also be gaps shown in the Git history log. Or maybe the command didn't even succeed or complained (without you noticing it), or you ended up with an empty repository, although the matching rules applied. I encountered all of these issues and I'll describe the causes and fixes in the next blog article.</strong></p>
<p>But if everything went well (you have no history gaps, the tags are in their right place within the linearized history and the repository looks fine) and you can and want to proceed, you might want to <a href="#step4">skip to the next step</a>.</p>
<p>In the debichem group we used a different layout. The packaging directories were under <samp>/{unstable,experimental,wheezy,lenny,non-free}/<var>PACKAGE</var>/</samp>. This translates to <samp><a href="https://anonscm.debian.org/viewvc/debichem/unstable/">/unstable/</a><var>PACKAGE</var>/</samp> and <samp><a href="https://anonscm.debian.org/viewvc/debichem/non-free/">/non-free/</a><var>PACKAGE</var>/</samp> being the trunk directories and the <a href="https://anonscm.debian.org/viewvc/debichem/experimental/">others</a> being the branches. The tags are in <samp><a href="https://anonscm.debian.org/viewvc/debichem/tags/">/tags/</a><var>PACKAGE</var>/</samp>. And packages that have yet to be uploaded are located in <samp><a href="https://anonscm.debian.org/viewvc/debichem/wnpp/">/wnpp/</a><var>PACKAGE</var>/</samp>. With this layout, the basic rules are:</p>
<pre>
# trunk handling
# e.g. <a href="https://anonscm.debian.org/viewvc/debichem/unstable/espresso/">/unstable/espresso/</a>
# e.g. <a href="https://anonscm.debian.org/viewvc/debichem/non-free/molden/">/non-free/molden/</a>
match /(?:unstable|non-free)/([^/]+)/
repository \1
branch master
end match
# handling wnpp
# e.g. <a href="https://anonscm.debian.org/viewvc/debichem/wnpp/osra/">/wnpp/osra/</a>
match /(wnpp)/([^/]+)/
repository \2
branch \1
end match
# branch handling
# e.g. <a href="https://anonscm.debian.org/viewvc/debichem/wheezy/espresso/">/wheezy/espresso/</a>
match /(lenny|wheezy|experimental)/([^/]+)/
repository \2
branch \1
end match
# tags handling
# e.g. <a href="https://anonscm.debian.org/viewvc/debichem/tags/espresso/">/tags/espresso/<var>VERSION</var>/</a>
match /tags/([^/]+)/([^/]+)/
repository \1
annotated true
branch refs/tags/debian/\2
substitute branch s/~/_/
substitute branch s/:/_/
end match
</pre>
<p>In the first rule, there is a non-capturing expression <kbd>(?: ... )</kbd>, which simply means that the rule applies to both <samp>/unstable/</samp> and <samp>/non-free/</samp>. Thus the backreference <code>\1</code> refers to the second part of the path, the package directory name. The contents found are pushed to the master branch. In the second rule, the contents of the <samp>wnpp</samp> directory are not pushed to master, but instead to a branch called <samp>wnpp</samp>. This was necessary because of overlaps between the <samp>/unstable/</samp> and <samp>/wnpp/</samp> history and already shows that the repository's history makes things complicated. In the third rule, the first backreference <code>\1</code> determines the branch (note the capturing expression, in contrast to the first rule) and the second backreference <code>\2</code> the package repository to act on. The last rule is similar, but now <code>\1</code> determines the package repository and <code>\2</code> the tag name (the Debian package version) based on the matching path. The example also shows another issue, which I'd like to explain more in the next article: some characters we use in Debian package versions, e.g. the tilde sign and the colon, are not allowed within Git tag names and must therefore be substituted, which is done by the <code>substitute branch <var>EXPRESSION</var></code> instructions.</p>
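<p>Why the substitution is needed can be demonstrated with <kbd>git check-ref-format</kbd>, which validates candidate reference names; the version strings below are made-up examples:</p>

```shell
# A plain Debian version makes a valid tag name ...
git check-ref-format "refs/tags/debian/1.0.888-1" && echo "valid"
# ... but tilde and colon are rejected, hence the substitute rules
git check-ref-format "refs/tags/debian/1.0~rc1-1" || echo "tilde rejected"
git check-ref-format "refs/tags/debian/1:1.0-1"   || echo "colon rejected"
# After the s/~/_/ and s/:/_/ substitutions the name becomes valid again
git check-ref-format "refs/tags/debian/1.0_rc1-1" && echo "valid"
```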
<h5 id="step4">Step 4: Cleaning the bare repository</h5>
<p>The <a href="https://techbase.kde.org/Projects/MoveToGit/UsingSvn2Git#Checking_for_proper_history_in_the_new_git_repository">tool documentation</a> suggests running ...</p>
<pre>git -C <var>REPOSITORY</var>/ repack -a -d -f</pre>
<p>... before you upload this bare repository to another location. But <a href="http://lists.alioth.debian.org/pipermail/debichem-devel/2018-January/008816.html">Stuart Prescott told me on the debichem list</a> that this might not be enough and still leave some garbage behind. I'm not experienced enough to judge here, but his suggestion is to clone the repository: either create a bare clone, or clone it and init a new bare repository. I used the first approach:</p>
<pre>
git clone --bare <var>REPOSITORY</var>/ <var>REPOSITORY</var>.git
git -C <var>REPOSITORY</var>.git/ repack -a -d -f
</pre>
<p><strong>Please note that this won't copy the repository's <samp>description</samp> file. You'll have to copy it manually if you want to keep it.</strong> The resulting bare repository can be uploaded (e.g. to <a href="https://wiki.debian.org/de/Alioth/Git#Using_personal_Git_repositories">git.debian.org as a personal repository</a>):</p>
<pre>
cp <var>REPOSITORY</var>/description <var>REPOSITORY</var>.git/description
touch <var>REPOSITORY</var>.git/git-daemon-export-ok
rsync -avz <var>REPOSITORY</var>.git git.debian.org:~/public_git/
</pre>
<p>Or you clone the repository, add a remote origin and push everything there. It is even possible to use the GitLab API at salsa.debian.org to create a project and push there. I'll save the latter for another post. If you are hasty, you'll find a script <a href="https://anonscm.debian.org/viewvc/debichem/tools/svn2git/gitimport.sh?view=co&content-type=text%2Fplain">here</a>.</p>
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com0tag:blogger.com,1999:blog-4131419324726672281.post-64947831962301916232018-01-14T03:45:00.001+01:002018-01-14T03:46:24.631+01:00Make 'bts' (devscripts) accept TLS connection to mail server with self signed certificate<div lang="en">
<p>My mail server runs with a self signed certificate. So <code>bts</code>, configured like this ...</p>
<pre title="~/.devscripts">
BTS_SMTP_HOST=mail.wgdd.de:587
BTS_SMTP_AUTH_USERNAME='<var>user</var>'
BTS_SMTP_AUTH_PASSWORD='<var>pass</var>'
</pre>
<p>...lately refused to send mails with this error:</p>
<pre>
bts: failed to open SMTP connection to mail.wgdd.de:587
(SSL connect attempt failed error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed)
</pre>
<p>After <a href="https://stackoverflow.com/questions/33196891/trust-self-signed-certificate-for-emailsendertransportsmtps-in-perl">searching</a> a bit, I found a way to fix this locally without turning off the server certificate verification. The fix belongs in the <code>send_mail()</code> function. When calling the <a href="https://metacpan.org/pod/Net::SMTPS#CONSTRUCTOR"><code>Net::SMTPS->new()</code></a> constructor, it is possible to add the fingerprint of my self-signed certificate like this (bold):</p>
<pre>
if (have_smtps) {
$smtp = Net::SMTPS->new($host, Port => $port,
Hello => $smtphelo, doSSL => 'starttls'<strong>,
<a href="https://metacpan.org/pod/IO::Socket::SSL#SSL_fingerprint">SSL_fingerprint</a> => 'sha1$<var>hex-fingerprint</var>'</strong>)
or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
} else {
$smtp = Net::SMTP->new($host, Port => $port, Hello => $smtphelo)
or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
}
</pre>
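<p>The <var>hex-fingerprint</var> value can be read off the server certificate with <kbd>openssl</kbd>. This is only a sketch against my host and the STARTTLS port used above; adjust host and port for your setup, and depending on your IO::Socket::SSL version you may need to strip the colons from the printed value:</p>

```shell
# Fetch the certificate via STARTTLS and print its SHA-1 fingerprint
echo | openssl s_client -connect mail.wgdd.de:587 -starttls smtp 2>/dev/null |
  openssl x509 -noout -fingerprint -sha1
```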
<p>Pretty happy to be able to use the <code>bts</code> command again.</p>
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com1tag:blogger.com,1999:blog-4131419324726672281.post-49376349140354098692017-10-31T22:34:00.004+01:002017-10-31T22:34:48.390+01:00Troubleshoot<div lang="en">
<p>I have no idea what these errors mean. <var>$searchengine</var> and manual pages didn't reveal anything.</p>
<p>That's the first one. It occurs during boot time. Might be a bug somewhere, recently introduced in Debian Sid.</p>
<pre>
kernel: [ ... ] cgroup: cgroup2: unknown option "nsdelegate"
</pre>
<p>And that's the second one. It simply occurred. No real issue with NFS though.</p>
<pre>
kernel: [ ... ] nfs: RPC call returned error 22
kernel: [ ... ] NFS: state manager: check lease failed on NFSv4 server XXX.XXX.XXX.XXX with error 5
</pre>
<p>Any explanation is appreciated.</p>
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com0tag:blogger.com,1999:blog-4131419324726672281.post-54698337512213216192015-03-29T13:48:00.002+02:002015-03-30T19:01:48.033+02:00Prevent suspend/hibernate if system is remotely backed up via rdiff-backup<div lang="en">
<p>I usually use <tt>rdiff-backup</tt> to back up several of my systems. One is a workstation which goes to sleep after some idle time. Now having a user logged in running <tt>rdiff-backup</tt> (or <tt>rsync</tt>, <tt>rsnapshot</tt> etc. for that matter) won't prevent the system from being put to sleep. Naturally this happens before the backup is complete. So some time ago I went looking for a solution and received a <a href="http://comments.gmane.org/gmane.user-groups.linux.dresden/24216" hreflang="de">suggestion to use a script in <tt>/etc/pm/sleep.d/</tt></a>. I had to modify the script a bit, because the query result was always true. So this is my solution in <tt>/etc/pm/sleep.d/01_prevent_sleep_on_backup</tt> now:</p>
<pre title="/etc/pm/sleep.d/01_prevent_sleep_on_backup" class="brush: shell">
#!/bin/sh
. "${PM_FUNCTIONS}"
command_exists rdiff-backup || exit $NA
case "$1" in
hibernate|suspend)
if ps cax | grep -q rdiff-backup
then
exit 1
fi
;;
esac
exit 0
</pre>
<p>Currently testing ...</p>
<h4>Update</h4>
<p>The above works with <tt>pm-utils</tt>; but it fails with <tt>systemd</tt>. Seems I have to move and modify the script for my system.</p>
<h4>Update 2</h4>
<p>It doesn't work. In short: <code>exit 1</code> doesn't prevent systemd from going to suspend. I can see that the script itself leads to the results I want, so the logic is correct. But I cannot find a way to tell systemd to stop the suspend. Shouldn't it be doing this automatically in a case where a remote user is logged in and runs a command?</p>
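<p>One way to express this under <tt>systemd</tt> might be to take a sleep inhibitor lock for the duration of the backup, e.g. by wrapping the <tt>rdiff-backup</tt> invocation in <kbd>systemd-inhibit</kbd> on the machine that would otherwise suspend (the paths and host below are made up):</p>

```shell
# Hold a sleep/idle inhibitor while the backup runs; it is released on exit
systemd-inhibit --what=sleep:idle --who="backup job" \
    --why="rdiff-backup in progress" \
    rdiff-backup /home/user user@backuphost::/srv/backup/user
```

This only helps when the backup is initiated on the workstation itself; for a purely remote <tt>rdiff-backup</tt> session, the inhibitor would have to be taken by e.g. a wrapper around the remote shell command.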
<h4>Update 3</h4>
<p>There is also a related <a href="https://bugs.debian.org/771641">bug report</a>.</p>
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com9tag:blogger.com,1999:blog-4131419324726672281.post-59455103698201308192015-02-13T12:26:00.003+01:002015-02-13T12:40:28.973+01:00Motion picture capturing: Debian + motion + Logitech C910 - part II<div lang="en">
<p>In <a href="/2015/02/motion-picture-capturing-debian-motion.html">my recent attempt to set up a motion detection camera</a> I was disappointed that my camera, which should be able to record at 30 fps in 720p mode, only reached 10 fps using the software <tt>motion</tt>. Now I got a bit further. This seems to be an issue with the format used by <tt>motion</tt>. I've checked the output of <tt>v4l2-ctl</tt> ...</p>
<pre title="v4l2-ctl -d /dev/video1 --list-formats-ext">$ v4l2-ctl -d /dev/video1 --list-formats-ext
[..]
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
<strong>Pixel Format: 'YUYV'</strong>
Name : YUV 4:2:2 (YUYV)
[..]
<strong>Size: Discrete 1280x720
Interval: Discrete 0.100s (10.000 fps)</strong>
Interval: Discrete 0.133s (7.500 fps)
Interval: Discrete 0.200s (5.000 fps)
[..]
Index : 1
Type : Video Capture
<strong>Pixel Format: 'MJPG' (compressed)</strong>
Name : MJPEG
[..]
<strong>Size: Discrete 1280x720
Interval: Discrete 0.033s (30.000 fps)</strong>
Interval: Discrete 0.042s (24.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.133s (7.500 fps)
Interval: Discrete 0.200s (5.000 fps)
[..]</pre>
<p>... and <tt>motion</tt>:</p>
<pre title="motion" id="motion_default_mode">$ motion
[..]
[1] [NTC] [VID] v4l2_set_pix_format: Config palette index 17 (YU12) doesn't work.
[1] [NTC] [VID] v4l2_set_pix_format: Supported palettes:
<strong>[1] [NTC] [VID] v4l2_set_pix_format: (0) YUYV (YUV 4:2:2 (YUYV))
[1] [NTC] [VID] v4l2_set_pix_format: 0 - YUV 4:2:2 (YUYV) (compressed : 0) (0x56595559)
[1] [NTC] [VID] v4l2_set_pix_format: (1) MJPG (MJPEG)
[1] [NTC] [VID] v4l2_set_pix_format: 1 - MJPEG (compressed : 1) (0x47504a4d)</strong>
[1] [NTC] [VID] v4l2_set_pix_format Selected palette YUYV
[1] [NTC] [VID] v4l2_do_set_pix_format: Testing palette YUYV (1280x720)
[1] [NTC] [VID] v4l2_do_set_pix_format: Using palette YUYV (1280x720) bytesperlines 2560 sizeimage 1843200 colorspace 00000008
[..]
</pre>
<p>Ok, so both formats <code>YUYV</code> and <code>MJPG</code> are supported and recognized, and I can choose either via the <var>v4l2_palette</var> configuration variable, citing <tt>motion.conf</tt>:</p>
<pre title="motion.conf"># v4l2_palette allows to choose preferable palette to be use by motion
# to capture from those supported by your videodevice. (default: 17)
# E.g. if your videodevice supports both V4L2_PIX_FMT_SBGGR8 and
# V4L2_PIX_FMT_MJPEG then motion will by default use V4L2_PIX_FMT_MJPEG.
# Setting v4l2_palette to 2 forces motion to use V4L2_PIX_FMT_SBGGR8
# instead.
#
# Values :
# V4L2_PIX_FMT_SN9C10X : 0 'S910'
# V4L2_PIX_FMT_SBGGR16 : 1 'BYR2'
# V4L2_PIX_FMT_SBGGR8 : 2 'BA81'
# V4L2_PIX_FMT_SPCA561 : 3 'S561'
# V4L2_PIX_FMT_SGBRG8 : 4 'GBRG'
# V4L2_PIX_FMT_SGRBG8 : 5 'GRBG'
# V4L2_PIX_FMT_PAC207 : 6 'P207'
# V4L2_PIX_FMT_PJPG : 7 'PJPG'
<strong># V4L2_PIX_FMT_MJPEG : 8 'MJPEG'</strong>
# V4L2_PIX_FMT_JPEG : 9 'JPEG'
# V4L2_PIX_FMT_RGB24 : 10 'RGB3'
# V4L2_PIX_FMT_SPCA501 : 11 'S501'
# V4L2_PIX_FMT_SPCA505 : 12 'S505'
# V4L2_PIX_FMT_SPCA508 : 13 'S508'
# V4L2_PIX_FMT_UYVY : 14 'UYVY'
<strong># V4L2_PIX_FMT_YUYV : 15 'YUYV'</strong>
# V4L2_PIX_FMT_YUV422P : 16 '422P'
# V4L2_PIX_FMT_YUV420 : 17 'YU12'
#
v4l2_palette 17
</pre>
<p>Now motion uses <code>YUYV</code> as the default mode <a href="#motion_default_mode">as shown by its output</a>. So it seems all I have to do is choose <code>MJPEG</code> in my <tt>motion.conf</tt>:</p>
<pre title="motion.conf">v4l2_palette 8</pre>
<p>Testing again ...</p>
<pre title="motion">$ motion
[..]
[1] [NTC] [VID] vid_v4lx_start: Using V4L2
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 25 (ret 0 )
Corrupt JPEG data: 5 extraneous bytes before marker 0xd6
[1] [CRT] [VID] <strong>mjpegtoyuv420p: Corrupt image ... continue</strong>
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 14 (ret 0 )
Corrupt JPEG data: 1 extraneous bytes before marker 0xd5
[1] [CRT] [VID] <strong>mjpegtoyuv420p: Corrupt image ... continue</strong>
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 36 (ret 0 )
Corrupt JPEG data: 3 extraneous bytes before marker 0xd2
[1] [CRT] [VID] <strong>mjpegtoyuv420p: Corrupt image ... continue</strong>
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 58 (ret 0 )
Corrupt JPEG data: 1 extraneous bytes before marker 0xd7
[1] [CRT] [VID] <strong>mjpegtoyuv420p: Corrupt image ... continue</strong>
[1] [NTC] [VID] v4l2_set_control: setting control "Brightness" to 80 (ret 0 )
Corrupt JPEG data: 4 extraneous bytes before marker 0xd7
[1] [CRT] [VID] <strong>mjpegtoyuv420p: Corrupt image ... continue</strong>
[1] [ERR] [ALL] <strong>motion_init: Error capturing first image</strong>
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 16 items
Corrupt JPEG data: 4 extraneous bytes before marker 0xd1
[1] [CRT] [VID] <strong>mjpegtoyuv420p: Corrupt image ... continue</strong>
Corrupt JPEG data: 11 extraneous bytes before marker 0xd1
[1] [CRT] [VID] <strong>mjpegtoyuv420p: Corrupt image ... continue</strong>
Corrupt JPEG data: 3 extraneous bytes before marker 0xd4
[1] [CRT] [VID] <strong>mjpegtoyuv420p: Corrupt image ... continue</strong>
Corrupt JPEG data: 7 extraneous bytes before marker 0xd1
[..]</pre>
<p>... and another issue turns up :( The output above goes on and on and there is no video capturing. According to <var>$searchengine</var>, this happens to a lot of people. I found one often-suggested fix: pre-load <tt>v4l2convert.so</tt> from <tt>libv4l-0</tt>:</p>
<pre title="run motion and pre-load v4l2convert.so">$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l2convert.so motion</pre>
<p>But the problem persists <del>and I'm out of ideas :( So atm it looks like I cannot use the <code>MJPEG</code> format and don't get 30 fps at 1280x720 pixels</del>. During writing I then discovered a solution by good old trial and error: leaving the <var>v4l2_palette</var> variable at its default value <tt>17</tt> (<code>YU12</code>) <strong>and</strong> pre-loading <tt>v4l2convert.so</tt> makes use of <code>YU12</code> and the framerate at least rises to <strong>24 fps</strong>:</p>
<pre title="motion output with pre-loaded v4l2convert.so">$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l2convert.so motion
[..]
[1] [NTC] [VID] v4l2_do_set_pix_format: Testing palette YU12 (1280x720)
[1] [NTC] [VID] v4l2_do_set_pix_format: Using palette YU12 (1280x720) bytesperlines 1280 sizeimage 1382400 colorspace 00000008
[..]
[1] [NTC] [EVT] event_new_video FPS 24
[..]</pre>
<p>Finally! :) The results are nice. It might even be a good idea to limit the framerate a bit, e.g. to 20. So this is a tested configuration for the Logitech C910 running at a resolution of 1280x720 pixels:</p>
<pre title="motion.conf for 1280x720">v4l2_palette 17
width 1280
height 720
framerate 20
minimum_frame_time 0
pre_capture 10 # 0.5 seconds pre-recording
post_capture 50 # 2.5 seconds after-recording
auto_brightness on
ffmpeg_variable_bitrate 2 # best quality</pre>
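<p>Since reaching 24 fps depends on starting <tt>motion</tt> with the pre-loaded conversion library, a small wrapper script keeps that from being forgotten. This is only a sketch: the wrapper path <tt>/tmp/motion-wrapper</tt> is an arbitrary example and the library path is the Debian amd64 multiarch default (check <code>dpkg -L libv4l-0</code> for the real location on your system).</p>

```shell
# Hypothetical wrapper script around motion; adjust the library
# path for your architecture.
cat > /tmp/motion-wrapper <<'EOF'
#!/bin/sh
# Pre-load libv4l so it transparently converts to YU12 for motion.
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l2convert.so
export LD_PRELOAD
exec motion "$@"
EOF
chmod +x /tmp/motion-wrapper
```

<p>Calling <tt>/tmp/motion-wrapper</tt> instead of <tt>motion</tt> then always runs with the workaround in place.</p>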
<p>Now all this made me curious which framerate is possible at a resolution of 1920x1080 pixels and how the results look. Although I get <strong>24 fps</strong> here too, the resulting movie <a href="http://www.lavrsen.dk/foswiki/bin/view/Motion/FrequentlyAskedQuestions#Motion_detection_is_working._But_when_you_view_the_mpeg_recording_there_seems_to_be_several_seconds_missing_44_it_skips_many_frames_and_jumps_ahead_several_seconds.">suffers from jumps</a> every few frames. Here I got pretty good results with a more conservative setting. When increasing <var>framerate</var> - tested up to 15 fps with good results - <var>pre_capture</var> needed to be decreased accordingly to values between <samp>1..3</samp> to minimize jumps:</p>
<pre title="motion.conf for 1920x1080">v4l2_palette 17
width 1920
height 1080
framerate 12
minimum_frame_time 0
pre_capture 6 # 0.5 seconds pre-recording
post_capture 30 # 2.5 seconds after-recording
auto_brightness on
ffmpeg_variable_bitrate 2 # best quality</pre>
<p>Both configurations lead to satisfying results. Of course the latter will easily fill your hard drive :)</p>
<h4>TODO</h4>
<p>I guess the results can be optimized further by playing around with <var>ffmpeg_bps</var> and <var>ffmpeg_variable_bitrate</var>. Maybe then it is possible to record without jumps at higher framerates too(?). I also didn't test the various <var>norm</var> settings (PAL, NTSC, etc.). </p>
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com3tag:blogger.com,1999:blog-4131419324726672281.post-1333612835367505292015-02-12T20:02:00.001+01:002015-04-06T14:10:12.748+02:00Motion picture capturing: Debian + motion + Logitech C910 <div lang="en">
<span><a href="/20150211/woodpecker_1.jpg"><img src="/20150211/woodpecker_1.jpg" alt="Woodpecker near the window" style="float: right;" /></a></span>
<p>Winter time is a good time for some nature observation. Yesterday I had a woodpecker (picture) in front of my kitchen window. During recent weeks there were long-tailed tits, a wren and other rarely seen birds. So I thought it might be a good idea to capture some of these events :) I still own a <a href="http://support.logitech.com/product/hd-pro-webcam-c910">Logitech C910 USB camera</a> which allows HD video capturing up to 1080p. So I checked the web for software that would begin video capturing in case of motion detection and found <a href="apt://motion"><tt>motion</tt></a>, already available for Debian users. So I gave it a try. I tested all available resolutions of the camera together with the capturing results. I found that the resulting framerate of both the live stream and the captured video depends heavily on the resolution and a few configuration options. Below is a summary of my tests and the results I've achieved so far.</p>
<h4 style="clear: both;">Logitech C910 HD camera</h4>
<p>Just a bit of data regarding the camera. AFAIK it allows for fluent video streams up to 720p.</p>
<pre title="Output of various commands regarding the Logitech C910 HD USB webcam">$ dmesg
[..]
usb 7-3: new high-speed USB device number 5 using ehci-pci
usb 7-3: New USB device found, idVendor=046d, idProduct=0821
usb 7-3: New USB device strings: Mfr=0, Product=0, SerialNumber=1
usb 7-3: SerialNumber: 91CF80A0
usb 7-3: current rate 0 is different from the runtime rate 16000
usb 7-3: current rate 0 is different from the runtime rate 32000
uvcvideo: Found UVC 1.00 device &lt;unnamed&gt; (046d:0821)
input: UVC Camera (046d:0821) as /devices/pci0000:00/0000:00:1a.7/usb7/7-3/7-3:1.2/input/input17
$ lsusb
[..]
Bus 007 Device 005: ID 046d:0821 Logitech, Inc. HD Webcam C910
[..]
$ v4l2-ctl -V -d /dev/video1
Format Video Capture:
Width/Height : 1280/720
Pixel Format : 'YUYV'
Field : None
Bytes per Line: 2560
Size Image : 1843200
Colorspace : SRGB</pre>
<p>Also the <code>uvcvideo</code> kernel module is loaded and the user in question is part of the <tt>video</tt> group.</p>
<h4>Installation and start</h4>
<p>Installation of the software is as easy as always:</p>
<pre>apt-get install motion</pre>
<p>It is possible to run the software as a service. But for testing, I copied <tt>/etc/motion/motion.conf</tt> to <tt>~/.motion/motion.conf</tt>, fixed its permissions (you cannot read the file as a normal user - it's not world-readable) and disabled the daemon mode.</p>
<pre title="~/.motion/motion.conf">daemon off</pre>
<p>Note that in my case the correct device is <tt>/dev/video1</tt> because the laptop has a built-in camera, that is <tt>/dev/video0</tt>. Also the target directory should be writeable by my user:</p>
<pre title="~/.motion/motion.conf">videodevice /dev/video1
target_dir ~/Videos</pre>
<p>Then running <tt>motion</tt> from the command line ...</p>
<pre title="shell">$ motion
[..]
[0] [NTC] [ALL] motion_startup: Motion 3.2.12+git20140228 Started
[..]
[1] [NTC] [ALL] motion_init: Thread 1 started , motion detection Enabled
[0] [NTC] [ALL] main: Thread 1 is device: /dev/video1 input -1
[1] [NTC] [VID] v4l2_get_capability:
------------------------
cap.driver: "uvcvideo"
cap.card: "UVC Camera (046d:0821)"
cap.bus_info: "usb-0000:00:1a.7-1"
cap.capabilities=0x84000001
------------------------
[1] [NTC] [VID] v4l2_get_capability: - VIDEO_CAPTURE
[1] [NTC] [VID] v4l2_get_capability: - STREAMING
[1] [NTC] [VID] v4l2_select_input: name = "Camera 1", type 0x00000002, status 00000000
[1] [NTC] [VID] v4l2_select_input: - CAMERA
[..]
[1] [NTC] [ALL] image_ring_resize: Resizing pre_capture buffer to 1 items</pre>
<p>... will begin to capture motion detection events and also output a live stream. <kbd>CTRL+C</kbd> will stop it again.</p>
<h4>Live stream</h4>
<p>The live stream is available by pointing the browser to <tt>localhost:8081</tt>. However, the stream seems to run at just 1 fps (frame per second) - and it indeed does. The stream quality improves with this configuration:</p>
<pre title="~/.motion/motion.conf">stream_motion on
stream_maxrate 100</pre>
<p>The first option makes the stream run at one fps only as long as there is no motion detection event. Otherwise the framerate increases to its maximum value, which is either the one given by <var>stream_maxrate</var> or the camera limit. The quality of the stream picture can be improved a bit further by increasing the <var>stream_quality</var> value. Because I need neither the stream nor the control feed, I disabled both:</p>
<pre title="~/.motion/motion.conf">stream_port 0
webcontrol_port 0</pre>
<h4>Picture capturing</h4>
<p>By default there is video and picture capturing if a motion event is detected. I'm not interested in these pictures, so I turned them off:</p>
<pre title="~/.motion/motion.conf">output_pictures off</pre>
<p>FYI: If you want a good picture quality, then the value of <var>quality</var> should very probably be increased.</p>
<h4>Video capturing</h4>
<p>This is the really interesting part :) Of course, if I'm going to "shoot" some birds (with the camera), a small image of say 320x240 pixels is not enough. The camera allows for a capture resolution up to 1920x1080 pixels (1080p). It is advertised for fluent video streams up to 720p (1280x720 pixels). So I tried the following resolutions: 320x240, 640x480, 800x600, 640x360 (360p), 1280x720 (720p) and 1920x1080 (1080p). These are easily configured by the <var>width</var> and <var>height</var> variables. For example the following configures <tt>motion</tt> for 1280x720 pixels (720p):</p>
<pre title="~/.motion/motion.conf">width 1280
height 720</pre>
<p>The result was <strong>really disappointing</strong>. No event is captured with more than 20 fps. At higher resolutions the framerate drops even further and at the highest resolution of 1920x1080 pixels, the framerate is only two(!) fps. Also every created video runs much too fast, and even faster when increasing the <var>framerate</var> variable. Of course its default value of <samp>2</samp> (fps) is not enough for fluent videos. AFAIK the C910 can run at 30 fps at 1280x720 pixels. So increasing the value of <var>framerate</var>, the maximum framerate recorded, is a must-do. (If you wanna test yourself, check the log output for the value following <code><q cite="syslog">event_new_video FPS</q></code>.)</p>
<p>The solution to the issue that videos run too fast, however, is to increase the <var>pre_capture</var> value, the number of <q cite="motion.conf">pre-captured (buffered) pictures from before motion was detected</q>. Even small values like <samp>3..5</samp> result in a distinct improvement of the situation. Increasing the value further didn't have any effect, though. So the values below should pretty much get the most out of the camera and result in videos running at normal speed.</p>
<pre title="~/.motion/motion.conf">framerate 100
pre_capture 5</pre>
<p>Videos in 1280x720 pixels are still captured at 10 fps and I don't know why. Running <tt>guvcview</tt>, the same camera allows for 30 fps in this resolution (even 60 fps in lower resolutions). However, even if the framerate could be higher, the resulting video runs fluently. Still the quality is just moderate (or to be honest, still disappointing). It looks "pixelated". Only static pictures are sharp. It took me a while to fix this too, because I first thought the reason was the camera or missing hardware support. It is not :) The reason is that ffmpeg is configured to produce a moderate(?)-quality video. The relevant variables are <var>ffmpeg_bps</var> and <var>ffmpeg_variable_bitrate</var>. I got the best results just changing the latter:</p>
<pre title="~/.motion/motion.conf">ffmpeg_variable_bitrate 2</pre>
<p><strong>Finally the resulting video quality is promising.</strong> I'll start with this configuration setting up an observation camera for the bird feeding ground.</p>
<p>There is one more tweak for me. I got even better results by enabling the <var>auto_brightness</var> feature.</p>
<pre title="~/.motion/motion.conf">auto_brightness on</pre>
<h4>Complete configuration</h4>
<p>So the complete configuration looks like this (only the options changed from the original config file):</p>
<pre title="~/.motion/motion.conf">daemon off
videodevice /dev/video1
width 1280
height 720
framerate 100
auto_brightness on
ffmpeg_variable_bitrate 2
target_dir /home/user/Videos
stream_port 0 #8081
stream_motion on
stream_maxrate 100
webcontrol_port 0 #8080</pre>
<h4>Links</h4>
<ul>
<li><a href="http://www.lavrsen.dk/foswiki/bin/view/Motion/MotionGuide">Motion Guide for Motion</a></li>
<li><a href="http://www.lavrsen.dk/foswiki/bin/view/Motion/ConfigFileOptions">Motion Config File Options</a></li>
</ul>
<p><a href="/2015/02/motion-picture-capturing-debian-motion_13.html">Continue with part II ...</a></p>
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com0tag:blogger.com,1999:blog-4131419324726672281.post-75378177187199599292015-02-12T15:11:00.002+01:002015-02-12T15:15:19.670+01:00Setting up a network buildd with pbuilder ... continued<div lang="en">
<p>Last year I described <a href="/2013/08/setting-up-network-buildd-with-pbuilder.html">my setup of a local network-buildd using pbuilder, ccache, inoticoming and NFS</a>. One then-still-open goal was to support different Debian releases. This is especially necessary for backports of e.g. bluefish. The <a href="http://backports.debian.org/Contribute/#index5h3">rules</a> for contributing backports require one to <q cite="http://backports.debian.org/Contribute/#index5h3">include all changelog entries since the last version on debian-backports or since stable if it's the first version</q> in an uploaded package. Therefore one needs to know the last version in e.g. <tt>wheezy-backports</tt>. Because I'm not typing the command myself (the source package only gets uploaded and <tt>inoticoming</tt> starts the build process), I was looking for a way to automatically retrieve that version and add the relevant <code>-vX.Y-Z</code> switch to <tt>dpkg-buildpackage</tt>.</p>
<p>The solution I found requires <tt>aptitude</tt> and a <tt>sources.list</tt> entry for the relevant release. If you are only interested in the solution, just jump to the <a href="#pbuilderrc_working">end</a> :)</p>
<p>I'm going to add the version switch to the <var>DEBBUILDOPTS</var> variable of <tt>pbuilder</tt>. In my setup I have a common (shared) snippet called <tt>/etc/pbuilderrc.inc</tt> and one configuration file per release and architecture, say <tt>/etc/pbuilderrc.amd64.stable</tt>. Now the first already contains ...</p>
<pre title="/etc/pbuilderrc.inc">DEBBUILDOPTS="-us -uc -j2"</pre>
<p> ... and <var>DEBBUILDOPTS</var> can be extended in the latter:</p>
<pre title="/etc/pbuilderrc.amd64.stable">DEBBUILDOPTS+="..."</pre>
<p>Because the config file is parsed pretty early in the process, the package name has not yet been assigned to any variable. The last argument to <tt>pbuilder</tt> is the <tt>.dsc</tt> file. So I use that last argument and parse the file to retrieve the source package name.</p>
<pre title="shell">cat ${@: -1} | grep -e ^Source | awk -F\ '{ print $2 }'</pre>
<p>The solution above works because pbuilder is a BASH script. Otherwise it might need some tweaking. I use the source package name because it is unique and there is just one :) Now with this name I check for all versions in <tt>wheezy*</tt> and <tt>stable*</tt> and sort them. Aptitude sorts from low to high, so the last line should contain the highest version. This covers both the case that there is no backport yet and the case that there is one:</p>
<pre title="shell">aptitude versions -F '%p' --show-package-names=never --group-by=none --sort=version \
"?narrow(?source-package(^<var>PACKAGE</var>\$), ?or(?archive(^wheezy.*), ?archive(^stable.*)))" |\
tail -n 1 | sed -e 's#~bpo.*$##g'
</pre>
<p>The <tt>sed</tt> part is necessary because otherwise <tt>dpkg-genchanges</tt> will add a superfluous changelog entry (the last one of the last upload). To make things easier, I assign the name and version to variables. So this is the complete solution:</p>
<pre id="pbuilderrc_working" title="/etc/pbuilderrc.amd64.stable">
[..]
MYPACKAGE="`cat ${@: -1} | grep -e ^Source | awk -F\ '{ print $2 }'`"
MYBPOVERS="`aptitude versions -F '%p' --show-package-names=never --group-by=none --sort=version "?narrow(?source-package(^$MYPACKAGE\$), ?or(?archive(^wheezy.*), ?archive(^stable.*)))" | tail -n 1 | sed -e 's#~bpo.*$##g'`"
log "I: Package is $MYPACKAGE and last stable/bpo version is $MYBPOVERS"
DEBBUILDOPTS+=" -v$MYBPOVERS"
[..]
</pre>
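<p>The name-extraction part can be sanity-checked outside <tt>pbuilder</tt> with a throwaway <tt>.dsc</tt> fragment; the field values below are made up, and the <code>${@: -1}</code> trick is replaced by a direct file name here:</p>

```shell
# Minimal .dsc fragment - only the Source field matters for the extraction.
cat > /tmp/example.dsc <<'EOF'
Format: 3.0 (quilt)
Source: bluefish
Binary: bluefish
Version: 2.2.6-1
EOF
# Same grep/awk pipeline as in the pbuilder configuration:
MYPACKAGE="$(grep -e ^Source /tmp/example.dsc | awk '{ print $2 }')"
echo "$MYPACKAGE"
# → bluefish
```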
<h4>Examples</h4>
<p>I've recently built a new bluefish backport. The last backport version is <code>2.2.6-1~bpo70+1</code>. There is also the stable version <code>2.2.3-4</code>. So the version I need is <code>2.2.6-1</code> (<code>2.2.6-1~bpo70+1</code> &lt; <code>2.2.6-1</code>!!). Checking the log, it works:</p>
<pre>I: Package is bluefish and last stable/bpo version is 2.2.6-1</pre>
<p>A different example is <tt>rsync</tt>. I recently rebuilt it locally for a stable system (I wanted to make use of the <code>--chown</code> switch). There is not yet a backport. So the version I (would) need is <code>3.0.9-4</code>. Checking the logs again, it works too:</p>
<pre>I: Package is rsync and last stable/bpo version is 3.0.9-4</pre>
<p>Feedback appreciated ...</p>
<h4>Links</h4>
<ul>
<li><a href="https://aptitude.alioth.debian.org/doc/en/ch02s04s05.html">aptitude: Search term reference</a></li>
<li><a href="https://aptitude.alioth.debian.org/doc/en/ch02s05s01.html">aptitude: Customizing the package list</a></li>
</ul>
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com0tag:blogger.com,1999:blog-4131419324726672281.post-67125851202568701822015-02-11T17:07:00.002+01:002015-02-11T21:28:05.989+01:00Blogger RSS feed and category URLs with combined labels/tags <div lang="en">
<p>Run a blog on <a href="http://www.blogger.com">blogger.com</a>? Maybe made it bilingual? Maybe blog on different topics? Wondering what the URL of an RSS feed for e.g. two labels looks like? Asking how to see all articles matching two tags (labels)? Finding a keyword under one or more labels? Many things are possible. I'll show a few examples below. Maybe that is even interesting for the planet Debian folks. I happen to blog mostly in English about Debian topics. But sometimes I also want to post something in German only (e.g. about German tax software). It is <a href="https://wiki.debian.org/PlanetDebian">discouraged</a> to put the latter on planet-debian. Instead it can be published in the language-specific planet feed. So instead of adding new tags, one could easily combine two labels: the one for the language of the feed and the one for Debian-related posts (e.g. debian+english or debian+german). Therefore this post goes to the Debian planet.</p>
<h4>Search for combined labels/tags</h4>
<p>Say I want to view all postings related to the topics <samp>FOO</samp> <strong>and</strong> <samp>BAR</samp>. Then it is:</p>
<pre>
<samp>http://domain.tld/search/label/<var>FOO</var>+<var>BAR</var></samp> OR
<samp>http://domain.tld/search/?q=label:<var>FOO</var>+label:<var>BAR</var></samp>
</pre>
<p>Be aware that labels are case sensitive and that more labels can be added. The examples below will show all postings related to the topics <samp>debian</samp> <strong>and</strong> <samp>n54l</samp> <strong>and</strong> <samp>xbmc</samp>:</p>
<pre>
<a href="http://www.wgdd.de/search/label/debian+n54l+xbmc">http://www.wgdd.de/search/label/debian+n54l+xbmc</a>
<a href="http://www.wgdd.de/search/?q=label:debian+label:n54l+label:xbmc">http://www.wgdd.de/search/?q=label:debian+label:n54l+label:xbmc</a>
</pre>
<p>It is also possible to search for all posts related to the topics <samp>FOO</samp> <strong>or</strong> <samp>BAR</samp>:</p>
<pre>
<samp>http://domain.tld/search/?q=label:<var>FOO</var>|label:<var>BAR</var></samp>
</pre>
<p>Say for example, you want to see all postings related to the topics <samp>logitech</samp> <strong>or</strong> <samp>toshiba</samp>, then it is:</p>
<pre>
<a href="http://www.wgdd.de/search/?q=label:logitech|label:toshiba">http://www.wgdd.de/search/?q=label:logitech|label:toshiba</a>
</pre>
<h4>Feed URLs</h4>
<p>To get back to the first example, let's say the feed shall contain all posts related to the topics <samp>FOO</samp> <strong>and</strong> <samp>BAR</samp>. Then it is:</p>
<pre>
<samp>http://domain.tld/feeds/posts/default/-/<var>FOO</var>/<var>BAR</var>/</samp> OR
<samp>http://domain.tld/feeds/posts/default?q=label:<var>FOO</var>+label:<var>BAR</var></samp>
</pre>
<p>Respectively, to get a feed of all posts related to either of those topics, use:</p>
<pre>
<samp>http://domain.tld/feeds/posts/default/?q=label:<var>FOO</var>|label:<var>BAR</var></samp>
</pre>
<p>A feed for the example topics shown above would then be:</p>
<pre>
<a href="http://www.wgdd.de/feeds/posts/default/-/debian/n54l/xbmc/">http://www.wgdd.de/feeds/posts/default/-/debian/n54l/xbmc/</a>
<a href="http://www.wgdd.de/feeds/posts/default?q=label:debian+label:n54l+label:xbmc">http://www.wgdd.de/feeds/posts/default?q=label:debian+label:n54l+label:xbmc</a>
<a href="http://www.wgdd.de/feeds/posts/default?q=label:logitech|label:toshiba">http://www.wgdd.de/feeds/posts/default?q=label:logitech|label:toshiba</a>
</pre>
<p>Coming back to planet Debian, below is a solution for a multi-lingual planet contribution (if both planet feeds existed):</p>
<pre>
<a href="http://www.wgdd.de/feeds/posts/default?q=label:planet-debian+label:english">http://www.wgdd.de/feeds/posts/default?q=label:planet-debian+label:english</a>
<a href="http://www.wgdd.de/feeds/posts/default?q=label:planet-debian+label:german">http://www.wgdd.de/feeds/posts/default?q=label:planet-debian+label:german</a>
</pre>
<h4>Advanced ...</h4>
<p>Much more is possible. I'll just show two more examples. <strong>AND</strong> and <strong>OR</strong> can be combined ...</p>
<pre><a href="http://www.wgdd.de/feeds/posts/default?q=label:debian+(label:logitech|label:toshiba)">http://www.wgdd.de/feeds/posts/default?q=label:debian+(label:logitech|label:toshiba)</a></pre>
<p>... and a keyword search can be added too:</p>
<pre><a href="http://www.wgdd.de/feeds/posts/default?q=stick+(label:debian+(label:logitech|label:toshiba))">http://www.wgdd.de/feeds/posts/default?q=stick+(label:debian+(label:logitech|label:toshiba))</a></pre>
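<p>When using such URLs from the command line, note that <code>+</code>, <code>|</code> and the parentheses are shell metacharacters, so the URL needs quoting (<tt>curl</tt> below is just an example client, not something the feeds require):</p>

```shell
# Single quotes keep the label operators away from the shell.
url='http://www.wgdd.de/feeds/posts/default?q=label:debian+(label:logitech|label:toshiba)'
echo "$url"
# fetch it with e.g.: curl -s "$url"
```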
<h4>Links</h4>
<ul>
<li><a href="https://support.google.com/blogger/answer/97933">https://support.google.com/blogger/answer/97933</a></li>
<li><a href="https://support.google.com/blogger/answer/53336">https://support.google.com/blogger/answer/53336</a></li>
<li><a href="https://developers.google.com/blogger/docs/3.0/reference/posts#resource">https://developers.google.com/blogger/docs/3.0/reference/posts#resource</a></li>
<li><a href="http://www.mybloggertricks.com/2014/01/17-best-search-techniques-in-blogger.html">http://www.mybloggertricks.com/2014/01/17-best-search-techniques-in-blogger.html</a></li>
</ul>
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com7tag:blogger.com,1999:blog-4131419324726672281.post-59160087421165301552015-02-08T20:05:00.003+01:002015-04-06T14:09:21.464+02:00Correct keyboard layout (QWERTZ) with Logitech K400 attached to an Android HDMI stick<div lang="en">
<p>I've bought an HDMI stick with Android 4.4.2 to connect a TV to the local network (streaming, DLNA) and the internet (video portals etc.). The stick comes with no input device whatsoever. So I decided on a <a href="http://support.logitech.com/product/wireless-touch-keyboard-k400r">wireless Logitech K400r keyboard</a>. Unfortunately the keyboard layout used by Android seems to be American (<kbd>QWERTY</kbd>) although the language and location settings are set to German. So neither umlauts nor special characters come out correctly, nor do the media keys of the K400 work. But there is an easy way to make all this work. I've recently seen people suggest installing an app called <a href="https://play.google.com/store/search?q=External%20Keyboard%20Helper&c=apps">External Keyboard Helper</a> to fix things, which comes in a demo and a pro version, the latter paid. Fortunately I didn't need either for the Logitech keyboard <strong>plus</strong> I got all media keys of this keyboard to work in just three steps without any cost :)</p>
<h4>Step 1</h4>
<p>First I went to the Google Play Store and installed the <a href="https://play.google.com/store/apps/details?id=com.logitech.keyboard.look_ten">Logitech Keyboard Plus</a> app by Logitech Europe S.A. (See? No external stuff and without costs. Ignore <strong>anything</strong> that's written in its description about Bluetooth.)</p>
<a href="/20150207/1.png"><img src="/20150207/1.png" alt="App Logitech Keyboard Plus shown at Google Play Store" style="width: 100%;"/></a>
<h4>Step 2</h4>
<p>Then I opened the settings section and went to <code>Settings > Language and Input</code> (<code lang="de">Einstellungen > Sprache und Eingabe</code>). Under <code>KEYBOARD & INPUT METHOD</code> (<code lang="de">TASTATUR & EINGABEMETHODEN</code>) I activated the entry called <em>Logitech keyboard</em> (<em lang="de">Logitech Tastatur</em>) as shown below.</p>
<a href="/20150207/2.png"><img src="/20150207/2.png" alt="Activate the entry &quot;Logitech keyboard&quot; under Settings > Language and Input" style="width: 100%;"/></a>
<h4>Step 3</h4>
<p>Now a click on the field above that is called <code>Default</code> (<code lang="de">Standard</code>) and (here) defaults to <code lang="de">Deutsch - Android-Tastatur (AOSP)</code> ...</p>
<a href="/20150207/3.png"><img src="/20150207/3.png" alt="Click on the field &quot;Default&quot;" style="width: 100%;"/></a>
<p>... opened a popup called <code>Choose input method</code> and there I've chosen <em>Logitech keyboard</em> (<em lang="de">Logitech Tastatur</em>) as shown below and confirmed it.</p>
<a href="/20150207/4.png"><img src="/20150207/4.png" alt="Activate the Logitech keyboard on the popup called &quot;Choose input method&quot;" style="width: 100%;"/></a>
<p>The default now is the Logitech keyboard:</p>
<a href="/20150207/5.png"><img src="/20150207/5.png" alt="The default keyboard now is the Logitech keyboard" style="width: 100%;"/></a>
<h4>Finally ...</h4>
<p>... it's done. The keyboard now behaves as expected and also the media/function keys do work. <del>It is however not possible to adjust or configure anything, because this presumes a confirmed Bluetooth pairing between the keyboard and the android device and the K400 is not a Bluetooth device. It would probably be a good idea for Logitech to make this app usable for all available Logitech keyboards, even USB and Unifying(TM) ones.</del> <ins>Update: As of now it is possible to cancel the Bluetooth pairing attempt without leaving the application. So this app in fact can be used for USB and Unifying(TM)-based Logitech keyboards.</ins></p>
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com1tag:blogger.com,1999:blog-4131419324726672281.post-2766318761030698942014-12-12T11:47:00.001+01:002015-02-11T21:27:36.188+01:00Issues with Server4You vServer running Debian Stable (Wheezy)<div lang="en">
<p>I recently acquired a vServer hosted by Server4You and decided to install a Debian Wheezy image. Usually I boot such a device into backup mode first and install a fresh Debian copy over the provided image using <tt>debootstrap</tt>, to have a clean system. In this case I did not, and I came across a few glitches I want to talk about. So hopefully, if you are running the same system image, this saves you some time figuring out why the h*ll some things don't work as expected :)</p>
<h4>Cron jobs not running</h4>
<p>I installed <tt>unattended-upgrades</tt> and adjusted all configuration files to enable unattended upgrades. But I never received any mail about an update, although a look at the system showed updates waiting. I checked with</p>
<pre title="shell"># run-parts --list /etc/cron.daily</pre>
<p>and <tt>apt</tt> was not listed, although <tt>/etc/cron.daily/apt</tt> was there. After spending some time figuring out what was going on, I found the rather simple cause: several scripts were missing the executable bit and thus did not run. So it seems the image authors have, for whatever reason, tampered with the file permissions, and of course not by using <tt>dpkg-statoverride</tt> :( It was easy to fix the file permissions for everything below <tt>/etc/cron*</tt>, but it still leaves the very bad feeling that more files might have been tampered with! I'm not speaking about customized file contents; those are easy to find using <tt>debsums</tt>. I'm speaking about file permissions and ownership.</p>
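<p>Whether a directory still contains such broken scripts is easy to check. A minimal sketch (note that <tt>run-parts</tt> also skips files for other reasons, e.g. names containing dots):</p>

```shell
# list regular files directly below the cron directories that are
# missing all executable bits and are therefore skipped by run-parts
find /etc/cron.daily /etc/cron.weekly /etc/cron.monthly \
     -maxdepth 1 -type f ! -perm /111 2>/dev/null
# fixing a script is then just, e.g. (as root):
#   chmod 755 /etc/cron.daily/apt
```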
<p>There seems to be no easy way to check for changed permissions or ownership. The only solution I found is to get the list of all packages installed on the system, install them into a chroot environment and record all permission and ownership information from this fresh system. Then compare the file permissions/ownership of the installed system against this list. Not fun.</p>
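<p>One way to sketch such a comparison (the mount point <tt>/mnt/clean</tt> for the freshly bootstrapped copy is hypothetical):</p>

```shell
# print "mode owner group path" for every file below the given root,
# with paths relative to that root so the two lists are comparable
snapshot() {
    ( cd "$1" && find . -xdev -printf '%m %u %g %p\n' | sort -k4 )
}
# usage sketch (as root):
#   snapshot /          > live.list    # the installed system
#   snapshot /mnt/clean > clean.list   # chroot with the same package set
#   diff clean.list live.list          # differing permissions/ownership
```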
<h4>init from testing / upstart on hold</h4>
<p>Today I discovered that <tt>apt-get</tt> wanted to install the <tt>init</tt> package. Of course I was curious why unattended-upgrades hadn't already done so. It turns out <tt>init</tt> is only available in testing/unstable (and essential there). I purged it, but <tt>apt-get</tt> keeps bugging me to install this package. I really began to wonder what is going on here, because this is a plain stable system:</p>
<ul>
<li>no sources listed for backports, volatile, multimedia etc.</li>
<li>sources listed for testing and unstable</li>
<li>only packages from stable/stable-updates installed</li>
<li><code>APT::Default-Release "stable";</code> is set</li>
</ul>
<p>First I checked with aptitude:</p>
<pre title="shell"># aptitude why init
Unable to find a reason to install init.</pre>
<p>Ok, so why:</p>
<pre title="shell"># apt-get dist-upgrade -u
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
  init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]? </pre>
<p><del><strong>JFTR: I see a stable system bugging me to install <tt>systemd</tt> for no obvious reason. The issue might be similar! I'm still investigating.</strong></del> <ins>(not reproducible anymore)</ins></p>
<p>Now I tried to debug this:</p>
<pre title="shell"># apt-get -o Debug::pkgProblemResolver="true" dist-upgrade -u
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Starting
Starting 2
Investigating (0) <strong>upstart</strong> [ amd64 ] < 1.6.1-1 | 1.11-5 > ( admin )
Broken upstart:amd64 Conflicts on sysvinit [ amd64 ] < none -> 2.88dsf-41+deb7u1 | 2.88dsf-58 > ( admin )
Conflicts//Breaks against version 2.88dsf-58 for sysvinit but that is not InstVer, ignoring
Considering sysvinit:amd64 5102 as a solution to upstart:amd64 10102
Added sysvinit:amd64 to the remove list
Fixing upstart:amd64 via keep of sysvinit:amd64
Done
Done
The following NEW packages will be installed:
  init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]? </pre>
<p>Eh, <tt>upstart</tt>?</p>
<pre title="shell"># apt-cache policy upstart
upstart:
  Installed: 1.6.1-1
  Candidate: 1.6.1-1
  Version table:
     1.11-5 0
        500 http://ftp.de.debian.org/debian/ testing/main amd64 Packages
        500 http://ftp.de.debian.org/debian/ sid/main amd64 Packages
 *** 1.6.1-1 0
        990 http://ftp.de.debian.org/debian/ stable/main amd64 Packages
        100 /var/lib/dpkg/status</pre>
<pre title="shell"># dpkg -l upstart
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                          Version             Architecture        Description
+++-=============================-===================-===================-===============================================================
<strong>h</strong>i  upstart                       1.6.1-1             amd64               event-based init daemon</pre>
<p>OK, at least one package is on hold. This is another questionable customization, but in this case an easy one to fix. <strong>But I still don't understand the behaviour of <tt>apt-get</tt> here and how it differs from that of <tt>aptitude</tt>.</strong> Can someone please enlighten me?</p>
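<p>For reference, a short sketch of how such a hold can be inspected and cleared (clearing it needs root):</p>

```shell
# list every package whose dpkg selection state is "hold"
dpkg --get-selections | awk '$2 == "hold" { print $1 }'
# clear the hold on upstart (as root):
#   echo "upstart install" | dpkg --set-selections
#   apt-mark unhold upstart    # alternative with a recent enough apt
```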
<h4>Customized files</h4>
<p>This isn't really an issue, but just for completeness: several files have been customized. <tt>debsums</tt> easily shows which ones:</p>
<pre title="shell"># debsums -ac
<em>I don't have the original list anymore - please check yourself</em></pre>
</div>Daniel Leiderthttp://www.blogger.com/profile/17052464961644858181noreply@blogger.com1