Autarchy of the Private Cave

Tiny bits of bioinformatics, [web-]programming etc

    Archive for February, 2016

    How to: export only notes to PDF from LibreOffice Impress 5

    28th February 2016

    If you want to export Notes to a PDF from LibreOffice Impress 5
    and dutifully set the appropriate checkbox in the PDF export dialog,
    you will get everything twice: first all the slides, as with a usual PDF export, and then all the Notes pages.

    There is an easy way to get a Notes-only PDF without editing the exported file afterwards.

    If you have a PDF printer installed (which is the case on most Linux distributions and on Windows 10), just do File -> Print from Impress,
    then under the Print sub-header choose Notes from the Document drop-down.
    Make sure to set the proper paper format for the PDF printer (A4 in my case).
    Then print, and save the resulting PDF.
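
    If you have already exported the doubled PDF (slides followed by Notes pages) and don’t feel like re-printing, you can also just cut off the first half afterwards. Here is a rough sketch using pdfinfo (from poppler-utils) and pdftk; the file names are made up, and it assumes the export really is all the slides followed by an equal number of Notes pages:

    # count the pages of the doubled export (example file name)
    total=$(pdfinfo slides_with_notes.pdf | awk '/^Pages:/ {print $2}')
    # the Notes pages are the second half
    first_notes_page=$(( total / 2 + 1 ))
    # keep only the Notes pages
    pdftk slides_with_notes.pdf cat ${first_notes_page}-end output notes_only.pdf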

    Posted in how-to, Notepad, Software

    Preprint servers and open journals

    28th February 2016

    Let’s start with some definitions.

    With Open Journals I’m referring to open/public peer-review journals.
    With preprint servers, I’m referring to services which let you publish your manuscript with a DOI, to gauge pre-submission interest and collect feedback.

    I am aware of the following public peer-review journals:

    • F1000 Research: your submission is made public without any editorial pre-screening within an average of 7 days, but it only gets indexed in PubMed/Scopus/Scholar after a successful public peer review. Public means that a reviewer-signed evaluation appears together with the submitted manuscript. Authors may respond to criticism and upload revisions of their submission. I believe a submission passes peer review after two positive reviews. Note that even your initial submission receives a DOI and is thus citable (as are all subsequent revisions). A brief examination of articles in some of the topics tells me that F1000 Research is a good place to publish, especially because it is a kind of preprint server + journal in one package. You pay per submission; there are 3 tiers by word count.
    • The Winnower: submit-review-revise, but here you pay for the DOI after your submission has been reviewed. Before review your submission is thus not citable (except by URL, which isn’t tracked as easily as DOI references). I haven’t formed an opinion on how attractive The Winnower is for submitting, but I did find this quite interesting story for you to enjoy :)
    • Science Open: this project encompasses 5 mostly medical journals. It lists over 11 million articles on the front page, but those are sourced from other publications; Science Open itself seems to have several hundred publications across all 5 journals. Submissions get a DOI, then can undergo public review. It is not clear to me in which direction Science Open will be moving – towards becoming an excellent research-paper aggregator, towards becoming a publishing platform, or – as it does now – towards both.

    I’m also aware of several preprint servers.

    Posted in Science

    How to use mkfifo named pipes with prinseq-lite.pl

    24th February 2016

    prinseq-lite.pl is a utility written in Perl for preprocessing NGS reads, including reads in FASTQ format.
    It can read sequences both from files and from stdin (the latter only if you have a single input file).

    I wanted to use it with compressed (gzip or bzip2) FASTQ input files.
    As I do not need to store decompressed input files, the most efficient solution is to use pipes.
    This works well for a single file, but not for 2 files (paired-end reads).
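
    For a single compressed file, an ordinary anonymous pipe is enough – something along these lines (the file and output names are examples; if I remember the prinseq documentation correctly, stdin can be given as the file name for streamed input):

    # decompress on the fly and stream into prinseq-lite.pl
    zcat R1.fastq.gz | perl prinseq-lite.pl -fastq stdin -out_good filtered -out_bad null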

    For 2 files, named pipes (also known as FIFOs) can be used.
    You can create a named pipe in Linux with the help of the mkfifo command, for example mkfifo R1_decompressed.fastq.
    To use it, start decompressing something into it (either in a different terminal, or in the background), for example zcat R1.fastq.gz > R1_decompressed.fastq &;
    we can call this a writing/generating process, because it writes into the pipe.
    (If you are writing software that uses named pipes, any processes writing into them should be started in a new thread, as they will block until all the data is consumed.)
    Now if you give R1_decompressed.fastq as a file argument to some other program, it will see the decompressed content (e.g. wc -l R1_decompressed.fastq will tell you the number of lines in the decompressed file); we can call the program reading from the named pipe a reading/consuming process.
    As soon as the consuming process has read all of the data, the writing/generating process will finally exit.
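
    Putting the pieces together for paired-end reads, the idea looks roughly like the sketch below. Take it as a sketch only: the file names are examples, -fastq/-fastq2/-out_good/-out_bad are the usual prinseq-lite.pl options, and, as explained next, this naive approach is exactly what fails with prinseq-lite.pl itself.

    # one named pipe per mate (example file names)
    mkfifo R1_decompressed.fastq R2_decompressed.fastq

    # writing/generating processes, started in the background;
    # each blocks until a reader consumes its output
    zcat R1.fastq.gz > R1_decompressed.fastq &
    zcat R2.fastq.gz > R2_decompressed.fastq &

    # the reading/consuming process sees plain decompressed FASTQ
    perl prinseq-lite.pl -fastq R1_decompressed.fastq -fastq2 R2_decompressed.fastq \
        -out_good filtered -out_bad null

    # remove the pipes afterwards
    rm R1_decompressed.fastq R2_decompressed.fastq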

    This, however, does not work with prinseq-lite.pl (version 0.20.4 or earlier): it fails with a broken pipe error.

    Posted in *nix, Bioinformatics, Software

    ZFS is the FS for Containers in Ubuntu 16.04 and how it compares to btrfs

    20th February 2016

    Recently on Hacker News the following was posted: ZFS is the FS for Containers in Ubuntu 16.04.

    I must admit the 16.04 demo does look very pleasant to work with.

    However, bringing ZFS into Linux reminded me of a fairly recent comparison of ZFS and btrfs that I had to do when building my home NAS.
    At that time, a few months ago, I arrived at the following conclusions (among others):

    • ZFS on FreeBSD is reliable, though a memory hog;
    • on Debian, OpenMediaVault seems to be a good NAS web-management interface;
    • on FreeBSD, FreeNAS is good (there is also NAS4Free, a fork of an older FreeNAS version, but I haven’t looked into it deeply enough);
    • running ZFS on Linux (even as a kernel module) is the least efficient option, at least partially because the kernel’s page cache and ZFS’s ARC are two separate caches;
    • although btrfs offers features very similar to ZFS’s, as of a few months ago OpenMediaVault did not offer btrfs volume support from the web interface.

    In the end, I’ve decided to go with FreeNAS, and it seems to work well so far.

    But has anything changed in the btrfs vs ZFS-on-Linux field?

    Posted in *nix, Software

    How to: convert your VPS root filesystem to btrfs (using rescue boot)

    15th February 2016

    I’m moving from (a kind of…) a dedicated server to a VPS, to stop anxiously anticipating hardware failures.
    Honestly though, that server has been freezing up and restarting spontaneously for several months now, sometimes causing really long downtimes…
    The server is about 6-7 years old, built from off-the-shelf components, some of which (the HDD :) ) made weird noises from the very start.
    Definitely time to move!

    I’ve purchased a fairly cheap VPS with an easy, one-click upgrade option for after I’m done configuring it.
    It comes with a wide selection of OSes to pre-install; I’ve chosen Debian Jessie, version 8.3 as of this writing.

    I wanted to use btrfs from the beginning, so I could have installed Debian myself, but… the VPS provider does some initial configuration (like their Debian mirror and some other things), so I felt that converting to btrfs after the fact would be easier. Now that I’ve done it, I can say it was fairly easy, although the preparation did take some time.

    Below, I’m providing step-by-step instructions on how to convert your root filesystem from (most likely) ext4 to btrfs.
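
    In outline, the conversion from a rescue system looks roughly like this. It is only a sketch, not the full step-by-step guide: the device name /dev/vda1 is just an example, and you definitely want a backup before trying anything like it.

    # boot the VPS into the provider's rescue system, with the root partition unmounted
    fsck.ext4 -f /dev/vda1        # the ext4 filesystem must be clean before conversion
    btrfs-convert /dev/vda1       # in-place conversion; keeps an ext2_saved rollback image

    # mount the converted filesystem and adjust the configuration
    mount /dev/vda1 /mnt
    blkid /dev/vda1               # check which UUID to reference in /etc/fstab
    # edit /mnt/etc/fstab: set the filesystem type to btrfs (and update the UUID if it changed)

    # regenerate the bootloader configuration from a chroot
    for d in dev proc sys; do mount --bind /$d /mnt/$d; done
    chroot /mnt update-grub

    # later, once the VPS boots fine from btrfs, the rollback image can be deleted:
    # btrfs subvolume delete /ext2_saved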

    Posted in *nix, how-to

    How to fix: mod_proxy’s ProxyPass directive does not work

    10th February 2016

    So… You have finally built a nice LXC container for your web-facing application, and even configured Apache (Debian package version 2.4.18-1 in my case) to serve some static/web-only components.
    From your client-side JavaScript UI you talk (in JSON) to the API, which is implemented as a separate node.js/Python/etc server – say, on port 8000 in the same LXC container.

    The simplest way to forward requests from the web front-end to your API is to use mod_proxy.
    If you want to forward any requests to /api/* to your custom back-end server on port 8000, just add the following lines to your VirtualHost configuration:

    ProxyPass "/api" "http://localhost:8000"
    ProxyPassReverse "/api" "http://localhost:8000"
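
    These two directives assume that mod_proxy and mod_proxy_http are actually enabled. On Debian you can make sure of that, validate the configuration and reload Apache roughly like this (a quick sanity check of mine, not necessarily the fix this post is heading towards):

    # enable mod_proxy and its HTTP handler, then validate and reload
    sudo a2enmod proxy proxy_http
    sudo apache2ctl configtest
    sudo systemctl reload apache2

    # check that /api requests actually reach the back-end on port 8000
    curl -i http://localhost/api/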

    I’d suggest not wrapping this fragment with the classical IfModule: as your application will not really work without its API back-end, you actually want Apache to fail as soon as possible if mod_proxy is missing.

    That was easy, right? What, it doesn’t work? Can’t be! It’s dead simple! No way you could make a mistake in 2 lines of configuration!!! :mad_rage: :)

    Oh wait… I remember I had this problem before…

    Posted in *nix, how-to, Web