Autarchy of the Private Cave

Tiny bits of bioinformatics, [web-]programming etc

    Archive for the 'Software' Category

    How to cite PHYLIP

    10th January 2014

    The official PHYLIP FAQ does suggest a few ways to cite the software, but I believe the best citation is the one mentioned in the Wikipedia PHYLIP article: the PubMed reference with PMID 7288891. This PubMed citation seems the best because

    • it does mention the software tool implementing the maximum likelihood approach,
    • it is likely the earliest mention of the PHYLIP software (which has been distributed since around 1980),
    • it refers to a journal indexed by PubMed, and
    • according to Google Scholar, it has already been cited over 6660 times :)

    Posted in Links, Science, Software | No Comments »

    Brief comparison: Dropbox vs BitTorrent Sync vs AeroFS vs SparkleShare

    24th November 2013

    Right now I’m mostly using Dropbox, and I have recently started using BitTorrent Sync to sync my music collection between all my PCs and my backup server, as well as to share larger files at work (thanks to direct LAN connections, this is much faster with BTSync than with Dropbox, which has to upload the file to Dropbox’s servers first). I’m also considering syncing a TrueCrypt container with my photo archive via BTSync. SparkleShare is potentially interesting, but given my trend of moving to free code-hosting services, I do not yet see a need for it.

    Below is a short summary table I’ve used to compare the available solutions. Feel free to contribute to the table in the comments – I’ll update the post accordingly.

    Read the rest of this entry »


    Posted in Links, Software | No Comments »

    Alternatives to GNU make

    19th October 2013

    Right now, whenever I notice that I often have to repeat/retype some sequence of commands, I try to wrap it up in some kind of script, each time choosing the most appropriate language: shell when I mostly need to launch lots of existing command-line tools, Python when some data handling and processing is involved, and R when I’m invoking commands from R packages. So far I have been avoiding the fairly popular makefile-based approach to automating pipelines and workflows which rely heavily on existing tools. However, being curious, I’ve compiled a short list of modern make-like alternatives, to possibly explore… sometime later…

    • First comes make itself – the oldest and most widely used software build tool. Stable and powerful. Still, even people who are used to make have some gripes about it; the most detailed list of gripes is probably here.
    • SCons is a build tool written in Python. I guess I like that “configuration files are Python scripts” – knowing Python may be enough to use SCons, which would make it a better choice than make for me. SCons seems to have gained some support (scroll down for the comments/discussion). There were some doubts about SCons performance (1, 2, and 3); I’m not sure where SCons stands right now in that regard.
    • waf is a Python-based framework for configuring, compiling and installing applications.
    • pyDoIt is a Python automation tool. It seems to use plain Python syntax, and it aims to bring the power of build tools to the execution of any kind of task, where a task describes some computation to be done (actions) plus some extra metadata. Based on the description alone, I’m quite intrigued! I wonder whether anyone has already worked with pyDoIt and can share their experience? (A minimal sketch follows this list.)
    • Rake – Ruby make – is a simple build program with capabilities similar to those of make. I had seen a lot of positive feedback about this one, mostly regarding its simplicity of use. Still, [py]DoIt so far looks more attractive to me personally.
    • Ruffus is a lightweight Python module for running computational pipelines. Sounds like good competition for [py]DoIt!
    • Anduril is an open-source, component-based workflow framework for scientific data analysis. Sounds promising, though the latest downloadable version is over 400 MB… It probably already contains a bunch of binaries and maybe even data and complete workflows for data analysis. Probably worth a look, but it may turn out to be a little heavyweight for simple pipelining.
    • snakemake is a scalable bioinformatics workflow engine. I get the feeling that Python is truly dominating the pipelines/workflows world: snakemake, as even the name suggests, is written in Python, too. The front-page example is so simple and clear that snakemake immediately pushes DoIt down from the 1st place! Awesome.
    • Paver is yet another Python-based software project scripting tool along the lines of Make or Rake, designed to help with repetitive tasks while offering the convenience of Python’s syntax. Sounds similar to DoIt; I have no idea how the two actually compare.
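
    Since I have not actually used pyDoIt yet, here is only a minimal sketch of what a dodo.py task file might look like, pieced together from the project’s description; the file names, scripts and commands below are hypothetical. Running doit in the same directory should execute the tasks, re-running them only when their file dependencies change.

    # dodo.py -- minimal, hypothetical pyDoIt example (file names are made up)

    def task_filter():
        """Filter a raw table with an (assumed) existing filter.py script."""
        return {
            'actions': ['python filter.py raw.tsv > filtered.tsv'],
            'file_dep': ['raw.tsv', 'filter.py'],
            'targets': ['filtered.tsv'],
        }

    def task_plot():
        """Plot the filtered table with an (assumed) existing plot.R script."""
        return {
            'actions': ['Rscript plot.R filtered.tsv plot.pdf'],
            'file_dep': ['filtered.tsv', 'plot.R'],
            'targets': ['plot.pdf'],
        }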

    That is it for now.

    What were your experiences with automating repetitive tasks and building simple pipelines?


    Posted in *nix, Notepad, Programming, Software | No Comments »

    GUIs for R

    17th October 2013

    I have briefly tried Cantor (which also supports Octave and KAlgebra as backends), RKWard, Deducer/JGR, R Commander, and RStudio.

    My personal choice was RStudio: it is good-looking, intuitive and easy to use, while still powerful.

    The next step would be using some R equivalent of the excellent IPython Mathematica-like Notebook web interface…


    Posted in *nix, Notepad, Programming, Science, Software | No Comments »

    The favourite file compressor: gzip, bzip2, or 7z?

    17th October 2013

    Here comes a heap of assorted web-links!

    I had personally settled on using pbzip2 for these simple reasons:

    • performance scales quasi-linearly with the number of CPU cores (until one hits an I/O bottleneck);
    • when an archive is damaged, you are only guaranteed to lose the damaged block(s) of 100-900 KiB – the remaining information may be salvageable.

    Compared to pbzip2, neither gzip nor 7z (LZMA) offers quasi-linear speedups proportional to the number of CPU cores.
    pigz, the parallel gzip, does parallelize compression, but gzip does not compress as well as bzip2, and its decompression is not parallel the way pbzip2’s is.
    7z is multi-threaded, but it tops out at about 2 CPU cores (see the links below for tests).

    pbzip2 is also quite a good choice for FASTQ data files: even if a few blocks get lost due to data corruption, this should not noticeably affect the entire dataset.
    Specialized tools for FASTQ compression do exist (see e.g. this article, as well as the Fastqz, fqzcomp, and samcomp project pages). I think I liked fastqz quite a bit, but I still have to examine data recoverability in the case of archive damage. It is possible to use external parity tools which support file repair using pre-calculated recovery files – like the Linux par2 utility – with bzip2 archives and any other files in general, but adding a parity file may negate the benefit of a higher compression ratio. Also, if the archive has no independent block structure, an insufficient parity file may lead to the loss of the entire archive. In other words, this still has to be tested.
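
    To make this concrete, below is a minimal sketch (untested; the file name is hypothetical) of how pbzip2 and par2 could be combined from a Python script: compress a FASTQ file using all CPU cores, then create a roughly 10% recovery set so that limited archive damage can be repaired later.

    #!/usr/bin/env python
    # Sketch: parallel bzip2 compression of a (hypothetical) FASTQ file,
    # followed by par2 recovery-file creation. Requires pbzip2 and par2 to be installed.
    import multiprocessing
    import subprocess

    fastq = 'reads.fastq'  # hypothetical input file
    cores = multiprocessing.cpu_count()

    # -k keeps the input file, -9 selects 900 KiB blocks, -p sets the number of cores.
    subprocess.check_call(['pbzip2', '-k', '-9', '-p%d' % cores, fastq])

    # Create ~10% redundancy recovery files for the resulting .bz2 archive.
    subprocess.check_call(['par2', 'create', '-r10', fastq + '.bz2.par2', fastq + '.bz2'])

    # Later, `par2 verify reads.fastq.bz2.par2` checks the archive,
    # and `par2 repair reads.fastq.bz2.par2` attempts to fix limited damage.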

    Now the long-promised web-links!
    Read the rest of this entry »


    Posted in *nix, Links, Notepad, Software | 1 Comment »

    Debian: how to whitelist IP addresses in tumgrey-SPF

    7th August 2013

    SPF is nice for protecting your mail server from spam, but sometimes there is a need to bypass SPF checking – for example, if you rely on third-party servers to do spam protection for you :)

    Current setup:

    • MX records point to the spam protection mail servers, which then
    • connect to my server and deliver (hopefully spam-free) mail.

    Problem: some senders (like last.fm) do have proper, strict SPF records. Tumgreyspf on my server then rejects emails relayed through the spam-protection service.

    If these spam-protection relay servers are the only ones that send mail to your server, then it makes sense to fully disable/uninstall tumgreyspf. Putting tumgreyspf into a permanent “learning mode” (set defaultSeedOnly = 1 in /etc/tumgreyspf/tumgreyspf.conf) may not fix the SPF problem described above, as SeedOnly seems to affect only greylisting, not the rejection of unauthorized senders.

    Solution: whitelist relay server IPs.
    Read the rest of this entry »


    Posted in *nix, how-to, Software | No Comments »

    MultiParanoid vs. QuickParanoid: pro et contra for each

    9th July 2013

    MultiParanoid

    Here we present a new proteome-scale analysis program called MultiParanoid that can automatically find orthology relationships between proteins in multiple proteomes. The software is an extension of the InParanoid program that identifies orthologs and inparalogs in pairwise proteome comparisons. MultiParanoid applies a clustering algorithm to merge multiple pairwise ortholog groups from InParanoid into multi-species ortholog groups.

    QuickParanoid

    QuickParanoid is a suite of programs for automatic ortholog clustering and analysis. It takes as input a collection of files produced by InParanoid and finds ortholog clusters among multiple species. For a given dataset, QuickParanoid first preprocesses each InParanoid output file and then computes ortholog clusters. It also provides a couple of programs qa1 and qa2 for analyzing the result of ortholog clustering.

    So… both use InParanoid… Are there any differences? Let me list the ones I’ve found.

    Read the rest of this entry »


    Posted in *nix, Bioinformatics, Software | 2 Comments »