Autarchy of the Private Cave

Tiny bits of bioinformatics, [web-]programming etc


    Archive for the 'Artificial Intelligence' Category

    Kite AI coding assistant is saying farewell

    28th December 2022

    I’m looking at AI/ML-powered coding assistants (such as mutable.ai, GitHub’s Copilot, Tabnine, and even the Alibaba AI assistant – though there everything was in Chinese, so I didn’t proceed with it at all), and found – with sadness – that Kite, one of the longer-existing solutions (around since 2014!), has gone out of business…

    Here is Kite’s farewell for you to read.

    Kite did open-source many parts of their technology/software stack, though I didn’t check how comprehensive those parts are, or whether they are anywhere near enough to fork and continue their work.
    I wonder if an open-source project focusing on ML-based code completion for e.g. Python already exists – let me know in the comments if you know of one!


    Posted in Machine learning, Programming, Software, Technologies | No Comments »

    Information criteria for choosing best predictive models

    29th May 2012

    Usually I use 10-fold (non-stratified) CV to measure the predictive power of the models: it gives consistent results and is easy to perform (at least on smaller datasets).
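
    Purely as an illustration (not from the original post), a minimal Python sketch of this kind of evaluation with scikit-learn might look as follows; the dataset and the estimator are placeholders:

        # Minimal sketch of 10-fold (non-stratified) cross-validation with scikit-learn.
        # The dataset and estimator are placeholders, not the models discussed in this post.
        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import KFold, cross_val_score

        X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

        cv = KFold(n_splits=10, shuffle=False)  # plain 10-fold split, no shuffling
        scores = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2")

        print("per-fold R^2:", np.round(scores, 3))
        print("mean R^2:", round(scores.mean(), 3))

    Plain KFold (rather than StratifiedKFold) corresponds to the non-stratified setup described above.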

    Just came across Akaike’s Information Criterion (AIC) and the Schwarz Bayesian Information Criterion (BIC). Citing robjhyndman:

    Asymptotically, minimizing the AIC is equivalent to minimizing the CV value. This is true for any model (Stone 1977), not just linear models. It is this property that makes the AIC so useful in model selection when the purpose is prediction.

    Because of the heavier penalty, the model chosen by BIC is either the same as that chosen by AIC, or one with fewer terms. Asymptotically, for linear models minimizing BIC is equivalent to leave-v-out cross-validation when v = n[1 − 1/(log(n) − 1)] (Shao 1997).

    I want to try AIC, and maybe BIC, on my models. Conveniently, both functions already exist in R.
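
    Purely as an illustration (my own sketch in Python, not the R functions mentioned above), here is how AIC and BIC can be computed from the Gaussian log-likelihood of a least-squares fit and used to compare candidate models; the data and the candidate polynomial models are made up:

        # Sketch: compare candidate models by AIC/BIC, computed from the Gaussian
        # log-likelihood of a least-squares fit. Data and candidate models are made up.
        # Parameter-counting conventions differ between implementations; only the
        # differences between models under one convention matter for selection.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 100)
        y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(scale=0.1, size=x.size)

        def aic_bic(y, y_hat, n_params):
            n = y.size
            rss = np.sum((y - y_hat) ** 2)
            # Maximized log-likelihood of a Gaussian model fitted by least squares
            loglik = -0.5 * n * (np.log(2 * np.pi) + np.log(rss / n) + 1)
            k = n_params + 1  # +1 for the estimated error variance
            return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

        for degree in (1, 2, 3, 5):
            coefs = np.polyfit(x, y, degree)
            y_hat = np.polyval(coefs, x)
            aic, bic = aic_bic(y, y_hat, n_params=degree + 1)
            print(f"degree {degree}: AIC = {aic:7.1f}   BIC = {bic:7.1f}")

    Note that BIC’s complexity penalty is heavier than AIC’s whenever ln(n) > 2, i.e. for more than about 7 observations.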


    Posted in Bioinformatics, Machine learning | No Comments »

    Everything old is new again: a nice summary of real-world/digital integration approaches

    9th May 2010

    Just found a really nice, “almost interactive” TED talk about digital/real-world interfaces. The ideas aren’t new – they have been around for quite a while, as exemplified both by sci-fi movies and several digital-implant enthusiasts – but this time they come with a seemingly tested implementation, which is – WOW! – both cheap and working. Moreover, the ideas are taken here to the level of example applications with functional prototypes – which gives hope that at least some of them will be market-ready within 5-10 years :)

    Posted in Links, Software, Technologies | No Comments »

    ocrodjvu: increase accessibility of your DJVU books

    5th November 2009

    ocrodjvu = OCRopus (tesseract) + DJVU

    It is a small command-line tool to easily convert your image-only DJVU files into image+text DJVU files. In Debian testing, there are language packages for (in no specific order) German, English, French, Spanish, Vietnamese, Brazilian Portuguese, Dutch, and Italian. The original tesseract-ocr software includes training data & code, so it should be (at least in theory) easy to add more recognition languages.


    Posted in Links, Software, Technologies | No Comments »

    i-SOBOT (Omnibot-2007) by TOMY and SANYO

    3rd September 2007

    This isn’t new, but it looks like a milestone to me (yeah, I know about Sony and their robots, but that was earlier, and different).

    i-SOBOT CAMVersion

    i-SOBOT is a tiny 165-mm robot with 17 servo motors. i-SOBOT can be programmed and controlled remotely (using the LCD-equipped remote control), and understands up to 10 voice commands. The robot will be available in Japan and the US in October 2007.

    According to the Guinness Book of Records, i-SOBOT is the smallest robot in the world.

    Posted in Artificial Intelligence, Robotics | No Comments »

    Practical Artificial Intelligence (AI)

    25th June 2007

    Googling for “practical artificial intelligence” gives only two (somewhat) relevant links:

    It looks like it isn’t widely acknowledged that AI is, in fact, quite widely used – though primarily in OCR, TTS, STT :), and NLP (including machine translation).


    Posted in Artificial Intelligence, Links, Programming, Science | 5 Comments »

    On the use of Artificial Neural Networks for AI

    28th May 2007

    I came across a list of postulates (link removed – the content has disappeared) which define the space for creating strong artificial intelligence. One of the postulates – that AI can be implemented only using ANNs – does not appear to be proven well enough to count as a real requirement.

    Consciousness is not necessarily a derivative of complexity; it may rather be a derivative of the world model and of the subject’s placement within that model, which is what causes consciousness to arise. (In other words, consciousness equals the subject’s ability to place itself within a constantly self-revalidating model of the environment.) Thus, the requirement to use ANNs is not convincing: an appropriate world model can be created without ANNs. I would even say that ANNs are just a kind of “black box” we use to sidestep the very obvious complexity, which can nevertheless be solved purely algorithmically, with no extra overhead from ANN-like simulators and wrappers.

    There appears to be a specific double dichotomy between the ANN and algorithmic approaches to AI development: the ANN approach considers the brain to be a collection of individual neurons (or “perceptrons”, in this case), while the algorithmic approach considers the brain to be a collection of “modules”, each performing some fairly narrow function. At the same time, we are told that algorithmic approaches cannot foresee unforeseen circumstances, and thus ANNs are better for AI development (that’s the second dichotomy). However, modern “intelligent” software (by which I mean, above all, software implementing cognitive functions) quite successfully uses “learning algorithms”, “pattern-matching algorithms”, “inference algorithms”, “prediction algorithms” and many other “algorithms”. At the same time, I’m unaware of any successful (or at least impressive) software tool built using ANNs.

    (Well, unwinding the above paragraph may lead to an apparent contradiction: ANN implementations are themselves algorithms. However, I would make a clear distinction here: I consider an ANN to be a rather generic simulator of inter-neuronal interactions and signal-response circuits; the same type of ANN could be applied to several different tasks (well, different instances of the same-type ANN). But if a generic ANN is trained for a specific task, then optimized for that task only, and then perhaps also simplified and extended with fixed-value tables to avoid recalculating static relations – then it is no longer an ANN but an algorithm: being task-specific, it fits the definition of an algorithm much better than the definition of an ANN.)
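
    To make that distinction concrete, here is a toy Python sketch (entirely my own illustration, not part of the original argument): a tiny “trained” network over a small, fixed input domain is replaced by a precomputed lookup table, so the artifact actually deployed is just a task-specific algorithm.

        # Toy illustration: a tiny "trained" network over a fixed, finite input domain
        # is distilled into a precomputed lookup table; at run time nothing ANN-like
        # remains, only a task-specific algorithm (here, a table lookup).
        import numpy as np

        rng = np.random.default_rng(1)
        W1, b1 = rng.normal(size=(2, 4)), rng.normal(size=4)  # stand-in "trained" weights
        W2, b2 = rng.normal(size=4), rng.normal()

        def ann(x):
            """Generic simulator: evaluates the network for any 2-element input."""
            h = np.tanh(np.asarray(x, dtype=float) @ W1 + b1)
            return float(h @ W2 + b2)

        # Task-specific step: precompute every static input/output relation once.
        DOMAIN = [(a, b) for a in (0, 1) for b in (0, 1)]
        LOOKUP = {x: ann(x) for x in DOMAIN}

        def distilled(x):
            """No network evaluation at run time, just a fixed-value table."""
            return LOOKUP[x]

        for x in DOMAIN:
            assert distilled(x) == ann(x)
        print(LOOKUP)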

    I was given a reference to Daniel Dennett by Bernardo Kastrup, the author of the “postulates”. Daniel Dennett is, like me, a proponent of algorithmic approaches to AI. However, I haven’t read any of his works yet. As soon as I do, I’ll add more to the ANN vs. algorithms topic.


    Posted in Artificial Intelligence | No Comments »