I came across a list of postulates (link removed; the content has since disappeared) that define the space for creating strong artificial intelligence. One of the postulates, which states that AI can be implemented only using ANNs (artificial neural networks), does not appear to be proven well enough to count as a real requirement.
Consciousness is not necessarily a derivative of complexity; it may instead be a derivative of a world model and of the subject's placement within that model, which is what causes consciousness to arise. (In other words, consciousness equals the subject's ability to place itself within a constantly re-validated model of the environment.) Thus the requirement to use ANNs is not convincing: the appropriate world model can be created without them. I would even say that ANNs are just a kind of "black box" by which we try to avoid confronting a complexity that is plainly visible, and which could nevertheless be handled purely algorithmically, without the extra overhead of ANN-like simulators and wrappers.
There appears to be a specific double dichotomy between the ANN and algorithmic approaches to AI development. The ANN approach treats the brain as a collection of individual neurons (or "perceptrons", in this case), while the algorithmic approach treats the brain as a collection of "modules", each performing some fairly narrow function. At the same time, we are told that algorithmic approaches cannot handle unforeseen circumstances, and that ANNs are therefore better suited for AI development (that is the second dichotomy). However, modern "intelligent" software (I mean, first of all, software implementing cognitive functions) quite successfully uses "learning algorithms", "pattern-matching algorithms", "inference algorithms", "prediction algorithms" and many other "algorithms". Meanwhile, I am unaware of any comparably successful (or at least impressive) software tool built using ANNs.
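To make the point concrete, here is a toy example of my own (not from the text above): a "learning algorithm" that involves nothing neural at all, namely 1-nearest-neighbour classification. It "learns" by storing labelled examples and "infers" by comparing distances, yet it adapts to whatever data it is given.

```python
def euclidean(a, b):
    # Plain Euclidean distance between two points.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class NearestNeighbour:
    """A purely algorithmic learner: no neurons, no weights to train."""

    def __init__(self):
        self.examples = []  # (point, label) pairs seen so far

    def learn(self, point, label):
        # "Learning" is just remembering a labelled observation.
        self.examples.append((point, label))

    def predict(self, point):
        # "Inference" is picking the label of the closest stored example.
        _, label = min(self.examples,
                       key=lambda ex: euclidean(ex[0], point))
        return label

model = NearestNeighbour()
model.learn((0.0, 0.0), "cold")
model.learn((10.0, 10.0), "hot")
print(model.predict((1.0, 2.0)))  # -> cold
print(model.predict((9.0, 8.0)))  # -> hot
```

The behaviour is fully inspectable at every step, which is exactly the property the "black box" of an ANN gives up.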
(Well, unwinding the above paragraph may lead to a controversy: ANN implementations are themselves algorithms. However, I would draw a clear distinction here. I consider an ANN to be a fairly generic simulator of inter-neuronal interactions and signal-response circuits; the same type of ANN can be applied to several different tasks (more precisely, different instances of the same type of ANN). But if a generic ANN is trained for a specific task, then optimized for that task alone, and perhaps also simplified and extended with fixed-value lookup tables to avoid recalculating static relations, the result is no longer an ANN but an algorithm: being task-specific, it fits the definition of an algorithm much better than the definition of an ANN.)
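The distinction above can be sketched in a few lines of toy code (my own illustration, under the assumption of a finite input domain): a tiny "trained" perceptron is frozen into a fixed-value lookup table, at which point nothing neural remains, only a task-specific algorithm.

```python
def perceptron(weights, bias, inputs):
    # Generic signal-response unit: weighted sum plus threshold.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

# Assume training has already produced weights realising logical AND.
w, b = [1.0, 1.0], -1.5

# "Optimised for that task only": precompute every static relation,
# since the input domain here is finite (pairs of 0/1 values).
lookup = {(a, c): perceptron(w, b, (a, c))
          for a in (0, 1) for c in (0, 1)}

def task_specific_and(a, c):
    # No simulation of neurons any more -- just a table lookup.
    return lookup[(a, c)]

print(task_specific_and(1, 1))  # -> 1
print(task_specific_and(1, 0))  # -> 0
```

Once the table exists, the perceptron can be thrown away entirely; what ships is an ordinary, inspectable, task-specific algorithm.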
I was given a reference to Daniel Dennett by Bernardo Kastrup, the author of the "postulates". Daniel Dennett is, like me, a proponent of algorithmic approaches to AI. However, I have not read any of his works yet. As soon as I do, I will add more to the ANN-vs-algorithms topic.