Artificial Neural Network is Nasty

By Xah Lee.

artificial neural network is nasty.

Google's got TensorFlow, SyntaxNet, AlphaGo … all based on neural networks, aka “deep learning”.

When a neural network solves a problem, such as language translation, it understands nothing, and we can learn nothing from it. No theory, no reasoning, nothing. Zero. It simply shoots an answer back at you.

it doesn't have any insight, any why or how. Basically, we can just say it's a pattern recognizer, much like facial recognition.

when something works by neural network, all we can say is that a pattern matched. Here, the concept of pattern is broader. It can be 2D images, or the sound waves of speech recognition, language translation, understanding search, or other things. The gist is that the input, for example for face recognition or the go game, is a 2D grid, where each node can be thought of as an on/off switch. To apply this to something that doesn't have a direct connection with a 2D image the way a go board does, we simply think of the input as such a system of switches. (Also, each switch has a relation to the others, forming a structure, as in a 2D grid. This is important, because one node needs its neighbor nodes' behavior to judge.)

So, you take this system of input grids and you bombard it with data (that is, a set of on/off values for each node), plus info saying whether the result is yes or no. You do this billions of times (each data set counts as 1 here). This is called the training. After this, given a random set of data, the program can kinda tell whether the data you gave is yes or no (meaning, true or false, good or bad. Basically, giving you a yes/no answer to some question). The reason it can tell is that, during the training, each node has “learned”, in relation to neighboring nodes, whether it should be yes or no, and combining all the nodes' answers, it gives a yes/no answer for the whole.
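
To make the above concrete, here is a minimal sketch in Python with numpy. It is not how any real deep learning system is built; it's a toy one-layer “network” over a made-up 5×5 grid of on/off switches, with a made-up yes/no question (are more than half the switches on?), just to show the idea of training on many (grid, yes/no) examples and then answering yes/no on new data.

import numpy as np

rng = np.random.default_rng(0)

# one example: a 5×5 grid of on/off switches, plus the "yes/no" answer.
# here the answer is a toy rule: "yes" if more than half the switches are on.
def make_example():
    grid = rng.integers(0, 2, size=(5, 5))     # random on/off grid
    label = 1 if grid.sum() > 12 else 0        # the yes/no answer for training
    return grid.reshape(-1), label             # flatten the grid to 25 numbers

# training: show it many examples, nudge the weights toward the right answer each time
w = np.zeros(25)
b = 0.0
lr = 0.1
for _ in range(20000):
    x, y = make_example()
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))     # the network's current yes/no guess
    w += lr * (y - p) * x                      # adjust each node's weight
    b += lr * (y - p)

# after training: given a new random grid, it can kinda tell yes or no
x, y = make_example()
p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
print(f"true answer: {y}, network says yes with probability {p:.2f}")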

y'know, in the recent #AlphaGo match… [see Google AI AlphaGo vs Lee Sedol (2016-03)] the guy said AlphaGo didn't train from any record of Lee Sedol's games… because, he said, AlphaGo needs millions (or did he say billions) of games to learn something. So, the striking thing here is that, for a neural network to work, it needs a huge amount of data. And one consequence is that, for many problems, that amount of data isn't available.

another example: in AlphaGo game 4, it made completely idiotic moves. With traditional software or a concrete algorithm, one can trace the problem and call it a bug. But not so with neural network things. We can't point a finger at something particular that's wrong. Rather, we can just say we discovered an undesirable behavior and we need to do something about it (such as more training), but it isn't something we can ‘fix’ concretely in the traditional sense, where something is broken and we just fix it. In analogy, when a child makes a mistake, you can't ‘fix’ him, but only ‘teach’, and hope the chance of error is reduced in the future.

also, when you use Microsoft's voice recognition in Windows, it has a training period. That's also an example of a neural network. Here, I believe the input is a linear sequence of nodes. (That is, the relationship between the input nodes is that they form a line. This line is time. Each node here is basically a digital signal from the sound input.)
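
As a sketch of what “a line of nodes, where the line is time” means, here's how sound looks to such a program. This is a hypothetical Python/numpy example, not Microsoft's actual system: a 440 Hz tone stands in for speech, and numbers like 16000 samples per second are just assumed.

import numpy as np

sample_rate = 16000                        # samples per second (assumed)
t = np.arange(sample_rate) / sample_rate   # one second of time
wave = np.sin(2 * np.pi * 440 * t)         # a 440 Hz tone standing in for speech

# the input is a line of nodes along time: short overlapping windows of the signal,
# each window being one node, with its neighbors being the moments just before and after
frame_len, hop = 400, 160
frames = np.stack([wave[i:i + frame_len]
                   for i in range(0, len(wave) - frame_len, hop)])
print(frames.shape)                        # (number of time steps, samples per step)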

also, thus, by its nature, a neural network cannot be used to solve, say, math equations, such as differential equations. At least not in the usual sense of solve.
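
For contrast, here is what “solve in the usual sense” looks like: an exact symbolic answer derived by known rules, by direct construction, not by training on examples. (A sketch using the sympy library; the equation f'(x) = f(x) is just an arbitrary example.)

import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# solve the differential equation f'(x) = f(x) symbolically
solution = sp.dsolve(sp.Eq(f(x).diff(x), f(x)), f(x))
print(solution)   # prints the exact general solution, f(x) = C1*exp(x)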

though, what a neural network (NN) is good at is learning, as we humans learn. Now, here's the elusive thing. In general, when humans do things, we can't tell you exactly why or how. (For example, chess prodigies, music prodigies, human calculators, mathematicians… they can't tell you exactly how they arrive at answers.) But anyway, so NN is good at learning. And look, the human animal has built all the things, the world. So, in a sense, NN could potentially learn everything humans know too, and eventually, on top of that, invent all the other systems such as math and logic that find answers by direct proof or construction…

what I am saying is that a neural network, although it cannot solve math equations or anything that's concrete, can learn and learn and eventually build the entire system of logic and be able to solve math equations. But here we are getting philosophical.

It's interesting that if we consider the universe as some deductive system, or consider pure math, you have foundational problems, Gödel stuff, undecidability issues, paradoxes in systems… On the other hand, if you consider the universe as something that just is, holistic or something, then you “deduce” or compute by non-logical means like neural networks, where there isn't any reason or anything to begin with, but you just go on… well, this is getting into lala land; actually I don't know what I'm talking about.