A Conversation for Artificial Neural Networks

When Neural Nets Go Bad

Post 1

Mr. Cogito

Well, it's a rather dramatic title, but I just felt I should mention that training neural nets has been called as much an art as a science. Basically, there are two situations in supervised learning that you try to avoid, which I'll illustrate with the following example.

Suppose you wanted to train a neural net to recognise that a picture is of a tree (as opposed to a bicycle, Brad Pitt, or the Empire State Building). You decide to make it a supervised learning case, but depending on how you present the data, the net can run into either of the following problems:

UNDERFITTING. In this case, the neural net generalises too much, concluding that if it's green, it's a tree. This is obviously a bit too vague, since there are green things that aren't trees, and it is often an effect of biases in the training data (none of the negative examples above are green).

OVERFITTING. This is the opposite problem, where the neural net is unable to generalise at all. So it would conclude that a picture is a tree only if it is green and has 537 oak-shaped leaves. Again, this may be a result of problems in the training data, where the positive examples of trees are too few or too much alike.
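If it helps to see the same trade-off in code, here's a rough sketch of my own in Python (nothing to do with the Entry itself, just an illustration): fitting polynomials of different degrees to a noisy curve shows the same underfitting/overfitting behaviour, with the training error and the error on unseen points telling quite different stories.

    import numpy as np

    # Noisy samples from a smooth underlying curve (a sine wave).
    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 12)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

    x_test = np.linspace(0, 1, 200)
    y_true = np.sin(2 * np.pi * x_test)

    for degree in (1, 3, 9):
        coeffs = np.polyfit(x, y, degree)                        # fit a polynomial of this degree
        train_error = np.mean((np.polyval(coeffs, x) - y) ** 2)
        test_error = np.mean((np.polyval(coeffs, x_test) - y_true) ** 2)
        print(f"degree {degree}: train {train_error:.3f}, test {test_error:.3f}")

    # Degree 1 underfits (high error everywhere), degree 9 typically overfits
    # (tiny training error, poor error on unseen points); degree 3 sits in between.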

Anyhow, nice entry. I just felt like adding an extra bit. :-)

Yours,
Jake


When Neural Nets Go Bad

Post 2

Cabby

Remind me again how unsupervised learning works?

From what I remember, supervised learning consists of showing the net examples of things you want it to recognise (and things that you don't), giving it positive feedback when it gets something right and negative feedback when it doesn't. (I forget exactly how the maths worked, but it was all about adjusting the weights and the thresholds at which individual neurons decide to fire.)
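Something like this little sketch, if memory serves (the numbers and names are entirely made up, just to show the idea of nudging a neuron when it gets the answer wrong):

    import numpy as np

    # A single artificial neuron trained with the classic perceptron rule:
    # when the answer is wrong, nudge the weights and threshold towards the input.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    targets = np.array([0, 0, 0, 1])          # teach it a simple AND function

    weights = np.zeros(2)
    bias = 0.0                                 # the neuron's firing threshold, in effect
    learning_rate = 0.1

    for epoch in range(20):
        for x, target in zip(X, targets):
            fired = 1 if x @ weights + bias > 0 else 0
            error = target - fired             # the supervision: right (0) or wrong (+/-1)
            weights += learning_rate * error * x
            bias += learning_rate * error

    print(weights, bias)                       # the neuron now fires only for [1, 1]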

However, in unsupervised learning, presumably I'm not telling the net if it got the right answer or not, so how does it change its responses?


When Neural Nets Go Bad

Post 3

Mr. Cogito

Hello,

I'm not sure of all the details of unsupervised learning, but it's usually used for different sorts of problems. For a straightforward problem where you already know the answers you want, a basic neural network with an input layer, a hidden layer, and an output layer is used, and the training errors are back-propagated through the network to correct the weights.
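As a very rough sketch of that supervised case (my own toy Python example, so take the details with a pinch of salt), it looks something like this:

    import numpy as np

    # A tiny input -> hidden -> output network trained by back-propagation on XOR.
    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # input -> hidden layer
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # hidden -> output layer
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for step in range(10000):
        hidden = sigmoid(X @ W1 + b1)                # forward pass
        output = sigmoid(hidden @ W2 + b2)
        # Back-propagate the output error to get corrections for each weight.
        grad_out = (output - y) * output * (1 - output)
        grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
        W2 -= 0.5 * hidden.T @ grad_out
        b2 -= 0.5 * grad_out.sum(axis=0)
        W1 -= 0.5 * X.T @ grad_hid
        b1 -= 0.5 * grad_hid.sum(axis=0)

    print(np.round(output, 2))                       # ends up close to [[0], [1], [1], [0]]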

The best example of unsupervised learning I can think of is the Hopfield network, in which every neuron is connected to every other neuron rather than being arranged in layers. Unsupervised learning is useful for situations where we have a lot of data but no idea how to organise it; it is often very good at breaking the data points down into several distinct clusters. Of course, since it's unsupervised, some of these clusters may be organised on principles that seem strange to us (green blobby things, people with beards, etc.), and we can only really work out what the categories mean by looking at the items in each group. Still, it can be useful for certain problems.
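Here's a toy Hopfield network in Python (again, just my own sketch with made-up patterns) to show what I mean: the patterns are stored with a simple Hebbian rule, and nobody ever tells the net whether it got anything right.

    import numpy as np

    # A tiny Hopfield network: every neuron is connected to every other one,
    # and patterns are stored with a Hebbian rule -- no teacher, no error signal.
    patterns = np.array([
        [1, -1,  1, -1,  1, -1],   # pattern A (using +1/-1 instead of 1/0)
        [1,  1,  1, -1, -1, -1],   # pattern B
    ])
    n = patterns.shape[1]

    # Hebbian storage: strengthen connections between neurons that fire together.
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)          # no self-connections

    # Recall: start from pattern A with its last entry flipped, and let the net settle.
    state = np.array([1, -1, 1, -1, 1, 1])
    for _ in range(5):
        state = np.sign(W @ state)  # each neuron fires if its weighted input is positive

    print(state)                    # settles back onto pattern A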

Yours,
Jake


When Neural Nets Go Bad

Post 4

Cabby

Thinking about it, I guess your goal in unsupervised learning is really consistency. Ideally you want the same answer for the same inputs. Then, when you've got a new 'thing', the net should be able to spot which of the 'things' it's already seen it is closest to and categorise it accordingly.
It's certainly useful for tasks like data mining, where you don't know what the connections are till you look.
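Something along these lines, I suppose (a quick k-means sketch in Python; k-means isn't a neural net, but it shows the 'nearest existing category' idea):

    import numpy as np

    # Group unlabelled points into clusters, then drop a new 'thing' in and
    # see which of the existing groups it lands closest to.
    rng = np.random.default_rng(2)
    points = np.vstack([rng.normal(0, 0.5, (20, 2)),   # one blob near (0, 0)
                        rng.normal(5, 0.5, (20, 2))])  # another blob near (5, 5)

    centres = points[[0, -1]].copy()                   # crude start: one point from each blob
    for _ in range(10):
        # Assign every point to its nearest centre, then move each centre
        # to the mean of the points assigned to it.
        labels = np.argmin(np.linalg.norm(points[:, None] - centres, axis=2), axis=1)
        centres = np.array([points[labels == k].mean(axis=0) for k in range(2)])

    new_thing = np.array([4.5, 5.2])
    nearest = np.argmin(np.linalg.norm(centres - new_thing, axis=1))
    print("new thing joins cluster", nearest)          # the blob near (5, 5)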

It's been too long since I had to think about this stuff! I did mess around with neural nets a bit at Uni, but I'm afraid I've forgotten a lot of it since then. :-(

That's another issue, of course. Can you get a neural net to forget what it's learned? (Short of just resetting all the neuron thresholds back to their default values.)


When Neural Nets Go Bad

Post 5

Mr. Cogito

Hello,

Well, you could have the weights on connections decay over time, so that only the paths which get used on a regular basis stay strong. I suppose that would be a reasonably decent way to introduce forgetting, though of course it shouldn't be confused with how human memory works.
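A toy sketch of what I mean (made-up numbers, purely to show the decay idea):

    import numpy as np

    # 'Forgetting' by decay: every connection weight shrinks a little each step,
    # and only the connections that keep getting used are topped back up.
    rng = np.random.default_rng(3)
    weights = rng.uniform(0.5, 1.0, size=10)
    used = np.zeros(10, dtype=bool)
    used[:3] = True                 # pretend only the first three connections see any traffic

    for step in range(100):
        weights *= 0.95             # everything fades a little...
        weights[used] += 0.05       # ...but the active paths are reinforced

    print(np.round(weights, 2))     # the unused weights have decayed towards zero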

Yours,
Jake

