DT

Find Me On
Add
Close and Update
Close
Edit Dataset
Add and remove entries from the dataset. The dataset is used to train or test the model against specific inputs.
Dataset Entries: 0
Use Data Set
Use Inputs
View Results
The results of the testing set

Training Mode

Use this mode to train the model on patterns

Use the Edit Dataset button to add at least one entry in order to use Training
Train Model
Run the training or testing with the current settings
Training
Training mode allows you to train a model against a dataset containing input information and the expected outputs.
Testing
Testing mode allows you to test particular outputs against the model using either a dataset or direct inputs, without affecting the training of the model.

Neural Networks

Structure

A typical neural network is composed of neurons (also known as nodes) assembled into layers. A neuron usually consists of a list of weights connecting it to other neurons (each weight is essentially a multiplier), a summation function that adds together the values received from other neurons via those weights, and an activation function, which determines the received value at which the neuron will fire. A neuron firing typically results in a value of 1 being transmitted through each of its outgoing weights (as opposed to a value of 0 if it doesn't fire); the weight modifies that value by its multiplier, which can be positive or negative, and each connected neuron then fires or not based on the total value it receives from its connections. The neurons are organised into layers, beginning with an input layer, ending with an output layer, and with some number of extra layers in between known as hidden layers. Each neuron in a layer is connected to every neuron in the previous layer, which means any change to the inputs can make unpredictable changes to the output.
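The weight, summation, and activation steps described above can be sketched in a few lines. This is a minimal illustration, not code from the app; the function name and the use of a simple step-threshold activation are assumptions for the example.

```python
def neuron_output(inputs, weights, threshold=0.0):
    """A single neuron: weighted sum of inputs, then a step activation."""
    # Summation function: add together the values received via the weights.
    total = sum(x * w for x, w in zip(inputs, weights))
    # Activation function: fire (output 1) if the sum exceeds the threshold.
    return 1 if total > threshold else 0

# Two inputs, one firing (1) and one not (0), with a positive and a negative weight.
print(neuron_output([1, 0], [0.8, -0.5]))  # total = 0.8 > 0, so the neuron fires: 1
```

Note that the non-firing input contributes nothing: its value of 0 is multiplied through the weight, so only firing neurons influence the sum.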

Hidden Nodes

Hidden nodes are characterised principally by being connected only to other nodes, rather than to external input or output values. Hidden nodes help the network form more complicated responses to the inputs it is given, though they make it quite difficult to determine the precise cause of any particular response. There has been some debate over how many layers of hidden nodes should be used. Before the advent of deep learning, it was believed that more than one hidden layer was excessive, since with a single hidden layer a neural network can be a 'Universal Approximator', meaning the network can output any pattern in response to any inputs and so can, in principle, learn anything. There was also no effective way at the time to train a network with multiple hidden layers.

Bias Nodes

Bias nodes are used to help networks learn particular patterns. A bias node is simply a node that is always activated (firing), meaning it constantly sends out a value of 1 (which can of course be modified by weights). The most obvious case where a bias node allows an output that would otherwise be impossible is when all the inputs are 0 and the desired output is 1: without a bias node, no weight value could produce anything other than 0 (since weights multiply their inputs), but with a bias node continuously outputting 1, its weight can simply be raised to the level where the desired output is triggered.
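The all-zero-inputs case can be demonstrated directly. As before, this is an illustrative sketch with assumed names and a step activation, not the app's implementation.

```python
def step(x):
    return 1 if x > 0 else 0

def neuron_with_bias(inputs, weights, bias_weight):
    # The bias node always fires (value 1), so its weight is always added in.
    total = sum(x * w for x, w in zip(inputs, weights)) + 1 * bias_weight
    return step(total)

# All inputs are 0: without the bias, the weighted sum is always 0 and the
# neuron can never fire. Raising the bias weight above 0 triggers the output.
print(neuron_with_bias([0, 0], [0.5, 0.5], bias_weight=0.0))  # 0: never fires
print(neuron_with_bias([0, 0], [0.5, 0.5], bias_weight=1.0))  # 1: the bias triggers it
```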

Simulated Annealing

In order to train a neural network, you need an objective function (also called a utility function). Such a function takes the network's outputs and produces a score measuring how close they are to the desired outputs. That information can then be used to optimise the network to produce outputs that are closer to the target. Simulated Annealing is one such optimisation algorithm. Simulated annealing takes the complete set of weights (all the weight connections between every node) and randomly generates a new set. The system then determines whether the new weights are better than the old ones by running them on the network and scoring the results. The system also has a chance of accepting a lower-scoring weight set, and that chance shrinks as the network performs more cycles; this is often referred to as the temperature (a reference to the annealing in the name). It prevents the system from settling in a local optimum by allowing the system to experiment in earlier cycles and then solidify its solution in later stages.
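The generate-score-accept loop with a falling temperature can be sketched as follows. The function names, the linear cooling schedule, and the toy objective are all assumptions made for this example; a real implementation would score the weights by running them through the network against the dataset.

```python
import random

def anneal(score, initial_weights, cycles=1000, start_temp=1.0):
    """Minimal simulated annealing over a flat list of weights.

    `score` is a hypothetical objective function: higher means closer
    to the desired outputs.
    """
    current = list(initial_weights)
    current_score = score(current)
    for cycle in range(cycles):
        # Temperature falls as cycles pass, shrinking the chance of
        # accepting a worse weight set.
        temperature = start_temp * (1 - cycle / cycles)
        # Generate a new candidate weight set by random perturbation.
        candidate = [w + random.uniform(-0.5, 0.5) for w in current]
        candidate_score = score(candidate)
        # Accept improvements always; accept worse sets with a chance
        # proportional to the current temperature.
        if candidate_score > current_score or random.random() < temperature:
            current, current_score = candidate, candidate_score
    return current

# Toy objective: the weights should approach [1.0, -2.0].
target = [1.0, -2.0]
best = anneal(lambda ws: -sum((w - t) ** 2 for w, t in zip(ws, target)), [0.0, 0.0])
```

Early on the high temperature lets almost any candidate through, so the search roams freely; by the final cycles the loop is effectively greedy, solidifying whatever region of the weight space it has found.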