ML

A simple neural network processor written in C. Creating complicated networks with Python libraries is relatively easy nowadays, but this time I wanted to write a simple program in a simple language (by simple I mean featureless), with the hope of making something portable, light, and easy to use.

Requirements

Usage

Usage: ml train [Options] JSON_FILE
   or: ml predict [-o FILE] FILE
Train and predict json data

Options:
  -h, --help               Show this message
  -a, --alpha=ALPHA        Learning rate (only works with train)
  -e, --epochs=EPOCHS      Epochs to train the model (only works with train)
  -o, --output=FILE        Output file (only works with predict)

Examples:
  $ ml train -e 150 -a 1e-4 housing.json
  $ ml predict housing.json -o predictions.json
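
The data files are JSON arrays of flat objects (the XOR example below shows the exact shape); which keys count as inputs and which as labels is chosen in the settings file described next. A hypothetical housing.json, with invented values just to illustrate the shape, could look like:

[
  {
    "area": 120.0, "price": 250000.0
  },
  {
    "area": 80.0, "price": 170000.0
  }
]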

Network Customization

To change the network layout, edit ~/.config/ml/settings.cfg. There you will find a file like the following:

[net]
loss = square ; options (square)
epochs = 200 ; comment
alpha = 1e-2
weights_path = utils/weights.bin
inputs = x
labels = y

; activation options (relu, sigmoid, softplus, leaky_relu)

[layer]
neurons=20
activation=sigmoid

[outlayer]
activation = sigmoid

In the [net] section you choose the data keys with the inputs and labels fields, the weights are saved to the file specified by weights_path, and the remaining fields tell the network how it will be trained.

To build your network, use a [layer] section for each hidden layer, giving its number of neurons and corresponding activation, and set the output layer with [outlayer]; in that section neurons is not required. The activations listed in the comment are the usual functions, sketched below.
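
For reference, here are standard definitions of the available activations. This is illustrative only, not ml's actual source, and the leaky_relu slope of 0.01 is an assumption:

#include <math.h>

double relu(double x)       { return x > 0.0 ? x : 0.0; }
double leaky_relu(double x) { return x > 0.0 ? x : 0.01 * x; }  /* 0.01 slope assumed */
double sigmoid(double x)    { return 1.0 / (1.0 + exp(-x)); }
double softplus(double x)   { return log(1.0 + exp(x)); }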

Examples

The beginning: XOR

This classic problem is great for testing whether gradient descent works.
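
At its core, gradient descent nudges every weight a small step against the gradient of the loss; alpha sets the step size and epochs the number of passes. A minimal sketch on a toy one-weight loss L(w) = (w - 3)^2, purely illustrative and not ml's actual code:

#include <stdio.h>

int main(void)
{
    double w = 0.0;       /* initial weight (arbitrary)               */
    double alpha = 1e-1;  /* learning rate, the -a / alpha setting    */
    int epochs = 200;     /* training passes, the -e / epochs setting */

    for (int e = 0; e < epochs; e++) {
        double grad = 2.0 * (w - 3.0);  /* dL/dw for the square loss  */
        w -= alpha * grad;              /* the gradient-descent update */
    }
    printf("w = %f\n", w);  /* converges toward the minimum at w = 3  */
    return 0;
}

The training data: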

[
  {
    "x": 0, "y": 0, "z": 0
  },
  {
    "x": 1, "y": 0, "z": 1
  },
  {
    "x": 0, "y": 1, "z": 1
  },
  {
    "x": 1, "y": 1, "z": 0
  }
]

with the following model:

[net]
loss = square ; options (square)
epochs = 200 ; comment
alpha = 5e-1
weights_path = utils/weights.bin
inputs = x, y
labels = z

; activation options (relu, sigmoid, softplus, leaky_relu)

[layer]
neurons=5
activation=relu

[outlayer]
activation = sigmoid
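
Assuming the data above is saved as xor.json, a run could look like this (the flags mirror the config values; the exact invocation is an assumption based on the usage shown earlier):

$ ml train -e 200 -a 5e-1 xor.json
$ ml predict xor.json -o predictions.json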

we achieve:

[
  {
    "x": 0.000000, "y": 0.000000, "z": 0.058975
  },
  {
    "x": 1.000000, "y": 0.000000, "z": 0.920547
  },
  {
    "x": 0.000000, "y": 1.000000, "z": 0.920186
  },
  {
    "x": 1.000000, "y": 1.000000, "z": 0.086705
  }
]

Gaussian Approximation

Let's try a toy regression problem; this time we will approximate a Gaussian function with the following network.

[net]
loss = square ; options (square)
epochs = 200 ; comment
alpha = 5e-3
weights_path = utils/weights.bin
inputs = x
labels = y

; activation options (relu, sigmoid, softplus, leaky_relu)

[layer]
neurons=10
activation=leaky_relu

[layer]
neurons=5
activation=sigmoid

[outlayer]
activation = sigmoid
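
The training data itself can be produced by a short generator. Here is a sketch that dumps (x, exp(-x*x)) samples in the JSON shape ml expects; the exact function, range, and sample count are assumptions:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const int n = 100;
    printf("[\n");
    for (int i = 0; i < n; i++) {
        double x = -3.0 + 6.0 * i / (n - 1);  /* sample x in [-3, 3]   */
        double y = exp(-x * x);               /* the Gaussian to learn */
        printf("  { \"x\": %f, \"y\": %f }%s\n", x, y, i < n - 1 ? "," : "");
    }
    printf("]\n");
    return 0;
}

Note that the sigmoid on the output layer bounds predictions to (0, 1), which matches exp(-x*x) taking values in (0, 1].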

Here are the results:

[plot: the trained network's output overlaid on the target Gaussian]

Conclusion

Although there are still things to do, like improving weight initialization and implementing batch learning, the program is usable enough to be shown to, and tested by, other people.

One of the surprises during the development of this project was realizing how difficult implementing a user interface can be when you haven't defined it beforehand; by comparison, implementing the math was easier and took me less time.

Download

To get the software, clone the repository with:

$ git clone https://git.juanvalencia.xyz/ml