Friday, May 25, 2007

A perceptron in Haskell

After reading Michael Negnevitsky's book on Artificial Intelligence, I started playing in Haskell with some of the algorithms he gives. So my first endeavour is writing a simple perceptron (a one-neuron neural network) to perform the AND and OR logical operations. Next I'll move on to a full multi-layer network to do much more complex operations like exclusive OR (-:.
Once again I was struck by the simplicity of the Haskell code necessary to implement such functionality (and I'm no Haskell expert yet). You think about how you're going to implement it, the type checker tells you when you're doing something stupid, and by the time everything is written and compiles, it kinda works.
The core function calculates the output of the neuron, given the inputs, the weight of each input and the threshold value for that neuron:

neuronOutput :: [Float] -> [Float] -> Float -> Float
neuronOutput inputs weights threshold =
    let
        -- weighted sum of the inputs, minus the threshold
        tot = foldl (+) 0 (zipWith (*) inputs weights) - threshold
    in
        if tot >= 0
            then 1
            else 0

Using zipWith we multiply each input by its weight, calculate the sum with a fold and subtract the threshold. A negative result yields 0; anything else yields 1.
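For example, a quick GHCi session with weights I picked by hand so that the neuron acts as an AND gate (these particular numbers are just mine for illustration, not values produced by the training below):

*Main> neuronOutput [1,1] [0.1,0.1] 0.2
1.0
*Main> neuronOutput [1,0] [0.1,0.1] 0.2
0.0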

Adjusting the weights to adapt to the error of a test case is no more difficult:

defaultLearningRate :: Float
defaultLearningRate = 0.1

adjustWeights :: [Float] -> [Float] -> Float -> Float -> [Float]
adjustWeights inputs origWeights expected actual =
    let
        e = expected - actual
        delta (i, w) = w + (defaultLearningRate * i * e)
    in
        map delta (zip inputs origWeights)

We modify the weights by applying to each one a delta that depends on the corresponding input value, the error and the learning rate.
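To see the rule in action, here is a hand-run example with numbers of my own choosing: the neuron answered 0 where 1 was expected, so each active input pulls its weight up by defaultLearningRate * input * error = 0.1:

*Main> adjustWeights [1,1] [0.1,0.3] 1 0
[0.2,0.4]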

A single step consists of calculating the output of the neuron and adapting the weights, given the expected value:

defaultThreshold :: Float
defaultThreshold = 0.2

step :: [Float] -> [Float] -> Float -> [Float]
step inputs weights expected =
    let
        o = neuronOutput inputs weights defaultThreshold
    in
        adjustWeights inputs weights expected o

Then an epoch applies the step function to each of the input sets. The hardest part is calculating the average of all the weight deltas, to find out whether the weights changed at all while evaluating the test cases:

epoch :: [([Float], Float)] -> [Float] -> ([Float], Float)
epoch allInputs weights =
    let
        f w (inputs, expected) = step inputs w expected
        newWeights = foldl f weights allInputs
        delta = foldl (+) 0 (map abs (zipWith (-) newWeights weights))
                / fromIntegral (length weights)
    in (newWeights, delta)
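As a sanity check, here is one epoch over the AND test cases, starting from weights of my own choosing (exact figures depend on Float rounding):

*Main> epoch [([0,0],0),([0,1],0),([1,0],0),([1,1],1)] [0.3,-0.1]
([0.3,0.0],5.0e-2)

Two of the four cases came out wrong, each update moving a weight by 0.1; the first weight went down then back up, so on average the weights moved by about 0.05.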

A training run just calls the epoch function repeatedly until the delta is zero, i.e. until an epoch leaves the weights exactly unchanged (so the Float equality test is safe here):

run :: [([Float], Float)] -> [Float] -> Int -> ([Float], Int)
run allInputs weights epochNb =
    let (newWeights, delta) = epoch allInputs weights
    in if delta == 0
        then (newWeights, epochNb)
        else run allInputs newWeights (epochNb + 1)

We generate the initial weights in the IO monad, passing in the number of weights needed (this uses randomR and getStdRandom from System.Random, and replicateM from Control.Monad):

initialWeights :: Int -> IO [Float]
initialWeights nb = do
    let interval = randomR (-0.5, 0.5)
    replicateM nb (getStdRandom interval)
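The weights are of course random, so a GHCi check will print something different on every run (the values below are made up):

*Main> initialWeights 2
[0.12,-0.37]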

Putting it all together:

test :: IO ()
test = do
    w <- initialWeights 2
    -- the four test cases encode the AND truth table
    let (ws, i) = run [([0,0],0), ([0,1],0), ([1,0],0), ([1,1],1)] w 1
    print (ws, i)

test finds the proper weights to implement AND and prints them out, along with the number of epochs needed to reach the result. You can then call neuronOutput with the resulting weights and your inputs: you have an AND logical gate!
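For instance, you could freeze the trained weights into a little two-argument gate (andGate is my own name for this helper, just a sketch):

-- wrap trained weights and the default threshold into an AND gate
andGate :: [Float] -> Float -> Float -> Float
andGate ws a b = neuronOutput [a, b] ws defaultThreshold

With the ws returned by run above, andGate ws 1 1 gives 1.0 and andGate ws 1 0 gives 0.0.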

As usual, any pointer to how to improve the code is welcome!

5 comments:

Justin said...

I think the functions would be a bit clearer if you defined a Neuron type rather than passing tons of floats around. I would start with

newtype Neuron = Neuron [Float] [Float] Float

which makes the neuron hold the inputs, the weights and its threshold.

I would then add some other types just to make the intent clear:

type Inputs = [Float]
type Weights = [Float]
type Threshold = Float

then Neuron becomes:

newtype Neuron = Neuron Inputs Weights Threshold

I like 'newtype' (as opposed to type) because the compiler will catch it if you try to mix Neurons with something else. If Neuron was just a type synonym, the compiler can silently convert it on you. Not sure if that's really bad or not but it seems safer to go with newtype.

Anyways, then you could add the "output" function as a value of the Neuron type:

newtype Neuron = Neuron { inputs :: Inputs, weights :: Weights, threshold :: Float, output :: (Inputs -> Weights -> Threshold -> Float) }

I look forward to seeing how your code develops!

Anonymous said...

All of your

bar =
    let
        x = foo
    in
        x ...

would probably be more idiomatically written as

bar = x ...
    where
        x = foo

Aaron Tomb said...

Personally, I would reverse the order of the inputs to neuronOutput, to facilitate partial application. The threshold changes least often, the weights change moderately often, and the inputs change the most often. So you could define neuronOutput as:

neuronOutput threshold weights inputs = ...

Then, creating a neuron with a given threshold could be done by:

neuron = neuronOutput threshold

and, after training, you could do:

trainedNeuron = neuron weights

and then use the trained neuron with:

trainedNeuron inputs

Anyway, I like what you've done!

JP Moresmau said...

Thanks all for the tips!! Yes I do not think often enough about newtype and about the order of my parameters for partial application!! And I suppose the let syntax is closer to imperative style: I detail in order each result I need to have!
Justin, I don't really see what benefits having the function output in the type definition brings?

Muddle-headed Wombat said...

Just wanted to thank you. Your post on perceptrons in Haskell, your follow-up post on the neural net, and the comments people have made are helping me with a similar project of my own.