Once again I was struck by the simplicity of the Haskell code necessary to implement such functionality (and I'm no Haskell expert yet). You think about how you're going to implement it, the type checker tells you when you're doing something stupid, and by the time everything is written and compiles it kinda works.
The core function calculates the output of the neuron, given the inputs, the weight of each input and the threshold value for that neuron:
neuronOutput :: [Float] -> [Float] -> Float -> Float
neuronOutput inputs weights threshold =
    let
        tot = foldl (+) 0 (zipWith (*) inputs weights) - threshold
    in
        if tot >= 0
            then 1
            else 0
Using zipWith we multiply each input by its weight, sum the products with a fold, and subtract the threshold. A negative result yields 0; otherwise it yields 1.
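For instance, a quick sanity check in GHCi (the weights here are made up purely for illustration):

> neuronOutput [1, 0] [0.3, 0.4] 0.2
1.0
> neuronOutput [0, 1] [0.3, 0.1] 0.2
0.0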
Adjusting the weights to adapt to the error of a test case is no more difficult:
defaultLearningRate :: Float
defaultLearningRate = 0.1
adjustWeights :: [Float] -> [Float] -> Float -> Float -> [Float]
adjustWeights inputs origWeights expected actual =
    let
        e = expected - actual
        delta (i, w) = w + (defaultLearningRate * i * e)
    in
        map delta (zip inputs origWeights)
We modify the weights by applying to each one a delta that depends on the corresponding input value, the error and the learning rate.
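A quick illustration (again with made-up numbers): if the neuron answered 0 where 1 was expected, every weight with a non-zero input gets nudged upwards by the learning rate:

> adjustWeights [1, 1] [0.3, 0.4] 1 0
[0.4,0.5]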
A single step consists of calculating the output of the neuron and adjusting the weights, given the expected value:
defaultThreshold :: Float
defaultThreshold = 0.2

step :: [Float] -> [Float] -> Float -> [Float]
step inputs weights expected =
    let
        o = neuronOutput inputs weights defaultThreshold
    in
        adjustWeights inputs weights expected o
Then an epoch applies the step function to each input set in turn. The hardest part is calculating the average of all the deltas, to find out whether the weights changed at all during the evaluation of the test cases (a short worked example follows the code):
epoch :: [([Float], Float)] -> [Float] -> ([Float], Float)
epoch allInputs weights =
    let
        f w (inputs, expected) = step inputs w expected
        newWeights = foldl f weights allInputs
        delta = foldl (+) 0 (map abs (zipWith (-) newWeights weights)) / (fromIntegral $ length weights)
    in
        (newWeights, delta)
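To make the delta concrete, here is a hand-worked example with made-up numbers (ignoring Float rounding noise):

-- Suppose an epoch turns the weights [0.2, 0.3] into [0.1, 0.2].
-- The average absolute change is then:
--   delta = (abs (0.1 - 0.2) + abs (0.2 - 0.3)) / 2
--         = (0.1 + 0.1) / 2
--         = 0.1
-- A non-zero delta means the weights are still moving, so training goes on.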
A training run simply launches the epoch function repeatedly until the delta is zero:
run :: [([Float], Float)] -> [Float] -> Int -> ([Float], Int)
run allInputs weights epochNb =
    let (newWeights, delta) = epoch allInputs weights
    in if delta == 0
        then (newWeights, epochNb)
        else run allInputs newWeights (epochNb + 1)
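Note that run only terminates when the training data is linearly separable, which AND is; feed it something like XOR and a single neuron can never get every case right, so the delta may never settle to zero. A minimal sketch of a bounded variant (runBounded and maxEpochs are hypothetical names, not part of the original code):

-- Hypothetical variant: give up after maxEpochs so that non-separable
-- training sets (e.g. XOR) cannot make the recursion loop forever.
runBounded :: Int -> [([Float], Float)] -> [Float] -> Int -> ([Float], Int)
runBounded maxEpochs allInputs weights epochNb =
    let (newWeights, delta) = epoch allInputs weights
    in if delta == 0 || epochNb >= maxEpochs
        then (newWeights, epochNb)
        else runBounded maxEpochs allInputs newWeights (epochNb + 1)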
We generate the initial weights in the IO monad, passing the number of weights needed:
-- Needs these imports at the top of the module:
import Control.Monad (replicateM)
import System.Random (getStdRandom, randomR)

initialWeights :: Int -> IO [Float]
initialWeights nb = do
    let interval = randomR (-0.5, 0.5)
    replicateM nb (getStdRandom interval)
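In GHCi this gives something like the following (the values are random, so yours will differ):

> initialWeights 2
[-0.27936536,0.33997774]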
Putting it all together:
test :: IO ()
test = do
    w <- initialWeights 2
    let (ws, i) = run [([0,0],0), ([0,1],0), ([1,0],0), ([1,1],1)] w 1
    print (ws, i)
    return ()
test finds the proper weights to implement AND and prints them out, along with the number of epochs needed to reach the result. You can then call neuronOutput with the resulting weights and your inputs: you have an AND logic gate!
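For instance, a quick check of the trained gate in GHCi might look like this (the learned weights depend on the random initialisation, so the exact numbers will vary):

> w <- initialWeights 2
> let (ws, _) = run [([0,0],0), ([0,1],0), ([1,0],0), ([1,1],1)] w 1
> map (\i -> neuronOutput i ws defaultThreshold) [[0,0], [0,1], [1,0], [1,1]]
[0.0,0.0,0.0,1.0]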
As usual, any pointer to how to improve the code is welcome!