nn alternatives and similar packages
Based on the "AI" category.
Alternatively, view nn alternatives based on common mentions on social networks and blogs.
- tensor-safe: A Haskell framework to define valid deep learning models and export them to other frameworks such as TensorFlow JS or Keras.
- moo: A genetic algorithm library for Haskell supporting binary and continuous (real-coded) GAs. Binary GAs: binary and Gray encoding; point mutation; one-point, two-point, and uniform crossover. Continuous GAs: Gaussian mutation; BLX-α, UNDX, and SBX crossover. Selection operators: roulette, tournament, and stochastic universal sampling (SUS), with optional niching, ranking, and scaling. Replacement strategies: generational with elitism, and steady state. Constrained optimization: random constrained initialization, death penalty, and constrained selection without a penalty function. Multi-objective optimization: NSGA-II and constrained NSGA-II.
- HaVSA (Have-Saa): A Haskell implementation of the Version Space Algebra machine-learning technique described by Tessa Lau.
- Etage: A general data-flow framework featuring nondeterminism, laziness, and neurological pseudo-terminology.
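As an illustration of one of the operators mentioned in the moo entry above, one-point crossover splits two parent genomes at the same position and swaps the tails. This is a generic sketch, not moo's actual API:

```haskell
-- One-point crossover on two parent genomes (illustrative sketch only;
-- moo ships its own crossover operators with different signatures).
-- The genomes are split at index k and their tails are exchanged.
onePoint :: Int -> ([a], [a]) -> ([a], [a])
onePoint k (p1, p2) =
  let (a1, b1) = splitAt k p1
      (a2, b2) = splitAt k p2
  in (a1 ++ b2, a2 ++ b1)
```

For example, crossing `[1,1,1,1]` and `[0,0,0,0]` at position 2 yields the children `[1,1,0,0]` and `[0,0,1,1]`.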
README
ηn
A tiny neural network 🧠
This small neural network is based on the backpropagation algorithm.
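To give an idea of what backpropagation does, here is a minimal sketch of one gradient-descent update for a single sigmoid neuron under squared error. This is illustrative only and is not how AI.Nn is implemented internally:

```haskell
-- Standard logistic activation.
sigmoid :: Double -> Double
sigmoid x = 1 / (1 + exp (negate x))

-- One backpropagation step for a single sigmoid neuron:
-- given a learning rate, current (weights, bias) and a
-- (inputs, target) sample, return the updated (weights, bias).
step :: Double -> ([Double], Double) -> ([Double], Double) -> ([Double], Double)
step rate (ws, b) (inputs, target) =
  let out   = sigmoid (sum (zipWith (*) ws inputs) + b)
      -- delta = dError/dNet for squared error and sigmoid activation
      delta = (out - target) * out * (1 - out)
      ws'   = zipWith (\w x -> w - rate * delta * x) ws inputs
      b'    = b - rate * delta
  in (ws', b')
```

Repeating such steps over all training samples until the error drops below a threshold is the training loop that `train` performs, generalized to multiple layers via the chain rule.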
Usage
A minimal usage example would look like this:
```haskell
import AI.Nn (new, predict, train)
import Text.Printf (printf)

main :: IO ()
main = do
  {- Create a new network with two inputs,
     one hidden layer of two neurons and one output -}
  network <- new [2, 2, 1]

  {- Train the network on a logical AND,
     until the maximum error of 0.01 is reached -}
  let trainedNetwork = train 0.01 network [([0, 0], [0])
                                          ,([0, 1], [0])
                                          ,([1, 0], [0])
                                          ,([1, 1], [1])]

  {- Predict the learned values -}
  let r00 = predict trainedNetwork [0, 0]
      r01 = predict trainedNetwork [0, 1]
      r10 = predict trainedNetwork [1, 0]
      r11 = predict trainedNetwork [1, 1]

  {- Print the results -}
  putStrLn $ printf "0 0 -> %.2f" (head r00)
  putStrLn $ printf "0 1 -> %.2f" (head r01)
  putStrLn $ printf "1 0 -> %.2f" (head r10)
  putStrLn $ printf "1 1 -> %.2f" (head r11)
```
The result should look something like this:

```
0 0 -> -0.02
0 1 -> -0.02
1 0 -> -0.01
1 1 -> 1.00
```
Hacking
To start hacking, clone this repository and make sure that stack is installed. Then build the project with:

```
> stack build --file-watch
```
Contributing
You want to contribute to this project? Wow, thanks! Please fork the repository and send me a pull request.