sgd alternatives and similar packages
Based on the "Algorithms" category.

arithmoi
Number theory: primes, arithmetic functions, modular computations, special sequences 
imj-animation
Monorepo for a multiplayer game engine, and game examples 
search-algorithms
Haskell library containing common graph search algorithms 
lca
Improves the known complexity of online lowest common ancestor search to O(log h) persistently, and without preprocessing 
treeviz
Haskell library for visualizing algorithmic decomposition of computations. 
incremental-sat-solver
Simple, Incremental SAT Solving as a Haskell Library 
integer-logarithms
Integer logarithms, originally split from arithmoi package 
infinite-search
An implementation of Martin Escardo's exhaustively searchable sets in Haskell. 
nonlinear-optimization-ad
Several Haskell packages for numerical optimizations. 
edit-distance-vector
Calculate edit scripts and distances between Vectors. 
graph-generators
A Haskell library for creating random Data.Graph instances using several popular algorithms.
primesieve
A collection of packages related to math, algorithms and science, in Haskell. 
edit-distance-linear
Levenshtein edit distance in linear memory (also turns out to be faster than C++) 
dgim
Implementation of the DGIM algorithm in Haskell.
epanet-haskell
Call the EPANET toolkit via Haskell's Foreign Function Interface 
MIP
Libraries for reading/writing MIP problem files, invoking external MIP solvers, etc. in Haskell
README
Haskell stochastic gradient descent library
Stochastic gradient descent (SGD) is a method for optimizing a global objective function defined as a sum of smaller, differentiable functions. In each iteration of SGD the gradient is calculated based on a subset of the training dataset. In Haskell, this process can be simply represented as a fold over a stream of subsequent dataset subsets (singleton elements in the extreme case).
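As a sketch of that idea, one SGD pass really is just a left fold over mini-batches. The objective, parameter type, and function names below are illustrative only, not part of the sgd library's API:

```haskell
-- Plain SGD as a left fold over a list of mini-batches.
-- Each step moves the parameters against the gradient computed
-- on one mini-batch.

type Params = Double
type Batch  = [Double]

-- Gradient of a toy per-batch objective: fit a single parameter p
-- to data points x under the squared loss (p - x)^2, whose gradient
-- w.r.t. p is 2 * (p - x), averaged over the batch.
grad :: Params -> Batch -> Double
grad p batch = 2 * sum [p - x | x <- batch] / fromIntegral (length batch)

-- One SGD update on one mini-batch, with learning rate lr.
sgdStep :: Double -> Params -> Batch -> Params
sgdStep lr p batch = p - lr * grad p batch

-- A pass of SGD over the dataset is literally a fold.
sgdFold :: Double -> Params -> [Batch] -> Params
sgdFold lr = foldl (sgdStep lr)
```

With repeated passes over the batch `[1, 2, 3]` the parameter converges to the mean, 2.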
However, it can be beneficial to select the subsequent subsets randomly (e.g.,
shuffle the entire dataset before each pass). Moreover, the dataset can be
large enough to make it impractical to store it all in memory. Hence, the
sgd
library adopts a pipe-based
interface in which SGD takes the form of a process consuming dataset subsets
(the so-called mini-batches) and producing a stream of output parameter values.
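The same consumer/producer shape can be approximated with ordinary lazy lists. This is a sketch of the idea only: the library itself builds on the pipes package, and none of the names below are its actual API.

```haskell
type Params = Double

-- Turn a (possibly infinite) stream of mini-batches into a stream of
-- successive parameter values. scanl also emits the initial parameters,
-- so the caller sees every intermediate state.
sgdStream :: (Params -> batch -> Params)  -- update for one mini-batch
          -> Params                       -- initial parameters
          -> [batch]                      -- stream of mini-batches
          -> [Params]                     -- stream of parameter values
sgdStream step p0 = scanl step p0

-- Consume the stream until successive parameter values barely change.
converge :: Double -> [Params] -> Params
converge eps (x : y : rest)
  | abs (y - x) < eps = y
  | otherwise         = converge eps (y : rest)
converge _ [x] = x
converge _ []  = error "converge: empty stream"
```

Because the output is produced lazily, a consumer can stop early, log intermediate parameters, or checkpoint them, without the producer ever materializing the whole dataset.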
The sgd
library implements several SGD variants (SGD with momentum, AdaDelta,
Adam) and handles heterogeneous parameter representations (vectors, maps, custom
records, etc.). It can be used in combination with automatic differentiation
libraries (ad,
backprop), which can
automatically calculate the gradient of the objective function.
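For instance, the classical momentum variant keeps a velocity term alongside the parameters. A minimal sketch of that update rule follows, with illustrative names and scalar parameters for brevity; it is not the library's API:

```haskell
-- SGD with classical momentum: the velocity accumulates a decaying
-- sum of past gradients, smoothing the trajectory of the parameters.

momentumStep :: Double              -- learning rate
             -> Double              -- momentum coefficient (e.g. 0.9)
             -> (Double -> Double)  -- gradient at the current parameters
             -> (Double, Double)    -- (parameters, velocity)
             -> (Double, Double)
momentumStep lr gamma grad (p, v) =
  let v' = gamma * v + lr * grad p  -- update velocity
  in  (p - v', v')                  -- move parameters along velocity
```

Iterating this step on a quadratic objective such as (p - 3)^2 (gradient 2 * (p - 3)) drives the parameter toward the minimum at 3.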
See the Hackage repository for the library documentation.