spsa alternatives and similar packages
Based on the "Math" category.
Alternatively, view spsa alternatives based on common mentions on social networks and blogs.
- ad (9.8, 4.7): Automatic Differentiation
- vector (9.8, 5.8): An efficient implementation of Int-indexed arrays (both mutable and immutable), with a powerful loop optimisation framework.
- subhask (9.8, 0.0): Type safe interface for working in subcategories of Hask
- hmatrix (9.8, 0.0): Linear algebra and numerical computation
- statistics (9.7, 2.2): A fast, high quality library for computing with statistics in Haskell.
- linear (9.7, 2.6): Low-dimensional linear algebra primitives for Haskell.
- what4 (9.4, 6.1): Symbolic formula representation and solver interaction library
- HerbiePlugin (9.4, 0.0): GHC plugin that improves Haskell code's numerical stability
- hgeometry (9.3, 8.2): A library for computing with geometric objects in Haskell. It defines basic geometric types and primitives, and it implements some geometric data structures and algorithms. The two main focuses are: (1) strong type safety, and (2) implementations of geometric algorithms and data structures with good asymptotic running time guarantees.
- units (9.2, 0.0): The home of the units Haskell package
- grid (9.2, 0.0): Tools for working with regular grids/graphs/lattices.
- algebra (9.2, 0.0): Constructive abstract algebra
- semigroups (9.1, 0.0): Haskell 98 semigroups
- dimensional (9.1, 5.7): Dimensional library variant built on Data Kinds, Closed Type Families, TypeNats (GHC 7.8+).
- computational-algebra (9.0, 0.0): General-purpose computer algebra system as an EDSL in Haskell
- estimator (9.0, 0.0): State-space estimation algorithms and models
- hermit (9.0, 0.0): Haskell Equational Reasoning Model-to-Implementation Tunnel
- hblas (8.9, 0.0): Haskell bindings for BLAS and LAPACK
- matrix (8.9, 0.0): A Haskell native implementation of matrices and their operations.
- mwc-random (8.9, 1.0): A very fast Haskell library for generating high quality pseudo-random numbers.
- numhask (8.8, 2.9): A Haskell numeric prelude, providing a clean structure for numbers and the operations that combine them.
- lambda-calculator (8.8, 4.7): An introduction to the Lambda Calculus
- vector-space (8.7, 0.0): Vector & affine spaces, linear maps, and derivatives
- math-functions (8.6, 1.0): Special mathematical functions
- poly (8.5, 4.7): Fast polynomial arithmetic in Haskell (dense and sparse, univariate and multivariate, usual and Laurent)
- vector-sized (8.5, 0.0): Size-tagged vectors
- rounded (8.5, 0.0): MPFR bindings for Haskell
- deeplearning-hs (8.5, 0.0): Deep Learning in Haskell
- cf (8.5, 0.0): "Exact" real arithmetic for Haskell using continued fractions (not formally proven correct)
- bayes-stack (8.5, 0.0): Framework for Gibbs sampling of probabilistic models
- arrayfire (8.5, 1.7): Haskell bindings to ArrayFire
- optimization (8.4, 0.0): Some numerical optimization methods implemented in Haskell
- rampart (8.3, 0.0): Determine how intervals relate to each other.
- safe-decimal (8.3, 0.0): Safe and very efficient arithmetic operations on fixed decimal point numbers
- equational-reasoning (8.2, 2.9): Agda-style equational reasoning in Haskell
- bed-and-breakfast (8.2, 0.0): Matrix operations in 100% pure Haskell
- monte-carlo (8.2, 0.0): A Monte Carlo monad and transformer for Haskell.
- type-natural (8.2, 0.0): Type-level well-kinded natural numbers.
- sbvPlugin (8.2, 4.2): Formally prove properties of Haskell programs using SBV/SMT.
- intervals (8.1, 0.0): Interval arithmetic
- Decimal (8.1, 0.0): Decimal numbers with variable precision
- simple-smt (8.1, 2.6): A simple way to interact with an SMT solver process.
- monoid-subclasses (7.9, 0.0): Subclasses of Monoid with a solid theoretical foundation and practical purposes
- polynomial (7.9, 0.0): Haskell library for manipulating and evaluating polynomials
- eigen (7.9, 0.0, L2): Haskell binding for the Eigen library. Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
- mltool (7.9, 0.0): Machine Learning Toolbox
- simd (7.8, 0.0): Simple interface to GHC's SIMD vector support
- dimensions (7.8, 0.0): Many-dimensional type-safe numeric ops
- manifold-random (7.7, 4.2): Coordinate-free hypersurfaces as Haskell types
- hTensor (7.7, 0.0): Multidimensional arrays and simple tensor computations
* Code Quality Rankings and insights are calculated and provided by Lumnify. They vary from L1 to L5, with "L5" being the highest.
SPSA in Haskell
Simultaneous Perturbation Stochastic Approximation (SPSA) is a global optimizer for continuous loss functions of any dimension. It has a strong theoretical foundation and very few knobs that must be tuned. While it doesn't get the press of genetic algorithms or other evolutionary computing methods, it performs on the same level and rests on a better theoretical foundation.
Still working on this...
You can install via:

```
cabal install spsa
```
Getting the Source
```
git clone git://github.com/yanatan16/haskell-spsa spsa
```
Set up a sandbox.
The first time through, you need to download and install a ton of dependencies, so hang in there.
```
cd spsa
cabal-dev install \
  --enable-tests \
  --enable-benchmarks \
  --only-dependencies \
  -j
```
The `cabal-dev` command is just a sandboxing wrapper around the `cabal` command. The `-j` flag above tells `cabal` to use all of your CPUs, so even the initial build shouldn't take more than a few minutes.
```
cabal-dev configure --enable-tests --enable-benchmarks
cabal-dev build
```
Note: for development mode, use `--flags=developer`. Development mode allows you to run tests from ghci easily, for some good ole TDD.
Once you've built the code, you can run the entire test suite in a few seconds.
```
dist/build/tests/tests +RTS -N
```
We use the executable directly rather than `cabal-dev tests` because it doesn't pass through options very well. The `+RTS -N` above tells GHC's runtime system to use all available cores. If you want to explore, the tests program (`dist/build/tests/tests`) accepts a `--help` option. Try it out.
Tests From GHCi
You can run all the tests from ghci. Start up the REPL in the sandbox (for example, with `cabal-dev ghci`), then:
```
> import Test.SPSA
> runAllTests
```
Or you can run a single test:

```
> runTest "SPSA*" -- follows test patterns in Test.Framework
> runGroup 4      -- if you know the group number
```
You can run benchmarks similarly to tests, and just like with tests, there's a `--help` option to explore.
SPSA minimizes a loss function, so maximization problems must negate the fitness function. It can optionally use a constraint mapper that is applied after each round of SPSA. SPSA is an iterative (recursive) optimization algorithm that can be viewed as a stochastic extension of gradient descent. Like the Finite Difference Stochastic Approximation (FDSA) method, it approximates the gradient of the loss function being minimized, but it uses Simultaneous Perturbation to do so: only two loss measurements are needed per iteration, regardless of the dimension of the parameter vector, whereas FDSA uses 2p measurements, where p is that dimension. Surprisingly, this shortcut has no ill effect on the convergence rate and only a small effect on the convergence criteria.
Since SPSA uses a small amount of randomness in its gradient estimate, it also produces some noise. This is the good kind of noise, however, because it makes SPSA a global optimizer with the same convergence rate as FDSA!
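To make the two-measurement gradient estimate concrete, here is a minimal, self-contained Haskell sketch of a single SPSA iteration. It is only an illustration under stated assumptions, not this package's API: the names (`spsaStep`, `bernoulliDelta`), the plain-list parameter representation, and the use of the `random` package are choices made for the example.

```haskell
import Control.Monad (replicateM)
import System.Random (randomRIO)

-- The loss function maps a parameter vector to a scalar.
type Loss = [Double] -> Double

-- Draw a Bernoulli +/-1 perturbation vector of dimension p.
bernoulliDelta :: Int -> IO [Double]
bernoulliDelta p = replicateM p pick
  where
    pick = do
      b <- randomRIO (0, 1 :: Int)
      return (if b == 0 then -1 else 1)

-- One SPSA iteration: estimate the gradient from just two loss
-- measurements and step against it, using gains ak and ck.
spsaStep :: Loss -> Double -> Double -> [Double] -> IO [Double]
spsaStep loss ak ck theta = do
  delta <- bernoulliDelta (length theta)
  let plus  = zipWith (\t d -> t + ck * d) theta delta
      minus = zipWith (\t d -> t - ck * d) theta delta
      diff  = loss plus - loss minus            -- only two measurements
      ghat  = map (\d -> diff / (2 * ck * d)) delta
  return (zipWith (\t g -> t - ak * g) theta ghat)
```

For instance, `spsaStep (\[x, y] -> x*x + y*y) 0.1 0.05 [1, 1]` would take one step on a simple two-dimensional quadratic.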
SPSA has fewer tuning knobs than many other global optimization methods, but it still requires some work. The most important parameter is `a` in the gain sequence `a(k) = a / (k + 1 + A) ^ alpha` (see tuning below); it can wildly affect results. If you wish to limit function measurements, remember that SPSA uses two function measurements per iteration, so pass in N / 2 as the number of rounds to run when you can afford N measurements (for example, a budget of 1,000 loss evaluations allows 500 rounds).
For more information on SPSA, please see Spall's papers from his website.
To tune SPSA, I suggest the Semiautomatic Tuning method introduced by Spall in his book Introduction to Stochastic Search and Optimization (ISSO). There are three main knobs: the two gain sequences (a and c) and the perturbation distribution, delta. Delta is best chosen as Bernoulli +/-1, as that is asymptotically optimal (it must not be Normal or Uniform; see ISSO). There are rules about the properties of the gain sequences; the standard form works well and follows:
```
a(k) = a / (k + 1 + A) ^ alpha
c(k) = c / (k + 1) ^ gamma
```
Here, the values alpha and gamma are tied together: the asymptotically optimal choice is alpha = 1 and gamma = 1/6, but alpha = 0.602 and gamma = 0.101 work well for finite cases. A should be about 10% of the total number of iterations, and c should be approximately the standard deviation of the loss function in question. a is the most volatile parameter and must be tuned carefully; in general, pick a such that the first step is not so large that it sends the algorithm off in the wrong direction.
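To make those forms concrete, here is a small Haskell sketch that generates the (a(k), c(k)) gain pairs with the finite-sample exponents; the name `standardGains` and the infinite-list representation are assumptions made for this example, not this package's interface.

```haskell
-- Infinite list of (a_k, c_k) gains following the standard form above,
-- using the finite-sample exponents alpha = 0.602 and gamma = 0.101.
standardGains :: Double  -- ^ a: tune carefully; it sets the first step size
              -> Double  -- ^ c: roughly the standard deviation of the loss
              -> Double  -- ^ A: about 10% of the planned number of iterations
              -> [(Double, Double)]
standardGains a c bigA =
  [ (a / (k + 1 + bigA) ** 0.602, c / (k + 1) ** 0.101)
  | k <- [0 ..] ]
```

For instance, `take 3 (standardGains 0.1 0.05 100)` would give the gains for the first three iterations.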
- SPSA Website
- Introduction to Stochastic Search and Optimization. James Spall. Wiley 2003.
The MIT License found in the LICENSE file.