sparse-linear-algebra alternatives and similar packages
Based on the "Numeric" category.
Alternatively, view sparse-linear-algebra alternatives based on common mentions on social networks and blogs.

hyperloglog
A constant-memory approximation of set membership
log-domain
This package provides log-domain floats, doubles and complex numbers for Haskell.
tdigest
Online accumulation of rank-based statistics such as quantiles and trimmed means
limp
Ideally, this will become a pure Haskell library for Linear Integer/Mixed Programming
mighty-metropolis
The classic Metropolis sampling algorithm.
microlens-aeson
Lenses and Traversals for Aeson, based on Microlens.
mwc-probability-transition
A Markov stochastic transition operator with logging
hasty-hamiltonian
Speedy gradient-based traversal through parameter space.
repr
Render overloaded expressions to their textual representation.
roman-numerals
Parsing and pretty-printing of Roman numerals
numeric-ranges
A DSL written in Haskell that provides a set of tools for working with both open and closed numeric intervals.
nat-sized-numbers
Arbitrarily sized wrap-around numeric types, useful for testing.
hmatrix-nlopt
High-level hmatrix-compatible bindings to NLOPT from Haskell
nlopt-haskell
Low-level Haskell bindings to the NLOPT library
approx
A library that provides an easy-to-use, reasonable way of emulating approximate values in Haskell.
simple-units
Haskell library for performing arithmetic with SI units, with type-checked dimensional analysis
affine-invariant-ensemble-mcmc
General-purpose sampling
gutenberg-fibonaccis
The first 1001 Fibonacci numbers, retrieved from the Gutenberg Project.
README
sparse-linear-algebra
Numerical computation in native Haskell
This library provides common numerical analysis functionality, without requiring any external bindings. It aims to serve as an experimental platform for scientific computation in a purely functional setting.
State of the library
Mar 14, 2018: Mostly functional, but there are still a few (documented) bugs. Complex number support is still incomplete, so users are advised not to rely on it for the time being. The issues related to Complex number handling are tracked in #51, #12 and #30.
News
Oct 7, 2017: The library is evolving in a number of ways, to reflect performance observations and user requests:
- The typeclasses and instances for primitive types will become sparse-linear-algebra-core, along with a typeclass-oriented reformulation of the numerical algorithms that used to depend on the nested IntMap representation. This will let other developers build on top of this library, in the spirit of vector-space and linear.
- The vector-based backend is being reworked.
- An accelerate-based backend is under development [6, 7].
Contents
Iterative linear solvers (<\>)
- Generalized Minimal Residual (GMRES) (non-Hermitian systems)
- BiConjugate Gradient (BCG)
- Conjugate Gradient Squared (CGS)
- BiConjugate Gradient Stabilized (BiCGSTAB) (non-Hermitian systems)
- Moore-Penrose pseudoinverse (pinv) (rectangular systems)
Direct linear solvers
- LU-based (luSolve); forward and backward substitution (triLowerSolve, triUpperSolve)
Matrix factorization algorithms
- QR (qr)
- LU (lu)
- Cholesky (chol)
- Arnoldi iteration (arnoldi)
Eigenvalue algorithms
- QR (eigsQR)
- QR-Arnoldi (eigsArnoldi)
Utilities: vector and matrix norms, matrix condition number, Givens rotation, Householder reflection
Predicates: matrix orthogonality test (A^T A ~= I)
Under development
Eigenvalue algorithms
- Rayleigh quotient iteration (eigRayleigh)
Matrix factorization algorithms
- Golub-Kahan-Lanczos bidiagonalization (gklBidiag)
- Singular value decomposition (SVD)
Iterative linear solvers
- Transpose-Free Quasi-Minimal Residual (TFQMR)
Examples
The module Numeric.LinearAlgebra.Sparse
contains the user interface.
Creation of sparse data
The fromListSM
function creates a sparse matrix from a collection of its entries in (row, column, value) format. This is its type signature:
fromListSM :: Foldable t => (Int, Int) -> t (IxRow, IxCol, a) -> SpMatrix a
and, in case you have a running GHCi session (the terminal is denoted from now on by λ>
), you can try something like this:
λ> amat = fromListSM (3,3) [(0,0,2),(1,0,4),(1,1,3),(1,2,2),(2,2,5)] :: SpMatrix Double
Similarly, fromListSV
is used to create sparse vectors:
fromListSV :: Int -> [(Int, a)] -> SpVector a
Alternatively, the user can copy the contents of a list to a (dense) SpVector using
fromListDenseSV :: Int -> [a] -> SpVector a
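For example, a sparse vector of length 3 with two nonzero entries can be built as follows (a sketch, assuming a GHCi session with the library loaded):

```haskell
λ> bvec = fromListSV 3 [(0,3),(2,5)] :: SpVector Double
```

The entry at index 1 is implicitly zero and takes up no storage.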
Displaying sparse data
Both sparse vectors and matrices can be pretty-printed using prd:
λ> prd amat
( 3 rows, 3 columns ) , 5 NZ ( density 55.556 % )
2.00 , _ , _
4.00 , 3.00 , 2.00
_ , _ , 5.00
Note (sparse storage): sparse data should only contain nonzero entries, so as not to waste memory and computation.
Note (approximate output): prd
rounds the results to two significant digits, and switches to scientific notation for large or small values. Moreover, values which are indistinguishable from 0 (see the Numeric.Eps
module) are printed as _
.
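For instance, the zero positions of a sparse vector are rendered as _ (a sketch, assuming a GHCi session with the library loaded):

```haskell
λ> prd (fromListSV 4 [(0,1),(2,3)] :: SpVector Double)
```

The two stored entries print as 1.00 and 3.00, while the remaining positions appear as _.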
Matrix factorizations, matrix product
There are a few common matrix factorizations available; in the following example we compute the LU factorization of matrix amat and verify it with the matrix-matrix product ## of its factors:
λ> (l, u) < lu amat
λ> prd $ l ## u
( 3 rows, 3 columns ) , 9 NZ ( density 100.000 % )
2.00 , _ , _
4.00 , 3.00 , 2.00
_ , _ , 5.00
Notice that the result is dense, i.e. certain entries are numerically zero but have been inserted into the result along with all the others (thus taking up memory!).
To preserve sparsity, we can use a sparsifying matrix-matrix product #~#, which filters out all the elements x for which x <= eps, where eps (defined in Numeric.Eps) depends on the numerical type used (e.g. it is 10^-6 for Float and 10^-12 for Double).
λ> prd $ l #~# u
( 3 rows, 3 columns ) , 5 NZ ( density 55.556 % )
2.00 , _ , _
4.00 , 3.00 , 2.00
_ , _ , 5.00
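Conceptually, the sparsifying step just drops near-zero entries from the stored (index, value) pairs. A minimal plain-Haskell sketch of that filtering, independent of the library's actual implementation:

```haskell
-- Keep only the entries whose magnitude exceeds the tolerance,
-- mimicking the sparsifying behaviour described above.
-- This is a conceptual sketch, not the library's implementation.
sparsify :: Double -> [((Int, Int), Double)] -> [((Int, Int), Double)]
sparsify eps = filter (\(_, x) -> abs x > eps)

main :: IO ()
main = print (sparsify 1e-12 [((0, 0), 2.0), ((0, 1), 1e-15), ((1, 1), 5.0)])
-- the entry at (0,1) is dropped as numerically zero
```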
A matrix is transposed using the transpose function.
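As a quick sketch (assuming the amat matrix defined earlier is in scope):

```haskell
λ> prd $ transpose amat
```

This prints the 3-by-3 matrix with its rows and columns exchanged.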
Sometimes we need to compute matrix-matrix transpose products, which is why the library offers the infix operators #^# (i.e. matrix transpose * matrix) and ##^ (matrix * matrix transpose):
λ> amat' = amat #^# amat
λ> prd amat'
( 3 rows, 3 columns ) , 9 NZ ( density 100.000 % )
20.00 , 12.00 , 8.00
12.00 , 9.00 , 6.00
8.00 , 6.00 , 29.00
λ> lc < chol amat'
λ> prd $ lc ##^ lc
( 3 rows, 3 columns ) , 9 NZ ( density 100.000 % )
20.00 , 12.00 , 8.00
12.00 , 9.00 , 10.80
8.00 , 10.80 , 29.00
In the last example we have also shown the Cholesky decomposition (M = L L^T, where L is a lower-triangular matrix), which is only defined for symmetric positive-definite matrices.
Linear systems
Large sparse linear systems are best solved with iterative methods. sparse-linear-algebra provides a selection of these via the <\> operator (inspired by Matlab's "backslash" function; it currently uses GMRES by default):
λ> b = fromListDenseSV 3 [3,2,5] :: SpVector Double
λ> x < amat <\> b
λ> prd x
( 3 elements ) , 3 NZ ( density 100.000 % )
1.50 , 2.00 , 1.00
The result can be verified by computing the matrix-vector action amat #> x, which should (ideally) be very close to the right-hand side b:
λ> prd $ amat #> x
( 3 elements ) , 3 NZ ( density 100.000 % )
3.00 , 2.00 , 5.00
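One way to quantify the agreement is to inspect the residual of amat #> x against b; a sketch, assuming a vector-space-style subtraction operator ^-^ is in scope (the exact operator name is an assumption here):

```haskell
λ> prd $ (amat #> x) ^-^ b
```

All entries of the residual should be indistinguishable from zero (and hence printed as _).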
The library also provides a forward-backward substitution solver (luSolve) based on a triangular factorization of the system matrix (usually LU). This should be the preferred method for solving smaller, dense systems. Using the LU factors defined previously, we can cross-verify the two solution methods:
λ> x' < luSolve l u b
λ> prd x'
( 3 elements ) , 3 NZ ( density 100.000 % )
1.50 , 2.00 , 1.00
License
GPL-3, see LICENSE
Credits
Inspired by
linear : https://hackage.haskell.org/package/linear
vector-space : https://hackage.haskell.org/package/vector-space
sparse-linalg : https://github.com/laughedelic/sparse-linalg
References
[1] Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed., 2000
[2] G.H. Golub and C.F. Van Loan, Matrix Computations, 3rd ed., 1996
[3] T.A. Davis, Direct Methods for Sparse Linear Systems, 2006
[4] L.N. Trefethen, D. Bau, Numerical Linear Algebra, SIAM, 1997
[5] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in Fortran 77, 2nd ed., 1992
[6] M. M. T. Chakravarty et al., Accelerating Haskell array codes with multicore GPUs, DAMP '11
[7] accelerate