A logging framework written with flexibility and performance in mind.
```haskell
import System.Logger

main ∷ IO ()
main = withConsoleLogger Info $ do
    logg Info "moin"
    withLabel ("function", "f") f
    logg Warn "tschüss"
  where
    f = withLevel Debug $ do
        logg Debug "debug f"
```
The logging system consists of four main parts:

1. The logging front-end: those types and functions that are used to produce log messages in the code. This includes the `LogFunction` type.
2. The `LoggerCtx`: the context through which the `LogFunction` delivers log messages to the logger back-end.
3. The formatter: a function for serializing log messages.
4. The logger back-end: a callback that is invoked by the `Logger` on each log message. The logger back-end applies the formatting function and delivers the log messages to some sink.
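These four parts can be sketched roughly as plain Haskell types. All names below are illustrative, not the package's actual API:

```haskell
-- Hypothetical sketch of the four parts (illustrative names only,
-- not the package's actual API).
data LogLevel = Error | Warn | Info | Debug
    deriving (Show, Eq, Ord)

-- 1. Front-end: what application code calls to produce a message.
type LogFunction msg = LogLevel -> msg -> IO ()

-- 3. Formatter: serializes a message for a particular sink.
type Formatter msg = LogLevel -> msg -> String

-- 4. Back-end: invoked on each message; applies the formatter and
--    delivers the result to a sink.
type Backend msg = LogLevel -> msg -> IO ()

-- 2. Context: connects the front-end to the back-end.
newtype LoggerCtx msg = LoggerCtx { ctxBackend :: Backend msg }

-- The front-end delivers messages through the context.
logMsg :: LoggerCtx msg -> LogFunction msg
logMsg = ctxBackend

-- A simple line formatter.
lineFormat :: Formatter String
lineFormat level msg = "[" ++ show level ++ "] " ++ msg

-- A console back-end built from a formatter.
consoleBackend :: Formatter msg -> Backend msg
consoleBackend format level msg = putStrLn (format level msg)
```

The point of the sketch is only the shape of the interfaces: the front-end never serializes or performs sink IO itself; it hands the message to whatever back-end the context carries.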
The framework allows you to combine these components in a modular way. The front-end types, the `Logger`, and the back-end callback are represented by types or type classes. The formatter exists only as a concept in the implementation of back-ends. These types and concepts together form the abstract logger interface, which is defined in a module of its own. The package also provides a concrete `Logger` that implements these components in a separate module.
Writing a log message in a service application should introduce only minimal latency overhead in the thread where the log message is written. Processing should be done asynchronously as much as possible. This framework addresses this by doing all serialization and IO in an asynchronous logger back-end callback.
When a log message is produced it is associated with a logger context. The logger context includes
- a log-level threshold,
- a scope, which is a list of key-value labels used to tag log messages with additional information, and
- a policy that specifies how to deal with a situation where the log message pipeline is congested.
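A minimal sketch of such a context, with hypothetical names (the package's actual types may differ):

```haskell
-- Hypothetical sketch of a logger context (illustrative names only).
data LogLevel = Error | Warn | Info | Debug
    deriving (Show, Eq, Ord)

-- Behavior when the log message pipeline is congested.
data LogPolicy
    = LogPolicyBlock    -- wait until the queue has room
    | LogPolicyDiscard  -- silently drop the message
    | LogPolicyRaise    -- throw an exception
    deriving (Show, Eq)

type LogLabel = (String, String)

data LoggerCtx = LoggerCtx
    { ctxThreshold :: LogLevel   -- log-level threshold
    , ctxScope     :: [LogLabel] -- key-value labels tagging messages
    , ctxPolicy    :: LogPolicy  -- congestion policy
    } deriving (Show, Eq)

-- Add a label to the scope for a nested part of the program.
addLabel :: LogLabel -> LoggerCtx -> LoggerCtx
addLabel l ctx = ctx { ctxScope = l : ctxScope ctx }
```

Functions like `withLabel` in the usage example above can be understood as running an action under a context extended this way.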
A log message can be any Haskell type that satisfies the framework's message constraint. Ideally the logged value is computed anyway in the program, so that constructing and forcing it does not introduce any additional overhead.
When a log message is produced it is tagged with a time stamp. This introduces overhead, and there is room for optimization here. A log message also has a log-level. If the log-threshold that is effective at the time a log message is written isn't met, no message is produced.
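The threshold check can be sketched as follows. `checkedLogg` and the other names are hypothetical; the sketch relies on Haskell's laziness, so a message that is dropped by the threshold is never even constructed:

```haskell
-- Hypothetical sketch: messages below the effective threshold are
-- dropped before the message value is ever forced.
import Data.IORef (modifyIORef', newIORef, readIORef)

data LogLevel = Error | Warn | Info | Debug
    deriving (Show, Eq, Ord)

-- Deliver a message only when its level passes the threshold; in the
-- other case the msg thunk is never evaluated, so constructing an
-- expensive message for a disabled level costs nothing.
checkedLogg :: LogLevel -> (msg -> IO ()) -> LogLevel -> msg -> IO ()
checkedLogg threshold deliver level msg
    | level <= threshold = deliver msg
    | otherwise          = pure ()
```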
The logger has an internal log message queue. Further benchmarking should be done to choose the queue implementation that is best suited for this purpose.
The logger asynchronously reads log messages from the queue and calls the back-end callback for each message. Right now the code includes only a single back-end, namely for writing to a handle, but we are going to add more back-ends soon. Due to the modular design, it is possible to combine different back-ends into a single back-end so that messages are processed by more than a single back-end and delivered to more than a single sink.
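The queue/back-end split might be sketched like this in base-only Haskell, using a plain `Chan` as a stand-in for the package's actual queue; all names are illustrative:

```haskell
-- Hypothetical sketch of the queue/back-end split (illustrative
-- names; a plain Chan stands in for the package's actual queue).
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Monad (forever)
import System.IO (Handle, hPutStrLn)

type Backend msg = msg -> IO ()

-- The logger thread: reads messages from the queue and invokes the
-- back-end callback on each one.
runLogger :: Chan msg -> Backend msg -> IO ()
runLogger queue backend = forever (readChan queue >>= backend)

-- Start the logger thread; the returned action is all a producing
-- thread ever runs, so writing a message only enqueues it.
startLogger :: Backend msg -> IO (msg -> IO ())
startLogger backend = do
    queue <- newChan
    _ <- forkIO (runLogger queue backend)
    pure (writeChan queue)

-- A handle back-end, parameterized by a formatting function.
handleBackend :: (msg -> String) -> Handle -> Backend msg
handleBackend format h = hPutStrLn h . format

-- Combine two back-ends: every message is delivered to both sinks.
bothBackends :: Backend msg -> Backend msg -> Backend msg
bothBackends b1 b2 msg = b1 msg >> b2 msg
```

`bothBackends` illustrates why representing back-ends as plain callbacks makes them composable: fan-out to several sinks is ordinary function composition, with no change to the front-end or the queue.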
A back-end includes a formatting function. This is where, besides IO, most processing happens.
Delaying the serialization to the very end of the processing pipeline has the following advantages:
- serialization is done asynchronously,
- serialization is done only for messages that are actually delivered and it is done only for those parts of the message that are relevant for the respective back-end, and
- it is easy to deploy different serialization methods.
For instance, when logging to the console, one usually wants a line-wise, UNIX-tool friendly format. For a cloud service one may choose an efficient binary serialization with a back-end that stores messages in a remote database. There may be circumstances where the data of all or some messages is just aggregated for statistical analysis before the messages are discarded. The modular design, which decouples generation and serialization of log messages, allows one to accommodate these different scenarios by just using different back-ends, possibly parameterized by the formatting function.
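As an illustration, here are two interchangeable formatters over a hypothetical message type; neither is the package's actual format:

```haskell
-- Hypothetical sketch: one message type, two interchangeable
-- formatters for different sinks (illustrative names only).
data LogMsg = LogMsg
    { msgLevel :: String
    , msgText  :: String
    }

-- Line-wise, UNIX-tool friendly format for a console back-end.
unixFormat :: LogMsg -> String
unixFormat m = "[" ++ msgLevel m ++ "] " ++ msgText m

-- A compact key-value format, standing in for e.g. a binary or JSON
-- encoding used by a back-end that stores messages in a database.
kvFormat :: LogMsg -> String
kvFormat m = "level=" ++ msgLevel m ++ " text=" ++ show (msgText m)
```

Because serialization happens only inside the back-end, swapping `unixFormat` for `kvFormat` changes the sink's wire format without touching any code that produces log messages.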