zstd – github.com/DataDog/zstd

package zstd

import "github.com/DataDog/zstd"

Constants

const (
	BestSpeed          = 1
	BestCompression    = 20
	DefaultCompression = 5
)

Defines the fastest (BestSpeed), strongest (BestCompression), and default compression levels for zstd.

Variables

var (
	// ErrEmptyDictionary is returned when the given dictionary is empty
	ErrEmptyDictionary = errors.New("Dictionary is empty")
	// ErrBadDictionary is returned when cannot load the given dictionary
	ErrBadDictionary = errors.New("Cannot load dictionary")
)
var (
	// ErrEmptySlice is returned when there is nothing to compress
	ErrEmptySlice = errors.New("Bytes slice is empty")
)
var ErrNoParallelSupport = errors.New("No parallel support")

Functions

func Compress

func Compress(dst, src []byte) ([]byte, error)

Compress src into dst. If you have a buffer to use, you can pass it to prevent allocation. If it is too small, or if nil is passed, a new buffer will be allocated and returned.
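
A minimal, illustrative sketch (the payload below is a stand-in, not part of the package); passing nil lets Compress allocate the destination buffer:

payload := []byte("example data") // hypothetical input
compressed, err := zstd.Compress(nil, payload)
if err != nil {
	// handle error
}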

func CompressBound

func CompressBound(srcSize int) int

CompressBound returns the worst-case size needed for a destination buffer, which can be used to preallocate a destination buffer or to select a previously allocated buffer from a pool. See zstd.h for the reference implementation of ZSTD_COMPRESSBOUND.
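
A hedged sketch of preallocating a destination buffer with CompressBound (variable names are illustrative):

payload := []byte("example data") // hypothetical input
dst := make([]byte, zstd.CompressBound(len(payload))) // worst-case size
compressed, err := zstd.Compress(dst, payload)
if err != nil {
	// handle error
}
// Per the Compress documentation above, dst is reused because it is large enough.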

func CompressLevel

func CompressLevel(dst, src []byte, level int) ([]byte, error)

CompressLevel is the same as Compress but lets you pass a compression level.

func Decompress

func Decompress(dst, src []byte) ([]byte, error)

Decompress src into dst. If you have a buffer to use, you can pass it to prevent allocation. If it is too small, or if nil is passed, a new buffer will be allocated and returned.
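
A sketch of a one-shot round trip (payload is an illustrative input, not part of the API):

payload := []byte("example data") // hypothetical input
compressed, err := zstd.Compress(nil, payload)
if err != nil {
	// handle error
}
decompressed, err := zstd.Decompress(nil, compressed)
if err != nil {
	// handle error
}
// decompressed now holds the same bytes as payload.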

func DecompressInto

func DecompressInto(dst, src []byte) (int, error)

DecompressInto decompresses src into dst. Unlike Decompress, DecompressInto requires that dst be sufficiently large to hold the decompressed payload. DecompressInto may be used when the caller knows the size of the decompressed payload before attempting decompression.

It returns the number of bytes copied and an error if any is encountered. If dst is too small, DecompressInto errors.
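
A sketch assuming the caller knows the decompressed size ahead of time; knownSize and compressed are assumed inputs, not part of the package:

// knownSize and compressed are assumed to come from the caller,
// e.g. the original length stored alongside the compressed payload.
dst := make([]byte, knownSize)
n, err := zstd.DecompressInto(dst, compressed)
if err != nil {
	// handle error (including the case where dst is too small)
}
decompressed := dst[:n]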

func IsDstSizeTooSmallError

func IsDstSizeTooSmallError(e error) bool

IsDstSizeTooSmallError returns whether the error corresponds to zstd's standard dstSize_tooSmall error.
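
One way this might be combined with DecompressInto; the fallback-to-Decompress policy is an illustrative choice, and dst/compressed are assumed inputs:

_, err := zstd.DecompressInto(dst, compressed) // dst, compressed assumed from context
if zstd.IsDstSizeTooSmallError(err) {
	// dst was too small; fall back to Decompress, which allocates as needed.
	dst, err = zstd.Decompress(nil, compressed)
}
if err != nil {
	// handle error
}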

func NewReader

func NewReader(r io.Reader) io.ReadCloser

NewReader creates a new io.ReadCloser. Reads from the returned ReadCloser read and decompress data from r. It is the caller's responsibility to call Close on the ReadCloser when done. If this is not done, underlying objects in the zstd library will not be freed.
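
A sketch of streaming decompression; the file name is hypothetical:

f, err := os.Open("archive.zst") // hypothetical file name
if err != nil {
	// handle error
}
defer f.Close()

r := zstd.NewReader(f)
defer r.Close() // required so the underlying zstd objects are freed

if _, err := io.Copy(os.Stdout, r); err != nil {
	// handle error
}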

func NewReaderDict

func NewReaderDict(r io.Reader, dict []byte) io.ReadCloser

NewReaderDict is like NewReader but uses a preset dictionary. NewReaderDict ignores the dictionary if it is nil.

Types

type BulkProcessor

type BulkProcessor struct {
	// contains filtered or unexported fields
}

BulkProcessor implements the bulk-processing dictionary API. When compressing multiple messages or blocks using the same dictionary, it is recommended to digest the dictionary only once, since digestion is a costly operation. NewBulkProcessor() creates a state from digesting a dictionary. The resulting state can be used for future compression/decompression operations with very limited startup cost. A BulkProcessor can be created once and shared by multiple threads concurrently, since its usage is read-only. The state is freed when the garbage collector cleans up the BulkProcessor.

func NewBulkProcessor

func NewBulkProcessor(dictionary []byte, compressionLevel int) (*BulkProcessor, error)

NewBulkProcessor creates a new BulkProcessor with a pre-trained dictionary and compression level

func (*BulkProcessor) Compress

func (p *BulkProcessor) Compress(dst, src []byte) ([]byte, error)

Compress compresses `src` into `dst` with the dictionary given when creating the BulkProcessor. If you have a buffer to use, you can pass it to prevent allocation. If it is too small, or if nil is passed, a new buffer will be allocated and returned.

func (*BulkProcessor) Decompress

func (p *BulkProcessor) Decompress(dst, src []byte) ([]byte, error)

Decompress decompresses `src` into `dst` with the dictionary given when creating the BulkProcessor. If you have a buffer to use, you can pass it to prevent allocation. If it is too small, or if nil is passed, a new buffer will be allocated and returned.
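
A sketch of the bulk API; dict (a pre-trained dictionary) and msgs (the payloads) are assumed inputs, not part of the package:

proc, err := zstd.NewBulkProcessor(dict, zstd.DefaultCompression)
if err != nil {
	// handle error (e.g. ErrEmptyDictionary or ErrBadDictionary)
}
for _, msg := range msgs {
	compressed, err := proc.Compress(nil, msg)
	if err != nil {
		// handle error
	}
	decompressed, err := proc.Decompress(nil, compressed)
	if err != nil {
		// handle error
	}
	_ = decompressed
}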

type Ctx

type Ctx interface {
	// Compress src into dst.  If you have a buffer to use, you can pass it to
	// prevent allocation.  If it is too small, or if nil is passed, a new buffer
	// will be allocated and returned.
	Compress(dst, src []byte) ([]byte, error)

	// CompressLevel is the same as Compress but you can pass a compression level
	CompressLevel(dst, src []byte, level int) ([]byte, error)

	// Decompress src into dst.  If you have a buffer to use, you can pass it to
	// prevent allocation.  If it is too small, or if nil is passed, a new buffer
	// will be allocated and returned.
	Decompress(dst, src []byte) ([]byte, error)

	// DecompressInto decompresses src into dst. Unlike Decompress, DecompressInto
	// requires that dst be sufficiently large to hold the decompressed payload.
	// DecompressInto may be used when the caller knows the size of the decompressed
	// payload before attempting decompression.
	//
	// It returns the number of bytes copied and an error if any is encountered. If
	// dst is too small, DecompressInto errors.
	DecompressInto(dst, src []byte) (int, error)
}

func NewCtx

func NewCtx() Ctx

Create a new zstd Context.

When compressing or decompressing many times, it is recommended to allocate a context just once and re-use it for each successive operation. This makes the workload friendlier to the system's memory.

Note: re-using a context is only a speed/resource optimization. It does not change the compression ratio, which remains identical.

Note 2: in multi-threaded environments, use a separate context per thread for parallel execution.
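
A sketch of reusing one context for several one-shot operations; payloads is an assumed slice of inputs:

ctx := zstd.NewCtx()
for _, payload := range payloads {
	compressed, err := ctx.Compress(nil, payload)
	if err != nil {
		// handle error
	}
	decompressed, err := ctx.Decompress(nil, compressed)
	if err != nil {
		// handle error
	}
	_ = decompressed
}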

type ErrorCode

type ErrorCode int

ErrorCode is an error returned by the zstd library.

func (ErrorCode) Error

func (e ErrorCode) Error() string

Error returns the error string given by zstd

type Writer

type Writer struct {
	CompressionLevel int
	// contains filtered or unexported fields
}

Writer is an io.WriteCloser that zstd-compresses its input.

func NewWriter

func NewWriter(w io.Writer) *Writer

NewWriter creates a new Writer with default compression options. Writes to the writer will be written in compressed form to w.
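
A sketch of streaming compression to a file; the file name and payload are illustrative stand-ins:

out, err := os.Create("archive.zst") // hypothetical destination file
if err != nil {
	// handle error
}
defer out.Close()

w := zstd.NewWriter(out)
if _, err := w.Write(payload); err != nil { // payload is an assumed []byte
	// handle error
}
if err := w.Close(); err != nil { // flushes remaining data and frees zstd objects
	// handle error
}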

func NewWriterLevel

func NewWriterLevel(w io.Writer, level int) *Writer

NewWriterLevel is like NewWriter but specifies the compression level instead of assuming default compression.

The level can be DefaultCompression or any integer value between BestSpeed and BestCompression inclusive.

func NewWriterLevelDict

func NewWriterLevelDict(w io.Writer, level int, dict []byte) *Writer

NewWriterLevelDict is like NewWriterLevel but specifies a dictionary to compress with. If the dictionary is empty or nil it is ignored. The dictionary should not be modified until the writer is closed.

func (*Writer) Close

func (w *Writer) Close() error

Close closes the Writer, flushing any unwritten data to the underlying io.Writer and freeing objects, but does not close the underlying io.Writer.

func (*Writer) Flush

func (w *Writer) Flush() error

Flush writes any unwritten data to the underlying io.Writer.

func (*Writer) SetNbWorkers

func (w *Writer) SetNbWorkers(n int) error

SetNbWorkers sets the number of workers used to run the compression in parallel across multiple threads. If n > 1, Write() becomes asynchronous: data is buffered until it is processed. If you call Write() too fast, you may buffer up to roughly the size of your input in memory. Consider calling Flush() periodically if you need to compress a very large file that would not fit entirely in memory. By default only one worker is used.
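
A sketch of multi-threaded compression; the worker count is an arbitrary choice, out and chunks are assumed inputs, and whether parallelism is available depends on how the underlying zstd library was built:

w := zstd.NewWriterLevel(out, zstd.BestSpeed) // out is an assumed io.Writer
if err := w.SetNbWorkers(4); err != nil {
	// the build may lack multi-threading support (see ErrNoParallelSupport)
}
for _, chunk := range chunks { // chunks is an assumed [][]byte
	if _, err := w.Write(chunk); err != nil {
		// handle error
	}
	if err := w.Flush(); err != nil { // bound the amount of buffered data
		// handle error
	}
}
if err := w.Close(); err != nil {
	// handle error
}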

func (*Writer) Write

func (w *Writer) Write(p []byte) (int, error)

Write writes a compressed form of p to the underlying io.Writer.

Source Files

errors.go zstd.go zstd_bulk.go zstd_ctx.go zstd_stream.go
