package brotli
import "github.com/andybalholm/brotli"
Index ¶
- Constants
- func HTTPCompressor(w http.ResponseWriter, r *http.Request) io.WriteCloser
- func NewWriterV2(dst io.Writer, level int) *matchfinder.Writer
- type Encoder
- func (e *Encoder) Encode(dst []byte, src []byte, matches []matchfinder.Match, lastBlock bool) []byte
- func (e *Encoder) Reset()
- type Reader
- func NewReader(src io.Reader) *Reader
- func (r *Reader) Read(p []byte) (n int, err error)
- func (r *Reader) Reset(src io.Reader) error
- type Writer
- func NewWriter(dst io.Writer) *Writer
- func NewWriterLevel(dst io.Writer, level int) *Writer
- func NewWriterOptions(dst io.Writer, options WriterOptions) *Writer
- func (w *Writer) Close() error
- func (w *Writer) Flush() error
- func (w *Writer) Reset(dst io.Writer)
- func (w *Writer) Write(p []byte) (n int, err error)
- type WriterOptions
Examples ¶
Constants ¶
const (
	BestSpeed          = 0
	BestCompression    = 11
	DefaultCompression = 6
)
Functions ¶
func HTTPCompressor ¶
func HTTPCompressor(w http.ResponseWriter, r *http.Request) io.WriteCloser
HTTPCompressor chooses a compression method (brotli, gzip, or none) based on the Accept-Encoding header, sets the Content-Encoding header, and returns a WriteCloser that implements that compression. The Close method must be called before the current HTTP handler returns.
func NewWriterV2 ¶
func NewWriterV2(dst io.Writer, level int) *matchfinder.Writer
NewWriterV2 is like NewWriterLevel, but it uses the new implementation based on the matchfinder package. It currently supports up to level 7; if a higher level is specified, level 7 will be used.
Types ¶
type Encoder ¶
type Encoder struct {
// contains filtered or unexported fields
}
An Encoder implements the matchfinder.Encoder interface, writing in Brotli format.
func (*Encoder) Encode ¶
func (e *Encoder) Encode(dst []byte, src []byte, matches []matchfinder.Match, lastBlock bool) []byte
func (*Encoder) Reset ¶
func (e *Encoder) Reset()
type Reader ¶
type Reader struct {
// contains filtered or unexported fields
}
func NewReader ¶
func NewReader(src io.Reader) *Reader
NewReader creates a new Reader reading the given reader.
func (*Reader) Read ¶
func (r *Reader) Read(p []byte) (n int, err error)
func (*Reader) Reset ¶
func (r *Reader) Reset(src io.Reader) error
Reset discards the Reader's state and makes it equivalent to the result of NewReader, but reading from src instead. This permits reusing a Reader rather than allocating a new one. The returned error is always nil.
type Writer ¶
type Writer struct {
// contains filtered or unexported fields
}
func NewWriter ¶
func NewWriter(dst io.Writer) *Writer
Writes to the returned writer are compressed and written to dst. It is the caller's responsibility to call Close on the Writer when done. Writes may be buffered and not flushed until Close.
func NewWriterLevel ¶
func NewWriterLevel(dst io.Writer, level int) *Writer
NewWriterLevel is like NewWriter but specifies the compression level instead of assuming DefaultCompression. The compression level can be DefaultCompression or any integer value between BestSpeed and BestCompression inclusive.
func NewWriterOptions ¶
func NewWriterOptions(dst io.Writer, options WriterOptions) *Writer
NewWriterOptions is like NewWriter but specifies WriterOptions.
func (*Writer) Close ¶
func (w *Writer) Close() error
Close flushes remaining data to the decorated writer.
func (*Writer) Flush ¶
func (w *Writer) Flush() error
Flush outputs encoded data for all input provided to Write. The resulting output can be decoded to match all input before Flush, but the stream is not yet complete until after Close. Flush has a negative impact on compression.
func (*Writer) Reset ¶
func (w *Writer) Reset(dst io.Writer)
Reset discards the Writer's state and makes it equivalent to the result of NewWriter or NewWriterLevel, but writing to dst instead. This permits reusing a Writer rather than allocating a new one.
Example ¶
Code:
{
	proverbs := []string{
		"Don't communicate by sharing memory, share memory by communicating.\n",
		"Concurrency is not parallelism.\n",
		"The bigger the interface, the weaker the abstraction.\n",
		"Documentation is for users.\n",
	}
	var b bytes.Buffer
	bw := NewWriter(nil)
	br := NewReader(nil)
	for _, s := range proverbs {
		b.Reset()
		// Reset the compressor and encode from some input stream.
		bw.Reset(&b)
		if _, err := io.WriteString(bw, s); err != nil {
			log.Fatal(err)
		}
		if err := bw.Close(); err != nil {
			log.Fatal(err)
		}
		// Reset the decompressor and decode to some output stream.
		if err := br.Reset(&b); err != nil {
			log.Fatal(err)
		}
		if _, err := io.Copy(os.Stdout, br); err != nil {
			log.Fatal(err)
		}
	}
	// Output:
	// Don't communicate by sharing memory, share memory by communicating.
	// Concurrency is not parallelism.
	// The bigger the interface, the weaker the abstraction.
	// Documentation is for users.
}
Output:
Don't communicate by sharing memory, share memory by communicating.
Concurrency is not parallelism.
The bigger the interface, the weaker the abstraction.
Documentation is for users.
func (*Writer) Write ¶
func (w *Writer) Write(p []byte) (n int, err error)
Write implements io.Writer. Flush or Close must be called to ensure that the encoded bytes are actually flushed to the underlying Writer.
type WriterOptions ¶
type WriterOptions struct {
	// Quality controls the compression-speed vs compression-density trade-offs.
	// The higher the quality, the slower the compression. Range is 0 to 11.
	Quality int
	// LGWin is the base 2 logarithm of the sliding window size.
	// Range is 10 to 24. 0 indicates automatic configuration based on Quality.
	LGWin int
}
WriterOptions configures Writer.
Source Files ¶
backward_references.go backward_references_hq.go bit_cost.go bit_reader.go bitwriter.go block_splitter.go block_splitter_command.go block_splitter_distance.go block_splitter_literal.go brotli_bit_stream.go cluster.go cluster_command.go cluster_distance.go cluster_literal.go command.go compress_fragment.go compress_fragment_two_pass.go constants.go context.go decode.go dictionary.go dictionary_hash.go encode.go encoder.go encoder_dict.go entropy_encode.go entropy_encode_static.go fast_log.go find_match_length.go h10.go h5.go h6.go hash.go hash_composite.go hash_forgetful_chain.go hash_longest_match_quickly.go hash_rolling.go histogram.go http.go huffman.go literal_cost.go memory.go metablock.go metablock_command.go metablock_distance.go metablock_literal.go params.go platform.go prefix.go prefix_dec.go quality.go reader.go ringbuffer.go state.go static_dict.go static_dict_lut.go symbol_list.go transform.go utf8_util.go util.go write_bits.go writer.go
Directories ¶
Path | Synopsis |
---|---|
matchfinder | The matchfinder package defines reusable components for data compression. |
- Version
- v1.1.1 (latest)
- Published
- Jul 29, 2024
- Platform
- js/wasm
- Imports
- 12 packages