package shlex
import "go.mau.fi/util/shlex"
Package shlex implements a simple lexer which splits input into tokens using shell-style rules for quoting and commenting.
The basic use case uses the default ASCII lexer to split a string into sub-strings:
shlex.Split("one \"two three\" four") -> []string{"one", "two three", "four"}
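A complete program using Split might look like the following (a minimal sketch; the module path is taken from the import above):

    package main

    import (
        "fmt"

        "go.mau.fi/util/shlex"
    )

    func main() {
        // The quoted group "two three" is kept together as one element.
        words, err := shlex.Split("one \"two three\" four")
        if err != nil {
            panic(err)
        }
        fmt.Println(words) // [one two three four]
    }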
To process a stream of strings:
    l := NewLexer(os.Stdin)
    for {
        token, err := l.Next()
        if err != nil {
            break // err is io.EOF once the input is exhausted
        }
        // process token
    }
To access the raw token stream (which includes tokens for comments):
    t := NewTokenizer(os.Stdin)
    for {
        token, err := t.Next()
        if err != nil {
            break // err is io.EOF once the input is exhausted
        }
        // process token
    }
Index ¶

- func Split(s string) ([]string, error)
- type Lexer
- func NewLexer(r io.Reader) *Lexer
- func (*Lexer) Next() (string, error)
- type Token
- func (*Token) Equal(b *Token) bool
- type TokenType
- type Tokenizer
- func NewTokenizer(r io.Reader) *Tokenizer
- func (*Tokenizer) Next() (*Token, error)
Functions ¶
func Split ¶
func Split(s string) ([]string, error)
Split partitions a string into a slice of strings using the quoting and commenting rules described above.
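For example, comments are dropped along with the surrounding quotes (a sketch; the shell-style # comment handling is assumed from the lexer rules above, with fmt and the shlex package imported):

    words, err := shlex.Split(`cp "my file.txt" backup/ # trailing comment`)
    if err != nil {
        // handle error
    }
    // Prints [cp my file.txt backup/], assuming the #-comment is skipped
    // by the lexer as described in the package overview.
    fmt.Println(words)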
Types ¶
type Lexer ¶
type Lexer Tokenizer
Lexer turns an input stream into a sequence of tokens. Whitespace and comments are skipped.
func NewLexer ¶
func NewLexer(r io.Reader) *Lexer
NewLexer creates a new lexer from an input stream.
func (*Lexer) Next ¶
func (l *Lexer) Next() (string, error)
Next returns the next word, or an error. If there are no more words, the error will be io.EOF.
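A typical loop drains the lexer until io.EOF (a sketch; strings.NewReader stands in for any io.Reader, and io, strings, and the shlex package are assumed imported):

    l := shlex.NewLexer(strings.NewReader(`tar -xzf "my archive.tgz"`))
    var words []string
    for {
        word, err := l.Next()
        if err == io.EOF {
            break // input exhausted
        } else if err != nil {
            // handle unexpected error
            break
        }
        words = append(words, word)
    }
    // words is now []string{"tar", "-xzf", "my archive.tgz"}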
type Token ¶
type Token struct {
// contains filtered or unexported fields
}
Token is a (type, value) pair representing a lexical token.
func (*Token) Equal ¶
func (a *Token) Equal(b *Token) bool
Equal reports whether tokens a and b are equal. Two tokens are equal if both their types and values are equal. A nil token can never be equal to another token.
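Because a Token's fields are unexported, Equal is the natural way to compare tokens obtained from a Tokenizer. A sketch, assuming both inputs lex to a single word token and that strings, fmt, and the shlex package are imported:

    a, _ := shlex.NewTokenizer(strings.NewReader(`"hello"`)).Next()
    b, _ := shlex.NewTokenizer(strings.NewReader("hello")).Next()
    // Both inputs are assumed to yield a word token with the value
    // "hello", so the (type, value) pairs match.
    fmt.Println(a.Equal(b)) // true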
type TokenType ¶
type TokenType int
TokenType is a top-level token classification: a word, space, comment, or unknown.

const (
    UnknownToken TokenType = iota
    WordToken
    SpaceToken
    CommentToken
)
Classes of lexical token.
type Tokenizer ¶
type Tokenizer struct {
// contains filtered or unexported fields
}
Tokenizer turns an input stream into a sequence of typed tokens.
func NewTokenizer ¶
func NewTokenizer(r io.Reader) *Tokenizer
NewTokenizer creates a new tokenizer from an input stream.
func (*Tokenizer) Next ¶
func (t *Tokenizer) Next() (*Token, error)
Next returns the next token in the stream.
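In contrast to Lexer.Next, the raw stream also yields comment tokens. A sketch, assuming the stream ends with io.EOF like the lexer, with fmt, io, strings, and the shlex package imported:

    t := shlex.NewTokenizer(strings.NewReader("ls -l # list files"))
    for {
        tok, err := t.Next()
        if err == io.EOF {
            break // input exhausted
        } else if err != nil {
            // handle unexpected error
            break
        }
        // Prints each token in turn, including the trailing comment
        // token (assumed raw-stream behavior per the overview above).
        fmt.Printf("%v\n", tok)
    }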
Source Files ¶
shlex.go