package lex
import "cmd/asm/internal/lex"
Package lex implements lexical analysis for the assembler.
Index ¶
- func HistLine() int32
- func InitHist()
- func IsRegisterShift(r ScanToken) bool
- type Input
- func NewInput(name string) *Input
- func (in *Input) Close()
- func (in *Input) Error(args ...interface{})
- func (in *Input) Next() ScanToken
- func (in *Input) Push(r TokenReader)
- func (in *Input) Text() string
- type Macro
- type ScanToken
- type Slice
- func NewSlice(fileName string, line int, tokens []Token) *Slice
- func (s *Slice) Close()
- func (s *Slice) Col() int
- func (s *Slice) File() string
- func (s *Slice) Line() int
- func (s *Slice) Next() ScanToken
- func (s *Slice) SetPos(line int, file string)
- func (s *Slice) Text() string
- type Stack
- func (s *Stack) Close()
- func (s *Stack) Col() int
- func (s *Stack) File() string
- func (s *Stack) Line() int
- func (s *Stack) Next() ScanToken
- func (s *Stack) Push(tr TokenReader)
- func (s *Stack) SetPos(line int, file string)
- func (s *Stack) Text() string
- type Token
- func Make(token ScanToken, text string) Token
- func Tokenize(str string) []Token
- func (l Token) String() string
- type TokenReader
- func NewLexer(name string, ctxt *obj.Link) TokenReader
- type Tokenizer
- func NewTokenizer(name string, r io.Reader, file *os.File) *Tokenizer
- func (t *Tokenizer) Close()
- func (t *Tokenizer) Col() int
- func (t *Tokenizer) File() string
- func (t *Tokenizer) Line() int
- func (t *Tokenizer) Next() ScanToken
- func (t *Tokenizer) SetPos(line int, file string)
- func (t *Tokenizer) Text() string
Functions ¶
func HistLine ¶
func HistLine() int32
HistLine reports the cumulative source line number of the token, for use in the Prog structure for the linker. (It's always handling the instruction from the current lex line.) It returns int32 because that's what type ../asm prefers.
func InitHist ¶
func InitHist()
InitHist sets the line count to 1, for reproducible testing.
func IsRegisterShift ¶
func IsRegisterShift(r ScanToken) bool
IsRegisterShift reports whether the token is one of the ARM register shift operators.
Types ¶
type Input ¶
type Input struct {
	Stack
	// contains filtered or unexported fields
}
Input is the main input: a stack of readers and some macro definitions. It also handles #include processing (by pushing onto the input stack) and parses and instantiates macro definitions.
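As a rough, hedged sketch of how these pieces fit together (the package is internal to cmd/asm, so only code inside that tree can import it), the following drives an Input by hand over an in-memory source, passing nil for the *os.File as the package's own tests do; the assembler itself uses NewLexer, which opens the named file and performs this wiring. The file name and source text here are invented for illustration.

package main

import (
	"fmt"
	"strings"
	"text/scanner"

	"cmd/asm/internal/lex" // internal: importable only from within cmd/asm
)

func main() {
	const name = "example.s" // hypothetical source name, used only for positions
	src := "MOVQ $1, AX\n"
	in := lex.NewInput(name)
	// Push a Tokenizer for the source text; NewLexer does this for a real file.
	in.Push(lex.NewTokenizer(name, strings.NewReader(src), nil))
	// Drain the token stream; #include processing and macro expansion
	// happen inside Next.
	for tok := in.Next(); tok != scanner.EOF; tok = in.Next() {
		fmt.Printf("%s:%d: %q\n", in.File(), in.Line(), in.Text())
	}
	in.Close()
}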
func NewInput ¶
func NewInput(name string) *Input
NewInput returns a new Input for the named file.
func (*Input) Close ¶
func (in *Input) Close()
func (*Input) Error ¶
func (in *Input) Error(args ...interface{})
func (*Input) Next ¶
func (*Input) Push ¶
func (in *Input) Push(r TokenReader)
func (*Input) Text ¶
type Macro ¶
type Macro struct {
// contains filtered or unexported fields
}
A Macro represents the definition of a #defined macro.
type ScanToken ¶
type ScanToken rune
A ScanToken represents an input item. It is a simple wrapping of rune, as returned by text/scanner.Scanner, plus a couple of extra values.
const (
	// Asm defines some two-character lexemes. We make up
	// a rune/ScanToken value for them - ugly but simple.
	LSH ScanToken = -1000 - iota // << Left shift.
	RSH                          // >> Logical right shift.
	ARR                          // -> Used on ARM for shift type 3, arithmetic right shift.
	ROT                          // @> Used on ARM for shift type 4, rotate right.
)
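For illustration only, a hedged sketch of branching on these values; classify is a made-up helper, not part of the package.

package main

import (
	"fmt"
	"text/scanner"

	"cmd/asm/internal/lex"
)

// classify is a hypothetical helper describing one token from the stream.
func classify(tok lex.ScanToken) string {
	switch {
	case lex.IsRegisterShift(tok):
		// One of LSH (<<), RSH (>>), ARR (->), ROT (@>).
		return "ARM register shift operator"
	case tok == scanner.EOF:
		return "end of input"
	default:
		// Ordinary tokens are text/scanner values (Ident, Int, '(' and so on).
		return "token " + tok.String()
	}
}

func main() {
	for _, t := range lex.Tokenize("R1<<R2") {
		fmt.Println(classify(t.ScanToken))
	}
}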
func (ScanToken) String ¶
type Slice ¶
type Slice struct {
// contains filtered or unexported fields
}
A Slice reads from a slice of Tokens.
func NewSlice ¶
func (*Slice) Close ¶
func (s *Slice) Close()
func (*Slice) Col ¶
func (*Slice) File ¶
func (*Slice) Line ¶
func (*Slice) Next ¶
func (*Slice) SetPos ¶
func (*Slice) Text ¶
type Stack ¶
type Stack struct {
// contains filtered or unexported fields
}
A Stack is a stack of TokenReaders. As the top TokenReader hits EOF, it resumes reading the next one down.
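A hedged sketch of the pop-on-EOF behavior, using a zero-value Stack and Slices built with Tokenize; the file names and instructions are invented for illustration.

package main

import (
	"fmt"
	"text/scanner"

	"cmd/asm/internal/lex"
)

func main() {
	var s lex.Stack
	// The reader pushed last is read first; when it hits EOF the Stack
	// silently resumes the one below it.
	s.Push(lex.NewSlice("outer.s", 1, lex.Tokenize("MOVQ $1, AX")))
	s.Push(lex.NewSlice("inner.s", 1, lex.Tokenize("NOP")))
	for tok := s.Next(); tok != scanner.EOF; tok = s.Next() {
		fmt.Printf("%s: %s\n", s.File(), s.Text())
	}
}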
func (*Stack) Close ¶
func (s *Stack) Close()
func (*Stack) Col ¶
func (*Stack) File ¶
func (*Stack) Line ¶
func (*Stack) Next ¶
func (*Stack) Push ¶
func (s *Stack) Push(tr TokenReader)
Push adds tr to the top (end) of the input stack. (Popping happens automatically.)
func (*Stack) SetPos ¶
func (*Stack) Text ¶
type Token ¶
type Token struct {
	ScanToken
	// contains filtered or unexported fields
}
A Token is a scan token plus its string value. A macro is stored as a sequence of Tokens with spaces stripped.
func Make ¶
func Make(token ScanToken, text string) Token
Make returns a Token with the given rune (ScanToken) and text representation.
func Tokenize ¶
func Tokenize(str string) []Token
Tokenize turns a string into a list of Tokens; used to parse the -D flag and in tests.
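A small hedged sketch of both helpers; the definition string and the hand-built token are illustrative only.

package main

import (
	"fmt"

	"cmd/asm/internal/lex"
)

func main() {
	// A -D style definition: identifier, '=', value.
	for _, t := range lex.Tokenize("VERSION=5") {
		fmt.Printf("%v %q\n", t.ScanToken, t.String())
	}

	// Make builds a single Token by hand, here the two-character
	// left-shift lexeme.
	lsh := lex.Make(lex.LSH, "<<")
	fmt.Println(lsh.String())
}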
func (Token) String ¶
type TokenReader ¶
type TokenReader interface {
	// Next returns the next token.
	Next() ScanToken
	// The following methods all refer to the most recent token returned by Next.
	// Text returns the original string representation of the token.
	Text() string
	// File reports the source file name of the token.
	File() string
	// Line reports the source line number of the token.
	Line() int
	// Col reports the source column number of the token.
	Col() int
	// SetPos sets the file and line number.
	SetPos(line int, file string)
	// Close does any teardown required.
	Close()
}
A TokenReader is like a reader, but returns lex tokens of type Token. It also can tell you what the text of the most recently returned token is, and where it was found. The underlying scanner elides all spaces except newline, so the input looks like a stream of Tokens; original spacing is lost but we don't need it.
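Because Input, Slice, Stack, and Tokenizer all implement this interface, a consumer can be written once against it. A hedged sketch follows; drain is a made-up helper and the inputs are invented for illustration.

package main

import (
	"fmt"
	"strings"
	"text/scanner"

	"cmd/asm/internal/lex"
)

// drain prints every token from any TokenReader until EOF, then closes it.
func drain(tr lex.TokenReader) {
	for tok := tr.Next(); tok != scanner.EOF; tok = tr.Next() {
		fmt.Printf("%s:%d:%d: %q\n", tr.File(), tr.Line(), tr.Col(), tr.Text())
	}
	tr.Close()
}

func main() {
	drain(lex.NewTokenizer("mem.s", strings.NewReader("RET\n"), nil))
	drain(lex.NewSlice("slice.s", 1, lex.Tokenize("ADDQ $8, SP")))
}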
func NewLexer ¶
func NewLexer(name string, ctxt *obj.Link) TokenReader
NewLexer returns a lexer for the named file and the given link context.
type Tokenizer ¶
type Tokenizer struct {
// contains filtered or unexported fields
}
A Tokenizer is a simple wrapping of text/scanner.Scanner, configured for our purposes and made a TokenReader. It forms the lowest level, turning text from readers into tokens.
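A hedged sketch of direct use over an in-memory reader, passing nil for the *os.File since there is no real file (the pattern the package's tests appear to use); newlines come back as ordinary '\n' tokens because only other whitespace is elided. The buffer name and source are invented.

package main

import (
	"fmt"
	"strings"
	"text/scanner"

	"cmd/asm/internal/lex"
)

func main() {
	src := "MOVQ $1, AX\nRET\n"
	t := lex.NewTokenizer("buf.s", strings.NewReader(src), nil)
	for tok := t.Next(); tok != scanner.EOF; tok = t.Next() {
		if tok == '\n' {
			fmt.Println("(newline)")
			continue
		}
		fmt.Printf("%s:%d col %d: %q\n", t.File(), t.Line(), t.Col(), t.Text())
	}
	t.Close()
}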
func NewTokenizer ¶
func (*Tokenizer) Close ¶
func (t *Tokenizer) Close()
func (*Tokenizer) Col ¶
func (*Tokenizer) File ¶
func (*Tokenizer) Line ¶
func (*Tokenizer) Next ¶
func (*Tokenizer) SetPos ¶
func (*Tokenizer) Text ¶
Source Files ¶
input.go lex.go slice.go stack.go tokenizer.go