v4 – github.com/jackc/pgx/v4

package pgx

import "github.com/jackc/pgx/v4"

Package pgx is a PostgreSQL database driver.

pgx provides lower level access to PostgreSQL than the standard database/sql. It remains as similar to the database/sql interface as possible while providing better speed and access to PostgreSQL specific features. Import github.com/jackc/pgx/v4/stdlib to use pgx as a database/sql compatible driver.

Query Interface

pgx implements Query and Scan in the familiar database/sql style.

var sum int32

// Send the query to the server. The returned rows MUST be closed
// before conn can be used again.
rows, err := conn.Query(context.Background(), "select generate_series(1,$1)", 10)
if err != nil {
    return err
}

// rows.Close is called by rows.Next when all rows are read
// or an error occurs in Next or Scan. So it may optionally be
// omitted if nothing in the rows.Next loop can panic. It is
// safe to close rows multiple times.
defer rows.Close()

// Iterate through the result set
for rows.Next() {
    var n int32
    err = rows.Scan(&n)
    if err != nil {
        return err
    }
    sum += n
}

// Any errors encountered by rows.Next or rows.Scan will be returned here
if rows.Err() != nil {
    return rows.Err()
}

// No errors found - do something with sum

pgx also implements QueryRow in the same style as database/sql.

var name string
var weight int64
err := conn.QueryRow(context.Background(), "select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
    return err
}

Use Exec to execute a query that does not return a result set.

commandTag, err := conn.Exec(context.Background(), "delete from widgets where id=$1", 42)
if err != nil {
    return err
}
if commandTag.RowsAffected() != 1 {
    return errors.New("No row found to delete")
}

Connection Pool

See the sub-package pgxpool for a concurrency-safe connection pool.
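
A minimal sketch of acquiring a pool and running a query through it, assuming pgxpool.Connect and the pool's QueryRow mirror the *Conn methods shown above (DATABASE_URL is an illustrative environment variable):

pool, err := pgxpool.Connect(context.Background(), os.Getenv("DATABASE_URL"))
if err != nil {
    return err
}
defer pool.Close()

// The pool hands out connections automatically; Query, QueryRow, and Exec
// work as they do on a single *pgx.Conn.
var greeting string
err = pool.QueryRow(context.Background(), "select 'Hello, world!'").Scan(&greeting)
if err != nil {
    return err
}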

Base Type Mapping

pgx maps all common base types directly between Go and PostgreSQL. In particular:

Go           PostgreSQL
-----------------------
string       varchar
             text

// Integers are automatically converted to any other integer type if
// it can be done without overflow or underflow.
int8
int16        smallint
int32        int
int64        bigint
int
uint8
uint16
uint32
uint64
uint

// Floats are strict and do not automatically convert like integers.
float32      float4
float64      float8

time.Time   date
            timestamp
            timestamptz

[]byte      bytea

Null Mapping

pgx can map nulls in two ways. The first is that the pgtype package provides types that have a data field and a status field; these work in a similar fashion to database/sql. The second is to use a pointer to a pointer (e.g. scanning into a *string): a SQL null sets the pointer to nil.

var foo pgtype.Varchar
var bar *string
err := conn.QueryRow("select foo, bar from widgets where id=$1", 42).Scan(&foo, &bar)
if err != nil {
    return err
}

Array Mapping

pgx maps between int16, int32, int64, float32, float64, and string Go slices and the equivalent PostgreSQL array type. Go slices of native types do not support nulls, so if a PostgreSQL array that contains a null is read into a native Go slice an error will occur. The pgtype package includes many more array types for PostgreSQL types that do not directly map to native Go types.
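
For example, a PostgreSQL array can be scanned directly into a Go slice, and a Go slice can be passed as an array argument (a sketch; the measurements table is hypothetical):

// Scan a PostgreSQL int4[] into a Go slice.
var numbers []int32
err := conn.QueryRow(context.Background(), "select array[1, 2, 3]::int4[]").Scan(&numbers)
if err != nil {
    return err
}

// Pass a Go slice as an array argument (hypothetical table and column).
_, err = conn.Exec(context.Background(), "insert into measurements(readings) values($1)", []float64{1.5, 2.5})
if err != nil {
    return err
}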

JSON and JSONB Mapping

pgx includes built-in support to marshal and unmarshal between Go types and the PostgreSQL JSON and JSONB types.
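
For example, a Go struct bound to a jsonb parameter is marshaled with encoding/json, and a jsonb value scans back into a struct (a sketch; the Widget type is illustrative):

type Widget struct {
    Name   string `json:"name"`
    Weight int64  `json:"weight"`
}

in := Widget{Name: "sprocket", Weight: 10}

var out Widget
err := conn.QueryRow(context.Background(), "select $1::jsonb", in).Scan(&out)
if err != nil {
    return err
}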

Inet and CIDR Mapping

pgx encodes net.IPNet to and from the PostgreSQL inet and cidr types. In addition, as a convenience, pgx will encode a net.IP; it will assume a /32 netmask for IPv4 and a /128 for IPv6.
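
For example (a sketch; the hosts table is hypothetical):

// inet and cidr values scan into net.IPNet.
var network net.IPNet
err := conn.QueryRow(context.Background(), "select '192.168.1.0/24'::cidr").Scan(&network)
if err != nil {
    return err
}

// A net.IP may be passed as an argument for an inet column; pgx applies
// the /32 or /128 netmask described above.
_, err = conn.Exec(context.Background(), "insert into hosts(addr) values($1)", net.ParseIP("10.0.0.1"))
if err != nil {
    return err
}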

Custom Type Support

pgx includes support for the common data types like integers, floats, strings, dates, and times that have direct mappings between Go and SQL. In addition, pgx uses the github.com/jackc/pgx/pgtype library to support more types. See the documentation for that library for instructions on how to implement custom types.

See example_custom_type_test.go for an example of a custom type for the PostgreSQL point type.

pgx also includes support for custom types implementing the database/sql.Scanner and database/sql/driver.Valuer interfaces.

If pgx cannot natively encode a type and that type is a renamed type (e.g. type MyTime time.Time), pgx will attempt to encode the underlying type. While this is usually desired behavior, it can produce surprising behavior if one of the underlying type and the renamed type implements database/sql interfaces and the other implements pgx interfaces. It is recommended that this situation be avoided by implementing pgx interfaces on the renamed type.

Raw Bytes Mapping

[]byte passed as arguments to Query, QueryRow, and Exec are passed unmodified to PostgreSQL.

Transactions

Transactions are started by calling Begin. The second argument, a *TxOptions, can specify the transaction's isolation level, access mode, and deferrable mode; pass nil for the defaults (see the sketch after the example below).

tx, err := conn.Begin(context.Background(), nil)
if err != nil {
    return err
}
// Rollback is safe to call even if the tx is already closed, so if
// the tx commits successfully, this is a no-op
defer tx.Rollback(context.Background())

_, err = tx.Exec(context.Background(), "insert into foo(id) values (1)")
if err != nil {
    return err
}

err = tx.Commit(context.Background())
if err != nil {
    return err
}
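
To set the transaction mode mentioned above, pass a *TxOptions instead of nil (a sketch using the options defined in this package):

tx, err := conn.Begin(context.Background(), &pgx.TxOptions{
    IsoLevel:   pgx.Serializable,
    AccessMode: pgx.ReadOnly,
})
if err != nil {
    return err
}
defer tx.Rollback(context.Background())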

Copy Protocol

Use CopyFrom to efficiently insert multiple rows at a time using the PostgreSQL copy protocol. CopyFrom accepts a CopyFromSource interface. If the data is already in a [][]interface{} use CopyFromRows to wrap it in a CopyFromSource interface. Or implement CopyFromSource to avoid buffering the entire data set in memory.

rows := [][]interface{}{
    {"John", "Smith", int32(36)},
    {"Jane", "Doe", int32(29)},
}

copyCount, err := conn.CopyFrom(
    context.Background(),
    pgx.Identifier{"people"},
    []string{"first_name", "last_name", "age"},
    pgx.CopyFromRows(rows),
)

CopyFrom can be faster than an insert with as few as 5 rows.
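
To stream rows without buffering them all in memory, implement CopyFromSource directly. A minimal sketch (the widgets table and the generated values are illustrative):

type widgetSource struct {
    row   int
    total int
}

// Next advances to the next row and reports whether one is available.
func (s *widgetSource) Next() bool {
    s.row++
    return s.row <= s.total
}

// Values returns the column values for the current row.
func (s *widgetSource) Values() ([]interface{}, error) {
    return []interface{}{fmt.Sprintf("widget %d", s.row), int32(s.row)}, nil
}

// Err reports any error encountered while producing rows.
func (s *widgetSource) Err() error {
    return nil
}

copyCount, err := conn.CopyFrom(
    context.Background(),
    pgx.Identifier{"widgets"},
    []string{"name", "weight"},
    &widgetSource{total: 1000},
)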

Listen and Notify

Use the underlying pgconn.PgConn for listen and notify.
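
A rough sketch, assuming pgconn exposes an OnNotification callback on its Config and a WaitForNotification method on *pgconn.PgConn (verify against the pgconn documentation for this release; the channel name is illustrative):

config, err := pgx.ParseConfig(os.Getenv("DATABASE_URL"))
if err != nil {
    return err
}
// Assumed pgconn.Config field: notifications are delivered to this callback.
config.OnNotification = func(_ *pgconn.PgConn, n *pgconn.Notification) {
    fmt.Println("channel:", n.Channel, "payload:", n.Payload)
}

conn, err := pgx.ConnectConfig(context.Background(), config)
if err != nil {
    return err
}

_, err = conn.Exec(context.Background(), "listen mychannel")
if err != nil {
    return err
}

// Assumed pgconn.PgConn method: block until a notification arrives and
// invoke the callback registered above.
err = conn.PgConn().WaitForNotification(context.Background())
if err != nil {
    return err
}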

Logging

pgx defines a simple logger interface. Connections optionally accept a logger that satisfies this interface. Set LogLevel to control logging verbosity. Adapters for github.com/inconshreveable/log15, github.com/sirupsen/logrus, and the testing log are provided in the log directory.
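
A minimal sketch of a custom Logger wired in through ConnConfig (the stdLogger type is illustrative; prefer the provided adapters in practice):

type stdLogger struct{}

// Log satisfies the pgx.Logger interface by writing to the standard library logger.
func (stdLogger) Log(level pgx.LogLevel, msg string, data map[string]interface{}) {
    log.Printf("[%s] %s %v", level, msg, data)
}

config, err := pgx.ParseConfig(os.Getenv("DATABASE_URL"))
if err != nil {
    return err
}
config.Logger = stdLogger{}
config.LogLevel = pgx.LogLevelInfo

conn, err := pgx.ConnectConfig(context.Background(), config)
if err != nil {
    return err
}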

Index

Constants

const (
	LogLevelTrace = 6
	LogLevelDebug = 5
	LogLevelInfo  = 4
	LogLevelWarn  = 3
	LogLevelError = 2
	LogLevelNone  = 1
)

The values for log levels are chosen such that the zero value means that no log level was specified.

const (
	Serializable    = TxIsoLevel("serializable")
	RepeatableRead  = TxIsoLevel("repeatable read")
	ReadCommitted   = TxIsoLevel("read committed")
	ReadUncommitted = TxIsoLevel("read uncommitted")
)

Transaction isolation levels

const (
	ReadWrite = TxAccessMode("read write")
	ReadOnly  = TxAccessMode("read only")
)

Transaction access modes

const (
	Deferrable    = TxDeferrableMode("deferrable")
	NotDeferrable = TxDeferrableMode("not deferrable")
)

Transaction deferrable modes

const (
	TxStatusInProgress      = 0
	TxStatusCommitFailure   = -1
	TxStatusRollbackFailure = -2
	TxStatusInFailure       = -3
	TxStatusCommitSuccess   = 1
	TxStatusRollbackSuccess = 2
)
const (
	TextFormatCode   = 0
	BinaryFormatCode = 1
)

PostgreSQL format codes

Variables

var ErrDeadConn = errors.New("conn is dead")

ErrDeadConn occurs on an attempt to use a dead connection

var ErrInvalidLogLevel = errors.New("invalid log level")

ErrInvalidLogLevel occurs on attempt to set an invalid log level.

var ErrNoRows = errors.New("no rows in result set")

ErrNoRows occurs when rows are expected but none are returned.

var ErrTLSRefused = pgconn.ErrTLSRefused

ErrTLSRefused occurs when the connection attempt requires TLS and the PostgreSQL server refuses to use TLS

var ErrTxClosed = errors.New("tx is closed")
var ErrTxCommitRollback = errors.New("commit unexpectedly resulted in rollback")

ErrTxCommitRollback occurs when an error has occurred in a transaction and Commit() is called. PostgreSQL accepts COMMIT on aborted transactions, but it is treated as ROLLBACK.

var ErrTxInFailure = errors.New("tx failed")

Types

type Batch

type Batch struct {
	// contains filtered or unexported fields
}

Batch queries are a way of bundling multiple queries together to avoid unnecessary network round trips.

func (*Batch) Queue

func (b *Batch) Queue(query string, arguments []interface{}, parameterOIDs []pgtype.OID, resultFormatCodes []int16)

Queue queues a query to batch b. query can be an SQL query or the name of a prepared statement. parameterOIDs and resultFormatCodes should be nil if query is a prepared statement. Otherwise, parameterOIDs are required if there are parameters and resultFormatCodes are required if there is a result.

type BatchResults

type BatchResults interface {
	// ExecResults reads the results from the next query in the batch as if the query has been sent with Exec.
	ExecResults() (pgconn.CommandTag, error)

	// QueryResults reads the results from the next query in the batch as if the query has been sent with Query.
	QueryResults() (Rows, error)

	// QueryRowResults reads the results from the next query in the batch as if the query has been sent with QueryRow.
	QueryRowResults() Row

	// Close closes the batch operation. Any error that occurred during a batch operation may have made it impossible to
	// resynchronize the connection with the server. In this case the underlying connection will have been closed.
	Close() error
}
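
A sketch of how Batch, SendBatch, and BatchResults fit together, following the Queue signature documented above (the widgets table is hypothetical; parameterOIDs accompany the parameterized query and resultFormatCodes the query returning a result):

batch := &pgx.Batch{}
batch.Queue("insert into widgets(name) values($1)",
    []interface{}{"sprocket"},
    []pgtype.OID{pgtype.VarcharOID},
    nil,
)
batch.Queue("select count(*) from widgets", nil, nil, []int16{pgx.BinaryFormatCode})

br := conn.SendBatch(context.Background(), batch)

if _, err := br.ExecResults(); err != nil {
    return err
}

var count int64
if err := br.QueryRowResults().Scan(&count); err != nil {
    return err
}

if err := br.Close(); err != nil {
    return err
}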

type Conn

type Conn struct {
	ConnInfo *pgtype.ConnInfo
	// contains filtered or unexported fields
}

Conn is a PostgreSQL connection handle. It is not safe for concurrent usage. Use a connection pool (see the pgxpool sub-package) to manage access to multiple database connections from multiple goroutines.

func Connect

func Connect(ctx context.Context, connString string) (*Conn, error)

Connect establishes a connection with a PostgreSQL server with a connection string. See pgconn.Connect for details.

func ConnectConfig

func ConnectConfig(ctx context.Context, connConfig *ConnConfig) (*Conn, error)

ConnectConfig establishes a connection with a PostgreSQL server with a configuration struct.

func (*Conn) Begin

func (c *Conn) Begin(ctx context.Context, txOptions *TxOptions) (*Tx, error)

Begin starts a transaction with txOptions determining the transaction mode. txOptions may be nil. Unlike database/sql, the context only affects the begin command; i.e. there is no auto-rollback on context cancellation.

func (*Conn) CauseOfDeath

func (c *Conn) CauseOfDeath() error

func (*Conn) Close

func (c *Conn) Close(ctx context.Context) error

Close closes a connection. It is safe to call Close on an already closed connection.

func (*Conn) CopyFrom

func (c *Conn) CopyFrom(ctx context.Context, tableName Identifier, columnNames []string, rowSrc CopyFromSource) (int64, error)

CopyFrom uses the PostgreSQL copy protocol to perform bulk data insertion. It returns the number of rows copied and an error.

CopyFrom requires all values use the binary format. Almost all types implemented by pgx use the binary format by default. Types implementing Encoder can only be used if they encode to the binary format.

func (*Conn) Deallocate

func (c *Conn) Deallocate(ctx context.Context, name string) error

Deallocate releases a prepared statement.

func (*Conn) Exec

func (c *Conn) Exec(ctx context.Context, sql string, arguments ...interface{}) (pgconn.CommandTag, error)

Exec executes sql. sql can be either a prepared statement name or an SQL string. arguments should be referenced positionally from the sql string as $1, $2, etc.

func (*Conn) IsAlive

func (c *Conn) IsAlive() bool

func (*Conn) PgConn

func (c *Conn) PgConn() *pgconn.PgConn

PgConn returns the underlying *pgconn.PgConn. This is an escape hatch method that allows lower level access to the PostgreSQL connection than pgx exposes.

It is strongly recommended that the connection be idle (no in-progress queries) before the underlying *pgconn.PgConn is used and the connection must be returned to the same state before any *pgx.Conn methods are again used.

func (*Conn) Ping

func (c *Conn) Ping(ctx context.Context) error

func (*Conn) Prepare

func (c *Conn) Prepare(ctx context.Context, name, sql string) (ps *PreparedStatement, err error)

Prepare creates a prepared statement with name and sql. sql can contain placeholders for bound parameters. These placeholders are referenced positionally as $1, $2, etc.

Prepare is idempotent; i.e. it is safe to call Prepare multiple times with the same name and sql arguments. This allows a code path to Prepare and Query/Exec without concern for whether the statement has already been prepared.

func (*Conn) Query

func (c *Conn) Query(ctx context.Context, sql string, args ...interface{}) (Rows, error)

Query executes sql with args. If there is an error the returned Rows will be returned in an error state. So it is allowed to ignore the error returned from Query and handle it in Rows.

func (*Conn) QueryRow

func (c *Conn) QueryRow(ctx context.Context, sql string, args ...interface{}) Row

QueryRow is a convenience wrapper over Query. Any error that occurs while querying is deferred until calling Scan on the returned Row. That Row will error with ErrNoRows if no rows are returned.

func (*Conn) SendBatch

func (c *Conn) SendBatch(ctx context.Context, b *Batch) BatchResults

SendBatch sends all queued queries to the server at once. All queries are run in an implicit transaction unless explicit transaction control statements are executed.

func (*Conn) SetLogLevel

func (c *Conn) SetLogLevel(lvl LogLevel) (LogLevel, error)

SetLogLevel replaces the current log level and returns the previous log level.

func (*Conn) SetLogger

func (c *Conn) SetLogger(logger Logger) Logger

SetLogger replaces the current logger and returns the previous logger.

type ConnConfig

type ConnConfig struct {
	pgconn.Config
	Logger   Logger
	LogLevel LogLevel

	// PreferSimpleProtocol disables implicit prepared statement usage. By default pgx automatically uses the extended
	// protocol. This can improve performance due to being able to use the binary format. It also does not rely on client
	// side parameter sanitization. However, it does incur two round-trips per query (unless using a prepared statement)
	// and may be incompatible with proxies such as PGBouncer. Setting PreferSimpleProtocol causes the simple protocol to
	// be used by default. The same functionality can be controlled on a per query basis by passing a QuerySimpleProtocol
	// value as a query argument.
	PreferSimpleProtocol bool
}

ConnConfig contains all the options used to establish a connection.

func ParseConfig

func ParseConfig(connString string) (*ConnConfig, error)

type CopyFromSource

type CopyFromSource interface {
	// Next returns true if there is another row and makes the next row data
	// available to Values(). When there are no more rows available or an error
	// has occurred it returns false.
	Next() bool

	// Values returns the values for the current row.
	Values() ([]interface{}, error)

	// Err returns any error that has been encountered by the CopyFromSource. If
	// this is not nil *Conn.CopyFrom will abort the copy.
	Err() error
}

CopyFromSource is the interface used by *Conn.CopyFrom as the source for copy data.

func CopyFromRows

func CopyFromRows(rows [][]interface{}) CopyFromSource

CopyFromRows returns a CopyFromSource interface over the provided rows slice making it usable by *Conn.CopyFrom.

type Identifier

type Identifier []string

Identifier represents a PostgreSQL identifier or name. Identifiers can be composed of multiple parts such as ["schema", "table"] or ["table", "column"].

func (Identifier) Sanitize

func (ident Identifier) Sanitize() string

Sanitize returns a sanitized string safe for SQL interpolation.

type LargeObject

type LargeObject struct {
	// contains filtered or unexported fields
}

A LargeObject is a large object stored on the server. It is only valid within the transaction that it was initialized in. It uses the context it was initialized with for all operations. It implements these interfaces:

io.Writer
io.Reader
io.Seeker
io.Closer

func (*LargeObject) Close

func (o *LargeObject) Close() error

Close closes the large object descriptor.

func (*LargeObject) Read

func (o *LargeObject) Read(p []byte) (int, error)

Read reads up to len(p) bytes into p returning the number of bytes read.

func (*LargeObject) Seek

func (o *LargeObject) Seek(offset int64, whence int) (n int64, err error)

Seek moves the current location pointer to the new location specified by offset.

func (*LargeObject) Tell

func (o *LargeObject) Tell() (n int64, err error)

Tell returns the current read or write location of the large object descriptor.

func (*LargeObject) Truncate

func (o *LargeObject) Truncate(size int64) (err error)

Truncate truncates the large object to size.

func (*LargeObject) Write

func (o *LargeObject) Write(p []byte) (int, error)

Write writes p to the large object and returns the number of bytes written and an error if not all of p was written.

type LargeObjectMode

type LargeObjectMode int32
const (
	LargeObjectModeWrite LargeObjectMode = 0x20000
	LargeObjectModeRead  LargeObjectMode = 0x40000
)

type LargeObjects

type LargeObjects struct {
	// contains filtered or unexported fields
}

LargeObjects is a structure used to access the large objects API. It is only valid within the transaction where it was created.

For more details see: http://www.postgresql.org/docs/current/static/largeobjects.html

func (*LargeObjects) Create

func (o *LargeObjects) Create(ctx context.Context, oid pgtype.OID) (pgtype.OID, error)

Create creates a new large object. If oid is zero, the server assigns an unused OID.

func (*LargeObjects) Open

func (o *LargeObjects) Open(ctx context.Context, oid pgtype.OID, mode LargeObjectMode) (*LargeObject, error)

Open opens an existing large object with the given mode. ctx will also be used for all operations on the opened large object.

func (*LargeObjects) Unlink

func (o *LargeObjects) Unlink(ctx context.Context, oid pgtype.OID) error

Unlink removes a large object from the database.

type LogLevel

type LogLevel int

LogLevel represents the pgx logging level. See LogLevel* constants for possible values.

func LogLevelFromString

func LogLevelFromString(s string) (LogLevel, error)

LogLevelFromString converts log level string to constant

Valid levels:

trace
debug
info
warn
error
none

func (LogLevel) String

func (ll LogLevel) String() string

type Logger

type Logger interface {
	// Log a message at the given level with data key/value pairs. data may be nil.
	Log(level LogLevel, msg string, data map[string]interface{})
}

Logger is the interface used to get logging from pgx internals.

type PrepareExOptions

type PrepareExOptions struct {
	ParameterOIDs []pgtype.OID
}

PrepareExOptions is an option struct that can be passed to PrepareEx

type PreparedStatement

type PreparedStatement struct {
	Name              string
	SQL               string
	FieldDescriptions []pgproto3.FieldDescription
	ParameterOIDs     []pgtype.OID
}

PreparedStatement is a description of a prepared statement

type ProtocolError

type ProtocolError string

ProtocolError occurs when unexpected data is received from PostgreSQL

func (ProtocolError) Error

func (e ProtocolError) Error() string

type QueryArgs

type QueryArgs []interface{}

QueryArgs is a container for arguments to an SQL query. It is helpful when building SQL statements where the number of arguments is variable.

func (*QueryArgs) Append

func (qa *QueryArgs) Append(v interface{}) string

Append adds a value to qa and returns the placeholder value for the argument. e.g. $1, $2, etc.
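
For example, building a WHERE clause whose argument count varies at runtime (a sketch; minWeight and color are hypothetical variables):

args := pgx.QueryArgs{}

sql := "select name from widgets where weight > " + args.Append(minWeight)
if color != "" {
    sql += " and color = " + args.Append(color)
}

rows, err := conn.Query(context.Background(), sql, args...)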

type QueryResultFormats

type QueryResultFormats []int16

QueryResultFormats controls the result format (text=0, binary=1) of a query by result column position.

type QueryResultFormatsByOID

type QueryResultFormatsByOID map[pgtype.OID]int16

QueryResultFormatsByOID controls the result format (text=0, binary=1) of a query by the result column OID.

type QuerySimpleProtocol

type QuerySimpleProtocol bool

QuerySimpleProtocol controls whether the simple or extended protocol is used to send the query.

type Row

type Row interface {
	// Scan works the same as Rows.Scan with the following exceptions. If no
	// rows were found it returns ErrNoRows. If multiple rows are returned it
	// ignores all but the first.
	Scan(dest ...interface{}) error
}

Row is a convenience wrapper over Rows that is returned by QueryRow.

type Rows

type Rows interface {
	// Close closes the rows, making the connection ready for use again. It is safe
	// to call Close after rows is already closed.
	Close()

	Err() error
	FieldDescriptions() []pgproto3.FieldDescription

	// Next prepares the next row for reading. It returns true if there is another
	// row and false if no more rows are available. It automatically closes rows
	// when all rows are read.
	Next() bool

	// Scan reads the values from the current row into dest values positionally.
	// dest can include pointers to core types, values implementing the Scanner
	// interface, []byte, and nil. []byte will skip the decoding process and directly
	// copy the raw bytes received from PostgreSQL. nil will skip the value entirely.
	Scan(dest ...interface{}) error

	// Values returns an array of the row values
	Values() ([]interface{}, error)
}

Rows is the result set returned from *Conn.Query. Rows must be closed before the *Conn can be used again. Rows are closed by explicitly calling Close(), calling Next() until it returns false, or when a fatal error occurs.

func RowsFromResultReader

func RowsFromResultReader(connInfo *pgtype.ConnInfo, rr *pgconn.ResultReader) Rows

RowsFromResultReader wraps a *pgconn.ResultReader in a Rows wrapper so a more convenient scanning interface can be used.

In most cases, the appropriate pgx query methods should be used instead of sending a query with pgconn and reading the results with pgx.

type SerializationError

type SerializationError string

SerializationError occurs on failure to encode or decode a value

func (SerializationError) Error

func (e SerializationError) Error() string

type Tx

type Tx struct {
	// contains filtered or unexported fields
}

Tx represents a database transaction.

All Tx methods return ErrTxClosed if Commit or Rollback has already been called on the Tx.

func (*Tx) Commit

func (tx *Tx) Commit(ctx context.Context) error

Commit commits the transaction.

func (*Tx) CopyFrom

func (tx *Tx) CopyFrom(ctx context.Context, tableName Identifier, columnNames []string, rowSrc CopyFromSource) (int64, error)

CopyFrom delegates to the underlying *Conn

func (*Tx) Err

func (tx *Tx) Err() error

Err returns the final error state, if any, of calling Commit or Rollback.

func (*Tx) Exec

func (tx *Tx) Exec(ctx context.Context, sql string, arguments ...interface{}) (commandTag pgconn.CommandTag, err error)

Exec delegates to the underlying *Conn

func (*Tx) LargeObjects

func (tx *Tx) LargeObjects() LargeObjects

LargeObjects returns a LargeObjects instance for the transaction.

func (*Tx) Prepare

func (tx *Tx) Prepare(ctx context.Context, name, sql string) (*PreparedStatement, error)

Prepare delegates to the underlying *Conn

func (*Tx) Query

func (tx *Tx) Query(ctx context.Context, sql string, args ...interface{}) (Rows, error)

Query delegates to the underlying *Conn

func (*Tx) QueryRow

func (tx *Tx) QueryRow(ctx context.Context, sql string, args ...interface{}) Row

QueryRow delegates to the underlying *Conn

func (*Tx) Rollback

func (tx *Tx) Rollback(ctx context.Context) error

Rollback rolls back the transaction. Rollback will return ErrTxClosed if the Tx is already closed, but is otherwise safe to call multiple times. Hence, a defer tx.Rollback() is safe even if tx.Commit() will be called first in a non-error condition.

func (*Tx) SendBatch

func (tx *Tx) SendBatch(ctx context.Context, b *Batch) BatchResults

SendBatch delegates to the underlying *Conn

func (*Tx) Status

func (tx *Tx) Status() int8

Status returns the status of the transaction from the set of pgx.TxStatus* constants.

type TxAccessMode

type TxAccessMode string

type TxDeferrableMode

type TxDeferrableMode string

type TxIsoLevel

type TxIsoLevel string

type TxOptions

type TxOptions struct {
	IsoLevel       TxIsoLevel
	AccessMode     TxAccessMode
	DeferrableMode TxDeferrableMode
}

Source Files

batch.go conn.go copy_from.go doc.go extended_query_builder.go go_stdlib.go large_objects.go logger.go messages.go rows.go sql.go tx.go values.go

Directories

Path                   Synopsis
examples
examples/chat
examples/todo
examples/url_shortener
internal
log
log/log15adapter       Package log15adapter provides a logger that writes to a github.com/inconshreveable/log15.Logger log.
log/logrusadapter      Package logrusadapter provides a logger that writes to a github.com/sirupsen/logrus.Logger log.
log/testingadapter     Package testingadapter provides a logger that writes to a test or benchmark log.
log/zapadapter         Package zapadapter provides a logger that writes to a go.uber.org/zap.Logger.
log/zerologadapter     Package zerologadapter provides a logger that writes to a github.com/rs/zerolog.
pgmock
pgxpool
stdlib                 Package stdlib is the compatibility layer from pgx to database/sql.
Version: v4.0.0-pre1, published Jun 29, 2019