package storage
import "cloud.google.com/go/bigquery/storage/apiv1"
Package storage is an auto-generated package for the BigQuery Storage API.
NOTE: This package is in alpha. It is not stable, and is likely to change.
Use of Context
The ctx passed to NewClient is used for authentication requests and for creating the underlying connection, but is not used for subsequent calls. Individual methods on the client use the ctx given to them.
To close the open connection, use the Close() method.
For information about setting deadlines, reusing contexts, and more, please visit godoc.org/cloud.google.com/go.
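A minimal sketch of this lifecycle, assuming default application credentials are available in the environment:

	package main

	import (
		"context"

		storage "cloud.google.com/go/bigquery/storage/apiv1"
	)

	func main() {
		// This ctx is used for authentication and to dial the underlying
		// connection; later RPCs use the ctx passed to each method.
		ctx := context.Background()
		c, err := storage.NewBigQueryReadClient(ctx)
		if err != nil {
			// TODO: Handle error.
		}
		// Release the connection when the client is no longer needed.
		defer c.Close()
	}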
Index ¶
- func DefaultAuthScopes() []string
- type BigQueryReadCallOptions
- type BigQueryReadClient
- func NewBigQueryReadClient(ctx context.Context, opts ...option.ClientOption) (*BigQueryReadClient, error)
- func (c *BigQueryReadClient) Close() error
- func (c *BigQueryReadClient) Connection() *grpc.ClientConn
- func (c *BigQueryReadClient) CreateReadSession(ctx context.Context, req *storagepb.CreateReadSessionRequest, opts ...gax.CallOption) (*storagepb.ReadSession, error)
- func (c *BigQueryReadClient) ReadRows(ctx context.Context, req *storagepb.ReadRowsRequest, opts ...gax.CallOption) (storagepb.BigQueryRead_ReadRowsClient, error)
- func (c *BigQueryReadClient) SplitReadStream(ctx context.Context, req *storagepb.SplitReadStreamRequest, opts ...gax.CallOption) (*storagepb.SplitReadStreamResponse, error)
Examples ¶
- BigQueryReadClient.CreateReadSession
- BigQueryReadClient.SplitReadStream
Functions ¶
func DefaultAuthScopes ¶
func DefaultAuthScopes() []string
DefaultAuthScopes reports the default set of authentication scopes to use with this package.
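For illustration only, these scopes can be inspected or passed explicitly via option.WithScopes (from google.golang.org/api/option); doing so is equivalent to the client's default behavior:

	package main

	import (
		"context"
		"fmt"

		storage "cloud.google.com/go/bigquery/storage/apiv1"
		"google.golang.org/api/option"
	)

	func main() {
		scopes := storage.DefaultAuthScopes()
		fmt.Println(scopes) // the OAuth2 scopes requested by default

		// Passing the same scopes explicitly changes nothing; shown only
		// to illustrate how the value is consumed.
		ctx := context.Background()
		c, err := storage.NewBigQueryReadClient(ctx, option.WithScopes(scopes...))
		if err != nil {
			// TODO: Handle error.
		}
		defer c.Close()
	}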
Types ¶
type BigQueryReadCallOptions ¶
type BigQueryReadCallOptions struct {
	CreateReadSession []gax.CallOption
	ReadRows          []gax.CallOption
	SplitReadStream   []gax.CallOption
}
BigQueryReadCallOptions contains the retry settings for each method of BigQueryReadClient.
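A sketch of overriding the retry settings for a single method after construction, assuming the gax-go/v2 and grpc codes packages; the codes and backoff values below are illustrative, not the package defaults:

	package main

	import (
		"context"
		"time"

		storage "cloud.google.com/go/bigquery/storage/apiv1"
		gax "github.com/googleapis/gax-go/v2"
		"google.golang.org/grpc/codes"
	)

	func main() {
		ctx := context.Background()
		c, err := storage.NewBigQueryReadClient(ctx)
		if err != nil {
			// TODO: Handle error.
		}
		defer c.Close()

		// Override the retry policy used by CreateReadSession only.
		c.CallOptions.CreateReadSession = []gax.CallOption{
			gax.WithRetry(func() gax.Retryer {
				return gax.OnCodes([]codes.Code{
					codes.Unavailable,
					codes.DeadlineExceeded,
				}, gax.Backoff{
					Initial:    100 * time.Millisecond,
					Max:        60 * time.Second,
					Multiplier: 1.3,
				})
			}),
		}
	}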
type BigQueryReadClient ¶
type BigQueryReadClient struct {
	// The call options for this service.
	CallOptions *BigQueryReadCallOptions
	// contains filtered or unexported fields
}
BigQueryReadClient is a client for interacting with BigQuery Storage API.
Methods, except Close, may be called concurrently. However, fields must not be modified concurrently with method calls.
func NewBigQueryReadClient ¶
func NewBigQueryReadClient(ctx context.Context, opts ...option.ClientOption) (*BigQueryReadClient, error)
NewBigQueryReadClient creates a new BigQuery Read client.
BigQuery Read API.
The Read API can be used to read data from BigQuery.
func (*BigQueryReadClient) Close ¶
func (c *BigQueryReadClient) Close() error
Close closes the connection to the API service. The user should invoke this when the client is no longer required.
func (*BigQueryReadClient) Connection ¶
func (c *BigQueryReadClient) Connection() *grpc.ClientConn
Connection returns a connection to the API service.
Deprecated.
func (*BigQueryReadClient) CreateReadSession ¶
func (c *BigQueryReadClient) CreateReadSession(ctx context.Context, req *storagepb.CreateReadSessionRequest, opts ...gax.CallOption) (*storagepb.ReadSession, error)
CreateReadSession creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.
A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.
Data is assigned to each stream such that roughly the same number of rows can be read from each stream. Because the server-side unit for assigning data is collections of rows, the API does not guarantee that each stream will return the same number of rows. Additionally, the limits are enforced based on the number of pre-filtered rows, so some filters can lead to lopsided assignments.
Read sessions automatically expire 24 hours after they are created and do not require manual clean-up by the caller.
Example ¶
package main

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1"
	storagepb "google.golang.org/genproto/googleapis/cloud/bigquery/storage/v1"
)

func main() {
	ctx := context.Background()
	c, err := storage.NewBigQueryReadClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	req := &storagepb.CreateReadSessionRequest{
		// TODO: Fill request struct fields.
	}
	resp, err := c.CreateReadSession(ctx, req)
	if err != nil {
		// TODO: Handle error.
	}
	// TODO: Use resp.
	_ = resp
}
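The request above is left as a TODO; a hypothetical filled-in request, with placeholder project, dataset, and table names, might look like the following and can be dropped into the example in place of the empty request:

	req := &storagepb.CreateReadSessionRequest{
		// Project that will own (and be billed for) the session; placeholder.
		Parent: "projects/my-project",
		ReadSession: &storagepb.ReadSession{
			// Fully qualified table to read; placeholder.
			Table:      "projects/my-project/datasets/my_dataset/tables/my_table",
			DataFormat: storagepb.DataFormat_AVRO,
			ReadOptions: &storagepb.ReadSession_TableReadOptions{
				SelectedFields: []string{"col_a", "col_b"},
				RowRestriction: "col_a > 0",
			},
		},
		// Upper bound on the number of streams the server may return.
		MaxStreamCount: 1,
	}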
func (*BigQueryReadClient) ReadRows ¶
func (c *BigQueryReadClient) ReadRows(ctx context.Context, req *storagepb.ReadRowsRequest, opts ...gax.CallOption) (storagepb.BigQueryRead_ReadRowsClient, error)
ReadRows reads rows from the stream in the format prescribed by the ReadSession. Each response contains one or more table rows, up to a maximum of 100 MiB per response; read requests which attempt to read individual rows larger than 100 MiB will fail.
Each request also returns a set of stream statistics reflecting the current state of the stream.
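Since ReadRows returns a server-streaming client, a typical receive loop drains the stream until io.EOF. A sketch, with a placeholder stream name obtained from an earlier CreateReadSession call:

	package main

	import (
		"context"
		"io"

		storage "cloud.google.com/go/bigquery/storage/apiv1"
		storagepb "google.golang.org/genproto/googleapis/cloud/bigquery/storage/v1"
	)

	func main() {
		ctx := context.Background()
		c, err := storage.NewBigQueryReadClient(ctx)
		if err != nil {
			// TODO: Handle error.
		}
		defer c.Close()

		// ReadStream is one of the streams returned by CreateReadSession
		// (placeholder name shown here).
		stream, err := c.ReadRows(ctx, &storagepb.ReadRowsRequest{
			ReadStream: "projects/my-project/locations/us/sessions/SESSION_ID/streams/STREAM_ID",
			Offset:     0,
		})
		if err != nil {
			// TODO: Handle error.
		}
		for {
			resp, err := stream.Recv()
			if err == io.EOF {
				break // all rows for this stream have been read
			}
			if err != nil {
				// TODO: Handle error.
			}
			// TODO: Decode resp (Avro or Arrow payload, depending on the
			// session's DataFormat).
			_ = resp
		}
	}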
func (*BigQueryReadClient) SplitReadStream ¶
func (c *BigQueryReadClient) SplitReadStream(ctx context.Context, req *storagepb.SplitReadStreamRequest, opts ...gax.CallOption) (*storagepb.SplitReadStreamResponse, error)
SplitReadStream splits a given ReadStream into two ReadStream objects. These ReadStream objects are referred to as the primary and the residual streams of the split. The original ReadStream can still be read from in the same manner as before. Both of the returned ReadStream objects can also be read from, and the rows returned by both child streams will be the same as the rows read from the original stream.
Moreover, the two child streams will be allocated back-to-back in the original ReadStream. Concretely, it is guaranteed that for streams original, primary, and residual, that original[0-j] = primary[0-j] and original[j-n] = residual[0-m] once the streams have been read to completion.
Example ¶
package main

import (
	"context"

	storage "cloud.google.com/go/bigquery/storage/apiv1"
	storagepb "google.golang.org/genproto/googleapis/cloud/bigquery/storage/v1"
)

func main() {
	ctx := context.Background()
	c, err := storage.NewBigQueryReadClient(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	defer c.Close()

	req := &storagepb.SplitReadStreamRequest{
		// TODO: Fill request struct fields.
	}
	resp, err := c.SplitReadStream(ctx, req)
	if err != nil {
		// TODO: Handle error.
	}
	// TODO: Use resp.
	_ = resp
}
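As with the previous example, the request is left as a TODO; a hypothetical filled-in request (placeholder stream name) splits the stream at roughly the halfway point and can replace the empty request above:

	req := &storagepb.SplitReadStreamRequest{
		// Name of the stream to split; placeholder.
		Name: "projects/my-project/locations/us/sessions/SESSION_ID/streams/STREAM_ID",
		// Fractional point at which to split; the leading portion becomes
		// the primary stream.
		Fraction: 0.5,
	}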
Source Files ¶
big_query_read_client.go doc.go
- Version: v1.6.0
- Published: Apr 9, 2020