package scheduler
import "cloud.google.com/go/pubsub/internal/scheduler"
Index ¶
- type PublishScheduler
- func NewPublishScheduler(workers int, handle func(bundle interface{})) *PublishScheduler
- func (s *PublishScheduler) Add(key string, item interface{}, size int) error
- func (s *PublishScheduler) FlushAndStop()
- type ReceiveScheduler
- func NewReceiveScheduler(workers int) *ReceiveScheduler
- func (s *ReceiveScheduler) Add(key string, item interface{}, handle func(item interface{})) error
- func (s *ReceiveScheduler) Shutdown()
Types ¶
type PublishScheduler ¶
type PublishScheduler struct {
	// Settings passed down to each bundler that gets created.
	DelayThreshold       time.Duration
	BundleCountThreshold int
	BundleByteThreshold  int
	BundleByteLimit      int
	BufferedByteLimit    int
	// contains filtered or unexported fields
}
PublishScheduler is a scheduler which is designed for Pub/Sub's Publish flow. It bundles items before handling them. All items in this PublishScheduler use the same handler.
Each item is added with a given key. Items added to the empty string key are handled in random order. Items added to any other key are handled sequentially.
func NewPublishScheduler ¶
func NewPublishScheduler(workers int, handle func(bundle interface{})) *PublishScheduler
NewPublishScheduler returns a new PublishScheduler.
The workers arg is the number of workers that will operate on the queue of work. A reasonably large number of workers is highly recommended. If the workers arg is 0, then a healthy default of 10 workers is used.
The scheduler does not use a parent context. If it did, canceling that context would immediately stop the scheduler without waiting for undelivered messages.
The scheduler should be stopped only with FlushAndStop.
func (*PublishScheduler) Add ¶
func (s *PublishScheduler) Add(key string, item interface{}, size int) error
Add adds an item to the scheduler at a given key.
Add never blocks. Buffering happens in the scheduler's publishers. There is no flow control.
Since ordered keys allow only a single outstanding RPC at a time, it is possible to send messages for an ordered key to Topic.Publish (and subsequently to PublishScheduler.Add) faster than the bundler can publish them to the Pub/Sub service, resulting in a backed-up queue of Pub/Sub bundles. Each item in the bundler queue is a goroutine.
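For illustration, a minimal usage sketch follows. Note that this is an internal package, importable only from within the cloud.google.com/go/pubsub module; the string items, their sizes, and the ordered key name are assumptions made for the sketch, and the concrete type of the bundle passed to the handler depends on what is added.

package main

import (
	"fmt"
	"log"

	"cloud.google.com/go/pubsub/internal/scheduler"
)

func main() {
	// The handler is invoked once per bundle. The bundle's concrete type
	// depends on the items added, so it is logged generically here.
	handle := func(bundle interface{}) {
		log.Printf("handling bundle: %v", bundle)
	}

	// Passing 0 workers selects the default of 10.
	s := scheduler.NewPublishScheduler(0, handle)

	// Items added under the empty key may be handled in any order.
	for i := 0; i < 3; i++ {
		item := fmt.Sprintf("unordered-%d", i)
		if err := s.Add("", item, len(item)); err != nil {
			log.Fatalf("Add: %v", err)
		}
	}

	// Items added under "order-key" are handled sequentially, in Add order.
	for i := 0; i < 3; i++ {
		item := fmt.Sprintf("ordered-%d", i)
		if err := s.Add("order-key", item, len(item)); err != nil {
			log.Fatalf("Add: %v", err)
		}
	}

	// FlushAndStop blocks until every buffered item has been handled.
	s.FlushAndStop()
}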
func (*PublishScheduler) FlushAndStop ¶
func (s *PublishScheduler) FlushAndStop()
FlushAndStop begins flushing items from bundlers and from the scheduler. It blocks until all items have been flushed.
type ReceiveScheduler ¶
type ReceiveScheduler struct {
	// contains filtered or unexported fields
}
ReceiveScheduler is a scheduler which is designed for Pub/Sub's Receive flow.
Each item is added with a given key. Items added to the empty string key are handled in random order. Items added to any other key are handled sequentially.
func NewReceiveScheduler ¶
func NewReceiveScheduler(workers int) *ReceiveScheduler
NewReceiveScheduler creates a new ReceiveScheduler.
The workers arg is the number of concurrent calls to handle. If the workers arg is 0, then a healthy default of 10 workers is used.
func (*ReceiveScheduler) Add ¶
func (s *ReceiveScheduler) Add(key string, item interface{}, handle func(item interface{})) error
Add adds the item to be handled. Add may block.
Buffering happens above the ReceiveScheduler in the form of a flow controller that requests batches of messages to pull. A backed-up ReceiveScheduler.Add call causes pushback to the Pub/Sub service (fewer Receive calls on the long-lived stream), which keeps the memory footprint stable.
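For illustration, a rough sketch of driving a ReceiveScheduler directly follows. As above, the package is internal and importable only from within cloud.google.com/go/pubsub; the string items, the ordered key name, and the sync.WaitGroup used to wait for handlers are assumptions of the sketch (Shutdown itself does not wait).

package main

import (
	"fmt"
	"log"
	"sync"

	"cloud.google.com/go/pubsub/internal/scheduler"
)

func main() {
	// Passing 0 workers selects the default of 10 concurrent handler calls.
	s := scheduler.NewReceiveScheduler(0)

	// Shutdown does not wait for in-flight handlers, so this sketch tracks
	// them with a WaitGroup.
	var wg sync.WaitGroup

	for i := 0; i < 5; i++ {
		wg.Add(1)
		item := fmt.Sprintf("msg-%d", i)
		// Items under "order-key" are handled one at a time, in Add order;
		// the empty key would allow them to be handled in any order.
		err := s.Add("order-key", item, func(item interface{}) {
			defer wg.Done()
			log.Printf("handled %v", item)
		})
		if err != nil {
			wg.Done()
			log.Fatalf("Add: %v", err)
		}
	}

	wg.Wait()    // wait for the handlers themselves to finish
	s.Shutdown() // then stop accepting new Add calls and flush
}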
func (*ReceiveScheduler) Shutdown ¶
func (s *ReceiveScheduler) Shutdown()
Shutdown begins flushing messages and stops accepting new Add calls. Shutdown does not block or wait for all messages to be flushed.
Source Files ¶
publish_scheduler.go receive_scheduler.go