StreamContextExtension
Container for things by which the context type is augmented. This interface should likely never be used directly. Instead, take a look at the corresponding context flavor called Stream.
Properties
api
api: { streamMessage(
chat_id: number,
draft_id_offset: number,
stream: Iterable<MessageDraftPiece> | AsyncIterable<MessageDraftPiece>,
otherMessageDraft?: Omit<Parameters<ApiMethods["sendMessageDraft"]>[0], "chat_id" | "draft_id" | "text">,
otherMessage?: Omit<Parameters<ApiMethods["sendMessage"]>[0], "chat_id" | "text">,
signal?: AbortSignal,
): Promise<Message.TextMessage[]>; };
Methods
replyWithStream
replyWithStream(
stream: Iterable<MessageDraftPiece> | AsyncIterable<MessageDraftPiece>,
otherMessageDraft?: Omit<Parameters<ApiMethods["sendMessageDraft"]>[0], "chat_id" | "draft_id" | "text">,
otherMessage?: Omit<Parameters<ApiMethods["sendMessage"]>[0], "chat_id" | "text">,
signal?: AbortSignal,
): Promise<Message.TextMessage[]>;
Use this method to stream an iterator of message pieces to the current private chat. This is a convenience method built on top of sendMessageDraft and sendMessage. Returns an array of sent message objects.
The message pieces of the Iterable or AsyncIterable can either be simple strings or objects with a text and an array of entities (as defined by MessageEntity).
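For example, a handler could stream the pieces from an async generator. This is a hedged sketch: the generator name is illustrative, and `ctx` is assumed to carry the extension shown above.

```typescript
// Illustrative async generator producing plain-string message pieces.
async function* pieces(): AsyncIterable<string> {
  for (const piece of ["Hello, ", "streaming ", "world!"]) {
    yield piece; // simple strings are valid message pieces
  }
}

// Inside a handler (sketch; `bot` and `ctx` come from your grammY setup):
// bot.on("message", async (ctx) => {
//   const sent = await ctx.replyWithStream(pieces());
//   console.log(`delivered ${sent.length} message(s)`);
// });
```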
This method automatically sends several drafts if the data is too long. More specifically, if the data exceeds 4096 characters, the first chunk that crosses this threshold will receive an incremented draft value. Note that individual chunks are never split up, so they must each be at most 4096 characters long.
An offset that gets added to each draft identifier is determined by the current update. More specifically, this offset is derived from the update, leaving 256 message parts (or about 1 MB of ASCII characters) per update before draft identifiers begin to clash. However, if you want to call this method several times from the same handler and/or middleware pass, you should make sure that the calls happen sequentially. Otherwise, clashes between draft identifiers can happen across the concurrent calls. Alternatively, you can adjust the way the draft identifier offset is picked by setting Stream.
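The quoted budget checks out arithmetically: 256 message parts of up to 4096 ASCII characters each come to exactly 1 MiB. A small sanity check, not plugin code:

```typescript
// 256 draft slots per update × 4096 characters per part = 1 MiB of ASCII.
const partsPerUpdate = 256;
const maxCharsPerPart = 4096;
const budget = partsPerUpdate * maxCharsPerPart;
console.log(budget); // 1048576, i.e. exactly 1 MiB of ASCII text
```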
Each draft is sent as a separate message as soon as the draft is complete. For instance, the following sequence of API calls will be observed for six chunks of text with 1000 characters each:
sendMessageDraft, sendMessageDraft, sendMessageDraft, sendMessageDraft, sendMessage, sendMessageDraft, sendMessageDraft, sendMessage
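This call pattern can be reproduced with a short accounting sketch. It is illustrative only: the real plugin may additionally skip draft updates depending on timing.

```typescript
// Plain-arithmetic sketch of the draft accounting (no Bot API calls):
// six chunks of 1000 characters against the 4096-character message limit.
const limit = 4096;
const chunkSizes = new Array<number>(6).fill(1000);
const calls: string[] = [];
let draftLength = 0;
for (const size of chunkSizes) {
  if (draftLength + size > limit) {
    calls.push("sendMessage"); // the full draft is finalized as a message
    draftLength = 0;           // a new draft begins with this chunk
  }
  draftLength += size;
  calls.push("sendMessageDraft"); // the draft is updated with the chunk
}
calls.push("sendMessage"); // the final draft is sent when the stream ends
console.log(calls.join(", "));
```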
If you need more control over which draft identifiers are used (and by extension, how messages get split up), you can include custom draft values in the objects of the data stream. These values are always used as-is, and messages will never be split between two chunks if they both have the same draft identifier. If you want to start a new draft/message, you only need to yield a new draft identifier once. All subsequent chunks will automatically obtain the same identifier (until the message length limit is hit, which increments the value, or until a new draft identifier is specified by the data stream).
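A stream with explicit draft identifiers might look as follows. This is a hedged sketch: the property name `draft` on a message piece is an assumption here; consult the MessageDraftPiece type for the real shape.

```typescript
// Sketch: the field name `draft` is an assumption; check MessageDraftPiece.
async function* sections(): AsyncIterable<{ text: string; draft?: number }> {
  yield { text: "Part one, chunk A. ", draft: 0 }; // begin the first message
  yield { text: "Part one, chunk B. " };           // inherits draft 0
  yield { text: "Part two. ", draft: 1 };          // force a new message
  yield { text: "More of part two." };             // inherits draft 1
}

// await ctx.replyWithStream(sections());
```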
This method consumes the given iterator as fast as possible and updates the message draft as often as possible. If reading the next chunk of data is faster than the message draft can be updated, then some calls to sendMessageDraft are skipped. This integrates well with the auto-retry plugin, which converts rate limits into slower API calls. Make sure to install it before installing this plugin. In contrast, sendMessage calls are never skipped, so no data is lost in the process.
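Installing auto-retry could look like this (a sketch; the bot token is a placeholder, and this streaming plugin itself must still be installed afterwards, as described in its own setup instructions):

```typescript
import { Bot } from "grammy";
import { autoRetry } from "@grammyjs/auto-retry";

const bot = new Bot("<bot-token>"); // placeholder token
// Install auto-retry first so rate-limit errors turn into slower,
// retried API calls; only then install this streaming plugin.
bot.api.config.use(autoRetry());
```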