Struct rotor_stream::Buf
pub struct Buf { // some fields omitted }
A buffer object to be used for reading from the network
Assumptions:
1. The buffer needs to be growable, as requests are sometimes large
2. The buffer should deallocate when empty, as connections are idle most of the time
   a. Deallocations are cheap as we have a cool memory allocator (jemalloc)
   b. The first allocation should be big (i.e. kilobytes, not a few bytes)
3. It should be easy to peek and get a slice, as that makes packet parsing easy
4. Removing bytes at the start of the buffer should be cheap
5. Buf itself has the same size as Vec
6. Buf holds up to 4 GiB of memory; larger network buffers are impractical for most use cases
Methods
impl Buf
fn new() -> Buf
Creates an empty buffer. It has no preallocated size, and the underlying memory chunk is always deallocated when there are no useful bytes in the buffer.
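For illustration, a minimal sketch of creating a buffer; the capacity assertion merely reflects the "no preallocated size" statement above and is an assumption about the exact value:

    use rotor_stream::Buf;

    let buf = Buf::new();
    assert!(buf.is_empty());        // no useful bytes yet
    assert_eq!(buf.capacity(), 0);  // assumed: no memory allocated until data arrives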
fn consume(&mut self, bytes: usize)
Marks the first bytes of the buffer as read. Essentially this shaves bytes off the front of the buffer, but does so efficiently. When no bytes are left in the buffer, its memory is deallocated.
Note: the buffer currently doesn't shrink on calling this method. It's assumed that all remaining bytes will be consumed shortly. If you're appending to the buffer after a consume, the old data is discarded.
Panics
Panics if bytes is larger than the current length of the buffer
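A minimal sketch of consuming bytes from the front, using only the methods documented on this page:

    use rotor_stream::Buf;

    let mut buf = Buf::new();
    buf.extend(b"hello world");
    assert_eq!(buf.len(), 11);

    // Shave the first 6 bytes ("hello ") off the front.
    buf.consume(6);
    assert_eq!(buf.len(), 5);

    // Consuming the rest empties the buffer, which deallocates its memory.
    buf.consume(5);
    assert!(buf.is_empty());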
fn remove_range<R>(&mut self, range: R) where R: Into<RangeArgument>
Removes an arbitrary range of bytes
A more general version of consume(). It's occasionally useful if you receive data in frames/chunks but want to buffer the whole body anyway. E.g. with HTTP chunked encoding each chunk is prefixed by its length, but that doesn't mean you can't buffer the whole request in memory. This method allows you to keep reading the next chunk into the same buffer while removing the chunk-length prefix.
Note: this is not super efficient, as it requires moving (copying) the bytes after the range when the range is neither at the start nor at the end of the buffer. Still, it should be faster than copying everything to yet another buffer.
We never shrink the buffer here (except when it becomes empty, to keep that invariant), assuming that you will receive more data into the buffer shortly.
The RangeArgument type is a temporary type to be used until Rust provides one in the standard library; you should use the range syntax directly:
buf.remove_range(5..7)
Panics
Panics if the range is invalid for the buffer
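A sketch of the chunked-encoding scenario above: the chunk-length prefix is removed from the middle of the buffer while the already-buffered body and the chunk payload stay in place (the byte positions are illustrative):

    use rotor_stream::Buf;

    let mut buf = Buf::new();
    // "hello" is the body buffered so far, "5\r\n" is the next chunk-length
    // header, and "world" is the chunk payload that follows it.
    buf.extend(b"hello5\r\nworld");

    // Drop the 3 header bytes (positions 5..8) and keep buffering the body.
    buf.remove_range(5..8);
    assert_eq!(buf.len(), 10); // "helloworld"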
fn capacity(&self) -> usize
Capacity of the buffer, i.e. the number of bytes it has allocated. Use it for debugging or for calculating memory usage. Note that it's not guaranteed you can write buf.capacity() - buf.len() bytes without a resize
fn len(&self) -> usize
Number of useful bytes in the buffer
fn is_empty(&self) -> bool
Returns whether the buffer is empty. Potentially a little bit faster than getting len()
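A small sketch combining the three accessors above for memory accounting; the capacity assertion only checks the lower bound implied by the docs:

    use rotor_stream::Buf;

    let mut buf = Buf::new();
    buf.extend(b"ping");

    assert_eq!(buf.len(), 4);              // useful bytes currently buffered
    assert!(!buf.is_empty());
    assert!(buf.capacity() >= buf.len());  // allocated size, for accounting only

    buf.consume(4);
    assert!(buf.is_empty());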
fn extend(&mut self, buf: &[u8])
Extends the buffer. Note that unlike Write::write() and read_from(), this method reserves the smallest possible chunk of memory, so growing the buffer with this method is inefficient. You may use the Write trait to grow it incrementally.
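A sketch contrasting the two growth paths; it assumes Buf implements std::io::Write, as the comparison with Write::write() above implies:

    use std::io::Write;
    use rotor_stream::Buf;

    let mut buf = Buf::new();

    // extend() reserves only as much memory as it strictly needs,
    // so growing the buffer piece by piece this way is inefficient.
    buf.extend(b"Content-Type: ");

    // For incremental growth, the Write trait is the suggested route
    // (assumption: Buf implements std::io::Write).
    buf.write_all(b"text/plain\r\n").unwrap();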
fn read_from<R>(&mut self, stream: &mut R) -> Result<usize, Error> where R: Read
Reads some bytes from the stream (an object implementing Read) into the buffer
Note that this does not keep reading until it gets WouldBlock; it passes all errors on as is. It preallocates some chunk to read into, so the socket may still have bytes buffered after this method returns. This method is expected either to be called repeatedly until WouldBlock is returned or to be used with level-triggered polling.
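A sketch of the "call until WouldBlock" pattern described above, assuming the Error in the signature is std::io::Error; drain_into is a hypothetical helper, not part of this crate:

    use std::io::{self, Read};
    use rotor_stream::Buf;

    /// Drain a non-blocking stream into the buffer, stopping at WouldBlock or EOF.
    fn drain_into<R: Read>(buf: &mut Buf, stream: &mut R) -> io::Result<usize> {
        let mut total = 0;
        loop {
            match buf.read_from(stream) {
                Ok(0) => return Ok(total),  // EOF (the underlying read returned 0)
                Ok(n) => total += n,        // the socket may still have more buffered
                Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => return Ok(total),
                Err(e) => return Err(e),
            }
        }
    }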
fn read_max_from<R>(&mut self, max: usize, stream: &mut R) -> Result<bool, Error> where R: Read
Reads no more than max bytes into the buffer and returns a boolean flag indicating whether max bytes have been reached
Apart from the limit on the number of bytes and a slightly different allocation strategy, this method has the same considerations as read_from()
Note this method might be used for two purposes:
1. Limiting the number of bytes buffered until the parser can process the data (for example an HTTP header size, which is the number of bytes read before the \r\n\r\n delimiter is reached)
2. Waiting until an exact number of bytes has been fully received
Since we support (1), we don't preallocate a buffer of exactly the max size. That also helps a little in case (2) by minimizing the DDoS attack vector. If that doesn't suit you, you may wish to use Vec::with_capacity() for purpose (2).
On the contrary, we never overallocate more than max bytes, so if you expect more data to arrive after the exact number of bytes has been read, you might be better off using the plain read_from() and checking the buffer length.
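A sketch of purpose (1), limiting how many header bytes may be buffered. MAX_HEADER_BYTES is an illustrative value and buffer_headers is a hypothetical helper; the Error type is again assumed to be std::io::Error:

    use std::io::{self, Read};
    use rotor_stream::Buf;

    const MAX_HEADER_BYTES: usize = 16 * 1024; // illustrative limit

    /// Buffer request headers, refusing to hold more than MAX_HEADER_BYTES
    /// before the parser has found the end-of-headers delimiter.
    fn buffer_headers<R: Read>(buf: &mut Buf, stream: &mut R) -> io::Result<()> {
        let limit_reached = buf.read_max_from(MAX_HEADER_BYTES, stream)?;
        if limit_reached {
            // A real parser would first check whether "\r\n\r\n" has arrived.
            return Err(io::Error::new(io::ErrorKind::InvalidData,
                                      "request headers too large"));
        }
        Ok(())
    }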
fn write_to<W>(&mut self, sock: &mut W) -> Result<usize, Error> where W: Write
Writes the contents of the buffer to the stream (an object implementing the Write trait). We assume that the stream is non-blocking, use Write::write (instead of Write::write_all) and return all errors to the caller (including WouldBlock and Interrupted).
Instead of returning the number of bytes written, this method consume()s the bytes from the buffer, so it's safe to retry calling the method at any moment. It's also a common pattern to append more data to the buffer between calls.
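A sketch of the retry-friendly flushing described above; flush_some is a hypothetical helper and the Error type is assumed to be std::io::Error:

    use std::io::{self, Write};
    use rotor_stream::Buf;

    /// Flush as much of the buffer as the non-blocking stream will take.
    /// Written bytes are already consume()d, so calling this again later is safe.
    fn flush_some<W: Write>(buf: &mut Buf, sock: &mut W) -> io::Result<()> {
        match buf.write_to(sock) {
            Ok(_) => Ok(()),
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => Ok(()),
            Err(ref e) if e.kind() == io::ErrorKind::Interrupted => Ok(()),
            Err(e) => Err(e),
        }
    }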