| Package | Description |
|---|---|
| org.bytedeco.arrow | |
| org.bytedeco.arrow_dataset | |
| org.bytedeco.arrow.global | |
| org.bytedeco.gandiva | |
| org.bytedeco.parquet | |
| Modifier and Type | Class and Description |
|---|---|
| class | LoggingMemoryPool |
| class | ProxyMemoryPool: Derived class for memory allocation. |
| Modifier and Type | Method and Description |
|---|---|
| static MemoryPool | MemoryPool.CreateDefault(): EXPERIMENTAL. |
| MemoryPool | KernelContext.memory_pool(): The memory pool to use for allocations. |
| MemoryPool | IpcWriteOptions.memory_pool(): The memory pool to use for allocations made during IPC writing. While Arrow IPC is predominantly zero-copy, it may have to allocate memory in some cases (for example if compression is enabled). |
| MemoryPool | IpcReadOptions.memory_pool(): The memory pool to use for allocations made during IPC reading. While Arrow IPC is predominantly zero-copy, it may have to allocate memory in some cases (for example if compression is enabled). |
| MemoryPool | ExecContext.memory_pool(): The MemoryPool used for allocations; the default is default_memory_pool(). |
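As a quick illustration of the getter/setter pair on IpcWriteOptions (the setter overload appears in the table below), a caller can route IPC-write allocations through the process-wide default pool. This is a minimal sketch: the no-argument IpcWriteOptions constructor and the bytes_allocated() accessor are assumptions carried over from the Arrow C++ API, not confirmed by this page.

```java
import org.bytedeco.arrow.IpcWriteOptions;
import org.bytedeco.arrow.MemoryPool;
import org.bytedeco.arrow.global.arrow;

public class PoolGetterSketch {
    public static void main(String[] args) {
        // Process-wide default pool (listed under org.bytedeco.arrow.global below).
        MemoryPool pool = arrow.default_memory_pool();

        // Route IPC-write allocations through that pool.
        IpcWriteOptions options = new IpcWriteOptions(); // default constructor assumed
        options.memory_pool(pool);                       // setter overload
        MemoryPool inUse = options.memory_pool();        // getter overload

        // MemoryPool tracks outstanding allocations (assumed mapped from Arrow C++).
        System.out.println("bytes allocated: " + inUse.bytes_allocated());
    }
}
```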
| Modifier and Type | Method and Description |
|---|---|
| TableResult | Table.CombineChunks(MemoryPool pool): Make a new table by combining the chunks this table has. |
| BufferResult | ArrowBuffer.CopySlice(long start, long nbytes, MemoryPool pool): Copy a section of the buffer into a new Buffer. |
| static BufferOutputStreamResult | BufferOutputStream.Create(long initial_capacity, MemoryPool pool): Create an in-memory output stream with the indicated capacity, using a memory pool. |
| static BufferedInputStreamResult | BufferedInputStream.Create(long buffer_size, MemoryPool pool, InputStream raw) |
| static BufferedInputStreamResult | BufferedInputStream.Create(long buffer_size, MemoryPool pool, InputStream raw, long raw_read_bound): Create a BufferedInputStream from a raw InputStream. |
| static BufferedOutputStreamResult | BufferedOutputStream.Create(long buffer_size, MemoryPool pool, OutputStream raw): Create a buffered output stream wrapping the given output stream. |
| TableResult | Table.Flatten(MemoryPool pool): Flatten the table, producing a new Table. |
| ArrayVectorResult | StructArray.Flatten(MemoryPool pool): Flatten this array as a vector of arrays, one for each field. |
| ArrayResult | ListArray.Flatten(MemoryPool memory_pool): Return an Array that is a concatenation of the lists in this array. |
| ArrayResult | LargeListArray.Flatten(MemoryPool memory_pool): Return an Array that is a concatenation of the lists in this array. |
| ChunkedArrayVectorResult | ChunkedArray.Flatten(MemoryPool pool): Flatten this chunked array as a vector of chunked arrays, one for each struct field. |
| static ArrayResult | MapArray.FromArrays(Array offsets, Array keys, Array items, MemoryPool pool): Construct a MapArray from an array of offsets and child key and item arrays. This function does the bare minimum of validation of the offsets and input types, and will allocate a new offsets array if necessary. |
| static ListArrayResult | ListArray.FromArrays(Array offsets, Array values, MemoryPool pool): Construct a ListArray from an array of offsets and a child value array. This function does the bare minimum of validation of the offsets and input types, and will allocate a new offsets array if necessary. |
| static LargeListArrayResult | LargeListArray.FromArrays(Array offsets, Array values, MemoryPool pool): Construct a LargeListArray from an array of offsets and a child value array. This function does the bare minimum of validation of the offsets and input types, and will allocate a new offsets array if necessary. |
| static ArrayResult | MapArray.FromArrays(DataType type, Array offsets, Array keys, Array items, MemoryPool pool) |
| ArrayDataResult | DictionaryMemo.GetDictionary(long id, MemoryPool pool): Return the current dictionary corresponding to a particular id. |
| static CompressedInputStreamResult | CompressedInputStream.Make(Codec codec, InputStream raw, MemoryPool pool): Create a compressed input stream wrapping the given input stream. |
| static CompressedOutputStreamResult | CompressedOutputStream.Make(Codec codec, OutputStream raw, MemoryPool pool): Create a compressed output stream wrapping the given output stream. |
| static DictionaryUnifierResult | DictionaryUnifier.Make(DataType value_type, MemoryPool pool): Construct a DictionaryUnifier. |
| static TableReaderResult | TableReader.Make(MemoryPool pool, InputStream input, ReadOptions arg2, CsvParseOptions arg3, ConvertOptions arg4): Create a TableReader instance. |
| static StreamingReaderResult | StreamingReader.Make(MemoryPool pool, InputStream input, ReadOptions arg2, CsvParseOptions arg3, ConvertOptions arg4): Create a StreamingReader instance. Currently, the StreamingReader is always single-threaded (parallel readahead is not supported). |
| static Status | RecordBatchBuilder.Make(Schema schema, MemoryPool pool, long initial_capacity, RecordBatchBuilder builder): Create and initialize a RecordBatchBuilder. |
| static Status | RecordBatchBuilder.Make(Schema schema, MemoryPool pool, RecordBatchBuilder builder): Create and initialize a RecordBatchBuilder. |
| IpcWriteOptions | IpcWriteOptions.memory_pool(MemoryPool setter) |
| IpcReadOptions | IpcReadOptions.memory_pool(MemoryPool setter) |
| static ReadableFileResult | ReadableFile.Open(BytePointer path, MemoryPool pool) |
| static ReadableFileResult | ReadableFile.Open(int fd, MemoryPool pool): Open a local file for reading. |
| static ReadableFileResult | ReadableFile.Open(String path, MemoryPool pool): Open a local file for reading. |
| Status | BufferOutputStream.Reset(long initial_capacity, MemoryPool pool): Initialize the state of the OutputStream with newly allocated memory and set the position to 0. |
| Constructor and Description |
|---|
| BinaryBuilder(DataType type, MemoryPool pool) |
| BinaryBuilder(MemoryPool pool) |
| BooleanBuilder(DataType type, MemoryPool pool) |
| BooleanBuilder(MemoryPool pool) |
| BufferBuilder(MemoryPool pool) |
| BufferBuilder(ResizableBuffer buffer, MemoryPool pool): Constructs a new Builder that will start using the provided buffer until Finish/Reset are called. |
| ChunkedBinaryBuilder(int max_chunk_value_length, int max_chunk_length, MemoryPool pool) |
| ChunkedBinaryBuilder(int max_chunk_value_length, MemoryPool pool) |
| ChunkedStringBuilder(int max_chunk_value_length, int max_chunk_length, MemoryPool pool) |
| ChunkedStringBuilder(int max_chunk_value_length, MemoryPool pool) |
| DayTimeIntervalBuilder(DataType type, MemoryPool pool) |
| Decimal128Builder(DataType type, MemoryPool pool) |
| Decimal256Builder(DataType type, MemoryPool pool) |
| DenseUnionBuilder(MemoryPool pool): Use this constructor to initialize the UnionBuilder with no child builders, allowing the type to be inferred. |
| DenseUnionBuilder(MemoryPool pool, ArrayBuilderVector children, DataType type): Use this constructor to specify the type explicitly. |
| DictionaryMemoTable(MemoryPool pool, Array dictionary) |
| DictionaryMemoTable(MemoryPool pool, DataType type) |
| DoubleBuilder(DataType type, MemoryPool pool) |
| ExecContext(MemoryPool pool, FunctionRegistry func_registry) |
| FixedSizeBinaryBuilder(DataType type, MemoryPool pool) |
| FixedSizeListBuilder(MemoryPool pool, ArrayBuilder value_builder, DataType type): Use this constructor to infer the built array's type. |
| FixedSizeListBuilder(MemoryPool pool, ArrayBuilder value_builder, int list_size): Use this constructor to define the built array's type explicitly. |
| FloatBuilder(DataType type, MemoryPool pool) |
| HalfFloatBuilder(DataType type, MemoryPool pool) |
| Int16Builder(DataType type, MemoryPool pool) |
| Int32Builder(DataType type, MemoryPool pool) |
| Int64Builder(DataType type, MemoryPool pool) |
| Int8Builder(DataType type, MemoryPool pool) |
| LargeBinaryBuilder(DataType type, MemoryPool pool) |
| LargeBinaryBuilder(MemoryPool pool) |
| LargeListBuilder(MemoryPool pool, ArrayBuilder value_builder) |
| LargeListBuilder(MemoryPool pool, ArrayBuilder value_builder, DataType type) |
| LargeStringBuilder(DataType type, MemoryPool pool) |
| LargeStringBuilder(MemoryPool pool) |
| ListBuilder(MemoryPool pool, ArrayBuilder value_builder) |
| ListBuilder(MemoryPool pool, ArrayBuilder value_builder, DataType type) |
| LoggingMemoryPool(MemoryPool pool) |
| MapBuilder(MemoryPool pool, ArrayBuilder key_builder, ArrayBuilder item_builder) |
| MapBuilder(MemoryPool pool, ArrayBuilder key_builder, ArrayBuilder item_builder, boolean keys_sorted): Use this constructor to infer the built array's type. |
| MapBuilder(MemoryPool pool, ArrayBuilder key_builder, ArrayBuilder item_builder, DataType type): Use this constructor to define the built array's type explicitly. |
| MapBuilder(MemoryPool pool, ArrayBuilder item_builder, DataType type) |
| MessageDecoder(MessageDecoderListener listener, int initial_state, long initial_next_required_size, MemoryPool pool) |
| MessageDecoder(MessageDecoderListener listener, MemoryPool pool): Construct a message decoder. |
| MessageDecoder(MessageDecoderListener listener, MessageDecoder.State initial_state, long initial_next_required_size, MemoryPool pool): Construct a message decoder with the specified state. |
| NullBuilder(DataType type, MemoryPool pool) |
| NullBuilder(MemoryPool pool) |
| ProxyMemoryPool(MemoryPool pool) |
| SparseUnionBuilder(MemoryPool pool): Use this constructor to initialize the UnionBuilder with no child builders, allowing the type to be inferred. |
| SparseUnionBuilder(MemoryPool pool, ArrayBuilderVector children, DataType type): Use this constructor to specify the type explicitly. |
| StringBuilder(DataType type, MemoryPool pool) |
| StringBuilder(MemoryPool pool) |
| StructBuilder(DataType type, MemoryPool pool, ArrayBuilderVector field_builders): If any of field_builders has indeterminate type, this builder will also have indeterminate type. |
| TypedBufferBuilder(MemoryPool pool) |
| UInt16Builder(DataType type, MemoryPool pool) |
| UInt32Builder(DataType type, MemoryPool pool) |
| UInt64Builder(DataType type, MemoryPool pool) |
| UInt8Builder(DataType type, MemoryPool pool) |
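All of the builder constructors above take the pool as their last (or only) argument, so any builder can be charged against an explicit pool the same way. A minimal sketch using BinaryBuilder(MemoryPool); the bytes_allocated() accessor is an assumption carried over from the Arrow C++ MemoryPool interface, and the append/Finish steps are only indicated in comments because their exact Java signatures are not shown on this page.

```java
import org.bytedeco.arrow.BinaryBuilder;
import org.bytedeco.arrow.MemoryPool;
import org.bytedeco.arrow.global.arrow;

public class BuilderPoolSketch {
    public static void main(String[] args) {
        MemoryPool pool = arrow.default_memory_pool();
        long before = pool.bytes_allocated(); // assumed accessor from Arrow C++

        // Any constructor in the table accepts the pool in the same position.
        BinaryBuilder builder = new BinaryBuilder(pool);

        // ... append values, then call Finish(...) to materialize the Array;
        // every buffer the builder grows is charged against `pool` ...
        System.out.println("delta: " + (pool.bytes_allocated() - before));
    }
}
```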
| Modifier and Type | Method and Description |
|---|---|
| MemoryPool | ScanContext.pool(): A pool from which materialized and scanned arrays will be allocated. |
| Modifier and Type | Method and Description |
|---|---|
| ScanContext | ScanContext.pool(MemoryPool setter) |
| RecordBatchResult | RecordBatchProjector.Project(RecordBatch batch, MemoryPool pool) |
| Status | RecordBatchProjector.SetInputSchema(Schema from, MemoryPool pool) |
| Modifier and Type | Method and Description |
|---|---|
| static MemoryPool | arrow.default_memory_pool(): Return the process-wide default memory pool. |
| static MemoryPool | arrow.system_memory_pool(): Return a process-wide memory pool based on the system allocator. |
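Both functions above return process-wide singletons; which allocator backs default_memory_pool() depends on how Arrow was built (system allocator, jemalloc, or mimalloc; the latter two are also reachable via the out-parameter calls listed further below). A sketch comparing the two, assuming the bytes_allocated() accessor from the Arrow C++ MemoryPool interface:

```java
import org.bytedeco.arrow.MemoryPool;
import org.bytedeco.arrow.global.arrow;

public class GlobalPoolsSketch {
    public static void main(String[] args) {
        MemoryPool def = arrow.default_memory_pool();
        MemoryPool sys = arrow.system_memory_pool();

        // The default pool may or may not be the system pool, depending on
        // the allocator Arrow was compiled with.
        System.out.println("default pool bytes: " + def.bytes_allocated());
        System.out.println("system pool bytes:  " + sys.bytes_allocated());
    }
}
```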
| Modifier and Type | Method and Description |
|---|---|
| static BufferUniqueResult | arrow.AllocateBuffer(long size, MemoryPool pool): Allocate a buffer from a memory pool (part of the buffer-allocation-functions group). |
| static ResizableBuffer | parquet.AllocateBuffer(MemoryPool pool, long size) |
| static BufferResult | arrow.AllocateEmptyBitmap(long length, MemoryPool pool): Allocate a zero-initialized bitmap buffer from a memory pool. |
| static ResizableUniqueResult | arrow.AllocateResizableBuffer(long size, MemoryPool pool): Allocate a resizable buffer from a memory pool, zero its padding. |
| static ArrayResult | arrow.Concatenate(ArrayVector arrays, MemoryPool pool): Concatenate arrays. |
| static Status | arrow.Concatenate(ArrayVector arrays, MemoryPool pool, Array out): Deprecated. |
| static TableResult | arrow.ConcatenateTables(TableVector tables, ConcatenateTablesOptions options, MemoryPool memory_pool): Construct a table from multiple input tables. |
| static BufferOutputStream | parquet.CreateOutputStream(MemoryPool pool) |
| static MessageUniqueResult | arrow.GetSparseTensorMessage(SparseTensor sparse_tensor, MemoryPool pool): EXPERIMENTAL: Convert arrow::SparseTensor to a Message with minimal memory allocation. |
| static Status | arrow.GetSparseTensorPayload(SparseTensor sparse_tensor, MemoryPool pool, IpcPayload out): Compute the IpcPayload for the given sparse tensor. |
| static ArrayDataResult | arrow.GetTakeIndices(ArrayData filter, FilterOptions.NullSelectionBehavior null_selection, MemoryPool memory_pool): Compute uint64 selection indices for use with Take given a boolean filter. |
| static ArrayDataResult | arrow.GetTakeIndices(ArrayData filter, int null_selection, MemoryPool memory_pool) |
| static MessageUniqueResult | arrow.GetTensorMessage(Tensor tensor, MemoryPool pool): EXPERIMENTAL: Convert arrow::Tensor to a Message with minimal memory allocation. |
| static Status | arrow.jemalloc_memory_pool(MemoryPool out) |
| static ArrayResult | arrow.MakeArrayFromScalar(Scalar scalar, long length, MemoryPool pool): Create an Array instance whose slots are the given scalar. |
| static ArrayResult | arrow.MakeArrayOfNull(DataType type, long length, MemoryPool pool): Create a strongly-typed Array instance with all elements null. |
| static Status | arrow.MakeBuilder(MemoryPool pool, DataType type, ArrayBuilder out): Construct an empty ArrayBuilder corresponding to the data type. |
| static Status | arrow.MakeDictionaryBuilder(MemoryPool pool, DataType type, Array dictionary, ArrayBuilder out): Construct an empty DictionaryBuilder, optionally initialized with a pre-existing dictionary. |
| static Status | arrow.mimalloc_memory_pool(MemoryPool out) |
| static Status | parquet.OpenFile(RandomAccessFile arg0, MemoryPool allocator, FileReader reader): Factory function for creating Parquet Arrow readers. |
| static TableResult | arrow.PromoteTableToSchema(Table table, Schema schema, MemoryPool pool): Promote a table to conform to the given schema. |
| static MessageUniqueResult | arrow.ReadMessage(InputStream stream, MemoryPool pool): Read an encapsulated IPC message (metadata and body) from an InputStream. Returns null if there are not enough bytes available or the message length is 0. |
| static Status | arrow.ResolveDictionaries(ArrayDataVector columns, DictionaryMemo memo, MemoryPool pool) |
| static BufferResult | arrow.SerializeSchema(Schema schema, MemoryPool pool): Serialize a schema as an encapsulated IPC message. |
| static Status | parquet.WriteTable(Table table, MemoryPool pool, OutputStream sink, long chunk_size) |
| static Status | parquet.WriteTable(Table table, MemoryPool pool, OutputStream sink, long chunk_size, WriterProperties properties, ArrowWriterProperties arrow_properties): Write a Table to Parquet. |
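The allocation helpers above return Result-style wrappers mirroring arrow::Result in C++. A sketch of allocating a small buffer against an explicit pool; the ok() accessor on the result wrapper is an assumption mirrored from the C++ API and is not confirmed by this page.

```java
import org.bytedeco.arrow.BufferUniqueResult;
import org.bytedeco.arrow.MemoryPool;
import org.bytedeco.arrow.global.arrow;

public class AllocateSketch {
    public static void main(String[] args) {
        MemoryPool pool = arrow.default_memory_pool();

        // Allocate 64 bytes against the pool; the returned wrapper carries
        // either a Buffer or an error Status.
        BufferUniqueResult result = arrow.AllocateBuffer(64, pool);

        // ok() is an assumed accessor mirrored from arrow::Result<T>.
        if (result.ok()) {
            System.out.println("allocated; pool now holds "
                    + pool.bytes_allocated() + " bytes");
        }
    }
}
```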
| Modifier and Type | Method and Description |
|---|---|
| Status | Projector.Evaluate(RecordBatch batch, MemoryPool pool, ArrayVector output): Evaluate the specified record batch, and return the allocated and populated output arrays. |
| Status | Projector.Evaluate(RecordBatch batch, SelectionVector selection_vector, MemoryPool pool, ArrayVector output): Evaluate the specified record batch, and return the allocated and populated output arrays. |
| static Status | SelectionVector.MakeInt16(long max_slots, MemoryPool pool, SelectionVector selection_vector) |
| static Status | SelectionVector.MakeInt32(long max_slots, MemoryPool pool, SelectionVector selection_vector): Make a selection vector with int32-typed records. |
| static Status | SelectionVector.MakeInt64(long max_slots, MemoryPool pool, SelectionVector selection_vector): Make a selection vector with int64-typed records. |
| Modifier and Type | Method and Description |
|---|---|
| MemoryPool | WriterProperties.memory_pool() |
| MemoryPool | ReaderProperties.memory_pool() |
| MemoryPool | FileWriter.memory_pool() |
| MemoryPool | ArrowWriteContext.memory_pool() |
| Modifier and Type | Method and Description |
|---|---|
| static Statistics | Statistics.Make(ColumnDescriptor descr, BytePointer encoded_min, BytePointer encoded_max, long num_values, long null_count, long distinct_count, boolean has_min_max, boolean has_null_count, boolean has_distinct_count, MemoryPool pool) |
| static RecordReader | RecordReader.Make(ColumnDescriptor descr, LevelInfo leaf_info, MemoryPool pool, boolean read_dictionary) |
| static Statistics | Statistics.Make(ColumnDescriptor descr, MemoryPool pool): Create a new statistics instance given a column schema definition. |
| static ColumnReader | ColumnReader.Make(ColumnDescriptor descr, PageReader pager, MemoryPool pool) |
| static Statistics | Statistics.Make(ColumnDescriptor descr, String encoded_min, String encoded_max, long num_values, long null_count, long distinct_count, boolean has_min_max, boolean has_null_count, boolean has_distinct_count, MemoryPool pool): Create a new statistics instance given a column schema definition and pre-existing state. |
| static Scanner | Scanner.Make(ColumnReader col_reader, long batch_size, MemoryPool pool) |
| static Status | FileReader.Make(MemoryPool pool, ParquetFileReader reader, ArrowReaderProperties properties, FileReader out): Factory function to create a FileReader from a ParquetFileReader and properties. |
| static Status | FileReader.Make(MemoryPool pool, ParquetFileReader reader, FileReader out): Factory function to create a FileReader from a ParquetFileReader. |
| static Status | FileWriter.Make(MemoryPool pool, ParquetFileWriter writer, Schema schema, ArrowWriterProperties arrow_properties, FileWriter out) |
| WriterProperties.Builder | WriterProperties.Builder.memory_pool(MemoryPool pool) |
| FileReaderBuilder | FileReaderBuilder.memory_pool(MemoryPool pool): Set the Arrow MemoryPool used for memory allocation. |
| ArrowWriteContext | ArrowWriteContext.memory_pool(MemoryPool setter) |
| static PageReader | PageReader.Open(InputStream stream, long total_num_rows, Compression.type codec, MemoryPool pool, CryptoContext ctx) |
| static PageReader | PageReader.Open(InputStream stream, long total_num_rows, int codec, MemoryPool pool, CryptoContext ctx) |
| static Status | FileWriter.Open(Schema schema, MemoryPool pool, OutputStream sink, WriterProperties properties, ArrowWriterProperties arrow_properties, FileWriter writer) |
| static Status | FileWriter.Open(Schema schema, MemoryPool pool, OutputStream sink, WriterProperties properties, FileWriter writer) |
| Constructor and Description |
|---|
| ArrowWriteContext(MemoryPool memory_pool, ArrowWriterProperties properties) |
| ReaderProperties(MemoryPool pool) |
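To route Parquet read-path allocations through a specific pool, ReaderProperties can be constructed directly on it, and the memory_pool() getter listed in the parquet methods table hands the same pool back. A minimal sketch; whether the getter returns a wrapper around the identical native pointer is an assumption about the JavaCPP bindings.

```java
import org.bytedeco.arrow.MemoryPool;
import org.bytedeco.arrow.global.arrow;
import org.bytedeco.parquet.ReaderProperties;

public class ReaderPropsSketch {
    public static void main(String[] args) {
        MemoryPool pool = arrow.default_memory_pool();

        // Constructor listed above; read-path allocations go through `pool`.
        ReaderProperties props = new ReaderProperties(pool);

        // The memory_pool() getter returns the pool the properties were built on.
        MemoryPool same = props.memory_pool();
        System.out.println(same != null);
    }
}
```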
Copyright © 2021. All rights reserved.