trait MicroBatchStream extends SparkDataStream
Supertypes: SparkDataStream, AnyRef, Any
Abstract Value Members
- abstract def commit(end: Offset): Unit
  Informs the source that Spark has completed processing all data for offsets less than or equal to end and will only request offsets greater than end in the future.
  - Definition Classes: SparkDataStream
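To make the contract concrete, here is a minimal sketch, assuming a hypothetical CounterOffset offset type and an in-memory buffer of unacknowledged records (both invented for illustration, not part of the API):

```scala
import org.apache.spark.sql.connector.read.streaming.Offset

// Hypothetical offset type used in the sketches on this page: a single
// monotonically increasing counter, serialized as its decimal string.
case class CounterOffset(value: Long) extends Offset {
  override def json(): String = value.toString
}

// Inside a hypothetical MicroBatchStream implementation that buffers
// records until Spark confirms it has processed them:
var buffered: Map[Long, String] = Map.empty // offset -> record

override def commit(end: Offset): Unit = {
  val committed = end.asInstanceOf[CounterOffset].value
  // Spark will only ask for offsets strictly greater than `committed`,
  // so everything at or below it can be dropped from the replay buffer.
  buffered = buffered.filter { case (offset, _) => offset > committed }
}
```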
- abstract def createReaderFactory(): PartitionReaderFactory
  Returns a factory to create a PartitionReader for each InputPartition.
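A sketch of such a factory, assuming a hypothetical partition type that carries its rows with the task (CounterInputPartition and CounterReaderFactory are invented names):

```scala
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.connector.read.{InputPartition, PartitionReader, PartitionReaderFactory}

// Hypothetical partition that ships its values directly to the executor.
case class CounterInputPartition(values: Array[Long]) extends InputPartition

// The factory is serialized and sent to executors, so it must not
// capture driver-side state such as open connections.
class CounterReaderFactory extends PartitionReaderFactory {
  override def createReader(partition: InputPartition): PartitionReader[InternalRow] = {
    val values = partition.asInstanceOf[CounterInputPartition].values
    new PartitionReader[InternalRow] {
      private var i = -1
      override def next(): Boolean = { i += 1; i < values.length }
      override def get(): InternalRow = InternalRow(values(i))
      override def close(): Unit = ()
    }
  }
}
```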
- abstract def deserializeOffset(json: String): Offset
  Deserialize a JSON string into an Offset of the implementation-defined offset type.
  - Definition Classes: SparkDataStream
  - Exceptions thrown: IllegalArgumentException if the JSON does not encode a valid offset for this reader
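Continuing the hypothetical CounterOffset sketch, deserialization is simply the inverse of Offset.json(), with malformed checkpoint data surfacing as the documented IllegalArgumentException:

```scala
// Inside the same hypothetical implementation:
override def deserializeOffset(json: String): Offset =
  try CounterOffset(json.trim.toLong)
  catch {
    case e: NumberFormatException =>
      // The checkpoint contains something this source never wrote.
      throw new IllegalArgumentException(s"Invalid offset JSON: $json", e)
  }
```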
- abstract def initialOffset(): Offset
  Returns the initial offset for a streaming query to start reading from. Note that the streaming data source should not assume that it will start reading from its initial offset: if Spark is restarting an existing query, it will restart from the checkpointed offset rather than the initial one.
  - Definition Classes: SparkDataStream
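For the hypothetical counter source, the natural starting point is before any data; on a restart Spark substitutes the checkpointed offset and this value is ignored:

```scala
// CounterOffset(0) means "nothing read yet"; the first batch then
// covers offsets in (0, latestOffset].
override def initialOffset(): Offset = CounterOffset(0L)
```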
- abstract def latestOffset(): Offset
  Returns the most recent offset available.
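In the running sketch this reports how far data is available; Spark uses it as the tentative end offset of the next micro-batch (highWaterMark is an invented field, advanced as records arrive):

```scala
// Inside the same hypothetical implementation:
@volatile private var highWaterMark = 0L

override def latestOffset(): Offset = CounterOffset(highWaterMark)
```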
- abstract def planInputPartitions(start: Offset, end: Offset): Array[InputPartition]
  Returns a list of input partitions given the start and end offsets. Each InputPartition represents a data split that can be processed by one Spark task. The number of input partitions returned here is the same as the number of RDD partitions this scan outputs.
  If the Scan supports filter pushdown, this stream is likely configured with a filter and is responsible for creating splits for that filter, which is not a full scan.
  This method will be called multiple times, to launch one Spark job for each micro-batch in this data stream.
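A sketch for the counter source, splitting the (start, end] range into roughly numPartitions slices (numPartitions is an invented configuration value; CounterInputPartition is the hypothetical partition type from the createReaderFactory sketch):

```scala
import org.apache.spark.sql.connector.read.InputPartition

override def planInputPartitions(start: Offset, end: Offset): Array[InputPartition] = {
  val from = start.asInstanceOf[CounterOffset].value // exclusive
  val to   = end.asInstanceOf[CounterOffset].value   // inclusive
  val all  = (from + 1).to(to).toArray
  // One Spark task per slice; the array length fixes the RDD partition count.
  all.grouped(math.max(1, all.length / numPartitions))
    .map(slice => CounterInputPartition(slice): InputPartition)
    .toArray
}
```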
- abstract def stop(): Unit
  Stop this source and free any resources it has allocated.
  - Definition Classes: SparkDataStream
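Putting the fragments together, a minimal skeleton of how they might assemble, with stop() releasing the one resource this hypothetical source owns (a background feeder thread, invented for illustration):

```scala
import org.apache.spark.sql.connector.read.{InputPartition, PartitionReaderFactory}
import org.apache.spark.sql.connector.read.streaming.{MicroBatchStream, Offset}

class CounterMicroBatchStream(numPartitions: Int) extends MicroBatchStream {
  @volatile private var highWaterMark = 0L
  private val ingestThread = new Thread(() => { /* feed data, advance highWaterMark */ })

  override def initialOffset(): Offset = CounterOffset(0L)
  override def latestOffset(): Offset = CounterOffset(highWaterMark)
  override def deserializeOffset(json: String): Offset = CounterOffset(json.trim.toLong)
  override def commit(end: Offset): Unit = () // see the commit sketch above
  override def planInputPartitions(start: Offset, end: Offset): Array[InputPartition] =
    Array.empty // see the planInputPartitions sketch above
  override def createReaderFactory(): PartitionReaderFactory = new CounterReaderFactory

  // Called when the query shuts down: release everything the source
  // holds open; implementations should make this safe to call once.
  override def stop(): Unit = ingestThread.interrupt()
}
```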
Concrete Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- def clone(): AnyRef
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- def finalize(): Unit
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( classOf[java.lang.Throwable] )
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- def hashCode(): Int
  - Definition Classes: AnyRef → Any
  - Annotations: @native()
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native()
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- def toString(): String
  - Definition Classes: AnyRef → Any
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()