trait SupportsAdmissionControl extends SparkDataStream
A mix-in interface for SparkDataStream streaming sources to signal that they can control the rate of data ingested into the system. These rate limits can come implicitly from the contract of triggers, e.g. Trigger.Once() requires that a micro-batch process all data available to the system at the start of the micro-batch. Alternatively, sources can decide to limit ingest through data source options.
Through this interface, a MicroBatchStream should be able to return the next offset that it will process up to, given a ReadLimit.
- Since: 3.0.0
- By Inheritance
- SupportsAdmissionControl
- SparkDataStream
- AnyRef
- Any
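The admission-control contract described above can be sketched as follows. This is a minimal illustration using simplified stand-in types (LongOffset, Limit, MaxRows, InMemoryStream are all hypothetical names, not Spark's classes); a real source would implement org.apache.spark.sql.connector.read.streaming.SupportsAdmissionControl.

```scala
// Stand-in for Spark's Offset: a single monotonically increasing position.
final case class LongOffset(value: Long)

// Stand-in for Spark's ReadLimit hierarchy.
sealed trait Limit
case object AllAvailable extends Limit          // e.g. Trigger.Once semantics
final case class MaxRows(rows: Long) extends Limit

class InMemoryStream(available: Long) {
  // Next micro-batch end offset, capped by the read limit.
  def latestOffset(start: LongOffset, limit: Limit): LongOffset = limit match {
    case AllAvailable  => LongOffset(available)
    case MaxRows(rows) => LongOffset(math.min(available, start.value + rows))
  }
}

val stream = new InMemoryStream(available = 100L)
// With no limit, the whole backlog is admitted into one micro-batch.
assert(stream.latestOffset(LongOffset(0L), AllAvailable) == LongOffset(100L))
// With a row cap, only 10 rows past the start offset are admitted.
assert(stream.latestOffset(LongOffset(40L), MaxRows(10L)) == LongOffset(50L))
```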
Abstract Value Members
- abstract def commit(end: Offset): Unit
  Informs the source that Spark has completed processing all data for offsets less than or equal to end and will only request offsets greater than end in the future.
  - Definition Classes
  - SparkDataStream
- abstract def deserializeOffset(json: String): Offset
  Deserialize a JSON string into an Offset of the implementation-defined offset type.
  - Definition Classes
  - SparkDataStream
  - Exceptions thrown
    IllegalArgumentException if the JSON does not encode a valid offset for this reader
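The serialize/deserialize contract can be illustrated with a hypothetical offset type whose json form must be inverted by deserializeOffset (the type name MyOffset and the field name "position" are illustrative, not Spark's):

```scala
// Hypothetical offset type that serializes itself as JSON.
final case class MyOffset(position: Long) {
  def json: String = s"""{"position":$position}"""
}

// deserializeOffset must invert json(), and reject anything it cannot parse.
def deserializeOffset(json: String): MyOffset = {
  val Pattern = """\{"position":(\d+)\}""".r
  json.trim match {
    case Pattern(pos) => MyOffset(pos.toLong)
    case other =>
      throw new IllegalArgumentException(s"Invalid offset JSON: $other")
  }
}

// Round-trip: serialize then deserialize yields an equal offset.
assert(deserializeOffset(MyOffset(42L).json) == MyOffset(42L))
```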
- abstract def initialOffset(): Offset
  Returns the initial offset for a streaming query to start reading from. Note that the streaming data source should not assume that it will start reading from its initial offset: if Spark is restarting an existing query, it will restart from the checkpointed offset rather than the initial one.
  - Definition Classes
  - SparkDataStream
- abstract def latestOffset(startOffset: Offset, limit: ReadLimit): Offset
  Returns the most recent offset available given a read limit. The start offset can be used to figure out how much new data should be read given the limit. Users should implement this method instead of latestOffset for a MicroBatchStream, or getOffset for a Source.
  When this method is called on a Source, the source can return null if there is no data to process; in addition, for the very first micro-batch, the startOffset will be null as well. When this method is called on a MicroBatchStream, the startOffset will be initialOffset for the very first micro-batch, and the source can return null if there is no data to process.
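The null-handling rules for latestOffset can be sketched with stand-in types (Off, RowLimit, Src are illustrative names): on the first micro-batch of a MicroBatchStream the start offset is initialOffset, and returning null signals that there is no data to process.

```scala
// Stand-ins for Offset and a rows-based ReadLimit.
final case class Off(v: Long)
final case class RowLimit(maxRows: Long)

class Src(val availableUpTo: Long) {
  val initialOffset: Off = Off(0L)

  // Returns the next end offset, capped by the limit,
  // or null when nothing new is available.
  def latestOffset(startOffset: Off, limit: RowLimit): Off = {
    val start = startOffset.v
    if (availableUpTo <= start) null          // no data: skip this batch
    else Off(math.min(availableUpTo, start + limit.maxRows))
  }
}

val src = new Src(availableUpTo = 5L)
// First micro-batch starts from initialOffset and is capped at 3 rows.
assert(src.latestOffset(src.initialOffset, RowLimit(3L)) == Off(3L))
// Once caught up, null means there is no data to process.
assert(src.latestOffset(Off(5L), RowLimit(3L)) == null)
```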
- abstract def stop(): Unit
  Stop this source and free any resources it has allocated.
  - Definition Classes
  - SparkDataStream
Concrete Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes
  - AnyRef → Any
- final def ##(): Int
  - Definition Classes
  - AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes
  - AnyRef → Any
- final def asInstanceOf[T0]: T0
  - Definition Classes
  - Any
- def clone(): AnyRef
  - Attributes
  - protected[lang]
  - Definition Classes
  - AnyRef
  - Annotations
  - @throws( ... ) @native()
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes
  - AnyRef
- def equals(arg0: Any): Boolean
  - Definition Classes
  - AnyRef → Any
- def finalize(): Unit
  - Attributes
  - protected[lang]
  - Definition Classes
  - AnyRef
  - Annotations
  - @throws( classOf[java.lang.Throwable] )
- final def getClass(): Class[_]
  - Definition Classes
  - AnyRef → Any
  - Annotations
  - @native()
- def getDefaultReadLimit(): ReadLimit
  Returns the read limit potentially passed to the data source through options when creating the data source.
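A sketch of deriving a default read limit from source options, using stand-in types and a hypothetical option name ("maxRowsPerTrigger" and the Limit hierarchy here are illustrative, not Spark's), with "all available" as the fallback when no option is set:

```scala
// Stand-in for Spark's ReadLimit hierarchy.
sealed trait Limit
case object ReadAllAvailable extends Limit
final case class ReadMaxRows(rows: Long) extends Limit

// Options as plain key/value pairs, standing in for a data source's options.
def getDefaultReadLimit(options: Map[String, String]): Limit =
  options.get("maxRowsPerTrigger") match {
    case Some(rows) => ReadMaxRows(rows.toLong)
    case None       => ReadAllAvailable
  }

// An explicit option produces a bounded limit; its absence means no cap.
assert(getDefaultReadLimit(Map("maxRowsPerTrigger" -> "1000")) == ReadMaxRows(1000L))
assert(getDefaultReadLimit(Map.empty) == ReadAllAvailable)
```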
- def hashCode(): Int
  - Definition Classes
  - AnyRef → Any
  - Annotations
  - @native()
- final def isInstanceOf[T0]: Boolean
  - Definition Classes
  - Any
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes
  - AnyRef
- final def notify(): Unit
  - Definition Classes
  - AnyRef
  - Annotations
  - @native()
- final def notifyAll(): Unit
  - Definition Classes
  - AnyRef
  - Annotations
  - @native()
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes
  - AnyRef
- def toString(): String
  - Definition Classes
  - AnyRef → Any
- final def wait(): Unit
  - Definition Classes
  - AnyRef
  - Annotations
  - @throws( ... )
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes
  - AnyRef
  - Annotations
  - @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes
  - AnyRef
  - Annotations
  - @throws( ... ) @native()