package encoders
Type Members
- case class DummyExpressionHolder(exprs: Seq[Expression]) extends LeafNode with Product with Serializable
- case class ExpressionEncoder[T](objSerializer: Expression, objDeserializer: Expression, clsTag: ClassTag[T]) extends Encoder[T] with Product with Serializable
A generic encoder for JVM objects that uses Catalyst Expressions for a serializer and a deserializer.
- objSerializer
An expression that can be used to encode a raw object into its corresponding Spark SQL representation, which can be a primitive column, array, map or struct. This represents how Spark SQL generally serializes an object of type T.
- objDeserializer
An expression that constructs an object from a Spark SQL representation. This represents how Spark SQL generally deserializes a serialized value in Spark SQL representation back to an object of type T.
- clsTag
A ClassTag for T.
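The round trip through the serializer and deserializer expressions can be sketched as follows. This is a minimal sketch, assuming Spark 3.x's catalyst module on the classpath; `resolveAndBind`, `createSerializer` and `createDeserializer` are the Spark 3.x method names, and `Person` is a hypothetical case class introduced here for illustration.

```scala
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

// A hypothetical case class to encode; any Product type works.
case class Person(name: String, age: Int)

object EncoderRoundTrip {
  // Derive an encoder for Person; the serializer/deserializer
  // expressions are generated from the class's fields.
  val enc: ExpressionEncoder[Person] = ExpressionEncoder[Person]()

  def roundTrip(p: Person): Person = {
    // Bind the deserializer to the encoder's own schema before use.
    val bound = enc.resolveAndBind()
    // Serialize to Spark's InternalRow, then deserialize back.
    val internalRow = bound.createSerializer()(p)
    bound.createDeserializer()(internalRow)
  }

  def main(args: Array[String]): Unit =
    println(roundTrip(Person("Alice", 30)))
}
```

Binding first is what resolves the encoder's unresolved attribute references against its own schema; calling `createDeserializer()` on an unresolved encoder fails.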
Value Members
- def encoderFor[A](implicit arg0: Encoder[A]): ExpressionEncoder[A]
Returns an internal encoder object that can be used to serialize / deserialize JVM objects into Spark SQL rows. The implicit encoder should always be unresolved (i.e. have no attribute references from a specific schema). This requirement allows us to preserve whether a given object type is being bound by name or by ordinal when doing resolution.
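For example, a public Encoder obtained from the Encoders factory can be converted into the internal representation like this. A minimal sketch, assuming Spark 3.x on the classpath; the encoder is passed explicitly rather than found implicitly, which the signature also allows.

```scala
import org.apache.spark.sql.Encoders
import org.apache.spark.sql.catalyst.encoders.{encoderFor, ExpressionEncoder}

object EncoderForDemo {
  // encoderFor converts a public Encoder[A] into the internal
  // ExpressionEncoder[A] that catalyst operates on.
  val longEnc: ExpressionEncoder[java.lang.Long] = encoderFor(Encoders.LONG)

  def main(args: Array[String]): Unit = {
    // At this point the encoder is still unresolved: its attribute
    // references are not yet bound to any concrete schema.
    println(longEnc.clsTag)
  }
}
```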
- object ExpressionEncoder extends Serializable
A factory for constructing encoders that convert objects and primitives to and from the internal row format using Catalyst expressions and code generation. By default, the expressions used to retrieve values from an input row when producing an object will be created as follows:
- Classes will have their sub-fields extracted by name using UnresolvedAttribute and UnresolvedExtractValue expressions.
- Tuples will have their sub-fields extracted by position using BoundReference expressions.
- Primitives will have their values extracted from the first ordinal, with a schema that defaults to the name value.
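The naming rules above can be observed directly on an encoder's schema. A sketch, assuming Spark 3.x on the classpath:

```scala
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

object SchemaNames {
  // Tuples: sub-fields are extracted by position and named _1, _2, ...
  val tupleFields: Seq[String] =
    ExpressionEncoder[(Int, String)]().schema.fieldNames.toSeq

  // Primitives: a single field whose name defaults to "value".
  val primFields: Seq[String] =
    ExpressionEncoder[Long]().schema.fieldNames.toSeq

  def main(args: Array[String]): Unit = {
    println(tupleFields) // List(_1, _2)
    println(primFields)  // List(value)
  }
}
```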
- object OuterScopes
- object RowEncoder
A factory for constructing encoders that convert external rows to/from the Spark SQL internal binary representation.
The following is the mapping between Spark SQL types and their allowed external types:
BooleanType -> java.lang.Boolean
ByteType -> java.lang.Byte
ShortType -> java.lang.Short
IntegerType -> java.lang.Integer
FloatType -> java.lang.Float
DoubleType -> java.lang.Double
StringType -> String
DecimalType -> java.math.BigDecimal, scala.math.BigDecimal or Decimal
DateType -> java.sql.Date if spark.sql.datetime.java8API.enabled is false
DateType -> java.time.LocalDate if spark.sql.datetime.java8API.enabled is true
TimestampType -> java.sql.Timestamp if spark.sql.datetime.java8API.enabled is false
TimestampType -> java.time.Instant if spark.sql.datetime.java8API.enabled is true
BinaryType -> byte array
ArrayType -> scala.collection.Seq or Array
MapType -> scala.collection.Map
StructType -> org.apache.spark.sql.Row
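Using the mapping above, an external Row can be round-tripped through the internal representation. A sketch per the Spark 3.x API, where `RowEncoder(schema)` returns an ExpressionEncoder[Row] (assuming the catalyst module on the classpath):

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.catalyst.encoders.RowEncoder
import org.apache.spark.sql.types._

object RowEncoderDemo {
  val schema: StructType = StructType(Seq(
    StructField("name", StringType),  // StringType <-> String
    StructField("age", IntegerType)   // IntegerType <-> java.lang.Integer
  ))

  def roundTrip(row: Row): Row = {
    // Bind the encoder to its own schema before serializing.
    val enc = RowEncoder(schema).resolveAndBind()
    val internal = enc.createSerializer()(row)
    enc.createDeserializer()(internal)
  }

  def main(args: Array[String]): Unit =
    println(roundTrip(Row("Alice", 30)))
}
```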