org.bdgenomics.adam.rdd

ADAMContext

class ADAMContext extends Serializable with Logging

Linear Supertypes
Logging, Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new ADAMContext(sc: SparkContext)
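
    Example (a minimal sketch, not part of the generated page): constructing an ADAMContext from an existing SparkContext. The application name, master URL and the name ac are placeholders.

      import org.apache.spark.{SparkConf, SparkContext}
      import org.bdgenomics.adam.rdd.ADAMContext

      // Placeholder Spark configuration; adjust the master and app name for your deployment.
      val conf = new SparkConf().setAppName("adam-example").setMaster("local[4]")
      val sc = new SparkContext(conf)

      // Wrap the SparkContext to get access to the load methods listed below.
      val ac = new ADAMContext(sc)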

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. def adamDictionaryLoad[T](filePath: String)(implicit ev1: (T) ⇒ SpecificRecord, ev2: Manifest[T]): SequenceDictionary

    Creates a new SequenceDictionary from any Parquet file whose records contain the requisite reference{Name,Id,Length,Url} fields.

    (If the path is a BAM or SAM file and the implicit record type is AlignmentRecord, this simply reads the SequenceDictionary out of the BAM header in the normal way.)

    T

    The type of records to return

    filePath

    The path to the input data

    returns

    A SequenceDictionary containing the names and indices of all the sequences to which the records in the corresponding file are aligned.
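
    Example (a hedged sketch): building a SequenceDictionary from an ADAM Parquet file of reads. Assumes an ADAMContext ac, a placeholder path, and that AlignmentRecord and SequenceDictionary live in org.bdgenomics.formats.avro and org.bdgenomics.adam.models respectively (package names vary across ADAM versions).

      import org.bdgenomics.adam.models.SequenceDictionary
      import org.bdgenomics.formats.avro.AlignmentRecord

      // Build the dictionary from the reference metadata stored with the records
      // (or from the BAM/SAM header, if the path points at a BAM or SAM file).
      val dict: SequenceDictionary =
        ac.adamDictionaryLoad[AlignmentRecord]("hdfs:///data/sample.adam")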

  7. def adamLoad[T, U <: UnboundRecordFilter](filePath: String, predicate: Option[Class[U]] = None, projection: Option[Schema] = None)(implicit ev1: (T) ⇒ SpecificRecord, ev2: Manifest[T]): RDD[T]

    Loads data stored in Parquet format into a new RDD of the given Avro record type, optionally applying a pushdown predicate and a projection while reading.

    T

    The type of records to return

    filePath

    The path to the input data

    predicate

    An optional pushdown predicate to use when reading the data

    projection

    An optional projection schema to use when reading the data

    returns

    An RDD with records of the specified type
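
    Example (a hedged sketch): loading AlignmentRecords with a field projection. Assumes an ADAMContext ac, a placeholder path, that the Projection helper and AlignmentRecordField enumeration exist under org.bdgenomics.adam.projections, and that UnboundRecordFilter comes from the parquet.filter package (exact names vary across ADAM and Parquet versions).

      import org.apache.spark.rdd.RDD
      import org.bdgenomics.adam.projections.{AlignmentRecordField, Projection}
      import org.bdgenomics.formats.avro.AlignmentRecord
      import parquet.filter.UnboundRecordFilter

      // Project only the fields we need, so Parquet can skip the rest on disk.
      val proj = Projection(
        AlignmentRecordField.contig,
        AlignmentRecordField.start,
        AlignmentRecordField.sequence)

      // No pushdown predicate; read the projected fields of every record.
      val reads: RDD[AlignmentRecord] =
        ac.adamLoad[AlignmentRecord, UnboundRecordFilter](
          "hdfs:///data/sample.adam",
          projection = Some(proj))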

  8. def applyPredicate[T, U <: ADAMPredicate[T]](reads: RDD[T], predicateOpt: Option[Class[U]])(implicit ev1: (T) ⇒ SpecificRecord, ev2: Manifest[T]): RDD[T]

  9. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  10. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  11. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  12. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  13. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  14. def findFiles(path: Path, regex: String): Seq[Path]

    Searches a path recursively, returning the names of all directories in the tree whose name matches the given regex.

    path

    The path to begin the search at

    regex

    A regular expression

    returns

    A sequence of Path objects corresponding to the identified directories.
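
    Example (a hedged sketch): collecting per-chromosome directories under a sample directory. Assumes an ADAMContext ac; the directory layout and regex are placeholders.

      import org.apache.hadoop.fs.Path

      // Every directory under /data/sampleA whose name starts with "chr",
      // e.g. /data/sampleA/chr1, /data/sampleA/chr2, ...
      val shards: Seq[Path] = ac.findFiles(new Path("/data/sampleA"), "^chr.*")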

  15. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  16. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  17. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  18. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  19. def loadAlignments[U <: ADAMPredicate[AlignmentRecord]](filePath: String, predicate: Option[Class[U]] = None, projection: Option[Schema] = None): RDD[AlignmentRecord]
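
    Example (a hedged sketch): the simplest way to load reads. Assumes an ADAMContext ac and a placeholder path; with the predicate and projection left at their defaults, every record and every field is read.

      import org.apache.spark.rdd.RDD
      import org.bdgenomics.formats.avro.AlignmentRecord

      // Load all reads from an ADAM Parquet file (BAM/SAM input is typically
      // handled as well, though extension handling depends on the ADAM version).
      val reads: RDD[AlignmentRecord] = ac.loadAlignments("hdfs:///data/sample.adam")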

  20. def loadAlignmentsFromPaths(paths: Seq[Path]): RDD[AlignmentRecord]

    Takes a sequence of Path objects (e.g. the return value of findFiles). Treats each path as corresponding to a Read set -- loads each Read set, converts each set to use the same SequenceDictionary, and returns the union of the RDDs.

    (GenomeBridge is using this to load BAMs that have been split into multiple files per sample, for example, one-BAM-per-chromosome.)

    paths

    The locations of the parquet files to load

    returns

    A single RDD[AlignmentRecord] that contains the union of the AlignmentRecords in the argument paths.
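
    Example (a hedged sketch): loading a read set that has been split across several files, as described above. Assumes an ADAMContext ac; the per-chromosome paths are placeholders and could equally come from findFiles.

      import org.apache.hadoop.fs.Path
      import org.apache.spark.rdd.RDD
      import org.bdgenomics.formats.avro.AlignmentRecord

      // One file per chromosome for a single sample (placeholder layout).
      val paths: Seq[Path] = Seq(
        new Path("/data/sampleA/chr1.adam"),
        new Path("/data/sampleA/chr2.adam"))

      // Each path is loaded, remapped onto a common SequenceDictionary, and the
      // per-path RDDs are unioned into a single RDD of reads.
      val allReads: RDD[AlignmentRecord] = ac.loadAlignmentsFromPaths(paths)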

  21. def loadFeatures[U <: ADAMPredicate[Feature]](filePath: String, predicate: Option[Class[U]] = None, projection: Option[Schema] = None): RDD[Feature]

  22. def loadGenes[U <: ADAMPredicate[Feature]](filePath: String, predicate: Option[Class[U]] = None, projection: Option[Schema] = None): RDD[Gene]

  23. def loadGenotypes[U <: ADAMPredicate[Genotype]](filePath: String, predicate: Option[Class[U]] = None, projection: Option[Schema] = None, sd: Option[SequenceDictionary] = None): RDD[Genotype]

  24. def loadSequence[U <: ADAMPredicate[NucleotideContigFragment]](filePath: String, predicate: Option[Class[U]] = None, projection: Option[Schema] = None, fragmentLength: Long = 10000): RDD[NucleotideContigFragment]

  25. def loadVariantAnnotations[U <: ADAMPredicate[DatabaseVariantAnnotation]](filePath: String, predicate: Option[Class[U]] = None, projection: Option[Schema] = None, sd: Option[SequenceDictionary] = None): RDD[DatabaseVariantAnnotation]

  26. def loadVariants[U <: ADAMPredicate[Variant]](filePath: String, predicate: Option[Class[U]] = None, projection: Option[Schema] = None, sd: Option[SequenceDictionary] = None): RDD[Variant]
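
    Example (a hedged sketch) covering the typed loaders above (loadFeatures, loadGenotypes, loadSequence, loadVariants). Assumes an ADAMContext ac, placeholder paths, and that the record classes live in org.bdgenomics.formats.avro (package names vary across ADAM versions).

      import org.apache.spark.rdd.RDD
      import org.bdgenomics.formats.avro.{Feature, Genotype, NucleotideContigFragment, Variant}

      // Genotype and variant data (placeholder paths).
      val genotypes: RDD[Genotype] = ac.loadGenotypes("hdfs:///data/sample.gt.adam")
      val variants: RDD[Variant] = ac.loadVariants("hdfs:///data/sample.var.adam")

      // Generic features, plus reference sequence broken into fixed-length fragments.
      val features: RDD[Feature] = ac.loadFeatures("hdfs:///data/features.adam")
      val contigs: RDD[NucleotideContigFragment] =
        ac.loadSequence("hdfs:///data/reference.adam", fragmentLength = 10000L)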

  27. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  28. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  29. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  30. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  31. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  32. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  33. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  34. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  35. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  36. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  37. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  38. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  39. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  40. final def notify(): Unit

    Definition Classes
    AnyRef
  41. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  42. val sc: SparkContext

  43. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  44. def toString(): String

    Definition Classes
    AnyRef → Any
  45. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  46. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  47. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
