package dense
Type Members
- class AdadeltaGradientDescentDVD extends StochasticGradientDescent[DenseVector[Double]]
- case class AffineOutputTransform [FV](numOutputs: Int, numInputs: Int, innerTransform: Transform[FV, DenseVector[Double]], includeBias: Boolean = true) extends OutputTransform[FV, DenseVector[Double]] with Product with Serializable
Used at the output layer when we're only going to need some of the possible outputs; it exposes the penultimate layer, and the Layer then allows you to pass the results from that back in (caching it elsewhere) and compute only certain cells in the output layer (activationsFromPenultimateDot).
- case class AffineTransform [FV, Mid](numOutputs: Int, numInputs: Int, innerTransform: Transform[FV, Mid], includeBias: Boolean = true)(implicit mult: breeze.linalg.operators.OpMulMatrix.Impl2[DenseMatrix[Double], Mid, DenseVector[Double]], canaxpy: breeze.linalg.scaleAdd.InPlaceImpl3[DenseVector[Double], Double, Mid]) extends Transform[FV, DenseVector[Double]] with Product with Serializable
- case class BatchNormalizationTransform [FV](size: Int, useBias: Boolean, inner: Transform[FV, DenseVector[Double]]) extends Transform[FV, DenseVector[Double]] with Product with Serializable
Implements batch normalization from http://arxiv.org/pdf/1502.03167v3.pdf. Basically, each unit is shifted and rescaled per minibatch so that its activations have mean 0 and variance 1. This has been demonstrated to help training deep networks, but doesn't seem to help here.
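The per-minibatch shift-and-rescale described above can be sketched in plain Scala (no breeze dependency; `BatchNormSketch`, `normalize`, and the epsilon value are illustrative names and choices, not this class's actual API):

```scala
// Sketch of per-unit batch normalization over a minibatch:
// each unit (column j) is shifted and rescaled so that, within the
// minibatch, its activations have mean 0 and variance 1.
object BatchNormSketch {
  val Epsilon = 1e-5 // small constant for numerical stability

  /** minibatch: one Array per example, all of the same length (= number of units). */
  def normalize(minibatch: Array[Array[Double]]): Array[Array[Double]] = {
    val n = minibatch.length.toDouble
    val units = minibatch.head.length
    val means = Array.tabulate(units)(j => minibatch.map(_(j)).sum / n)
    val variances = Array.tabulate(units) { j =>
      minibatch.map(x => math.pow(x(j) - means(j), 2)).sum / n
    }
    minibatch.map { x =>
      Array.tabulate(units)(j => (x(j) - means(j)) / math.sqrt(variances(j) + Epsilon))
    }
  }
}
```

The real transform additionally learns per-unit shift/scale parameters (and here, optionally a bias); this sketch shows only the normalization step.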
- case class CachingLookupAndAffineTransformDense [FV](numOutputs: Int, numInputs: Int, word2vecIndexed: Word2VecIndexed[String], includeBias: Boolean = true) extends Transform[Array[Int], DenseVector[Double]] with Product with Serializable
Used at the input layer to cache lookups and the result of applying the affine transform at the first layer of the network. This saves computation across repeated invocations of the neural network in the sentence.
- case class CachingLookupTransform (word2vecIndexed: Word2VecIndexed[String]) extends Transform[Array[Int], DenseVector[Double]] with Product with Serializable
Used at the input layer to cache lookups and
- case class EmbeddingsTransform [FV](numOutputs: Int, numInputs: Int, word2vecIndexed: Word2VecIndexed[String], includeBias: Boolean = true) extends Transform[Array[Int], DenseVector[Double]] with Product with Serializable
Used at the input layer to cache lookups and backprop into embeddings.
- class FrequencyTagger [W] extends Tagger[W] with Serializable
- class IdentityTransform [T] extends Transform[T, T]
- case class LRQTNLayer (lhsWeights: DenseMatrix[Double], rhsWeights: DenseMatrix[Double], index: Index[Feature], numRanks: Int, numLeftInputs: Int, numRightInputs: Int) extends Product with Serializable
- case class LowRankQuadraticTransform [FV](numOutputs: Int, numRanks: Int, numLeftInputs: Int, numRightInputs: Int, innerTransform: Transform[FV, DenseVector[Double]]) extends OutputTransform[FV, DenseVector[Double]] with Product with Serializable
- case class LowRankQuadraticTransformNeuron (numRanks: Int, numLeftInputs: Int, numRightInputs: Int) extends Product with Serializable
Separate because I was having some issues...
- case class NeuralBias (input: Int) extends Feature with Product with Serializable
- case class NeuralFeature (output: Int, input: Int) extends Feature with Product with Serializable
- case class NonlinearTransform [FV](nonLinType: String, size: Int, inner: Transform[FV, DenseVector[Double]], dropoutRate: Double = 0.5) extends Transform[FV, DenseVector[Double]] with Product with Serializable
A bit of a misnomer, since this has been generalized to support linear functions as well...
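Selecting the nonlinearity by a string, as the `nonLinType` parameter suggests, can be sketched as below; the accepted names ("tanh", "relu", "identity") are assumptions for illustration, not necessarily the strings this class recognizes:

```scala
// Illustrative dispatch on a nonlinearity name, in the spirit of
// NonlinearTransform's nonLinType parameter. The string keys are assumed.
object NonlinearitySketch {
  def apply(nonLinType: String, v: Array[Double]): Array[Double] = nonLinType match {
    case "tanh"     => v.map(math.tanh)
    case "relu"     => v.map(x => math.max(0.0, x))
    case "identity" => v // the "misnomer": a linear pass-through is also allowed
    case other      => sys.error(s"unknown nonlinearity: $other")
  }
}
```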
- case class OutputEmbeddingTransform [FV](numOutputs: Int, outputDim: Int, innerTransform: Transform[FV, DenseVector[Double]], coarsenerForInitialization: Option[(Int) ⇒ Int] = None) extends OutputTransform[FV, DenseVector[Double]] with Product with Serializable
Output embedding technique described in section 6 of http://www.eecs.berkeley.edu/~gdurrett/papers/durrett-klein-acl2015.pdf. Basically learns a dictionary for the output as well as an affine transformation in order to produce the vector that gets combined with the input in the final bilinear product.
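The scoring idea in that description, an output's learned embedding passed through an affine map and then combined with the input-side vector in a bilinear product, can be sketched as follows. All names (`score`, `h`, `w`, `b`, `eO`) are illustrative, not this class's API:

```scala
// Sketch of output-embedding scoring: score(o) = h . (W * e_o + b),
// where e_o is output o's learned embedding (a row of the "dictionary"),
// (W, b) is the learned affine transformation, and h is the input-side vector.
object OutputEmbeddingSketch {
  def dot(a: Array[Double], b: Array[Double]): Double =
    a.zip(b).map { case (x, y) => x * y }.sum

  def score(h: Array[Double], w: Array[Array[Double]], b: Array[Double],
            eO: Array[Double]): Double = {
    val transformed = w.indices.map(i => dot(w(i), eO) + b(i)).toArray
    dot(h, transformed)
  }
}
```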
- trait OutputTransform [In, +Out] extends Serializable
- trait Tagger [W] extends AnyRef
- case class TanhTransform [FV](inner: Transform[FV, DenseVector[Double]]) extends Transform[FV, DenseVector[Double]] with Product with Serializable
- trait Transform [In, +Out] extends Serializable
- class Word2VecDepFeaturizerIndexed [W] extends Serializable
- class Word2VecIndexed [W] extends Serializable
converter is used to map words into the word2vec vocabulary, which might include things like lowercasing, replacing numbers, changing -LRB-, etc. See Word2Vec.convertWord
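A converter of the kind described might look like the sketch below: lowercase the word, collapse digits, and normalize PTB bracket tokens. This is a hypothetical stand-in, not the actual Word2Vec.convertWord implementation:

```scala
// Hypothetical converter in the spirit of Word2Vec.convertWord:
// maps surface tokens into a word2vec-style vocabulary.
object ConverterSketch {
  def convert(w: String): String = w match {
    case "-LRB-" => "("          // PTB left round bracket
    case "-RRB-" => ")"          // PTB right round bracket
    case _       => w.toLowerCase.replaceAll("[0-9]", "0") // lowercase, collapse digits
  }
}
```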
- class Word2VecSurfaceFeaturizerIndexed [W] extends Serializable
- class Word2VecUtils extends AnyRef
Utilities from https://gist.github.com/ansjsun/6304960
- trait WordVectorAnchoringIndexed [String] extends AnyRef
- trait WordVectorDepAnchoringIndexed [String] extends AnyRef
Value Members
- object AffineTransform extends Serializable
- object NonlinearTransform extends Serializable
- object OutputEmbeddingTransform extends Serializable
- object OutputTransform extends Serializable
- object Transform extends Serializable
- object Word2Vec
- object Word2VecIndexed extends Serializable
- object Word2VecSurfaceFeaturizerIndexed extends Serializable