org.encog.neural.networks.training.lma
Class LevenbergMarquardtTraining

java.lang.Object
  extended by org.encog.neural.networks.training.BasicTraining
      extended by org.encog.neural.networks.training.lma.LevenbergMarquardtTraining
All Implemented Interfaces:
Train

public class LevenbergMarquardtTraining
extends BasicTraining

Trains a neural network using the Levenberg-Marquardt algorithm (LMA). This training technique is based on the mathematical method of the same name: http://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm

The LMA training technique has some important limitations that you should be aware of before using it:
- Only neural networks that have a single output neuron can be used with this training technique.
- The entire training set must be loaded into memory. Because of this, an Indexable training set must be used.

Despite these limitations, the LMA training technique can be a very effective training method.

References:
- http://www-alg.ist.hokudai.ac.jp/~jan/alpha.pdf
- http://www.inference.phy.cam.ac.uk/mackay/Bayes_FAQ.html
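To illustrate the lambda-damping scheme that the SCALE_LAMBDA and LAMBDA_MAX fields control, the following self-contained sketch applies the same accept/reject rule to a one-parameter least-squares fit. This is not Encog code: the class name, the constant values, and the stopping rule are assumptions made for the example.

```java
// Illustrative sketch (not Encog source): Levenberg-Marquardt lambda damping
// on a one-parameter fit of y = a * x.
public class LmSketch {

    static final double SCALE_LAMBDA = 10.0; // assumed scale factor
    static final double LAMBDA_MAX = 1e25;   // assumed upper bound

    /** Sum of squared residuals for slope a. */
    static double sse(double a, double[] x, double[] y) {
        double s = 0;
        for (int i = 0; i < x.length; i++) {
            double e = y[i] - a * x[i];
            s += e * e;
        }
        return s;
    }

    /** Fit the slope a of y = a*x using damped Gauss-Newton steps. */
    static double fitSlope(double[] x, double[] y) {
        double a = 0.0;
        double lambda = 0.1;
        double err = sse(a, x, y);
        for (int iter = 0; iter < 100 && lambda <= LAMBDA_MAX; iter++) {
            // For y = a*x the Jacobian row for element i is x[i];
            // accumulate J^T J and J^T e.
            double jtj = 0, jte = 0;
            for (int i = 0; i < x.length; i++) {
                jtj += x[i] * x[i];
                jte += x[i] * (y[i] - a * x[i]);
            }
            // Damped step: (J^T J + lambda) * delta = J^T e.
            double delta = jte / (jtj + lambda);
            double trialErr = sse(a + delta, x, y);
            if (trialErr < err) {           // accept: relax the damping
                a += delta;
                err = trialErr;
                lambda /= SCALE_LAMBDA;
            } else {                        // reject: increase the damping
                lambda *= SCALE_LAMBDA;
            }
        }
        return a;
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4};
        double[] y = {2, 4, 6, 8};
        System.out.println(fitSlope(x, y)); // converges near 2.0
    }
}
```

The accept/reject rule is the essence of LMA: a small lambda behaves like Gauss-Newton, a large lambda like gradient descent, and the scale factor moves between the two regimes.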


Field Summary
static double LAMBDA_MAX
          The maximum value allowed for lambda.
static double SCALE_LAMBDA
          The factor by which lambda is scaled up or down.
 
Constructor Summary
LevenbergMarquardtTraining(BasicNetwork network, NeuralDataSet training)
          Construct the LMA object.
 
Method Summary
 void calculateHessian(double[][] jacobian, double[] errors)
          Calculate the Hessian matrix.
 BasicNetwork getNetwork()
          Get the current best network from the training.
 boolean isUseBayesianRegularization()
          Determine if Bayesian regularization is in use.
 void iteration()
          Perform one iteration.
 void setUseBayesianRegularization(boolean useBayesianRegularization)
          Set if Bayesian regularization should be used.
static double trace(double[][] m)
          Return the sum of the diagonal.
 double updateWeights()
          Update the weights.
 
Methods inherited from class org.encog.neural.networks.training.BasicTraining
addStrategy, finishTraining, getCloud, getError, getIteration, getStrategies, getTraining, isTrainingDone, iteration, postIteration, preIteration, setCloud, setError, setIteration, setTraining
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

SCALE_LAMBDA

public static final double SCALE_LAMBDA
The factor by which lambda is scaled up or down.

See Also:
Constant Field Values

LAMBDA_MAX

public static final double LAMBDA_MAX
The maximum value allowed for lambda.

See Also:
Constant Field Values
Constructor Detail

LevenbergMarquardtTraining

public LevenbergMarquardtTraining(BasicNetwork network,
                                  NeuralDataSet training)
Construct the LMA object.

Parameters:
network - The network to train. Must have a single output neuron.
training - The training data to use. Must be indexable.
Method Detail

trace

public static double trace(double[][] m)
Return the sum of the diagonal.

Parameters:
m - The matrix to sum.
Returns:
The trace of the matrix.
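A standalone sketch of the documented behavior, summing the main diagonal of a square matrix (the class name is illustrative, not part of Encog):

```java
public class TraceDemo {
    /** Sum of the main-diagonal entries of a square matrix. */
    static double trace(double[][] m) {
        double sum = 0;
        for (int i = 0; i < m.length; i++) {
            sum += m[i][i];
        }
        return sum;
    }

    public static void main(String[] args) {
        double[][] m = {{1, 2}, {3, 4}};
        System.out.println(trace(m)); // 1 + 4 = 5.0
    }
}
```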

calculateHessian

public void calculateHessian(double[][] jacobian,
                             double[] errors)
Calculate the Hessian matrix.

Parameters:
jacobian - The Jacobian matrix.
errors - The errors.
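In LMA the Hessian is not computed exactly; it is approximated from the Jacobian as H ≈ J^T J (with the lambda damping added to the diagonal when solving for the step). The standalone sketch below shows that Gauss-Newton approximation; whether Encog applies an additional scaling factor or folds the damping into this method is an assumption not confirmed by this page.

```java
public class HessianDemo {
    /** Gauss-Newton approximation H = J^T J of the Hessian. */
    static double[][] approximateHessian(double[][] jacobian) {
        int rows = jacobian.length;      // one row per training element
        int cols = jacobian[0].length;   // one column per weight
        double[][] h = new double[cols][cols];
        for (int i = 0; i < cols; i++) {
            for (int j = 0; j < cols; j++) {
                double s = 0;
                for (int k = 0; k < rows; k++) {
                    s += jacobian[k][i] * jacobian[k][j];
                }
                h[i][j] = s;
            }
        }
        return h;
    }

    public static void main(String[] args) {
        double[][] j = {{1, 2}, {3, 4}};
        double[][] h = approximateHessian(j);
        // J^T J = [[10, 14], [14, 20]]
        System.out.println(h[0][0] + " " + h[0][1] + " " + h[1][1]);
    }
}
```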

getNetwork

public BasicNetwork getNetwork()
Description copied from interface: Train
Get the current best network from the training.

Returns:
The trained network.

isUseBayesianRegularization

public boolean isUseBayesianRegularization()
Determine if Bayesian regularization is in use.

Returns:
True, if Bayesian regularization is used.

iteration

public void iteration()
Perform one iteration.
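Assuming this implementation follows the usual LMA formulation (an assumption about Encog's internals, not confirmed by this page), each iteration computes a damped Gauss-Newton step for the weights:

```latex
\Delta w = \left( J^{\top} J + \lambda I \right)^{-1} J^{\top} e
```

where J is the Jacobian, e the error vector, and lambda the damping factor: lambda is multiplied by SCALE_LAMBDA when an iteration fails to reduce the error and divided by it when the error improves, up to LAMBDA_MAX.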


setUseBayesianRegularization

public void setUseBayesianRegularization(boolean useBayesianRegularization)
Set if Bayesian regularization should be used.

Parameters:
useBayesianRegularization - True to use Bayesian regularization.
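When Bayesian regularization is enabled, the training objective is modified as described in the MacKay reference cited in the class description. Assuming the standard formulation, the minimized quantity becomes:

```latex
F = \beta E_D + \alpha E_W
```

where E_D is the sum-squared error over the data, E_W the sum of the squared weights (the quantity updateWeights() returns), and alpha and beta are hyperparameters re-estimated during training.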

updateWeights

public double updateWeights()
Update the weights.

Returns:
The sum of the squared weights.


Copyright © 2011. All Rights Reserved.