Struct rusty_machine::learning::nnet::NeuralNet
pub struct NeuralNet<'a, T, A> where T: Criterion, A: OptimAlgorithm<BaseNeuralNet<'a, T>> { /* fields omitted */ }
Neural Network Model
The Neural Network struct specifies a Criterion and a gradient descent algorithm.
Methods
impl<'a> NeuralNet<'a, BCECriterion, StochasticGD>
fn default(layer_sizes: &[usize]) -> NeuralNet<BCECriterion, StochasticGD>
Creates a neural network with the specified layer sizes.
The layer sizes slice should include the input, hidden, and output layer sizes.
Uses the default settings: stochastic gradient descent and the sigmoid activation function.
Examples
use rusty_machine::learning::nnet::NeuralNet;

// Create a neural net with 4 layers, 3 neurons in each.
let layers = &[3; 4];
let mut net = NeuralNet::default(layers);
impl<'a, T, A> NeuralNet<'a, T, A> where T: Criterion, A: OptimAlgorithm<BaseNeuralNet<'a, T>>
fn new(layer_sizes: &'a [usize], criterion: T, alg: A) -> NeuralNet<'a, T, A>
Creates a new neural network with the specified layer sizes.
The layer sizes slice should include the input, hidden, and output layer sizes.
The cost criterion and the gradient descent algorithm are supplied explicitly via the `criterion` and `alg` arguments.
Examples
use rusty_machine::learning::nnet::BCECriterion;
use rusty_machine::learning::nnet::NeuralNet;
use rusty_machine::learning::optim::grad_desc::StochasticGD;

// Create a neural net with 4 layers, 3 neurons in each.
let layers = &[3; 4];
let mut net = NeuralNet::new(layers, BCECriterion::default(), StochasticGD::default());
fn get_net_weights(&self, idx: usize) -> MatrixSlice<f64>
Gets the matrix of weights between the specified layer and the next layer.
Examples
use rusty_machine::linalg::BaseMatrix;
use rusty_machine::learning::nnet::NeuralNet;

// Create a neural net with 4 layers, 3 neurons in each.
let layers = &[3; 4];
let mut net = NeuralNet::default(layers);

let w = &net.get_net_weights(2);

// We add a bias term to the weight matrix
assert_eq!(w.rows(), 4);
assert_eq!(w.cols(), 3);
Trait Implementations
impl<'a, T: Debug, A: Debug> Debug for NeuralNet<'a, T, A> where T: Criterion, A: OptimAlgorithm<BaseNeuralNet<'a, T>>
impl<'a, T, A> SupModel<Matrix<f64>, Matrix<f64>> for NeuralNet<'a, T, A> where T: Criterion, A: OptimAlgorithm<BaseNeuralNet<'a, T>>
Supervised learning for the Neural Network.
The model is trained using back propagation.