
Nth Absolute Root Mean Error
Neural network training takes a long time when the training set is large, yet without a large set of training examples the network cannot learn the relevant features. This dilemma between training time and data size is often addressed with fast GPUs, but we present a better solution for a subset of these problems. To reduce the time needed to train a regression model with a neural network, we introduce a loss function called the Nth Absolute Root Mean Error (NARME). It helps train regression models much faster than existing loss functions. Experiments show that in most use cases NARME reduces the required number of epochs to roughly one-tenth of that needed by other commonly used loss functions, while still reaching high accuracy within the short training time.
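The abstract does not spell out the NARME formula. One plausible reading of the name is to take the Nth root of each absolute error before averaging; the sketch below implements that reading in NumPy. The function name, signature, and formula are assumptions for illustration only, and the paper's exact definition may differ.

```python
import numpy as np

def narme(y_true, y_pred, n=2):
    """Nth Absolute Root Mean Error (hypothetical reading):
    the mean of the Nth roots of the absolute errors.
    This is a sketch; the paper's exact definition is not given here."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    abs_err = np.abs(y_true - y_pred)
    # Taking the Nth root (|e|^(1/n)) flattens large errors and steepens
    # the loss surface near zero error relative to MSE or MAE.
    return float(np.mean(abs_err ** (1.0 / n)))
```

Because the root compresses large residuals, the gradient near the minimum stays comparatively steep, which is one way a loss like this could speed up convergence; whether that matches the paper's mechanism would need the full text.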