The use of graphics processing units (GPUs), which can highly parallelise computational tasks, has brought the field of neural networks and its corresponding machine learning techniques – a field that has now been around for almost half a century – back into focus.

Several architectures, such as convolutional neural networks (CNNs) and feed-forward neural networks, make it possible to estimate a model function F that maps an input value x to a given output y. The input could, for example, be a stack of blurred images coming from a microscope, whereas the output comes from deblurred or ground-truth data. A learning procedure, such as the so-called back-propagation method, tries to minimise a given cost (error) function – which could, for instance, be the RMS error between the target data y and the data y’ = F(x) propagated through the network. By varying the model parameters, or weights, inside the network, the error is reduced.
A neural network is therefore able to approximate any given function within a given boundary, without knowing the function explicitly.
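The procedure described above can be sketched in a few lines of code. The following is a minimal, hypothetical example (not taken from any particular framework): a one-hidden-layer feed-forward network trained by back-propagation and gradient descent to minimise the mean-squared error between its output y’ = F(x) and a target function, here chosen as y = sin(x) purely for illustration. All names and parameter values are assumptions for the sketch.

```python
import numpy as np

# Training data: input x and target output y (here y = sin(x), an
# illustrative stand-in for e.g. blurred vs. ground-truth image data).
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(x)

# Model parameters (weights) of a small feed-forward network: 1 -> 16 -> 1.
W1 = rng.normal(0.0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.05  # learning-rate, an assumed hyperparameter
for step in range(3000):
    # Forward pass: propagate x through the network to get y' = F(x).
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2

    # Cost/error function: mean-squared error between y' and y.
    err = y_hat - y
    loss = np.mean(err ** 2)

    # Backward pass (back-propagation): gradients of the loss
    # with respect to every weight, via the chain rule.
    g_out = 2.0 * err / len(x)
    gW2 = h.T @ g_out;  gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)   # derivative of tanh
    gW1 = x.T @ g_h;    gb1 = g_h.sum(axis=0)

    # Vary the weights along the negative gradient to reduce the error.
    W2 -= lr * gW2;  b2 -= lr * gb2
    W1 -= lr * gW1;  b1 -= lr * gb1

print(f"final MSE: {loss:.4f}")
```

After training, the loss is far below its initial value (around 0.5 for untrained weights), showing that the network has approximated sin(x) on the chosen interval without ever being given the function itself.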


The field of so-called “Big Data” tries to find relationships and correlations in a given dataset.
