
2020.XI.
BME ETDK: 1st place (incl. OTDK access right) with a study supported by the MY-X team!

2020.XI.
Two BPROF students (1st semester): 1st and 2nd places (ETDK) + OTDK access rights!

2020.III-VI.
QuILT: distance education with avatars in English!

2020.VI.
More and more high-quality studies in English from students of the international courses!

2019.IX.
The Liebig rule can now also be applied to traffic!

2018.IX.
Kecskemét conference on innovation: presentation of the 2DM game!

2018.VI.
MMO 2018 conference: best presentation of the session (award for 2DM) + best publication of the session (award for the Rosling animations)!



Last modified: 2015.VII.19. 14:08 - MIAÚ-RSS
The journal MIAU has been operating for 20+ years as a kind of public service!
The MATARKA-view

Approximation of anti-discriminative models based on solver-based neural networks

Leading article: 2021. July (MIAU No. 275.)
(Previous article: MIAU No. 274.)

Keywords: benchmark, similarity analysis

Abstract: In general, artificial neural networks are not derived in a solver-driven way. It is important to state that an NN model is a relatively complex function whose optimal parameter setting can be searched for in different ways (e.g. the backpropagation technique, genetic algorithms, or even solver-based approximation). The fact that neural network models can be optimized in a solver-driven way is relevant because the models can then be represented in Excel (cf. pseudo-code), where Excel is seen as a quasi-universal planning frame system. On the other hand, the paper demonstrates that neural network models can be domesticated, i.e. their hermeneutical potential can be increased in a conscious way. The example for the domestication of these wild creatures is anti-discriminative modelling as such, which can easily be handled with staircase functions, but hardly with neural networks. Anti-discriminative models expect the same (constant) Y-values for arbitrary (different) objects. If we know for certain that at least one approximation exists (based on fully interpretable staircase functions), it is a complex challenge to enforce similar functionality from a neural network. Neural networks are capable of simulating a different ceteris paribus connection around each object-attribute-value triplet. This feature is responsible for their quasi-unlimited flexibility, but it is also responsible, in parallel, for the hermeneutical fog around neural networks. The human brain is designed to assume only one form of ceteris paribus connection between Xi and Y. This assumption is not robust enough, but it is given. We can search for extreme cases (e.g. soil = sand vs. soil = best chernozem, Xi = N, Y = yield of e.g. maize, where the ceteris paribus connection can be monotonically increasing or even optimum-like).
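The solver-driven fitting of a neural network mentioned above can be sketched outside Excel as well. The following is a minimal, hypothetical Python sketch (not the paper's actual model): a tiny 1-2-1 tanh network whose parameters are tuned by a derivative-free hill-climbing search, standing in for a generic solver such as Excel's, instead of backpropagation. All data and parameter names are illustrative assumptions.

```python
import math
import random

# A tiny neural network: 1 input, 2 hidden tanh neurons, 1 linear output.
def nn(params, x):
    w1, b1, w2, b2, v1, v2, c = params
    h1 = math.tanh(w1 * x + b1)
    h2 = math.tanh(w2 * x + b2)
    return v1 * h1 + v2 * h2 + c

# Sum of squared errors: the target cell a solver would minimise.
def sse(params, data):
    return sum((nn(params, x) - y) ** 2 for x, y in data)

# Hypothetical toy data with a staircase-like shape.
data = [(0.0, 1.0), (1.0, 1.0), (2.0, 2.0), (3.0, 2.0), (4.0, 3.0)]

# Solver-style search: gradient-free hill climbing with a decaying step,
# a stand-in for a generic solver (no backpropagation involved).
random.seed(0)
best = [random.uniform(-1, 1) for _ in range(7)]
best_err = sse(best, data)
step = 0.5
for _ in range(20000):
    cand = [p + random.gauss(0, step) for p in best]
    err = sse(cand, data)
    if err < best_err:
        best, best_err = cand, err
    step = max(0.01, step * 0.9999)

print(round(best_err, 3))
```

The point is not efficiency: any optimiser that can drive a single error cell downward can, in principle, fit the same model, which is exactly why an Excel representation of the network is workable.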
On the other hand, there are still no methods available to prove the consistency among an arbitrary number of ceteris paribus connections based on the differences between the objects (situations). More (DOC) *** More (PDF)
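The staircase-based anti-discriminative idea can be illustrated with a hypothetical toy example (an assumption for illustration, not the paper's actual model): each attribute contributes a stepwise value depending on the object's rank for that attribute, the contributions decrease monotonically with worse ranks, and the staircases are tuned so that every object receives the same constant Y, i.e. no object is discriminated.

```python
# Hypothetical toy data: 3 objects ranked on 2 attributes (rank 1 = best).
ranks = [
    (1, 3),  # object A: best on X1, worst on X2
    (2, 2),  # object B: middle on both
    (3, 1),  # object C: worst on X1, best on X2
]

# Staircase functions: a monotonically decreasing contribution per rank.
# These hand-built staircases yield the same Y for every object.
stair_x1 = {1: 30, 2: 20, 3: 10}
stair_x2 = {1: 30, 2: 20, 3: 10}

# Additive model: Y(object) = staircase_X1(rank_1) + staircase_X2(rank_2).
estimates = [stair_x1[r1] + stair_x2[r2] for r1, r2 in ranks]
print(estimates)  # each object gets Y = 40
```

Because each staircase is an explicit lookup table, the model is fully interpretable; this is exactly the property that is hard to enforce in a neural network, where each object-attribute-value triplet may induce its own ceteris paribus connection.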


Please send your comments by email!

miau.my-x.hu
myxfree.tool
rss.services