Interpretation generator: ways to interpret estimation errors
The following list offers guidelines for interpreting the numeric results of the analyses:
- Overfitting: If, after running the model, the facts and the estimations turn out to be exactly equal, the interpretation depends on the data content: it is either overfitting (when the otherwise consistent learning pattern can be described perfectly by the polynomial ceteris paribus figures) or a genuinely recognized (simulated) relation (e.g. the unscrambling of the sultan's hidden function in the fable, bearing in mind that the output depends only on these inputs).
- Lack of 'information': If the facts and the estimations differ after running the model, several interpretations are possible: the learning sample may be inconsistent (e.g. at least two objects have identical attribute values but different Y-values), essential attributes may be missing from the input signal (cf. stochastic model), or the differences between estimation and fact may reveal the values of one or more missing nominal-scale attributes (cf. the brand and location of a real estate).
- Lack of balance: If it can be assumed (based on the given learning pattern, e.g. a price/performance analysis, or arbitrary descriptive statistics) that each estimation error has a specific cause, then the differences between the facts and the estimations can be interpreted as balance losses (cf. under- or overrating): e.g. is the interest rate too high in light of the premises and conditions?
- Brand value: If we want to express, in some unit of measurement (e.g. money), the values of states that differ in quality, then the difference between estimation and fact is the brand value itself, or the value of a missing attribute (or of an input that allows arbitrary steps, in case of zero model error). If more than one datum (or its absence) is suspected to fit a nominal scale (e.g. the brand or color of cars), then the estimation error shows their combined effect (resultant).
- Y0-error: If there is no real Y in a similarity analysis model (cf. Y0-model) and the estimations and the facts are equal, then there is no need for any discrimination between the objects (e.g. inequalities, decathlon victory). If, in a Y0-model, the estimations and the facts differ, then it depends on the content of the objects and attributes whether we can speak of good, average, or bad objects or groups.
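As an illustration, the consistency check and the residual-based readings above can be sketched in a few lines of Python. All data, names, and estimation values below are hypothetical stand-ins for a real learning sample and a real model's output:

```python
# Hypothetical learning sample: attribute vectors (X) and observed values (Y).
objects = {
    "flat_A": ((3, 60, 1), 90_000),   # (rooms, m2, has_garage), price
    "flat_B": ((3, 60, 1), 120_000),  # same attributes, different Y -> inconsistent
    "flat_C": ((2, 45, 0), 70_000),
}

# 1) Inconsistency check ("lack of information"): objects with identical
#    attribute vectors must not have different Y-values.
seen = {}
inconsistent = []
for name, (x, y) in objects.items():
    if x in seen and seen[x][1] != y:
        inconsistent.append((seen[x][0], name))
    else:
        seen.setdefault(x, (name, y))
print("inconsistent pairs:", inconsistent)

# 2) Residual-based reading (balance loss / brand value): fact - estimation.
#    The estimations here are placeholders for any model's output.
estimations = {"flat_A": 100_000, "flat_B": 100_000, "flat_C": 70_000}
for name, (_, fact) in objects.items():
    residual = fact - estimations[name]
    if residual > 0:
        verdict = "underrated (fact above estimation, e.g. hidden brand value)"
    elif residual < 0:
        verdict = "overrated (fact below estimation)"
    else:
        verdict = "exact match (overfitting or a recognized relation)"
    print(f"{name}: residual={residual:+} -> {verdict}")

# 3) Y0-style view: with a constant Y for every object, non-zero estimation
#    differences rank the objects as better or worse than the common baseline.
y0 = 100_000
ranking = sorted(estimations, key=lambda n: estimations[n] - y0, reverse=True)
print("Y0 ranking:", ranking)
```

The residual sign convention (fact minus estimation) is one possible choice; the bullet points above apply equally if the opposite convention is used, as long as under- and overrating are relabeled accordingly.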
Don't hesitate to send us your questions or suggestions by e-mail!