
Thought experiments II: PhD-Pyramid and automation

Leading article: 2009. June (MIAU No. 130.) - Pitlik László (MIAU) -
(Previous article: MIAU No. 129.) --- (Next article: MIAU No. 131.)

source: http://www.deviantart.com/download/53031084/Animated_Blank_Rubik__s_Cube_by_zavaboy.gif

We may start the story of June with the assumption (which immediately reveals why the animated cube is here) that hopefully everyone knows Rubik's Cube.
Presumably there are some people who cannot solve it, others who learned how to solve it from someone else, and those who figured out for themselves how to solve it...
Who can solve it faster (with fewer steps): those who learned all the known tricks, or those who figured out certain tricks themselves?
The answer is not simple: somebody had to figure out the first strategies, so s/he was the fastest solver at the time. After that, a competition developed among the strategies, and then there was possibly a phase when those who had learned many of the strategies became faster than the inventors of the strategies, because they found the ideal combination of them. We may regard this as a transformation of quantity into quality.
Since (if true) no one has ever derived the minimum number of steps required to bring the cube into a given state, nor where the maximum lies (viz. how many steps are needed to reach the most jumbled state), the competition between 'strategy inventors' and 'strategy users' for the title of the fastest remains open...
Whether or not a rule system exists for defining the fewest steps required (and the steps themselves), robots that identify the 3*3 colors of the 6 facets of the cube (i.e. 54 input values) and apply the well-known strategies without further human intervention can be created with relative ease. So the knowledge handed down is AUTOMATABLE!
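To make the 54-input claim tangible, here is a minimal sketch (assuming Python; all names are hypothetical) of how such a robot might represent its input and plug in solving strategies. The computer-vision and strategy parts are only stubbed:

# Minimal sketch: the cube as 54 facet colors (6 faces x 9 stickers).
# All names are illustrative; a real robot would fill read_facets()
# from its camera instead of the hard-coded solved state below.

FACES = ["U", "D", "L", "R", "F", "B"]  # up, down, left, right, front, back

def read_facets():
    """Stand-in for the camera: returns 54 color codes, 9 per face."""
    return {face: [face] * 9 for face in FACES}  # a solved cube

def is_solved(cube):
    """A face is done when all 9 stickers share one color."""
    return all(len(set(stickers)) == 1 for stickers in cube.values())

def solve(cube, strategies):
    """Apply pluggable strategies (the automatable knowledge) in turn."""
    steps = []
    for strategy in strategies:
        steps += strategy(cube)   # each strategy returns a list of moves
        if is_solved(cube):
            break
    return steps

cube = read_facets()
print(is_solved(cube), len(sum(cube.values(), [])))  # True 54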
The first robot of this kind is comparable to the appearance of the first strategy, while each further one, following the utilitarian logic of building upon one another (viz. the PYRAMID principle), has to offer more: in this case, fewer steps...
Should it not be the same when we think about how PhD theses are built on each other? Indeed, the interim works of students (and the connection systems of their subjects), the theses, the conference papers, the PhD dissertations, and finally the research projects that initiate them should function as elements of one building complex:

A (by all means) interdisciplinary dissertation (URL: http://phd.okm.gov.hu/) has just been accepted in the frame of SZIE GTK TATA and SZIE GSZDI. The topic of the thesis was the analysis of the utility of ERP systems (cf. Szalay 2009).
The thesis of course has forerunners, but in this case, as an internal benchmark, they are unimportant. What is important is to know how this legitimized version of best practice can be used as a signal fire for the upcoming PhD dissertations!

The essential elements of the dissertation in question are listed below:
A.) significance testing of the efficiency effects of introducing an ERP system at a company,
B.) development of the conceptual and calculation framework of utility,
C.) checking the interpretability of the calculator through questionnaires.

The relevant details of the vision outlined by the thesis and its reviews:
D.) extension of the significance tests in time/space/branch etc.,
E.) forecasting the impact of an ERP introduction (e.g. via an expert system).

Before we set out to analyze KIR (the external information system) in a similar way (cf. Pető, 2009), building on the interpretations developed for ERP (as an internal information system), we should prove the potential for improvement in the methodological fields below:

1/a.) How can the impact of treatments be revealed without experiments, viz. what preconditions must be met in order to use certain statistical tests, and, if they can be used, what kind of results (interpretable via templates) do they lead to once the data rows and their names are given (cf. points A & D)?
1/b.) How does the order of significance testing change if, alongside the (primary) facts, we include model-based estimations that support the detection of the searched impacts (cf. point B)?
1/c.) Can a multivariate income production function (Y = e.g. ROI or any other income-style farm indicator, and X = resources such as animals, land, people, machines, etc., plus immaterial goods, consulting costs, communication costs, etc.) be interpreted as an ERP/KIR impact indicator when the classic statistical methods detect no significance? Such production functions can be developed in the frame of similarity analysis, which can reveal impacts provided the indirect KIR variables do not drop out as noise. Put another way: can multivariate models be considered a verification methodology that reveals impacts at the attribute (and staircase) level and makes basically non-interpretable significance tests obsolete (cf. points A, B)? (A minimal sketch of this logic follows the list below.)
2.) Can all questionnaires be evaluated? Viz. can fake and coherent answers be distinguished from each other (cf. point C)?
3.) Demonstration of the modules of an ideal KIR, revelation of their terms of use (cf. expert systems), and demonstration of their expected utility (cf. point E).
4.) Can the objects (observed as anonymous FADN farms) cause inconsistent situations in the case of parallel significance tests, through their partial views by time/space/legal status/activity etc.? (a novel problem)
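As a purely illustrative sketch of the 1/c logic (not the MY-X similarity analysis itself, whose staircase models are not specified here), a plain multivariate regression can stand in: compare the explanatory power of the income function with and without the ERP/KIR variable. All data below are synthetic:

# Sketch of the 1/c logic with a stand-in method: does adding an ERP/KIR
# variable to a multivariate income function improve the model, even
# where a classic two-group significance test might see only noise?
import numpy as np

rng = np.random.default_rng(0)
n = 200
land    = rng.uniform(10, 100, n)              # resources (X): invented units
labor   = rng.uniform(1, 20, n)
erp_use = rng.integers(0, 2, n).astype(float)  # 1 = ERP/KIR introduced
roi = 0.5 * land + 2.0 * labor + 3.0 * erp_use + rng.normal(0, 8, n)  # Y

def r2(X, y):
    """R^2 of an ordinary least-squares fit (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_without = r2(np.column_stack([land, labor]), roi)
r2_with    = r2(np.column_stack([land, labor, erp_use]), roi)
print(f"R^2 without ERP variable: {r2_without:.3f}")
print(f"R^2 with    ERP variable: {r2_with:.3f}")
# If the gain is systematic across farms, the production function acts
# as an impact indicator even without a controlled experiment.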

If we take an attentive look at the relation between the numbered eruption points and the letter-marked best-practice/vision elements, it becomes evident that the first effective solution to a problem may always be credited as a merit to its author.
However, as soon as another author accepts an already existing conceptual order as a basis (a level of a pyramid can only be built on the previous level), the question of effectiveness immediately becomes a question of efficiency. Quantitative questions always raise qualitative challenges:

- an automated and consistency-ensuring system of the existing examination methods vs. using one of the potential significance tests,
- a consistent view of parallel subsystems vs. a single classic object/treatment view,
- preliminary revelation of RND-like answers in questionnaires vs. answer analyses without any self-control mechanism (a minimal sketch follows this list),
- working out new tracks (possible calculation methods for optimized models) vs. calculating in the known way (cf. MALINFO), and finally:
- substitution of significance tests with multivariate models in order to reveal the impact (levels) of treatments without experiments.
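The self-control mechanism of the third bullet can be hinted at with a small sketch, assuming a hypothetical questionnaire built from question/control pairs on a 1..5 scale (the control item being the reversed restatement of the question), where random clicking betrays itself through low intra-respondent consistency:

# Sketch: flagging RND-like questionnaire answers via control pairs.
# Assumption: items come in pairs where the second is the reversed
# restatement of the first on a 1..5 scale, so coherent respondents
# should satisfy a + b ~ 6 for each pair.
import random

random.seed(1)
PAIRS = 10  # hypothetical questionnaire: 10 question/control pairs

def coherent_respondent():
    answers = []
    for _ in range(PAIRS):
        a = random.randint(1, 5)
        answers.append((a, 6 - a))        # consistent with the reversal
    return answers

def random_respondent():
    return [(random.randint(1, 5), random.randint(1, 5)) for _ in range(PAIRS)]

def inconsistency(answers):
    """Mean absolute deviation from the expected a + b = 6 pattern."""
    return sum(abs(a + b - 6) for a, b in answers) / len(answers)

for name, resp in [("coherent", coherent_respondent()),
                   ("random", random_respondent())]:
    score = inconsistency(resp)
    print(name, round(score, 2), "-> suspect" if score > 1.0 else "-> keep")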

One may ask, regarding the logical parallelism of ERP and KIR, how it can be ensured that the quasi-optional interdisciplinary analysis topics build on each other (cf. logical landscape architecture)?

To find the answer, let us put a new PhD topic running in parallel under the magnifying glass: agricultural sector modeling.
In the frame of agricultural sector modeling, exogenous expert opinions are collected about the expected yields (over a long horizon and in spatial detail), and these determine the prices of the different products (balanced against the global market models).

Based on these data and further system rules (such as feeding principles), the model looks for the optimal activity sizes at which the net income of agriculture is at its maximum. The result is thus the 'ideal' production structure. (A toy version of this optimization step is sketched below.)
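As a toy illustration of this optimization step (not the actual sector model), the search for the production structure can be written as a linear program: maximize net income over activity sizes under resource and feeding constraints. All coefficients below are invented:

# Toy linear program in the spirit of the sector model: choose activity
# sizes (wheat, maize, dairy) maximizing net income under a land and a
# feeding constraint. All numbers are invented for illustration.
from scipy.optimize import linprog

# Net income per unit of activity (linprog minimizes, hence the minus).
income = [-300.0, -250.0, -900.0]          # wheat, maize, dairy cow

A_ub = [
    [1.0, 1.0, 0.5],    # land use per unit of activity <= 100 ha
    [0.0, -1.0, 1.5],   # feeding principle: each cow needs 1.5 units of maize
]
b_ub = [100.0, 0.0]

res = linprog(c=income, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
wheat, maize, dairy = res.x
print(f"optimal structure: wheat={wheat:.1f}, maize={maize:.1f}, cows={dairy:.1f}")
print(f"maximal net income: {-res.fun:.0f}")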
How would the creator of the sector model monitor himself if he constantly works with fictitious frame conditions (the long-term uniformity of the subsidy systems, plus an optional policy scenario) and finally publishes only the differences in effect on the production structure between these parallel scenarios?
As can be felt: self-monitoring is impossible, viz. the result is the resultant of many seemingly clever partial relationships in the shadow of Vonnegut.
If this is the best practice, then where and how could it be exceeded (keeping the aforementioned PhD topics in mind):

Perhaps it is not so difficult to see that the estimation of the exogenous variables is the first point of intervention, because the ad-hoc opinions of the experts cannot be verified even through the iterations. So the expected degree of multi-year yield variations can/must be derived from previous analogous variations via similarity analysis (cf. the IDARA report).
The estimation of world market prices can likewise be substituted by analyses of parallel, consistency-based price trends (cf. stock exchange).
Besides, the forecast of WEATHER cannot be circumvented, as it has a remarkable effect on the agricultural market, especially the question of year types: for example, when will the weather be rainy in a given region? Put another way: the objective is to create a production function for yield estimation. (A nearest-neighbour stand-in is sketched below.)
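A minimal sketch of what 'deriving from previous analogous variations' could mean (a nearest-neighbour stand-in, not the IDARA methodology itself): estimate the yield deviation of a year type from the most similar past years. All figures are invented:

# Sketch: estimating a yield deviation from analogous past years.
# Stand-in for similarity analysis: plain nearest neighbours on a
# hypothetical (rainfall_mm, heat_days) description of the year type.
past_years = {
    # (rainfall_mm, heat_days): yield deviation from trend, t/ha
    (420, 30): -0.4,
    (610, 12): +0.6,
    (380, 35): -0.7,
    (550, 18): +0.3,
    (470, 25): -0.1,
}

def estimate(year, k=2):
    """Average the yield deviation of the k most similar past years."""
    def distance(profile):
        return sum((a - b) ** 2 for a, b in zip(profile, year)) ** 0.5
    nearest = sorted(past_years, key=distance)[:k]
    return sum(past_years[y] for y in nearest) / k

print(estimate((430, 28)))  # a dryish, hot year -> negative deviation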
And if we constantly feel that the aforementioned models are strongly related, then we are right. The solution is the consistent derivation of the future for each attribute that is already observed in the present!
Regarding ERP-KIR, the objective is still the ever more consistent and automatable revelation of the impacts (without experiments), which can be achieved with the frequently mentioned (multi-staircase) similarity analyses.

In addition to the unified approach at PhD level, the fitting of student theses into systems should be briefly mentioned...
The theses of those graduating from the ISZAM3 (agricultural informatics) course in 2009, the thesis topics of other courses, and the aforementioned PhD topics relate to each other as explained below:
KIR: development of agricultural 'robots' and their navigation systems (viz. automated extraction of farm-specific strategies from external databases), substitution of expert opinions by online expert systems (e.g. subsidies, licensing policies), translation support with a transmitter language (revelation of documents in unknown languages)...
Sector models: price forecasting, weather forecasting, revelation of force fields in sectors (e.g. post), making rural development definitions/strategies consistent...

Based on the thoughts above, it is no coincidence that, for the new NKTH-OTKA proposal merging innovation and research, a development plan will be submitted that can automatically examine the consistency of EU(-presidency), national, regional, corporate, and personal strategies... (Details in July!)

Please send your comments by email!
