Final-thesis-like publication based on previous performances (see: https://miau.my-x.hu/mediawiki/index.php?title=CT_01) Principles for editing: https://miau.my-x.hu/mediawiki/index.php/Vita:CT_00 History of the final product: https://miau.my-x.hu/mediawiki/index.php?title=CT_00&action=history History of the discussion page: https://miau.my-x.hu/mediawiki/index.php?title=Vita:CT_00&action=history
Title
Which concepts can be verified based on partial data about log-information in an e-car?
Subtitle
(or a cooperative experiment, how to create e.g. the chapter2 about literature in a final thesis)
Authors
László Pitlik (https://orcid.org/0000-0001-5819-0319), László Pitlik (Jr.) (https://orcid.org/0000-0002-8058-9577), Mátyás Pitlik (https://orcid.org/0000-0002-1991-3008)
Institutions
MY-X research team
Abstract
History of the project: Software testing as such, from the point of view of a praxis-oriented education, has to enforce real testing experiences, especially with the software used day by day in the education (e.g. https://miau.my-x.hu/miau/320/moodle_neptun_tests/, https://miau.my-x.hu/miau/320/moodle_testing/, https://miau.my-x.hu/miau/320/teams_testing/). On the other hand, it is not correct if the term testing focuses only on ergonomics and functionality in a trivial way. Therefore, specific aspects are also important: e.g. https://miau.my-x.hu/miau/320/moodle_cubes_logic/ about interpreting systems with seemingly correct functionalities and/or https://miau.my-x.hu/miau/320/moodle_webkincstar/ about legal aspects of potential damages based on testing results. Finally, testing as such approximates the challenge of concept testing (c.f. https://miau.my-x.hu/miau/320/concept_testing/), where the best concepts should be derived based on partial log-data about arbitrary systems (c.f. encryption/decryption tasks for unknown ciphers).
Own objectives and results: This publication demonstrates a case about the negotiation process of 10+ experts concerning a tricky challenge, where partial (raw and derived) log-data of an e-car could be analyzed based on three concepts. Two of them were fully correct from a mathematical point of view, and one concept was a randomized set of potentially interpretable numbers. The interpretation process had two levels: the first level made only a part of the existing data visible. On the other level, all data could be seen. In parallel to the case studies based on human intuition processes, an AI-based approach must also be interpreted by human experts. The conclusions can be seen in this publication.
Future: The creation of the publication (as a kind of side effect) will also be used in the education to demonstrate a lot of rules concerning the writing process of a final thesis. On the other hand, the main motivation is always automation: it is important that human experts are capable of solving problems in an approximative way, but it is significantly more relevant to explore how automations concerning the thinking processes of human experts can be derived.
Chapter#1. Introduction
In this chapter, it will be necessary to clarify the basic information about the project: aims/objectives, tasks, targeted groups, utilities (estimation of informational added-values), motivation, and the structure of the study.
Chapter#1.1. Aims/objectives
The title signals several relevant keywords needing at least a short definition (c.f. concepts, verification, partial log-data).
The data asset for the task-definitions can be seen here: https://miau.my-x.hu/miau/320/concept_testing/concept_testing_task_level.xlsx. The whole analytical process can be interpreted here: https://miau.my-x.hu/miau/320/concept_testing/concept_testing_v1.xlsx. There are 3 task levels (for each level there is a separate sheet "task1", "task2", "task3" - see *task_level.xlsx). The entire complexity (see *_v1.xlsx - including data and analytical steps) was a hidden file during the task period. Further files concerning solutions can be seen here: https://miau.my-x.hu/miau/320/concept_testing/?C=M;O=D.
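As a purely illustrative sketch (assuming only what is stated above, namely that the task workbook contains one sheet per level named "task1", "task2", "task3"), the visible data positions of each level could be inspected e.g. as follows:

```python
# Hypothetical helper for inspecting the task workbook; pandas and openpyxl
# are assumed to be installed, and the URL is taken from the text above.
import pandas as pd

URL = "https://miau.my-x.hu/miau/320/concept_testing/concept_testing_task_level.xlsx"

# One sheet per task level is expected ("task1", "task2", "task3").
sheets = pd.read_excel(URL, sheet_name=None)  # dict: sheet name -> DataFrame

for name, frame in sheets.items():
    # The shape shows how many data positions are visible on the given level.
    print(name, frame.shape)
```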
Based on the above-mentioned files, the expression partial data means: parts of a complex system are presented as a task in order to motivate explanations/interpretations. The situation is the same as when somebody has to report about a room based on a view through one or more keyholes.
Concepts as a keyword means: based on the raw data and further calculated data, there are 3 hidden formulas, and only the results of these hidden formulas are known within the frame of the tasks. The inputs of the tasks are only data positions without any formulas.
Verification as a keyword means: what kind of analytical steps lead to a situation where it is possible to classify concepts as potentially realistic or even potentially unrealistic.
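A minimal sketch of this verification idea (an illustration only - the column layout, the linear form and the tolerance are assumptions and are not taken from the hidden file) could look as follows: a concept is treated as potentially realistic if its known result values can be reproduced from the visible raw columns.

```python
# Illustrative sketch only: tests whether known concept values can be
# reproduced from the visible raw columns by a simple linear combination.
import numpy as np

def classify_concept(raw: np.ndarray, concept_values: np.ndarray, tol: float = 1e-6) -> str:
    """raw: (n_rows, n_visible_columns); concept_values: (n_rows,)."""
    # Append a constant column so that an intercept can also be fitted.
    X = np.column_stack([raw, np.ones(len(raw))])
    coef, *_ = np.linalg.lstsq(X, concept_values, rcond=None)
    error = np.max(np.abs(X @ coef - concept_values))
    if error < tol:
        return "potentially realistic (reproducible from the visible data)"
    return "not confirmed by the visible data (may still be realistic)"

# Toy usage with invented numbers, only to show the calling convention:
raw = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 5.0], [4.0, 4.0], [5.0, 9.0]])
print(classify_concept(raw, raw[:, 0] + 2 * raw[:, 1]))            # exact combination
print(classify_concept(raw, np.array([7.1, 0.3, 4.9, 2.2, 8.8])))  # random-like values
```

This toy check also mirrors the third output-level mentioned later: a concept that cannot be confirmed from the partial data is not automatically unrealistic.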
Based on these short definitions, the publication tries to present a case study (see the entire publication as such), where the different steps (task1, task2, task3, task4: interpretation of the hidden file) are interpreted in a detailed way.
The experiment based on the data delivered in task1 can be found in chapter#...
The experiment based on the data delivered in task2 can be found in chapter#...
The experiment based on the data delivered in task3 can be found in chapter#...
The experiment based on the data delivered in task4 can be found in chapter#...
The entire publication tries to deliver interpretation possibilities for the term "verification". Verification can be derived manually (see chapter#...) or even in an automated way (see chapter#...). The manually driven steps can contain such traps where automation becomes impossible (see chapter#...).
Summa summarum: the whole publication tries to influence the thinking methodology of the Students in order to make practical steps visible behind philosophical challenges (e.g. automation, nature/level of verification). The publication can be evaluated as understood if the Reader thinks (s)he is capable of deriving classifications concerning arbitrary concepts and capable of deciding whether a concept is rather realistic or rather unrealistic. It is also important that the Readers see the third output-level: namely, not each concept may be evaluated based on the partially given raw data (see chapter#...).
Chapter#1.2. Tasks
The aims/objectives already presented the 3+1 tasks: 3 tasks deal with concepts based on partial information. The last one (the 4th) demonstrates holistic/complete information.
Task1: Based on the partial information given, which concept (A, B, C) seems to be rational or irrational? (see chapter#...)
Task2: Based on further partial information, which concept (A, B, C) seems to be rational or irrational? (see chapter#...)
Task3: Based on further new partial information, which concept (A, B, C) seems to be rational or irrational? (see chapter#...)
Task4a: Based on holistic/complete information, which concept (A, B, C) seems to be rational or irrational? (see chapter#...) AND
Task4b: How can the most complex (most consistent) verification process be automated? (see chapter#...)
Argumentations: The newer and newer further information units try to support the understanding process concerning more and more complex verification strategies. The tasks should be solved in a step-by-step way in order to ensure didactical impacts/effects on the Students. A hedged sketch of the automation asked for in Task4b is given directly below.
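The following sketch is only one possible assumption about how Task4b could be approached: the same reproduction test is repeated for every concept and every task level, so that no manual inspection is needed. The raw columns, the concepts A/B/C and the number of visible columns per level are invented placeholders; only the loop structure is the point.

```python
# Hedged automation sketch: repeat one verification test for every concept
# and every task level (growing visible information), producing a small table.
import numpy as np

def reproduction_error(raw: np.ndarray, concept_values: np.ndarray) -> float:
    """Largest deviation of the best linear reproduction of the concept values."""
    X = np.column_stack([raw, np.ones(len(raw))])
    coef, *_ = np.linalg.lstsq(X, concept_values, rcond=None)
    return float(np.max(np.abs(X @ coef - concept_values)))

rng = np.random.default_rng(0)
full_raw = rng.normal(size=(12, 4))                 # placeholder for the raw log columns
concepts = {
    "A": full_raw[:, 0] * 2 + full_raw[:, 1],       # placeholder for a correct concept
    "B": full_raw[:, 2] - 3 * full_raw[:, 3],       # placeholder for a correct concept
    "C": rng.normal(size=12),                       # placeholder for the randomized concept
}
levels = {"task1": 2, "task2": 3, "task3": 4}       # visible columns per task level

for level, visible in levels.items():
    for name, values in concepts.items():
        err = reproduction_error(full_raw[:, :visible], values)
        verdict = "rational?" if err < 1e-6 else "not confirmed"
        print(f"{level}  concept {name}: max|error| = {err:.2e} -> {verdict}")
```

In this invented setting, concept A is confirmed already on the first level, concept B only when all columns become visible, and the randomized concept C is never confirmed, mirroring the step-by-step logic of the tasks.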
Chapter#1.3. Targeted groups
The entire challenge is a didactical challenge. The step-wise progress is the learning process as such. The methodology is based on trial-and-error effects in individuals and in groups. Therefore, the targeted groups are individuals (as Students) and groups of Students. On the other hand: each learning material is a kind of support for teachers too. Therefore, teachers are also part of the targeted groups. Affected teachers are not only teachers having the same subject (c.f. testing), but each subject can also be supported through the philosophical (context-free) aspects. Finally, institutions (the management of institutions/universities) are also a kind of targeted group, because the castles of the sciences have to apply each piece of taught knowledge in their own management processes.
Chapter#1.4. Utilities (estimation of informational added-values)
There are now 4 targeted groups: individuals as Students, groups of Students, individuals as Teachers, and managers of universities. The informational added-value is the difference between the impacts without and with the results of this project, minus the costs. In the ideal case, the project causes more positive impacts than costs compared to the benchmark where the project results are not given. Estimations have two layers: incomes and costs in the benchmark situation AND incomes and costs based on the results of the project... (later)
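A toy calculation (all figures are invented placeholders; only the structure of the comparison follows the definition above) could look like this:

```python
# Invented numbers, only illustrating the added-value comparison defined above.
impact_without_project = 100.0   # benchmark impacts (arbitrary units)
impact_with_project = 140.0      # impacts if the project results are available
project_costs = 25.0

added_value = (impact_with_project - impact_without_project) - project_costs
print(added_value)  # a positive value means the project delivers more than it costs
```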
Chapter#1.5. Motivation
This publication is an efficient case study concerning knowledge management, especially the management of testing knowledge among Students for better final theses, and in parallel, it is a real publication about a complex challenge: concept testing layers. Therefore, it is motivating to integrate two goals in one single action.
Chapter#1.6. About the structure of the publication
The publication will concern mathematical aspects (see similarity analyses), but without such a level of detail that this publication could be used for learning about the complex system of the similarities. This challenge is complex enough to be handled in another publication.
This publication tries to follow the strict pattern predefined for final theses in general, and especially for BPROF-Students. In this publication, one single expectation will not be worked out: the relationships between the subjects in the curriculum and the particular publication title. In order to have appropriate examples, please analyse the following URL: https://miau.my-x.hu/temp/2025tavasz/?C=M;O=D
The publication is just a quasi-formatted text. Only chapters are defined in a multi-layer structure. The "citations" will be written as prescribed, including the necessary sources - in this case in the form of URLs pointing to specific parts of the background documentation: e.g. https://miau.my-x.hu/mediawiki/index.php?title=CT_01 Further formats (bold, underlined, footnotes, lists, etc.) are excluded.
Chapter#2. Literature
This chapter is dedicated to all definitions which are necessary to understand the own developments and results. Here, it is important to use citations with sources, and between two citations it is expected that the Author(s) deliver argumentations about each citation: is a citation to be integrated or rather to be avoided? Relevant topics are: testing as such, proving as such, KPIs, correlations, regressions, similarity analyses, automation, ...
Chapter#2.1. Testing
"Software testing is the act of checking whether software satisfies expectations." (Source: https://en.wikipedia.org/wiki/Software_testing) This short definition is complex enough to deliver a relevant new keyword: "expectations". Before this abstraction is really involved, the term of "concept testing" should be defined. This definition may come from the Author(s), because here and now, only the goals of the Author(s) are relevant. Concepts are therefore patterns (formulas, systems, relationships, models, etc.) being seemingly capable of mirroring the connections between the known data (even they are partial from point of view of a holistic approach). "Expectations" are all measurable features being capable of monitoring the goodnees of the unknown connections. It is important: the human experts may not change the raw data if a concept seems not to be appropriate enough. Always the concepts should be changed till all raw data are covered through the mathematisms of the particular (best) concept. The problems of the arbitrariness of the human experts can be found listed in the book: Arthur Koestler, The Sleepwalkers! (more: https://en.wikipedia.org/wiki/The_Sleepwalkers:_A_History_of_Man%27s_Changing_Vision_of_the_Universe) Therefore, the goodness of the concepts let assume a scale: the one end of the scale is the set of the randomized generated concepts. The opposite end of this scale is the set of the error-free solutions (because it is possible two have alternative solutions with the same evaluation value).
Chapter#2.2. ...
Chapter#2.3. ...
Chapter#2.4. ...
Chapter#2.5. ...
Chapter#2.6. ...
Chapter#2.7. ...
Chapter#3. Own developments
...