
Young members of the MY-X team cooperate successfully with PhD students in the framework of the conference in Szarvas!

The MY-X team (incl. young members) participated in the conference 'In memoriam Enyedi György'!

Bárdy-Péter-Prize for a member of the MY-X team in recognition of his innovation potential!

Gábor-Dénes-Prize (special prize for mathematics) for a member of the MY-X team!

Special prize in Minsk, in the framework of an essay competition, for a member of the MY-X team for a study about reform approaches in education!

Third prize in the Hlavay competition for a member of the MY-X team!

NTP-NFTÖ financial support for a member of the MY-X team!

Successful presentations at the TUDOK competition in Nyíregyháza and later in Székesfehérvár by a member of the MY-X team!

The journal MIAU has been operating for 20+ years as a kind of public service!

Autonomous, driverless, self-driving, robotic cars, or a kind of quality manager for traffic systems

Leading article: 2018. October (MIAU No. 242.)
(Previous article: MIAU No. 241.)

Keywords: Highway Code, robot judge, robot expert

The key message of the paper is a reinterpretation of the definition of the Turing test. The classic approach says: a robot is intelligent if humans are not capable of distinguishing between the behavior patterns of humans and robots. Where the quality of the artificial intelligence is above a given (e.g. legal) threshold (e.g. robotic cars, parking assistants, ABS, etc.), a reinterpretation of the Turing test becomes rational, as follows: human behavior in traffic situations should be seen as correct if the robotic cars would practice the same pattern or support the same decision. This reinterpretation is valid both in dynamic cases, like the priority of driving away, and in static situations, e.g. detecting a parking slot. If human individuals (heads of experiments or common people) do not agree with the decision of a robotic car in a given/single situation, but the robotic car derives mostly acceptable decisions, then it is not trivial that the source code of the car is wrong. The parallel (preferred) suspicion should rather be that the traffic signs are inconsistent. Therefore, the specific reactions of robotic cars can be used as a kind of quality-management input for suspicion generation. In other words: if a robotic car derives different decisions depending on how it approaches a given parking slot, then the traffic signs are not consistent enough. More (DOC) *** More (PDF)
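A minimal sketch of this suspicion-generation logic, under the assumption that the robotic car's judgment can be queried as a decision function: ask for the parking decision on one and the same slot from several approach directions, and raise a signage suspicion whenever the answers disagree. All names below (decide_parking, toy_decider, the approach labels) are hypothetical illustrations, not an interface from the paper.

# Hypothetical sketch (Python): robotic-car decisions as quality-management
# input for traffic signs. decide_parking and toy_decider are illustrative
# assumptions, not code from the paper.
from typing import Callable, Iterable

def signage_suspicion(decide_parking: Callable[[str, str], bool],
                      slot_id: str,
                      approaches: Iterable[str]) -> bool:
    """True if the car's decisions for one slot disagree across approach
    directions, i.e. the signage of the slot deserves a consistency check."""
    decisions = {decide_parking(slot_id, approach) for approach in approaches}
    return len(decisions) > 1  # conflicting answers -> inconsistent signs

# Toy decision logic with a built-in inconsistency: parking is judged
# allowed only when the slot is approached from the north.
def toy_decider(slot_id: str, approach: str) -> bool:
    return approach == "north"

if __name__ == "__main__":
    print(signage_suspicion(toy_decider, "slot-42", ["north", "south", "east"]))
    # -> True: the same slot yields different decisions, so the traffic
    #    signs around it should be suspected of being inconsistent.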

Please send your comments by email!