
2020.XI.
BME ETDK: 1st place (incl. OTDK access right) with a study supported by the MY-X team!

2020.XI.
Two BProf students (1st semester) won 1st and 2nd place (ETDK) + OTDK access rights!

2020.III-VI.
QuILT: distance education with avatars in English!

2020.VI.
More and more high-quality studies in English from students of the international courses!

2019.IX.
The Liebig rule is now applicable in traffic!

2018.IX.
Kecskemét conference on innovation: presentation of the 2DM game!

2018.VI.
MMO 2018 conference: best presentation of the session (award for the 2DM game) + best publication of the session (award for the Rosling animations)!



The journal MIAU has been operating for 20+ years as a kind of public service!

Autonomous, driverless, self-driving, robotic cars, or a kind of quality manager for traffic systems

Leading article: 2018. October (MIAU No. 242.)
(Previous article: MIAU No. 241.)

Keywords: Highway Code, robot judge, robot expert

Abstract: The key message of the paper is a re-interpretation of the Turing test. The classic approach says a robot is intelligent if humans are not capable of distinguishing between the behavior patterns of humans and robots. Where the quality of an artificial intelligence is above a given (e.g. legal) threshold (e.g. robotic cars, parking assistants, ABS, etc.), a re-interpretation of the Turing test becomes rational, as follows: human behavior in a traffic situation should be seen as correct if a robotic car would follow the same pattern or support the same decision. This reinterpretation is valid both for dynamic cases, like the priority of driving away, and for static situations, e.g. detecting a parking slot. If human individuals (heads of experiments or laypeople) disagree with the decision of a robotic car in a single given situation, but the robotic car mostly derives acceptable decisions, then it is not trivial that the source code of the car is wrong. The parallel (preferred) suspicion should rather be that the traffic signs are inconsistent. Therefore, the specific reactions of robotic cars can be used as a kind of quality-management input for suspicion generation. In other words: if a robotic car derives different decisions depending on how it approaches a given parking slot, then the traffic signs are not consistent enough.
More (DOC) *** More (PDF)
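
The parking-slot argument lends itself to a short illustration. The following Python sketch is not from the paper; it is a minimal, assumed model in which a toy decision module (decide_parking) sees only the signs facing its approach direction, and a quality-management step (suspicion_report) flags the signage as suspicious when the verdicts from different approaches disagree. All names, the sign set, and the four approach directions are hypothetical.

from typing import Dict, List

# Hypothetical signs visible around one parking slot; in reality this
# would come from the car's perception stack (assumed data, not from the paper).
SLOT_SIGNS: Dict[str, List[str]] = {
    "north": ["parking_allowed"],
    "south": ["no_stopping"],   # deliberately contradictory signage
    "east":  ["parking_allowed"],
    "west":  ["parking_allowed"],
}

def decide_parking(approach: str) -> bool:
    # Toy stand-in for a robotic car's rule engine: the car only sees
    # the signs facing its approach direction.
    return "parking_allowed" in SLOT_SIGNS[approach]

def suspicion_report(approaches: List[str]) -> str:
    # Quality-management step: disagreement across approaches generates
    # a suspicion about the signage, not about the car's source code.
    verdicts = {a: decide_parking(a) for a in approaches}
    if len(set(verdicts.values())) > 1:
        return "SUSPICION: inconsistent traffic signs, verdicts = %s" % verdicts
    return "Signs seem consistent, verdicts = %s" % verdicts

print(suspicion_report(["north", "south", "east", "west"]))

Running the sketch with the contradictory south-facing sign prints a suspicion report, mirroring the paper's claim: when a robot's decisions diverge across approaches, the first suspect should be the signs, not the code.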


Please send your comments by email!
