GB2563400A - Method and process for co-simulation with virtual testing of real environments with pedestrian interaction - Google Patents

Method and process for co-simulation with virtual testing of real environments with pedestrian interaction

Info

Publication number
GB2563400A
GB2563400A (application GB1709348.5A)
Authority
GB
United Kingdom
Prior art keywords
virtual
vehicles
vehicle
environments
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1709348.5A
Other versions
GB201709348D0 (en)
Inventor
Hartmann Michael
Stolz Michael
Watzenig Daniel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kompetenzzentrum das Virtuelle Fahrzeug Forschungs GmbH
Original Assignee
Kompetenzzentrum das Virtuelle Fahrzeug Forschungs GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kompetenzzentrum das Virtuelle Fahrzeug Forschungs GmbH
Priority to GB1709348.5A
Publication of GB201709348D0
Publication of GB2563400A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00: Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/01: Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R21/013: Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents, including means for detecting collisions, impending collisions or roll-over
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30248: Vehicle exterior or interior
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00: Indexing scheme for image generation or computer graphics
    • G06T2210/21: Collision detection, intersection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mechanical Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Traffic Control Systems (AREA)

Abstract

Virtual testing of interaction between vehicles and road users (pedestrians), comprising: creating a virtual reality (VR) environment, based on a real-world environment or a new virtual environment, with vehicles and road users (pedestrians); sending details about the VR scene to a test person and a vehicle; measuring the actions of the test person in the VR environment using a motion capture system; and predicting pedestrian movement and vehicle control actuator signals based on the test person's actions, which are sent to a visualisation processing unit. The invention may test the safety algorithms of autonomous vehicles to improve collision avoidance software between driverless cars and pedestrians. Evasive manoeuvre secure motion planning and uncertainty quantification may be used, and the method may be operated in a non-sequential, multi-threading manner. The test person may be immersed in a virtual reality cage with VR glasses or in a virtual reality cave. The movements, gestures and actions of the user may be captured by the MoCap system and analysed in different real-world traffic situations in VR environments based on 3D models of real cities, with typical daily road behaviour and realistic human behaviour. Outdoor virtual testing with augmented reality (AR) glasses in real city environments is also disclosed.

Description

Method and process for co-simulation with virtual testing of real environments with pedestrian interaction
BACKGROUND OF INVENTION
In everyday life, people participate in traffic, often without being aware of it. We usually take it for granted that we will reach our destination safely. As drivers, cyclists or pedestrians, we change the nature of our participation. But in reality there are still too many fatalities on the roads of Europe and the world. Technical innovations in the field of automated driving functions have steadily reduced the number of fatalities. Nevertheless, many problems and open questions remain for automated driving. Especially in complex environments (e.g. cities) with many different road users (e.g. pedestrians, bicycles, animals ...), situations are complex and challenging for the motion planning algorithm. For this, new virtual testing procedures are presented.
Autonomous vehicles should...
• react and drive like a human
• make correct decisions
• react appropriately in various situations (e.g. reactively)
• drive safely and efficiently, also in uncertain and dynamic environments
Therefore, a technical process is invented for...
• integration of persons in virtual environments
• testability, in virtual environments, of changes of intention, perception and behavior of road users, and of their interaction
• integration of real-world environments in the virtual world
• integration of many persons in the virtual environment
• adaptation to real-world scenarios with internet queries
• integration of existing ViL, PiL, HiL and SiL
The technical effect of the invention is increased safety for (autonomous) vehicles (Level 3 and Level 4) in complex, uncertain, dynamic virtual environments with real-world pedestrian behavior.
IMPORTANT DEFINITIONS
The following definitions are intended as an aid to understanding.
City Graph (compare Block W: World): A mathematical description of the road network with nodes and edges (e.g. streets).
OpenStreetMap: Geodata with open access, developed by a large web community.
Motion planning: Search for future trajectories of the ego-vehicle depending on the believed (future) time-state space.
(Future) time-state space: Mathematical description of the future state space depending on the predictions; necessary for collision avoidance. Depending on the uncertainty representation there are several variants: deterministic state space, belief state space, plausible state space.
Uncertainty quantification (compare Block M: Uncertainty Quantification/Predictive Time-State-Space with confidence levels): In safety-related applications there are several methods to quantify the uncertainty. The uncertainty representation and propagation can be changed.
Autonomous Mode (compare Block TA: Period A and Block TE: Period E): Autonomous mode means in this document that the ego-vehicle is equipped with on-board sensors and processing units to enable a self-driving mode without external sensors from the infrastructure. Information from external resources offers the possibility to drive less conservative driving trajectories.
Situation prediction (compare Block I1...In: Machine Perception Units (e.g. different configurations)): Besides the prediction of the (human) movement (e.g. positions), there are further aspects which can be incorporated in the prediction: semantic information, personal internal stance or environment aspects.
Cloud service (compare Block E: Server (e.g. cloud service)): A cloud service which assists the ego-vehicle in the following aspects: traffic flow coordination, navigation, motion planning, situation recognition and prediction. It is assumed that there are many sensor networks. For safety reasons, several servers are presumed to achieve redundancy; therefore the ego-vehicle can also communicate with multiple sources.
Ego-vehicle: The ego-vehicle, which can drive in an autonomous mode, consists of Block A: (Autonomous) Vehicle(s), Block B: On-car communication units for communication with the cloud service, Block C: Processing and Navigation Unit, and Block D: On-board sensors and perception units.
Predicted time-state space: The predicted time-state space is necessary for motion planning of a robotic system. Predictions of where obstacles will move in the future lead to the predicted time-state space.
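To make the last two definitions concrete, the following is a minimal sketch, assuming Gaussian position predictions and a 95% chi-square gate, of checking a planned ego position against a predicted time-state space with confidence levels. All names and numbers are illustrative assumptions, not part of the patent.

```python
import numpy as np

def blocked(ego_xy, t, pred_mean, pred_cov, chi2_gate=5.99):
    """True if the ego position at time step t lies inside the obstacle's
    95% confidence ellipse (chi-square gate with 2 degrees of freedom)."""
    d = ego_xy - pred_mean[t]                # offset from the predicted mean
    m2 = d @ np.linalg.inv(pred_cov[t]) @ d  # squared Mahalanobis distance
    return m2 <= chi2_gate

# Predicted pedestrian mean positions and a covariance that grows with the
# prediction horizon, i.e. uncertainty propagation over 5 future steps.
mean = np.array([[2.0 + 0.4 * k, 1.0] for k in range(5)])
cov = np.array([np.eye(2) * (0.1 + 0.05 * k) for k in range(5)])

# A candidate ego trajectory, checked step by step against the time-state space.
ego_plan = [np.array([1.0 + 0.6 * k, 1.0]) for k in range(5)]
print([blocked(p, k, mean, cov) for k, p in enumerate(ego_plan)])
```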
STATE OF THE ART
Criticisms of existing test procedures with virtual environments and/or robots are:
• consideration of a static environment
• predefined trajectories of pedestrians
• no interaction
• testing highly influenced by the paradigm of testing the vehicle control of a deterministic vehicle
• not adequate or realistic for real-world scenarios (e.g. cities) and the safety verification of Level 3 or Level 4 autonomous vehicles
Map data and databases
There are different (online) map data services available: online map actualization (e.g. Waze [2]) and open source projects. Meanwhile, there are some companies that specialize in spatial data, for example HERE [3], [4] and TomTom [5]. Maps in the OpenStreetMap format are also available for research purposes [6]. There are also new approaches, databases and technologies for human movement detection (e.g. Mapillary [7], Placemeter [8]). Some current 3D virtual environments are available (e.g. 3D-OSM [9]), as well as new web services (e.g. bostonography.com [10], geOps [11], [12]). With the Google infrastructure [13] or via other commercial APIs (e.g. [4]) it is possible to use map APIs (e.g. APIs for geocoding, places, maps) with various kinds of information ([14]). An example of how the Google API can be used for tracking applications with smartphones is shown in [15]. It has been shown that communication between autonomous vehicles and pedestrians is possible via a communication network [16]. A current survey of human movement detection is given in [17], and of sensing technology in [18]. A current pedestrian detection system for driver assistance with off-board and on-board sensing units is presented in [19]. A pedestrian detection system with on-board sensors is presented in [20].
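As an illustration of how such open map data can feed a city graph (see the definitions above), the sketch below pulls a drivable road network from OpenStreetMap. The osmnx library and the example city are assumptions of this sketch, not components named by the invention.

```python
import osmnx as ox

# Download the drivable road network of an example city as a graph whose
# nodes are intersections and whose edges are street segments (a city graph).
G = ox.graph_from_place("Graz, Austria", network_type="drive")
print(len(G.nodes), "nodes,", len(G.edges), "edges")

# A shortest route on the city graph, weighted by street length in metres.
orig, dest = list(G.nodes)[0], list(G.nodes)[-1]
route = ox.shortest_path(G, orig, dest, weight="length")
if route:
    print("route through", len(route), "intersections")
```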
Human Movement Prediction
In [21] a study of the state of the art of movement prediction algorithms is presented. In [22] growing hidden Markov models are presented, which incrementally learn new behaviors. Reference [23] offers some new principles from a statistical inference perspective, where causal dependencies are incorporated into the movement prediction of pedestrians. In [24] Gaussian processes are used, with which spatial dependencies can be analyzed.
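The following is a minimal sketch of Gaussian-process movement prediction in the spirit of [24] (and of the fifth step of the invention, described below): a GP is fitted to an observed pedestrian track and queried two seconds ahead, yielding a mean prediction plus an uncertainty band. scikit-learn, the kernel choice and the synthetic track are assumptions of this sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t_obs = np.arange(0.0, 3.0, 0.1)[:, None]   # 3 s of motion-capture time stamps
# Walked distance along the path [m], with synthetic sensor noise.
s_obs = 1.3 * t_obs.ravel() + 0.05 * rng.standard_normal(len(t_obs))

# RBF kernel for smooth walking motion, WhiteKernel for the sensor noise.
gp = GaussianProcessRegressor(RBF(length_scale=1.0) + WhiteKernel(0.01))
gp.fit(t_obs, s_obs)

t_fut = np.arange(3.0, 5.0, 0.1)[:, None]   # predict 2 s into the future
mean, std = gp.predict(t_fut, return_std=True)
print(f"t=5s: {mean[-1]:.2f} m +/- {2 * std[-1]:.2f} m (95% band)")
```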
Motion Planning
Surveys of motion planning can be found in [25], [26], [27], [28], which give an overview of the state of the art. There are two current families of approaches which are promising for motion planning: optimization-based and sampling-based motion planning algorithms.
Sampling-based motion planning
Among sampling approaches, rapidly exploring random trees (RRTs) are the most prominent; they build a graph with different variants of exploration of the state space. For non-holonomic systems, kinodynamic versions are used [29], [30], [31]. RRTs and their variants can be found in automotive path planning [32], [33]. These can be used for real-time applications, but do not provide redundant pathways. Redundant pathways could be advantageous for dynamic environments with moving objects, but are computationally costly. In this document a compromise in the sense of optimality is presented. Motion planning in high-dimensional state spaces is known to be PSPACE-hard [33]. Probabilistic roadmaps (PRM) and rapidly exploring random trees (RRT) are incremental sampling-based planners.
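For illustration, here is a compact geometric RRT in the plane, following the scheme just described (sample, find the nearest node, extend by a bounded step, reject colliding states). The workspace bounds, circular obstacle and step size are arbitrary assumptions; a kinodynamic planner as in [29]-[31] would extend each node with the system dynamics instead of a straight-line step.

```python
import math, random

def collides(p, obstacles):
    # Circular obstacles given as (center, radius) pairs.
    return any(math.dist(p, c) < r for c, r in obstacles)

def rrt(start, goal, obstacles, step=0.5, iters=2000, goal_tol=0.5):
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        # Goal-biased sampling: 10% of draws aim straight at the goal.
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        near, d = nodes[i], math.dist(nodes[i], sample)
        # Extend from the nearest node by at most one step toward the sample.
        if d > step:
            new = tuple(n + step * (s - n) / d for n, s in zip(near, sample))
        else:
            new = sample
        if collides(new, obstacles):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:   # goal reached: backtrack the tree
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

print(rrt((0.0, 0.0), (9.0, 9.0), obstacles=[((5.0, 5.0), 1.5)]))
```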
Optimization-based motion planning
In [34], [35], [36], [37] and [38] mixed-integer linear programming (MILP) algorithms are used for motion planning. Mixed-integer linear programming can be used in an MPC formulation [38] and is promising because binary variables can encode logical expressions.
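A toy big-M formulation in the spirit of [34]-[38]: a point mass must reach a goal while, at every sampled time step, lying on one side of an obstacle interval; a binary variable selects the side. PuLP with its default CBC solver, the 1-D integrator dynamics and all constants are assumptions of this sketch, and, as in any discretized formulation, avoidance is only enforced at the sample instants.

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, value

N, dt, M = 10, 0.5, 100.0     # horizon steps, step length [s], big-M constant
obs_lo, obs_hi = 4.0, 5.0     # obstacle occupies the interval [4, 5]
goal = 8.0

prob = LpProblem("avoidance", LpMinimize)
x = [LpVariable(f"x{k}", -20, 20) for k in range(N + 1)]       # position
v = [LpVariable(f"v{k}", -3, 3) for k in range(N)]             # bounded speed
b = [LpVariable(f"b{k}", cat=LpBinary) for k in range(N + 1)]  # side selector

prob += goal - x[N]                        # objective: end as close to the goal as possible
prob += x[0] == 0                          # initial condition
prob += x[N] <= goal                       # do not overshoot the goal
for k in range(N):
    prob += x[k + 1] == x[k] + dt * v[k]   # integrator dynamics
for k in range(N + 1):
    # Disjunction via big-M: b=1 forces x <= obs_lo, b=0 forces x >= obs_hi.
    prob += x[k] <= obs_lo + M * (1 - b[k])
    prob += x[k] >= obs_hi - M * b[k]

prob.solve()
print([round(value(xi), 2) for xi in x])
```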
Inventions in ADAS and autonomous vehicles
In [39] an automated movement of a vehicle is described, especially in a fixed environment (e.g. car park, factory): semi-autonomous movement of the ego-vehicle, with detection of the vehicle's environment by outdoor sensors and applications to parking assistants or industrial robots; the surrounding road users are detected with external sensors. In [40] preceding vehicles are predicted by adapting the perception module (region of interest) with data fusion; the result is an adaptation of velocity and steering angle to assist the driver. In [41] traffic participants are predicted with a system comprising a localization unit for movable objects; collision avoidance (prediction of collisions and warnings) is done with cooperative sensors (active or passive RFID transponders) for pedestrian detection (not detection of hidden objects with cameras) and classification of the object. In [42] a driving strategy is determined with prediction of movements, evaluation of environment data, and modelling of the virtual driver with artificial intelligence. In [43] the region of movement is predicted, the normality of movement is classified, and movement models are selected for prediction. In [44] a collision avoidance system is introduced that brings the vehicle to a safe state with adequate and automated steering and acceleration; it comprises modules for prediction of trajectories of moving objects, warning of the driver, estimation of the collision risk and building of a collision-state map, trying of different acceleration/steering combinations to bring the vehicle to a safe driving state, and use of hypothetical trajectories. In [45] a digital map of a parking area is used with a Car2X communication network, so that the position data of mobile objects are detected; this information is used for navigation to a target position with collision avoidance. In [46] a process for collision avoidance and automated configuration of the working area of a robot is discussed. In [47] a classification of the type of object (e.g. bicycle, pedestrian) and a classification and prediction of behavior are presented; features are adaptation and correction of characteristic values and motion planning depending on predictions. In [48] a probabilistic situation analysis is presented for the fusion of situation analyses to trigger safety systems; the application is a pre-crash system. In [49] a prediction procedure for trajectories for collision avoidance and the control of velocity is presented. In [50] a visual pedestrian detection is described with extraction of a partial image and a processing unit with prediction of human behavior. In [51] a communication-based vehicle-pedestrian collision warning system is presented, with pedestrian detection, prediction of moving objects and ego-vehicle, and a path collision circuit for detection of collisions. In [52] a communication-based vehicle-pedestrian collision warning system is presented; the system includes a base, a mast and a plurality of sensors, and the prediction of moving objects and ego-vehicle and a path collision circuit for detection of collisions are described. In [53] a crowd movement prediction using optical flow algorithms is presented, with a predictive map of a distribution of objects of interest (OOIs). In [54] a computer vision approach for collision avoidance for pedestrians and analysis of the optical flow is presented.
In [55] a computer vision approach for estimating the time to collision (TTC) from a plurality of images is presented. In [56], [57] systems for object detection for use in autonomous vehicles are presented.
Interacting vehicles
In [58] a new research program on cooperatively interacting vehicles, initiated by the German research community, is described. In [59] many aspects of cooperative, interaction-based driving are analyzed with respect to safety.
In [60] a cloud-based system for autonomous vehicles is described that assists the internal navigation and motion planning with information from the cloud. Reference [61] is a start-up for the optimization of a fleet of autonomous vehicles via a cloud.
REFERENCES
[1] Iteam, https://iteam-project.net/, accessed: 2017-03-21.
[2] Waze, waze.com, accessed: 2017-03-14.
[3] wego.here.com, accessed: 2017-03-14.
[4] here.com, accessed: 2017-03-14.
[5] tomtom.com, accessed: 2017-03-14.
[6] H. Winner, S. Hakuli, and G. Wolf, Handbuch Fahrerassistenzsysteme: Grundlagen, Komponenten und Systeme für aktive Sicherheit und Komfort. Springer-Verlag, 2011.
[7] Mapillary, mapillary.com, accessed: 2017-03-14.
[8] Placemeter, placemeter.com, accessed: 2017-03-14.
[9] osm3d, osm-3d.org, accessed: 2017-03-14.
[10] Bostonography, bostonography.com, accessed: 2017-03-14.
[11] GeOps, geops.de, accessed: 2017-03-14.
[12] GeOps, tracker.geops.ch, accessed: 2017-03-14.
[13] Devgoogle, developers.google.com, accessed: 2017-03-14.
[14] Socialapis, https://www.programmableweb.com/news/top-10-social-apis-facebook-twitter-andgoogle-plus/analysis/2015/02/17, accessed: 2017-03-21.
[15] T. Jeske, Sicherheit und Datenschutz in nicht-interaktiven Crowdsourcing-Szenarien, Ph.D. dissertation, 2015.
[16] C. P. Urmson, I. J. Mahon, D. A. Dolgov, and J. Zhu, Pedestrian notifications, US Patent 9,196,164, Nov. 24, 2015.
[17] N. A. Ogale, A survey of techniques for human detection from video, Survey, University of Maryland, vol. 125, no. 133, p. 19, 2006.
[18] Xsens, xsens.com, accessed: 2017-03-14.
[19] P. Borges, R. Zlot, and A. Tews, Pedestrian detection for driver assist and autonomous vehicle operation using offboard and onboard sensing, in Australian Conference on Robotics and Automation (ACRA), 2010, pp. 1-6.
[20] M. Enzweiler and D. M. Gavrila, Monocular pedestrian detection: Survey and experiments, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2179-2195, 2009.
[21] S. Lefèvre, D. Vasquez, and C. Laugier, A survey on motion prediction and risk assessment for intelligent vehicles, Robomech Journal, vol. 1, no. 1, p. 1, 2014.
[22] A. D. V. Govea, Incremental learning for motion prediction of pedestrians and vehicles, Ph.D. dissertation, 2010.
[23] B. D. Ziebart, Modeling purposeful adaptive behavior with the principle of maximum causal entropy, Ph.D. dissertation, 2010.
[24] D. Ellis, E. Sommerlade, and I. Reid, Modelling pedestrian trajectory patterns with Gaussian processes, in Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on. IEEE, 2009, pp. 1229-1234.
[25] C. Goerzen, Z. Kong, and B. Mettler, A survey of motion planning algorithms from the perspective of autonomous UAV guidance, in Selected Papers from the 2nd International Symposium on UAVs, Reno, Nevada, USA, June 8-10, 2009. Springer, 2009, pp. 65-100.
[26] B. Paden, M. Čáp, S. Z. Yong, D. Yershov, and E. Frazzoli, A survey of motion planning and control techniques for self-driving urban vehicles, IEEE Transactions on Intelligent Vehicles, vol. 1, no. 1, pp. 33-55, 2016.
[27] C. Katrakazas, M. Quddus, W.-H. Chen, and L. Deka, Real-time motion planning methods for autonomous on-road driving: State-of-the-art and future research directions, Transportation Research Part C: Emerging Technologies, vol. 60, pp. 416-442, 2015.
[28] D. González, J. Pérez, V. Milanés, and F. Nashashibi, A review of motion planning techniques for automated vehicles, IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 4, pp. 1135-1145, 2016.
[29] J. Choi, Kinodynamic motion planning for autonomous vehicles, International Journal of Advanced Robotic Systems, vol. 11, no. 6, p. 90, 2014.
[30] A. Perez, R. Platt, G. Konidaris, L. Kaelbling, and T. Lozano-Perez, LQR-RRT*: Optimal sampling-based motion planning with automatically derived extension heuristics, in Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012, pp. 2537-2542.
[31] S. Karaman and E. Frazzoli, Optimal kinodynamic motion planning using incremental sampling-based methods, in Decision and Control (CDC), 2010 49th IEEE Conference on. IEEE, 2010, pp. 7681-7687.
[32] U. Schwesinger, M. Rufli, P. Furgale, and R. Siegwart, A sampling-based partial motion planning framework for system-compliant navigation along a reference path, in Intelligent Vehicles Symposium (IV), 2013 IEEE. IEEE, 2013, pp. 391-396.
[33] D. J. Webb and J. v. d. Berg, Kinodynamic RRT*: Optimal motion planning for systems with linear differential constraints, arXiv preprint arXiv:1205.5088, 2012.
[34] T. Schouwenaars, E. Féron, and J. How, Safe receding horizon path planning for autonomous vehicles, in Proceedings of the Annual Allerton Conference on Communication, Control and Computing, vol. 40, no. 1, 2002, pp. 295-304.
[35] A. Richards, T. Schouwenaars, J. P. How, and E. Feron, Spacecraft trajectory planning with avoidance constraints using mixed-integer linear programming, Journal of Guidance, Control, and Dynamics, vol. 25, no. 4, pp. 755-764, 2002.
[36] T. Schouwenaars, B. De Moor, E. Feron, and J. How, Mixed integer programming for multi-vehicle path planning, in Control Conference (ECC), 2001 European. IEEE, 2001, pp. 2603-2608.
[37] T. Schouwenaars, Safe trajectory planning of autonomous vehicles, Ph.D. dissertation, Massachusetts Institute of Technology, 2005.
[38] J. Eilbrecht and O. Stursberg, Auction-based cooperation of autonomous vehicles using mixed integer planning, AAET-Automatisiertes und vernetztes Fahren 2017, pp. 266-285, 2017.
[39] A. Augst and C. Patron, Verfahren zur Ausführung einer zumindest teilweise automatisierten Bewegung eines Fahrzeugs innerhalb eines räumlich begrenzten Bereichs, Patent DE 10 2014 218 429 A1, 03 17, 2016. [Online]. Available: https://register.dpma.de/DPMAregister/pat/register?AKZ=1020142184290
[40] R. Kastner, M. Kleinehagenbrock, M. Nishigaki, H. Kamiya, N. Mori, S. Wako-shi, and Kusuhara, Driver assist system with cut-in prediction, Patent DE 10 2015 200 215 A1, 07 28, 2016.
[41] S. Zecha and R. R. Helmar, Verfahren und Vorrichtung zur Prädiktion der Position und/oder Bewegung eines Objekts relativ zu einem Fahrzeug, Patent DE 10 2009 035 072 A1, 07 28, 2009.
[42] T. Fechner et al., Verfahren zum Bestimmen einer Fahrstrategie, Patent DE 10 2014 216 257 A1, 02 18, 2016.
[43] K. Sakai, T. Kindo, and M. Harada, Vorrichtung zum Vorhersagen der Bewegung eines mobilen Körpers, Patent DE 11 2010 000 802 T5, 02 12, 2010.
[44] J. Chassot, G. Ottmar, H. Frederic, S. Paasche, A. Schwarzhaupt, G. Speigelberg, and A. Sulzmann, Verfahren und System zur Vermeidung einer Kollision eines Kraftfahrzeuges mit einem Objekt, Patent DE 10 2005 023 832 A1, 11 30, 2006.
[45] S. Nordbruch, Verfahren und Vorrichtung zum Betreiben eines Fahrzeugs respektive eines Parkplatzes, Patent DE 10 2014 224 104 A1, 11 26, 2016.
[46] E.-H. Waled, Verfahren und Vorrichtung zum Vermeiden von Kollisionen zwischen Industrierobotern und deren Objekten, Patent DE 10226 140 A1, 06 13, 2004.
[47] K. Taguchi, Vorrichtung zur Vorhersage eines Verhaltens, Patent DE 11 2008 002 268 T5, 07 15, 2010.
[48] M.-M. Meinecke et al., Probabilistische Auslösestrategie, Patent DE 10 2008 046 488 A1, 03 11, 2010.
[49] n.d., Verfahren und Vorrichtung zum Prädizieren einer Bewegungstrajektorie, Patent DE 10 2006 036 363 A1, 04 05, 2007.
[50] T. Kindo, Pedestrian action prediction device and pedestrian action prediction method, Patent EP 2 759 998 A1, 09 20, 2011.
[51] L. Caminiti, J. C. Lovell, J. J. Richardson, and C. T. Higgins, Communication based vehicle-pedestrian collision warning system, US Patent 8,903,640, Dec. 2, 2014.
[52] L. Caminiti, J. C. Lovell, and J. J. Richardson, Communication based vehicle-pedestrian collision warning system, US Patent App. 12/403,067, Mar. 12, 2009.
[53] T. N. Dos Santos, R. C. Folco, and B. H. Leitao, Crowd movement prediction using optical flow algorithm, US Patent App. 13/894,458, May 15, 2013.
[54] D. Rosenbaum, A. Gurman, Y. Samet, G. P. Stein, and D. Aloni, Pedestrian collision warning system, US Patent App. 14/982,198, Dec. 29, 2015.
[55] G. Stein, E. Dagan, O. Mano, and A. Shashua, Collision warning system, US Patent App. 14/753,762, Jun. 29, 2015.
[56] J. Zhu, M. S. Montemerlo, C. P. Urmson, and A. Chatham, Object detection and classification for autonomous vehicles, US Patent 8,195,394, Jun. 5, 2012.
[57] J. Zhu, M. S. Montemerlo, C. P. Urmson, and A. Chatham, Object detection and classification for autonomous vehicles, US Patent 8,874,372, Oct. 28, 2014.
[58] C. Stiller, W. Burgard, B. Deml, L. Eckstein, F. Flemisch, F. Köster, M. Maurer, and G. Wanielik, Kooperativ interagierende Automobile.
[59] M. Naumann, P. F. Orzechowski, C. Burger, Ö. Ş. Taş, and C. Stiller, Herausforderungen für die Verhaltensplanung kooperativer automatischer Fahrzeuge.
[60] S. Kumar, S. Gollakota, and D. Katabi, A cloud-assisted design for autonomous driving, in Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing. ACM, 2012, pp. 41-46.
[61] bestmile, https://bestmile.com/, accessed: 2017-03-21.
[62] Itenach, http://www.lte-anbieter.info/5g/, accessed: 2017-03-22.
DESCRIPTION OF INVENTION
The invention presents a method to virtually test traffic situations with pedestrian interaction.
In a first step, the real world or new virtual environments are computed with vehicles and road users (e.g. pedestrians) and made accessible in a virtual reality environment. In a second step, information about the virtual reality environment is transmitted to the stimuli unit for the test person and to the vehicle. In a third step, with this information at the stimuli unit, the virtual environment is made accessible to test persons, and their actions in the virtual environment and in traffic scenarios are measured using a motion capture system. In a fourth step, the information about the measured gestures and actions of test persons reacting to the various traffic scenarios is transmitted to the visualization processing unit for visualization of the movement, and to the vehicle. In a fifth step, the prediction of the test persons' movements and the predictive control actuator signals for the vehicle are calculated; for this, a method based on Gaussian processes [24] is used and adapted with manifold learning approaches. In a sixth step, the results of the calculation are transmitted to the visualization processing unit.
Figure 1 shows the flow chart covering the six steps of the invention.
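As an orientation aid, one iteration of the sequential procedure of Figure 1 can be sketched as Python-style pseudocode. Every object and method name below is an illustrative assumption of this sketch, not an interface defined by the invention; per claim 14, the steps may also run non-sequentially (e.g. multi-threaded) rather than in this strict order.

```python
def co_simulation_iteration(world, stimuli, vehicle, mocap, predictor, viz):
    scene = world.compute()                      # step 1: virtual environment (real-world based or new)
    stimuli.send(scene)                          # step 2: scene to the test person's stimuli unit...
    vehicle.receive(scene)                       #         ...and to the vehicle
    actions = mocap.measure()                    # step 3: motion capture of the test person's actions
    viz.show(actions)                            # step 4: gestures/actions to visualization...
    vehicle.receive(actions)                     #         ...and to the vehicle
    ped_prediction = predictor.predict(actions)  # step 5: pedestrian movement prediction (GP-based)
    controls = vehicle.plan(ped_prediction)      #         and predictive control actuator signals
    viz.show(ped_prediction, controls)           # step 6: results back to the visualization unit
```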
Small-Scale virtual simulation platform
In Figure 2 a small-scale co-simulation unit with integration of human interaction is presented. Block A: Virtual environment, in which all virtual traffic participants interact. Block B: Server and Processing Unit, whose processing units generate the virtual environment. With the advances in mobile internet technology, Block C: Communication Device can be wireless (5G, the successor of LTE) or a wired (bus-based) communication system to exchange data with Block D: Single testable object (e.g. vehicle in Autonomous Mode as simulation software (SIL), model (MIL), vehicle (VIL) or hardware (HIL) in the loop) and Block E: Person with virtual (mixed) reality devices, i.e. perception, motion and position capture devices and (mixed) virtual reality glasses.
The persons can be involved in real traffic situations or virtual reality cabs (e.g. position, posture, perception). For Block D and Block E, a stimulus of the perception (e.g. visualization of a scene of the virtual environment) is assumed.
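One way to picture the data crossing Block C is as two message types, sketched below; the field names are assumptions read off the block descriptions above, not a protocol defined by the invention.

```python
from dataclasses import dataclass, field

@dataclass
class SceneStimulus:
    """Block B -> Blocks D and E via Block C: one rendered scene tick."""
    timestamp: float
    ego_pose: tuple                                 # (x, y, heading) of the testable object
    road_users: list = field(default_factory=list)  # other virtual traffic participants

@dataclass
class PersonMeasurement:
    """Block E -> Block B: motion/position capture of the test person."""
    timestamp: float
    position: tuple                                 # tracked position in the cab or cage
    posture: dict = field(default_factory=dict)     # joint angles, gesture labels
```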
Networked based and world wide web virtual simulation platform
Figure 3 shows (distributed) large-scale co-simulation units with incorporation of real-world events, real environments and human interaction. Again a large-scale communication network is used (Block F: Communication system (e.g. Internet, wireless 5G)). Stimuli for the perception units are provided by Block G1...Gn: Perception-Stimuli Unit for Block D1...Dn: Testable Object (VIL, MIL, SIL, PIL), and by Block H1...Hn: Human machine interface for 3D visualization of perception stimuli for Block I1...In: Persons with virtual (mixed) reality devices (e.g. perception, motion and position capture systems) in real traffic situations or virtual reality cabs. The real world can be incorporated with Block J: Adaptive Virtual World (e.g. adaptation to the real world) and Block K: Real-world information unit (e.g. adaptation, pedestrian movement tracking). The same holds if Block J is distributed over several servers. Depending on the positions of the testable objects (D1...Dn) and the persons (I1...In) in the real or in the virtual world, queries are automatically issued to Block J to trigger the stimuli units.
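The position-triggered queries to Block J just described can be sketched as follows; all names are illustrative assumptions of this sketch.

```python
def on_position_update(entity, adaptive_world, stimuli_units):
    """Entity = a testable object (D1...Dn) or a person (I1...In)."""
    region = adaptive_world.locate(entity.position)  # query Block J for the local region
    scene = adaptive_world.scene(region)             # scene adapted to the real world via Block K
    for unit in stimuli_units:                       # Blocks G1...Gn and H1...Hn
        if unit.covers(region):
            unit.trigger(scene, entity)              # stimulate only the affected units
```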
Figure Description
Fig. 1 shows an example flow-chart for a sequential procedure
Fig. 2 shows the small-scale co-simulation with human-interaction units
Fig. 3 shows the (distributed) large-scale co-simulation units with incorporation of real world events and real environments and human interaction
Legend
• Block A: Virtual environment
• Block B: Server and Processing Unit
• Block C: Communication Device (wireless: 5G (successor of LTE) or wired bus system)
• Block D: Single testable object (e.g. vehicle in Autonomous Mode as simulation software (SIL), model- (MIL), vehicle- (VIL) or hardware- (HIL) in the loop)
• Block E: Person with virtual (mixed-) reality devices with perception-, motion- and position-capture system devices and (mixed-) virtual reality glasses. The persons can be involved in real traffic situations or virtual reality cabs
• Block F: Communication system (e.g. Internet, wireless 5G)
• Block G1...Gn: Perception-Stimuli Unit
• Block D1...Dn: Testable Object (VIL, MIL, SIL, PIL)
• Block H1...Hn: Human machine interface for 3D visualization for perception stimuli
• Block I1...In: Persons with virtual (mixed-) reality devices (e.g. perception, motion and position capture systems) in real traffic situations or virtual reality cabs
• Block J: Adaptive Virtual world (e.g. adaption to real world)
• Block K: Real world information unit (e.g. adaption, pedestrian movement tracking)
• Autonomous Mode

Claims (16)

1. Method for virtual testing of vehicles and their algorithms in situations with pedestrian interaction, comprising: in a first step, the computation of real-world or new virtual environments with vehicles and road users (e.g. pedestrians); in a second step, the sending of information about the virtual reality environment to the stimuli unit for the test person and the vehicle; in a third step, the measurement of the actions of the test person with a motion capture system; in a fourth step, the sending of the information about gestures and actions to the visualization processing unit for visualization of the movement and to the vehicle; in a fifth step, the computation of predictions of the pedestrian movement and of predictive control actuator signals for the vehicle; in a sixth step, the sending of the results to the visualization processing unit.
2. Method according to claim 1, characterized by behavior in virtual environments with incorporation of real-world environments, real-world events, realistic road topologies, spatial and daily typical road behavior, and an interface to incorporate realistic human behavior, especially in safety-related scenarios (e.g. collision avoidance).
3. Method according to claim 1 characterized by a motion capture system and stimuli unit to capture and stimulate the behavior of a test person in a virtual reality cage or virtual reality cave.
4. Method according to claim 1 characterized by a visualization processing unit for processing virtual realities with interfaces for the vehicle and the motion capture system of the pedestrian.
5. Method according to claim 1, characterized by a stimuli unit for the test person in the form of (mixed) virtual reality glasses or a virtual reality cave.
6. Method according to claim 1, characterized in that the vehicles comprise all forms of vehicles in every mode, for each autonomous vehicle level.
7. Method according to claim 1 characterized by incorporation of SIL, PIL, HIL, VIL units.
8. Method according to claim 1, characterized by incorporation as an extension of existing SIL, PIL, HIL, VIL units in the form of a co-simulation and, for large-scale scenarios, as an online game using world wide web infrastructure.
9. Method according to claim 1 characterized by testing the interaction process between pedestrians, virtual environments, and other road users and the vehicles.
10. Method according to claim 1, characterized by an integration of realistic 3D environment models (different historical and cultural environments) and the motion behavior of road users into the test system.
11. Method according to claim 1, characterized by evasive maneuver secure motion planning and uncertainty quantification.
12. Method according to claim 1, characterized by testing in virtual environments of real cities and their infrastructure with people from different countries and cultures.
13. Method according to claim 1, characterized by testing with real world events, incorporation of social networks and other information.
14. Method according to claim 1, characterized by an operation mode in a non-sequential manner (e.g. multi-threading).
15. Method according to claim 1, characterized by an outdoor virtual testing method with augmented reality glasses, characterized by a test person in real environments (e.g. city, street) seeing oncoming vehicles, by virtualization of the vehicle with certain vehicle dynamics, stimulated by the augmented reality glasses, and without virtualization of computationally intensive 3D worlds.
16. Method according to claims 1 and 15, characterized by a new interaction-based collision avoidance testing.
GB1709348.5A 2017-06-13 2017-06-13 Method and process for co-simulation with virtual testing of real environments with pedestrian interaction Withdrawn GB2563400A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1709348.5A GB2563400A (en) 2017-06-13 2017-06-13 Method and process for co-simulation with virtual testing of real environments with pedestrian interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1709348.5A GB2563400A (en) 2017-06-13 2017-06-13 Method and process for co-simulation with virtual testing of real environments with pedestrian interaction

Publications (2)

Publication Number Publication Date
GB201709348D0 GB201709348D0 (en) 2017-07-26
GB2563400A true GB2563400A (en) 2018-12-19

Family

ID=59358177

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1709348.5A Withdrawn GB2563400A (en) 2017-06-13 2017-06-13 Method and process for co-simulation with virtual testing of real environments with pedestrian interaction

Country Status (1)

Country Link
GB (1) GB2563400A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110427682A (en) * 2019-07-26 2019-11-08 清华大学 A kind of traffic scene simulation experiment platform and method based on virtual reality
US20210394787A1 (en) * 2020-06-17 2021-12-23 Shenzhen Guo Dong Intelligent Drive Technologies Co., Ltd. Simulation test method for autonomous driving vehicle, computer equipment and medium
WO2023081948A1 (en) 2021-11-09 2023-05-19 Avl List Gmbh Test environment for urban human-machine interaction

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110710813B (en) * 2019-10-08 2023-10-20 浙江博泰家具股份有限公司 Dynamic interactive seat of virtual reality
CN111458154A (en) * 2020-04-01 2020-07-28 清华大学苏州汽车研究院(吴江) System and method for testing human-vehicle-surrounding conflict scene based on automatic driving of whole vehicle
CN111841012B (en) * 2020-06-23 2024-05-17 北京航空航天大学 Automatic driving simulation system and test resource library construction method thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160000136A (en) * 2014-06-24 2016-01-04 군산대학교산학협력단 Walking pattern analysis system and method
US9741169B1 (en) * 2014-05-20 2017-08-22 Leap Motion, Inc. Wearable augmented reality devices with object detection and tracking
US9754167B1 (en) * 2014-04-17 2017-09-05 Leap Motion, Inc. Safety for wearable virtual reality devices via object detection and tracking

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9754167B1 (en) * 2014-04-17 2017-09-05 Leap Motion, Inc. Safety for wearable virtual reality devices via object detection and tracking
US9741169B1 (en) * 2014-05-20 2017-08-22 Leap Motion, Inc. Wearable augmented reality devices with object detection and tracking
KR20160000136A (en) * 2014-06-24 2016-01-04 군산대학교산학협력단 Walking pattern analysis system and method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110427682A (en) * 2019-07-26 2019-11-08 清华大学 A kind of traffic scene simulation experiment platform and method based on virtual reality
CN110427682B (en) * 2019-07-26 2020-05-19 清华大学 Traffic scene simulation experiment platform and method based on virtual reality
US20210394787A1 (en) * 2020-06-17 2021-12-23 Shenzhen Guo Dong Intelligent Drive Technologies Co., Ltd. Simulation test method for autonomous driving vehicle, computer equipment and medium
WO2023081948A1 (en) 2021-11-09 2023-05-19 Avl List Gmbh Test environment for urban human-machine interaction

Also Published As

Publication number Publication date
GB201709348D0 (en) 2017-07-26

Similar Documents

Publication Publication Date Title
Deo et al. Trajectory forecasts in unknown environments conditioned on grid-based plans
GB2563400A (en) Method and process for co-simulation with virtual testing of real environments with pedestrian interaction
Laugier et al. Probabilistic analysis of dynamic scenes and collision risks assessment to improve driving safety
AU2019233779B2 (en) Vehicle tracking
GB2562049A (en) Improved pedestrian prediction by using enhanced map data in automated vehicles
Sales et al. Adaptive finite state machine based visual autonomous navigation system
Niranjan et al. Deep learning based object detection model for autonomous driving research using carla simulator
Ridel et al. Understanding pedestrian-vehicle interactions with vehicle mounted vision: An LSTM model and empirical analysis
CN116323364A (en) Waypoint prediction and motion forecast for vehicle motion planning
EP4181091A1 (en) Pedestrian behavior prediction with 3d human keypoints
GB2564897A (en) Method and process for motion planning in (un-)structured environments with pedestrians and use of probabilistic manifolds
JP7471397B2 (en) Simulation of various long-term future trajectories in road scenes
WO2022156181A1 (en) Movement trajectory prediction method and apparatus
CN115485698A (en) Space-time interaction network
Goebl et al. Design and capabilities of the Munich cognitive automobile
Agafonov et al. 3D objects detection in an autonomous car driving problem
Sales et al. 3d vision-based autonomous navigation system using ann and kinect sensor
Sharma et al. Navigating uncertainty: The role of short-term trajectory prediction in autonomous vehicle safety
US11887317B2 (en) Object trajectory forecasting
Bhaggiaraj et al. Deep Learning Based Self Driving Cars Using Computer Vision
Sales et al. Fsm-based visual navigation for autonomous vehicles
Sokolov et al. Methodological Aspects for the Development of Information Systems of Unmanned Mobile Vehicles.
Perla et al. Implementation of Autonomous Cars using Machine Learning
Babaei et al. Perception System Architecture for Self-Driving Vehicles: A Cyber-Physical Systems Framework
US12005892B2 (en) Simulating diverse long-term future trajectories in road scenes

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)