CN107199966B - Method and system for enabling interaction in a test environment - Google Patents

Method and system for enabling interaction in a test environment

Info

Publication number
CN107199966B
Authority
CN
China
Prior art keywords
target
data
vehicle
test
occupant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710142119.6A
Other languages
Chinese (zh)
Other versions
CN107199966A (en)
Inventor
P. Andersson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Volvo Car Corp
Original Assignee
Volvo Car Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Volvo Car Corp filed Critical Volvo Car Corp
Publication of CN107199966A publication Critical patent/CN107199966A/en
Application granted granted Critical
Publication of CN107199966B publication Critical patent/CN107199966B/en


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3664Environments for testing or debugging software
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00Registering or indicating the working of vehicles
    • G07C5/008Registering or indicating the working of vehicles communicating information to a remotely located station
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00Simulators for teaching or training purposes
    • G09B9/02Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/04Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G09B9/042Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles providing simulation in a real vehicle
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00Simulators for teaching or training purposes
    • G09B9/02Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/04Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Mechanical Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Transportation (AREA)
  • Mathematical Physics (AREA)
  • Automation & Control Theory (AREA)
  • Quality & Reliability (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method performed by an interactive testing system for enabling interaction between a first test target and at least a second test target, the first target acting within a first environment and the at least second target acting within at least a second environment. The system provides a first virtual reality layer to the first target and a second virtual reality layer to the second target, such that the mixed reality perceived by the first target corresponds to the mixed reality perceived by the second target; continuously, periodically or intermittently: deriving first data relating to the first target, the first data including status information of the first target; providing first target behavior data to the second layer based on the first data, such that the status information of the first target is contained in the mixed reality of the second target; deriving second data relating to the second target, the second data including status information of the second target; and providing second target behavior data to the first layer based on the second data, such that the status information of the second target is included in the mixed reality of the first target. The disclosure also relates to an interactive testing system.

Description

Method and system for enabling interaction in a test environment
Technical Field
The invention relates to an interactive test system, and to a method performed by such a system, for enabling interaction between a first test object comprising a vehicle and at least a second test object in a test environment.
Background
Since countless automobiles travel the roads around the globe day and night, all year round, there is always a risk of imminent collisions involving a vehicle and other road users, such as pedestrians, cyclists or other vehicles. A vehicle collision can, of course, result in damage and/or physical injury to the own vehicle and/or the other road users involved. Thus, in order to try to avoid vehicle collisions, a car may be at least partially autonomous and/or provided with a driver assistance system. Such systems may monitor the vehicle surroundings, determine whether a collision with an object is likely, and further alert the driver and/or intervene in, for example, the steering system and/or braking system of the vehicle to avoid an imminent or likely collision. For example, when a vehicle enters a city, it is important that the vehicle interacts properly and/or as desired with vulnerable road users (VRUs), such as pedestrians, to avoid critical situations. In order for a driver assistance system to function as desired, the functions and/or parameters relating to active safety need to be configured and/or adjusted in a suitable manner. There are many types of potential and/or impending collisions in which a vehicle may be involved, so anticipating appropriate configurations and/or adjustments of the functions and/or parameters for different collision situations may be challenging. Thus, in order to determine appropriate settings, a test procedure involving different traffic situations is typically performed. Such test procedures, however, tend to be complicated and/or expensive, particularly when high-risk situations are involved. If the traffic situation also involves a VRU, the test procedure may moreover be risky and/or dangerous, in particular for the VRU. It is therefore further known to digitally simulate different traffic situations. US6950788, for example, relates to such computer-implemented systems and methods. However, while US6950788 discloses models that simulate automobile traffic as well as bicycle and pedestrian traffic, configuring and/or adjusting functions and/or parameters related to active safety in a satisfactory manner remains a challenge.
Disclosure of Invention
It is therefore an object of embodiments herein to provide an alternative method of enabling interaction between a first test object comprising a vehicle and at least a second test object in a test environment. According to a first aspect of embodiments herein, the object is achieved by the method of the present invention. The technical features and the corresponding advantages of the method will be further detailed below.
By proposing a method for enabling interaction between a first test object comprising a vehicle and at least a second test object adapted to communicate directly or indirectly with the first test object in a test environment, a method is provided that enables, for example, testing of high-risk situations involving a test vehicle and one or more other test objects. "Interaction" may refer to "secure and/or contactless interaction" and/or "interaction without direct contact", while "enabling" interaction may refer to "supporting, providing, monitoring, evaluating, analyzing, and/or studying" interaction. A "test environment" may refer to "test equipment" and/or a "test site", and may furthermore be physical and/or digital. "Vehicle", which may be a test vehicle, may refer to any vehicle, such as an engine-propelled vehicle, e.g., an automobile, truck, van, bus, motorcycle, and the like. The vehicle, which may include a positioning system, may support one or more driving assistance functions, such as driving assistance functions related to active safety, such as collision avoidance. According to an example, a "vehicle" may thus refer to a "vehicle supporting one or more driving assistance functions". Additionally or alternatively, the vehicle may be represented by a vehicle that partially supports autonomous, semi-autonomous, and/or fully autonomous driving. Furthermore, the at least second "test object" may refer to any physical object considered in the test scenario of the first test object, and may for example refer to any "road user" as represented by and/or including a person, e.g. a pedestrian, a cyclist, etc., an animal and/or another vehicle. It should be noted that the term "test object" may mean one or more such "road users". The term "adapted to communicate" may mean "adapted to communicate wirelessly, e.g. via WiFi or the like or an equivalent or substitute thereof", and/or "adapted such that information and/or data may be transferred between said first and said at least second test object". Further, "indirectly" may refer to "via a remote server". Furthermore, the "interactive test system" may be completely and/or at least partly comprised in a control server and/or further distributed between the first and/or at least second test object.
Because during the test procedure the first test object acts in a first physical test environment and the at least second test object acts in at least a second physical test environment that is physically separate from the first test environment, the first test object and the at least second test object are physically separate from each other during the test procedure. Thus, a test scenario may be performed without direct contact between the first test object and the at least second test object. The "test procedure" may refer to a "test scenario", and the expression that a target "acts" in a physical test environment may refer to the target "participating in the test procedure", "being present in" and/or "moving within" the physical test environment throughout. The "first physical test environment" may refer to a "first physical test environment of the test environment", and the "second physical test environment" may refer to a "second physical test environment of the test environment". Furthermore, the first and/or at least second "physical test environment" may refer to any (e.g., restricted) arbitrary area, such as a test track, a substantially open area, site and/or region, an indoor facility and/or room, and so forth. The characteristics of a physical test environment, such as its structure, configuration, size, range and/or extent, may vary with the situation at hand, and may range, for example, from meters and/or square meters to kilometers and/or square kilometers. Furthermore, the characteristics of the first physical test environment may differ from those of the at least second physical test environment, even considerably; the first physical test environment may, for example, be represented by a test track, while the second physical test environment may, for example, be represented by a substantially empty room. Furthermore, the term "physically separated" may refer to "physically separated by at least a minimum distance and/or a safe distance", which may vary with the situation at hand, e.g. from substantially 0 meters (if the first and at least second physical test environments are separated, e.g., by a wall and/or fence, etc.) to a substantially unlimited distance, e.g. thousands of kilometers.
Since the interactive testing system provides said first test target with a first virtual reality layer associated with the first test environment and provides the at least second test target with at least a second virtual reality layer associated with the at least second test environment, such that the mixed reality perceived by the first test target corresponds to the mixed reality perceived by the at least second test target, a first set of information, e.g. data, graphics and/or sound, may be superimposed on the first test environment from the viewpoint of the first test target, and at least a second set of information may be superimposed on the at least second test environment from the viewpoint of the at least second test target, so that the first and the at least second test target and/or their users experience a similar "reality". The first virtual reality layer may thus differ from the at least second virtual reality layer, since the characteristics of the first physical test environment may differ from the characteristics of the at least second physical test environment. "Mixed reality", as generally known in the art, may refer to superimposing information on the real world such that the "digital world" blends with the real physical world, thus mixing and/or merging virtual reality with reality. Thus, "mixed reality", sometimes referred to as "hybrid reality", generally refers to the merging of real and virtual worlds to produce a new environment and visualization in which physical and digital objects coexist and interact in real time. The term "virtual reality layer", which may refer to "virtual reality" as known in the art in a similar manner, may include, for example, information that reproduces or provides a particular traffic environment and/or traffic situation. A virtual reality layer "associated with" a test environment may refer to a virtual reality layer that is "superimposed on" and/or "based on" the test environment. Furthermore, "providing" a virtual reality layer may refer to "applying", "enabling", "submitting" and/or "providing by the control server" the virtual reality layer, whereas providing "to a test target" may refer to providing to "a virtual processing unit associated with and/or comprised in the test target", which virtual processing unit may be adapted to receive, interpret and/or derive information from the virtual reality layer. The optional "virtual processing unit" may be included in a test target; additionally or alternatively, a "virtual processing unit" may be located remotely from the test target to which it is wirelessly connected, e.g. contained in and/or represented by any electronic unit, e.g. one or more electronic control units (ECUs), laptops and/or smartphones. Furthermore, the term "perceived by the test target" may refer to "perceived by the virtual processing unit" and/or "perceived by a user of the test target". "Corresponding to" may mean "substantially corresponding to", as well as "equivalent to", "mapping", "matching", "consistent with" and/or "related to".
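To make the layer-provision step more concrete, the following minimal Python sketch illustrates one possible reading of it. All names (e.g. VirtualRealityLayer, VirtualProcessingUnit, provide_layers) are hypothetical illustrations and not part of the claimed system: a control server builds one shared scenario and hands each test target a layer tied to its own physical test environment, so that the two perceived mixed realities correspond.

```python
from dataclasses import dataclass, field


@dataclass
class VirtualRealityLayer:
    """Information to be superimposed on one physical test environment."""
    environment_id: str
    overlays: list = field(default_factory=list)  # graphics, sounds, data, ...


class VirtualProcessingUnit:
    """Receives and interprets a virtual reality layer for one test target."""

    def __init__(self, target_id: str):
        self.target_id = target_id
        self.layer = None

    def receive_layer(self, layer: VirtualRealityLayer) -> None:
        self.layer = layer


def provide_layers(first_unit, second_unit, scenario_overlays):
    """Control-server step: give each target a layer associated with its
    own environment, built from one shared scenario, so that the mixed
    reality perceived by each target corresponds to the other's."""
    first_unit.receive_layer(VirtualRealityLayer("env-1", list(scenario_overlays)))
    second_unit.receive_layer(VirtualRealityLayer("env-2", list(scenario_overlays)))


vehicle_unit = VirtualProcessingUnit("vehicle-31")
vru_unit = VirtualProcessingUnit("vru-41")
provide_layers(vehicle_unit, vru_unit, ["crosswalk", "traffic light"])
```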
Since the interactive test system also continuously, periodically or intermittently derives first target data relating to the first test target, which first target data comprise state information and/or actions of the first test target, information relating to the current state of the first test target and/or information relating to how the first test target acts (e.g. its behavior and/or interactions during the test procedure) is monitored substantially continuously. The term target data "relating to" the test target may throughout refer to target data "of" the test target, and the term "target data" may throughout refer to "relevant target data", "active-safety-related target data", "driver-assistance-related target data" and/or "current target data". Further, "data" may throughout refer to "information", "parameters" and/or "attributes". "State information" may throughout refer to "current state information", "relevant state information", "active-safety-related state information", "driver-assistance-related state information" and/or "parameters", while "actions" of a test target may throughout refer to "actions performed", "steps", "activities", "behavior" and/or "driving behavior" of the test target. "State information" and/or "actions" may accordingly throughout refer to state information and/or actions relating to, for example, geographical position, speed, direction, sound, indicators, horn, lights, brake lights, brakes, braking force, steering wheel angle, etc. "Deriving" target data may throughout refer to "receiving", "tracking" and/or "monitoring" target data, for example supported by one or more sensors carried on board the test target and/or at least one visual sensor (e.g., a camera) adapted to monitor the test target externally. Such one or more visual sensors may be arbitrarily positioned and/or movable within the test environment, e.g. to view the test target from a remote location, and/or positioned on and/or carried by the test target. The target data may further be submitted, sent and/or provided to the control server described above. "Deriving" target data may therefore throughout be preceded by the action step of "monitoring" target data, and may furthermore refer to "deriving target data by the control server". The term "continuously, periodically or intermittently" may throughout refer to "continuously, periodically or intermittently during the test procedure".
Because the interactive testing system further provides, continuously, periodically or intermittently, first target behavior data to the at least second virtual reality layer based on the first target data, such that at least a portion of the state information and/or actions of the first test target is contained in the mixed reality perceived by the at least second test target, the virtual reality layer of the second test target incorporates the first test target and, substantially continuously, at least a portion of its state and/or actions, so that the second test target and/or its user or occupant experiences a virtual copy of the first test target and its state and/or actions via the mixed reality. The at least second test target and/or its user or occupant therefore interacts with the virtual copy of the first test target during the test procedure and can thus base the actions and/or behavior upcoming during the test procedure on the behavior of the first test target, i.e. on the first target behavior data. The term "target behavior data" may throughout refer to "data relating to the state and/or actions of a test target", while "at least a portion" may throughout refer to "one or more portions and/or parameters". Furthermore, "providing" target behavior data may throughout refer to "submitting", "sending" and/or "providing by the control server" target behavior data, while the term "based on" target data may throughout refer to "derived from" or "obtained from" target data.
Furthermore, since the interactive test system continuously, periodically or intermittently derives at least second target data related to the at least second test target, the at least second target data comprising state information and/or actions of the at least second test target, information relating to the current state of the at least second test target and/or information relating to how the at least second test target acts (e.g. its behavior and/or interactions during the test procedure) is monitored substantially continuously.
Since the interactive testing system further provides, continuously, periodically or intermittently, at least second target behavior data to the first virtual reality layer based on the at least second target data, such that at least a part of the state information and/or actions of the at least second test target is comprised in the mixed reality perceived by the first test target, the virtual reality layer of the first test target incorporates the at least second test target and, substantially continuously, at least a part of its state and/or actions, so that the first test target and/or its user or occupant experiences a virtual copy of the at least second test target and its state and/or actions via said mixed reality. The first test target and/or its user or occupant therefore interacts with the virtual copy of the at least second test target during the test procedure and can thus base the actions and/or behavior upcoming during the test procedure on the behavior of the at least second test target, i.e. on the at least second target behavior data. Thus, according to the proposed method, the first and the at least second test target may interact with each other throughout the test procedure without any risk of direct contact. Complex and/or high-risk test situations, such as test procedures relating to collisions and/or impending collisions, may accordingly be performed in a safe manner.
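The four recurring steps (derive first target data, mirror it into the second layer, derive second target data, mirror it into the first layer) can be pictured as a single periodic loop. The sketch below is a simplified illustration under assumed names (TargetData, derive_target_data, run_test_procedure); real target data would come from on-board sensors, motion capture and/or external cameras rather than the stand-in dictionaries used here.

```python
import time
from dataclasses import dataclass


@dataclass
class TargetData:
    """Illustrative state information of one test target."""
    target_id: str
    position: tuple   # geographic position within its own test environment
    speed: float      # m/s
    heading: float    # degrees


class Layer:
    """Minimal stand-in for a virtual reality layer."""
    def __init__(self):
        self.overlays = []


def derive_target_data(sensor_reading: dict) -> TargetData:
    """Stand-in for deriving data from on-board or external sensors."""
    return TargetData(sensor_reading["id"], sensor_reading["pos"],
                      sensor_reading["speed"], sensor_reading["heading"])


def run_test_procedure(first_sensors, second_sensors,
                       first_layer: Layer, second_layer: Layer,
                       period_s: float = 0.05, duration_s: float = 0.5) -> None:
    """Periodically mirror each target's state into the other target's
    layer, so that each mixed reality contains a virtual copy of the
    physically separated counterpart."""
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        first_data = derive_target_data(first_sensors())
        second_layer.overlays = [first_data]    # first target behavior data
        second_data = derive_target_data(second_sensors())
        first_layer.overlays = [second_data]    # second target behavior data
        time.sleep(period_s)


run_test_procedure(
    lambda: {"id": "vehicle-31", "pos": (40.0, 0.0), "speed": 8.3, "heading": 90.0},
    lambda: {"id": "vru-41", "pos": (60.0, -3.0), "speed": 1.4, "heading": 0.0},
    Layer(), Layer())
```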
Optionally, the interactive testing system may at least partially visualize the test procedure on one or more displays associated with the control server. The test procedure can thereby, at least in part, be intuitively followed, studied and/or evaluated, for example by a test operator and/or test conductor. The input for visualizing the test procedure may comprise at least a digital representation of the first and/or the at least second physical test environment, and may further be based on the first and/or at least second target data derived continuously, periodically or intermittently.
Further, optionally, the interactive test system may initiate the test procedure. "Initiating" may, for example, mean "enabling", "engaging" and/or "initiating by the control server". Initiating the test procedure may further optionally comprise providing one or more initial action instructions to the first and/or the at least second test target, e.g. by the control server. "Action instructions" may throughout refer to, for example, instructions regarding and/or implying geographical position, route, speed, direction, use of the horn, lights and/or headlights, etc., and may also relate to the parameters of the aforementioned "state information".
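As an illustration of this optional initiation step, the hedged sketch below (names such as ActionInstruction and initiate_test_procedure are invented for the example) shows a control server distributing initial action instructions, here a route and a speed, to both test targets.

```python
from dataclasses import dataclass


@dataclass
class ActionInstruction:
    """Illustrative initial instruction for one test target."""
    target_id: str
    route: list                 # waypoints within the target's environment
    speed: float                # requested speed, m/s
    headlights_on: bool = False


class ControlServer:
    """Minimal stand-in: just prints what it would transmit wirelessly."""
    def send(self, target_id: str, payload: ActionInstruction) -> None:
        print(f"-> {target_id}: {payload}")


def initiate_test_procedure(server: ControlServer, instructions) -> None:
    """Start the test procedure by handing out the initial instructions."""
    for instruction in instructions:
        server.send(instruction.target_id, instruction)


initiate_test_procedure(ControlServer(), [
    ActionInstruction("vehicle-31", route=[(0, 0), (120, 0)], speed=8.3),
    ActionInstruction("vru-41", route=[(60, -5), (60, 5)], speed=1.4),
])
```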
Optionally, the interactive testing system further determines an interaction behavior of the first test target and/or the at least second test target based on the derived first target data and/or the derived at least second target data. Subsequently, based on the interaction behavior, the interactive testing system also provides action instructions to the first and/or the at least second test target. Thereby, by evaluating the interaction between the first test target and the at least second test target from the perspective of any one or all test targets, one or more action instructions may be issued to one or more test targets instructing them to act according to the action instructions, e.g. change route, activate headlights, etc. Thus, depending on how the test targets act and/or interact during the test procedure, new and/or updated action instructions may be given, e.g. during an ongoing test procedure. The "interaction behavior" of a test target may refer to the "actions" of the test target, while the term "based on" target data may throughout refer to "derived from" target data. "Determining" an interaction behavior may refer to "evaluating", "deriving", "monitoring", "studying" and/or "determining by the control server" an interaction behavior. Furthermore, "providing" action instructions may refer to "delivering", "submitting", "sending", "providing during the test procedure" and/or "providing by the control server" action instructions. Additionally or alternatively, the interactive testing system also adjusts, based on the optionally determined interaction behavior, a data parameter of the first and/or at least second test target, which data parameter is related to a driving assistance function. Thereby, by evaluating the interaction between the first test target and the at least second test target from the perspective of any or all of the test targets, one or more data parameters related to the driving assistance function of one or more of the test targets, such as steering angle, braking force, horn activation, etc., may be adjusted. Thus, depending on how a test target acts and/or interacts during the test procedure, its data parameters, or the data parameters of other test targets, may for example be tuned during the ongoing test procedure. According to an example, the data parameters may be adjusted in a self-learning manner. The term "parameter" may refer to a data "attribute", and "data parameter" may refer to a "data parameter related to an active safety and/or driving assistance function". The data parameters may thus be represented by parameters related to braking, steering, the horn, indicators, use of lights and/or headlights, etc., and may further relate to the parameters of the above-described "state information". Further, "adjusting" a data parameter may refer to "tuning", "adjusting by the control server" and/or "initiating an adjustment of" a data parameter.
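One way to picture this optional evaluation-and-adjustment loop is sketched below; the closing-distance test and the brake_trigger_distance parameter are invented examples of an interaction behavior and a driver-assistance data parameter, not settings prescribed by the method.

```python
import math


def evaluate_interaction(vehicle_data: dict, vru_data: dict,
                         warn_distance: float = 15.0) -> dict:
    """Judge the interaction from derived target data: here simply the
    planar distance between the vehicle and the VRU in scenario space."""
    distance = math.dist(vehicle_data["pos"], vru_data["pos"])
    return {"distance": distance, "too_close": distance < warn_distance}


def adjust_parameters(assist_params: dict, interaction: dict) -> dict:
    """If the interaction suggests the collision-avoidance function acted
    too late, tighten a driving-assistance parameter for the next run
    (a crude stand-in for the self-learning adjustment mentioned above)."""
    if interaction["too_close"]:
        assist_params["brake_trigger_distance"] *= 1.1  # brake earlier
    return assist_params


params = {"brake_trigger_distance": 10.0}
interaction = evaluate_interaction({"pos": (50.0, 0.0)}, {"pos": (60.0, 1.0)})
print(adjust_parameters(params, interaction))
```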
Optionally, the first test target may additionally include a vehicle occupant. Providing the first virtual reality layer then comprises providing a first vehicle virtual reality layer to the vehicle and/or providing a first occupant virtual reality layer to a head mounted display (HMD) carried by the vehicle occupant. Furthermore, deriving the first target data may then comprise deriving vehicle data from the vehicle and/or deriving occupant data from a first motion capture system carried by and/or visually sensing the vehicle occupant. The vehicle data comprise state information and/or actions of the vehicle, and the occupant data comprise state information and/or actions of the vehicle occupant. Furthermore, providing the first target behavior data then comprises providing vehicle behavior data based on the vehicle data and/or occupant behavior data based on the occupant data to the at least second virtual reality layer, such that at least a part of the state information and/or actions of the vehicle and/or the vehicle occupant is comprised in the mixed reality perceived by the at least second test target. Furthermore, providing the at least second target behavior data then comprises providing the at least second target behavior data to the first vehicle virtual reality layer and/or the first occupant virtual reality layer based on the at least second target data, such that at least a portion of the state information and/or actions of the at least second test target is contained in the mixed reality perceived by the vehicle and/or the mixed reality perceived by the vehicle occupant. Thereby, by the first test target additionally including a vehicle occupant, e.g. a driver, for example in the case of a not fully autonomous vehicle, the vehicle occupant is involved in the test procedure, as the state and/or actions of the first test target during the test procedure may depend on the vehicle occupant. Thus, since providing the first virtual reality layer then comprises providing the first vehicle virtual reality layer to the vehicle and/or providing the first occupant virtual reality layer to an HMD carried by the vehicle occupant, the respective virtual reality layer is provided to the vehicle and/or the vehicle occupant, such that the mixed reality perceived by the vehicle and/or the mixed reality perceived by the vehicle occupant corresponds to the mixed reality perceived by the at least second test target. From the perspective of the vehicle, a first vehicle information set may be superimposed on the first test environment, and/or, from the perspective of the vehicle occupant, a first occupant information set may be superimposed on the first test environment, such that the vehicle and/or the vehicle occupant and the at least second test target experience a similar "reality". The first vehicle virtual reality layer may differ from the first occupant virtual reality layer. It may be noted that the first vehicle information set or the first occupant information set may refer to the first set of information described above. Similarly, it may be noted that the first vehicle virtual reality layer or the first occupant virtual reality layer may refer to the aforementioned first virtual reality layer. Head mounted displays, known in the art, provide virtual reality information so that their user, here the vehicle occupant, may experience mixed reality, i.e., virtual reality overlaying the real world, such as the first physical test environment.
An HMD may throughout be represented by the well-known optical HMD (OHMD), which allows its user, such as the vehicle occupant, to see through it. The term "providing a virtual reality layer to a vehicle" may refer to "providing a virtual reality layer to a virtual processing unit associated with the vehicle", while the term "providing a virtual reality layer to an HMD" may refer to providing the virtual reality layer to a virtual processing unit associated with the HMD. The HMD may support wireless communication, and may further communicate with the aforementioned control server directly and/or via the virtual processing unit associated with the HMD. Furthermore, since deriving the first target data then for example comprises continuously, periodically or intermittently deriving vehicle data from the vehicle, the vehicle data comprising state information and/or actions of the vehicle, and/or deriving occupant data from a first motion capture system carried by and/or visually sensing the vehicle occupant, the occupant data comprising state information and/or actions of the vehicle occupant, information about the current state of the vehicle, about how the vehicle acts, about the current state of the vehicle occupant and/or about how the vehicle occupant acts is monitored substantially continuously. The term "vehicle data" may refer to the above-described "target data", while "occupant data" may throughout refer to "relevant occupant data", "safety-related occupant data" and/or "current occupant data". "State information" may herein additionally be referred to as "safety-related occupant state information" and/or "occupant state information", and "actions" may herein additionally be referred to as "occupant behavior". "State information" and/or "actions" may thus additionally refer to information and/or actions related to occupant geographical position, occupant eye position, occupant head position, occupant hand position, occupant movement, occupant sound, and/or the like. A "motion capture system" may throughout refer to any known motion capture system, for example a motion capture system comprising a plurality of motion sensors (commonly referred to as a motion tracking system), carried, e.g. worn, by a user such as the vehicle occupant. Additionally or alternatively, the motion capture system may throughout refer to one or more known visual sensors, such as cameras, that visually monitor a test target such as the vehicle occupant. Such a visually monitoring motion capture system may be positioned at any fixed and/or mobile location on the vehicle and/or on a test target, e.g. on board the vehicle and/or on the vehicle occupant, and/or within a physical test environment, such as within the first test environment, to monitor the test target (e.g. the vehicle occupant) from a remote location.
Furthermore, since providing the first target behavior data thus comprises continuously, periodically or intermittently providing vehicle behavior data based on the vehicle data and/or occupant behavior data based on the occupant data to the at least second virtual reality layer, such that at least a part of the state information and/or actions of the vehicle and/or the vehicle occupant is contained in the mixed reality perceived by the at least second test target, the virtual reality layer of the second test target incorporates the vehicle and/or the vehicle occupant and, substantially continuously, at least a part of their state and/or actions, so that the second test target experiences virtual copies of the vehicle and/or the vehicle occupant and their state and/or actions via said mixed reality. The at least second test target thus interacts with the virtual copies of the vehicle and/or the vehicle occupant during the test procedure and may thus base the actions and/or behavior upcoming during the test procedure on the behavior of the vehicle and/or the vehicle occupant, i.e. on the vehicle behavior data and/or the occupant behavior data. The term "vehicle behavior data" may throughout refer to "data relating to the state and/or actions of the vehicle", while the term "occupant behavior data" may throughout refer to "data relating to the state and/or actions of the vehicle occupant". Furthermore, since the at least second target behavior data is provided to the first vehicle virtual reality layer and/or the first occupant virtual reality layer, such that at least a part of the state information and/or actions of the at least second test target is contained in the mixed reality perceived by the vehicle and/or the mixed reality perceived by the vehicle occupant, the vehicle virtual reality layer and/or the occupant virtual reality layer incorporates the at least second test target and, substantially continuously, at least a part of its state and/or actions, so that the vehicle and/or the vehicle occupant respectively experiences a virtual copy of the at least second test target and its state and/or actions via the respective mixed reality. The vehicle and/or the vehicle occupant thus interact with the virtual copy of the at least second test target during the test procedure and may thus base the actions and/or behavior upcoming during the test procedure on the behavior of the at least second test target, i.e. on the at least second target behavior data.
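The split of the first target data into vehicle data and occupant data could, purely as an assumed illustration, look like the following; VehicleData, OccupantData and provide_first_target_behavior are invented names, and the occupant pose would in practice come from a worn motion tracking suit or from cameras visually sensing the occupant.

```python
from dataclasses import dataclass


@dataclass
class VehicleData:
    """Illustrative vehicle state information."""
    position: tuple
    speed: float
    brake_lights_on: bool


@dataclass
class OccupantData:
    """Illustrative occupant pose, e.g. from a worn motion capture
    system and/or cameras visually sensing the vehicle occupant."""
    head_position: tuple
    gaze_direction: tuple
    hand_positions: list


class Layer:
    """Minimal stand-in for a virtual reality layer."""
    def __init__(self):
        self.overlays = []


def provide_first_target_behavior(vehicle: VehicleData,
                                  occupant: OccupantData,
                                  second_layer: Layer) -> None:
    """Merge vehicle state and occupant pose into the second target's
    layer, so its mixed reality shows both the car and its driver."""
    second_layer.overlays = [vehicle, occupant]


layer = Layer()
provide_first_target_behavior(
    VehicleData(position=(40.0, 0.0), speed=8.3, brake_lights_on=False),
    OccupantData(head_position=(40.2, 0.0), gaze_direction=(1.0, 0.0),
                 hand_positions=[(40.1, -0.3), (40.1, 0.3)]),
    layer)
```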
Optionally, the second test target includes a vulnerable road user (VRU). Providing the at least second virtual reality layer then includes providing a VRU virtual reality layer to an HMD carried by the VRU, and deriving the at least second target data then includes deriving the second target data from a VRU motion capture system carried by and/or visually sensing the VRU. Thereby, by letting the second test target comprise a VRU, test situations that are normally considered, for example, too stressful, uncomfortable and/or risky for the VRU can now be performed in a safe manner. Thus, high-risk situations involving VRUs, which can generally not be carried out at all, can now be included in the test procedure. Furthermore, since providing the at least second virtual reality layer then comprises providing the VRU virtual reality layer to an HMD carried by the VRU, the mixed reality perceived by the first test target corresponds to the mixed reality perceived by the HMD and/or the VRU. Thus, from the perspective of the HMD and/or VRU, a VRU information set may be superimposed on the second test environment, such that the first test target and/or its user or occupant and the HMD and/or VRU experience a similar "reality". It may be noted that the VRU information set may refer to the above-mentioned second set of information; it may similarly be noted that the VRU virtual reality layer may refer to the aforementioned second virtual reality layer. "VRU" may refer to any vulnerable road user, such as a pedestrian, a pedestrian pushing a baby carriage or the like, a cyclist, a motorcyclist, a Segway rider, and so forth. The HMD carried by the VRU may be similar to the HMD described above. Furthermore, since deriving the at least second target data then comprises deriving the second target data continuously, periodically or intermittently from a VRU motion capture system carried by and/or visually sensing the VRU, information relating to the current state of the VRU and/or information about how the VRU acts is monitored substantially continuously. The term "target data" may additionally refer herein to "VRU data", "relevant VRU data", "safety-related VRU data" and/or "current VRU data". "State information" may additionally be referred to herein as "safety-related VRU state information" and/or "VRU state information", and "actions" may additionally be referred to herein as "VRU behavior". Thus, "state information" and/or "actions" may herein additionally refer to information and/or actions related to VRU geographical position, VRU eye position, VRU head position, VRU hand position, VRU motion, VRU sound, and the like. The "motion capture system" may be similar to the motion capture system described above. Thus, the virtual reality layer of the HMD incorporates the first test target and, substantially continuously, at least a portion of the state and/or actions of the first test target, such that the HMD and/or VRU experiences the virtual copy of the first test target and its state and/or actions via the mixed reality. The HMD and/or VRU thus interacts with the virtual copy of the first test target during the test procedure, and can thus base the actions and/or behavior upcoming during the test procedure on the behavior of the first test target, i.e. on the first target behavior data. The term "target behavior data" may additionally refer herein to "data related to the state and/or actions of a VRU". Furthermore, the virtual reality layer of the first test target thus incorporates the VRU and, substantially continuously, at least a part of its state and/or actions, so that the first test target and/or its user experiences the virtual copy of the VRU and its state and/or actions via the mixed reality. The first test target thus interacts with the virtual copy of the VRU during the test procedure and can thus base the actions and/or behavior upcoming during the test procedure on the behavior of the VRU, i.e. on the second target behavior data.
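On the VRU side, the HMD has to render the virtual copy of the first test target relative to the wearer. The fragment below is a deliberately simplified, assumed illustration of that per-frame step (2-D positions, invented names such as render_hmd_frame); an actual OHMD pipeline would of course involve full 3-D pose tracking and graphics rendering.

```python
def render_hmd_frame(hmd_pose: dict, overlays: list) -> list:
    """Per-frame HMD step (greatly simplified): express every virtual
    copy in the VRU's layer relative to the wearer's tracked position,
    so the copies appear at the right place in the empty test area."""
    hx, hy = hmd_pose["position"]
    frame = []
    for obj in overlays:
        ox, oy = obj["position"]
        frame.append({"id": obj["id"], "relative_position": (ox - hx, oy - hy)})
    return frame


# The VRU stands at (60, -3); the virtual copy of the vehicle is at (40, 0).
print(render_hmd_frame({"position": (60.0, -3.0)},
                       [{"id": "vehicle-31", "position": (40.0, 0.0)}]))
```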
Alternatively, the second test target includes a vehicle (referred to herein as a "second vehicle") and its vehicle occupant. Providing the at least second virtual reality layer then includes providing a second vehicle virtual reality layer to the second vehicle and/or providing a second occupant virtual reality layer to an HMD carried by the vehicle occupant of the second vehicle. Furthermore, providing the first target behavior data then comprises providing the first target behavior data to the second vehicle virtual reality layer and/or the second occupant virtual reality layer based on the first target data, such that at least a portion of the state information and/or actions of the first test target is contained in the mixed reality perceived by the second vehicle and/or the mixed reality perceived by the HMD carried by the vehicle occupant of the second vehicle. Furthermore, deriving the at least second target data comprises deriving second vehicle data from the second vehicle and/or deriving second occupant data from a second motion capture system carried by and/or visually sensing the vehicle occupant of the second vehicle. The second vehicle data comprise state information and/or actions of the second vehicle, and the second occupant data comprise state information and/or actions of the second vehicle occupant. Furthermore, providing the at least second target behavior data comprises providing second vehicle behavior data based on the second vehicle data and/or second occupant behavior data based on the second occupant data to the first virtual reality layer, such that at least a portion of the state information and/or actions of the second vehicle and/or the second occupant is contained in the mixed reality perceived by the first test target. Thereby, by the second test target comprising the second vehicle and its vehicle occupant, e.g. the driver, both are considered involved in the test procedure, as the state and/or actions of the second test target during the test procedure may relate to one or both of the second vehicle and its vehicle occupant. Furthermore, since providing the at least second virtual reality layer then comprises providing the second vehicle virtual reality layer to the second vehicle and/or providing the second occupant virtual reality layer to an HMD carried by the vehicle occupant of the second vehicle, the respective virtual reality layer is provided to the second vehicle and/or the second vehicle occupant, such that the mixed reality perceived by the second vehicle and/or the mixed reality perceived by the second vehicle occupant corresponds to the mixed reality perceived by the first test target. Thus, from the perspective of the second vehicle, a second vehicle information set may be superimposed on the second test environment, and/or, from the perspective of the second vehicle occupant, a second occupant information set may be superimposed on the second test environment, such that the second vehicle and/or the second vehicle occupant and the first test target experience a similar "reality". The second vehicle virtual reality layer may differ from the second occupant virtual reality layer. It should be noted that the second vehicle information set or the second occupant information set may refer to the above-described second set of information. It may similarly be noted that the second vehicle virtual reality layer or the second occupant virtual reality layer may refer to the aforementioned second virtual reality layer.
The HMD carried by the second vehicle occupant may be similar to the HMD described above. Furthermore, since providing the first target behavior data then comprises continuously, periodically or intermittently providing the first target behavior data to the second vehicle virtual reality layer and/or the second occupant virtual reality layer based on the first target data, such that at least a part of the state information and/or actions of the first test target is comprised in the mixed reality perceived by the second vehicle and/or the mixed reality perceived by the HMD carried by the vehicle occupant of the second vehicle, the second vehicle virtual reality layer and/or the second occupant virtual reality layer incorporates the first test target and, substantially continuously, at least a part of its state and/or actions, so that the second vehicle and/or the second vehicle occupant respectively experiences a virtual copy of the first test target and its state and/or actions via the respective mixed reality. The second vehicle and/or the second vehicle occupant thus interact with the virtual copy of the first test target during the test procedure and thus base the actions and/or behavior upcoming during the test procedure on the behavior of the first test target, i.e. on the first target behavior data. Furthermore, since deriving the at least second target data then comprises continuously, periodically or intermittently deriving second vehicle data from the second vehicle, the second vehicle data comprising state information and/or actions of the second vehicle, and/or deriving second occupant data from a second motion capture system carried by and/or visually sensing the vehicle occupant of the second vehicle, the second occupant data comprising state information and/or actions of the second vehicle occupant, information relating to the current state of the second vehicle, to how the second vehicle acts, to the current state of the second vehicle occupant and/or to how the second vehicle occupant acts is monitored substantially continuously. The "motion capture system" may be similar to the motion capture system described above. Furthermore, since providing the at least second target behavior data then comprises continuously, periodically or intermittently providing the second vehicle behavior data based on the second vehicle data and/or the second occupant behavior data based on the second occupant data to the first virtual reality layer, such that at least a part of the state information and/or actions of the second vehicle and/or the second occupant is comprised in the mixed reality perceived by the first test target, the first virtual reality layer incorporates the second vehicle and/or the second vehicle occupant and, substantially continuously, at least a part of their state and/or actions, so that the first test target experiences virtual copies of the second vehicle and/or the second vehicle occupant and their state and/or actions via its mixed reality. The first test target thus interacts with the virtual copies of the second vehicle and/or the second vehicle occupant during the test procedure and thus bases the actions and/or behavior upcoming during the test procedure on the behavior of the second vehicle and/or the second vehicle occupant, i.e. on the second vehicle behavior data and/or the second occupant behavior data.
Optionally, during the test procedure, at least a first auxiliary target acts within the first and/or the at least second physical test environment, the at least first auxiliary target being adapted to communicate directly or indirectly with the first test target and/or the at least second test target. The interactive test system then derives, e.g. continuously, periodically or intermittently, auxiliary target data related to the at least first auxiliary target, the auxiliary target data comprising auxiliary state information and/or actions. The interactive test system then further provides auxiliary target behavior data to the first and/or the at least second virtual reality layer based on the auxiliary target data, such that at least a portion of the state information and/or actions of the at least first auxiliary target is contained in the mixed reality perceived by the first test target and/or the mixed reality perceived by the at least second test target. Thereby, since at least a first auxiliary target, adapted to communicate directly or indirectly with the first test target and/or the at least second test target, acts within the first and/or at least second physical test environment, one or more auxiliary targets (e.g. known traffic light devices, traffic sign devices and/or mobile robots) capable of e.g. wireless communication with the above-mentioned control server may participate in the test procedure. The at least first auxiliary target may further optionally be controlled, for example by said control server. "Mobile robot" may refer to a well-known automated vehicle. Furthermore, since the interactive test system subsequently derives, e.g. continuously, periodically or intermittently, auxiliary target data relating to the at least first auxiliary target, the auxiliary target data comprising state information and/or actions of the auxiliary target, information relating to e.g. the current state of the at least first auxiliary target and/or information relating to how the auxiliary target acts is e.g. substantially continuously monitored. The term "target data" may additionally refer herein to "auxiliary data", "relevant auxiliary data", "safety-related auxiliary data" and/or "current auxiliary data". "State information" may additionally be referred to herein as "safety-related auxiliary state information" and/or "auxiliary state information", and "actions" may additionally be referred to herein as "auxiliary target behavior". "State information" and/or "actions" may therefore additionally refer herein to information relating to auxiliary target geographical position, auxiliary target direction, auxiliary target speed, auxiliary target movement, auxiliary target light color, auxiliary target sound, etc. Furthermore, since the interactive test system also provides auxiliary target behavior data to the first and/or the at least second virtual reality layer, e.g. continuously, periodically or intermittently, based on the auxiliary target data, such that at least a part of the state information and/or actions of the at least first auxiliary target is comprised in the mixed reality perceived by the first test target and/or the mixed reality perceived by the at least second test target, the virtual reality layer of the first test target and/or the virtual reality layer of the at least second test target respectively incorporates, e.g. substantially continuously, the at least first auxiliary target and at least said part of its state and/or actions, so that the first test target and/or the at least second test target respectively experiences a virtual copy of the at least first auxiliary target and its state and/or actions via the respective mixed reality. The first test target and/or the at least second test target thus interact with the virtual copy of the at least first auxiliary target during the test procedure and thus base the actions and/or behavior upcoming during the test procedure on the behavior of the at least first auxiliary target, i.e. on the auxiliary target behavior data. Thus, the first and/or the at least second test target may interact with one or more auxiliary targets throughout the test procedure. The term "target behavior data" may additionally refer herein to "data relating to the state and/or actions of the at least first auxiliary target".
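An auxiliary target such as a traffic-light device would feed its state into both layers in the same way as the test targets feed each other. The following short sketch, with invented names (TrafficLightTarget, provide_auxiliary_behavior), illustrates this under the assumption that the device simply cycles its light color.

```python
import itertools


class TrafficLightTarget:
    """Illustrative auxiliary test target: a traffic-light device whose
    light color is mirrored into both targets' virtual reality layers."""

    def __init__(self, target_id: str = "traffic-light-1"):
        self.target_id = target_id
        self._cycle = itertools.cycle(["red", "green", "yellow"])

    def step(self) -> dict:
        """Advance the light and report auxiliary target behavior data."""
        return {"id": self.target_id, "color": next(self._cycle),
                "position": (60.0, 8.0)}


def provide_auxiliary_behavior(aux_target: TrafficLightTarget,
                               first_overlays: list,
                               second_overlays: list) -> None:
    """Include the auxiliary state in both perceived mixed realities."""
    state = aux_target.step()
    first_overlays.append(state)
    second_overlays.append(state)


first_layer_overlays, second_layer_overlays = [], []
provide_auxiliary_behavior(TrafficLightTarget(),
                           first_layer_overlays, second_layer_overlays)
print(first_layer_overlays, second_layer_overlays)
```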
Optionally, the interactive testing system derives environmental condition data from a cloud service, the environmental condition data comprising one or more environmental conditions associated with the first and/or the at least second physical test environment. The interactive testing system then further provides at least a portion of the environmental condition data to the first test target, the first virtual reality layer, the at least second test target and/or the at least second virtual reality layer, such that at least a portion of the environmental condition data is contained in the mixed reality perceived by the first test target and/or the mixed reality perceived by the at least second test target. Thereby, since the interactive testing system derives from the cloud service environmental condition data comprising one or more environmental conditions related to the first and/or the at least second physical test environment, the test procedure may take into account externally input information regarding, e.g., road conditions or weather conditions related to the first and/or second physical test environment. The term "environmental condition data" may refer to "ambient condition data", "environmental state data", "environmental condition input" and/or "environmental condition information", while environmental conditions "associated with" a physical test environment may refer to environmental conditions "potentially applicable to, valid for, and/or focused on" the physical test environment. "Cloud service" may refer to a well-known cloud service capable of collecting and distributing information from a plurality of its users, such as vehicles and/or vehicle occupants. A cloud service may further refer to a pseudo cloud service (fake cloud service) and/or a crowd-sourced cloud service. Furthermore, since the interactive testing system subsequently also provides the environmental condition data to the first test target, the first virtual reality layer, the at least second test target and/or the at least second virtual reality layer, such that the environmental condition data is comprised in the mixed reality perceived by the first test target and/or the mixed reality perceived by the at least second test target, the virtual reality layer of the first test target and/or the virtual reality layer of the at least second test target may respectively incorporate the environmental condition data. The first test target and/or the at least second test target may thus act taking the environmental condition data into account during the test procedure, irrespective of whether the environmental condition data applies to their own physical test environment, and may thus base the actions and/or behavior upcoming during the test procedure on the environmental condition data. Thus, the first and/or the at least second test target may additionally take the environmental condition data into account when interacting throughout the test procedure. "Deriving environmental condition data" may refer to "deriving environmental condition data by the control server"; similarly, "providing environmental condition data" may refer to "providing environmental condition data by the control server".
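The cloud-derived environmental conditions could be injected into the layers as one more overlay, as in the assumed sketch below; FakeCloud deliberately echoes the "pseudo cloud service" mentioned above, and all field names are illustrative.

```python
class FakeCloud:
    """Stand-in for a (pseudo or crowd-sourced) cloud service."""

    def get_conditions(self) -> dict:
        return {"road": "wet", "visibility_m": 80, "precipitation": "rain"}


def provide_environmental_conditions(cloud, *overlay_lists) -> dict:
    """Derive condition data and include it in every given layer, so both
    mixed realities reflect e.g. a wet road regardless of the real local
    weather at either physical test environment."""
    conditions = cloud.get_conditions()
    for overlays in overlay_lists:
        overlays.append({"environment": conditions})
    return conditions


first_overlays, second_overlays = [], []
provide_environmental_conditions(FakeCloud(), first_overlays, second_overlays)
print(first_overlays)
```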
According to a second aspect of embodiments herein, the object is achieved by an interactive testing system as described above. Similar advantages as those mentioned in the foregoing in relation to the first aspect apply to the second aspect, which is why these advantages are not discussed further. According to a third aspect of embodiments herein, the object is achieved by a computer program product comprising a computer program stored on a computer readable medium or carrier, the computer program comprising computer program code means arranged to cause a computer or processor to perform the method steps described above. Again, similar advantages as those mentioned in the foregoing in relation to the first aspect apply to the third aspect, which is why they are not discussed further.
Drawings
The various aspects of non-limiting embodiments of the present invention, including particular features and advantages thereof, will be readily understood from the following detailed description and the accompanying drawings, in which:
FIG. 1 depicts a schematic diagram of an exemplary interactive testing system, according to an embodiment of the invention;
FIG. 2 shows a schematic diagram of mixed realities that may result from the situation of FIG. 1, according to an exemplary embodiment of the present invention;
FIGS. 3a-c show schematic diagrams of alternative mixed realities according to exemplary embodiments of the invention;
FIG. 4 is a schematic block diagram illustrating an exemplary interactive testing system in accordance with embodiments of the present invention; and
FIG. 5 sets forth a flow chart illustrating an exemplary method of an interactive test system according to embodiments of the present invention.
Detailed Description
Non-limiting embodiments of the present invention will now be described more fully with reference to the accompanying drawings, in which presently preferred embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Like reference numerals refer to like elements throughout, and reference numerals followed by one or more prime symbols refer to elements similar to previously described ones. Dashed lines in some of the blocks in the figures indicate that the corresponding elements or acts are optional rather than mandatory. In the following, in accordance with embodiments herein relating to enabling interaction in a test environment between a first test target comprising a vehicle and at least a second test target, an approach is disclosed that enables safe testing, for instance of high-risk situations involving the vehicle and one or more other test targets.
Referring now to the drawings and in particular to FIG. 1, a schematic diagram of an exemplary interactive testing system 1 according to an embodiment of the invention is shown. The interactive testing system 1, which may at least in part be comprised in one or more control servers 10, is adapted to enable interaction in a test environment 2 between a first test target 3 comprising a vehicle 31 and at least a second test target 4. The at least second test target 4 is adapted to communicate directly or indirectly with the first test target 3, e.g. via the control server 10. During a test procedure, the first test target 3 acts within a first physical test environment 21 and the at least second test target 4 acts within at least a second physical test environment 22 physically separated from the first physical test environment 21. In the illustrated exemplary embodiment, the vehicle 31 is a passenger car adapted to support driving assistance functions, the first physical test environment 21 is represented by an outdoor open test area, and the second physical test environment 22 is represented by another outdoor open test area, physically separated from the first physical test environment 21 by e.g. a shortest distance 23. The at least second test target 4 here comprises a vulnerable road user (VRU) 41 wearing a head-mounted display (HMD) 45. Further shown is an exemplifying motion capture system 5, here a VRU motion capture system 541 of the second physical test environment 22, comprising one or more cameras 51 visually sensing the test target, here the VRU 41. Additionally or alternatively, the motion capture system 5 comprises a wearable motion capture system 52 carried by the test target, here the VRU 41. Furthermore, first target data 6 derived in relation to the first test target 3 is shown, the first target data 6 comprising status information and/or actions of the first test target 3, here the vehicle 31. At least second target data 7 related to the at least second test target 4 is similarly shown, the at least second target data 7 comprising status information and/or actions of the at least second test target 4, here the VRU 41.
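To make the roles of the entities of FIG. 1 concrete, the following Python sketch models test targets and their derived target data as simple data structures. The class and field names are illustrative assumptions only, chosen to mirror the reference numerals, and are not part of the embodiments.

```python
# Illustrative data model for the setup of FIG. 1 (all names assumed).
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TargetData:
    """Status information and/or actions derived for a test target (cf. 6, 7)."""
    position: Tuple[float, float]   # x, y within the local test environment
    heading_deg: float
    speed_mps: float

@dataclass
class TestTarget:
    name: str                 # e.g. "vehicle 31" or "VRU 41"
    environment: str          # physically separated environment (cf. 21, 22)
    latest_data: Optional[TargetData] = None

vehicle = TestTarget("vehicle 31", "first physical test environment 21")
vru = TestTarget("VRU 41", "second physical test environment 22")

# Example data, as could be derived from on-board sensors (cf. 6) or from
# the VRU motion capture system 541 (cf. 7).
vehicle.latest_data = TargetData((0.0, 0.0), 90.0, 8.3)
vru.latest_data = TargetData((40.0, 2.0), 270.0, 1.4)
print(vehicle)
print(vru)
```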
FIG. 2 shows a schematic diagram of exemplifying mixed realities that may result from the situation of FIG. 1, according to an exemplary embodiment of the invention: a mixed reality 210 perceived by the first test target 3, here the vehicle 31 and/or a user thereof, and a mixed reality 220 perceived by the at least second test target 4, here the VRU 41 and/or the HMD 45 carried by the VRU 41. Shown in FIG. 2 are first target behavior data 60 based on the first target data 6 and at least second target behavior data 70 based on the at least second target data 7. Further shown in the mixed reality 220 perceived by the at least second test target 4 is an exemplifying virtual copy 30 of the first test target 3, here a virtual copy 310 of the vehicle 31, and in the mixed reality 210 perceived by the first test target 3 an exemplifying virtual copy 40 of the at least second test target 4, here a virtual copy 410 of the VRU 41.
FIG. 3a shows a schematic diagram of an alternative mixed reality according to an exemplary embodiment of the invention. In the illustrated embodiment, the first test target 3, here comprising the vehicle 31 described above, additionally comprises a vehicle occupant 311 carrying an HMD (not shown). The first physical test environment 21 here comprises, similarly to the second physical test environment 22, a motion capture system 5, here a first motion capture system 53, comprising one or more cameras 51 visually sensing the vehicle occupant 311. Additionally or alternatively, the first motion capture system 53 comprises a wearable motion capture system 52, here carried by the vehicle occupant 311. Also shown are optional vehicle data 61 derived from the vehicle 31 and additional or alternative optional occupant data 62 derived from the first motion capture system 53 carried 52 by the vehicle occupant 311 and/or visually sensing 51 the vehicle occupant 311. The vehicle data 61 comprise status information and/or actions of the vehicle 31, and the occupant data 62 comprise status information and/or actions of the vehicle occupant 311. Further shown in FIG. 3a are optional vehicle behavior data 610 based on the vehicle data 61 and additional or alternative optional occupant behavior data 620 based on the occupant data 62. Additionally shown is how one or more optional action instructions 65, 75 may be provided to one or more of the test targets 3, 4: to the first test target 3, here the vehicle 31, and/or to the at least second test target 4, here the VRU 41. The exemplifying interactive testing system 1 of FIG. 3a further comprises at least a first optional auxiliary target 8, here a traffic light device 81, which auxiliary target 8 is adapted to act within the first physical test environment 21 and/or the second physical test environment 22 during the test procedure. The at least first auxiliary target 8, here acting within the second physical test environment 22, is adapted to communicate directly or indirectly, e.g. via the control server 10, with the first test target 3, here the vehicle 31 and/or the vehicle occupant 311, and/or with the at least second test target 4, here the VRU 41. Further shown are optional auxiliary target data 9 related to the at least first auxiliary target 8, the auxiliary target data 9 comprising status information and/or actions of the auxiliary target 8, here the traffic light device 81. Additionally shown are optional auxiliary target behavior data 90 based on the auxiliary target data 9. It further appears that included in the mixed reality 210 perceived by the first test target 3, here the vehicle 31, its vehicle occupant 311 and/or the HMD carried by the vehicle occupant 311, is an exemplifying virtual copy 80 of the at least first auxiliary target 8, here a virtual copy 810 of the traffic light device 81.
FIG. 3b shows a schematic diagram of a further alternative mixed reality according to an exemplary embodiment of the invention. In the illustrated embodiment, the optional auxiliary target 8 is represented by a robot vehicle 82. The auxiliary target data 9 here comprise status information and/or actions of the robot vehicle 82, and the exemplifying virtual copy 80 of the at least first auxiliary target 8 is here represented by a virtual copy 820 of the robot vehicle 82. FIG. 3b further shows an optional cloud service 11, from which environmental condition data 12 may be derived. The environmental condition data 12 comprise one or more environmental conditions related to the first physical test environment 21 and/or the at least second physical test environment 22.
FIG. 3c shows a schematic diagram of yet a further alternative mixed reality according to an exemplary embodiment of the invention. In the illustrated embodiment, the at least second test target 4, or a further test target, comprises a second vehicle 42 and its vehicle occupant, i.e. a second vehicle occupant 421, the second vehicle occupant 421 carrying an HMD (not shown). The motion capture system 5 of the second physical test environment 22 here comprises a second motion capture system 54, which may comprise one or more cameras 51, here visually sensing the second vehicle occupant 421 and optionally additionally the second vehicle 42. Additionally or alternatively, the second motion capture system 54 comprises a wearable motion capture system 52, here carried by the second vehicle occupant 421. Further shown are optional second vehicle data 71 derived from the second vehicle 42 and additional or alternative optional second occupant data 72 derived from the second motion capture system 54 carried 52 by the second vehicle occupant 421 and/or visually sensing 51 the second vehicle occupant 421. The second vehicle data 71 comprise status information and/or actions of the second vehicle 42, and the second occupant data 72 comprise status information and/or actions of the second vehicle occupant 421. Further shown in FIG. 3c are optional second vehicle behavior data 710 based on the second vehicle data 71 and additional or alternative optional second occupant behavior data 720 based on the second occupant data 72.
FIG. 4 shows a schematic block diagram of an exemplary interactive testing system 1 according to an embodiment of the invention. The interactive testing system 1 comprises a virtual reality layer providing unit 102, a data deriving unit 103 and a behavior data providing unit 104, all described in further detail below. The interactive testing system 1 may further comprise an optional action instruction providing unit 101, an optional interaction behavior determination unit 105, an optional data parameter adjusting unit 106, an optional environmental condition data deriving unit 107 and/or an optional environmental condition providing unit 108, similarly described in further detail below. Furthermore, the embodiments herein for enabling interaction in a test environment 2 between a first test target 3 comprising a vehicle 31 and at least a second test target 4 may be implemented through one or more processors, such as the processor 109, here represented by a CPU, together with computer program code for performing the functions and acts of the embodiments herein, for instance in a single- and/or multi-threaded parallel computing environment for CPUs and/or GPUs ("graphics processing units"). Said program code may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the interactive testing system 1. One such carrier may be in the form of a CD-ROM disc; other data carriers such as memory sticks are, however, also feasible. The computer program code may furthermore be provided as pure program code on a server and downloaded to the interactive testing system 1. The interactive testing system 1 may further comprise a memory 110 comprising one or more memory units; it may additionally or alternatively comprise a hard disk drive, a solid state disk, flash memory, GPU memory or the like. The memory 110 may be arranged to store e.g. information, and further to store data, configurations, schedulings and applications, to perform the methods herein when being executed in the interactive testing system 1. Furthermore, the above-described units 101-108, the processor 109 and/or the memory 110 may for instance be implemented in the one or more control servers 10, in the first test target 3 and/or in the at least second test target 4, for instance in one or more electronic control units (ECUs) thereof, and/or in one or more mobile units which may be on-board, mounted on and/or integrated in the first test target 3 and/or the at least second test target 4. Those skilled in the art will also appreciate that the units 101-108 described above, and further detailed below, may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g. stored in a memory such as the memory 110, that when executed by the one or more processors such as the processor 109 perform as described below. One or more of these processors, as well as the other digital hardware, may be included in a single application-specific integrated circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a system-on-a-chip (SoC).
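As a non-limiting illustration of this block structure, the following Python sketch composes an interactive testing system from named functional units. The attribute names mirror reference numerals 101-108, but the code itself is an assumption made here for illustration, not the patented implementation.

```python
# Sketch of the block structure of FIG. 4 (names are illustrative only):
# the interactive testing system aggregates functional units, each of which
# could equally be realized in an ECU, a control server or a mobile unit.

class Unit:
    def __init__(self, name: str):
        self.name = name
    def __call__(self, *args, **kwargs):
        print(f"[unit {self.name}] invoked")

class InteractiveTestingSystem:
    def __init__(self):
        # Mandatory units (cf. 102, 103, 104).
        self.virtual_reality_layer_providing_unit = Unit("102")
        self.data_deriving_unit = Unit("103")
        self.behavior_data_providing_unit = Unit("104")
        # Optional units (cf. 101, 105, 106, 107, 108).
        self.action_instruction_providing_unit = Unit("101")
        self.interaction_behavior_determination_unit = Unit("105")
        self.data_parameter_adjusting_unit = Unit("106")
        self.environmental_condition_data_deriving_unit = Unit("107")
        self.environmental_condition_providing_unit = Unit("108")

system = InteractiveTestingSystem()
system.virtual_reality_layer_providing_unit()
system.data_deriving_unit()
```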
FIG. 5 sets forth a flow chart illustrating an exemplary method of the interactive testing system 1 according to embodiments of the present invention. The method implemented by the interactive testing system 1 is for enabling interaction in a test environment 2 between a first test target 3 comprising a vehicle 31 and at least a second test target 4 adapted to communicate directly or indirectly with the first test target 3. During the test procedure, the first test target 3 acts within a first physical test environment 21 and the at least second test target 4 acts within at least a second physical test environment 22 physically separated from the first physical test environment 21. The exemplifying method, which may be continuously repeated, comprises the following acts, supported by FIGS. 1-4. The acts may be taken in any suitable order; for instance, acts 1003 and 1005 may be performed simultaneously and/or in alternating order.
Act 1001
In optional act 1001, the interactive testing system 1 may initiate the test procedure, e.g. by means of the action instruction providing unit 101, at least as shown in FIG. 4. Initiating the test procedure may further optionally comprise providing one or more initial action instructions, e.g. by means of the control server 10, to the first test target 3 and/or the at least second test target 4.
Act 1002
In action 1002, the interactive testing system 1 provides (e.g. by the virtual reality layer providing unit 102) a first virtual reality layer associated with the first test environment 21 to the first test target 3 and at least a second virtual reality layer associated with the at least second test environment 22 to the at least second test target 4, such that the mixed reality 210 perceived by the first test target 3 corresponds to the mixed reality 220 perceived by the at least second test target 4, at least as shown in fig. 1, 2 and 4.
Optionally, if the first test target 3 additionally comprises a vehicle occupant 311, providing the first virtual reality layer comprises providing, e.g. by the virtual reality layer providing unit 102, a first vehicle virtual reality layer to the vehicle 31 and/or a first occupant virtual reality layer to a head-mounted display (HMD) carried by the vehicle occupant 311, at least as shown in FIG. 3a.
Further, optionally, if the at least second test target 4, or a further test target, comprises the VRU 41, providing the at least second virtual reality layer comprises providing, e.g. by the virtual reality layer providing unit 102, a VRU virtual reality layer to the HMD 45 carried by the VRU 41, at least as shown in FIG. 2.
Further, optionally, if the at least second test target 4, or a further test target, comprises the second vehicle 42 and its vehicle occupant 421, providing the at least second virtual reality layer comprises providing, e.g. by the virtual reality layer providing unit 102, a second vehicle virtual reality layer to the second vehicle 42 and/or a second occupant virtual reality layer to an HMD carried by the vehicle occupant 421 of the second vehicle 42, at least as shown in FIG. 3c.
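A possible realization of act 1002 is sketched below in Python under the assumption that both virtual reality layers reference one shared virtual scene, which is one way the two perceived mixed realities can be made to correspond. All names here are hypothetical and introduced for illustration only.

```python
# Minimal sketch of act 1002 (all names assumed): each test target gets a
# virtual reality layer tied to its own physical environment, but both
# layers reference one shared virtual scene so that the two mixed
# realities correspond to each other.

class SharedVirtualScene:
    def __init__(self):
        self.objects: dict = {}   # virtual copies keyed by a descriptive name

class VirtualRealityLayer:
    def __init__(self, physical_environment: str, scene: SharedVirtualScene):
        self.physical_environment = physical_environment
        self.scene = scene        # common reference frame for both targets

    def render(self) -> dict:
        # A real layer would composite the virtual copies onto the target's
        # perception (HMD imagery, vehicle displays or sensor feeds); here
        # we simply expose the shared state.
        return dict(self.scene.objects)

scene = SharedVirtualScene()
first_layer = VirtualRealityLayer("first physical test environment 21", scene)
second_layer = VirtualRealityLayer("second physical test environment 22", scene)
scene.objects["virtual copy 410 of VRU 41"] = {"position": (40.0, 2.0)}
print(first_layer.render() == second_layer.render())  # True: corresponding realities
```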
Act 1003
In act 1003, the interactive testing system 1 continuously, periodically or intermittently derives, e.g. by the data deriving unit 103, first target data 6 related to the first test target 3, the first target data 6 comprising status information and/or actions of the first test target 3, at least as shown in FIGS. 1 and 4.
Optionally, if the first test target 3 additionally comprises the vehicle occupant 311, deriving the first target data 6 comprises deriving, e.g. by the data deriving unit 103, vehicle data 61 from the vehicle 31, and/or occupant data 62 from the first motion capture system 53 carried 52 by the vehicle occupant 311 and/or visually sensing 51 the vehicle occupant 311, the vehicle data 61 comprising status information and/or actions of the vehicle 31 and the occupant data 62 comprising status information and/or actions of the vehicle occupant 311, at least as shown in FIG. 3a.
Act 1004
In act 1004, the interactive testing system 1 continuously, periodically or intermittently provides, e.g. by the behavior data providing unit 104, first target behavior data 60 to the at least second virtual reality layer based on the first target data 6, such that at least a part of the status information and/or actions of the first test target 3 is included in the mixed reality 220 perceived by the at least second test target 4, at least as shown in FIGS. 2 and 4.
Optionally, if the first test target 3 additionally comprises the vehicle occupant 311, providing the first target behavior data 60 comprises providing, e.g. by the behavior data providing unit 104, vehicle behavior data 610 based on the vehicle data 61 and/or occupant behavior data 620 based on the occupant data 62 to the at least second virtual reality layer, such that at least a part of the status information and/or actions of the vehicle 31 and/or the vehicle occupant 311 is included in the mixed reality 220 perceived by the at least second test target 4, at least as shown in FIG. 3a.
Further, optionally, if the at least second test target 4, or a further test target, comprises the second vehicle 42 and its vehicle occupant 421, providing the first target behavior data 60 comprises providing the first target behavior data 60 to the second vehicle virtual reality layer and/or the second occupant virtual reality layer based on the first target data 6, such that at least a part of the status information and/or actions of the first test target 3 is included in the mixed reality 220 perceived by the second vehicle 42 and/or the mixed reality 220 perceived by the HMD carried by the vehicle occupant 421 of the second vehicle 42, at least as shown in FIG. 3c.
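The following Python sketch illustrates one iteration of acts 1003-1004 under assumed names: target data is derived from a simulated vehicle data source and projected as behavior data into the at least second virtual reality layer, where it appears as a virtual copy of the vehicle.

```python
# Sketch of acts 1003-1004 (assumed names): target data is derived from the
# first test target and turned into behavior data that updates the virtual
# copy shown in the second test target's mixed reality.

def derive_first_target_data(vehicle_bus: dict) -> dict:
    # Act 1003: e.g. read position/speed from the vehicle (cf. data 6/61).
    return {"position": vehicle_bus["gps"], "speed_mps": vehicle_bus["speed"]}

def provide_first_target_behavior_data(target_data: dict, second_layer: dict) -> None:
    # Act 1004: project the derived status/actions (cf. data 60) into the
    # at least second virtual reality layer as a virtual copy of the vehicle.
    second_layer["virtual copy 310 of vehicle 31"] = target_data

second_virtual_reality_layer: dict = {}
simulated_vehicle_bus = {"gps": (12.5, 3.0), "speed": 8.3}

data = derive_first_target_data(simulated_vehicle_bus)
provide_first_target_behavior_data(data, second_virtual_reality_layer)
print(second_virtual_reality_layer)
```

The symmetric flow of acts 1005-1006 would mirror this sketch, with at least second target data feeding the first virtual reality layer instead.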
Act 1005
In act 1005, the interactive testing system 1 continuously, periodically or intermittently derives, e.g. by the data deriving unit 103, at least second target data 7 related to the at least second test target 4, the at least second target data 7 comprising status information and/or actions of the at least second test target 4, at least as shown in FIGS. 1 and 4.
Optionally, if the at least second test target 4, or a further test target, comprises the VRU 41, deriving the at least second target data 7 comprises deriving, e.g. by the data deriving unit 103, the second target data 7 from the VRU motion capture system 541 carried 52 by the VRU 41 and/or visually sensing 51 the VRU 41, at least as shown in FIG. 1.
Further, optionally, if the at least second test target 4, or a further test target, comprises the second vehicle 42 and its vehicle occupant 421, deriving the at least second target data 7 comprises deriving, e.g. by the data deriving unit 103, second vehicle data 71 from the second vehicle 42, and/or second occupant data 72 from the second motion capture system 54 carried 52 by the second vehicle occupant 421 and/or visually sensing 51 the second vehicle occupant 421, the second vehicle data 71 comprising status information and/or actions of the second vehicle 42 and the second occupant data 72 comprising status information and/or actions of the second vehicle occupant 421, at least as shown in FIG. 3c.
Act 1006
In act 1006, the interactive testing system 1 continuously, periodically or intermittently provides, e.g. by the behavior data providing unit 104, at least second target behavior data 70 to the first virtual reality layer based on the at least second target data 7, such that at least a part of the status information and/or actions of the at least second test target 4 is included in the mixed reality 210 perceived by the first test target 3, at least as shown in FIGS. 2 and 4.
Optionally, if the first test target 3 additionally comprises the vehicle occupant 311, providing the at least second target behavior data 70 comprises providing, e.g. by the behavior data providing unit 104, the at least second target behavior data 70 to the first vehicle virtual reality layer and/or the first occupant virtual reality layer based on the at least second target data 7, such that at least a part of the status information and/or actions of the at least second test target 4 is included in the mixed reality 210 perceived by the vehicle 31 and/or the mixed reality 210 perceived by the vehicle occupant 311, at least as shown in FIG. 3a.
Further, optionally, if the at least second test target 4, or a further test target, comprises the second vehicle 42 and its vehicle occupant 421, providing the at least second target behavior data 70 comprises providing, e.g. by the behavior data providing unit 104, second vehicle behavior data 710 based on the second vehicle data 71 and/or second occupant behavior data 720 based on the second occupant data 72 to the first virtual reality layer, such that at least a part of the status information and/or actions of the second vehicle 42 and/or the second vehicle occupant 421 is included in the mixed reality 210 perceived by the first test target 3, at least as shown in FIG. 3c.
Act 1007
In optional act 1007, at least as shown in FIGS. 1 and 4, the interactive testing system 1 may determine, e.g. by the interaction behavior determination unit 105, an interaction behavior of the first test target 3 and/or the at least second test target 4 based on the derived first target data 6 and/or the derived at least second target data 7.
Act 1008
Optionally following optional act 1007, in optional act 1008, the interactive testing system 1 may provide, e.g. by the action instruction providing unit 101, action instructions 65, 75 to the first test target 3 and/or the at least second test target 4 based on the determined interaction behavior, at least as shown in FIGS. 1, 3a and 4.
Act 1009
Optionally following optional act 1007, in optional act 1009, the interactive testing system 1 may adjust, e.g. by the data parameter adjusting unit 106, data parameters of the first test target 3 and/or the at least second test target 4 related to a driving assistance function, based on the determined interaction behavior, at least as shown in FIGS. 1, 2 and 4.
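The following Python sketch illustrates how acts 1007-1009 could fit together. The interaction metric (distance and a closing flag), the thresholds, and the parameter name aeb_trigger_distance_m are all invented here for illustration and are not prescribed by the embodiments.

```python
# Sketch of acts 1007-1009 (illustrative logic and thresholds only):
# an interaction behavior is determined from both targets' data; based on
# it the system may issue action instructions (cf. 65, 75) or adjust a
# data parameter of a driving assistance function under test.
import math

def determine_interaction_behavior(first: dict, second: dict) -> dict:
    # Act 1007: e.g. distance and closing tendency between vehicle and VRU.
    dx = second["position"][0] - first["position"][0]
    dy = second["position"][1] - first["position"][1]
    distance = math.hypot(dx, dy)
    return {"distance_m": distance,
            "closing": first["speed_mps"] > 0 and distance < 50.0}

def react(behavior: dict, assist_params: dict) -> None:
    if behavior["closing"] and behavior["distance_m"] < 20.0:
        # Act 1008: action instruction to a test target (hypothetical form).
        print("action instruction to VRU 41: step back (cf. 65, 75)")
        # Act 1009: adjust a driving-assistance-related data parameter,
        # e.g. trigger automatic braking earlier for closer interactions.
        assist_params["aeb_trigger_distance_m"] += 5.0

params = {"aeb_trigger_distance_m": 10.0}
behavior = determine_interaction_behavior(
    {"position": (0.0, 0.0), "speed_mps": 8.3},
    {"position": (15.0, 1.0), "speed_mps": 1.4},
)
react(behavior, params)
print(params)
```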
Act 1010
During the test procedure, at least a first auxiliary target 8, 81, 82 may act within the first physical test environment 21 and/or the at least second physical test environment 22, the at least first auxiliary target 8, 81, 82 being adapted to communicate directly or indirectly with the first test target 3 and/or the at least second test target 4. Thus, in optional act 1010, the interactive testing system 1 may continuously, periodically or intermittently derive, e.g. by the data deriving unit 103, auxiliary target data 9 related to the at least first auxiliary target 8, 81, 82, the auxiliary target data 9 comprising status information and/or actions of the auxiliary target 8, 81, 82, at least as shown in FIGS. 3b and 4.
Act 1011
Following optional act 1010, in optional act 1011, the interactive testing system 1 may continuously, periodically or intermittently provide, e.g. by the behavior data providing unit 104, auxiliary target behavior data 90 to the first and/or the at least second virtual reality layer based on the auxiliary target data 9, such that at least a part of the status information and/or actions of the at least first auxiliary target 8, 81, 82 is included in the mixed reality 210 perceived by the first test target 3 and/or the mixed reality 220 perceived by the at least second test target 4, at least as shown in FIGS. 3b and 4.
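As a non-limiting illustration of acts 1010-1011, the following Python sketch lets an assumed traffic-light auxiliary target report its state, which is then mirrored into both virtual reality layers as a virtual copy; all names and the phase sequence are assumptions.

```python
# Sketch of acts 1010-1011 (names assumed): an auxiliary target such as the
# traffic light device 81 reports its state (cf. data 9), which is mirrored
# into one or both virtual reality layers as a virtual copy (cf. 810, 90).
import itertools

class TrafficLightDevice:
    """Auxiliary target acting in one of the physical test environments."""
    def __init__(self):
        self._phases = itertools.cycle(["red", "green", "amber"])
    def derive_state(self) -> dict:           # act 1010 (cf. data 9)
        return {"phase": next(self._phases)}

def provide_auxiliary_behavior_data(state: dict, *layers: dict) -> None:
    # Act 1011 (cf. data 90): include the auxiliary target's state in the
    # mixed realities perceived by the first and/or second test target.
    for layer in layers:
        layer["virtual copy 810 of traffic light 81"] = state

first_layer, second_layer = {}, {}
light = TrafficLightDevice()
for _ in range(2):  # two derivation cycles; the last state wins
    provide_auxiliary_behavior_data(light.derive_state(), first_layer, second_layer)
print(first_layer)
print(second_layer)
```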
Act 1012
In optional act 1012, the interactive testing system 1 may derive, e.g. by the environmental condition data deriving unit 107, environmental condition data 12 from the cloud service 11, the environmental condition data 12 comprising one or more environmental conditions related to the first physical test environment 21 and/or the at least second physical test environment 22, at least as shown in FIGS. 3b and 4.
Act 1013
Following optional act 1012, in optional act 1013, the interactive testing system 1 may provide, e.g. by the environmental condition providing unit 108, the environmental condition data 12 to the first test target 3, the first virtual reality layer, the at least second test target 4 and/or the at least second virtual reality layer, such that the environmental condition data 12 is included in the mixed reality 210 perceived by the first test target 3 and/or the mixed reality 220 perceived by the at least second test target 4, at least as shown in FIG. 3b.
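Finally, the following Python sketch, using only illustrative stand-ins, shows how the mandatory acts might be repeated in one continuous test-procedure loop, with the optional acts interleaved where indicated.

```python
# End-to-end sketch of the repeating test loop (illustrative stand-ins only:
# each virtual reality layer is a plain dict standing in for the rendered
# mixed reality). A real system would perform acts 1003-1006 continuously,
# periodically or intermittently, with optional acts 1007-1013 interleaved.
import time

def run_test_procedure(cycles: int = 3, period_s: float = 0.0) -> None:
    # Act 1002 (once): provide the two virtual reality layers.
    first_virtual_reality_layer: dict = {}
    second_virtual_reality_layer: dict = {}
    for cycle in range(cycles):
        # Acts 1003-1004: first target data into the second mixed reality.
        second_virtual_reality_layer["virtual copy 310 of vehicle 31"] = {"cycle": cycle}
        # Acts 1005-1006: at least second target data into the first mixed reality.
        first_virtual_reality_layer["virtual copy 410 of VRU 41"] = {"cycle": cycle}
        # Optional acts 1007-1013 (interaction behavior, action instructions,
        # auxiliary targets, environmental conditions) would run here.
        time.sleep(period_s)
    print(first_virtual_reality_layer)
    print(second_virtual_reality_layer)

run_test_procedure()
```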
The person skilled in the art realizes that the present invention by no means is limited to the preferred embodiments described above. On the contrary, many modifications and variations are possible within the scope of the appended claims. In the claims, the word "comprising" does not exclude the presence of other elements or steps, and the indefinite article "a" or "an" does not exclude the presence of a plurality.

Claims (15)

1. A method performed by an interactive testing system (1) for enabling interaction under a testing environment (2) between a first test target (3) comprising a vehicle (31) and at least a second test target (4) adapted to communicate directly or indirectly with the first test target (3), wherein during a testing procedure the first test target (3) acts within a first physical testing environment (21) and the at least second test target (4) acts within at least a second physical testing environment (22) physically separated from the first physical testing environment (21), the method comprising:
providing (1002) a first virtual reality layer associated with the first physical test environment (21) to the first test target (3) and at least a second virtual reality layer associated with the at least second physical test environment (22) to the at least second test target (4), such that the mixed reality (210) perceived by the first test target (3) corresponds to the mixed reality (220) perceived by the at least second test target (4); and
continuously, periodically or intermittently:
deriving (1003) first target data (6) relating to the first test target (3), the first target data (6) comprising status information and/or actions of the first test target (3);
providing (1004) first target behavior data (60) to the at least second virtual reality layer based on the first target data (6) such that at least a part of the status information and/or actions of the first test target (3) are comprised in a mixed reality (220) perceived by the at least second test target (4);
deriving (1005) at least second target data (7) related to the at least second test target (4), the at least second target data (7) comprising status information and/or actions of the at least second test target (4); and
providing (1006) at least second target behavior data (70) to the first virtual reality layer based on the at least second target data (7), such that at least a part of the status information and/or actions of the at least second test target (4) is comprised in the mixed reality (210) perceived by the first test target (3).
2. The method of claim 1, further comprising:
determining (1007) an interaction behavior of the first test target (3) and/or the at least second test target (4) based on the derived first target data (6) and/or the derived at least second target data (7); and
based on the interaction behavior:
providing (1008) action instructions (65, 75) to the first test target (3) and/or the at least second test target (4), and/or
adjusting (1009) data parameters of the first test target (3) and/or the at least second test target (4), the data parameters being related to a driving assistance function.
3. The method according to claim 1 or 2, wherein the first test target (3) additionally comprises a vehicle occupant (311);
wherein the providing (1002) the first virtual reality layer comprises providing a first vehicle virtual reality layer to the vehicle (31) and/or providing a first occupant virtual reality layer to a Head Mounted Display (HMD) carried by the vehicle occupant (311);
wherein the deriving (1003) of first target data (6) comprises deriving vehicle data (61) from the vehicle (31), and/or deriving occupant data (62) from a first motion capture system (53) carried (52) by the vehicle occupant (311) and/or visually sensing (51) the vehicle occupant (311), the vehicle data (61) comprising status information and/or actions of the vehicle (31) and the occupant data (62) comprising status information and/or actions of the vehicle occupant (311);
wherein the providing (1004) of first target behavior data (60) comprises providing vehicle behavior data (610) based on the vehicle data (61) and/or occupant behavior data (620) based on the occupant data (62) to the at least second virtual reality layer, such that at least a part of the status information and/or actions of the vehicle (31) and/or the vehicle occupant (311) is comprised in the mixed reality (220) perceived by the at least second test target (4); and
wherein the providing (1006) of at least second target behavior data (70) comprises providing at least second target behavior data (70) to the first vehicle virtual reality layer and/or the first occupant virtual reality layer based on the at least second target data (7), such that at least a part of the status information and/or actions of the at least second test target (4) is comprised in the mixed reality (210) perceived by the vehicle (31) and/or the mixed reality (210) perceived by the vehicle occupant (311).
4. The method according to claim 1 or 2, wherein the second test target (4) or a further test target comprises a vulnerable road user (41);
wherein the providing (1002) of at least a second virtual reality layer comprises providing a vulnerable road user virtual reality layer to an HMD (45) carried by the vulnerable road user (41); and
wherein the deriving (1005) of at least second target data (7) comprises deriving the second target data (7) from a vulnerable road user motion capture system (541) carried (52) by the vulnerable road user (41) and/or visually sensing (51) the vulnerable road user (41).
5. The method according to claim 1 or 2, wherein the second test target (4) or a further test target comprises a second vehicle (42) and its vehicle occupant (421);
wherein the providing (1002) of at least a second virtual reality layer comprises providing a second vehicle virtual reality layer to the second vehicle (42) and/or providing a second occupant virtual reality layer to an HMD carried by the vehicle occupant (421) of the second vehicle (42);
wherein the providing (1004) of first target behavior data (60) comprises providing first target behavior data (60) to the second vehicle virtual reality layer and/or the second occupant virtual reality layer based on the first target data (6), such that at least a part of the status information and/or actions of the first test target (3) is comprised in the mixed reality (220) perceived by the second vehicle (42) and/or the mixed reality (220) perceived by the HMD carried by the vehicle occupant (421) of the second vehicle (42);
wherein the deriving (1005) of at least second target data (7) comprises deriving second vehicle data (71) from the second vehicle (42) and/or deriving second occupant data (72) from a second motion capture system (54) carried (52) by the second vehicle occupant (421) and/or visually sensing (51) the second vehicle occupant (421), the second vehicle data (71) comprising status information and/or actions of the second vehicle (42) and the second occupant data (72) comprising status information and/or actions of the second vehicle occupant (421); and
wherein the providing (1006) of at least second target behavior data (70) comprises providing second vehicle behavior data (710) based on the second vehicle data (71) and/or second occupant behavior data (720) based on the second occupant data (72) to the first virtual reality layer, such that at least a part of the status information and/or actions of the second vehicle (42) and/or the second vehicle occupant (421) is comprised in the mixed reality (210) perceived by the first test target (3).
6. The method of claim 1 or 2, wherein:
during the test procedure, at least a first auxiliary target (8, 81, 82) acts within the first physical test environment (21) and/or the at least second physical test environment (22), the at least first auxiliary target (8, 81, 82) being adapted to communicate directly or indirectly with the first test target (3) and/or the at least second test target (4);
the method further comprises continuously, periodically or intermittently:
deriving (1010) auxiliary target data (9) related to the at least first auxiliary target (8, 81, 82), the auxiliary target data (9) comprising status information and/or actions of the auxiliary target (8, 81, 82); and
providing (1011) auxiliary target behavior data (90) to the first and/or the at least second virtual reality layer based on the auxiliary target data (9), such that at least a part of the status information and/or actions of the at least first auxiliary target (8, 81, 82) is comprised in the mixed reality (210) perceived by the first test target (3) and/or the mixed reality (220) perceived by the at least second test target (4).
7. The method of claim 1 or 2, the method further comprising:
deriving (1012) environmental condition data (12) from a cloud service (11), the environmental condition data (12) comprising one or more environmental conditions related to the first physical test environment (21) and/or the at least second physical test environment (22); and
providing (1013) the environmental condition data (12) to the first test target (3), the first virtual reality layer, the at least second test target (4) and/or the at least second virtual reality layer, such that the environmental condition data (12) is comprised in the mixed reality (210) perceived by the first test target (3) and/or the mixed reality (220) perceived by the at least second test target (4).
8. An interactive testing system (1) adapted to enable interaction in a test environment (2) between a first test target (3) comprising a vehicle (31) and at least a second test target (4) adapted to communicate directly or indirectly with the first test target (3), wherein during a test procedure the first test target (3) acts within a first physical test environment (21) and the at least second test target (4) acts within at least a second physical test environment (22) physically separated from the first physical test environment (21), the interactive testing system (1) comprising:
a virtual reality layer providing unit (102) adapted to provide (1002) a first virtual reality layer associated with the first physical test environment (21) to the first test target (3) and at least a second virtual reality layer associated with the at least second physical test environment (22) to the at least second test target (4), such that the mixed reality (210) perceived by the first test target (3) corresponds to the mixed reality (220) perceived by the at least second test target (4);
a data deriving unit (103) adapted to continuously, periodically or intermittently derive (1003) first target data (6) related to the first test target (3), the first target data (6) comprising status information and/or actions of the first test target (3); and
a behavior data providing unit (104) adapted to continuously, periodically or intermittently provide (1004) first target behavior data (60) to the at least second virtual reality layer based on the first target data (6), such that at least a part of the status information and/or actions of the first test target (3) is comprised in the mixed reality (220) perceived by the at least second test target (4);
wherein the data deriving unit (103) is further adapted to derive (1005) at least second target data (7) related to the at least second test target (4), the at least second target data (7) comprising status information and/or actions of the at least second test target (4); and
wherein the behavior data providing unit (104) is further adapted to continuously, periodically or intermittently provide (1006) at least second target behavior data (70) to the first virtual reality layer based on the at least second target data (7), such that at least a part of the status information and/or actions of the at least second test target (4) is comprised in the mixed reality (210) perceived by the first test target (3).
9. The interactive testing system (1) of claim 8, further comprising:
an interaction behavior determination unit (105) adapted to determine (1007) an interaction behavior of the first test target (3) and/or the at least second test target (4) based on the derived first target data (6) and/or the derived at least second target data (7); and
an action instruction providing unit (101) adapted to provide (1008) action instructions (65, 75) to the first test target (3) and/or the at least second test target (4) based on the interaction behavior, and/or
a data parameter adjusting unit (106) adapted to adjust (1009) data parameters of the first test target (3) and/or the at least second test target (4) based on the interaction behavior, the data parameters being related to a driving assistance function.
10. The interactive testing system (1) according to claim 8 or 9, wherein the first test target (3) additionally comprises a vehicle occupant (311);
wherein the virtual reality layer providing unit (102) is further adapted to provide a first vehicle virtual reality layer to the vehicle (31) and/or a first occupant virtual reality layer to a head-mounted display (HMD) carried by the vehicle occupant (311);
wherein the data deriving unit (103) is further adapted to derive vehicle data (61) from the vehicle (31), and/or occupant data (62) from a first motion capture system (53) carried (52) by the vehicle occupant (311) and/or visually sensing (51) the vehicle occupant (311), the vehicle data (61) comprising status information and/or actions of the vehicle (31) and the occupant data (62) comprising status information and/or actions of the vehicle occupant (311);
wherein the behavior data providing unit (104) is further adapted to provide vehicle behavior data (610) based on the vehicle data (61) and/or occupant behavior data (620) based on the occupant data (62) to the at least second virtual reality layer, such that at least a part of the status information and/or actions of the vehicle (31) and/or the vehicle occupant (311) is comprised in the mixed reality (220) perceived by the at least second test target (4); and
wherein the behavior data providing unit (104) is further adapted to provide at least second target behavior data (70) to the first vehicle virtual reality layer and/or the first occupant virtual reality layer based on the at least second target data (7), such that at least a part of the status information and/or actions of the at least second test target (4) is comprised in the mixed reality (210) perceived by the vehicle (31) and/or the mixed reality (210) perceived by the vehicle occupant (311).
11. The interactive testing system (1) according to claim 8 or 9, wherein the second test target (4) or a further test target comprises a vulnerable road user (41);
wherein the virtual reality layer providing unit (102) is further adapted to provide a vulnerable road user virtual reality layer to an HMD (45) carried by the vulnerable road user (41); and
wherein the data deriving unit (103) is further adapted to derive the second target data (7) from a vulnerable road user motion capture system (541) carried (52) by the vulnerable road user (41) and/or visually sensing (51) the vulnerable road user (41).
12. The interactive testing system (1) according to claim 8 or 9, wherein the second test target (4) or a further test target comprises a second vehicle (42) and its vehicle occupant (421);
wherein the virtual reality layer providing unit (102) is further adapted to provide a second vehicle virtual reality layer to the second vehicle (42) and/or a second occupant virtual reality layer to an HMD carried by the second vehicle occupant (421);
wherein the behavior data providing unit (104) is further adapted to provide first target behavior data (60) to the second vehicle virtual reality layer and/or the second occupant virtual reality layer based on the first target data (6), such that at least a part of the status information and/or actions of the first test target (3) is comprised in the mixed reality (220) perceived by the second vehicle (42) and/or the mixed reality (220) perceived by the HMD carried by the second vehicle occupant (421);
wherein the data deriving unit (103) is further adapted to derive second vehicle data (71) from the second vehicle (42) and/or second occupant data (72) from a second motion capture system (54) carried (52) by the second vehicle occupant (421) and/or visually sensing (51) the second vehicle occupant (421), the second vehicle data (71) comprising status information and/or actions of the second vehicle (42) and the second occupant data (72) comprising status information and/or actions of the second vehicle occupant (421); and
wherein the behavior data providing unit (104) is further adapted to provide second vehicle behavior data (710) based on the second vehicle data (71) and/or second occupant behavior data (720) based on the second occupant data (72) to the first virtual reality layer, such that at least a part of the status information and/or actions of the second vehicle (42) and/or the second vehicle occupant (421) is comprised in the mixed reality (210) perceived by the first test target (3).
13. The interactive testing system (1) according to claim 8 or 9, further comprising at least a first auxiliary target (8, 81, 82) adapted to act within the first physical test environment (21) and/or the at least second physical test environment (22) during the test procedure, the at least first auxiliary target (8, 81, 82) being adapted to communicate directly or indirectly with the first test target (3) and/or the at least second test target (4);
wherein the data deriving unit (103) is further adapted to continuously, periodically or intermittently derive (1010) auxiliary target data (9) related to the at least first auxiliary target (8, 81, 82), the auxiliary target data (9) comprising status information and/or actions of the auxiliary target (8, 81, 82); and
wherein the behavior data providing unit (104) is further adapted to continuously, periodically or intermittently provide (1011) auxiliary target behavior data (90) to the first and/or the at least second virtual reality layer based on the auxiliary target data (9), such that at least a part of the status information and/or actions of the at least first auxiliary target (8, 81, 82) is comprised in the mixed reality (210) perceived by the first test target (3) and/or the mixed reality (220) perceived by the at least second test target (4).
14. The interactive testing system (1) according to claim 8 or 9, further comprising:
an environmental condition data derivation unit (107) adapted to derive (1012) environmental condition data (12) from a cloud service (11), the environmental condition data (12) comprising one or more environmental conditions related to the first physical test environment (21) and/or at least a second physical test environment (22); and
an environmental condition providing unit (108) adapted to provide (1013) the environmental condition data (12) to the first test target (3), the first virtual reality layer, the at least second test target (4) and/or the at least second virtual reality layer, such that the environmental condition data (12) is comprised in a mixed reality (210) perceived by the first test target (3) and/or a mixed reality (220) perceived by the at least second test target (4).
15. A computer readable medium embodying a computer program, the computer program comprising computer program code means arranged to cause a computer or processor to perform the method steps according to any of claims 1-7.
CN201710142119.6A 2016-03-18 2017-03-10 Method and system for enabling interaction in a test environment Active CN107199966B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP16161115.7A EP3220233B1 (en) 2016-03-18 2016-03-18 Method and system for enabling interaction in a test environment
EP16161115.7 2016-03-18

Publications (2)

Publication Number Publication Date
CN107199966A CN107199966A (en) 2017-09-26
CN107199966B true CN107199966B (en) 2021-01-22

Family

ID=55862510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710142119.6A Active CN107199966B (en) 2016-03-18 2017-03-10 Method and system for enabling interaction in a test environment

Country Status (3)

Country Link
US (1) US10444826B2 (en)
EP (1) EP3220233B1 (en)
CN (1) CN107199966B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016219031B4 (en) * 2016-09-30 2024-04-11 Ford Global Technologies, Llc Method and device for testing a driver assistance system
US11372744B1 (en) * 2017-03-31 2022-06-28 Headspin, Inc. System for identifying issues during testing of applications
US11954651B2 (en) * 2018-03-19 2024-04-09 Toyota Jidosha Kabushiki Kaisha Sensor-based digital twin system for vehicular analysis
CN112009395A (en) * 2019-05-28 2020-12-01 北京车和家信息技术有限公司 Interaction control method, vehicle-mounted terminal and vehicle
CN112150885B (en) * 2019-06-27 2022-05-17 统域机器人(深圳)有限公司 Cockpit system based on mixed reality and scene construction method
CN113038114B (en) * 2021-02-01 2022-08-16 中国船舶重工集团公司第七0九研究所 AR simulation system and method based on human visual characteristics
CN113192381B (en) * 2021-05-11 2023-07-28 上海西井科技股份有限公司 Hybrid scene-based driving simulation method, system, equipment and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE2703025C3 (en) * 1977-01-26 1981-04-02 Reiner Dr.-Ing. 5270 Gummersbach Foerst Driving simulator
US7021937B2 (en) * 2000-04-14 2006-04-04 Viretek Race car simulator
US6950788B2 (en) 2000-09-27 2005-09-27 Ardeshir Faghri Computer-implemented system and method for simulating motor vehicle and bicycle traffic
US20020052724A1 (en) * 2000-10-23 2002-05-02 Sheridan Thomas B. Hybrid vehicle operations simulator
CN101417637B (en) * 2008-03-14 2012-09-05 北京理工大学 Communications system for pure electric motor coach power cell management system and management method thereof
EP2543552B1 (en) * 2011-07-04 2020-05-06 Veoneer Sweden AB A vehicle safety system
EP2562910B1 (en) * 2011-08-25 2018-07-11 Volvo Car Corporation Multi battery system for start/stop
CN102495959A (en) * 2011-12-05 2012-06-13 无锡智感星际科技有限公司 Augmented reality (AR) platform system based on position mapping and application method
JP2014174447A (en) * 2013-03-12 2014-09-22 Japan Automobile Research Institute Vehicle dangerous scene reproducer, and method of use thereof
US9547173B2 (en) * 2013-10-03 2017-01-17 Honda Motor Co., Ltd. System and method for dynamic in-vehicle virtual reality
US9755848B2 (en) * 2014-05-19 2017-09-05 Richard Matthew Cieszkowski, III System and method for simulating a user presence

Also Published As

Publication number Publication date
US20170269681A1 (en) 2017-09-21
EP3220233B1 (en) 2020-11-04
CN107199966A (en) 2017-09-26
US10444826B2 (en) 2019-10-15
EP3220233A1 (en) 2017-09-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant