EP3693943A1 - Method for assisting a person in a dynamic environment and corresponding system - Google Patents

Method for assisting a person in a dynamic environment and corresponding system

Info

Publication number
EP3693943A1
EP3693943A1 EP19155614.1A EP19155614A EP3693943A1 EP 3693943 A1 EP3693943 A1 EP 3693943A1 EP 19155614 A EP19155614 A EP 19155614A EP 3693943 A1 EP3693943 A1 EP 3693943A1
Authority
EP
European Patent Office
Prior art keywords
event
person
occurrence
entities
hypothetical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP19155614.1A
Other languages
English (en)
French (fr)
Other versions
EP3693943B1 (de)
Inventor
Christiane WIEBEL-HERBOTH
Matti Krüger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honda Research Institute Europe GmbH
Original Assignee
Honda Research Institute Europe GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honda Research Institute Europe GmbH filed Critical Honda Research Institute Europe GmbH
Priority to EP19155614.1A priority Critical patent/EP3693943B1/de
Priority to JP2020005151A priority patent/JP7071413B2/ja
Publication of EP3693943A1 publication Critical patent/EP3693943A1/de
Application granted granted Critical
Publication of EP3693943B1 publication Critical patent/EP3693943B1/de
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems
    • G08G1/161: Decentralised systems, e.g. inter-vehicle communication
    • G08G1/163: Decentralised systems, e.g. inter-vehicle communication involving continuous checking
    • G08G1/165: Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • G08G1/166: Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes

Definitions

  • the invention regards a method and a system for assisting a person in acting in a dynamic environment.
  • the invention also concerns the area of human-machine interfaces, in particular for assistance systems operating in a dynamic environment, for example a traffic environment in the automotive, nautical or aviation domain.
  • An assistance system offers the advantage that information provided by a system capable of perceiving the environment of a person or a vehicle does not need to be perceived directly by the person or vehicle driver. In many cases, such assistance systems also filter information with respect to its importance.
  • a traffic situation is only one example where it is desirable to assist a person in perceiving relevant or important aspects of his or her environment and in filtering information. It is evident that an assistance system is also suitable for a person navigating a boat or a pilot operating an air vehicle, or, more generally, any other person who has to operate in a dynamic environment. Most assistance systems analyze a scene based on sensor data acquired by physical sensors. The assistance systems assist, for example, a vehicle driver based on their scene analysis by presenting warnings, making suggestions to the driver on how to act in the current traffic situation, or even by performing vehicle control actions for autonomous or semi-autonomous driving.
  • a known example of such an assistance system is a blind-spot surveillance system that observes an area of the vehicle's environment which is usually not actively observed, or not observable at all, by a vehicle driver who predominantly focuses on the area in front of the ego-vehicle.
  • in such a case, a warning will be output to the driver.
  • vibration of a steering wheel of the ego-vehicle is used to stimulate the driver.
  • a sensory capability of a driver which is not actively used to perceive the traffic environment can be used to provide additional information on the traffic environment, which is in turn then used by the driver for improved assessment of the entire traffic situation.
  • the driver will thus be alerted to another vehicle driving in the blind spot and can quickly have a look to gain full knowledge of a situation of which he was previously unaware.
  • the object of the present invention is to assist a person in judging a situation in a dynamic environment by providing the person with easy to recognize information about potential future events relating to task-relevant entities.
  • the method according to the first aspect and the system according to the second aspect obtain information on actual states of at least two entities in a common environment of these two entities for assisting a person in assessing a dynamic environment.
  • the environment of the entities is physically sensed by one or a plurality of sensors.
  • a first future state for each of the entities is calculated (predicted) by a processor.
  • the first future state is a state that develops from the current, actual state under the assumption that no action is taken, i.e. that none of the entities involved changes its behavior.
  • At least one of a time to event, a position of occurrence of the first event relative to the person or to a predetermined entity associated with the person, a direction to a current location of an entity involved in the first event and a probability of occurrence of the first event is calculated based on the predicted future states.
  • a second future state for at least one of the entities is predicted.
  • the second future state is predicted based on a hypothetical state of at least one of the entities, wherein a hypothetical state is defined as deviating from a current actual state of the respective entity in at least one parameter.
  • This at least one parameter is suitable to cause another evolvement of the situation.
  • Typical parameters that may be altered to generate a hypothetical state are velocity, position, acceleration and direction.
  • At least one of a time to event for a corresponding second event involving the at least two entities, a position of occurrence of the corresponding second event relative to the person or to a predetermined entity associated with the person, a direction to a current location of an entity involved in the second event and a probability of occurrence of the second event is calculated based on the second future state.
  • the processor then generates a signal for driving at least one actuator, wherein the signal is indicative of information which encodes at least one of the position of the first event, the time to event for the first event, the direction to a current location of an entity involved in the first event and the probability of occurrence of the first event, and further information which encodes at least one of the position of the second event, the time to event for the second event, the direction to a current location of an entity involved in the second event and the probability of occurrence of the second event.
  • the at least one actuator causes a stimulation by emitting stimuli perceivable by the person through his or her perceptual capabilities.
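  • As a purely illustrative sketch of this prediction step (not the claimed implementation), the following Python code extrapolates the actual states of point-like entities with constant velocity, derives the first (actual) event with its time to event, position, direction and a crude probability, and then repeats the prediction for a hypothetical state in which one parameter of the ego-vehicle, here a small lateral velocity representing a lane change, has been altered. All names, numbers and the closest-approach heuristic are assumptions made for this example.

```python
# Illustrative sketch only: constant-velocity extrapolation of the "actual" states and a
# single altered-parameter "hypothetical" state. Entity, predict_event and the lane-change
# offset are assumptions made for this example, not the patented implementation.
from dataclasses import dataclass, replace
import math
import numpy as np


@dataclass
class Entity:
    position: np.ndarray  # current position [m]
    velocity: np.ndarray  # current velocity [m/s]


def predict_event(ego: Entity, other: Entity, horizon=10.0, collision_radius=2.0):
    """Predict a contact event under the assumption that no entity changes its behavior."""
    r = other.position - ego.position            # relative position
    v = other.velocity - ego.velocity            # relative velocity
    if float(np.dot(v, v)) < 1e-9:
        return None                              # no relative motion, no predicted contact
    t_star = -float(np.dot(r, v)) / float(np.dot(v, v))  # time of closest approach
    if not 0.0 <= t_star <= horizon:
        return None
    miss = float(np.linalg.norm(r + v * t_star))          # distance at closest approach
    if miss > collision_radius:
        return None
    return {
        "time_to_event": t_star,                            # TTE
        "position": ego.position + ego.velocity * t_star,   # where the event would occur
        "direction": math.atan2(float(r[1]), float(r[0])),  # towards the other entity's current location
        "probability": 1.0 - miss / collision_radius,       # crude heuristic, not a calibrated value
    }


# Actual states: the ego-vehicle follows a slower predecessor; a faster vehicle is on the left lane.
ego = Entity(np.array([0.0, 0.0]), np.array([30.0, 0.0]))
predecessor = Entity(np.array([40.0, 0.0]), np.array([22.0, 0.0]))
vehicle_left = Entity(np.array([-30.0, 3.5]), np.array([34.3, 0.0]))

actual_event = predict_event(ego, predecessor)  # first (actual) event

# Hypothetical state: one parameter deviates from the actual state, e.g. a lateral velocity
# component representing a lane change of the ego-vehicle to the left.
ego_lane_change = replace(ego, velocity=np.array([30.0, 0.5]))
hypothetical_event = predict_event(ego_lane_change, vehicle_left)  # second (hypothetical) event

print("actual:", actual_event)
print("hypothetical:", hypothetical_event)
```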
  • the method and system according to the invention have the advantage that the person who acts in the dynamic environment is informed not only about a first event that will occur if, starting from the current situation and the actual states of the entities, no change in behavior occurs, but also about a second event that might occur in case at least one of the entities involved in the current situation changes its behavior.
  • Such behavior change is reflected by at least one hypothetical state of an involved entity.
  • Such hypothetical state can be generated by changing a parameter of the actual state.
  • This approach corresponds to the considerations made, for example, by a vehicle driver who recognizes that there is a chance that one of the vehicles in his environment may change lanes in order to overtake the preceding vehicle.
  • The term "actual" is hereinafter used to denote a predicted event or state that is predicted under the assumption that task-relevant parameters remain unchanged when calculating the prediction.
  • The term "hypothetical" will hereinafter be used to denote the second state and second events; it differs insofar as a predicted trajectory of the predetermined entity or the person in the hypothetical future state differs from the predicted trajectory in the "actual" future state.
  • Such possible changes may manifest themselves in one or more changes in relevant parameters in the perceived environment.
  • Such changed parameter can for example be a lane change of an entity, in particular another vehicle or the ego-vehicle in a traffic scene.
  • the person operating in the dynamic environment is not only informed about the existence and current state of an entity in his environment; actual and hypothetical future states of the involved entities are also predicted, so that the analysis of the current situation is performed to a large extent by the system.
  • the invention therefore supports the person in understanding potential consequences of sufficiently probable hypothetical situations in the environment. This may lead to an improved ability of the person to act appropriately in the currently perceived situation.
  • since the system not only interpolates the current states of the involved entities but also predicts hypothetical events, the information provided by generating a stimulus for the person corresponds to information from a situation estimation that otherwise would have to be made by the person himself or herself.
  • the time to event is encoded such that the shorter the time to event, the higher the saliency of the stimuli.
  • the position of occurrence is encoded such that the closer the position of occurrence, the higher the saliency of the stimuli.
  • the probability of occurrence is encoded such that the higher the probability of occurrence, the higher the saliency of the stimuli.
  • alternatively, the position of occurrence may be encoded such that the more distant the position of occurrence, the higher the saliency of the stimuli.
  • Encoding the time to event into the information conveyed by the signal via the saliency of the stimulus provides the advantage that, without any individual consideration of the situation of his own, the person directly obtains information about the urgency of the predicted event as well as of the hypothetical event. This may advantageously improve his reaction to the current scene in the dynamic environment.
  • the saliency of the stimuli corresponding to hypothetical events is scaled such that their maximum saliency is equal to or smaller than the saliency of the stimuli corresponding to the actual event.
  • Scaling may also be chosen according to event probability.
  • the saliency may also be scaled according to an event's probability. In such a case, the saliency of the hypothetical event would be stronger than the saliency of the actual event. Scaling the maximum saliency corresponding to hypothetical events relative to the saliency of the actual event prevents the high saliency of the hypothetical event from masking the stimulus associated with the actual event.
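  • A minimal sketch of such a scaling rule follows, assuming a simple linear mapping from TTE to saliency; the cap ratio and the optional probability weighting are illustrative choices, not values taken from the description.

```python
# Illustrative saliency scaling, not the patented scheme: the saliency of the stimulus
# for the hypothetical event can be capped so that it never masks the stimulus for the
# actual event, or alternatively weighted by the estimated probability of occurrence.
def saliency_from_tte(tte: float, tte_max: float = 10.0) -> float:
    """Map a time to event to a saliency in [0, 1]; the shorter the TTE, the stronger."""
    return max(0.0, min(1.0, 1.0 - tte / tte_max))


def hypothetical_saliency(tte_hyp, tte_act, probability=None, cap_to_actual=True, cap_ratio=1.0):
    s_act = saliency_from_tte(tte_act)
    s_hyp = saliency_from_tte(tte_hyp)
    if probability is not None:
        s_hyp *= probability                   # alternative: weight by event probability
    if cap_to_actual:
        s_hyp = min(s_hyp, cap_ratio * s_act)  # never exceed the actual-event saliency
    return s_hyp


print(hypothetical_saliency(tte_hyp=4.0, tte_act=5.0))                       # capped at 0.5
print(hypothetical_saliency(tte_hyp=4.0, tte_act=5.0, probability=0.3,
                            cap_to_actual=False))                            # 0.18
```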
  • the signal drives a plurality of actuators, such that different stimuli can be identified to indicate different events.
  • a plurality of different actuators for providing different stimulations may be used. Using different actuators, each one associated with only one of the actual or hypothetical event, avoids that the person confuses information that is presented simultaneously to the person. Thus, the person who is informed about at least two events, namely one actual event and additionally a hypothetical event, will be aware which one is the predicted hypothetical or actual event.
  • the invention enables the person to focus on the event, which is more likely to occur, because the actual event assumes that none of the involved entities changes its behavior.
  • generating the stimulation corresponding to a hypothetical event is suppressed unless the respective hypothetical event is judged to be relevant.
  • Modern assistance systems may generate a plurality of different signals and stimuli in order to assist a vehicle user. However, if all these signals and stimuli for informing the vehicle driver are output unconditionally, the driver might be overwhelmed by the amount of presented information. Filtering all the information for relevant information while simultaneously still concentrating on the traffic in the environment of the ego-vehicle is almost impossible. Consequently, an adverse effect is achieved, which is to be avoided.
  • information is provided to the person only in case that such information is considered to be relevant.
  • a criterion for judging a hypothetical event to be relevant is set dependent on the person's preferences and/or the person's situation assessment capabilities. This ensures that only helpful information is provided to the person. This is not only an effective way to assist in assessing a dynamic situation, but also acceptance of the assistance system is improved. Assistance systems that are accepted by their users will finally contribute to improving overall traffic safety. This results from a tendency of humans to switch off a system rather than to be annoyed by unwanted or inappropriate presentation of information.
  • a criterion for judging that the hypothetical event is relevant is set dependent on the encountered situation.
  • information particularly relevant for the person in the encountered situation can be provided. This makes it possible to present information on particularly dangerous or important hypothetical events which can be identified from an analysis of the encountered situation.
  • the prediction of the hypothetical future states may be based on an analysis of the encountered situation.
  • the prediction of the hypothetical future states can be based on the person's behavior and/or the person's operation of the entity. Taking into consideration, for example, a person's past behavior and/or past operation of his own entity increases the probability that a particular predicted future behavior will indeed occur. Thus, calculating the future states on this basis yields a high probability of occurrence for the calculated future states, and accordingly the relevance of the corresponding hypothetical event is very high.
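  • The following is a minimal, purely illustrative sketch of such a personalized relevance criterion: the likelihood that this particular driver performs a lane change in comparable situations is estimated from logged history and used to decide whether the hypothetical event is presented. The data format and the threshold are assumptions for the example.

```python
# Purely illustrative: estimate how likely this driver is to change lanes when faster
# than the predecessor, based on logged (was_faster, did_change_lane) observations, and
# use the estimate as a relevance criterion for presenting the hypothetical event.
def lane_change_likelihood(history):
    relevant = [changed for faster, changed in history if faster]
    if not relevant:
        return 0.5  # no personal data yet: fall back to a neutral prior (assumed value)
    return sum(relevant) / len(relevant)


def hypothetical_event_is_relevant(history, threshold=0.4):
    return lane_change_likelihood(history) >= threshold


experienced = [(True, True), (True, True), (True, False), (False, False)]  # overtakes often
cautious = [(True, False), (True, False), (True, False)]                   # rarely overtakes

print(hypothetical_event_is_relevant(experienced))  # True  -> present the hypothetical event
print(hypothetical_event_is_relevant(cautious))     # False -> suppress it
```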
  • Figure 1 shows a simplified two-dimensional example of a scenario with moving entities and visual representations of directional time to event (TTE) signals that have been determined for the scenario. Based on these signals, a stimulation of the human body is performed.
  • the entity of interest relative to which directional TTEs are determined is represented by a dark circle.
  • White circles represent other relevant entities in the environment, for example other vehicles in a traffic scenario.
  • the entity of interest may be any entity for which a relative position of an event, in particular a collision between entities is estimated.
  • one of the entities that are involved in the event is the entity of interest.
  • the event is a collision between an ego-vehicle and one of the other traffic objects.
  • Events not directly involving the ego-vehicle may also be regarded, for example a collision between two vehicles in front of the ego-vehicle. This may be highly relevant for the ego-vehicle driver as well, because the crashed cars may block his lane and other vehicles may brake sharply. For the following description, however, it is always assumed that the event involves the ego-vehicle and at least one other entity or traffic participant, and the event is referred to as a collision.
  • the direction of an outgoing arrow represents the moving direction of the attached entity and the arrow-length represents the velocity of movement into that direction.
  • the orientation of an incoming arrow represents the direction of the predicted future contact and the magnitude of the arrow represents the signal component encoding the TTE. It should be understood in a reciprocal manner, such that a long arrow represents a short TTE and therefore a signal with high saliency, a short arrow a long TTE, and no arrow an infinite TTE or a TTE above a threshold.
  • the basis for a stimulation of a person is information included in the signal which is generated by a processor and which, after being appropriately adapted to the actuator's input characteristics by a driver, is converted by the actuator into a stimulus for the person.
  • Actuators can operate using different modalities for generating stimulation perceivable by the person. In order to avoid that only one specific stimulation is encompassed, explanation of the figures refers to the signals, but not to a specific type of actuator.
  • the two-dimensional example illustration of a scenario shown in figure 1 shows that a signal causing multiple stimulations may be created when collisions with multiple entities are predicted.
  • the signal is indicative of the times to event (or TTE) with respect to the person to whom such an event (here: a collision) relates.
  • the representation shows five entities that move in a common environment.
  • the dark circle moves along the same path as two white circles on the left side but their velocities differ in such a way that a collision with the preceding and the succeeding entities would occur at approximately the same time.
  • one white circle (upper right) moves with a relatively high velocity towards a future location of the dark one, which creates another possible future collision.
  • figure 1 only serves for explaining the basic principle of the present invention. Thus, it is limited to explaining only the stimulus that is generated for an actual event.
  • An actual event is an event that assumes that certain dimensions, aspects or parameters of a current state of the involved entities do not change. For example, a vehicle's trajectory or speed remain unchanged, while other parameters of the vehicle such as an absolute location (position) in the environment do change.
  • the left side of figure 2 for example shows a top view of a road having two lanes on which vehicles drive in the same direction. Again, the arrows connected to the different vehicles illustrate direction and velocity of movement of the vehicle.
  • the ego-vehicle E is driving on the right lane. On the right lane it currently follows a vehicle A that drives slower than the ego-vehicle E. On the left lane, still behind the ego-vehicle E, a further vehicle B is driving, but with a velocity which is significantly higher than the velocity of the ego-vehicle E.
  • the corresponding signal is shown as a solid arrow pointing to the front of the ego-vehicle E.
  • an additional signal is generated which is depicted as a dashed arrow that is directed to the rear left of the ego-vehicle E.
  • the signal corresponds to a hypothetical event which can be predicted by adapting the parameter settings of the scenario thereby creating a hypothetical scenario. Such adaptation of the parameter settings to hypothetical parameter settings of the scenario is made in addition to predicting actual events.
  • the hypothetical scenario underlying the predicted hypothetical event, which results in the dashed arrow, would occur, for example, if the ego-vehicle E moved to the left lane.
  • the hypothetical situation is a consequence of a change in behavior of the ego-vehicle E.
  • Such a change of behavior may also be predicted, because it is evident that the ego-vehicle E is faster than its predecessor.
  • There are systems on the market that can predict such a change in behavior and will thus predict with a certain probability that the ego-vehicle E will change its lane. Starting from such a potential hypothetical lane change, the system will then estimate, for this hypothetical behavior, the respective time to event and/or position of the event.
  • the dashed arrow shows the point of collision with respect to the ego-vehicle E but also encodes, by its length, the time to the hypothetical event.
  • The embodiment of figure 2 is discussed with reference to a point of collision as an example of an event.
  • Another example would be a predicted event, wherein the event involves another entity, for example another vehicle.
  • the other vehicle then represents a source for the predicted event.
  • the other vehicle or source is located in a certain direction from the ego-vehicle E.
  • a signal provided to the person operating the ego-vehicle E may also include information encoding a direction to the source or vehicle, with which the ego-vehicle E would collide if the hypothetical event would occur.
  • the stimuli for the actual event and for the hypothetical event are output using different actuators.
  • one actuator or set of actuators could be arranged in the seat belt for outputting the stimulation encoding direction and urgency of an actual event.
  • a further actuator or set of actuators could be provided in the steering wheel in order to provide information on a hypothetical event.
  • Using the steering wheel for providing information on a hypothetical event is particularly preferred, as the steering wheel is closely associated with an action. The driver will directly associate a hypothetical event with his/her driving action, i.e., the driver will understand the message: "If you turn the wheel left, this is going to happen".
  • output stimulations may be for instance visual, tactile, auditory, olfactory, vestibular or the like.
  • the time to event, in particular the time to collision, shall be presented to the ego-vehicle driver.
  • the saliency of the stimulation is inversely proportional to the time to event TTE.
  • the saliency of the stimulus may be influenced by an intensity of the stimulus perceivable by the driver of the ego-vehicle E. Additionally or alternatively, the saliency of the stimulus may also be influenced by other modalities of the stimulus, for example a frequency of the stimulus.
  • Another example of a traffic scenario, illustrated in top view, is shown in figure 3.
  • the ego-vehicle E drives with a velocity higher than the velocity of its predecessor and will consequently collide with the predecessor A at a certain point in time in the future.
  • a further vehicle B drives on the left lane with a speed similar to the predecessor's speed.
  • the actual event would thus be a collision between the ego-vehicle E and its predecessor A on the right lane.
  • the hypothetical event assuming a lane change of the ego-vehicle E from the right lane to the left lane, would be a collision between the ego-vehicle E and, in the actual scenario, the vehicle B driving on the left lane.
  • Dashed lines in figure 3 represent the outlines of the ego-vehicle E with an altered hypothetical geometry, such that it extends to the left lane as well. Since the vehicle B on the left lane drives at the same speed but a small distance ahead of the ego-vehicle's predecessor A, this hypothetical event is predicted to occur slightly later than the predicted actual event.
  • Figure 3 shows four different possibilities for informing the driver. All four options have in common that the actual event, which corresponds to the signal indicated as a solid arrow, is output with a certain saliency indicating the time to the actual event and at a position which corresponds to the position of collision of the ego-vehicle E and its predecessor A.
  • the arrow may point towards a current direction of the source or origin of the actual event.
  • the solid arrow points towards the front of the ego-vehicle E in all four cases, which are marked as a), b), c) and d).
  • Parts a) and b) of figure 3 show the point of collision of the hypothetical event at the front left of the ego-vehicle E. This is one possible way to indicate that, having in mind the current position of the ego-vehicle E, the vehicle B that is also involved in the hypothetical collision is currently to the left of the ego-vehicle E.
  • the dashed arrows could also point to the front of the ego-vehicle E, and thus the output stimulus would indicate the same position as for the actual event. The reason is that the hypothetical event only occurs if the ego-vehicle E changes to the left lane, as indicated by the dashed-line silhouette of the ego-vehicle E in the scenario at the left side of figure 3.
  • the dashed arrow is only slightly shorter than the solid arrow indicating the time to collision of the actual event, and consequently the saliency of the corresponding output stimulus is only slightly lower.
  • the time to collision of the actual event and the hypothetical event thus are encoded using the same scale for the signal saliency.
  • the signal saliency of the hypothetical event is scaled so that the resulting saliency is reduced. This avoids an unwanted masking of the information on the actual event, which may be considered as having a higher occurrence probability.
  • Figure 3 illustrates this by reducing the length of the dashed arrow.
  • the hypothetical event may even have a higher occurrence probability than the predicted actual event.
  • the saliency of the signal corresponding to a hypothetical event will always be scaled in such a way that the maximum saliency of the hypothetical event is always smaller than the saliency for an actual event.
  • the saliency for an actual event may be defined when setting up the system or can be adjusted by the individual driver of the ego-vehicle E. If referring to a time to collision as an example for a time to event TTE, a threshold can be defined for the time interval such that a collision that might occur at a point in time lying further in the future will not be notified to the ego-vehicle driver at all.
  • the ego-vehicle driver may be informed about a hypothetical event by stimulating the driver corresponding to the position that indicates the direction of the collision relative to the actual location of the ego-vehicle E.
  • options c) and d) show stimulating the driver at positions corresponding to the direction of the collision relative to the hypothetical location of the ego-vehicle E or relative to the hypothetical vehicle geometry in figure 3 .
  • the position of the stimulation and direction of the event could be indicated through a map that may contain coordinates for locations outside the actual vehicle boundaries.
  • the map of one embodiment is egocentric with respect to the ego-vehicle E or the driver of the ego-vehicle E. It is to be noted that the term map can here also refer to non-visual maps such as tactile or auditory maps, where certain areas could be mapped to outside coordinates, as well as multimodal maps which may be read through multiple senses.
  • Figure 4 shows another example of the road having two lanes on which three vehicles, including the ego-vehicle E, currently drive. Again, there are two vehicles A, B in front of the ego-vehicle, both driving slower than the ego-vehicle E but at similar speeds. The actual situation corresponds to the situation of figure 3. But in figure 3 the hypothetical situation is generated by extending the geometry of the ego-vehicle E such that it extends to the left lane as well. In contrast, figure 4 adds a possible lane-changing behavior of the ego-vehicle E, indicated by the dashed vehicle outlines showing the change in direction.
  • the signal is similar as in figure 3a ), which means that it indicates the collision as originating from the top left and thus relative to the actual current ego-vehicle location.
  • the signal may be interpreted as, "if you would enter the left lane, you would collide with the vehicle located on the front left relative to your current position in x seconds given current estimates of situation dynamics".
  • Figure 5 shows a further situation involving the ego-vehicle or, more generally, the entity of interest, and five other entities that surround the ego entity. Again, movement direction and velocity are indicated by the solid arrows representing the current states of the entities. Further, the dashed arrows in the scenario shown at the left side of figure 5 show the hypothetical movement directions of the ego entity. It is obvious that the actual situation would lead to a collision between the black circle and the white one directly above it in the drawing. But the hypothetical developments of the situation would also lead to collisions, either with the white circle to the upper right of the ego circle or with the white circle to the lower left of the ego circle. Due to the respective velocities of the entities, the times to collision vary to a large extent.
  • the saliency that corresponds to hypothetical events may be scaled such that no masking with indications of actual events can occur.
  • the saliencies might for example be scaled relative to the solid arrow or following a different absolute (possibly even binary) scale which allows the evidence-based and for some embodiments arguably more relevant signal component (solid arrows) to be most clearly perceivable.
  • The simultaneous evaluation and communication of information for different hypothetical settings may in some scenarios increase the need for multiple separately identifiable encodings (e.g. moving left vs. moving right).
  • One way to avoid that the ego-vehicle driver is annoyed by output information is to output such information only in case the ego-vehicle driver requests it. Thus, outputting information to the driver is suppressed unless it is requested by the driver. Suppressing the output of information may well be done after generating the corresponding signal, so that simply the corresponding actuator, which generates the stimulus for the ego-vehicle driver, is switched off. Such a driver request may for instance occur in the form of speech interaction, bodily or ocular gestures, or even the use of any other user input device. In vehicles, for instance, the handle used for activation of the indicator lights could co-serve as a trigger for providing hypothetical time to collision information from the respective neighboring lane.
  • figure 6 shows three different scenarios.
  • scenario a) on the left the ego-vehicle E is not on a collision path with its predecessor A, because both vehicles are driving with the same velocity. Consequently, a lane change of the ego-vehicle E does not need to be assumed and the vehicle B coming from behind on the left lane but at a higher velocity may simply pass.
  • In scenario a), the hypothetical event of a collision between the vehicle B on the left lane and the ego-vehicle E would rather annoy the ego-vehicle driver than add any benefit.
  • the situation is different in the scenario b) shown in the middle of figure 6 .
  • the ego-vehicle E drives with a velocity that is higher than the velocity of its predecessor A.
  • a time to event is estimated for a collision between the ego-vehicle E and its predecessor A. If this time to collision falls below a certain threshold, the condition for informing the driver of the upcoming actual event is fulfilled, and the signal will cause an output stimulating the driver, indicating the collision at the front side of the ego-vehicle E and the time to collision.
  • the driver could consider a lane change and thus the hypothetical situation in which the ego-vehicle E changed lane may lead to a hypothetical collision between the ego-vehicle E and the vehicle B on the left lane. Since this hypothetical development of the situation has a certain probability, the respective collision may be indicated to the ego-vehicle driver as indicated in the lower part of figure 6 .
  • a third example is illustrated on the right side of figure 6 , where the ego-vehicle E has a successor C driving at a higher velocity than the ego-vehicle E and again another vehicle B is driving on the left lane with an even higher velocity.
  • an actual event is estimated, which is a collision between the ego-vehicle E and its successor C, and since the time to collision falls below the threshold for outputting the information to the ego-vehicle driver, the hypothetical event of a collision between the vehicle B on the left lane and the ego-vehicle E is also output for the driver.
  • the discussed scenarios are only simple examples showing how the relevance for a certain event may be determined.
  • a personalized model of user behavior, experience or preferences could tune decisions about the relevance of the situation.
  • An experienced driver may, for instance, be more likely to initiate an overtaking maneuver in the kind of situation described in the middle of figure 6 than a beginner or a very cautious driver, which would make the presentation of signals based on a hypothetical lane change more relevant for the experienced driver.
  • information about such hypothetical situations could be overwhelming especially when presented in addition to the information provided about the actual scene and can thus be imagined to not only be momentarily irrelevant but even distracting in certain embodiments.
  • the opposite effect to the described case may also be feasible.
  • a novice in driving may benefit from information about hypothetical scenarios involving the hypothetical events and therefore experience a learning effect.
  • the experienced driver may appreciate receiving a momentary feedback on the actual situation.
  • Dynamic environment data is obtained as indicated in step S1. Such data collection is performed for each time step.
  • a step size may differ between embodiments and is influenced, in particular limited, by hardware capabilities and requirements.
  • the data may for example be collected using one or a plurality of sensors that are mounted on the vehicle for which assistance shall be provided.
  • the signal measurement may be supplemented or substituted by other ways of information reception, for example transferring information on states of the other traffic participants by a car to car communication system or other external measurement and communication resources.
  • Signal measurement or reception is performed in step S2.
  • a spatiotemporal state estimation is performed in step S3.
  • the spatiotemporal state estimation defines the current state of the respective entity, for example a current location, current trajectory, current velocity of the entity.
  • In step S4, the situation parameters which are derived from information on the environment of the ego-vehicle are altered in order to generate one or multiple hypothetical situation(s). As already explained with reference to the drawings, this may be achieved by simulating geometry changes of the involved entities but also by assuming moving direction and/or velocity variations for the involved entities. Then, in steps S5 and S5', it is determined whether an event or a hypothetical event occurs. Such an event is determined to happen if a target state deviation can be recognized. Such a target state deviation occurs when the predicted state of an involved entity deviates from states that are considered to be desirable. This is the case, for example, if the time to collision falls below a certain threshold. Thus, by defining conditions under which developments of the situation can be accepted, a deviation from a target state can be identified.
  • In steps S6 and S6' it is determined whether the relevance criteria are met. If the relevance criteria are met, a signal is generated by the system's processor, based on which an actuator generates a stimulus which is perceivable by the user. The signal generation is performed in step S7 and the output of the respective stimulation is done in step S8.
  • If either no deviation from the target state can be recognized in step S5 or S5', the respective evidence-based or hypothetical signal iteration is terminated. This is also true in case the relevance criteria, which are analyzed in steps S6 and S6', are not met.
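  • A condensed, illustrative sketch of this processing loop (steps S1 to S8) is given below; all helper functions are stand-in stubs with assumed names and values, and only the control flow follows the description above.

```python
# Condensed, illustrative sketch of the loop of figure 7 (steps S1-S8). All helper
# functions are stand-in stubs; only the control flow follows the description above.
TTC_THRESHOLD = 6.0  # assumed notification threshold [s]


def read_sensors():                       # S1/S2: data collection, measurement/reception
    return {"ego": {"v": 30.0}, "others": [{"id": "A", "gap": 40.0, "v": 22.0}]}


def estimate_states(data):                # S3: spatiotemporal state estimation
    return data


def make_hypotheses(states):              # S4: alter situation parameters (e.g. lane change)
    return [("hypothetical", states)]


def detect_deviation(states):             # S5/S5': target-state deviation, here: TTC below threshold
    other = states["others"][0]
    closing = states["ego"]["v"] - other["v"]
    if closing <= 0:
        return None
    ttc = other["gap"] / closing
    return {"ttc": ttc, "entity": other["id"]} if ttc < TTC_THRESHOLD else None


def is_relevant(kind, event):             # S6/S6': relevance criteria (preferences, situation, ...)
    return True


def generate_signal(kind, event):         # S7: encode direction, TTE, probability into a signal
    return f"{kind}: collision with {event['entity']} in {event['ttc']:.1f} s"


def actuate(signal):                      # S8: drive the actuator(s), stimulate the person
    print(signal)


def assist_step():
    states = estimate_states(read_sensors())
    for kind, scenario in [("actual", states)] + make_hypotheses(states):
        event = detect_deviation(scenario)
        if event is None or not is_relevant(kind, event):
            continue                      # the evidence-based or hypothetical iteration terminates
        actuate(generate_signal(kind, event))


assist_step()
```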
  • FIG 8 illustrates the system with its main components.
  • Sensors 1 - 4 repeatedly physically sense entities in a dynamic scenario in which entities may move relative to one another.
  • TTE estimation may be achieved by incorporating information from a variety of resources.
  • This sensing is the basis for determining a position and velocity relative to the person who is assisted by the system, and from the sensed values information on the states of the entities (direction, velocity) is derived for every repetition.
  • This information is stored in a memory 6 and is the basis for behavior prediction and trajectory estimation.
  • individual trajectories and relative velocities of the involved entities are estimated in a processor 5.
  • the estimates provide the basis for inferences or predictions about possible future contact between the entity or entities of interest and other relevant entities in the environment. Also, additional information that may be available and relevant for a scenario may be incorporated when making such estimates.
  • the time to event (TTE) estimation is performed also by processor 5.
  • the algorithms for predicting a future behavior (estimating a future trajectory) of the entities are known in the art and thus details thereof are omitted.
  • probability distributions over different TTEs could be generated for each direction in which potentially relevant entities are identified. Such distributions would be advantageous in that they preserve the uncertainties associated with the available information and may be suitable for the application of signal selection criteria.
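  • One way such a distribution could be obtained, sketched here purely as an assumption, is to propagate the measurement uncertainty of the sensed quantities into a set of sampled TTEs for a given direction (a simple Monte Carlo approach); the Gaussian noise model and the numbers are illustrative.

```python
# Illustrative assumption: propagate Gaussian measurement uncertainty of the gap and the
# closing speed into a sampled distribution over times to event for one direction.
import random


def sample_tte(gap_mean=40.0, gap_std=2.0, closing_mean=8.0, closing_std=1.0, n=10_000):
    ttes = []
    for _ in range(n):
        gap = random.gauss(gap_mean, gap_std)
        closing = random.gauss(closing_mean, closing_std)
        if closing > 0:  # only samples in which the entities actually approach each other
            ttes.append(gap / closing)
    return sorted(ttes)


ttes = sample_tte()
print("median TTE [s]:", round(ttes[len(ttes) // 2], 2))
print("P(TTE < 4 s):", round(sum(t < 4.0 for t in ttes) / len(ttes), 3))
```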
  • the signals are adapted by a driver 7 that is configured suitably to drive the actuator 8.
  • Signal generation encodes a direction of a predicted event and the TTE such that one or a plurality of actuator elements of the actuator 8 are driven to stimulate the person at a location of his or her body indicative of the direction where the event will occur and with a perceived saliency indicative of the TTE.
  • the one or the plurality of actuator elements of the actuator 8 are driven to stimulate the person at a location of his or her body indicative of the direction towards the momentary location of the event-related entity of interest and with a perceived saliency indicative of the TTE.
  • the tactile sense of the driver or assisted person is used as a channel for signal transmission.
  • Communication is realized in the form of an array of tactile actuators 8 (e.g. vibrotactile) which is arranged for example around the driver's torso and which is capable of simultaneously varying the perceived stimulus locations, frequencies and amplitudes.
  • the direction towards a relevant entity with a TTE below a certain threshold corresponds to the location on the driver's torso that is oriented towards this direction.
  • the TTE is encoded in the vibration frequency such that the frequency approaches the assumed optimal excitation frequency for human lamellar corpuscles with shortening of the TTE, which has the advantage of coupling stimulus detectability with situation urgency.
  • the encoding in frequency has high potential for personalization because stimulus amplitude and frequency range could be adapted to the driver's preferences and sensitivity, which lowers the risk of creating annoying or undetectable signals.
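  • A small illustrative sketch of such a mapping is given below: the direction towards the relevant entity selects an element of a circular torso actuator array, and the TTE is mapped to a vibration frequency that approaches an assumed optimal excitation frequency as the TTE shortens. The number of actuator elements, the 250 Hz value and the personalized frequency range are assumptions for the example, not values from the description.

```python
# Illustrative mapping only: direction of the relevant entity -> element of a circular
# torso actuator array; TTE -> vibration frequency. The 8 elements, the 250 Hz "optimal"
# Pacinian frequency and the personalized bounds are assumptions for this example.
import math

N_ACTUATORS = 8              # equally spaced around the torso, index 0 = straight ahead
F_MIN, F_OPT = 40.0, 250.0   # personalized frequency range [Hz] (assumed values)
TTE_MAX = 10.0               # TTEs above this threshold are not signalled at all


def actuator_index(direction_rad):
    """Pick the actuator element oriented towards the direction of the relevant entity."""
    sector = 2.0 * math.pi / N_ACTUATORS
    return int(round(direction_rad / sector)) % N_ACTUATORS


def vibration_frequency(tte):
    """Approach the assumed optimal excitation frequency as the TTE shortens."""
    if tte >= TTE_MAX:
        return None                      # no stimulation for sufficiently distant events
    urgency = 1.0 - tte / TTE_MAX        # 0 (far away) ... 1 (imminent)
    return F_MIN + urgency * (F_OPT - F_MIN)


# Relevant entity towards the rear left (135 degrees counter-clockwise from ahead), 3 s to event:
print(actuator_index(math.radians(135)), vibration_frequency(3.0))  # -> 3 187.0
```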
  • the actuator 8 comprises a plurality of actuator elements that are attached to the user's seat belt and embedded in the area of the user's seat that is in contact with his or her lower back.
  • This setup would have the advantage that a user would not need to be bothered with putting on additional equipment which increases the probability of actual usage in places where seat belts are common or even a legal requirement.
  • the actuators could also be embedded in a belt, jacket or another piece of clothing that can be extended with an arrangement of tactile actuator elements around the waist of the wearer.
  • the placement and/or the control of the actuators would have to be adapted such that the perceived signal location always corresponds to the correct direction with respect to the spatial frame of reference of the body.
  • the mapping of actuator directions could for instance be a function of the belt's length around the waist.
  • the use of an actuator array with sufficient spatial resolution, the exploitation of vibrotactile illusions, or a combination of both could aid in achieving this personalization.
  • the above given explanations of the system in figure 8 do not distinguish between predicted actual events and predicted hypothetical events.
  • the actuator 8 is used for indicating actual events as well as hypothetical events and it is a design measure which of the actuator elements is dedicated to which of the event types in an embodiment.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
EP19155614.1A 2019-02-05 2019-02-05 Method for assisting a person in a dynamic environment and corresponding system Active EP3693943B1 (de)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19155614.1A EP3693943B1 (de) 2019-02-05 2019-02-05 Method for assisting a person in a dynamic environment and corresponding system
JP2020005151A JP7071413B2 (ja) 2019-02-05 2020-01-16 Method for assisting a person's behavior in a dynamic environment and corresponding system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP19155614.1A EP3693943B1 (de) 2019-02-05 2019-02-05 Method for assisting a person in a dynamic environment and corresponding system

Publications (2)

Publication Number Publication Date
EP3693943A1 true EP3693943A1 (de) 2020-08-12
EP3693943B1 (de) 2024-05-29

Family

ID=65324290

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19155614.1A Active EP3693943B1 (de) 2019-02-05 2019-02-05 Method for assisting a person in a dynamic environment and corresponding system

Country Status (2)

Country Link
EP (1) EP3693943B1 (de)
JP (1) JP7071413B2 (de)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9104965B2 (en) * 2012-01-11 2015-08-11 Honda Research Institute Europe Gmbh Vehicle with computing means for monitoring and predicting traffic participant objects
US9566981B2 (en) * 2014-09-01 2017-02-14 Honda Research Institute Europe Gmbh Method and system for post-collision manoeuvre planning and vehicle equipped with such system
EP3413288A1 (de) * 2017-06-09 2018-12-12 Honda Research Institute Europe GmbH Verfahren zur unterstützung einer person in einer dynamischen umgebung und entsprechendes system

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2969176B1 (ja) * 1998-06-03 1999-11-02 Public Works Research Institute, Ministry of Construction Automatic merging control method and device for vehicles
JP2001199296A (ja) * 2000-01-17 2001-07-24 Matsushita Electric Ind Co Ltd Alarm device, driver's seat having a vibrating body, and mobile body equipped with an alarm device
JP2004164187A (ja) * 2002-11-12 2004-06-10 Nissan Motor Co Ltd Vehicle notification device
US7245231B2 (en) * 2004-05-18 2007-07-17 Gm Global Technology Operations, Inc. Collision avoidance system
JP2006284254A (ja) 2005-03-31 2006-10-19 Denso It Laboratory Inc Course prediction method, course prediction device, and course prediction information utilization system
US8515659B2 (en) 2007-03-29 2013-08-20 Toyota Jidosha Kabushiki Kaisha Collision possibility acquiring device, and collision possibility acquiring method
JP2009031946A (ja) * 2007-07-25 2009-02-12 Toyota Central R&D Labs Inc Information presentation device
JP6171499B2 (ja) 2013-04-02 2017-08-02 Toyota Motor Corp Risk degree determination device and risk degree determination method
JP6657674B2 (ja) 2015-08-28 2020-03-04 Isuzu Motors Ltd Inter-vehicle distance warning device and inter-vehicle distance warning control method
US10202127B2 (en) 2016-05-19 2019-02-12 Toyota Jidosha Kabushiki Kaisha User profile-based automatic parameter tuning system for connected vehicles
CN110431613B (zh) * 2017-03-29 2023-02-28 Sony Corp Information processing device, information processing method, program, and moving object

Also Published As

Publication number Publication date
JP7071413B2 (ja) 2022-05-18
EP3693943B1 (de) 2024-05-29
JP2020161117A (ja) 2020-10-01

Similar Documents

Publication Publication Date Title
EP3540711B1 (de) Verfahren zur unterstützung des betriebs eines ego-fahrzeuges, verfahren zur unterstützung anderer verkehrsteilnehmer und entsprechende unterstützungssysteme und fahrzeuge
EP3759700B1 (de) Verfahren zur bestimmung einer fahrrichtlinie
CN107336710B (zh) 驾驶意识推定装置
US10543854B2 (en) Gaze-guided communication for assistance in mobility
US9740203B2 (en) Drive assist apparatus
US10475348B2 (en) Method for assisting a person in acting in a dynamic environment and corresponding system
JP6565408B2 (ja) 車両制御装置及び車両制御方法
EP3042371B1 (de) Vorrichtung zur bestimmung von fahrzeugfahrsituationen und verfahren zur bestimmung von fahrzeugfahrsituationen
JP2016215658A (ja) 自動運転装置および自動運転システム
JP4912057B2 (ja) 車両警報装置
JP2018082805A (ja) 違和感判別方法及び違和感判別装置
JP6811743B2 (ja) 安全運転支援装置
JP2018139070A (ja) 車両用表示制御装置
EP3723066A1 (de) Verfahren zur unterstützung einer person in einer dynamischen umgebung und entsprechendes system
EP3693943B1 (de) Verfahren zur unterstützung einer person in einer dynamischen umgebung und entsprechendes system
JPWO2018158950A1 (ja) 作業適性判定装置、作業適性判定方法、及び作業適性判定プログラム
US11738748B2 (en) Method and apparatus for adaptive lane keep assist for assisted driving
EP3531337B1 (de) Auf optischem fluss basierende unterstützung für betrieb und koordination in dynamischen umgebungen
JP2019103581A (ja) 脳波測定方法及び脳波測定装置
JP2024049542A (ja) 注意喚起システム及び注意喚起方法
KR20160068143A (ko) 스마트 크루즈 컨트롤 시스템 및 이의 정보 표시방법

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210113

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20230317

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20240216

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED