CN114426030A - Pedestrian passing intention estimation method, device and equipment and automobile - Google Patents

Pedestrian passing intention estimation method, device and equipment and automobile

Info

Publication number
CN114426030A
CN114426030A (application number CN202011176754.4A)
Authority
CN
China
Prior art keywords
pedestrian
moment
intention
state information
surface layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011176754.4A
Other languages
Chinese (zh)
Other versions
CN114426030B (en)
Inventor
范时伟
李飞
李向旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202011176754.4A priority Critical patent/CN114426030B/en
Priority to PCT/CN2021/095259 priority patent/WO2022088658A1/en
Publication of CN114426030A publication Critical patent/CN114426030A/en
Application granted granted Critical
Publication of CN114426030B publication Critical patent/CN114426030B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W60/0015 Planning or execution of driving tasks specially adapted for safety
    • B60W60/0017 Planning or execution of driving tasks specially adapted for safety of other traffic participants
    • B60W2552/00 Input parameters relating to infrastructure
    • B60W2552/53 Road markings, e.g. lane marker or crosswalk
    • B60W2554/00 Input parameters relating to objects
    • B60W2554/40 Dynamic objects, e.g. animals, windblown objects
    • B60W2554/402 Type
    • B60W2554/4029 Pedestrians
    • B60W2554/404 Characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a pedestrian crossing intention estimation method, device, equipment, and automobile, and relates to the field of intelligent driving. The method includes: obtaining state information of at least one pedestrian around a vehicle; calculating a surface layer intention of a first pedestrian at a first moment from the state information of the first pedestrian at the first moment and the deep layer intention of the first pedestrian at a second moment, which yields a surface layer intention estimate that is precise, real-time, and accurate; and determining, from the surface layer intention of the first pedestrian at the first moment, whether the first pedestrian crosses the road at the first moment.

Description

Pedestrian passing intention estimation method, device and equipment and automobile
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a pedestrian passing intention estimation method, device and equipment and an automobile.
Background
With the advent of intelligent driving, intelligent vehicles have become a focus of intensive research by many manufacturers. Intelligent driving is implemented mainly through steps such as positioning, perception, prediction, planning, and control. The prediction step estimates the future positions and behaviors of surrounding targets so that the planning and control module can make corresponding decisions and avoid traffic accidents.
However, because pedestrians are highly dynamic and only weakly constrained by maps and traffic rules, predicting a pedestrian's trajectory and intention is quite challenging. If the prediction is inaccurate, the vehicle may collide with a pedestrian, which poses a serious safety hazard.
Disclosure of Invention
In order to solve the above-mentioned problems, embodiments of the present application provide a pedestrian crossing intention estimation method, apparatus, device, and automobile.
In a first aspect, the present application provides a pedestrian crossing intention estimation method, including: acquiring state information of at least one pedestrian around a vehicle, wherein the at least one pedestrian comprises a first pedestrian; calculating a surface layer intention of the first pedestrian at a first moment according to state information of the first pedestrian at the first moment and a deep layer intention of the first pedestrian at a second moment, wherein the second moment is before the first moment, the deep layer intention is the probability that the first pedestrian passes through a road in the whole life cycle, and the surface layer intention is the probability that the first pedestrian is passing through the road at the current moment or is about to pass through the road; and determining whether the first pedestrian passes through the road at the first moment according to the surface layer intention of the first pedestrian at the first moment.
In this embodiment, a precise, real-time, and accurate surface layer intention estimate is obtained by continuously acquiring information about the pedestrians, lane lines, lane markings, and other vehicles on and around the road on which the vehicle travels, analyzing the pedestrian's state information at the current moment (such as behavior features, static interaction features, and dynamic interaction features), and combining it with the deep layer intention at the previous moment.
In one embodiment, before calculating the surface layer intention of the first pedestrian at the first time according to the state information of the first pedestrian at the first time and the deep layer intention of the first pedestrian at the second time, the method includes: calculating the likelihood probability of the pedestrian feature of the first pedestrian at the first moment according to the state information of the first pedestrian at the first moment; the calculating the surface layer intention of the first pedestrian at the first moment according to the state information of the first pedestrian at the first moment and the deep layer intention of the first pedestrian at the second moment comprises the following steps: and calculating the surface layer intention of the first pedestrian at the first moment according to the likelihood probability of the pedestrian feature of the first pedestrian at the first moment and the deep layer intention of the first pedestrian at the second moment.
In this embodiment, the information monitored while acquiring pedestrian state information is not always completely correct, and errors can occur. For example, because a pedestrian is a small, slow-moving target, the pedestrian may occasionally go undetected, or the detected pedestrian speed may fail to update, and such errors can strongly distort the pedestrian features. Therefore, after the pedestrian state information is acquired, the various detected measurements are fused and filtered, smoothing the errors to some degree so that the pedestrian features corresponding to the detected state information better reflect reality.
In one embodiment, the method further comprises: and when the surface layer intention of the first pedestrian at the first moment is larger than a set threshold value, controlling a loudspeaker to play an early warning signal and/or displaying early warning information on a display screen.
In one embodiment, before calculating the surface layer intention of the first pedestrian at the first time according to the state information of the first pedestrian at the first time and the deep layer intention of the first pedestrian at the second time, the method comprises: calculating the predicted state information of the first pedestrian at the first moment according to the state information of the first pedestrian at the second moment and the surface layer intention of the first pedestrian at the second moment; and calculating the deep level intention of the first pedestrian at the first moment according to the predicted state information of the first pedestrian at the first moment, the state information of the first pedestrian at the first moment and the deep level intention of the first pedestrian at the second moment.
In this embodiment, the deep level intention at the previous time is re-estimated and updated based on the deep level intention at the previous time, the state information of the pedestrian at the current time, and the predicted state information of the pedestrian at the current time, so that the reliability of the new deep level intention increases as the historical observation information is accumulated.
In one embodiment, the method further comprises: and calculating lane intention of at least one lane of a road where a vehicle is located according to the state information of the first pedestrian at the first moment and the surface layer intention of the first pedestrian at the first moment, wherein the lane intention is the probability of the first pedestrian passing through the first lane, and the at least one lane comprises the first lane.
In this embodiment, lane-level static and dynamic interaction features are extracted from the road structure of each lane and the relevant vehicles, and combined with the behavior features of the target pedestrian to calculate the pedestrian's lane intention. The vehicle can therefore output the pedestrian's crossing intention relative to each lane, allowing an autonomous vehicle to respond in advance.
In a second aspect, the present application also provides a pedestrian crossing intention estimation device including: an acquisition unit configured to acquire state information of at least one pedestrian around a vehicle, the at least one pedestrian including a first pedestrian; the processing unit is used for calculating a surface layer intention of the first pedestrian at a first moment according to the state information of the first pedestrian at the first moment and a deep layer intention of the first pedestrian at a second moment, wherein the second moment is before the first moment, the deep layer intention is the probability that the first pedestrian passes through a road in the whole life cycle, and the surface layer intention is the probability that the first pedestrian passes through the road at the current moment or is about to pass through the road; the processing unit is further used for determining whether the first pedestrian passes through the road at the first moment according to the surface layer intention of the first pedestrian at the first moment.
In one embodiment, the processing unit is specifically configured to calculate a likelihood probability of a pedestrian feature of the first pedestrian at the first time according to the state information of the first pedestrian at the first time; and calculating the surface layer intention of the first pedestrian at the first moment according to the likelihood probability of the pedestrian feature of the first pedestrian at the first moment and the deep layer intention of the first pedestrian at the second moment.
In an embodiment, the processing unit is further configured to control a speaker to play a warning signal and/or display warning information on a display screen when the surface layer intention of the first pedestrian at the first time is greater than a set threshold.
In one embodiment, the processing unit is further configured to calculate predicted first pedestrian state information at the first time according to the first pedestrian state information at the second time and the surface layer intention of the first pedestrian at the second time; and calculating the deep level intention of the first pedestrian at the first moment according to the predicted state information of the first pedestrian at the first moment, the state information of the first pedestrian at the first moment and the deep level intention of the first pedestrian at the second moment.
In one embodiment, the processing unit is further configured to calculate a lane intention of at least one lane of a road where a vehicle is located according to the state information of the first pedestrian at the first time and the surface intention of the first pedestrian at the first time, where the lane intention is a probability that the first pedestrian passes through the first lane, and the at least one lane includes the first lane.
In a third aspect, the present application further provides an apparatus comprising at least one processor configured to execute instructions stored in a memory, so as to cause a terminal to perform the method according to the various possible implementations of the first aspect.
In a fourth aspect, the present application further provides a vehicle for carrying out the method according to the various possible implementations of the first aspect.
In a fifth aspect, the present application further provides a computer-readable storage medium having a computer program stored thereon which, when executed in a computer, causes the computer to perform the method according to the various possible implementations of the first aspect.
In a sixth aspect, the present application further provides a computing device, including a memory and a processor, where the memory stores executable code, and the processor executes the executable code to implement the method according to the various possible implementations of the first aspect.
Drawings
The drawings that accompany the detailed description can be briefly described as follows.
FIG. 1 is a schematic structural diagram of a vehicle according to an embodiment of the present disclosure;
FIG. 2 is a view of a vehicle traveling on a road;
fig. 3 is a process flow chart of a method for estimating a pedestrian crossing intention according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a dynamic Bayesian network structure for modeling surface layer intent and deep layer intent provided by an embodiment of the present application;
FIG. 5 is a view of a vehicle and pedestrian on a road;
fig. 6 is a schematic structural diagram of a pedestrian crossing intention estimation device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
The embodiments of the present application refer to two concepts: the "surface layer intention" and the "deep layer intention". The surface layer intention is the probability that the pedestrian is crossing the road at the current moment or is about to cross it, i.e., the pedestrian's crossing intention considered over an instant or a short time; the deep layer intention is the probability that the pedestrian crosses the road over the whole life cycle, i.e., the pedestrian's intention considered over the long term under the influence of the environment.
TABLE 1 Comparison of surface layer and deep layer intentions [table presented as an image in the original publication]
Because the surface layer intention represents the pedestrian's crossing intention at the current moment, real-time performance is critical, so in this embodiment the surface layer intention is estimated from the pedestrian's detail information and interaction information. However, the extracted detail and interaction information is usually noisy and can destabilize the surface layer intention estimate. The embodiment therefore also introduces the deep layer intention: it is calculated from the surface layer intention at the previous moment and the pedestrian feature information, and is then used to stabilize the surface layer intention, so that the calculated surface layer intention predicts the pedestrian's crossing intention more accurately.
Fig. 1 is a frame structure diagram of a vehicle according to an embodiment of the present application. As shown in fig. 1, the vehicle 100 includes an input device 101, a memory 102, a processor 103, and a bus 104. Wherein the input device 101, memory 102 and processor 103 in the vehicle 100 may establish a communication connection over the bus 104.
The input device 101 may include an in-vehicle camera, an in-vehicle radar, an in-vehicle navigation, a Global Positioning System (GPS) sensor, and the like. The vehicle-mounted camera is used for shooting the driving direction of the vehicle 100 to acquire images including information such as lane lines, lane marks and pedestrians of a road; the vehicle-mounted radar may be a laser radar, a millimeter wave radar, or the like, and is configured to transmit a signal to the periphery of the vehicle 100 and receive a returned signal, so as to obtain distance information between the vehicle 100 and other surrounding vehicles of the vehicle 100, pedestrians, or other obstacles; the vehicle navigation is used for generating a navigation route according to the position of the vehicle 100 and the position information of a destination input by a driver; the GPS sensor is used to acquire real-time location information of the vehicle 100.
The memory 102 may be a random-access memory (RAM), a Hard Disk Drive (HDD), a Solid State Drive (SSD), or other devices with storage functions, and is used to store the surface layer intention, the deep layer intention, a high-precision map, and the like, so as to perform subsequent calculation by using historically stored data in the process of calculating the surface layer intention and the deep layer intention.
The processor 103 may be an electronic control unit (ECU) of the vehicle 100. After receiving data from the input device 101, it processes the data to obtain the state information of pedestrians on and beside the road ahead of the vehicle 100, the lane lines and lane markings of the road, and so on. It then calculates the surface layer intention of those pedestrians from the pedestrian state information, (optionally) the surface layer intention of the pedestrian at the previous moment stored in the memory 102, and the deep layer intention of the pedestrian at the previous moment stored in the memory 102, thereby obtaining the probability that the pedestrian crosses the road. When this probability exceeds a set threshold, warning information is generated and the driver is reminded, for example via the display screen or the speaker, that a pedestrian is crossing the road ahead.
The processor 103 is further configured to re-estimate and update the deep level intention at the previous time already stored in the memory 102 according to the deep level intention at the previous time stored in the memory 102, the state information of the pedestrian at the current time, and the predicted state information of the pedestrian at the current time, so as to obtain a new deep level intention at the current time, such that the reliability of the new deep level intention at the current time increases with the accumulation of the historical observation information.
The processor 103 is further configured to calculate a probability of the pedestrian crossing each lane (hereinafter referred to as "lane intention") according to the received state information of the pedestrian, the lane line of the road, the lane identification, and the like, and the new deep intention at the current time, which are stored in the memory 102, so as to determine which lanes on the current road can be driven, which lanes are dangerous due to the pedestrian crossing, and to instruct the driver how to avoid danger when the danger occurs.
Alternatively, the functions of the processor 103 may be performed by a server: the vehicle 100 communicates with the server through a communication unit, and the server then takes over the role of the processor 103.
Fig. 2 shows a scenario in which a vehicle is traveling on a road. In this scenario, the vehicle 100 travels in the inner lane of its road, no other vehicle travels in the same direction on that road, and a pedestrian is about to cross the road ahead of the vehicle 100. The embodiments below use the scenario of the vehicle 100 shown in fig. 2 as an example to describe the implementation of the technical solution of the present application.
Fig. 3 is a process flow chart of a method for estimating a pedestrian crossing intention according to an embodiment of the present application. As shown in fig. 3, the processor 103 implements the following steps:
in step S301, the processor 103 acquires state information of at least one pedestrian around the vehicle 100. In the application, the extracted pedestrian state information mainly includes three types of features, namely pedestrian behavior features, pedestrian static interaction features and pedestrian dynamic interaction features.
The behavior features of a pedestrian represent detailed pedestrian actions observed by the vehicle 100, including the pedestrian's motion state, body orientation, body language, and so on. For example, after video is acquired by the camera of the vehicle 100, the pedestrians in the video are detected by a visual perception algorithm in the processor 103, such as regional multi-person pose estimation (RMPE), to obtain a bounding box for each pedestrian; human-body key points are then detected within each bounding box, and the key points in each bounding box are connected to form each pedestrian's pose. Meanwhile, the processor 103 calculates each pedestrian's distance from the vehicle 100 using algorithms such as monocular or binocular ranging. From the key points and pose, the processor 103 calculates the pedestrian's orientation relative to the vehicle 100 and, from that, the pedestrian's direction relative to the road as the orientation feature; it infers whether the pedestrian is moving or stationary as the motion feature; and it analyzes the hand key points to obtain body language, such as waving or pointing, as the body-language feature. These three features reflect the pedestrian's detail information, are highly real-time, and are used to judge the pedestrian's surface layer intention.
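As an illustration only, the following Python sketch shows one way the behavior features named above (body orientation relative to the road and a moving/static flag) could be derived from detected key points and successive positions; the key-point layout, coordinate frame, threshold, and function names are assumptions for this sketch and are not taken from the patent.

```python
import math

def behavior_features(left_shoulder, right_shoulder, prev_pos, cur_pos, dt, road_heading):
    """Derive simple behavior features from pose key points and successive positions.
    Inputs are (x, y) tuples in the ego-vehicle frame; road_heading is in radians."""
    # Body orientation: direction normal to the shoulder segment, relative to the road axis.
    sx = right_shoulder[0] - left_shoulder[0]
    sy = right_shoulder[1] - left_shoulder[1]
    body_heading = math.atan2(sx, -sy)                       # perpendicular to the shoulders
    orientation_to_road = (body_heading - road_heading + math.pi) % (2 * math.pi) - math.pi

    # Motion feature: speed estimated from two consecutive positions.
    vx = (cur_pos[0] - prev_pos[0]) / dt
    vy = (cur_pos[1] - prev_pos[1]) / dt
    speed = math.hypot(vx, vy)

    return {"orientation_to_road": orientation_to_road,
            "speed": speed,
            "is_moving": speed > 0.3}                        # assumed threshold in m/s

print(behavior_features((0.2, 4.1), (0.6, 4.3), (0.3, 4.5), (0.4, 4.2), 0.1, road_heading=0.0))
```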
The static interaction features of the pedestrian represent information about the pedestrian and the surrounding road environment. For example, the processor 103 of the vehicle 100 may obtain the distance from the pedestrian to the vehicle 100 by sending and receiving millimeter-wave signals through radar (taking a millimeter-wave radar as an example), from images captured by the camera, and so on, then determine the pedestrian's position by combining this with the map information stored in the memory 102, and derive information such as the distance between the pedestrian and the road boundary, the distance between the pedestrian and the lane of the vehicle 100, and the distance between the pedestrian and each lane line of the road. This information is used as the pedestrian's static interaction features.
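A minimal sketch of these static interaction features, assuming a simplified frame in which the road runs along x and lateral position is y; the map representation and function names are illustrative assumptions, not details from the patent.

```python
def static_interaction_features(ped_pos, road_left_boundary_y, road_right_boundary_y,
                                lane_line_ys):
    """Distances from the pedestrian to the road boundaries and to each lane line,
    computed in a frame where the road runs along x and lateral position is y."""
    _, y = ped_pos
    return {
        "dist_to_road_boundary": min(abs(y - road_left_boundary_y),
                                     abs(y - road_right_boundary_y)),
        "dist_to_lane_lines": [abs(y - ly) for ly in lane_line_ys],
    }

print(static_interaction_features((12.0, 5.2), road_left_boundary_y=7.0,
                                  road_right_boundary_y=-3.5, lane_line_ys=[0.0, 3.5]))
```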
The dynamic interaction features of the pedestrian represent the pedestrian's interaction with surrounding dynamic objects, such as the vehicle 100. For example, the processor 103 of the vehicle 100 acquires the distance, speed, and relative position of surrounding pedestrians and of other vehicles with respect to the vehicle 100 through the camera, the radar, and so on, and then predicts the trajectories of the pedestrians and the other vehicles over a period of time. Finally, the processor 103 calculates, from these motion trajectories, the minimum distance between each pedestrian and each vehicle, the time to collision, and the like, and uses this information as the pedestrian's dynamic interaction features.
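The minimum pedestrian-vehicle distance and its time can be approximated with a short constant-velocity rollout, as in the hedged Python sketch below; the horizon, time step, and constant-velocity assumption are illustrative choices, not details from the patent.

```python
import numpy as np

def dynamic_interaction_features(ped_pos, ped_vel, veh_pos, veh_vel, horizon=5.0, dt=0.1):
    """Minimum pedestrian-vehicle distance and the time at which it occurs, assuming both
    agents keep their current velocity over a short horizon."""
    ped_pos, ped_vel = np.asarray(ped_pos, float), np.asarray(ped_vel, float)
    veh_pos, veh_vel = np.asarray(veh_pos, float), np.asarray(veh_vel, float)

    times = np.arange(0.0, horizon + dt, dt)
    ped_traj = ped_pos + times[:, None] * ped_vel        # constant-velocity rollout
    veh_traj = veh_pos + times[:, None] * veh_vel
    dists = np.linalg.norm(ped_traj - veh_traj, axis=1)

    i_min = int(np.argmin(dists))
    return {"min_distance": float(dists[i_min]),
            "time_of_min_distance": float(times[i_min])}

# Example: pedestrian 15 m ahead and 4 m to the left, walking toward the lane.
print(dynamic_interaction_features((15.0, 4.0), (0.0, -1.2), (0.0, 0.0), (8.0, 0.0)))
```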
In step S304, the processor 103 calculates the surface layer intention of the first pedestrian crossing the road according to the state information of the first pedestrian at the current time and the deep layer intention of the first pedestrian at the previous time. The first pedestrian is the pedestrian ahead and to the left of the road in the driving direction of the vehicle 100 in fig. 2; the current time corresponds to the first time, and the previous time corresponds to the second time.
In an automatic driving scenario, the vehicle 100 can obtain the motion information of other surrounding vehicles and obstacles through the various input devices 101 and calculate their interaction with the pedestrian as a basis for judging the pedestrian's crossing intention. However, the information monitored by the input device 101 is not always completely correct, and errors can occur. For example, because a pedestrian is a small, slow-moving target, the input device 101 may fail to detect the pedestrian or may report an unchanged pedestrian speed, and such errors can strongly distort the pedestrian features. Therefore, after obtaining the pedestrian state information, the application fuses and filters the various detected measurements, smoothing the errors to some degree so that the pedestrian features corresponding to the detected state information better reflect reality. Illustratively, the method combines the three types of features, namely the pedestrian behavior features, static interaction features, and dynamic interaction features, and calculates the likelihood probability of the pedestrian feature Z through formula (1):
(1) [formula presented as an image in the original publication]
wherein M istIndicating the pedestrian's intention to cross at time t, DTCtRepresenting the static interaction characteristic of the pedestrian at time t, Dmin tIndicates the pedestrian is at tDynamic interactive features of scales, ORtIndicates the pedestrian orientation, MO, of the pedestrian at time ttRepresenting the pedestrian's movement characteristics at time t, BLtRepresenting the body language of the pedestrian at time t. And carrying out maximum likelihood estimation on the likelihood probability of each feature through a large number of data samples, and calculating a probability parameter.
In calculating the surface layer intention of the pedestrian crossing the road at the current moment, the calculation of the pedestrian's surface layer intention and deep layer intention is modeled with a dynamic Bayesian network; the pedestrian feature Z, the observed pedestrian state information Y, and the predicted pedestrian state information X are used as input information, and the model shown in fig. 4 is constructed over time according to the mutual influence between the deep layer and surface layer intentions. As shown in fig. 4, the surface layer intention M_t at time t depends on the surface layer intention M_{t-1} at the previous moment, the pedestrian feature Z_t at time t, and the deep layer intention D_t at time t.
Illustratively, taking the pedestrian state information to be the pedestrian's position and speed, the processor 103 calculates the surface layer intention M_t at time t through formula (2), specifically:
(2) [formula presented as an image in the original publication]
where P(M_t | M_{t-1}) denotes the transition probability of the surface layer intention, which captures the influence of the historical surface layer intention on the current one and smooths the intention; P(M_t | Z_t) denotes the likelihood probability of the pedestrian features, which captures the real-time crossing intention inferred from the observed pedestrian behavior and interaction features; and P(D_t, D_{t-1}, X_t, Y_t) denotes the deep layer intention term, which captures the influence of the pedestrian's deep layer intention on the surface layer intention.
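The exact form of formula (2) is shown as an image in the original publication; the sketch below only illustrates the described structure (a transition from the previous surface layer intention, a feature likelihood, and a deep layer intention term, combined and normalized). The product form, the transition parameter, and all numbers are assumptions.

```python
def update_surface_intention(prev_surface, likelihood_cross, likelihood_not, deep_cross,
                             stay=0.8):
    """Combine (i) a transition from the previous surface intention, (ii) the likelihood of
    the current pedestrian features, and (iii) the current deep intention into a new
    surface intention. The simple product/normalization form is an assumed illustration."""
    # (i) transition prior P(M_t) from P(M_{t-1}) with a sticky two-state transition
    prior_cross = stay * prev_surface + (1.0 - stay) * (1.0 - prev_surface)
    prior_not = 1.0 - prior_cross

    # (ii) feature likelihoods, (iii) deep intention acting as an extra weight
    score_cross = prior_cross * likelihood_cross * deep_cross
    score_not = prior_not * likelihood_not * (1.0 - deep_cross)
    return score_cross / (score_cross + score_not)

# Example: moderately confident history, features favoring crossing, neutral deep intention.
print(update_surface_intention(0.4, 0.02, 0.005, 0.5))
```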
If the detected pedestrian is the initial detection, and the memory 102 does not store the historical deep level intention of the pedestrian, an initial value of the deep level intention, such as 0.5, is stored in the memory 102. When the processor 103 calculates the surface layer intention, the initial value is selected to calculate the surface layer intention, and the calculated surface layer intention at this time is mainly determined by the pedestrian feature at the current time.
After obtaining the surface layer intention, the processor 103 determines whether the calculated probability of the surface layer intention exceeds a set threshold, and if the calculated probability of the surface layer intention exceeds the set threshold, the processor 103 may prompt the driver in a manner of playing voice through a speaker, displaying a prompt message through a display screen, or the like, or may directly control the vehicle 100 to slow down, avoid a passing pedestrian, or the like, so that the vehicle 100 runs safely on the current running road.
In this embodiment of the present application, information such as the pedestrians, lane lines, lane markings, and other surrounding vehicles on the road on which the vehicle 100 travels is continuously acquired through the various input devices 101 on the vehicle 100. After the pedestrian's behavior features, static interaction features, dynamic interaction features, and the like at the current moment are analyzed, a precise, real-time, and accurate surface layer intention estimate can be obtained by combining them with the surface layer intention and the deep layer intention at the previous moment stored in the memory 102.
Accuracy comes from predicting the pedestrian using multiple factors, mining the pedestrian's deep layer intention, and reducing the influence of noise in pedestrian perception. Timeliness is improved, without sacrificing prediction accuracy, by using real-time elements such as detail information and interaction information as the basis for judging the surface layer intention, while slower-changing elements such as tracking information are used to adjust the deep layer intention.
In this embodiment of the application, before the processor 103 calculates the surface layer intention at the current time in step S304, the following process is further implemented:
in step S302, the processor 103 determines the predicted pedestrian state information at the current time according to the predicted pedestrian state information at the previous time and the surface layer intention of the pedestrian at the previous time.
Specifically, after acquiring the speed and position from the pedestrian state information predicted at the previous time, the processor 103 judges from the surface layer intention at the previous time whether the pedestrian is crossing the road, and then calculates the predicted speed and position of the pedestrian at the current time from the speed and position predicted at the previous time and the time difference between the two moments, thereby obtaining the predicted pedestrian state information at the current time.
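A minimal sketch of this constant-velocity propagation; gating the motion on the previous surface layer intention is one possible reading of the text, and the threshold and function names are assumptions.

```python
def predict_state(prev_pos, prev_vel, prev_surface_intention, dt, threshold=0.5):
    """Propagate the pedestrian state from the previous moment to the current moment with a
    constant-velocity model; if the previous surface intention did not indicate crossing,
    the pedestrian is assumed to stay put (an illustrative simplification)."""
    vel = prev_vel if prev_surface_intention >= threshold else (0.0, 0.0)
    pos = (prev_pos[0] + vel[0] * dt, prev_pos[1] + vel[1] * dt)
    return pos, vel

print(predict_state((10.0, 3.5), (0.0, -1.3), 0.7, dt=0.1))
```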
In step S303, the processor 103 calculates the deep intention at the current time based on the predicted pedestrian state information at the current time, the observed pedestrian state information at the current time, and the deep intention of the pedestrian at the previous time.
The deep layer intention expresses whether the pedestrian truly intends to cross within the life cycle observed by the vehicle 100; it is a long-term, stable intention. In fact, a pedestrian's deep layer intention is fixed and does not change over time. Because it cannot be observed directly and the reference information has limited reliability, its estimate converges gradually, and its reliability increases as historical observation information accumulates.
Specifically, since the deep layer intention has high requirements on stability and accuracy but low requirements on real-time performance, the method mainly uses the pedestrian state information predicted from the previous moment, the actual state information of the pedestrian at the current moment, the surface layer intention at the current moment, and the deep layer intention of the pedestrian at the previous moment to estimate and update the deep layer intention at the current moment.
Illustratively, taking the pedestrian state information to be the pedestrian's position and speed, as shown in fig. 4, the processor 103 predicts the pedestrian's motion trajectory with a constant-velocity model: the position and speed of the pedestrian predicted at the previous moment and the surface layer intention M_{t-1} of the pedestrian at the previous moment are input into the constant-velocity model to calculate the predicted position and speed of the pedestrian at the current moment, and a correction value for the deep layer intention is then calculated from the position and speed of the pedestrian observed at the current moment according to formula (3):
(3) [formula presented as an image in the original publication]
the processor 103 corrects the deep intention of the pedestrian obtained at the previous moment according to the obtained correction value of the deep intention, and obtains the deep intention at the current moment.
Formula (3) may use the lane direction of the road in the driving direction of the vehicle 100 as a reference line and decompose the actual and predicted position information of the pedestrian at the current time into transverse and longitudinal components, with the calculation formula:
(4) [formula presented as an image in the original publication]
where x denotes the predicted position of the pedestrian at the current moment, y denotes the actual position of the pedestrian at the current moment, v_x denotes the predicted speed of the pedestrian at the current moment, v_y denotes the actual speed of the pedestrian at the current moment, L denotes the direction perpendicular to the lane, and S denotes the direction parallel to the lane. Formula (4) expresses the influence, on the deep layer intention, of the difference between the pedestrian's motion along and perpendicular to the lane direction and the predicted motion. For example, when the pedestrian's displacement perpendicular to the lane is greater than the displacement predicted from the historical intention, the probability that the deep layer intention is "crossing" becomes greater than the probability of "not crossing"; combined with the historical deep layer intention, the deep layer intention can then be corrected to obtain the deep layer intention at the current moment.
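A hedged sketch of the idea behind formulas (3) and (4): project the difference between observed and predicted motion onto the lane-parallel (S) and lane-perpendicular (L) axes and use the lateral excess as evidence for correcting the deep layer intention. The update rule, gain, and sign convention are assumptions, not the patent's exact formulas.

```python
import math

def deep_intention_correction(pred_pos, obs_pos, pred_vel, obs_vel, lane_heading, prev_deep,
                              gain=0.1):
    """Compare observed motion with the motion predicted under the previous intention,
    split the difference into lane-parallel (S) and lane-perpendicular (L) components,
    and nudge the deep intention accordingly."""
    ch, sh = math.cos(lane_heading), math.sin(lane_heading)

    def to_lane_frame(vec):
        # Returns (s, l): s along the lane, l perpendicular to it.
        return (vec[0] * ch + vec[1] * sh, -vec[0] * sh + vec[1] * ch)

    _, dpos_l = to_lane_frame((obs_pos[0] - pred_pos[0], obs_pos[1] - pred_pos[1]))
    _, dvel_l = to_lane_frame((obs_vel[0] - pred_vel[0], obs_vel[1] - pred_vel[1]))

    # Assumed sign convention: positive L points from the pedestrian toward the ego lane,
    # so lateral motion beyond the prediction counts as evidence for crossing.
    evidence = dpos_l + dvel_l
    return min(1.0, max(0.0, prev_deep + gain * evidence))

# Example: pedestrian drifted 0.4 m toward the ego lane faster than predicted.
print(deep_intention_correction((10.0, 3.5), (10.0, 3.9), (0.0, 0.0), (0.0, 1.0),
                                lane_heading=0.0, prev_deep=0.5))
```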
In addition, in this embodiment of the application, after calculating the surface layer intention of the pedestrian at the current time, if it is determined that there is a pedestrian passing through the road, the processor 103 needs to determine how the vehicle 100 avoids the pedestrian, and the specific implementation process is as follows:
in step S305, the processor 103 calculates lane intentions of the lanes of the road on which the vehicle 100 is located according to the state information of the pedestrian at the current time and the surface layer intention at the current time.
Specifically, when the processor 103 determines from the surface layer intention at the current time that its probability exceeds the set threshold, a pedestrian is about to cross the road. Using the pedestrian state information, such as the pedestrian's position, speed, distance to each lane, and distance to the vehicle 100, the processor 103 then continuously calculates the probability of a collision between the vehicle 100 and the pedestrian if the vehicle stays in its own lane and the probability of a collision if the vehicle 100 turns into another lane of its road.
Exemplarily, as shown in fig. 5, the processor 103 obtains the pedestrian speed v1, the speed v2 of the vehicle 100, the longitudinal distance L1 between the pedestrian and the vehicle 100, the lane width L2, and the distance L3 between the pedestrian and the boundary of the road where the vehicle 100 is located. It then determines whether the time t1 = L3/v1 for the pedestrian to reach the boundary of the road where the vehicle 100 is located is greater than t2 = L1/v2. If so, and if the lane intention for the current lane of the vehicle 100, calculated together with other elements such as the pedestrian's motion, orientation, and deep layer intention, is below the set threshold, the vehicle 100 can safely continue in the current lane; otherwise, driving in the current lane is unsafe. The processor 103 then checks whether the time t3 = (L3 + L2)/v1 for the pedestrian to reach the boundary of the lane adjacent to the lane of the vehicle 100 is greater than t2 (for simplicity of calculation, the time for the vehicle 100 to change from the current lane to the adjacent lane is ignored). If t3 is greater than t2 and the lane intention for the adjacent lane, calculated together with other elements such as the pedestrian's motion, orientation, and deep layer intention, is below the set threshold, it is safe for the vehicle 100 to change to the adjacent lane; otherwise, driving in the adjacent lane is unsafe, and the vehicle 100 can be controlled to slow down or stop.
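The timing comparison above can be written directly in code; the sketch below mirrors t1 = L3/v1, t2 = L1/v2, and t3 = (L3 + L2)/v1, but the way the lane intentions are combined with these times into a single decision is an assumed simplification.

```python
def lane_safety_check(v1, v2, L1, L2, L3, lane_intention_current, lane_intention_adjacent,
                      threshold=0.5):
    """t1: pedestrian reaches the ego-road boundary; t2: vehicle reaches the pedestrian's
    longitudinal position; t3: pedestrian reaches the adjacent-lane boundary. The decision
    combines these times with the lane intentions (illustrative combination)."""
    t1 = L3 / v1
    t2 = L1 / v2
    t3 = (L3 + L2) / v1

    if t1 > t2 and lane_intention_current < threshold:
        return "keep current lane"
    if t3 > t2 and lane_intention_adjacent < threshold:
        return "change to adjacent lane"
    return "slow down or stop"

print(lane_safety_check(v1=1.4, v2=8.0, L1=30.0, L2=3.5, L3=6.0,
                        lane_intention_current=0.7, lane_intention_adjacent=0.2))
```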
In this way, lane-level static and dynamic interaction features are extracted using the road structure of each lane and the relevant vehicles, and the pedestrian's lane intention is inferred in combination with the target pedestrian's behavior features, so the vehicle 100 can output the pedestrian's crossing intention relative to each lane and an autonomous vehicle can respond in advance.
Fig. 6 is a schematic structural diagram of a pedestrian crossing intention estimation device according to an embodiment of the present application. As shown in fig. 6, the apparatus 600 includes an acquisition unit 601 and a processing unit 602.
The acquiring unit 601 is used for acquiring state information of at least one pedestrian around the vehicle, wherein the at least one pedestrian comprises a first pedestrian;
the processing unit 602 is configured to calculate a surface layer intention of the first pedestrian at a first time according to the state information of the first pedestrian at the first time and a deep layer intention of the first pedestrian at a second time, where the second time is before the first time, the deep layer intention is a probability that the first pedestrian passes through a road in the whole life cycle, and the surface layer intention is a probability that the first pedestrian is passing through the road at the current time or is about to pass through the road;
the processing unit 602 is further configured to determine whether the first pedestrian travels through the road at the first time according to the surface layer intention of the first pedestrian at the first time.
In one embodiment, the processing unit 602 is specifically configured to calculate a likelihood probability of a pedestrian feature of the first pedestrian at the first time according to the state information of the first pedestrian at the first time; and calculating the surface layer intention of the first pedestrian at the first moment according to the likelihood probability of the pedestrian feature of the first pedestrian at the first moment and the deep layer intention of the first pedestrian at the second moment.
In one embodiment, the processing unit 602 is further configured to control a speaker to play a warning signal and/or display warning information on a display screen when the surface layer intention of the first pedestrian at the first time is greater than a set threshold.
In one embodiment, the processing unit 602 is further configured to calculate the predicted state information of the first pedestrian at the first time point according to the state information of the first pedestrian at the second time point and the surface layer intention of the first pedestrian at the second time point; and calculating the deep level intention of the first pedestrian at the first moment according to the predicted state information of the first pedestrian at the first moment, the state information of the first pedestrian at the first moment and the deep level intention of the first pedestrian at the second moment.
In one embodiment, the processing unit 602 is further configured to calculate a lane intention of at least one lane of a road on which the vehicle is located according to the state information of the first pedestrian at the first time and the surface intention of the first pedestrian at the first time, where the lane intention is a probability that the first pedestrian passes through the first lane, and the at least one lane includes the first lane.
The present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform any of the methods described above.
The invention provides a computing device, which comprises a memory and a processor, wherein the memory stores executable codes, and the processor executes the executable codes to realize any method.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
Moreover, various aspects or features of embodiments of the application may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer-readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, etc.), optical disks (e.g., Compact Disk (CD), Digital Versatile Disk (DVD), etc.), smart cards, and flash memory devices (e.g., erasable programmable read-only memory (EPROM), card, stick, or key drive, etc.). In addition, various storage media described herein can represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" can include, without being limited to, wireless channels and various other media capable of storing, containing, and/or carrying instruction(s) and/or data.
In the above embodiments, the pedestrian crossing intention estimation apparatus 600 in fig. 6 may be wholly or partially implemented by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It should be understood that, in various embodiments of the present application, the sequence numbers of the above-mentioned processes do not imply an order of execution, and the order of execution of the processes should be determined by their functions and inherent logic, and should not limit the implementation processes of the embodiments of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, which essentially or partly contribute to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or an access network device) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present application, and all the changes or substitutions should be covered by the scope of the embodiments of the present application.

Claims (14)

1. A pedestrian crossing intention estimation method, characterized by comprising:
acquiring state information of at least one pedestrian around a vehicle, wherein the at least one pedestrian comprises a first pedestrian;
calculating a surface layer intention of the first pedestrian at a first moment according to state information of the first pedestrian at the first moment and a deep layer intention of the first pedestrian at a second moment, wherein the second moment is before the first moment, the deep layer intention is the probability that the first pedestrian passes through a road in the whole life cycle, and the surface layer intention is the probability that the first pedestrian is passing through the road at the current moment or is about to pass through the road;
and determining whether the first pedestrian passes through the road at the first moment according to the surface layer intention of the first pedestrian at the first moment.
2. The method according to claim 1, wherein before calculating the surface layer intention of the first pedestrian at the first time according to the state information of the first pedestrian at the first time and the deep layer intention of the first pedestrian at the second time, the method comprises:
calculating the likelihood probability of the pedestrian feature of the first pedestrian at the first moment according to the state information of the first pedestrian at the first moment;
the calculating the surface layer intention of the first pedestrian at the first moment according to the state information of the first pedestrian at the first moment and the deep layer intention of the first pedestrian at the second moment comprises the following steps:
and calculating the surface layer intention of the first pedestrian at the first moment according to the likelihood probability of the pedestrian feature of the first pedestrian at the first moment and the deep layer intention of the first pedestrian at the second moment.
3. The method according to any one of claims 1-2, further comprising:
and when the surface layer intention of the first pedestrian at the first moment is larger than a set threshold value, controlling a loudspeaker to play an early warning signal and/or displaying early warning information on a display screen.
4. The method according to any one of claims 1 to 3, wherein before calculating the surface layer intention of the first pedestrian at the first time according to the state information of the first pedestrian at the first time and the deep layer intention of the first pedestrian at the second time, the method comprises:
calculating the predicted state information of the first pedestrian at the first moment according to the state information of the first pedestrian at the second moment and the surface layer intention of the first pedestrian at the second moment;
and calculating the deep level intention of the first pedestrian at the first moment according to the predicted state information of the first pedestrian at the first moment, the state information of the first pedestrian at the first moment and the deep level intention of the first pedestrian at the second moment.
5. The method according to any one of claims 1-4, further comprising:
and calculating lane intention of at least one lane of a road where a vehicle is located according to the state information of the first pedestrian at the first moment and the surface layer intention of the first pedestrian at the first moment, wherein the lane intention is the probability of the first pedestrian passing through the first lane, and the at least one lane comprises the first lane.
6. A pedestrian crossing intention estimation device characterized by comprising:
an acquisition unit configured to acquire state information of at least one pedestrian around a vehicle, the at least one pedestrian including a first pedestrian;
a processing unit, configured to calculate a surface layer intention of the first pedestrian at a first moment according to state information of the first pedestrian at the first moment and a deep layer intention of the first pedestrian at a second moment, wherein the second moment is before the first moment, the deep layer intention is the probability that the first pedestrian passes through a road over the whole life cycle, and the surface layer intention is the probability that the first pedestrian is passing through the road at the current moment or is about to pass through the road;
the processing unit is further configured to determine whether the first pedestrian passes through the road at the first moment according to the surface layer intention of the first pedestrian at the first moment.
7. The device according to claim 6, wherein the processing unit is specifically configured to calculate a likelihood probability of a pedestrian feature of the first pedestrian at the first moment according to the state information of the first pedestrian at the first moment;
and to calculate the surface layer intention of the first pedestrian at the first moment according to the likelihood probability of the pedestrian feature of the first pedestrian at the first moment and the deep layer intention of the first pedestrian at the second moment.
8. The device according to any one of claims 6 to 7, wherein the processing unit is further configured to control a loudspeaker to play an early warning signal and/or display early warning information on a display screen when the surface layer intention of the first pedestrian at the first moment is greater than a set threshold.
9. The device according to any one of claims 6 to 8, wherein the processing unit is further configured to calculate the predicted state information of the first pedestrian at the first moment according to the state information of the first pedestrian at the second moment and the surface layer intention of the first pedestrian at the second moment;
and to calculate the deep layer intention of the first pedestrian at the first moment according to the predicted state information of the first pedestrian at the first moment, the state information of the first pedestrian at the first moment and the deep layer intention of the first pedestrian at the second moment.
10. The device according to any one of claims 6 to 9, wherein the processing unit is further configured to calculate a lane intention of at least one lane of a road where the vehicle is located according to the state information of the first pedestrian at the first moment and the surface layer intention of the first pedestrian at the first moment, wherein the lane intention is the probability that the first pedestrian passes through a first lane, and the at least one lane comprises the first lane.
11. An apparatus, comprising at least one processor configured to execute instructions stored in a memory, so as to cause a terminal to perform the method according to any one of claims 1 to 5.
12. An automobile for carrying out the method according to any one of claims 1-5.
13. A computer-readable storage medium, on which a computer program is stored which, when executed in a computer, causes the computer to carry out the method of any one of claims 1-5.
14. A computing device comprising a memory and a processor, wherein the memory has stored therein executable code that, when executed by the processor, implements the method of any of claims 1-5.
CN202011176754.4A 2020-10-29 2020-10-29 Pedestrian passing intention estimation method, device, equipment and automobile Active CN114426030B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011176754.4A CN114426030B (en) 2020-10-29 2020-10-29 Pedestrian passing intention estimation method, device, equipment and automobile
PCT/CN2021/095259 WO2022088658A1 (en) 2020-10-29 2021-05-21 Pedestrian crossing intention estimation method and apparatus, device, and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011176754.4A CN114426030B (en) 2020-10-29 2020-10-29 Pedestrian passing intention estimation method, device, equipment and automobile

Publications (2)

Publication Number Publication Date
CN114426030A (en) 2022-05-03
CN114426030B (en) 2024-04-09

Family

ID=81309344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011176754.4A Active CN114426030B (en) 2020-10-29 2020-10-29 Pedestrian passing intention estimation method, device, equipment and automobile

Country Status (2)

Country Link
CN (1) CN114426030B (en)
WO (1) WO2022088658A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115759B (en) * 2023-04-12 2024-04-09 盐城工学院 Road side traffic target detection system and method based on category guidance

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014205953A1 (en) * 2014-03-31 2015-10-01 Robert Bosch Gmbh Method for analyzing a traffic environment situation of a vehicle
CN107221175A (en) * 2017-05-31 2017-09-29 深圳市鸿逸达科技有限公司 A kind of pedestrian is intended to detection method and system
CN107423679A (en) * 2017-05-31 2017-12-01 深圳市鸿逸达科技有限公司 A kind of pedestrian is intended to detection method and system
DE102018104270A1 (en) * 2018-02-26 2019-08-29 Connaught Electronics Ltd. Method for predicting the behavior of at least one pedestrian
CN111008583A (en) * 2019-11-28 2020-04-14 清华大学 Pedestrian and rider posture estimation method assisted by limb characteristics
CN111497840A (en) * 2020-04-27 2020-08-07 清华大学 Calculation method and safety evaluation system for vehicle-pedestrian collision risk domain
US20200283016A1 (en) * 2019-03-06 2020-09-10 Robert Bosch Gmbh Movement prediction of pedestrians useful for autonomous driving
CN111797751A (en) * 2020-06-29 2020-10-20 中国第一汽车股份有限公司 Pedestrian trajectory prediction method, device, equipment and medium

Also Published As

Publication number Publication date
WO2022088658A1 (en) 2022-05-05
CN114426030B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
US11815904B2 (en) Trajectory selection for an autonomous vehicle
US11897518B2 (en) Systems and methods for navigating with sensing uncertainty
US11561551B2 (en) Prioritized constraints for a navigational system
CN109927719B (en) Auxiliary driving method and system based on obstacle trajectory prediction
CN108399218B (en) Automatic driving vehicle positioning based on Walsh kernel projection technology
US10324469B2 (en) System and method for controlling motion of vehicle in shared environment
US11125566B2 (en) Method and apparatus for determining a vehicle ego-position
CN108688660B (en) Operating range determining device
CN112839855B (en) Trajectory prediction method and device
WO2017217265A1 (en) Surrounding environment recognition device
RU2742213C1 (en) Method to control information on lanes, method of traffic control and device for control of information on lanes
CN111788571A (en) Vehicle tracking
MX2015001842A (en) Autonomous control in a dense vehicle environment.
CN108255170A (en) The method for dynamically adjusting the speed control rate of automatic driving vehicle
CN103781685A (en) Autonomous driving control system for vehicle
JP2019039831A (en) Automatic driving device
RU2757234C2 (en) Method and system for calculating data for controlling the operation of a self-driving car
CN114694111A (en) Vehicle positioning
CN114426030B (en) Pedestrian passing intention estimation method, device, equipment and automobile
CN111881245B (en) Method, device, equipment and storage medium for generating visibility dynamic map
CN111959482B (en) Autonomous driving apparatus and method
US11254326B2 (en) Automatic comfort score system based on human driving reference data
CN109314763A (en) Vehicle headway estimates method and vehicle headway estimating device
CN110497906A (en) Control method for vehicle, device, equipment and medium
CN115512542B (en) Track restoration method and system considering shielding based on roadside laser radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant