WO2023206166A1 - Target detection method and apparatus - Google Patents


Publication number
WO2023206166A1
Authority
WO
WIPO (PCT)
Prior art keywords
lane
target
information
congestion level
sensing
Prior art date
Application number
PCT/CN2022/089660
Other languages
English (en)
French (fr)
Inventor
罗竞雄
严官林
羌波
万广南
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2022/089660
Publication of WO2023206166A1

Classifications

    • G — PHYSICS
    • G08 — SIGNALLING
    • G08G — TRAFFIC CONTROL SYSTEMS
    • G08G1/00 — Traffic control systems for road vehicles
    • G08G1/01 — Detecting movement of traffic to be counted or controlled

Definitions

  • the present application relates to the field of perception, and in particular to target detection methods and devices.
  • Intelligent driving can include autonomous driving (also called driverless driving) or assisted driving.
  • Automatic driving means that an automatic driving apparatus in the vehicle can operate the vehicle so that it drives safely without the driver's participation.
  • Assisted driving refers to an auxiliary driving apparatus in the vehicle assisting the driver in driving safely while the vehicle is moving.
  • Intelligent driving vehicles need to sense the environment and make decisions based on the sensing results, such as determining the vehicle speed or driving direction, to prevent collisions and ensure driving safety. Accurate perception of the environment is therefore crucial to intelligent driving. However, current methods of sensing the environment are relatively simple, and their perception accuracy is low.
  • Embodiments of the present application provide target detection methods and devices, which can determine the congestion level of the environment to obtain more accurate detection results.
  • In a first aspect, a target detection method is provided. The execution subject may be a target detection device, or a module used in the target detection device, such as a chip or a chip system.
  • The following description takes the target detection device as the execution subject as an example.
  • The method includes: obtaining first information, which includes first perception information about the environment; obtaining second information, which includes second perception information about the environment; and determining the congestion level of the environment according to the first information and the second information, where the congestion level of the environment is used to detect at least one target in the environment.
  • The target detection device can obtain the first perception information and the second perception information of the environment and determine the congestion level of the environment, so that the congestion level can be used when detecting targets in the environment.
  • That is to say, with the method provided in the first aspect, targets in the environment can be detected according to the congestion level of the environment. For example, different target detection strategies can be used for different congestion levels, making the target detection results more accurate.
  • The first perception information includes information about at least one lane. The second perception information includes position information of at least one target on the at least one lane, or it includes both the position information and the speed information of at least one target on the at least one lane.
  • The target detection device can obtain the information of the at least one lane and the position information of the at least one target on the at least one lane, so that it can divide the targets in the second sensing information by lane and know which targets are on which lane.
  • Alternatively, the target detection device can obtain the lane information, the position information, and the speed information of the at least one target, so that it can divide the targets in the second sensing information by lane and know which targets are on which lane and the speed of each target.
  • Optionally, the method further includes: dividing the at least one target by lane according to the first sensing information and the second sensing information.
  • The target detection device can divide the targets in the second sensing information by lane and know which targets are on which lane, so that it can determine the congestion level of each lane and then, using the lane as the granularity, determine the target detection strategy and obtain more accurate detection results.
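As a minimal illustration of the lane-division step described above (not taken from the patent; the lane representation, the names, and the containment test are assumptions), each lane can be approximated by a lateral extent and each sensed target assigned to the lane that contains its lateral coordinate:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Target:
    x: float  # lateral position (m), from the second sensing information
    y: float  # longitudinal position (m)

@dataclass
class Lane:
    left: float   # left lateral boundary (m), from the first sensing information
    right: float  # right lateral boundary (m)
    targets: List[Target] = field(default_factory=list)

def divide_targets_by_lane(lanes: List[Lane], targets: List[Target]) -> None:
    """Assign each target to the (at most one) lane whose lateral extent contains it."""
    for t in targets:
        for lane in lanes:
            if lane.left <= t.x < lane.right:
                lane.targets.append(t)
                break  # a target belongs to at most one lane
```

In practice the lane geometry would come from lane-line recognition rather than fixed lateral bounds; the sketch only shows the grouping step.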
  • Determining the congestion level of the environment based on the first information and the second information includes: determining, based on the first sensing information and the second sensing information, information about the targets on a first lane, where the first lane is any one of the at least one lane; and determining the congestion level of the first lane according to that information. The information about the targets on the first lane includes at least one of the following: the number of targets on the first lane, the speed of the targets on the first lane, the average distance between two adjacent targets on the first lane, the minimum distance between two adjacent targets on the first lane, or the traffic flow density of the first lane.
  • The target detection device can determine the information about the targets on the first lane according to the first sensing information and the second sensing information.
  • Parameters such as the number of targets on the first lane, their speeds, the average distance between two adjacent targets, the minimum distance between two adjacent targets, or the traffic flow density of the first lane reflect the congestion situation in the first lane. Therefore, a more accurate congestion level can be determined based on the information about the targets on the first lane.
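The parameters listed above can be computed from per-lane observations; the following sketch is illustrative only (the tuple representation and the observed-length normalisation for density are assumptions, not from the patent):

```python
def lane_statistics(targets, observed_length):
    """Compute the per-lane parameters named in the text.

    targets: list of (longitudinal_position_m, speed_mps) tuples for one lane.
    observed_length: road length covered by the sensors (m), used for density.
    """
    count = len(targets)
    speeds = [s for _, s in targets]
    positions = sorted(p for p, _ in targets)
    # Gaps between longitudinally adjacent targets on the same lane.
    gaps = [b - a for a, b in zip(positions, positions[1:])]
    return {
        "count": count,
        "mean_speed": sum(speeds) / count if count else 0.0,
        "avg_gap": sum(gaps) / len(gaps) if gaps else None,
        "min_gap": min(gaps) if gaps else None,
        "density": count / observed_length if observed_length else 0.0,
    }
```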
  • Optionally, the method further includes: obtaining at least one threshold, including at least one of the following: a speed threshold or a spacing threshold. The speed threshold is the median of the speeds of the multiple targets on the first lane or the maximum of those speeds, or the speed threshold is preset. The spacing threshold is obtained based on the speed threshold and a time interval, where the time interval is preset.
  • The target detection device can determine the at least one threshold based on the information about the targets on the first lane, or the at least one threshold can be preset, so that the device can determine the congestion level of the first lane based on the target information and the at least one threshold.
  • It can be understood that if the above threshold is preset, the operation of the target detection device is simplified; if it is calculated from the information about the targets on the first lane, the robustness and adaptability of the target detection method are improved.
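The threshold derivation described above can be sketched as follows (the 2-second default time interval is an assumed example value; the patent only says the interval is preset):

```python
import statistics

def compute_thresholds(lane_speeds, time_interval=2.0,
                       preset_speed=None, use_max=False):
    """Speed threshold: preset, or the median (or maximum) of the speeds of
    the targets on the lane. Spacing threshold: the speed threshold
    multiplied by a preset time interval (the distance covered in it)."""
    if preset_speed is not None:
        speed_threshold = preset_speed
    elif use_max:
        speed_threshold = max(lane_speeds)
    else:
        speed_threshold = statistics.median(lane_speeds)
    spacing_threshold = speed_threshold * time_interval
    return speed_threshold, spacing_threshold
```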
  • Determining the congestion level of the first lane based on the information about the targets on the first lane includes: determining the congestion level of the first lane based on that information and the at least one threshold.
  • The target detection device can compare the parameters in the information about the targets on the first lane with the at least one threshold to determine the congestion level of the first lane.
  • The amount of computation is small, the implementation is relatively easy, and the cost is low.
  • the congestion level of the first lane includes 4 levels, 5 levels or 6 levels.
  • the congestion level of the first lane can be set as needed to suit different scenarios.
  • Determining the congestion level of the first lane based on the information about the targets on the first lane and the at least one threshold includes: if the number of targets on the first lane is greater than or equal to a first quantity threshold and the speed of the targets on the first lane is less than or equal to the speed threshold, the congestion level of the first lane is level 5; or, if the number of targets on the first lane is greater than or equal to the first quantity threshold, the congestion level is level 4; or, if the average distance between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level is level 3; or, if the minimum distance between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level is level 2; or, if the minimum distance between two adjacent targets on the first lane is greater than the spacing threshold, and the number of targets on the first lane is greater
  • In this case, the information about the targets on the first lane includes multiple parameters: the number of targets on the first lane, the speed of the targets on the first lane, the average distance between two adjacent targets on the first lane, and the minimum distance between two adjacent targets on the first lane.
  • The target detection device can determine the congestion level of the first lane based on these parameters and the thresholds corresponding to them. Since multiple parameters are considered together, the determined congestion level is relatively accurate.
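Evaluating those rules in order, most congested first, can be sketched as below. Note that the source text is cut off before the final rule, so the level-1 fallback here is an assumption, and the use of the mean target speed is likewise an interpretation of "the speed of the targets on the first lane":

```python
def congestion_level(count, mean_speed, avg_gap, min_gap,
                     count_threshold, speed_threshold, spacing_threshold):
    """Rule cascade from the text (count-based variant), highest level first."""
    if count >= count_threshold and mean_speed <= speed_threshold:
        return 5   # many slow targets: heavily congested
    if count >= count_threshold:
        return 4   # many targets, but still moving
    if avg_gap is not None and avg_gap <= spacing_threshold:
        return 3   # targets close together on average
    if min_gap is not None and min_gap <= spacing_threshold:
        return 2   # at least one pair of close targets
    return 1       # assumed fallback; the source text is truncated here
```

The density-based variant described below differs only in its first two conditions, which compare the traffic flow density against a density threshold instead of the target count.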
  • Alternatively, determining the congestion level of the first lane based on the information about the targets on the first lane and the at least one threshold includes: if the traffic flow density of the first lane is greater than or equal to a first density threshold and the speed of the targets on the first lane is less than or equal to the speed threshold, the congestion level of the first lane is level 5; or, if the traffic flow density of the first lane is greater than or equal to the first density threshold, the congestion level is level 4; or, if the average distance between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level is level 3; or, if the minimum distance between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level is level 2.
  • In this case, the information about the targets on the first lane includes multiple parameters: the traffic flow density of the first lane, the speed of the targets on the first lane, the average distance between two adjacent targets on the first lane, and the minimum distance between two adjacent targets on the first lane.
  • The target detection device can determine the congestion level of the first lane based on these parameters and the thresholds corresponding to them. Since multiple parameters are considered together, the determined congestion level is relatively accurate.
  • Optionally, the method further includes: determining a sensor fusion strategy for the first lane according to its congestion level, where the sensor fusion strategy is used to estimate the actual positions of the targets on the first lane.
  • The target detection device can determine a suitable sensor fusion strategy according to the congestion level of the first lane, which not only makes the target positions estimated with that strategy more accurate, but also allows a more accurate target count to be determined.
  • Determining the sensor fusion strategy of the first lane according to its congestion level includes: if the congestion level of the first lane is greater than or equal to a first level, the sensor fusion strategy of the first lane is a first fusion strategy; or, if the congestion level is less than the first level, the sensor fusion strategy is a second fusion strategy.
  • Different congestion levels can correspond to different sensor fusion strategies, so that whatever congestion level the first lane is at, the target positions are estimated using the fusion strategy corresponding to that level, giving accurate results.
  • Optionally, the first sensing information also includes position information of at least one target on the at least one lane.
  • The position information in the first sensing information includes a first distance of a first target, a first angle of the first target, the abscissa of the first target, and the ordinate of the first target. The first target is any target on the first lane and is different from the target on which the target detection device is located. The first distance is the distance between the target detection device and the first target; the first angle is the angle between the forward direction of the target detection device and a first connecting line, where the first connecting line connects the target detection device and the first target. The position information in the second sensing information includes the abscissa and the ordinate of a second target, where the second target is, among the at least one target corresponding to the second sensing information, the target that corresponds to the first target.
  • The first perception information thus includes the first distance, the first angle, the abscissa, and the ordinate of the first target, and the second perception information includes the abscissa and the ordinate of the second target. When the target detection device determines that the sensor fusion strategy of the first lane is the first fusion strategy, the abscissa and ordinate of the first target and the abscissa and ordinate of the second target are fused to determine the actual position of the target on the first lane. When the device determines that the strategy is the second fusion strategy, the first angle, the first distance, and the abscissa and ordinate of the second target are fused to determine the actual position of the target on the first lane.
  • In this way, different sensor fusion strategies can be used to estimate the position.
  • The first fusion strategy fuses the abscissa and ordinate of the first target with the abscissa and ordinate of the second target to obtain the actual position of the target on the first lane.
  • The second fusion strategy fuses the first angle, the first distance, and the abscissa and ordinate of the second target to obtain the actual position of the target on the first lane.
  • When the congestion level of the first lane is low (that is, the first lane is less congested), the first angle and first distance sensed by the first sensing device are more accurate (for example, image recognition on photos taken by a camera device can yield an accurate first angle and first distance). Therefore, fusing the first angle, the first distance, and the abscissa and ordinate of the second target (that is, using the second fusion strategy) yields a more accurate position estimate.
  • Conversely, when the first angle and first distance sensed by the first sensing device have a large error (for example, when the first angle and first distance obtained through image recognition have a large error), fusing them with the abscissa and ordinate of the second target (the second fusion strategy) would give an inaccurate position. In that case, the abscissa and ordinate of the first target can instead be fused with the abscissa and ordinate of the second target (that is, the first fusion strategy) to estimate a more accurate position.
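A sketch of the two fusion strategies follows. The patent does not specify the fusion operator or the angle convention, so the unweighted averaging and the angle measured from the forward (y) axis are assumptions made for illustration:

```python
import math

def first_fusion_strategy(x1, y1, x2, y2):
    """Fuse the Cartesian estimates from both sensing sources.
    Used when the lane is congested and the angle/distance estimate is poor."""
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def second_fusion_strategy(first_angle, first_distance, x2, y2):
    """Fuse the first sensor's bearing/range with the second sensor's
    Cartesian position. Used when the lane is not congested.
    first_angle is taken from the forward (y) axis, in radians."""
    x1 = first_distance * math.sin(first_angle)
    y1 = first_distance * math.cos(first_angle)
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0
```

A real implementation would weight each source by its estimated error (for example, a Kalman-style covariance weighting) rather than averaging equally.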
  • Optionally, the method further includes: determining, according to the congestion level of the first lane, a strategy for sensing targets on the first lane, where the strategy is used to sense whether a target exists on the first lane.
  • The target detection device can determine a suitable target-sensing strategy for the first lane according to its congestion level, so as to avoid missing real targets or identifying false targets.
  • Determining the target-sensing strategy of the first lane based on its congestion level includes: if the congestion level of the first lane is greater than or equal to a second level, using a first sensing method to sense whether there is a target on the first lane; or, if the congestion level is less than the second level, using a second sensing method.
  • Different congestion levels thus correspond to different strategies for sensing targets on the first lane.
  • The higher the congestion level of the first lane (that is, the more congested it is), the looser the sensing strategy, to avoid missing targets; the lower the congestion level (that is, the less congested it is), the stricter the sensing strategy, to avoid identifying false targets.
  • The first sensing method determines as targets on the first lane: the targets obtained by sensor fusion of the first sensing information and the second sensing information, the targets in the first sensing information that were not sensor-fused, and the targets in the second sensing information that were not sensor-fused. The second sensing method determines as targets on the first lane only the targets obtained by sensor fusion of the first sensing information and the second sensing information.
  • When the lane is congested, the target-sensing strategy can be set loosely: the acquired targets (the targets obtained by sensor fusion of the first and second sensing information, the unfused targets in the first sensing information, and the unfused targets in the second sensing information) are not filtered but are directly determined as targets on the first lane.
  • When the lane is not congested, the target-sensing strategy can be set more strictly: the acquired targets are filtered, and only the targets that pass the filter (such as the targets obtained by sensor fusion of the first and second sensing information) are determined as targets on the first lane, preventing false targets from being reported.
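The loose/strict selection above can be sketched as follows (the cut-off level and the list representation of target sets are assumptions for illustration):

```python
def targets_on_lane(fused, unfused_first, unfused_second,
                    lane_congestion_level, second_level=3):
    """First sensing method (loose, congested lane): keep fused targets plus
    the unfused targets from either source, to avoid missing real targets.
    Second sensing method (strict, uncongested lane): keep fused targets
    only, to avoid reporting false targets."""
    if lane_congestion_level >= second_level:
        return fused + unfused_first + unfused_second
    return fused
```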
  • Optionally, the first information and the second information are obtained by different types of sensing devices, or by the same type of sensing device.
  • If they are obtained by different types of sensing devices, the first information can be obtained by a sensing device sensitive to graphics (such as lane lines), and the second information by a sensing device sensitive to targets (such as vehicles, pedestrians, trees, or roadside devices), yielding more accurate first and second information.
  • If they are obtained by the same type of sensing device, the first information and the second information contain similar content, which is more convenient to process and easy to implement. In that case, multiple low-cost sensing devices can be placed at different locations to obtain the first information and the second information, reducing cost.
  • Optionally, the sensing device that obtains the first information is a camera device, a lidar, a millimeter-wave radar, or a sonar; the sensing device that obtains the second information is a millimeter-wave radar, a lidar, or a sonar.
  • The first information can thus be obtained through a camera device, lidar, millimeter-wave radar, or sonar sensitive to it, yielding relatively accurate first information, and the second information through a millimeter-wave radar, lidar, or sonar sensitive to it, yielding more accurate second information. In this way, a more accurate congestion level can be determined from the first information and the second information.
  • Optionally, the method further includes: sending information about the congestion level of the environment.
  • The target detection device can send the congestion-level information to other devices or modules so that they can use it.
  • For example, the target detection device can send the congestion-level information to an intelligent driving vehicle so that the vehicle can select a better road planning scheme and/or vehicle control strategy based on it, and/or so that multiple vehicles can share the congestion level of the environment, thereby forming a regional congestion map that facilitates each vehicle's own driving planning and/or regional traffic flow control on the road (for example, traffic light control).
  • When the target detection device includes an ADAS, the congestion-level information can be sent to the target selection module, so that the target selection module adjusts its target-selection algorithm based on the information; and/or the congestion-level information can be sent to the control module, so that the control module adjusts parameters of the functions it controls based on the information (for example, at a higher congestion level, adjusting parameters of the AEB function so that braking is more sensitive and the braking time is shorter).
  • In a second aspect, a target detection device is provided for implementing the above method.
  • the target detection device may be the target detection device in the above-mentioned first aspect, or a device including the above-mentioned target detection device.
  • The target detection device includes modules, units, or means corresponding to the above method, which can be implemented by hardware, by software, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the above functions.
  • the target detection device may include a processing module.
  • This processing module can be used to implement the processing functions in the above first aspect and any possible implementation manner thereof.
  • the processing module may be, for example, a processor.
  • the target detection device may further include a transceiver module.
  • The transceiver module, which may also be called a transceiver unit, is used to implement the sending and/or receiving functions in the above first aspect and any possible implementation thereof.
  • The transceiver module can be composed of a transceiver circuit, a transceiver, or a communication interface.
  • Optionally, the transceiver module includes a sending module and a receiving module, respectively configured to implement the sending and receiving functions in the above first aspect and any possible implementation thereof.
  • In a third aspect, a target detection device is provided, including a processor; the processor is configured to be coupled to a memory and, after reading instructions in the memory, execute the method described in the first aspect according to the instructions.
  • the target detection device may be the target detection device in the above-mentioned first aspect, or a device including the above-mentioned target detection device.
  • the target detection device further includes a memory, and the memory is used to store necessary program instructions and data.
  • Optionally, the target detection device is a chip or a chip system. When it is a chip system, it may consist of a chip, or may include a chip and other discrete devices.
  • In a fourth aspect, a target detection device is provided, including a processor and an interface circuit; the interface circuit is used to receive a computer program or instructions and transmit them to the processor; the processor is used to execute the computer program or instructions so that the target detection device performs the method described in the first aspect.
  • Optionally, the target detection device is a chip or a chip system. When it is a chip system, it may consist of a chip, or may include a chip and other discrete devices.
  • In a fifth aspect, a computer-readable storage medium is provided, storing instructions that, when run on a computer, enable the computer to perform the method described in the first aspect.
  • In a sixth aspect, a computer program product containing instructions is provided that, when run on a computer, enables the computer to perform the method described in the first aspect.
  • In a seventh aspect, an intelligent driving vehicle is provided, including a target detection device for performing the method described in the first aspect.
  • Figure 1 is a schematic diagram of the target detection system architecture provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of the hardware structure of the target detection device provided by an embodiment of the present application.
  • Figure 3 is a schematic flow chart of the target detection method provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of a lane provided by an embodiment of the present application.
  • Figure 5 is a schematic structural diagram of a target detection device provided by an embodiment of the present application.
  • The methods provided by the embodiments of the present application can be used in various target detection systems to sense the congestion level of the environment. For example, the congestion level of a certain road or lane can be sensed, so that targets in the environment can be detected using the congestion level, yielding more accurate detection results.
  • the following uses the target detection system 10 shown in Figure 1 as an example to describe the method provided by the embodiment of the present application.
  • the target detection system 10 may include a target detection device 101 .
  • the target detection device 101 can be used to implement the target detection method provided by the embodiment of the present application.
  • The target detection device 101 can obtain the first information and the second information, and determine the congestion level of the environment based on them, so that the congestion level can be used to detect targets in the environment and obtain more accurate detection results.
  • This process is described in detail in the embodiment shown in Figure 3 below and is not repeated here.
  • Figure 1 is only a schematic diagram and does not constitute a limitation on the applicable scenarios of the technical solution provided by this application.
  • the target detection device in the embodiment of the present application can be any device with computing capabilities.
  • the target detection device may include a handheld device, a vehicle-mounted device, a sensing device, a computing device or an intelligent driving vehicle.
  • For example, the target detection device may include an advanced driver-assistance system (ADAS), various devices with computing capabilities in the car, sensing devices in the car (such as camera devices), or intelligent driving vehicles with automatic driving functions or assisted driving functions.
  • ADAS may include a sensor perception module, a sensor fusion module, a target selection module and a control module.
  • the sensor perception module is used to sense information about the surrounding environment, or to obtain information about the surrounding environment from other devices.
  • the sensor sensing module can obtain the first information and the second information.
  • the sensor fusion module may be used to fuse information about the surrounding environment, for example, to determine the congestion level of the environment based on the first information and the second information.
  • the target selection module can be used to select targets, for example, to select targets for tracking or to select targets to prevent collisions.
  • The control module can be used to control at least one of the following functions: the automatic emergency braking (AEB) function, the adaptive cruise control (ACC) function, or the forward collision warning (FCW) function.
  • Various devices with computing capabilities in the car may include: a gateway, a vehicle T-Box (telematics box), a body control module (BCM), a cockpit domain controller (CDC), a multi-domain controller (MDC), a vehicle control unit (VCU), an electronic control unit (ECU), a vehicle domain controller (VDC), or a vehicle integrated/integration unit (VIU), etc.
  • the target detection device 101 also has a sensing capability for sensing the first information and/or the second information.
  • the target detection system 10 also includes a sensing device 102 and/or a sensing device 103 that are communicatively connected to the target detection device 101 .
  • the sensing device 102 can obtain the first information and send the first information to the target detection device 101 .
  • the sensing device 103 can obtain the second information and send the second information to the target detection device 101 .
• the sensing device in the embodiment of the present application, for example the sensing device 102 or the sensing device 103, can be any device with sensing capabilities.
  • sensing devices may include cameras, radar, or sonar.
  • the camera device may include a monocular camera, a binocular camera, a trinocular camera or a depth camera, etc.
  • Radar can be lidar or millimeter wave radar, etc.
  • the target detection system 10 shown in FIG. 1 is only used as an example and is not used to limit the technical solution of the present application. Those skilled in the art should understand that during specific implementation, the target detection system 10 may also include other equipment, and the number of target detection devices or sensing devices may also be determined according to specific needs without limitation.
• each device, for example a target detection device or a sensing device, may be a general device or a special device, which is not specifically limited in this embodiment of the present application.
• the relevant functions of each device, for example a target detection device or a sensing device, can be implemented by one device, implemented by multiple devices together, or implemented by one or more functional modules in one device, which is not specifically limited in the embodiments of this application. It can be understood that the above functions can be network elements in hardware devices, software functions running on dedicated hardware, a combination of hardware and software, or virtualization functions instantiated on a platform (for example, a cloud platform).
• each device, for example a target detection device or a sensing device, may adopt the hardware structure shown in FIG. 2. FIG. 2 shows a schematic diagram of the hardware structure of a target detection device applicable to embodiments of the present application.
  • the target detection device 20 includes at least one processor 201 and at least one communication interface 204, which are used to implement the method provided by the embodiment of the present application.
  • the target detection device 20 may also include a communication line 202 and a memory 203 .
• the processor 201 can be a general central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits used to control the execution of the program of the present application.
  • Communication line 202 may include a path, such as a bus, that carries information between the above-mentioned components.
  • Communication interface 204 is used to communicate with other devices or communication networks.
• the communication interface 204 can be any device such as a transceiver, for example an Ethernet interface, a radio access network (RAN) interface, a wireless local area network (WLAN) interface, a transceiver, pins, a bus, or a transceiver circuit, etc.
  • the memory 203 is used to store computer execution instructions involved in executing the solutions provided by the embodiments of this application, and is controlled by the processor 201 for execution.
  • the processor 201 is used to execute computer execution instructions stored in the memory 203, thereby implementing the method provided by the embodiment of the present application.
  • the computer-executed instructions in the embodiments of the present application may also be called application codes, which are not specifically limited in the embodiments of the present application.
  • the coupling in the embodiment of this application is an indirect coupling or communication connection between devices, units or modules, which may be in electrical, mechanical or other forms, and is used for information interaction between devices, units or modules.
  • the processor 201 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 2 .
  • the target detection device 20 may include multiple processors, such as the processor 201 and the processor 207 in FIG. 2 . Each of these processors may be a single-CPU processor or a multi-CPU processor.
  • the target detection device 20 may also include an output device 205 and/or an input device 206. Output device 205 and processor 201 are coupled.
  • the target detection device 20 also includes a sensing module (not shown in FIG. 2 ), configured to sense the first information and/or the second information.
  • the perception module may include at least one of the following: camera, radar or sonar.
• the composition structure shown in Figure 2 does not constitute a limitation on the target detection device.
• the target detection device may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently.
  • the target detection method may include S301-S303:
  • the target detection device obtains the first information.
  • the target detection device in S301 may be the target detection device 101 shown in FIG. 1 .
  • the first information may include first perception information about the environment.
  • the environment may be the environment in which the target detection device is located, or the environment around the target detection device.
• the environment may include the lane in which the intelligent driving vehicle is located. It can be understood that the embodiment of the present application is explained by taking an environment including at least one lane as an example; the situation where the environment includes other scenes is similar, and reference may be made to the corresponding description in the embodiment of the present application, which will not be elaborated.
  • the first sensing information may include information about at least one lane.
  • the first sensing information includes information of lane 1, or the first sensing information includes information of lane 1 and information of lane 2.
  • the information of any lane may include the location information of the lane.
  • information for a lane may include a function indicating the location of the lane.
  • the information of a lane may include the position information of the curb of the road. In this case, the entire road may be regarded as a lane.
  • the information of any lane can also indicate the width of the lane.
  • x is the ordinate in the coordinate system where the target detection device is located
  • y is the abscissa in the coordinate system.
  • the coordinate system in which the target detection device is located may be a coordinate system with the target detection device as the origin.
• after the target detection device obtains the information of the lane 402, it can determine the position of the lane 402 based on the information of the lane 402.
• similarly, after the target detection device obtains the information of the lane 403, it can determine the position of the lane 403 based on the information of the lane 403.
  • lane 401, lane 402 and lane 403 are respectively regarded as one lane.
  • all lanes on the road can also be regarded as one lane.
  • lane 401, lane 402 and lane 403 can be regarded as one lane.
  • the lane information can indicate the position of the entire road.
  • the lane information includes position information of the curb of the road.
  • the lane information also includes road width information.
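As an illustration of this lane representation (the polynomial form of the lane function and all numeric values below are assumptions; the embodiment only states that lane information may include a function indicating the lane position and, optionally, a width), a lane can be sketched as a center-line function plus a width in the coordinate system of the target detection device:

```python
# Sketch: a lane described by a polynomial center line y = c0 + c1*x + c2*x^2
# plus a width, in a coordinate system centered on the target detection device
# (x longitudinal, y lateral). Coefficients and the helper are illustrative.

def lane_center_y(coeffs, x):
    """Lateral position of the lane center at longitudinal distance x."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

def point_in_lane(coeffs, width, x, y):
    """True if point (x, y) lies within half a lane width of the center line."""
    return abs(y - lane_center_y(coeffs, x)) <= width / 2

# A straight lane centered at y = 0, 3.5 m wide.
straight = [0.0, 0.0, 0.0]
print(point_in_lane(straight, 3.5, x=20.0, y=1.0))   # inside the lane
print(point_in_lane(straight, 3.5, x=20.0, y=3.0))   # outside the lane
```

When the first sensing device can only detect the curb, the same representation can hold the curb position as the boundary function and the road width in place of the lane width.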
  • the target detection device obtains the second information.
  • the second information may include second perception information about the environment.
  • the environment may be the environment in which the target detection device is located, or the environment surrounding the target detection device.
  • the second sensing information may include position information of at least one target on at least one lane, or the second sensing information includes position information of at least one target on at least one lane, and speed information of at least one target on at least one lane.
  • the location information of at least one target on at least one lane may indicate the location of the at least one target.
  • the speed information of the at least one target may indicate the speed of the at least one target.
  • the position information of at least one target may include the coordinates of the target in the Cartesian coordinate system, or the coordinates of the target in the polar coordinate system, without limitation.
• the above-mentioned Cartesian coordinate system or polar coordinate system is the coordinate system in which the target detection device is located.
  • the target detection device can obtain the first information and the second information in various ways. This is explained in detail below.
  • the first information is obtained by the target detection device from the first sensing device. That is to say, the first sensing device can obtain the first information and send it to the target detection device.
  • the first sensing device may be the sensing device 102 shown in FIG. 1 .
  • the first sensing device is a camera device, lidar, millimeter wave radar or sonar.
• the lane information may include a function indicating the position of the lane, or the lane information may include a function indicating the position of the lane and information indicating the width of the lane.
  • the first sensing device is a millimeter-wave radar
• the millimeter-wave radar may not be able to "see" or detect the lane lines, but it can "see" or detect the curb, so the lane information can include the position information of the curb of the road, or the position information of the curb of the road and the width information of the road.
  • the first information is obtained locally by the target detection device. That is to say, the target detection device may have sensing capabilities.
  • the target detection device is equipped with a sensing device, such as a camera device, lidar, millimeter wave radar or sonar, and the first information is obtained through the sensing device.
  • the second information is obtained by the target detection device from the second sensing device. That is to say, the second sensing device can obtain the second information and send it to the target detection device.
  • the second sensing device may be the sensing device 103 shown in FIG. 1 .
  • the second sensing device is millimeter wave radar, lidar or sonar.
  • the second information is obtained locally by the target detection device. That is to say, the target detection device may have sensing capabilities.
  • the target detection device is configured with a sensing device, such as lidar, millimeter wave radar or sonar, and acquires the second information through the sensing device.
• the first sensing device and the second sensing device above are only exemplary.
• the first sensing device and/or the second sensing device can also be other devices capable of detecting or "seeing" targets, lane lines, curbs or other objects, without restriction.
• the second sensing information includes the position information of target 4011, the position information of target 4012, the position information of target 4021 and the position information of target 4031; or, the second sensing information includes the position information and speed information of target 4011, the position information and speed information of target 4012, the position information and speed information of target 4021, and the position information and speed information of target 4031.
  • first sensing device and the second sensing device may be of the same or different types, which will be described in detail below.
  • the first sensing device and the second sensing device are different types of sensing devices. That is to say, the first information and the second information are obtained by different types of sensing devices.
  • the first sensing device is a camera device
  • the second sensing device is a millimeter wave radar.
• the first information can be obtained by a sensing device sensitive to graphics (such as lane lines), and the second information can be obtained by a sensing device sensitive to targets (such as vehicles, pedestrians, trees or roadside devices), so as to obtain more accurate first information and second information.
  • the first sensing device and the second sensing device are the same type of sensing device. That is to say, the first information and the second information are obtained by the same type of sensing device.
  • the first sensing device and the second sensing device are both millimeter wave radars.
  • the first information and the second information include similar information, which is more convenient to process and easy to implement.
  • multiple lower-cost sensing devices such as millimeter-wave radars are placed at different locations to obtain the first information and the second information, thereby reducing costs.
• S301 may be executed first and then S302, or S302 may be executed first and then S301, or S301 and S302 may be executed at the same time, without limitation.
• after the target detection device obtains the first information and the second information, it can determine the congestion level of the environment based on the first information and the second information. Considering that in practical applications targets usually drive according to certain rules, for example according to lanes, the congestion situation in different lanes may differ. Therefore, the method provided by the embodiments of the present application can determine the congestion level per lane, so that targets on a lane can be detected based on the congestion level of that lane.
  • the target detection device can divide at least one target according to lanes according to the first perception information and the second perception information, so that the target detection device knows which targets are on which lane.
• the target detection device determines in which lane each of the at least one target is located based on the position of the at least one lane indicated by the first sensing information and the position of the at least one target indicated by the second sensing information. Specifically, if the position of the target indicated by the second sensing information overlaps with the position of a certain lane indicated by the first sensing information, the target detection device determines that the target is on this lane; if the position of the target indicated by the second sensing information does not overlap with the position of the lane indicated by the first sensing information, the target detection device determines that the target is not on this lane. It can be understood that if the second sensing information also includes speed information of at least one target on at least one lane, the target detection device can also determine the speed of the target on at least one lane.
• for example, the first perception information includes information about lane 401, information about lane 402 and information about lane 403, and the second perception information includes the position information of target 4011, the position information of target 4012, the position information of target 4021 and the position information of target 4031.
• the target detection device can determine that the position of lane 401 overlaps with the position of target 4011 and also overlaps with the position of target 4012, so target 4011 and target 4012 are located on lane 401.
• the target detection device can determine that the position of lane 402 overlaps with the position of target 4021, so target 4021 is located on lane 402.
• the target detection device can determine that the position of lane 403 overlaps with the position of target 4031, so target 4031 is located on lane 403.
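The lane-division step above can be sketched as follows. The lane bands and target coordinates are invented for illustration; only the overlap test itself mirrors the text:

```python
# Sketch: assign each target to the lane whose lateral extent contains its
# position. Lanes are given as (name, y_min, y_max) bands in the coordinate
# system of the target detection device; targets as (name, x, y) positions.
# All numeric values are illustrative assumptions.

def assign_targets_to_lanes(lanes, targets):
    assignment = {name: [] for name, _, _ in lanes}
    for t_name, _, t_y in targets:
        for l_name, y_min, y_max in lanes:
            if y_min <= t_y <= y_max:       # target position overlaps the lane
                assignment[l_name].append(t_name)
                break
    return assignment

lanes = [("lane401", -1.75, 1.75), ("lane402", 1.75, 5.25), ("lane403", 5.25, 8.75)]
targets = [("4011", 30.0, 0.5), ("4012", 55.0, -1.0),
           ("4021", 40.0, 3.0), ("4031", 25.0, 6.0)]
print(assign_targets_to_lanes(lanes, targets))
```

With these values, targets 4011 and 4012 fall into lane401, target 4021 into lane402, and target 4031 into lane403, matching the example in the text.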
  • the target detection device determines the congestion level of the environment based on the first information and the second information.
  • the congestion level of the environment can be used to detect at least one target in the environment.
• the congestion level of the environment may be used to determine a sensor fusion strategy, so that the first device estimates the actual location of one or more targets in the environment; and/or, the congestion level of the environment may be used to determine a strategy for sensing targets, so that the second device senses whether there is a target in the environment.
  • the first device and the second device may be the same device or different devices. It can be understood that the first device and/or the second device may also be a target detection device.
• the process by which the target detection device determines the sensor fusion strategy based on the congestion level of the environment will be introduced in S304 below, and the process by which the target detection device determines the strategy for sensing the target based on the congestion level of the environment will be introduced in S305 below; neither will be described in detail here.
• the following introduces the process of the target detection device determining the congestion level of the environment based on the first information and the second information. It can be understood that if the environment includes multiple lanes, the target detection device can repeatedly perform the following process to determine the congestion level of each lane, thereby obtaining the congestion level of the environment.
  • the target detection device determines the information of the target in the first lane of at least one lane according to the first sensing information and the second sensing information, and determines the first lane based on the information of the target in the first lane. congestion level.
  • the first lane is any one of at least one lane, or in other words, the first lane is any lane in the environment.
• the information about the targets on the first lane includes at least one of the following: the number of targets on the first lane, the speed of the targets on the first lane, the average spacing between two adjacent targets on the first lane, the minimum spacing between two adjacent targets on the first lane, or the traffic density of the first lane.
  • the speed of the target on the first lane may be the average speed of the target on the first lane, the minimum speed of the target on the first lane, or the maximum speed of the target on the first lane.
  • the traffic density of the first lane may represent the number of objects within a preset distance on the first lane. Taking the preset distance as 50 meters and the traffic density of the first lane as 10 as an example, the traffic density of the first lane means that there are 10 targets within 50 meters of the first lane.
  • the target detection device can determine which targets are on the first lane, that is, the target on the first lane is obtained. The number of targets. If the second sensing information includes speed information of at least one target on at least one lane, the target detection device can also directly obtain the speed of the target on the first lane. If the second sensing information does not include the speed information of at least one target on at least one lane, the target detection device can acquire the second sensing information multiple times within a period of time, and determine the target on the first lane based on the second sensing information acquired multiple times. The speed of each target is obtained, and then the speed of the target on the first lane is obtained.
• the target detection device can obtain, based on the position information of the targets on the first lane, the average distance between two adjacent targets on the first lane, the minimum distance between two adjacent targets on the first lane, or the traffic density of the first lane.
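A minimal sketch of deriving these per-lane statistics. The 50-meter preset distance follows the example above; the data layout (longitudinal positions in meters, speeds in m/s) is an assumption:

```python
# Sketch: compute the per-lane target statistics listed in the text from
# target positions (longitudinal x, meters) and speeds (m/s).
# All input values are illustrative assumptions.

def lane_target_info(positions, speeds, preset_distance=50.0):
    xs = sorted(positions)
    gaps = [b - a for a, b in zip(xs, xs[1:])]   # spacing of adjacent targets
    return {
        "count": len(xs),                                    # number of targets
        "avg_speed": sum(speeds) / len(speeds),
        "avg_gap": sum(gaps) / len(gaps) if gaps else None,  # average spacing
        "min_gap": min(gaps) if gaps else None,              # minimum spacing
        # traffic density: number of targets within the preset distance
        "density": sum(1 for x in xs if x <= preset_distance),
    }

info = lane_target_info(positions=[8.0, 20.0, 26.0, 60.0],
                        speeds=[5.0, 6.0, 4.0, 9.0])
print(info)
```

If the second sensing information carries no speeds, the `speeds` input could instead be estimated from positions acquired at several points in time, as the text describes.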
  • the object detection device can determine the congestion level of the first lane based on the information about the objects on the first lane. For example, the object detection device determines the congestion level of the first lane based on information about the objects on the first lane and at least one threshold.
• At least one threshold may include at least one of the following: a speed threshold, a spacing threshold, a quantity threshold (such as a first quantity threshold and/or a second quantity threshold), or a density threshold (such as a first density threshold and/or a second density threshold).
  • the target detection device acquires at least one threshold. For example, the target detection device determines the speed threshold based on the speed of multiple targets on the first lane.
  • the speed threshold is the median of the speeds of the multiple targets on the first lane, the minimum value of the speeds of the multiple targets on the first lane, or the maximum value of the speeds of the multiple targets on the first lane.
  • the target detection device obtains the distance threshold based on the speed threshold and the time interval.
  • the speed threshold may be preset, or determined by the target detection device based on the speed of multiple targets on the first lane.
  • the time interval is preset.
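Read this way, the thresholds could be derived as follows. The median choice matches one of the options above, while the 2-second time interval is an illustrative assumption:

```python
# Sketch: derive a speed threshold as the median of the observed target
# speeds on the lane, then a distance threshold as speed threshold times a
# preset time interval (a headway). The 2-second interval is illustrative.
import statistics

def speed_threshold(speeds):
    return statistics.median(speeds)

def distance_threshold(speeds, time_interval=2.0):
    return speed_threshold(speeds) * time_interval

speeds = [4.0, 6.0, 5.0, 10.0, 7.0]
print(speed_threshold(speeds))      # median speed of the lane's targets
print(distance_threshold(speeds))   # headway-based spacing threshold
```

The minimum or maximum of the observed speeds, also named in the text, would be `min(speeds)` or `max(speeds)` in place of the median.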
  • the congestion level of the first lane can be set as needed to suit different scenarios.
  • the congestion level of the first lane may include 2 levels, 3 levels, 4 levels, 5 levels, 6 levels or more levels without limitation.
• the following takes cases 1 and 2 as examples to introduce the specific process of the target detection device determining the congestion level of the first lane based on the information of the target on the first lane and at least one threshold.
  • the information about the target on the first lane includes a parameter, and the target detection device can determine the congestion level of the first lane according to the parameter and the threshold corresponding to the parameter. Because the target detection device in case 1 does not need to combine multiple parameters in the information of the target on the first lane for processing, it is relatively simple to implement and has low complexity.
  • the information about the target on the first lane includes multiple parameters, and the target detection device determines the congestion level of the first lane based on the multiple parameters and thresholds corresponding to the multiple parameters.
• although the target detection device in case 2 is slightly more complicated to implement, because it comprehensively considers multiple parameters, the determined congestion level of the first lane is more accurate. Detailed introductions are given below.
• Case 1: the information about the target on the first lane includes one parameter, and the target detection device determines the congestion level of the first lane based on the parameter and the threshold corresponding to the parameter.
• for example, if the information about the target on the first lane includes the speed of the target on the first lane, at least one threshold includes the speed threshold, and the congestion level of the first lane includes 2 levels: if the speed of the target on the first lane is greater than or equal to the speed threshold, the congestion level of the first lane is level 1; or, if the speed of the target on the first lane is less than the speed threshold, the congestion level of the first lane is level 0.
• for example, if the information about the target on the first lane includes the average distance between two adjacent targets on the first lane, at least one threshold includes distance threshold 1 and distance threshold 2, and the congestion level of the first lane includes 3 levels: if the average distance between two adjacent targets on the first lane is greater than or equal to distance threshold 1, the congestion level of the first lane is level 2; or, if the average distance between two adjacent targets on the first lane is less than distance threshold 1 and greater than or equal to distance threshold 2, the congestion level of the first lane is level 1; or, if the average distance between two adjacent targets on the first lane is less than distance threshold 2, the congestion level of the first lane is level 0.
• for example, if the information about the target on the first lane includes the minimum distance between two adjacent targets on the first lane, at least one threshold includes distance threshold 3 and distance threshold 4, and the congestion level of the first lane includes 3 levels: if the minimum distance between two adjacent targets on the first lane is greater than or equal to distance threshold 3, the congestion level of the first lane is level 2; or, if the minimum distance between two adjacent targets on the first lane is less than distance threshold 3 and greater than or equal to distance threshold 4, the congestion level of the first lane is level 1; or, if the minimum distance between two adjacent targets on the first lane is less than distance threshold 4, the congestion level of the first lane is level 0.
• for example, if the information about the target on the first lane includes the number of targets on the first lane, at least one threshold includes quantity threshold 1, quantity threshold 2 and quantity threshold 3, and the congestion level of the first lane includes 4 levels: if the number of targets on the first lane is greater than or equal to quantity threshold 1, the congestion level of the first lane is level 3; or, if the number of targets on the first lane is less than quantity threshold 1 and greater than or equal to quantity threshold 2, the congestion level of the first lane is level 2; or, if the number of targets on the first lane is less than quantity threshold 2 and greater than or equal to quantity threshold 3, the congestion level of the first lane is level 1; or, if the number of targets on the first lane is less than quantity threshold 3, the congestion level of the first lane is level 0.
• for example, if the information about the target on the first lane includes the traffic density of the first lane, at least one threshold includes density threshold 1, density threshold 2, density threshold 3 and density threshold 4, and the congestion level of the first lane includes 5 levels: if the traffic density of the first lane is greater than or equal to density threshold 1, the congestion level of the first lane is level 4; or, if the traffic density of the first lane is less than density threshold 1 and greater than or equal to density threshold 2, the congestion level of the first lane is level 3; or, if the traffic density of the first lane is less than density threshold 2 and greater than or equal to density threshold 3, the congestion level of the first lane is level 2; or, if the traffic density of the first lane is less than density threshold 3 and greater than or equal to density threshold 4, the congestion level of the first lane is level 1; or, if the traffic density of the first lane is less than density threshold 4, the congestion level of the first lane is level 0.
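All of the single-parameter rules in case 1 share one shape: compare the parameter against a descending list of thresholds and return the level of the first threshold met. A generic sketch (all threshold values are invented for illustration):

```python
# Sketch: map one parameter to a congestion level by comparing it against a
# descending list of thresholds; the first threshold the value meets decides
# the level, and a value below every threshold gets level 0.
# Threshold values are illustrative assumptions, not taken from the text.

def level_from_parameter(value, thresholds):
    """thresholds: descending list of N values defining N+1 levels (0..N)."""
    for i, th in enumerate(thresholds):
        if value >= th:
            return len(thresholds) - i
    return 0

# Traffic density with density thresholds 1..4 (5 congestion levels, 0-4).
density_thresholds = [20, 15, 10, 5]
print(level_from_parameter(22, density_thresholds))  # level 4
print(level_from_parameter(12, density_thresholds))  # level 2
print(level_from_parameter(3, density_thresholds))   # level 0
```

For parameters where a smaller value means more congestion (such as the spacing examples), the comparison direction would be reversed.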
• Case 2: the information about the target on the first lane includes multiple parameters, and the target detection device determines the congestion level of the first lane based on the multiple parameters and the thresholds corresponding to the multiple parameters.
• for example, the information about the targets on the first lane includes the number of targets on the first lane, the speed of the targets on the first lane, the average distance between two adjacent targets on the first lane and the minimum distance between two adjacent targets on the first lane; at least one threshold includes a first quantity threshold, a speed threshold, a distance threshold and a second quantity threshold; and the congestion level of the first lane includes 6 levels: if the speed of the target on the first lane is less than or equal to the speed threshold, the congestion level of the first lane is level 5; or, if the number of targets on the first lane is greater than or equal to the first quantity threshold, the congestion level of the first lane is level 4; or, if the average distance between two adjacent targets on the first lane is less than or equal to the distance threshold, the congestion level of the first lane is level 3; or, if the minimum distance between two adjacent targets on the first lane is less than or equal to the distance threshold, the congestion level of the first lane is level 2; or, if the minimum distance between two adjacent targets on the first lane is greater than the distance threshold and the number of targets on the first lane is greater than or equal to the second quantity threshold, the congestion level of the first lane is level 1; or, if the number of targets on the first lane is less than the second quantity threshold, the congestion level of the first lane is level 0.
• for example, the information about the targets on the first lane includes the speed of the targets on the first lane, the average distance between two adjacent targets on the first lane, the minimum distance between two adjacent targets on the first lane and the traffic density of the first lane; at least one threshold includes a first density threshold, a speed threshold, a distance threshold and a second density threshold; and the congestion level of the first lane includes 6 levels: if the speed of the target on the first lane is less than or equal to the speed threshold, the congestion level of the first lane is level 5; or, if the traffic density of the first lane is greater than or equal to the first density threshold, the congestion level of the first lane is level 4; or, if the average distance between two adjacent targets on the first lane is less than or equal to the distance threshold, the congestion level of the first lane is level 3; or, if the minimum distance between two adjacent targets on the first lane is less than or equal to the distance threshold, the congestion level of the first lane is level 2; or, if the minimum distance between two adjacent targets on the first lane is greater than the distance threshold and the traffic density of the first lane is greater than or equal to the second density threshold, the congestion level of the first lane is level 1; or, if the traffic density of the first lane is less than the second density threshold, the congestion level of the first lane is level 0.
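The multi-parameter rule of case 2 can be sketched as an ordered cascade of checks, where the first condition that fires decides the level. The order follows the density variant above, while every threshold value here is an illustrative assumption:

```python
# Sketch of the case-2 cascade (density variant): conditions are evaluated in
# order and the first one met decides the congestion level (0-5).
# All threshold values are illustrative assumptions.

def congestion_level(avg_speed, avg_gap, min_gap, density,
                     speed_th=3.0, gap_th=10.0,
                     density_th1=15, density_th2=5):
    if avg_speed <= speed_th:
        return 5                      # traffic barely moving
    if density >= density_th1:
        return 4                      # very dense lane
    if avg_gap <= gap_th:
        return 3                      # small average spacing
    if min_gap <= gap_th:
        return 2                      # at least one tight gap
    if density >= density_th2:
        return 1                      # moderately dense
    return 0                          # free-flowing

print(congestion_level(avg_speed=8.0, avg_gap=25.0, min_gap=12.0, density=3))  # 0
print(congestion_level(avg_speed=8.0, avg_gap=25.0, min_gap=8.0, density=3))   # 2
print(congestion_level(avg_speed=2.0, avg_gap=25.0, min_gap=12.0, density=3))  # 5
```

The quantity-threshold variant above has the same structure, with the number of targets in place of the traffic density.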
  • the target detection device can also determine the congestion level of the first lane in other ways, which is not limited.
  • the target detection device sends information about the congestion level of the environment.
  • the information about the congestion level of the environment can indicate the congestion level of the environment.
• the target detection device can send information about the congestion level of the environment to the intelligent driving vehicle, so that the intelligent driving vehicle can select a better road planning scheme and/or vehicle control strategy based on the information, and/or so that multiple vehicles can share the congestion level of the environment, thereby forming a regional congestion map to facilitate each vehicle's own driving planning and/or regional traffic flow control on the road (for example, traffic light control).
• if the target detection device includes an ADAS, after the sensor fusion module in the ADAS determines the congestion level of the environment based on the first information and the second information, it can send the information about the congestion level of the environment to the target selection module, so that the target selection module adjusts its target selection algorithm based on the information, and/or send the information about the congestion level of the environment to the control module, so that the control module adjusts parameters of the functions it controls based on the information (for example, when the congestion level is higher, adjusting parameters of the AEB function to make braking more sensitive and the braking time shorter).
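As a toy illustration of the parameter adjustment mentioned above (the mapping from congestion level to AEB parameters is entirely invented for illustration; the source only states that higher congestion should make braking more sensitive and the braking time shorter):

```python
def adjust_aeb_parameters(congestion_level):
    """Return hypothetical AEB tuning parameters for a given congestion level.

    Higher congestion -> higher brake sensitivity and shorter time-to-brake,
    as described above. The numeric scales are illustrative only.
    """
    # Sensitivity grows linearly with congestion; time-to-brake shrinks,
    # clamped to a minimum reaction window.
    sensitivity = 0.5 + 0.1 * congestion_level           # arbitrary scale
    time_to_brake = max(0.3, 1.5 - 0.2 * congestion_level)  # seconds
    return {'sensitivity': sensitivity, 'time_to_brake': time_to_brake}
```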
• in summary, the target detection device can obtain the first information including the first perception information of the environment and the second information including the second perception information of the environment, and determine the congestion level of the environment based on the first information and the second information, so that the congestion level of the environment can be used to detect targets in the environment.
  • the following takes the example in which the congestion level of the environment is used to determine the sensor fusion strategy, and/or the congestion level of the environment is used to determine the sensing target strategy, to introduce how the congestion level of the environment is used to detect at least one target in the environment.
  • the method shown in Figure 3 also includes S304 and/or S305.
  • the target detection device determines the sensor fusion strategy of the first lane according to the congestion level of the first lane.
  • the sensor fusion strategy of the first lane can be used to estimate the actual position of the target on the first lane. It is understandable that different congestion levels can correspond to different sensor fusion strategies.
• if the congestion level of the first lane is greater than or equal to the first level, the sensor fusion strategy of the first lane is the first fusion strategy; or, if the congestion level of the first lane is less than the first level, the sensor fusion strategy of the first lane is the second fusion strategy.
  • the first level can be set as needed. Taking the first lane's six congestion levels as an example, the first level can be level 2, level 3, level 4 or level 5.
  • the sensor fusion strategy can also include 3 or more fusion strategies without limitation.
  • the target detection device obtains third information from a sensing device different from the second sensing device, such as the first sensing device.
• the third information may include location information of at least one target on at least one lane. It can be understood that the third information may also be included in the first perception information; that is to say, the above third information can also be obtained in S301. It can be understood that the first fusion strategy and the second fusion strategy can use different algorithms to perform fusion calculations on the position information in the third information and the position information in the second perception information to estimate the actual position of the target on the first lane. First, the position information in the third information and the position information in the second perception information are introduced.
  • the position information in the third information includes the first distance of the first target, the first angle of the first target, the abscissa of the first target and the ordinate of the first target.
  • the position information in the second sensing information includes the abscissa of the second target and the ordinate of the second target.
• the first target is any target in the first lane, and the first target is different from the target where the target detection device is located. For example, if the first target and the target where the target detection device is located are both vehicles, the vehicle corresponding to the first target and the vehicle carrying the target detection device are not the same vehicle.
• the first distance is the distance between the target detection device and the first target; the first angle is the angle between the forward direction of the target detection device and the first connecting line, where the first connecting line is the line connecting the target detection device and the first target. The forward direction of the target detection device can also be described as the forward direction of the target where the target detection device is located; for example, if the target where the target detection device is located is a vehicle, the forward direction is the direction pointed to by the front of the vehicle.
  • the second target is a target corresponding to the first target among the at least one target corresponding to the second sensing information.
• for example, the abscissa of the first target is the abscissa of the target 4012 sensed by the first sensing device, the ordinate of the first target is the ordinate of the target 4012 sensed by the first sensing device, the first distance of the first target is the distance between the target 4013 and the target 4012 sensed by the first sensing device, and the first angle of the first target is the angle between the forward direction of the target 4013 sensed by the first sensing device and the line connecting the target 4013 and the target 4012.
  • the abscissa of the second target is the abscissa of the target 4012 sensed by the second sensing device, and the ordinate of the second target is the ordinate of the target 4012 sensed by the second sensing device.
• the first fusion strategy is to fuse the abscissa of the first target, the ordinate of the first target, the abscissa of the second target and the ordinate of the second target to obtain the actual position of the target on the first lane.
  • the second fusion strategy is to fuse the first angle, the first distance, the abscissa of the second target and the ordinate of the second target to obtain the actual position of the target on the first lane.
• when the congestion level of the first lane is low (that is, the first lane is less congested), the first angle and the first distance sensed by the first sensing device are more accurate (for example, a more accurate first angle and first distance can be obtained by performing image recognition on photos taken by a camera device), so fusing the first angle, the first distance, the abscissa of the second target and the ordinate of the second target (that is, the second fusion strategy) can estimate a more accurate position. When the congestion level of the first lane is high (that is, the first lane is more congested), the first angle and the first distance sensed by the first sensing device have a large error (for example, the first angle and the first distance obtained through image recognition technology have a large error), and the position estimated by fusing the first angle, the first distance, the abscissa of the second target and the ordinate of the second target (that is, the second fusion strategy) is inaccurate; therefore, the abscissa of the first target, the ordinate of the first target, the abscissa of the second target and the ordinate of the second target can be fused (that is, the first fusion strategy) to estimate a more accurate position.
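To illustrate the two fusion strategies above, here is a minimal sketch; the fusion computation itself (an equal-weight average), the level threshold `FIRST_LEVEL`, and the angle convention are assumptions not specified in the source:

```python
import math

FIRST_LEVEL = 3  # hypothetical threshold separating the two strategies

def fuse_position(congestion_level, first_target, second_target):
    """Estimate a target's position using the strategy selected by congestion level.

    first_target:  dict with keys 'x', 'y' (Cartesian) and 'distance', 'angle'
                   (polar; angle in radians relative to the forward direction),
                   as sensed by the first sensing device (e.g. a camera).
    second_target: dict with keys 'x', 'y', as sensed by the second sensing
                   device (e.g. a millimeter-wave radar).
    """
    if congestion_level >= FIRST_LEVEL:
        # First fusion strategy: fuse the two Cartesian estimates directly.
        x1, y1 = first_target['x'], first_target['y']
    else:
        # Second fusion strategy: use the (more accurate) polar measurement,
        # converted to Cartesian coordinates, then fuse.
        x1 = first_target['distance'] * math.sin(first_target['angle'])
        y1 = first_target['distance'] * math.cos(first_target['angle'])
    # An equal-weight average stands in for the unspecified fusion computation.
    return ((x1 + second_target['x']) / 2, (y1 + second_target['y']) / 2)
```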
• ideally, the at least one target corresponding to the third information and the at least one target corresponding to the second perception information are in one-to-one correspondence; for example, if the first sensing device senses three targets, the second sensing device should also sense these three targets. However, because the accuracy of the first sensing device and the second sensing device differs, or the principle by which the first sensing device senses targets (such as sensing targets through image recognition technology) differs from the principle by which the second sensing device senses targets (such as sensing targets through radar), the targets corresponding to the third information may not correspond one-to-one with the targets corresponding to the second sensing information. For example, if the second sensing device misses a detection, or the first sensing device produces a false detection, the number of targets sensed by the first sensing device is greater than the number of targets sensed by the second sensing device; or, if the first sensing device misses a detection, or the second sensing device produces a false detection, the number of targets sensed by the first sensing device is less than the number of targets sensed by the second sensing device.
• in this case, the first device (that is, the device for estimating the actual position of one or more targets in the environment) can determine, based on the third information and the second sensing information, the second target corresponding to the first target. For example, the first device may determine, among the at least one target corresponding to the second sensing information, a target whose distance from the first target is less than or equal to the second distance as the second target. If no such target exists, the fusion strategy is not executed for the first target, that is, the first target is not fused with any target corresponding to the second sensing information; alternatively, the target closest to the first target among the at least one target corresponding to the second sensing information can be determined as the second target, and the fusion strategy can be executed on the first target and the second target according to the congestion level of the first lane.
• in this way, the target detection device can determine a suitable sensor fusion strategy according to the congestion level of the first lane, which not only makes the position of the target estimated according to the sensor fusion strategy more accurate, but also makes it possible to determine a more accurate number of targets.
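The association step described above (choosing the second target among the targets of the second sensing information) can be sketched as nearest-neighbor matching with a gating distance; the value of the "second distance" and the dictionary layout are hypothetical:

```python
import math

def match_second_target(first_target, second_info_targets, second_distance=2.0):
    """Find the target in the second sensing information corresponding to
    first_target: the nearest candidate within the gating radius, or None
    if no candidate is close enough (in which case no fusion is performed).
    Targets are dicts with 'x' and 'y' coordinates."""
    best, best_dist = None, float('inf')
    for candidate in second_info_targets:
        d = math.hypot(candidate['x'] - first_target['x'],
                       candidate['y'] - first_target['y'])
        # Keep only candidates within the second distance, preferring the nearest.
        if d <= second_distance and d < best_dist:
            best, best_dist = candidate, d
    return best
```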
  • the target detection device determines a strategy for sensing targets in the first lane based on the congestion level of the first lane.
• the strategy of sensing targets in the first lane can be used to sense whether there is a target in the first lane. It is understandable that different congestion levels correspond to different strategies for sensing targets in the first lane. The higher the congestion level of the first lane (that is, the more congested the first lane is), the looser the strategy for sensing targets in the first lane, to prevent targets from being missed; the lower the congestion level of the first lane (that is, the less congested the first lane is), the stricter the strategy for sensing targets in the first lane, to prevent false targets from being identified.
  • One possible implementation method is that if the congestion level of the first lane is greater than or equal to the second level, then the first sensing method is used to sense whether there is a target in the first lane; or, if the congestion level of the first lane is less than the second level, Then the second sensing method is used to sense whether there is a target on the first lane.
  • the second level can be set as needed. Taking the first lane's 6 congestion levels as an example, the second level can be level 2, level 3, level 4 or level 5. The second level and the first level may be the same or different.
  • the strategy for sensing targets can also include 3 or more sensing methods, without limitation.
• the first sensing method is to determine targets obtained by sensor fusion based on the third information and the second sensing information, targets in the third information for which sensor fusion is not performed, and targets in the second sensing information for which sensor fusion is not performed, all as targets on the first lane; the second sensing method is to determine only the targets obtained by sensor fusion based on the third information and the second sensing information as targets on the first lane.
• as described above, the at least one target corresponding to the third information and the at least one target corresponding to the second sensing information may not correspond one-to-one, and the first sensing device or the second sensing device may miss detections or produce false detections. Therefore, when the targets corresponding to the third information and the targets corresponding to the second perception information are fused according to the sensor fusion strategy, the following three kinds of targets may appear: targets obtained by sensor fusion based on the third information and the second perception information, targets in the third information for which sensor fusion is not performed, and targets in the second perception information for which sensor fusion is not performed.
  • the target obtained by sensor fusion based on the third information and the second sensing information includes the target obtained by sensor fusion based on the position information of target 2 and the position information of target 5.
  • the targets in the third information that have not undergone sensor fusion include target 1.
  • the targets in the second sensing information for which sensor fusion is not performed include target 4.
• when the congestion level of the first lane is high (that is, the first lane is more congested), the strategy for sensing targets in the first lane can be set loosely; for example, the acquired targets (such as targets obtained by sensor fusion based on the first perception information and the second perception information, targets in the first perception information for which sensor fusion is not performed, and targets in the second perception information for which sensor fusion is not performed) are not screened, and the acquired targets are directly determined as targets on the first lane. When the congestion level of the first lane is low (that is, the first lane is less congested), the strategy for sensing targets in the first lane can be set more strictly; for example, the acquired targets can be screened, and only the screened targets (such as the targets obtained by sensor fusion based on the first sensing information and the second sensing information) are determined as targets on the first lane, to prevent false targets from being determined.
• in this way, the target detection device can determine a suitable strategy for sensing targets in the first lane according to the congestion level of the first lane: when the first lane is relatively congested, collisions can be prevented, and when the first lane is less congested, false targets can be prevented from being identified.
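A minimal sketch of the loose/strict selection described above, assuming a hypothetical level threshold `SECOND_LEVEL` and that the fusion step has already partitioned the acquired targets into fused and unfused sets:

```python
SECOND_LEVEL = 3  # hypothetical threshold separating the two sensing methods

def select_lane_targets(congestion_level, fused, unfused_first, unfused_second):
    """Return the targets to report for the lane.

    fused:          targets obtained by sensor fusion of both information sources
    unfused_first:  targets present only in the first source
    unfused_second: targets present only in the second sensing information
    """
    if congestion_level >= SECOND_LEVEL:
        # Loose strategy: keep every acquired target to avoid missing one.
        return fused + unfused_first + unfused_second
    # Strict strategy: keep only mutually confirmed (fused) targets
    # to avoid reporting false targets.
    return list(fused)
```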
• the actions of the target detection device in the above-mentioned S301-S305 can be executed by the processor 201 in the target detection device 20 shown in Figure 2 calling the application code stored in the memory 203; this embodiment of the present application imposes no limitation in this regard.
  • the methods and/or steps implemented by the target detection device can also be implemented by components (such as chips or circuits) that can be used in the target detection device, without limitation.
  • the above mainly describes the target detection method provided by the embodiment of the present application.
  • Embodiments of the present application also provide a target detection device.
  • the target detection device may be the target detection device in the above method embodiment, or a device including the above target detection device, or a component that can be used in the target detection device.
  • the above-mentioned target detection device includes corresponding hardware structures and/or software modules for executing each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving the hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each specific application, but such implementations should not be considered beyond the scope of this application.
  • Embodiments of the present application can divide the target detection device into functional modules according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or software function modules. It can be understood that the division of modules in the embodiment of the present application is schematic and is only a logical function division. In actual implementation, there may be other division methods.
  • FIG. 5 shows a schematic structural diagram of a target detection device 50 .
  • the target detection device 50 includes a processing module 501 .
  • the target detection device 50 also includes a transceiver module 502 .
  • the processing module 501 which may also be called a processing unit, is used to perform operations other than sending and receiving operations, and may be, for example, a processing circuit or a processor.
• the transceiver module 502, which may also be called a transceiver unit, is used to perform transceiver operations, and may be, for example, a transceiver circuit, a transceiver, or a communication interface.
  • the target detection device 50 may also include a storage module (not shown in Figure 5) for storing program instructions and data.
  • the target detection device 50 is used to implement the functions of the target detection device in the above method embodiment.
  • the target detection device 50 is, for example, the target detection device described in the embodiment shown in FIG. 3 .
  • the processing module 501 is used to obtain the first information.
  • the first information includes first perception information about the environment.
  • the processing module 501 may be used to perform S301.
  • the processing module 501 is also used to obtain second information.
  • the second information includes second perception information about the environment.
  • the processing module 501 can also be used to perform S302.
  • the processing module 501 is also configured to determine the congestion level of the environment based on the first information and the second information. Wherein, the congestion level of the environment is used to detect at least one target in the environment. For example, the processing module 501 can also be used to perform S303.
• in a possible design, the first perception information includes information about at least one lane; the second perception information includes position information of at least one target on at least one lane, or the second perception information includes position information of at least one target on at least one lane and speed information of at least one target on at least one lane.
  • the processing module 501 is also configured to divide at least one target into lanes based on the first perception information and the second perception information.
• the processing module 501 is specifically configured to determine, based on the first sensing information and the second sensing information, information about the targets in a first lane of the at least one lane, where the first lane is any lane of the at least one lane.
• the information about the targets on the first lane includes at least one of the following: the number of targets on the first lane, the speed of the targets on the first lane, the average distance between two adjacent targets on the first lane, the minimum distance between two adjacent targets on the first lane, or the traffic flow density of the first lane; the processing module 501 is also specifically configured to determine the congestion level of the first lane based on the information about the targets on the first lane.
• in a possible design, the processing module 501 is also configured to obtain at least one threshold, where the at least one threshold includes at least one of the following: a speed threshold or a spacing threshold. The speed threshold is the median of the speeds of multiple targets on the first lane, the minimum of the speeds of multiple targets on the first lane, or the maximum of the speeds of multiple targets on the first lane, or the speed threshold is preset; the spacing threshold is obtained based on the speed threshold and a time interval, where the time interval is preset.
  • the processing module 501 is specifically configured to determine the congestion level of the first lane according to the information of the target on the first lane and at least one threshold.
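The threshold derivation described above can be sketched as follows (a minimal sketch: the choice of the median rather than the minimum, maximum, or a preset value, and the time-interval value, are assumptions):

```python
import statistics

def compute_thresholds(speeds, time_interval=2.0):
    """Derive the speed threshold and spacing threshold described above.

    speeds:        speeds of multiple targets on the first lane
    time_interval: preset time interval in seconds (hypothetical value)
    """
    # The median is used here; the description also allows the minimum,
    # the maximum, or a preset value.
    speed_threshold = statistics.median(speeds)
    # Spacing threshold: distance covered at the threshold speed during
    # the preset time interval.
    spacing_threshold = speed_threshold * time_interval
    return speed_threshold, spacing_threshold
```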
  • the congestion level of the first lane includes 4 levels, 5 levels or 6 levels.
• when the congestion level of the first lane includes 6 levels: if the number of targets on the first lane is greater than or equal to the first quantity threshold, and the speed of the targets on the first lane is less than or equal to the speed threshold, the congestion level of the first lane is level 5; or, if the number of targets on the first lane is greater than or equal to the first quantity threshold, the congestion level of the first lane is level 4; or, if the average distance between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level of the first lane is level 3; or, if the minimum distance between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level of the first lane is level 2; or, if the minimum distance between two adjacent targets on the first lane is greater than the spacing threshold, and the number of targets on the first lane is greater than or equal to the second quantity threshold, the congestion level of the first lane is level 1; or, if the number of targets on the first lane is less than the second quantity threshold, the congestion level of the first lane is level 0.
• alternatively, when the congestion level of the first lane includes 6 levels: if the traffic density of the first lane is greater than or equal to the first density threshold, and the speed of the targets on the first lane is less than or equal to the speed threshold, the congestion level of the first lane is level 5; or, if the traffic density of the first lane is greater than or equal to the first density threshold, the congestion level of the first lane is level 4; or, if the average distance between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level of the first lane is level 3; or, if the minimum distance between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level of the first lane is level 2; or, if the minimum distance between two adjacent targets on the first lane is greater than the spacing threshold, and the traffic flow density of the first lane is greater than or equal to the second density threshold, the congestion level of the first lane is level 1; or, if the traffic density of the first lane is less than the second density threshold, the congestion level of the first lane is level 0.
• in a possible design, the processing module 501 is also used to determine the sensor fusion strategy of the first lane according to the congestion level of the first lane, where the sensor fusion strategy of the first lane is used to estimate the actual position of the target on the first lane.
• if the congestion level of the first lane is greater than or equal to the first level, the sensor fusion strategy of the first lane is the first fusion strategy; or, if the congestion level of the first lane is less than the first level, the sensor fusion strategy of the first lane is the second fusion strategy.
• in a possible design, the first sensing information also includes position information of at least one target on at least one lane; the position information in the first sensing information includes a first distance of the first target, a first angle of the first target, the abscissa of the first target and the ordinate of the first target, where the first target is any target in the first lane and is different from the target where the target detection device 50 is located, the first distance is the distance between the target detection device 50 and the first target, the first angle is the angle between the forward direction of the target detection device 50 and the first connecting line, and the first connecting line is the line connecting the target detection device 50 and the first target; the position information in the second sensing information includes the abscissa of the second target and the ordinate of the second target, where the second target is the target corresponding to the first target among the at least one target corresponding to the second sensing information.
• the first fusion strategy is to fuse the abscissa of the first target, the ordinate of the first target, the abscissa of the second target and the ordinate of the second target to obtain the actual position of the target on the first lane; the second fusion strategy is to fuse the first angle, the first distance, the abscissa of the second target and the ordinate of the second target to obtain the actual position of the target on the first lane.
• in a possible design, the processing module 501 is also used to determine a strategy for sensing targets in the first lane according to the congestion level of the first lane, where the strategy for sensing targets in the first lane is used to sense whether a target exists in the first lane.
• if the congestion level of the first lane is greater than or equal to the second level, the first sensing method is used to sense whether there is a target in the first lane; or, if the congestion level of the first lane is less than the second level, the second sensing method is used to sense whether there is a target in the first lane.
• the first sensing method is to determine targets obtained by sensor fusion based on the first sensing information and the second sensing information, targets in the first sensing information for which sensor fusion is not performed, and targets in the second sensing information for which sensor fusion is not performed, all as targets on the first lane; the second sensing method is to determine only the targets obtained by sensor fusion based on the first sensing information and the second sensing information as targets on the first lane.
  • the first information and the second information are obtained by different types of sensing devices; or, the first information and the second information are obtained by the same type of sensing devices.
  • the sensing device for obtaining the first information is a camera device, lidar, millimeter wave radar or sonar; the sensing device for obtaining the second information is a millimeter wave radar, lidar or sonar.
  • the transceiving module 502 is configured to send information about the congestion level of the environment.
  • the target detection device 50 can take the form shown in FIG. 2 .
  • the processor 201 in Figure 2 can cause the target detection device 50 to execute the method described in the above method embodiment by calling the computer execution instructions stored in the memory 203.
• specifically, the functions/implementation processes of the processing module 501 and the transceiver module 502 in Figure 5 can be implemented by the processor 201 in Figure 2 calling the computer-executable instructions stored in the memory 203. Alternatively, the function/implementation process of the processing module 501 in Figure 5 can be implemented by the processor 201 in Figure 2 calling the computer-executable instructions stored in the memory 203, and the function/implementation process of the transceiver module 502 in Figure 5 can be implemented through the communication interface 204 in Figure 2.
  • the above modules or units can be implemented in software, hardware, or a combination of both.
  • the software exists in the form of computer program instructions and is stored in the memory.
  • the processor can be used to execute the program instructions and implement the above method flow.
  • the processor can be built into an SoC (System on a Chip) or ASIC, or it can be an independent semiconductor chip.
• the processor can further include a necessary hardware accelerator, such as a field programmable gate array (FPGA), a programmable logic device (PLD), or a logic circuit that implements dedicated logic operations.
  • the hardware can be a CPU, a microprocessor, a digital signal processing (DSP) chip, a microcontroller unit (MCU), an artificial intelligence processor, an ASIC, Any one or any combination of SoC, FPGA, PLD, dedicated digital circuits, hardware accelerators or non-integrated discrete devices, which can run the necessary software or not rely on software to perform the above method flow.
• embodiments of the present application also provide a chip system, including: at least one processor and an interface. The at least one processor is coupled to a memory through the interface, and when the at least one processor executes the computer program or instructions in the memory, the methods in the above method embodiments are executed. In a possible design, the chip system further includes the memory.
  • the chip system may be composed of chips, or may include chips and other discrete devices, which is not specifically limited in the embodiments of the present application.
• embodiments of the present application also provide a computer-readable storage medium. All or part of the processes in the above method embodiments can be completed by a computer program instructing relevant hardware. The program can be stored in the above computer-readable storage medium, and when executed, the program can include the processes of the above method embodiments.
  • the computer-readable storage medium may be an internal storage unit of the target detection device of any of the aforementioned embodiments, such as a hard disk or memory of the target detection device.
• the above-mentioned computer-readable storage medium may also be an external storage device of the above-mentioned target detection device, such as a plug-in hard disk, a smart media card (SMC), or a secure digital (SD) card equipped on the above-mentioned target detection device.
  • the computer-readable storage medium may also include both an internal storage unit and an external storage device of the target detection device.
  • the above computer-readable storage medium is used to store the above computer program and other programs and data required by the above target detection device.
  • the above-mentioned computer-readable storage media can also be used to temporarily store data that has been output or is to be output.
• the embodiment of the present application also provides a computer program product. All or part of the processes in the above method embodiments can be completed by a computer program instructing relevant hardware. The program can be stored in the above computer program product, and when executed, the program can include the processes of the above method embodiments.
• the embodiment of the present application also provides computer instructions. All or part of the processes in the above method embodiments can be completed by computer instructions instructing related hardware (such as a computer, a processor, an access network device, a mobility management network element or a session management network element, etc.). The program may be stored in the above-mentioned computer-readable storage medium or in the above-mentioned computer program product.
  • embodiments of the present application also provide an intelligent driving vehicle, including the target detection device in the above embodiment.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of modules or units is only a logical function division.
  • there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the mutual coupling, direct coupling or communication connection shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated.
  • the components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place or distributed to multiple different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A target detection method and apparatus, relating to the field of perception and applicable to assisted driving or autonomous driving, capable of determining the congestion level of an environment so as to obtain more accurate detection results. The method comprises: acquiring first information comprising first sensing information of an environment and second information comprising second sensing information of the environment, and determining a congestion level of the environment according to the first information and the second information, the congestion level of the environment being used to detect at least one target in the environment.

Description

Target Detection Method and Apparatus
Technical Field
This application relates to the field of perception, and in particular to a target detection method and apparatus.
Background
With the continuous development of computer technology and Internet technology, intelligent driving has received widespread attention. Intelligent driving may include autonomous driving (also called driverless driving) or assisted driving. Autonomous driving means that the autonomous driving apparatus in a vehicle can operate the vehicle to drive safely without the driver's participation during driving. Assisted driving means that an assisted driving apparatus in the vehicle assists the driver in driving safely while the vehicle is driving.
While driving, an intelligent driving vehicle needs to sense its environment and make decisions based on the sensing results, such as determining the vehicle speed or driving direction, to prevent collisions and ensure driving safety. It can be seen that the accuracy of environmental perception is crucial to intelligent driving. However, current methods of sensing the environment are relatively simplistic, and their perception accuracy is low.
Summary
Embodiments of this application provide a target detection method and apparatus that can determine the congestion level of an environment so as to obtain more accurate detection results.
To achieve the above objective, the embodiments of this application adopt the following technical solutions:
According to a first aspect, a target detection method is provided. The entity performing the method may be a target detection apparatus, or a module applied in a target detection apparatus, such as a chip or a chip system. The following description takes a target detection apparatus as the executing entity. The method includes: acquiring first information, where the first information includes first sensing information of an environment; acquiring second information, where the second information includes second sensing information of the environment; and determining a congestion level of the environment according to the first information and the second information, where the congestion level of the environment is used to detect at least one target in the environment.
Based on the method provided in the first aspect, the target detection apparatus can acquire the first sensing information of the environment and the second sensing information of the environment, and determine the congestion level of the environment, so that the congestion level can be used to detect targets in the environment. In other words, with the method of the first aspect, targets in the environment can be detected according to the congestion level of the environment; for example, different target detection strategies can be adopted for different congestion levels, making the target detection results more accurate.
在一种可能的实现方式中,该第一感知信息包括至少一个车道的信息;该第二感知信息包括该至少一个车道上的至少一个目标的位置信息,或者,该第二感知信息包括该至少一个车道上的至少一个目标的位置信息,和该至少一个车道上的至少一个目标的速度信息。
基于上述可能的实现方式,目标检测装置可以获取至少一个车道的信息,以及该至少一个车道上的至少一个目标的位置信息,以便目标检测装置为第二感知信息中的目标划分车道,知道哪个车道上有哪些目标。或者,目标检测装置可以获取至少一个 车道的信息、该至少一个车道上的至少一个目标的位置信息以及该至少一个车道上的至少一个目标的速度信息,以便目标检测装置为第二感知信息中的目标划分车道,知道哪个车道上有哪些目标,每个目标的速度是多少。
在一种可能的实现方式中,该方法还包括:根据该第一感知信息和该第二感知信息,将该至少一个目标按照车道进行划分。
基于上述可能的实现方式,目标检测装置可以为第二感知信息中的目标划分车道,知道哪个车道上有哪些目标,以便目标检测装置基于车道确定拥堵等级,知道每个车道的拥堵等级,进而能够以车道为粒度,确定目标检测策略,得到更为准确的检测结果。
在一种可能的实现方式中,根据该第一信息和该第二信息确定该环境的拥堵等级,包括:根据该第一感知信息和该第二感知信息,确定至少一个车道中的第一车道上的目标的信息,该第一车道为该至少一个车道中的任意一个车道,该第一车道上的目标的信息包括以下至少一项:该第一车道上的目标的数量、该第一车道上的目标的速度、该第一车道上相邻两个目标之间的平均间距、该第一车道上相邻两个目标之间的最小间距、或该第一车道的车流密度;根据该第一车道上的目标的信息确定该第一车道的拥堵等级。
基于上述可能的实现方式,目标检测装置可以根据第一感知信息和第二感知信息确定第一车道上的目标的信息。其中,第一车道上的目标的信息包括的第一车道上的目标的数量、第一车道上的目标的速度、第一车道上相邻两个目标之间的平均间距、第一车道上相邻两个目标之间的最小间距、或第一车道的车流密度等参数都能够反映第一车道的拥堵情况,因此,基于第一车道上的目标的信息,能够确定出更为准确的拥堵等级。
在一种可能的实现方式中,该方法还包括:获取至少一个阈值,该至少一个阈值包括以下至少一项:速度阈值或间距阈值;该速度阈值为该第一车道上的多个目标的速度的中位数、该第一车道上的多个目标的速度的极小值或该第一车道上的多个目标的速度的极大值,或者,该速度阈值为预设置的;该间距阈值是根据该速度阈值和时间间隔得到的,该时间间隔为预设置的。
基于上述可能的实现方式,目标检测装置可以根据第一车道上的目标的信息确定至少一个阈值,或者预设置该至少一个阈值,以便目标检测装置根据第一车道上的目标的信息和该至少一个阈值确定第一车道的拥堵等级。可以理解的,若上述阈值是预设置的,则可以简化目标检测装置的操作。若上述阈值是根据第一车道上的目标的信息计算的,则可以提高目标检测方法的鲁棒性,使其适应性更好。
在一种可能的实现方式中,根据该第一车道上的目标的信息确定该第一车道的拥堵等级,包括:根据该第一车道上的目标的信息和该至少一个阈值,确定该第一车道的拥堵等级。
基于上述可能的实现方式,目标检测装置可以将第一车道上的目标的信息中的参数和至少一个阈值进行对比,以确定该第一车道的拥堵等级,计算量小,实现起来较为容易并且成本较低。
在一种可能的实现方式中,该第一车道的拥堵等级包括4个等级,5个等级或6个 等级。
基于上述可能的实现方式,可以根据需要设置第一车道的拥堵等级,以适用不同的场景。
在一种可能的实现方式中,若该第一车道的拥堵等级包括6个等级,根据该第一车道上的目标的信息和该至少一个阈值,确定该第一车道的拥堵等级,包括:若该第一车道上的目标的数量大于或等于第一数量阈值,并且该第一车道上的目标的速度小于或等于该速度阈值,则该第一车道的拥堵等级为等级5;或者,若该第一车道上的目标的数量大于或等于第一数量阈值,则该第一车道的拥堵等级为等级4;或者,若该第一车道上相邻两个目标之间的平均间距小于或等于该间距阈值,则该第一车道的拥堵等级为等级3;或者,若该第一车道上相邻两个目标之间的最小间距小于或等于该间距阈值,则该第一车道的拥堵等级为等级2;若该第一车道上相邻两个目标之间的最小间距大于间距阈值,并且该第一车道上的目标的数量大于或者等于第二数量阈值,则该第一车道的拥堵等级为等级1;或者,若该第一车道上目标的数量小于第二数量阈值,则该第一车道的拥堵等级为等级0。
基于上述可能的实现方式,第一车道上的目标的信息包括多个参数,如:第一车道上的目标的数量、第一车道上的目标的速度、第一车道上相邻两个目标之间的平均间距和第一车道上相邻两个目标之间的最小间距。目标检测装置可以根据该多个参数和该多个参数对应的阈值,确定第一车道的拥堵等级。在上述过程中,目标检测装置综合考虑了多个参数,所以确定的第一车道的拥堵等级较为准确。
在一种可能的实现方式中,若该第一车道的拥堵等级包括6个等级,根据该第一车道上的目标的信息和该至少一个阈值,确定该第一车道的拥堵等级,包括:若该第一车道的车流密度大于或等于第一密度阈值,并且该第一车道上的目标的速度小于或等于该速度阈值,则该第一车道的拥堵等级为等级5;或者,若该第一车道的车流密度大于或等于第一密度阈值,则该第一车道的拥堵等级为等级4;或者,若该第一车道上相邻两个目标之间的平均间距小于或等于该间距阈值,则该第一车道的拥堵等级为等级3;或者,若该第一车道上相邻两个目标之间的最小间距小于或等于该间距阈值,则该第一车道的拥堵等级为等级2;若该第一车道上相邻两个目标之间的最小间距大于间距阈值,并且该第一车道的车流密度大于或者等于第二密度阈值,则该第一车道的拥堵等级为等级1;或者,若该第一车道的车流密度小于第二密度阈值,则该第一车道的拥堵等级为等级0。
基于上述可能的实现方式,第一车道上的目标的信息包括多个参数,如:第一车道的车流密度、第一车道上的目标的速度、第一车道上相邻两个目标之间的平均间距和第一车道上相邻两个目标之间的最小间距。目标检测装置可以根据该多个参数和该多个参数对应的阈值,确定第一车道的拥堵等级。在上述过程中,目标检测装置综合考虑了多个参数,所以确定的第一车道的拥堵等级较为准确。
在一种可能的实现方式中,该方法还包括:根据该第一车道的拥堵等级,确定第一车道的传感器融合策略,该第一车道的传感器融合策略用于估计该第一车道上的目标的实际位置。
基于上述可能的实现方式,目标检测装置可以根据第一车道的拥堵等级,确定适 合的传感器融合策略,不仅使得根据该传感器融合策略估计的目标的位置较为准确,还使得在确定目标数量时,能够确定出较为准确的目标数量。
在一种可能的实现方式中,根据该第一车道的拥堵等级,确定该第一车道的传感器融合策略,包括:若该第一车道的拥堵等级大于或等于第一等级,则该第一车道的传感器融合策略为第一融合策略;或者,若该第一车道的拥堵等级小于第一等级,则该第一车道的传感器融合策略为第二融合策略。
基于上述可能实现方式,不同的拥堵等级可以对应不同的传感器融合策略,使得在第一车道处于不同的拥堵等级的情况下,都可以采用与该拥堵等级对应的传感器融合策略估计的目标的位置,以得到准确的结果。
在一种可能的实现方式中,该第一感知信息还包括该至少一个车道上的至少一个目标的位置信息,该第一感知信息中的位置信息包括第一目标的第一距离,该第一目标的第一角度,该第一目标的横坐标和该第一目标的纵坐标,该第一目标为该第一车道中的任意一个目标,该第一目标与该目标检测装置所在的目标不同,该第一距离为该目标检测装置与该第一目标之间的距离,该第一角度为该目标检测装置的正向方向与第一连线之间的夹角,该第一连线为该目标检测装置与该第一目标之间的连线;该第二感知信息中的位置信息包括第二目标的横坐标和该第二目标的纵坐标,该第二目标为该第二感知信息对应的至少一个目标中,与该第一目标对应的目标。
基于上述可能的实现方式,第一感知信息还包括第一目标的第一距离,第一目标的第一角度,第一目标的横坐标和第一目标的纵坐标,第二感知信息包括第二目标的横坐标和第二目标的纵坐标,使得在目标检测装置确定第一车道的传感器融合策略为第一融合策略的情况下,第一目标的横坐标,第一目标的纵坐标,第二目标的横坐标和第二目标的纵坐标,可以用于确定第一车道上的目标的实际位置;在目标检测装置确定第一车道的传感器融合策略为第二融合策略的情况下,第一角度、第一距离、第二目标的横坐标和第二目标的纵坐标进行融合,可以用于确定第一车道上的目标的实际位置。这样可以实现在第一车道处于不同的拥堵等级的情况下,采用不同的传感器融合策略估计目标的位置。
在一种可能的实现方式中,该第一融合策略是将该第一目标的横坐标,该第一目标的纵坐标,该第二目标的横坐标和该第二目标的纵坐标进行融合,得到该第一车道上的目标的实际位置;该第二融合策略是将该第一角度、该第一距离、该第二目标的横坐标和该第二目标的纵坐标进行融合,得到该第一车道上的目标的实际位置。
基于上述可能的实现方式,在第一车道的拥堵等级较低(即第一车道较不拥堵)的情况下,第一感知装置感知的第一角度和第一距离较为准确(如:通过对摄像装置拍摄的照片进行图像识别,可以得到较为准确的第一角度和第一距离),因此,将第一角度、第一距离、第二目标的横坐标和第二目标的纵坐标进行融合(即通过第二融合策略),能够估计出较为准确的位置。在第一车道的拥堵等级较高(即第一车道较为拥堵)的情况下,第一感知装置感知的第一角度和第一距离误差较大(例如,通过图像识别技术得到的第一角度和第一距离有较大误差),将第一角度、第一距离、第二目标的横坐标和第二目标的纵坐标进行融合(即通过第二融合策略),估计出的位置不准确,因此,可以将第一目标的横坐标,第一目标的纵坐标,第二目标的横坐标 和第二目标的纵坐标进行融合(即通过第一融合策略),以估计出较为准确的位置。
在一种可能的实现方式中,该方法还包括:根据该第一车道的拥堵等级,确定第一车道上感知目标的策略,该第一车道上感知目标的策略用于感知该第一车道中是否存在目标。
基于上述可能的实现方式,目标检测装置可以根据第一车道的拥堵等级,确定适合的第一车道上感知目标的策略,以防止出现漏感知目标或确定出虚假目标的情况。
在一种可能的实现方式中,根据该第一车道的拥堵等级,确定第一车道上感知目标的策略,包括:若该第一车道的拥堵等级大于或等于第二等级,则采用第一感知方式感知该第一车道上是否存在目标;或者,若该第一车道的拥堵等级小于第二等级,则采用第二感知方式感知该第一车道上是否存在目标。
基于上述可能的实现方式,不同的拥堵等级对应的第一车道上感知目标的策略不同。例如,第一车道的拥堵等级越高(即第一车道越拥堵),第一车道上感知目标的策略越宽松,以防止漏感知目标;第一车道的拥堵等级越低(即第一车道越不拥堵),第一车道上感知目标的策略越严格,以防止确定出虚假目标。
在一种可能的实现方式中,该第一感知方式为将根据该第一感知信息和该第二感知信息进行传感器融合得到的目标,该第一感知信息中未进行传感器融合的目标和该第二感知信息中未进行传感器融合的目标,确定为该第一车道上的目标;该第二感知方式为将根据该第一感知信息和该第二感知信息进行传感器融合得到的目标,确定为该第一车道上的目标。
基于上述可能的实现方式,在第一车道的拥堵等级较高(即第一车道较拥堵)的情况下,为了防止发生碰撞,可以将第一车道上感知目标的策略设置的较为宽松,例如,不对获取到的目标(如:根据第一感知信息和第二感知信息进行传感器融合得到的目标,第一感知信息中未进行传感器融合的目标和第二感知信息中未进行传感器融合的目标)进行筛选,而是直接将获取到的目标确定为第一车道上的目标。在第一车道的拥堵等级较低(即第一车道较不拥堵)的情况下,可以将第一车道上感知目标的策略设置的较为严格,例如,可以对获取到的目标进行筛选,将筛选后的目标(如:根据第一感知信息和第二感知信息进行传感器融合得到的目标)确定为第一车道上的目标,以防止确定出虚假目标。
在一种可能的实现方式中,该第一信息和该第二信息是由不同类型的感知装置获取的;或者,该第一信息和该第二信息是由相同类型的感知装置获取的。
基于上述可能的实现方式,若第一信息和第二信息是由不同类型的感知装置获取的,则可以由对图形(如:车道线)敏感的感知装置获取第一信息,由对目标(如:车辆、行人、树木或路测装置)敏感的感知装置获取第二信息,以获得较为准确的第一信息和第二信息。若第一信息和第二信息是由相同类型的感知装置获取的,则第一信息和第二信息包括的信息类似,处理起来较为方便,易于实现。另外,可以将多个成本较低的感知装置设置在不同的位置,以获取第一信息和第二信息,进而可以降低成本。
在一种可能的实现方式中,获取该第一信息的感知装置为摄像装置、激光雷达、毫米波雷达或声呐;获取该第二信息的感知装置为毫米波雷达、激光雷达或声呐。
基于上述可能的实现方式,可以通过对第一信息敏感的摄像装置、激光雷达、毫米波雷达或声呐来获取第一信息,得到较为准确的第一信息。通过对第二信息敏感的毫米波雷达、激光雷达或声呐来获取第二信息,得到较为准确的第二信息。如此,根据该第一信息和第二信息能确定出较为准确的拥堵等级。
在一种可能的实现方式中,该方法还包括:发送该环境的拥堵等级的信息。
基于上述可能的实现方式,目标检测装置可以向其他设备或模块发送环境的拥堵等级的信息,以便其他设备或模块使用环境的拥堵等级。例如,目标检测装置可以向智能驾驶车辆发送环境的拥堵等级的信息,以便智能驾驶车辆根据该信息选择更优的道路规划方案和/或车辆控制策略,和/或,以便多辆车共享环境的拥堵等级,进而形成区域的拥堵地图,方便每辆车自身的行驶规划和/或道路的区域性车流控制(例如,红绿灯控制)。再例如,若目标检测装置包括ADAS,ADAS中的传感器融合模块根据第一信息和第二信息确定环境的拥堵等级后,可以向目标选择模块发送环境的拥堵等级的信息,以便目标选择模块根据该信息调整选择目标的算法,和/或,向控制模块发送环境的拥堵等级的信息,以便控制模块根据该信息调整控制模块所控制的功能中的参数(例如,在拥堵等级较高的情况下,调整AEB功能中的参数,使得刹车更加敏感,刹车用时更短)。
第二方面,提供了一种目标检测装置用于实现上述方法。该目标检测装置可以为上述第一方面中的目标检测装置,或者包含上述目标检测装置的装置。该目标检测装置包括实现上述方法相应的模块、单元、或手段(means),该模块、单元、或means可以通过硬件实现,软件实现,或者通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块或单元。
结合上述第二方面,在一种可能的实现方式中,该目标检测装置可以包括处理模块。该处理模块,可以用于实现上述第一方面及其任意可能的实现方式中的处理功能。该处理模块例如可以为处理器。
结合上述第二方面,在一种可能的实现方式中,该目标检测装置还可以包括收发模块。该收发模块,也可以称为收发单元,用以实现上述第一方面及其任意可能的实现方式中的发送和/或接收功能。该收发模块可以由收发电路,收发机,收发器或者通信接口构成。
结合上述第二方面,在一种可能的实现方式中,收发模块包括发送模块和接收模块,分别用于实现上述第一方面及其任意可能的实现方式中的发送和接收功能。
第三方面,提供了一种目标检测装置,包括:处理器;该处理器用于与存储器耦合,并读取存储器中的指令之后,根据该指令执行如上述第一方面所述的方法。该目标检测装置可以为上述第一方面中的目标检测装置,或者包含上述目标检测装置的装置。
结合上述第三方面,在一种可能的实现方式中,该目标检测装置还包括存储器,该存储器,用于保存必要的程序指令和数据。
结合上述第三方面,在一种可能的实现方式中,该目标检测装置为芯片或芯片系统。可选的,该目标检测装置是芯片系统时,可以由芯片构成,也可以包含芯片和其他分立器件。
第四方面，提供了一种目标检测装置，包括：处理器和接口电路；接口电路，用于接收计算机程序或指令并传输至处理器；处理器用于执行所述计算机程序或指令，以使该目标检测装置执行如上述第一方面所述的方法。
结合上述第四方面,在一种可能的实现方式中,该目标检测装置为芯片或芯片系统。可选的,该目标检测装置是芯片系统时,可以由芯片构成,也可以包含芯片和其他分立器件。
第五方面,提供了一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机可以执行上述第一方面所述的方法。
第六方面,提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机可以执行上述第一方面所述的方法。
第七方面,提供了一种智能驾驶车辆,该智能驾驶车辆包括用于执行上述第一方面所述的方法的目标检测装置。
其中,第二方面至第七方面中任一种可能的实现方式所带来的技术效果可参见上述第一方面或第一方面中不同可能的实现方式所带来的技术效果,此处不再赘述。
Brief Description of the Drawings
FIG. 1 is a schematic architecture diagram of a target detection system according to an embodiment of this application;
FIG. 2 is a schematic diagram of the hardware structure of a target detection apparatus according to an embodiment of this application;
FIG. 3 is a schematic flowchart of a target detection method according to an embodiment of this application;
FIG. 4 is a schematic diagram of lanes according to an embodiment of this application;
FIG. 5 is a schematic structural diagram of a target detection apparatus according to an embodiment of this application.
Detailed Description of Embodiments
The following describes the implementation of the embodiments of this application in detail with reference to the accompanying drawings.
本申请实施例提供的方法可用于各种目标检测系统中,以感知环境的拥堵等级,例如,可以感知某条道路或某个车道的拥堵等级,使得环境的拥堵等级可以用于对环境中的目标进行检测,以得到较为准确的检测结果。下面以图1所示目标检测系统10为例,对本申请实施例提供的方法进行描述。
如图1所示,为本申请实施例提供的目标检测系统10的架构示意图。图1中,目标检测系统10可以包括目标检测装置101。该目标检测装置101可以用于实现本申请实施例提供的目标检测方法。例如,目标检测装置101可以获取第一信息和第二信息,根据第一信息和第二信息确定环境的拥堵等级,使得环境的拥堵等级可以用于对环境中的目标进行检测,以得到较为准确的检测结果。这一过程,将在下述图3所示的实施例中进行具体阐述,在此不做赘述。图1仅为示意图,并不构成对本申请提供的技术方案的适用场景的限定。
本申请实施例中的目标检测装置,例如,目标检测装置101,可以是任意一个具备计算能力的设备。示例性的,目标检测装置可以包括手持式设备、车载设备、感知装置、计算设备或智能驾驶车辆。例如,目标检测装置可以包括高级辅助驾驶系统(advanced driver-assistance systems,ADAS)、车内的各种具备计算能力的设备、车内的感知装置(如:车内的摄像装置等)或具备自动驾驶功能或辅助驾驶功能的智能 驾驶车辆。
本申请实施例中,ADAS可以包括传感器感知模块、传感器融合模块、目标选择模块和控制模块。传感器感知模块用于感知周围环境的信息,或者从其他装置获取周围环境的信息。例如,传感器感知模块可以获取第一信息和第二信息。传感器融合模块可以用于对周围环境的信息进行融合处理,例如,根据第一信息和第二信息确定环境的拥堵等级。目标选择模块可以用于选择目标,例如,选择跟踪的目标或选择防止碰撞的目标。控制模块可以用于控制以下至少一项功能:自动紧急刹车(automatic emergency brake,AEB)功能、自适应巡航(automatic cruise control,ACC)功能或前方碰撞预警(forward collision warning,FCW)功能。
本申请实施例中,车内的各种具备计算能力的设备可以包括:网关、车载T-Box(telematics box)、车身控制模块(body control module,BCM)、智能座舱域控制器(cockpit domain controller,CDC)、多域控制器(multi domain controller,MDC)、整车控制单元(vehicle control unit,VCU)、电子控制单元(electronic control unit,ECU)、车控域控制器(vehicle domain controller,VDC)或整车集成单元(vehicle integrated/integration unit,VIU)等。
可选的,目标检测装置101还具备感知能力,用于感知第一信息和/或第二信息。
可选的,目标检测系统10还包括与目标检测装置101通信连接的感知装置102和/或感知装置103。感知装置102可以获取第一信息,并向目标检测装置101发送该第一信息。感知装置103可以获取第二信息,并向目标检测装置101发送该第二信息。
本申请实施例中的感知装置,例如,感知装置102或感知装置103,可以是任意一个具备感知能力的设备。例如,感知装置可以包括摄像装置、雷达或声呐。其中,摄像装置可以包括单目摄像机、双目摄像机、三目摄像机或深度摄像机等。雷达可以是激光雷达或毫米波雷达等。
图1所示的目标检测系统10仅用于举例,并非用于限制本申请的技术方案。本领域的技术人员应当明白,在具体实现过程中,目标检测系统10还可以包括其他设备,同时也可根据具体需要来确定目标检测装置或感知装置的数量,不予限制。
可选的,本申请实施例图1中的各装置(例如,目标检测装置或感知装置等)可以是一个通用设备或者是一个专用设备,本申请实施例对此不作具体限定。
可选的,本申请实施例图1中的各装置(例如,目标检测装置或感知装置等)的相关功能可以由一个设备实现,也可以由多个设备共同实现,还可以是由一个设备内的一个或多个功能模块实现,本申请实施例对此不作具体限定。可以理解的是,上述功能既可以是硬件设备中的网络元件,也可以是在专用硬件上运行的软件功能,或者硬件与软件的结合,或者平台(例如,云平台)上实例化的虚拟化功能。
在具体实现时,本申请实施例图1中的各装置(例如,目标检测装置或感知装置等)都可以采用图2所示的组成结构,或者包括图2所示的部件。图2所示为可适用于本申请实施例的目标检测装置的硬件结构示意图。该目标检测装置20包括至少一个处理器201和至少一个通信接口204,用于实现本申请实施例提供的方法。该目标检测装置20还可以包括通信线路202和存储器203。
处理器201可以是一个通用中央处理器(central processing unit,CPU),微处理器,特定应用集成电路(application-specific integrated circuit,ASIC),或一个或多个用于控制本申请方案程序执行的集成电路。
通信线路202可包括一通路,在上述组件之间传送信息,例如总线。
通信接口204,用于与其他设备或通信网络通信。通信接口204可以是任何收发器一类的装置,如可以是以太网接口、无线接入网(radio access network,RAN)接口、无线局域网(wireless local area networks,WLAN)接口、收发器、管脚、总线、或收发电路等。
存储器203用于存储执行本申请实施例提供的方案所涉及的计算机执行指令,并由处理器201来控制执行。处理器201用于执行存储器203中存储的计算机执行指令,从而实现本申请实施例提供的方法。
可选的,本申请实施例中的计算机执行指令也可以称之为应用程序代码,本申请实施例对此不作具体限定。
本申请实施例中的耦合是装置、单元或模块之间的间接耦合或通信连接,可以是电性,机械或其它的形式,用于装置、单元或模块之间的信息交互。
作为一种实施例,处理器201可以包括一个或多个CPU,例如图2中的CPU0和CPU1。
作为一种实施例,目标检测装置20可以包括多个处理器,例如图2中的处理器201和处理器207。这些处理器中的每一个可以是一个单核(single-CPU)处理器,也可以是一个多核(multi-CPU)处理器。
作为一种实施例,目标检测装置20还可以包括输出设备205和/或输入设备206。输出设备205和处理器201耦合。
可选的,目标检测装置20还包括感知模块(图2中未示出),用于感知第一信息和/或第二信息。该感知模块可以包括以下至少一种:摄像装置、雷达或声呐。
可以理解的,图2中示出的组成结构并不构成对该目标检测装置的限定,除图2所示部件之外,该目标检测装置可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面将结合附图,对本申请实施例提供的方法进行描述。下述实施例中的各网元可以具备图2所示部件,不予赘述。
可以理解的是,本申请实施例提供的方法可以应用于多个领域,例如:无人驾驶领域、自动驾驶领域、辅助驾驶领域、智能驾驶领域、网联驾驶领域、智能网联驾驶领域、汽车共享领域等。
如图3所示,为本申请实施例提供的一种目标检测方法,该目标检测方法可以包括S301-S303:
S301:目标检测装置获取第一信息。
其中,S301中的目标检测装置可以是图1所示的目标检测装置101。
本申请实施例中,第一信息可以包括对环境的第一感知信息。环境可以是目标检测装置所处的环境,或者目标检测装置周围的环境。以目标检测装置是智能驾驶车辆为例,环境可以包括该智能驾驶车辆所在的车道。可以理解的,本申请实施例是以环 境包括至少一个车道为例进行说明的,环境包括其他场景的情况,与环境包括至少一个车道的情况类似,可以参考本申请实施例中对应的描述,不做赘述。
本申请实施例中,第一感知信息可以包括至少一个车道的信息。例如,第一感知信息包括车道1的信息,或者第一感知信息包括车道1的信息和车道2的信息。
本申请实施例中,任意一个车道的信息可以包括该车道的位置信息。例如,一个车道的信息可以包括用于指示该车道的位置的函数。又例如,一个车道的信息可以包括道路的路沿的位置信息,在这种情况下,可以将整个道路作为一个车道。可选的,任意一个车道的信息还可以指示该车道的宽度。
示例性的，以图4所示的三个车道（车道401、车道402和车道403）为例，对于车道402，车道402的信息可以包括表征车道线405的函数（如：y=c₃x³+c₂x²+c₁x+c₀）和表征车道线406的函数（如：y=a₃x³+a₂x²+a₁x+a₀）。其中，x为目标检测装置所在的坐标系中的纵坐标，y为该坐标系中的横坐标。目标检测装置所在的坐标系可以是以目标检测装置为原点的坐标系。目标检测装置获取到车道402的信息后，可以根据车道402的信息确定车道402的位置。对于车道403，车道403的信息可以包括表征车道线406的函数（如：y=a₃x³+a₂x²+a₁x+a₀）和车道403的宽度信息。目标检测装置获取到车道403的信息后，可以根据车道403的信息确定车道403的位置。
可以理解的,上述表征车道线405的函数和表征车道线406的函数仅是示例性的。在实际应用中,还可以通过其他函数来表征车道线,不予限制。
可以理解的,在上述示例中,是将车道401、车道402和车道403分别作为一个车道。在具体应用中,也可以把道路中的所有车道作为一个车道,例如,可以将车道401、车道402和车道403作为一个车道。在这种情况下,车道的信息可以指示整个道路的位置。例如,车道的信息包括道路的路沿的位置信息。可选的,车道的信息还包括道路的宽度信息。
S302:目标检测装置获取第二信息。
本申请实施例中,第二信息可以包括对环境的第二感知信息。环境可以是目标检测装置所处的环境,或者目标检测装置周围的环境。第二感知信息可以包括至少一个车道上的至少一个目标的位置信息,或者,第二感知信息包括至少一个车道上的至少一个目标的位置信息,和至少一个车道上的至少一个目标的速度信息。至少一个车道上的至少一个目标的位置信息可以指示该至少一个目标的位置。至少一个目标的速度信息可以指示该至少一个目标的速度。本申请实施例中,至少一个目标的位置信息可以包括该目标在直角坐标系中的坐标,或者包括该目标在极坐标系中的坐标,不予限制。上述直角坐标系或极坐标系为目标检测装置所在的坐标系。
可以理解的,目标检测装置可以通过多种方式获取第一信息和第二信息。下面进行具体阐述。
(1)目标检测装置获取第一信息的方式:
一种可能的实现方式,第一信息是目标检测装置从第一感知装置获取的。也就是说,第一感知装置可以获取第一信息并发送给目标检测装置。第一感知装置可以是图1所示的感知装置102。例如,第一感知装置为摄像装置、激光雷达、毫米波雷达或声呐。
一种可能的设计,若第一感知装置为摄像装置、激光雷达或声呐,则车道的信息可以包括用于指示该车道的位置的函数,或者,车道的信息可以包括用于指示该车道的位置的函数和指示该车道的宽度的信息。若第一感知装置为毫米波雷达,由于毫米波雷达可能无法“看见”或检测到车道线,但是毫米波雷达可以“看见”或检测到路沿,所以车道的信息可以包括道路的路沿的位置信息,或者,车道的信息包括道路的路沿的位置信息和道路的宽度信息。
另一种可能的实现方式,第一信息是目标检测装置从本地获取的。也就是说,目标检测装置可以具备感知能力,例如,目标检测装置中配置了感知装置,如摄像装置、激光雷达、毫米波雷达或声呐等,并通过感知装置获取第一信息。
(2)目标检测装置获取第二信息的方式:
一种可能的实现方式,第二信息是目标检测装置从第二感知装置获取的。也就是说,第二感知装置可以获取第二信息并发送给目标检测装置。第二感知装置可以是图1所示的感知装置103。例如,第二感知装置为毫米波雷达、激光雷达或声呐。
另一种可能的实现方式,第二信息是目标检测装置从本地获取的。也就是说,目标检测装置可以具备感知能力,例如,目标检测装置配置了感知装置,如激光雷达、毫米波雷达或声呐等,并通过感知装置获取第二信息。
可以理解的,上述第一感知装置和第二感知装置的具体形式仅是示例性的,在具体应用中,第一感知装置和/或第二感知装置还可以是其他能够检测或“看见”目标、车道线或路沿等物体的设备,不予限制。
示例性的,以图4所示的三个车道(车道401、车道402和车道403)为例,若目标检测装置或第二感知装置被配置在目标4022上,则第二感知信息包括目标4011的位置信息、目标4012的位置信息、目标4021的位置信息和目标4031的位置信息,或者,第二感知信息包括目标4011的位置信息、目标4011的速度信息、目标4012的位置信息、目标4012的速度信息、目标4021的位置信息、目标4021的速度信息、目标4031的位置信息和目标4031的速度信息。
可以理解的,第一感知装置和第二感知装置的类型可以相同也可以不同,下面进行具体阐述。
一种可能的设计,第一感知装置和第二感知装置是不同类型的感知装置。也就是说,第一信息和第二信息是由不同类型的感知装置获取的。例如,第一感知装置为摄像装置,第二感知装置为毫米波雷达。在这种情况下,可以由对图形(如:车道线)敏感的感知装置获取第一信息,由对目标(如:车辆、行人、树木或路测装置)敏感的感知装置获取第二信息,以获得较为准确的第一信息和第二信息。
另一种可能的设计,第一感知装置和第二感知装置是相同类型的感知装置。也就是说,第一信息和第二信息是由相同类型的感知装置获取的。例如,第一感知装置和第二感知装置都是毫米波雷达。在这种情况下,第一信息和第二信息包括的信息类似,处理起来较为方便,易于实现。另外,将多个成本较低的感知装置(如毫米波雷达)设置在不同的位置,以获取第一信息和第二信息,进而可以降低成本。
可以理解的,本申请实施例不限制S301和S302的执行顺序。例如,本申请实施例可以先执行S301再执行S302,也可以先执行S302再执行S301,还可以同时执行 S301和S302,不予限制。
可以理解的,目标检测装置获取到第一信息和第二信息后,即可以根据第一信息和第二信息确定环境的拥堵等级。但是,考虑到在实际应用中,目标通常是按照一定的规则行驶的,例如,按照车道行驶的,所以不同车道的拥堵情况可能不同。因此,本申请实施例提供的方法可以基于车道确定拥堵等级,以便该车道上的目标根据该车道的拥堵等级进行目标检测。
一种可能的实现方式,在S302之后,目标检测装置可以根据第一感知信息和第二感知信息,将至少一个目标按照车道进行划分,以便目标检测装置知道哪个车道上有哪些目标。
一种可能的设计,目标检测装置根据第一感知信息指示的至少一个车道的位置,以及第二感知信息指示的至少一个目标的位置,确定至少一个目标中的每个目标位于哪个车道。具体来说,若第二感知信息指示的目标的位置与第一感知信息指示的某一个车道的位置重叠,则目标检测装置确定该目标在这个车道上,若第二感知信息指示的目标的位置与第一感知信息指示的车道的位置不重叠,则目标感知装置确定该目标没有在这个车道上。可以理解的,若第二感知信息还包括至少一个车道上的至少一个目标的速度信息,则目标检测装置还可以确定至少一个车道中的目标的速度。
示例性的,以图4所示的三个车道(车道401、车道402和车道403)为例,若第一感知信息包括车道401的信息、车道402的信息和车道403的信息,第二感知信息包括目标4011的位置信息、目标4012的位置信息、目标4021的位置信息和目标4031的位置信息,则目标检测装置可以确定车道401的位置和目标4011的位置重叠,车道401的位置还和目标4012的位置重叠,所以目标4011和目标4012位于车道401上,目标检测装置可以确定车道402的位置和目标4021的位置重叠,所以目标4021位于车道402上,目标检测装置可以确定车道403的位置与目标4031的位置重叠,所以目标4031位于车道403上。
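The lane-division step above (overlapping each target's position with the lane boundaries) can be sketched in code. This is a minimal illustration, not part of the application: it assumes each lane is bounded by two lane lines given as cubic polynomials y = c₃x³ + c₂x² + c₁x + c₀ in the detection apparatus's coordinate system (x longitudinal, y lateral, as in the FIG. 4 example), and the function names, straight-line coefficients and 3.5 m lane width are all invented for the example.

```python
def poly_y(coeffs, x):
    # Evaluate a lane line y = c3*x^3 + c2*x^2 + c1*x + c0 at longitudinal x.
    c3, c2, c1, c0 = coeffs
    return c3 * x**3 + c2 * x**2 + c1 * x + c0

def assign_lane(target_xy, lanes):
    # lanes: list of (left_boundary_coeffs, right_boundary_coeffs).
    # A target is on a lane if its lateral coordinate y lies between the
    # two boundary lines evaluated at the target's longitudinal coordinate x.
    x, y = target_xy
    for idx, (left, right) in enumerate(lanes):
        y_left, y_right = poly_y(left, x), poly_y(right, x)
        if min(y_left, y_right) <= y <= max(y_left, y_right):
            return idx
    return None  # the target is not on any known lane

def straight(offset):
    # Coefficients of a straight lane line y = offset (illustrative).
    return (0.0, 0.0, 0.0, offset)

# Three straight lanes, each 3.5 m wide, around the apparatus at y = 0.
lanes = [(straight(5.25), straight(1.75)),    # lane to the left
         (straight(1.75), straight(-1.75)),   # ego lane
         (straight(-1.75), straight(-5.25))]  # lane to the right
```

A target at (x, y) = (20.0, 0.5) would be assigned to the middle lane; a target exactly on a shared boundary line is assigned to the first matching lane in the list.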
S303:目标检测装置根据第一信息和第二信息确定环境的拥堵等级。
本申请实施例中,环境的拥堵等级可以用于对环境中的至少一个目标进行检测。例如,环境的拥堵等级可以用于确定传感器融合策略,以便第一装置估计该环境中一个或多个目标的实际位置,和/或,环境的拥堵等级可以用于确定感知目标的策略,以便第二装置感知该环境中是否存在目标。其中,第一装置和第二装置可以是相同的设备,也可以是不同的设备。可以理解的,第一装置和/或第二装置也可以是目标检测装置。目标检测装置根据环境的拥堵等级确定传感器融合策略的过程,将在下述S304中介绍,目标检测装置根据环境的拥堵等级确定感知目标的策略的过程,将在下述S305中介绍,在此不做赘述。
下面以环境包括第一车道为例,介绍目标检测装置根据第一信息和第二信息确定环境的拥堵等级的具体过程。可以理解的,若环境包括多个车道,则目标检测装置可以重复执行下述过程,以确定每个车道的拥堵等级,即得到了环境的拥堵等级。
一种可能的实现方式,目标检测装置根据第一感知信息和第二感知信息,确定至少一个车道中的第一车道上的目标的信息,并根据第一车道上的目标的信息确定第一车道的拥堵等级。
本申请实施例中,第一车道为至少一个车道中的任意一个车道,或者说第一车道为环境中的任意一个车道。
本申请实施例中,第一车道上的目标的信息包括以下至少一项:第一车道上的目标的数量、第一车道上的目标的速度、第一车道上相邻两个目标之间的平均间距、第一车道上相邻两个目标之间的最小间距、或第一车道的车流密度。
本申请实施例中,第一车道上的目标的速度可以是第一车道上的目标的平均速度、第一车道上的目标的最低速度或第一车道上的目标的最大速度等。第一车道的车流密度可以表征在第一车道上的预设距离内的目标的数量。以预设距离为50米,第一车道的车流密度为10为例,第一车道的车流密度表示在第一车道上的50米内有10个目标。
可以理解的,在目标检测装置根据第一感知信息和第二感知信息,将至少一个目标按照车道进行划分之后,目标检测装置可以确定哪些目标在第一车道上,即得到了第一车道上的目标的数量。若第二感知信息包括至少一个车道上的至少一个目标的速度信息,则目标检测装置还可以直接得到第一车道上的目标的速度。若第二感知信息不包括至少一个车道上的至少一个目标的速度信息,则目标检测装置可以在一段时间内多次获取第二感知信息,根据多次获取的第二感知信息确定第一车道上的每个目标的速度,进而得到第一车道上的目标的速度。
可以理解的,目标检测装置可以根据第一车道上相邻两个目标的位置信息,得到第一车道上相邻两个目标之间的平均间距、第一车道上相邻两个目标之间的最小间距或第一车道的车流密度。
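As a hedged sketch of computing the per-lane quantities just listed (target count, target speed, average and minimum gap between adjacent targets, and traffic density), assuming targets on one lane are described by their longitudinal positions and speeds; normalizing the density to a preset 50 m window (matching the example above) over the observed span is an illustrative choice, not prescribed by the text.

```python
def lane_stats(positions, speeds, window=50.0):
    # positions: longitudinal coordinates of the targets on one lane (metres).
    # speeds: the corresponding target speeds.
    xs = sorted(positions)
    gaps = [b - a for a, b in zip(xs, xs[1:])]  # gaps between adjacent targets
    span = xs[-1] - xs[0] if len(xs) > 1 else 0.0
    return {
        "count": len(xs),
        "mean_speed": sum(speeds) / len(speeds),
        "avg_gap": sum(gaps) / len(gaps) if gaps else None,
        "min_gap": min(gaps) if gaps else None,
        # targets per `window` metres of lane (illustrative normalization)
        "density": len(xs) * window / span if span > 0 else None,
    }

stats = lane_stats(positions=[5.0, 20.0, 30.0, 65.0],
                   speeds=[8.0, 10.0, 9.0, 13.0])
```

Here four targets spread over 60 m give an average gap of 20 m, a minimum gap of 10 m and a density of about 3.3 targets per 50 m.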
可以理解的,目标检测装置获取到第一车道上的目标的信息后,可以根据第一车道上的目标的信息确定第一车道的拥堵等级。例如,目标检测装置根据第一车道上的目标的信息和至少一个阈值,确定第一车道的拥堵等级。
本申请实施例中,至少一个阈值可以包括以下至少一项:速度阈值、间距阈值、数量阈值(如:第一数量阈值,和/或,第二数量阈值)或密度阈值(如:第一密度阈值,和/或,第二密度阈值)。可以理解的,上述阈值可以是预设置的,也可以是根据第一车道上的目标的信息计算的。若上述阈值是预设置的,则可以简化目标检测装置的操作。若上述阈值是根据第一车道上的目标的信息计算的,则可以提高本申请实施例提供的目标检测方法的鲁棒性,使其适应性更好。
一种可能的设计,在目标检测装置根据第一车道上的目标的信息和至少一个阈值,确定第一车道的拥堵等级之前,目标检测装置获取至少一个阈值。例如,目标检测装置根据第一车道上的多个目标的速度,确定速度阈值。其中,速度阈值为第一车道上的多个目标的速度的中位数、第一车道上的多个目标的速度的极小值或第一车道上的多个目标的速度的极大值。又例如,目标检测装置根据速度阈值和时间间隔,得到间距阈值。该速度阈值可以是预设置的,或者是目标检测装置根据第一车道上的多个目标的速度确定的。时间间隔为预设置的。
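The threshold derivation described here can be sketched as follows; the median variant is chosen for illustration (the text equally allows the minimum, the maximum, or a preset value), and the 2-second time interval is an assumed preset, not specified by the application:

```python
import statistics

def derive_thresholds(lane_speeds, time_gap=2.0):
    # Speed threshold: median of the speeds of the targets on the lane.
    speed_thr = statistics.median(lane_speeds)
    # Spacing threshold: obtained from the speed threshold and a preset
    # time interval, i.e. the distance covered at speed_thr during time_gap.
    gap_thr = speed_thr * time_gap
    return speed_thr, gap_thr

speed_thr, gap_thr = derive_thresholds([6.0, 9.0, 12.0])  # speeds in m/s
```

With the example speeds, the speed threshold is 9 m/s and the spacing threshold is 18 m.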
可以理解的,在具体应用中,可以根据需要设置第一车道的拥堵等级,以适用不同的场景。例如,第一车道的拥堵等级可以包括2个等级、3个等级、4个等级、5个等级、6个等级或者更多的等级,不予限制。
下面以下述情况1和情况2为例,介绍目标检测装置根据第一车道上的目标的信息和至少一个阈值,确定第一车道的拥堵等级的具体过程。在情况1中,第一车道上的目标的信息包括一个参数,目标检测装置可以根据该一个参数以及该一个参数对应的阈值,确定第一车道的拥堵等级。因为情况1中的目标检测装置不需要将第一车道上的目标的信息中的多个参数结合起来进行处理,所以实施起来较为简便,复杂度低。在情况2中,第一车道上的目标的信息包括多个参数,目标检测装置根据该多个参数和该多个参数对应的阈值,确定第一车道的拥堵等级。相对于情况1,情况2中的目标检测装置虽然实施起来稍微复杂,但是因为目标检测装置综合考虑了多个参数,所以确定的第一车道的拥堵等级较为准确。下面进行具体介绍。
情况1:第一车道上的目标的信息包括一个参数,目标检测装置根据该一个参数和该一个参数对应的阈值,确定第一车道的拥堵等级。
示例性的，以第一车道上的目标的信息包括第一车道上的目标的速度，至少一个阈值包括速度阈值，第一车道的拥堵等级包括2个等级为例，若第一车道上的目标的速度大于或等于速度阈值，则第一车道的拥堵等级为等级1；或者，若第一车道上的目标的速度小于速度阈值，则第一车道的拥堵等级为等级0。
示例性的,以第一车道上的目标的信息包括第一车道上相邻两个目标之间的平均间距,至少一个阈值包括间距阈值1和间距阈值2,第一车道的拥堵等于包括3个等级为例,若第一车道上相邻两个目标之间的平均间距大于或等于间距阈值1,则第一车道的拥堵等级为等级2;或者,若第一车道上相邻两个目标之间的平均间距小于间距阈值1,并且大于或等于间距阈值2,则第一车道的拥堵等级为等级1;或者,若第一车道上相邻两个目标之间的平均间距小于间距阈值2,则第一车道的拥堵等级为等级0。
示例性的,以第一车道上的目标的信息包括第一车道上相邻两个目标之间的最小间距,至少一个阈值包括间距阈值3和间距阈值4,第一车道的拥堵等于包括3个等级为例,若第一车道上相邻两个目标之间的最小间距大于或等于间距阈值3,则第一车道的拥堵等级为等级2;或者,若第一车道上相邻两个目标之间的最小间距小于间距阈值3,并且大于或等于间距阈值4,则第一车道的拥堵等级为等级1;或者,若第一车道上相邻两个目标之间的最小间距小于间距阈值4,则第一车道的拥堵等级为等级0。
示例性的,以第一车道上的目标的信息包括第一车道上的目标的数量,至少一个阈值包括数量阈值1、数量阈值2和数量阈值3,第一车道的拥堵等于包括4个等级为例,若第一车道上的目标的数量大于或等于数量阈值1,则第一车道的拥堵等级为等级3;或者,若第一车道上的目标的数量小于数量阈值1,并且大于或等于数量阈值2,则第一车道的拥堵等级为等级2;或者,若第一车道上的目标的数量小于数量阈值2,并且大于或等于数量阈值3,则第一车道的拥堵等级为等级1;或者,若第一车道上的目标的数量小于数量阈值3,则第一车道的拥堵等级为等级0。
示例性的,以第一车道上的目标的信息包括第一车道的车流密度,至少一个阈值包括密度阈值1、密度阈值2、密度阈值3和密度阈值4,第一车道的拥堵等于包括5个等级为例,若第一车道的车流密度大于或等于密度阈值1,则第一车道的拥堵等级 为等级4;或者,若第一车道的车流密度小于密度阈值1,并且大于或等于密度阈值2,则第一车道的拥堵等级为等级3;或者,若第一车道的车流密度小于密度阈值2,并且大于或等于密度阈值3,则第一车道的拥堵等级为等级2;或者,若第一车道的车流密度小于密度阈值3,并且大于或等于密度阈值4,则第一车道的拥堵等级为等级1;或者,若第一车道的车流密度小于密度阈值4,则第一车道的拥堵等级为等级0。
情况2:第一车道上的目标的信息包括多个参数,目标检测装置根据该多个参数和该多个参数对应的阈值,确定第一车道的拥堵等级。
示例性的，以第一车道上的目标的信息包括第一车道上的目标的数量、第一车道上的目标的速度、第一车道上相邻两个目标之间的平均间距和第一车道上相邻两个目标之间的最小间距，至少一个阈值包括第一数量阈值、速度阈值、间距阈值和第二数量阈值，第一车道的拥堵等级包括6个等级为例，若第一车道上的目标的数量大于或等于第一数量阈值，并且第一车道上的目标的速度小于或等于速度阈值，则第一车道的拥堵等级为等级5；或者，若第一车道上的目标的数量大于或等于第一数量阈值，则第一车道的拥堵等级为等级4；或者，若第一车道上相邻两个目标之间的平均间距小于或等于间距阈值，则第一车道的拥堵等级为等级3；或者，若第一车道上相邻两个目标之间的最小间距小于或等于间距阈值，则第一车道的拥堵等级为等级2；若第一车道上相邻两个目标之间的最小间距大于间距阈值，并且第一车道上的目标的数量大于或者等于第二数量阈值，则第一车道的拥堵等级为等级1；或者，若第一车道上目标的数量小于第二数量阈值，则第一车道的拥堵等级为等级0。
示例性的,以第一车道上的目标的信息包括第一车道上的目标的速度、第一车道上相邻两个目标之间的平均间距、第一车道上相邻两个目标之间的最小间距和第一车道的车流密度,至少一个阈值包括第一密度阈值、速度阈值、间距阈值和第二密度阈值,第一车道的拥堵等于包括6个等级为例,若第一车道的车流密度大于或等于第一密度阈值,并且第一车道上的目标的速度小于或等于速度阈值,则第一车道的拥堵等级为等级5;或者,若第一车道的车流密度大于或等于第一密度阈值,则第一车道的拥堵等级为等级4;或者,若第一车道上相邻两个目标之间的平均间距小于或等于间距阈值,则第一车道的拥堵等级为等级3;或者,若第一车道上相邻两个目标之间的最小间距小于或等于间距阈值,则第一车道的拥堵等级为等级2;若第一车道上相邻两个目标之间的最小间距大于间距阈值,并且第一车道的车流密度大于或者等于第二密度阈值,则第一车道的拥堵等级为等级1;或者,若第一车道的车流密度小于第二密度阈值,则第一车道的拥堵等级为等级0。
可以理解的,上述情况1或情况2中的例子仅是示例性的,在具体应用中,目标检测装置还可以通过其他方式确定第一车道的拥堵等级,不予限制。
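The six-level, count-based cascade of Case 2 can be written as a first-match-wins rule chain. This is a sketch under the rules stated above; the threshold values used in the usage line are invented for illustration:

```python
def congestion_level(count, speed, avg_gap, min_gap,
                     n1, n2, speed_thr, gap_thr):
    # Checks run from most congested (level 5) to least congested (level 0);
    # the first condition that holds determines the lane's congestion level.
    if count >= n1 and speed <= speed_thr:
        return 5
    if count >= n1:
        return 4
    if avg_gap is not None and avg_gap <= gap_thr:
        return 3
    if min_gap is not None and min_gap <= gap_thr:
        return 2
    # Reaching this point implies the minimum gap exceeds the gap threshold.
    if count >= n2:
        return 1
    return 0
```

For example, with n1=10, n2=3, speed_thr=5.0 and gap_thr=18.0, a lane with 12 slow targets maps to level 5, while a lane with only 2 well-spaced targets maps to level 0.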
可选的,S303之后,目标检测装置发送环境的拥堵等级的信息。其中,环境的拥堵等级的信息可以指示环境的拥堵等级。例如,目标检测装置可以向智能驾驶车辆发送环境的拥堵等级的信息,以便智能驾驶车辆根据该信息选择更优的道路规划方案和/或车辆控制策略,和/或,以便多辆车共享环境的拥堵等级,进而形成区域的拥堵地图,方便每辆车自身的行驶规划和/或道路的区域性车流控制(例如,红绿灯控制)。又例如,若目标检测装置包括ADAS,ADAS中的传感器融合模块根据第一信息和第二信 息确定环境的拥堵等级后,可以向目标选择模块发送环境的拥堵等级的信息,以便目标选择模块根据该信息调整选择目标的算法,和/或,向控制模块发送环境的拥堵等级的信息,以便控制模块根据该信息调整控制模块所控制的功能中的参数(例如,在拥堵等级较高的情况下,调整AEB功能中的参数,使得刹车更加敏感,刹车用时更短)。
基于图3所示的方法,目标检测装置可以获取包括对环境的第一感知信息的第一信息,以及包括对环境的第二感知信息的第二信息,并根据第一信息和第二信息确定环境的拥堵等级,使得环境的拥堵等级可以用于对环境中的目标进行检测。也就是说,采用图3所示的方法,可以实现根据环境的拥堵等级检测环境中的目标,例如,对于不同的拥堵等级可以采用不同的目标检测策略,使得目标检测结果较为准确。
下面以环境的拥堵等级用于确定传感器融合策略,和/或,环境的拥堵等级用于确定感知目标的策略为例,介绍环境的拥堵等级如何用于对环境中的至少一个目标进行检测。
可选的,图3所示方法还包括S304和/或S305。
S304:目标检测装置根据第一车道的拥堵等级,确定第一车道的传感器融合策略。
本申请实施例中,第一车道的传感器融合策略可以用于估计第一车道上的目标的实际位置。可以理解的,不同的拥堵等级可以对应不同的传感器融合策略。
一种可能的实现方式,若第一车道的拥堵等级大于或等于第一等级,则第一车道的传感器融合策略为第一融合策略;或者,若第一车道的拥堵等级小于第一等级,则第一车道的传感器融合策略为第二融合策略。可以理解的,第一等级可以根据需要进行设置。以第一车道的拥堵等级6个等级为例,第一等级可以是等级2、等级3、等级4或等级5。传感器融合策略也可以包括3个或3个以上的融合策略,不予限制。
在S304之前,目标检测装置从与第二感知装置不同的感知装置,如第一感知装置处获取第三信息。第三信息可以包括至少一个车道上的至少一个目标的位置信息。可以理解的,第三信息也可以包括在第一感知信息中。也就是说,上述第三信息也可以在S301中获取。可以理解的,第一融合策略和第二融合策略可以采用不同的算法对第三信息中的位置信息和第二感知信息中的位置信息进行融合计算,来估计第一车道上的目标的实际位置。首先,对第三信息中的位置信息和第二感知信息中的位置信息进行介绍。
一种可能的设计,第三信息中的位置信息包括第一目标的第一距离,第一目标的第一角度,第一目标的横坐标和第一目标的纵坐标。第二感知信息中的位置信息包括第二目标的横坐标和第二目标的纵坐标。
本申请实施例中,第一目标为第一车道中的任意一个目标,第一目标与目标检测装置所在的目标不同。以第一目标和目标检测装置所在的目标均为车辆为例,第一目标对应的车辆与目标检测装置所在的目标对应的车辆不是同一辆车。
本申请实施例中，第一距离为目标检测装置与第一目标之间的距离，第一角度为目标检测装置的正向方向与第一连线之间的夹角，第一连线为目标检测装置与第一目标之间的连线。其中，目标检测装置的正向方向也可以描述为目标检测装置所在的目标的正向方向。例如，若目标检测装置所在的目标为车辆，则该正向方向为车辆的车头所指的方向。第二目标为第二感知信息对应的至少一个目标中，与第一目标对应的目标。
示例性的，以图4所示的车道为例，若第一车道为车道401，目标检测装置为目标4013，第一目标为目标4012，则第一目标的横坐标为第一感知装置感知的目标4012的横坐标，第一目标的纵坐标为第一感知装置感知的目标4012的纵坐标，第一目标的第一距离为第一感知装置感知的目标4013与目标4012之间的距离，第一目标的第一角度α为第一感知装置感知的目标4013的正向方向，与目标4013以及目标4012之间连线的夹角。第二目标的横坐标为第二感知装置感知的目标4012的横坐标，第二目标的纵坐标为第二感知装置感知的目标4012的纵坐标。
一种可能的设计,第一融合策略是将第一目标的横坐标,第一目标的纵坐标,第二目标的横坐标和第二目标的纵坐标进行融合,得到第一车道上的目标的实际位置;第二融合策略是将第一角度、第一距离、第二目标的横坐标和第二目标的纵坐标进行融合,得到第一车道上的目标的实际位置。
可以理解的,在第一车道的拥堵等级较低(即第一车道较不拥堵)的情况下,第一感知装置感知的第一角度和第一距离较为准确(如:通过对摄像装置拍摄的照片进行图像识别,可以得到较为准确的第一角度和第一距离),因此,将第一角度、第一距离、第二目标的横坐标和第二目标的纵坐标进行融合(即通过第二融合策略),能够估计出较为准确的位置。在第一车道的拥堵等级较高(即第一车道较为拥堵)的情况下,第一感知装置感知的第一角度和第一距离误差较大(例如,通过图像识别技术得到的第一角度和第一距离有较大误差),将第一角度、第一距离、第二目标的横坐标和第二目标的纵坐标进行融合(即通过第二融合策略),估计出的位置不准确,因此,可以将第一目标的横坐标,第一目标的纵坐标,第二目标的横坐标和第二目标的纵坐标进行融合(即通过第一融合策略),以估计出较为准确的位置。
可以理解的,在理想情况下,第三信息对应的至少一个目标与第二感知信息对应的至少一个目标是一一对应的。也就是说,若第一感知装置感知到了3个目标,第二感知装置也应当感知到这3个目标。然而,在实际应用中,由于各种原因,如:第一感知装置或第二感知装置的精度不同,或者第一感知装置感知目标的原理(如通过图像识别技术感知目标)与第二感知装置感知目标的原理(如通过雷达感知目标)不同等,第三信息对应的至少一个目标与第二感知信息对应的至少一个目标不是一一对应的。例如,若第一感知装置多检,或者第二感知装置漏检,第一感知装置感知到的目标的数量,大于第二感知装置感知到的目标的数量,或者,若第一感知装置漏检,或者第二感知装置多检,第一感知装置感知到的目标的数量,小于第二感知装置感知到的目标的数量。
本申请实施例中,不论第三信息对应的至少一个目标与第二感知信息对应的至少一个目标是否是一一对应的,第一装置都可以根据第三信息和第二感知信息,确定与第一目标对应的目标。例如,第一装置可以将第二感知信息对应的至少一个目标中,与第一目标的距离小于或等于第二距离的目标,确定为第二目标。第一装置为用于估计环境中一个或多个目标的实际位置的装置。
可以理解的,若在第二感知信息对应的至少一个目标中,不存在与第一目标的距 离小于或等于第二距离的目标,则不对第一目标执行融合策略,即不将第一目标与第二感知信息对应的目标进行融合,或者,可以将第二感知信息对应的至少一个目标中,与第一目标距离最近的目标确定为第二目标,并根据第一车道的拥堵等级,对第一目标和第二目标执行融合策略。
可以理解的,通过S304,目标检测装置可以根据第一车道的拥堵等级,确定适合的传感器融合策略,不仅使得根据该传感器融合策略估计的目标的位置较为准确,还使得在确定目标数量时,能够确定出较为准确的目标数量。
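Selecting and applying a fusion strategy by congestion level can be sketched as below. Simple averaging stands in for the unspecified fusion operation, and the coordinate convention (x longitudinal along the apparatus's forward direction, the first angle measured from that direction) follows the description above; everything else is an illustrative assumption:

```python
import math

def fuse_position(level, first_level, first_obs, second_xy):
    # first_obs: (angle_rad, distance, x, y) sensed by the first device;
    # second_xy: (x, y) sensed by the second device for the same target.
    angle, dist, x1, y1 = first_obs
    x2, y2 = second_xy
    if level >= first_level:
        # First strategy (congested lane): fuse the two Cartesian positions.
        fx, fy = x1, y1
    else:
        # Second strategy (sparse lane): rebuild Cartesian coordinates from
        # the first device's angle and distance, then fuse with the second.
        fx = dist * math.cos(angle)
        fy = dist * math.sin(angle)
    return ((fx + x2) / 2.0, (fy + y2) / 2.0)
```

In a sparse lane, the angle/distance pair dominates; in a congested lane, the first device's direct Cartesian estimate is used instead, reflecting the accuracy trade-off discussed above.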
S305:目标检测装置根据第一车道的拥堵等级,确定第一车道上感知目标的策略。
本申请实施例中,第一车道上感知目标的策略可以用于感知第一车道中是否存在目标。可以理解的,不同的拥堵等级对应的第一车道上感知目标的策略不同。第一车道的拥堵等级越高(即第一车道越拥堵),第一车道上感知目标的策略越宽松,以防止漏感知目标;第一车道的拥堵等级越低(即第一车道越不拥堵),第一车道上感知目标的策略越严格,以防止确定出虚假目标。
一种可能的实现方式,若第一车道的拥堵等级大于或等于第二等级,则采用第一感知方式感知第一车道上是否存在目标;或者,若第一车道的拥堵等级小于第二等级,则采用第二感知方式感知第一车道上是否存在目标。
可以理解的,第二等级可以根据需要进行设置。以第一车道的拥堵等级6个等级为例,第二等级可以是等级2、等级3、等级4或等级5。第二等级和第一等级可以相同也可以不同。感知目标的策略也可以包括3个或3个以上的感知方式,不予限制。
本申请实施例中,第一感知方式为将根据第三信息和第二感知信息进行传感器融合得到的目标,第三信息中未进行传感器融合的目标和第二感知信息中未进行传感器融合的目标,确定为第一车道上的目标;第二感知方式为将根据第三信息和第二感知信息进行传感器融合得到的目标,确定为第一车道上的目标。
如S304中所述,在实际应用中,第三信息对应的至少一个目标与第二感知信息对应的至少一个目标不是一一对应的,第一感知装置或第二感知装置可能出现漏检或多检的情况。因此,在根据传感器融合策略对第三信息对应的目标以及第二感知信息对应的目标进行融合时,可能出现以下三种目标:根据第三信息和第二感知信息进行传感器融合得到的目标;第三信息中未进行传感器融合的目标和第二感知信息中未进行传感器融合的目标。
示例性的,以第三信息包括目标1的位置信息、目标2的位置信息和目标3的位置信息,第二感知信息包括目标4的位置信息、目标5的位置信息和目标6的信息为例,若目标2和目标5对应,目标3和目标6对应,则根据第三信息和第二感知信息进行传感器融合得到的目标包括根据目标2的位置信息和目标5的位置信息进行传感器融合得到的目标,以及根据目标3的位置信息和目标6的位置信息进行传感器融合得到的目标。第三信息中未进行传感器融合的目标包括目标1。第二感知信息中未进行传感器融合的目标包括目标4。
可以理解的,在第一车道的拥堵等级较高(即第一车道较拥堵)的情况下,为了防止发生碰撞,可以将第一车道上感知目标的策略设置的较为宽松,例如,不对获取 到的目标(如:根据第一感知信息和第二感知信息进行传感器融合得到的目标,第一感知信息中未进行传感器融合的目标和第二感知信息中未进行传感器融合的目标)进行筛选,而是直接将获取到的目标确定为第一车道上的目标。在第一车道的拥堵等级较低(即第一车道较不拥堵)的情况下,可以将第一车道上感知目标的策略设置的较为严格,例如,可以对获取到的目标进行筛选,将筛选后的目标(如:根据第一感知信息和第二感知信息进行传感器融合得到的目标)确定为第一车道上的目标,以防止确定出虚假目标。
可以理解的,通过S305,目标检测装置可以根据第一车道的拥堵等级,确定适合的第一车道上感知目标的策略,在第一车道较拥堵的情况下,能够防止发生碰撞,在第一车道较不拥堵的情况下,能够防止确定出虚假目标。
其中,上述S301-S305中的目标检测装置的动作可以由图2所示的目标检测装置20中的处理器201调用存储器203中存储的应用程序代码来执行,本申请实施例对此不做任何限制。
本申请上文中提到的各个实施例之间在方案不矛盾的情况下,均可以进行结合,不作限制。
可以理解的,以上各个实施例中,由目标检测装置实现的方法和/或步骤,也可以由可用于目标检测装置的部件(例如芯片或者电路)实现,不予限制。
上述主要对本申请实施例提供的目标检测方法进行了阐述。本申请实施例还提供了目标检测装置,该目标检测装置可以为上述方法实施例中的目标检测装置,或者包含上述目标检测装置的装置,或者为可用于目标检测装置的部件。可以理解的是,上述目标检测装置为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法操作,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例可以根据上述方法示例对目标检测装置进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。可以理解的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
比如,以采用集成的方式划分各个功能模块的情况下,图5示出了一种目标检测装置50的结构示意图。目标检测装置50包括处理模块501。可选的,目标检测装置50还包括收发模块502。处理模块501,也可以称为处理单元用于执行除了收发操作之外的操作,例如可以是处理电路或者处理器等。收发模块502,也可以称为收发单元用于执行收发操作,例如可以是收发电路,收发机,收发器或者通信接口等。
在一些实施例中,该目标检测装置50还可以包括存储模块(图5中未示出),用于存储程序指令和数据。
示例性地,目标检测装置50用于实现上述方法实施例中的目标检测装置的功能。 目标检测装置50例如为图3所示的实施例所述的目标检测装置。
其中,处理模块501,用于获取第一信息。其中,第一信息包括对环境的第一感知信息。例如,处理模块501可以用于执行S301。
处理模块501,还用于获取第二信息。其中,第二信息包括对环境的第二感知信息。例如,处理模块501还可以用于执行S302。
处理模块501,还用于根据第一信息和第二信息确定环境的拥堵等级。其中,环境的拥堵等级用于对环境中的至少一个目标进行检测。例如,处理模块501还可以用于执行S303。
在一种可能的实现方式中,第一感知信息包括至少一个车道的信息;第二感知信息包括至少一个车道上的至少一个目标的位置信息,或者,第二感知信息包括至少一个车道上的至少一个目标的位置信息,和至少一个车道上的至少一个目标的速度信息。
在一种可能的实现方式中,处理模块501,还用于根据第一感知信息和第二感知信息,将至少一个目标按照车道进行划分。
在一种可能的实现方式中,处理模块501,具体用于根据第一感知信息和第二感知信息,确定至少一个车道中的第一车道上的目标的信息,第一车道为至少一个车道中的任意一个车道,第一车道上的目标的信息包括以下至少一项:第一车道上的目标的数量、第一车道上的目标的速度、第一车道上相邻两个目标之间的平均间距、第一车道上相邻两个目标之间的最小间距、或第一车道的车流密度;处理模块501,还具体用于根据第一车道上的目标的信息确定第一车道的拥堵等级。
在一种可能的实现方式中,处理模块501,还用于获取至少一个阈值,至少一个阈值包括以下至少一项:速度阈值或间距阈值;速度阈值为第一车道上的多个目标的速度的中位数、第一车道上的多个目标的速度的极小值或第一车道上的多个目标的速度的极大值,或者,速度阈值为预设置的;间距阈值是根据速度阈值和时间间隔得到的,时间间隔为预设置的。
在一种可能的实现方式中,处理模块501,具体用于根据第一车道上的目标的信息和至少一个阈值,确定第一车道的拥堵等级。
在一种可能的实现方式中,第一车道的拥堵等级包括4个等级,5个等级或6个等级。
在一种可能的实现方式中,第一车道的拥堵等级包括6个等级,若第一车道上的目标的数量大于或等于第一数量阈值,并且第一车道上的目标的速度小于或等于速度阈值,则第一车道的拥堵等级为等级5;或者,若第一车道上的目标的数量大于或等于第一数量阈值,则第一车道的拥堵等级为等级4;或者,若第一车道上相邻两个目标之间的平均间距小于或等于间距阈值,则第一车道的拥堵等级为等级3;或者,若第一车道上相邻两个目标之间的最小间距小于或等于间距阈值,则第一车道的拥堵等级为等级2;若第一车道上相邻两个目标之间的最小间距大于间距阈值,并且第一车道上的目标的数量大于或者等于第二数量阈值,则第一车道的拥堵等级为等级1;或者,若第一车道上目标的数量小于第二数量阈值,则第一车道的拥堵等级为等级0。
在一种可能的实现方式中,第一车道的拥堵等级包括6个等级,若第一车道的车流密度大于或等于第一密度阈值,并且第一车道上的目标的速度小于或等于速度阈值, 则第一车道的拥堵等级为等级5;或者,若第一车道的车流密度大于或等于第一密度阈值,则第一车道的拥堵等级为等级4;或者,若第一车道上相邻两个目标之间的平均间距小于或等于间距阈值,则第一车道的拥堵等级为等级3;或者,若第一车道上相邻两个目标之间的最小间距小于或等于间距阈值,则第一车道的拥堵等级为等级2;若第一车道上相邻两个目标之间的最小间距大于间距阈值,并且第一车道的车流密度大于或者等于第二密度阈值,则第一车道的拥堵等级为等级1;或者,若第一车道的车流密度小于第二密度阈值,则第一车道的拥堵等级为等级0。
在一种可能的实现方式中,处理模块501,用于根据第一车道的拥堵等级,确定第一车道的传感器融合策略,第一车道的传感器融合策略用于估计第一车道上的目标的实际位置。
在一种可能的实现方式中,若第一车道的拥堵等级大于或等于第一等级,则第一车道的传感器融合策略为第一融合策略;或者,若第一车道的拥堵等级小于第一等级,则第一车道的传感器融合策略为第二融合策略。
在一种可能的实现方式中,第一感知信息还包括至少一个车道上的至少一个目标的位置信息,第一感知信息中的位置信息包括第一目标的第一距离,第一目标的第一角度,第一目标的横坐标和第一目标的纵坐标,第一目标为第一车道中的任意一个目标,第一目标与目标检测装置50所在的目标不同,第一距离为目标检测装置50与第一目标之间的距离,第一角度为目标检测装置50的正向方向与第一连线之间的夹角,第一连线为目标检测装置50与第一目标之间的连线;第二感知信息中的位置信息包括第二目标的横坐标和第二目标的纵坐标,第二目标为第二感知信息对应的至少一个目标中,与第一目标对应的目标。
在一种可能的实现方式中,第一融合策略是将第一目标的横坐标,第一目标的纵坐标,第二目标的横坐标和第二目标的纵坐标进行融合,得到第一车道上的目标的实际位置;第二融合策略是将第一角度、第一距离、第二目标的横坐标和第二目标的纵坐标进行融合,得到第一车道上的目标的实际位置。
在一种可能的实现方式中,处理模块501,还用于根据第一车道的拥堵等级,确定第一车道上感知目标的策略,第一车道上感知目标的策略用于感知第一车道中是否存在目标。
在一种可能的实现方式中,若第一车道的拥堵等级大于或等于第二等级,则采用第一感知方式感知第一车道上是否存在目标;或者,若第一车道的拥堵等级小于第二等级,则采用第二感知方式感知第一车道上是否存在目标。
在一种可能的实现方式中,第一感知方式为将根据第一感知信息和第二感知信息进行传感器融合得到的目标,第一感知信息中未进行传感器融合的目标和第二感知信息中未进行传感器融合的目标,确定为第一车道上的目标;第二感知方式为将根据第一感知信息和第二感知信息进行传感器融合得到的目标,确定为第一车道上的目标。
在一种可能的实现方式中,第一信息和第二信息是由不同类型的感知装置获取的;或者,第一信息和第二信息是由相同类型的感知装置获取的。
在一种可能的实现方式中,获取第一信息的感知装置为摄像装置、激光雷达、毫米波雷达或声呐;获取第二信息的感知装置为毫米波雷达、激光雷达或声呐。
在一种可能的实现方式中,收发模块502,用于发送环境的拥堵等级的信息。
当用于实现上述方法实施例中的目标检测装置的功能时,关于目标检测装置50所能实现的其他功能,可参考图3所示的实施例的相关介绍,不多赘述。
在一个简单的实施例中,本领域的技术人员可以想到目标检测装置50可以采用图2所示的形式。比如,图2中的处理器201可以通过调用存储器203中存储的计算机执行指令,使得目标检测装置50执行上述方法实施例中所述的方法。
示例性的,图5中的处理模块501和收发模块502的功能/实现过程可以通过图2中的处理器201调用存储器203中存储的计算机执行指令来实现。或者,图5中的处理模块501的功能/实现过程可以通过图2中的处理器201调用存储器203中存储的计算机执行指令来实现,图5中的收发模块502的功能/实现过程可以通过图2中的通信接口204来实现。
可以理解的是,以上模块或单元的一个或多个可以软件、硬件或二者结合来实现。当以上任一模块或单元以软件实现的时候,所述软件以计算机程序指令的方式存在,并被存储在存储器中,处理器可以用于执行所述程序指令并实现以上方法流程。该处理器可以内置于SoC(片上系统)或ASIC,也可是一个独立的半导体芯片。该处理器内处理用于执行软件指令以进行运算或处理的核外,还可进一步包括必要的硬件加速器,如现场可编程门阵列(field programmable gate array,FPGA)、PLD(可编程逻辑器件)、或者实现专用逻辑运算的逻辑电路。
当以上模块或单元以硬件实现的时候,该硬件可以是CPU、微处理器、数字信号处理(digital signal processing,DSP)芯片、微控制单元(microcontroller unit,MCU)、人工智能处理器、ASIC、SoC、FPGA、PLD、专用数字电路、硬件加速器或非集成的分立器件中的任一个或任一组合,其可以运行必要的软件或不依赖于软件以执行以上方法流程。
可选的,本申请实施例还提供了一种芯片系统,包括:至少一个处理器和接口,该至少一个处理器通过接口与存储器耦合,当该至少一个处理器执行存储器中的计算机程序或指令时,使得上述任一方法实施例中的方法被执行。在一种可能的实现方式中,该芯片系统还包括存储器。可选的,该芯片系统可以由芯片构成,也可以包含芯片和其他分立器件,本申请实施例对此不作具体限定。
可选的,本申请实施例还提供了一种计算机可读存储介质。上述方法实施例中的全部或者部分流程可以由计算机程序来指令相关的硬件完成,该程序可存储于上述计算机可读存储介质中,该程序在执行时,可包括如上述各方法实施例的流程。计算机可读存储介质可以是前述任一实施例的目标检测装置的内部存储单元,例如目标检测装置的硬盘或内存。上述计算机可读存储介质也可以是上述目标检测装置的外部存储设备,例如上述目标检测装置上配备的插接式硬盘,智能存储卡(smart media card,SMC),安全数字(secure digital,SD)卡,闪存卡(flash card)等。进一步地,上述计算机可读存储介质还可以既包括上述目标检测装置的内部存储单元也包括外部存储设备。上述计算机可读存储介质用于存储上述计算机程序以及上述目标检测装置所需的其他程序和数据。上述计算机可读存储介质还可以用于暂时地存储已经输出或者将要输出的数据。
可选的,本申请实施例还提供了一种计算机程序产品。上述方法实施例中的全部或者部分流程可以由计算机程序来指令相关的硬件完成,该程序可存储于上述计算机程序产品中,该程序在执行时,可包括如上述各方法实施例的流程。
可选的,本申请实施例还提供了一种计算机指令。上述方法实施例中的全部或者部分流程可以由计算机指令来指令相关的硬件(如计算机、处理器、接入网设备、移动性管理网元或会话管理网元等)完成。该程序可被存储于上述计算机可读存储介质中或上述计算机程序产品中。
可选的,本申请实施例还提供了一种智能驾驶车辆,包括上述实施例中的目标检测装置。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (42)

  1. 一种目标检测方法,其特征在于,所述方法包括:
    获取第一信息,所述第一信息包括对环境的第一感知信息;
    获取第二信息,所述第二信息包括对所述环境的第二感知信息;
    根据所述第一信息和所述第二信息确定所述环境的拥堵等级,所述环境的拥堵等级用于对所述环境中的至少一个目标进行检测。
  2. 根据权利要求1所述的方法,其特征在于,
    所述第一感知信息包括至少一个车道的信息;
    所述第二感知信息包括所述至少一个车道上的至少一个目标的位置信息,或者,所述第二感知信息包括所述至少一个车道上的至少一个目标的位置信息,和所述至少一个车道上的至少一个目标的速度信息。
  3. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    根据所述第一感知信息和所述第二感知信息,将所述至少一个目标按照车道进行划分。
  4. 根据权利要求2或3所述的方法,其特征在于,所述根据所述第一信息和所述第二信息确定所述环境的拥堵等级,包括:
    根据所述第一感知信息和所述第二感知信息,确定至少一个车道中的第一车道上的目标的信息,所述第一车道为所述至少一个车道中的任意一个车道,所述第一车道上的目标的信息包括以下至少一项:所述第一车道上的目标的数量、所述第一车道上的目标的速度、所述第一车道上相邻两个目标之间的平均间距、所述第一车道上相邻两个目标之间的最小间距、或所述第一车道的车流密度;
    根据所述第一车道上的目标的信息确定所述第一车道的拥堵等级。
  5. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    获取至少一个阈值,所述至少一个阈值包括以下至少一项:速度阈值或间距阈值;
    所述速度阈值为所述第一车道上的多个目标的速度的中位数、所述第一车道上的多个目标的速度的极小值或所述第一车道上的多个目标的速度的极大值,或者,所述速度阈值为预设置的;
    所述间距阈值是根据所述速度阈值和时间间隔得到的,所述时间间隔为预设置的。
  6. 根据权利要求5所述的方法,其特征在于,所述根据所述第一车道上的目标的信息确定所述第一车道的拥堵等级,包括:
    根据所述第一车道上的目标的信息和所述至少一个阈值,确定所述第一车道的拥堵等级。
  7. 根据权利要求6所述的方法,其特征在于,所述第一车道的拥堵等级包括4个等级,5个等级或6个等级。
  8. The method according to claim 7, wherein if the congestion level of the first lane comprises 6 levels, the determining the congestion level of the first lane based on the information about the targets on the first lane and the at least one threshold comprises:
    if the quantity of targets on the first lane is greater than or equal to a first quantity threshold and the speeds of the targets on the first lane are less than or equal to the speed threshold, the congestion level of the first lane is level 5; or
    if the quantity of targets on the first lane is greater than or equal to the first quantity threshold, the congestion level of the first lane is level 4; or
    if the average spacing between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level of the first lane is level 3; or
    if the minimum spacing between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level of the first lane is level 2; or
    if the minimum spacing between two adjacent targets on the first lane is greater than the spacing threshold and the quantity of targets on the first lane is greater than or equal to a second quantity threshold, the congestion level of the first lane is level 1; or
    if the quantity of targets on the first lane is less than the second quantity threshold, the congestion level of the first lane is level 0.
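Read as an ordered decision cascade evaluated top down, the "or"-joined conditions of claim 8 can be sketched as follows. All names, the use of `max()` over the lane's target speeds, and the top-down evaluation order are illustrative assumptions (the claim itself does not fix how "the speeds of the targets" are aggregated):

```python
def lane_congestion_level(count, speeds, gaps, n1, n2, v_thresh, gap_thresh):
    """6-level congestion classification sketched from claim 8.

    count: quantity of targets on the lane
    speeds: speeds of the targets; gaps: spacings between adjacent targets
    n1 / n2: first / second quantity thresholds
    v_thresh / gap_thresh: speed and spacing thresholds (claim 5)
    """
    avg_gap = sum(gaps) / len(gaps) if gaps else float("inf")
    min_gap = min(gaps) if gaps else float("inf")
    if count >= n1 and speeds and max(speeds) <= v_thresh:
        return 5  # many, slow targets: most congested
    if count >= n1:
        return 4  # many targets, but moving above the speed threshold
    if avg_gap <= gap_thresh:
        return 3  # targets close together on average
    if min_gap <= gap_thresh:
        return 2  # at least one pair of targets is close
    if min_gap > gap_thresh and count >= n2:
        return 1  # all gaps open, moderate target count
    return 0      # fewer targets than the second quantity threshold
```

The density-based variant of claim 9 has the same cascade shape, with the quantity comparisons replaced by traffic-density comparisons against first and second density thresholds.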
  9. The method according to claim 7, wherein if the congestion level of the first lane comprises 6 levels, the determining the congestion level of the first lane based on the information about the targets on the first lane and the at least one threshold comprises:
    if the traffic density of the first lane is greater than or equal to a first density threshold and the speeds of the targets on the first lane are less than or equal to the speed threshold, the congestion level of the first lane is level 5; or
    if the traffic density of the first lane is greater than or equal to the first density threshold, the congestion level of the first lane is level 4; or
    if the average spacing between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level of the first lane is level 3; or
    if the minimum spacing between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level of the first lane is level 2; or
    if the minimum spacing between two adjacent targets on the first lane is greater than the spacing threshold and the traffic density of the first lane is greater than or equal to a second density threshold, the congestion level of the first lane is level 1; or
    if the traffic density of the first lane is less than the second density threshold, the congestion level of the first lane is level 0.
  10. The method according to any one of claims 4 to 9, wherein the method further comprises:
    determining a sensor fusion strategy for the first lane based on the congestion level of the first lane, wherein the sensor fusion strategy for the first lane is used to estimate actual positions of the targets on the first lane.
  11. The method according to claim 10, wherein the determining a sensor fusion strategy for the first lane based on the congestion level of the first lane comprises:
    if the congestion level of the first lane is greater than or equal to a first level, the sensor fusion strategy for the first lane is a first fusion strategy; or
    if the congestion level of the first lane is less than the first level, the sensor fusion strategy for the first lane is a second fusion strategy.
  12. The method according to claim 11, wherein the method is applied to a target detection apparatus;
    the first sensing information further comprises position information of at least one target on the at least one lane, wherein the position information in the first sensing information comprises a first distance of a first target, a first angle of the first target, a horizontal coordinate of the first target, and a vertical coordinate of the first target; the first target is any target on the first lane and is different from the target on which the target detection apparatus is located; the first distance is a distance between the target detection apparatus and the first target; the first angle is an included angle between a forward direction of the target detection apparatus and a first connection line, and the first connection line is a line connecting the target detection apparatus and the first target; and
    the position information in the second sensing information comprises a horizontal coordinate of a second target and a vertical coordinate of the second target, wherein the second target is, among the at least one target corresponding to the second sensing information, the target that corresponds to the first target.
  13. The method according to claim 12, wherein
    the first fusion strategy is to fuse the horizontal coordinate of the first target, the vertical coordinate of the first target, the horizontal coordinate of the second target, and the vertical coordinate of the second target to obtain the actual positions of the targets on the first lane; and
    the second fusion strategy is to fuse the first angle, the first distance, the horizontal coordinate of the second target, and the vertical coordinate of the second target to obtain the actual positions of the targets on the first lane.
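One way to read the two strategies of claim 13: under heavy congestion both sensors' Cartesian estimates are fused directly, while otherwise the first sensor's polar measurement (first distance and first angle from the apparatus's forward direction) is converted to Cartesian before fusing. The equal weighting, the angle convention, and the function names below are illustrative assumptions; the claim does not specify the fusion operator:

```python
import math

def first_fusion(first_xy, second_xy, w=0.5):
    """First fusion strategy (claim 13): fuse the first target's (x, y)
    from the first sensing information with the second target's (x, y),
    here by a weighted average."""
    return (w * first_xy[0] + (1 - w) * second_xy[0],
            w * first_xy[1] + (1 - w) * second_xy[1])

def second_fusion(first_distance, first_angle_rad, second_xy, w=0.5):
    """Second fusion strategy: convert the first distance / first angle
    (angle measured from the apparatus's forward direction) to Cartesian,
    then fuse with the second target's (x, y)."""
    first_xy = (first_distance * math.sin(first_angle_rad),  # lateral
                first_distance * math.cos(first_angle_rad))  # forward
    return first_fusion(first_xy, second_xy, w)
```

In practice the weights would come from per-sensor error models rather than a fixed 0.5, but the split mirrors the claim: congested lanes favor the already-Cartesian inputs, free-flowing lanes the raw range-angle measurement.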
  14. The method according to any one of claims 4 to 13, wherein the method further comprises:
    determining a target sensing strategy for the first lane based on the congestion level of the first lane, wherein the target sensing strategy for the first lane is used to sense whether a target exists on the first lane.
  15. The method according to claim 14, wherein the determining a target sensing strategy for the first lane based on the congestion level of the first lane comprises:
    if the congestion level of the first lane is greater than or equal to a second level, sensing whether a target exists on the first lane in a first sensing manner; or
    if the congestion level of the first lane is less than the second level, sensing whether a target exists on the first lane in a second sensing manner.
  16. The method according to claim 15, wherein
    the first sensing manner is to determine, as the targets on the first lane, targets obtained through sensor fusion based on the first sensing information and the second sensing information, targets in the first sensing information that have not undergone sensor fusion, and targets in the second sensing information that have not undergone sensor fusion; and
    the second sensing manner is to determine, as the targets on the first lane, targets obtained through sensor fusion based on the first sensing information and the second sensing information.
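The two sensing manners of claim 16 differ only in whether single-sensor detections that were not matched during fusion are kept. As a set-level sketch, with target identifiers standing in for detections (the parameter names are illustrative):

```python
def lane_targets(fused, first_only, second_only, congestion_level, second_level):
    """Claims 15-16: at or above the second level, keep fused targets plus the
    unfused leftovers from each sensing information (first sensing manner);
    below it, keep fused targets only (second sensing manner)."""
    if congestion_level >= second_level:
        return set(fused) | set(first_only) | set(second_only)
    return set(fused)
```

The intent this expresses: in dense traffic, discarding unmatched detections risks missing occluded or partially seen vehicles, so recall is favored; in light traffic, requiring cross-sensor confirmation suppresses false positives.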
  17. The method according to any one of claims 1 to 16, wherein
    the first information and the second information are obtained by sensing apparatuses of different types; or
    the first information and the second information are obtained by sensing apparatuses of the same type.
  18. The method according to claim 17, wherein
    the sensing apparatus that obtains the first information is a camera apparatus, a lidar, a millimeter-wave radar, or a sonar; and
    the sensing apparatus that obtains the second information is a millimeter-wave radar, a lidar, or a sonar.
  19. The method according to any one of claims 1 to 18, wherein the method further comprises:
    sending information about the congestion level of the environment.
  20. A target detection apparatus, wherein the target detection apparatus comprises a processing module;
    the processing module is configured to obtain first information, wherein the first information comprises first sensing information about an environment;
    the processing module is further configured to obtain second information, wherein the second information comprises second sensing information about the environment; and
    the processing module is further configured to determine a congestion level of the environment based on the first information and the second information, wherein the congestion level of the environment is used to detect at least one target in the environment.
  21. The target detection apparatus according to claim 20, wherein
    the first sensing information comprises information about at least one lane; and
    the second sensing information comprises position information of at least one target on the at least one lane, or the second sensing information comprises position information of at least one target on the at least one lane and speed information of the at least one target on the at least one lane.
  22. The target detection apparatus according to claim 21, wherein
    the processing module is further configured to divide the at least one target by lane based on the first sensing information and the second sensing information.
  23. The target detection apparatus according to claim 21 or 22, wherein
    the processing module is specifically configured to determine, based on the first sensing information and the second sensing information, information about targets on a first lane of the at least one lane, wherein the first lane is any one of the at least one lane, and the information about the targets on the first lane comprises at least one of the following: a quantity of targets on the first lane, speeds of the targets on the first lane, an average spacing between two adjacent targets on the first lane, a minimum spacing between two adjacent targets on the first lane, or a traffic density of the first lane; and
    the processing module is further specifically configured to determine a congestion level of the first lane based on the information about the targets on the first lane.
  24. The target detection apparatus according to claim 23, wherein
    the processing module is further configured to obtain at least one threshold, wherein the at least one threshold comprises at least one of the following: a speed threshold or a spacing threshold;
    the speed threshold is a median of speeds of a plurality of targets on the first lane, a minimum of the speeds of the plurality of targets on the first lane, or a maximum of the speeds of the plurality of targets on the first lane, or the speed threshold is preset; and
    the spacing threshold is obtained based on the speed threshold and a time interval, wherein the time interval is preset.
  25. The target detection apparatus according to claim 24, wherein
    the processing module is specifically configured to determine the congestion level of the first lane based on the information about the targets on the first lane and the at least one threshold.
  26. The target detection apparatus according to claim 25, wherein the congestion level of the first lane comprises 4 levels, 5 levels, or 6 levels.
  27. The target detection apparatus according to claim 26, wherein the congestion level of the first lane comprises 6 levels, and:
    if the quantity of targets on the first lane is greater than or equal to a first quantity threshold and the speeds of the targets on the first lane are less than or equal to the speed threshold, the congestion level of the first lane is level 5; or
    if the quantity of targets on the first lane is greater than or equal to the first quantity threshold, the congestion level of the first lane is level 4; or
    if the average spacing between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level of the first lane is level 3; or
    if the minimum spacing between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level of the first lane is level 2; or
    if the minimum spacing between two adjacent targets on the first lane is greater than the spacing threshold and the quantity of targets on the first lane is greater than or equal to a second quantity threshold, the congestion level of the first lane is level 1; or
    if the quantity of targets on the first lane is less than the second quantity threshold, the congestion level of the first lane is level 0.
  28. The target detection apparatus according to claim 26, wherein the congestion level of the first lane comprises 6 levels, and:
    if the traffic density of the first lane is greater than or equal to a first density threshold and the speeds of the targets on the first lane are less than or equal to the speed threshold, the congestion level of the first lane is level 5; or
    if the traffic density of the first lane is greater than or equal to the first density threshold, the congestion level of the first lane is level 4; or
    if the average spacing between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level of the first lane is level 3; or
    if the minimum spacing between two adjacent targets on the first lane is less than or equal to the spacing threshold, the congestion level of the first lane is level 2; or
    if the minimum spacing between two adjacent targets on the first lane is greater than the spacing threshold and the traffic density of the first lane is greater than or equal to a second density threshold, the congestion level of the first lane is level 1; or
    if the traffic density of the first lane is less than the second density threshold, the congestion level of the first lane is level 0.
  29. The target detection apparatus according to any one of claims 23 to 28, wherein
    the processing module is configured to determine a sensor fusion strategy for the first lane based on the congestion level of the first lane, wherein the sensor fusion strategy for the first lane is used to estimate actual positions of the targets on the first lane.
  30. The target detection apparatus according to claim 29, wherein
    if the congestion level of the first lane is greater than or equal to a first level, the sensor fusion strategy for the first lane is a first fusion strategy; or
    if the congestion level of the first lane is less than the first level, the sensor fusion strategy for the first lane is a second fusion strategy.
  31. The target detection apparatus according to claim 30, wherein
    the first sensing information further comprises position information of at least one target on the at least one lane, wherein the position information in the first sensing information comprises a first distance of a first target, a first angle of the first target, a horizontal coordinate of the first target, and a vertical coordinate of the first target; the first target is any target on the first lane and is different from the target on which the target detection apparatus is located; the first distance is a distance between the target detection apparatus and the first target; the first angle is an included angle between a forward direction of the target detection apparatus and a first connection line, and the first connection line is a line connecting the target detection apparatus and the first target; and
    the position information in the second sensing information comprises a horizontal coordinate of a second target and a vertical coordinate of the second target, wherein the second target is, among the at least one target corresponding to the second sensing information, the target that corresponds to the first target.
  32. The target detection apparatus according to claim 31, wherein
    the first fusion strategy is to fuse the horizontal coordinate of the first target, the vertical coordinate of the first target, the horizontal coordinate of the second target, and the vertical coordinate of the second target to obtain the actual positions of the targets on the first lane; and
    the second fusion strategy is to fuse the first angle, the first distance, the horizontal coordinate of the second target, and the vertical coordinate of the second target to obtain the actual positions of the targets on the first lane.
  33. The target detection apparatus according to any one of claims 23 to 32, wherein
    the processing module is further configured to determine a target sensing strategy for the first lane based on the congestion level of the first lane, wherein the target sensing strategy for the first lane is used to sense whether a target exists on the first lane.
  34. The target detection apparatus according to claim 33, wherein
    if the congestion level of the first lane is greater than or equal to a second level, whether a target exists on the first lane is sensed in a first sensing manner; or
    if the congestion level of the first lane is less than the second level, whether a target exists on the first lane is sensed in a second sensing manner.
  35. The target detection apparatus according to claim 34, wherein
    the first sensing manner is to determine, as the targets on the first lane, targets obtained through sensor fusion based on the first sensing information and the second sensing information, targets in the first sensing information that have not undergone sensor fusion, and targets in the second sensing information that have not undergone sensor fusion; and
    the second sensing manner is to determine, as the targets on the first lane, targets obtained through sensor fusion based on the first sensing information and the second sensing information.
  36. The target detection apparatus according to any one of claims 20 to 35, wherein
    the first information and the second information are obtained by sensing apparatuses of different types; or
    the first information and the second information are obtained by sensing apparatuses of the same type.
  37. The target detection apparatus according to claim 36, wherein
    the sensing apparatus that obtains the first information is a camera apparatus, a lidar, a millimeter-wave radar, or a sonar; and
    the sensing apparatus that obtains the second information is a millimeter-wave radar, a lidar, or a sonar.
  38. The target detection apparatus according to any one of claims 20 to 37, wherein the target detection apparatus further comprises a transceiver module; and
    the transceiver module is configured to send information about the congestion level of the environment.
  39. An intelligent driving vehicle, comprising the target detection apparatus according to any one of claims 20 to 38.
  40. A target detection apparatus, comprising a processor, wherein the processor is coupled to a memory, the memory is configured to store a program or instructions, and when the program or instructions are executed by the processor, the apparatus is enabled to perform the method according to any one of claims 1 to 19.
  41. A chip, comprising a processor, wherein the processor is coupled to a memory, the memory is configured to store a program or instructions, and when the program or instructions are executed by the processor, the chip is enabled to perform the method according to any one of claims 1 to 19.
  42. A computer-readable storage medium storing a computer program or instructions, wherein when the computer program or instructions are executed, a computer is enabled to perform the method according to any one of claims 1 to 19.
PCT/CN2022/089660 2022-04-27 2022-04-27 Target detection method and apparatus WO2023206166A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/089660 WO2023206166A1 (zh) Target detection method and apparatus


Publications (1)

Publication Number Publication Date
WO2023206166A1 true WO2023206166A1 (zh) 2023-11-02

Family

ID=88516688

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/089660 WO2023206166A1 (zh) Target detection method and apparatus

Country Status (1)

Country Link
WO (1) WO2023206166A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180113450A1 (en) * 2016-10-20 2018-04-26 Toyota Motor Engineering & Manufacturing North America, Inc. Autonomous-mode traffic lane selection based on traffic lane congestion levels
CN108492557A (zh) * 2018-03-23 2018-09-04 四川高路交通信息工程有限公司 基于多模型融合的高速公路拥堵等级判断方法
CN111477005A (zh) * 2020-04-20 2020-07-31 北京中交华安科技有限公司 一种基于车辆状态及行车环境的智能感知预警方法及系统
CN113791410A (zh) * 2021-08-20 2021-12-14 北京市公安局公安交通管理局 一种基于多传感器信息融合的道路环境综合认知方法



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22939019

Country of ref document: EP

Kind code of ref document: A1