CN112014845A - Vehicle obstacle positioning method, device, equipment and storage medium - Google Patents

Vehicle obstacle positioning method, device, equipment and storage medium

Info

Publication number
CN112014845A
Authority
CN
China
Prior art keywords
obstacle
vehicle
monitoring result
current
information
Prior art date
Legal status
Granted
Application number
CN202010902098.5A
Other languages
Chinese (zh)
Other versions
CN112014845B (en)
Inventor
李卫兵
祖春胜
吴琼
时利
张澄宇
张飞
曾伟
杨帆
徐瑞雪
张雷
Current Assignee
Anhui Jianghuai Automobile Group Corp
Original Assignee
Anhui Jianghuai Automobile Group Corp
Priority date
Filing date
Publication date
Application filed by Anhui Jianghuai Automobile Group Corp filed Critical Anhui Jianghuai Automobile Group Corp
Priority to CN202010902098.5A priority Critical patent/CN112014845B/en
Publication of CN112014845A publication Critical patent/CN112014845A/en
Application granted granted Critical
Publication of CN112014845B publication Critical patent/CN112014845B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/86Combinations of sonar systems with lidar systems; Combinations of sonar systems with systems not using wave reflection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88Sonar systems specially adapted for specific applications
    • G01S15/93Sonar systems specially adapted for specific applications for anti-collision purposes
    • G01S15/931Sonar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to the technical field of automatic parking, and discloses a vehicle obstacle positioning method, device, equipment and storage medium. The method comprises the following steps: acquiring an ultrasonic signal monitoring result collected by a vehicle radar and a video monitoring result collected by a vehicle camera; fusing the ultrasonic signal monitoring result and the video monitoring result to determine a longitudinal projection distance between a current obstacle and a corresponding vehicle camera, as well as height information and boundary point coordinate information of the current obstacle; and determining position information of the current obstacle according to the longitudinal projection distance, the height information and the boundary point coordinate information. In this way, the ultrasonic signal monitoring result and the video monitoring result are fused according to the principle that the vehicle radar identifies obstacles in front of the vehicle with high accuracy while the vehicle camera identifies obstacles at the side of the vehicle with high accuracy, so that the position information of an obstacle can be acquired more accurately.

Description

Vehicle obstacle positioning method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of automatic parking, in particular to a vehicle obstacle positioning method, device, equipment and storage medium.
Background
The core technologies of a fully automatic parking system are parking space detection and effective obstacle detection. The sensor currently used by fully automatic parking systems to monitor obstacles is mainly the ultrasonic radar. However, the anti-interference capability and detection performance of the ultrasonic radar are degraded by environmental conditions and by the reflection characteristics of obstacles; owing to the influence of various external sound fields and electromagnetic fields (such as air pressure, engine noise, electronically controlled gears, induction coils, and other vehicles using ultrasonic ranging), the system cannot be made absolutely interference-free. With the development of fully automatic parking technology, the surround-view camera, which is sensitive to illumination conditions, has therefore also been integrated as a sensor. Consequently, how to locate an obstacle by fusing the monitoring results of the surround-view camera and the ultrasonic radar is key to the development of fully automatic parking systems.
In the prior art, the ultrasonic radar uses the aggregation of direct echoes and cross echoes to calculate the distance between an obstacle and the radar from the echo time and the current sound velocity, and calculates the direction of the obstacle from the transmission direction. However, the specific height of an obstacle can only be inferred geometrically from the rate of change of the radar readings or from the waveform of the echoes, and because few ultrasonic radars are generally installed on the side of a vehicle, few echoes are available for the calculation, so the error is large and false detections may even occur. The vehicle camera, based on the obstacle positioning principle of the surround-view camera, relies on obstacle feature recognition and matching by a single fisheye camera, but its judgment of longitudinal distance is inaccurate, so the errors in judging obstacle boundaries and height states are large.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a vehicle obstacle positioning method, a vehicle obstacle positioning device, vehicle obstacle positioning equipment and a storage medium, and aims to solve the technical problem of obstacle false detection or large positioning error in the prior art.
To achieve the above object, the present invention provides a vehicle obstacle positioning method comprising the following steps:
acquiring an ultrasonic signal monitoring result acquired by a vehicle radar and a video monitoring result acquired by a vehicle camera;
fusing the ultrasonic signal monitoring result and the video monitoring result to determine a longitudinal projection distance between a current obstacle and a corresponding vehicle camera, and height information and boundary point coordinate information of the current obstacle;
and determining the position information of the current obstacle according to the longitudinal projection distance, the height information and the boundary point coordinate information.
Preferably, the step of obtaining the monitoring result of the ultrasonic signal collected by the vehicle radar and the video monitoring result collected by the vehicle camera includes:
detecting monitoring data collected by a vehicle radar and a vehicle camera in real time, and judging whether the vehicle radar and the vehicle camera monitor obstacles or not according to the monitoring data;
when the vehicle radar monitors an obstacle, taking the current detection data of the vehicle radar as an ultrasonic signal monitoring result;
and when the vehicle camera monitors the obstacle, taking the current detection data of the vehicle camera as a video monitoring result.
Preferably, before the step of performing fusion processing on the ultrasonic signal monitoring result and the video monitoring result to determine the longitudinal projection distance between the current obstacle and the corresponding vehicle camera, and the height information and the boundary point coordinate information of the current obstacle, the method further includes:
determining first position information of an obstacle monitored by the vehicle radar according to the monitoring result of the ultrasonic signal;
determining second position information of the obstacle monitored by the vehicle camera according to the video monitoring result;
judging whether the vehicle radar and the vehicle camera monitor the same obstacle or not according to the first position information and the second position information;
and when the vehicle radar and the vehicle camera monitor the same obstacle, performing fusion processing on the ultrasonic signal monitoring result and the video monitoring result by taking the monitored same obstacle as the current obstacle so as to determine the longitudinal projection distance between the current obstacle and the corresponding vehicle camera, and the height information and the boundary point coordinate information of the current obstacle.
Preferably, the step of performing fusion processing on the ultrasonic signal monitoring result and the video monitoring result to determine a longitudinal projection distance between a current obstacle and a corresponding vehicle camera, and height information and boundary point coordinate information of the current obstacle includes:
determining a longitudinal projection distance between a current obstacle and a corresponding vehicle camera and a key point coordinate of the current obstacle in the space coordinate system according to the ultrasonic signal monitoring result;
determining the transverse direction information and the longitudinal direction information of the current obstacle according to the video monitoring result;
determining the coordinates of the transverse boundary points of the current obstacle according to the transverse direction information and the coordinates of the key points;
determining the longitudinal boundary point coordinates of the current obstacle according to the longitudinal direction information and the key point coordinates;
combining the transverse boundary point coordinates and the longitudinal boundary point coordinates into boundary point coordinate information;
and determining the height information of the current obstacle according to the longitudinal boundary point coordinates.
Preferably, the step of determining a longitudinal projection distance between the current obstacle and a corresponding vehicle camera and a key point coordinate of the current obstacle in the spatial coordinate system according to the monitoring result of the ultrasonic signal includes:
constructing a space coordinate system by taking the center of the vehicle as a coordinate origin;
dividing the space coordinate system into a first type area and a second type area;
determining an obstacle area of the current obstacle in the space coordinate system according to the ultrasonic signal monitoring result and the target video monitoring result;
when the obstacle area is the first type area, determining the longitudinal projection distance of the current obstacle and the corresponding vehicle camera on the X axis of the space coordinate system according to the monitoring result of the ultrasonic signal;
and determining the coordinates of key points of the intersection points of the obstacles and the X axis of the space coordinate system according to the longitudinal projection distance.
Preferably, after the step of determining the obstacle region in the spatial coordinate system according to the ultrasonic signal monitoring result and the target video monitoring result, the method further includes:
when the obstacle area is the second type area, determining the longitudinal projection distance of the current obstacle and the corresponding vehicle camera on the Y axis of the space coordinate system according to the monitoring result of the ultrasonic signal;
and determining the coordinate of a key point of the intersection point of the obstacle and the Y axis of the space coordinate system according to the longitudinal projection distance.
Preferably, after the step of determining the position information of the current obstacle according to the longitudinal distance, the height information, and the boundary point coordinate information, the method further includes:
and performing three-dimensional reconstruction on the longitudinal distance, the height information and the boundary point coordinate information to obtain three-dimensional information of the obstacle and transmitting the three-dimensional information to a vehicle-mounted computer.
Further, to achieve the above object, the present invention also provides a vehicle obstacle locating device including:
the result acquisition module is used for acquiring an ultrasonic signal monitoring result acquired by a vehicle radar and a video monitoring result acquired by a vehicle camera;
the data processing module is used for carrying out fusion processing on the ultrasonic signal monitoring result and the video monitoring result so as to determine the longitudinal projection distance between the current obstacle and the corresponding vehicle camera, and the height information and the boundary point coordinate information of the current obstacle;
and the obstacle positioning module is used for determining the position information of the current obstacle according to the longitudinal projection distance, the height information and the boundary point coordinate information.
Further, to achieve the above object, the present invention also proposes a vehicle obstacle locating apparatus including: the vehicle obstacle positioning system comprises a memory, a processor and a vehicle obstacle positioning program stored on the memory and capable of running on the processor, wherein the vehicle obstacle positioning program realizes the steps of the vehicle obstacle positioning method in any one of the above aspects when being executed by the processor.
In addition, in order to achieve the above object, the present invention further provides a computer readable storage medium, wherein a vehicle obstacle positioning program is stored on the computer readable storage medium, and when the vehicle obstacle positioning program is executed, the steps of the vehicle obstacle positioning method according to any one of the above are implemented.
According to the invention, by acquiring the ultrasonic signal monitoring result acquired by the vehicle radar and the video monitoring result acquired by the vehicle camera, the ultrasonic signal monitoring result and the video monitoring result are fused according to the principle that the vehicle radar has high accuracy in identifying the front obstacle of the vehicle and the vehicle camera has high accuracy in identifying the side obstacle of the vehicle, so that the position information of the obstacle can be acquired more accurately.
Drawings
FIG. 1 is a schematic diagram of a vehicle obstacle locating device for a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram illustrating a first embodiment of a vehicle obstacle locating method according to the present invention;
FIG. 3 is a schematic diagram illustrating the division of spatial coordinate regions in the method for locating a vehicle obstacle according to the present invention;
FIG. 4 is a schematic flow chart diagram illustrating a vehicle obstacle locating method according to a second embodiment of the present invention;
fig. 5 is a block diagram showing the configuration of the first embodiment of the vehicle obstacle locating device according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a vehicle obstacle locating device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the electronic device may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is one type of storage medium, may include therein an operating system, a network communication module, a user interface module, and a vehicle obstacle locating program.
In the electronic apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 of the electronic device of the present invention may be provided in the vehicle obstacle locating device, and the electronic device calls the vehicle obstacle locating program stored in the memory 1005 through the processor 1001 and executes the vehicle obstacle locating method provided by the embodiment of the present invention.
An embodiment of the present invention provides a vehicle obstacle positioning method, and referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of a vehicle obstacle positioning method according to the present invention.
In this embodiment, the vehicle obstacle positioning method includes the steps of:
step S10: acquiring an ultrasonic signal monitoring result acquired by a vehicle radar and a video monitoring result acquired by a vehicle camera;
it should be noted that, the execution main body of the embodiment may be a vehicle obstacle positioning device disposed on a vehicle, where the vehicle is provided with a plurality of vehicle radars and vehicle cameras, and the embodiment is not limited to this.
It should be noted that, in this embodiment, the vehicle radar may be an ultrasonic radar, the ultrasonic signal monitoring result is monitoring data acquired when the vehicle radar detects an obstacle, the vehicle camera is a vehicle-mounted looking-around camera, and the video monitoring result is detection data acquired when the vehicle camera detects an obstacle.
When the ultrasonic radar operates, based on the ultrasonic principle, it transmits an ultrasonic wave that is reflected after striking an object; the reflected wave is received, the time interval between transmitting the ultrasonic wave and receiving the reflected wave is determined, and the actual distance between the vehicle body and the reflecting object is calculated from this time interval and the current sound velocity.
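As a rough illustration of this time-of-flight calculation (a minimal sketch for clarity only, not the patent's implementation; the temperature-dependent speed-of-sound approximation is a standard formula, not taken from the disclosure):

```python
def speed_of_sound(temperature_c):
    """Approximate speed of sound in air (m/s) at a given temperature in degrees Celsius."""
    return 331.3 + 0.606 * temperature_c

def echo_distance(echo_time_s, temperature_c):
    """One-way distance from the sensor to the reflecting object.

    The ultrasonic pulse travels to the object and back, so the distance is
    half of (speed of sound * round-trip echo time).
    """
    return speed_of_sound(temperature_c) * echo_time_s / 2.0

# Example: a 12 ms round-trip echo at 20 degrees C corresponds to roughly 2.06 m.
print(echo_distance(0.012, 20.0))
```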
Further, in order to accurately obtain the monitoring result of the ultrasonic signal collected by the vehicle radar and the monitoring result of the video collected by the vehicle camera, and to exclude unnecessary monitoring data, the step S10 includes:
detecting monitoring data collected by a vehicle radar and a vehicle camera in real time, and judging whether the vehicle radar and the vehicle camera monitor obstacles or not according to the monitoring data; when the vehicle radar monitors an obstacle, taking the current detection data of the vehicle radar as an ultrasonic signal monitoring result; and when the vehicle camera monitors the obstacle, taking the current detection data of the vehicle camera as a video monitoring result.
It can be understood that, when the vehicle radar operates, corresponding monitoring data are generated. The monitoring data may include information such as the current temperature, the ultrasonic echo duration and the ultrasonic transmission angle; by analyzing this information and applying obstacle tracking, clustering, updating, storage and clearing mechanisms to the identified information, it can be judged whether the vehicle radar has monitored an obstacle.
It should be noted that, in this embodiment, model training is performed on a large number of obstacle feature samples to obtain an obstacle feature matching model. The vehicle camera acquires data through its fisheye lens for imaging, producing corresponding imaging data; the fisheye imaging data of the vehicle camera are obtained and analyzed by the obstacle feature matching model, so as to judge whether the vehicle camera has monitored an obstacle.
It should be noted that the vehicle radar and the vehicle camera monitor continuously over long periods, but not all of the collected monitoring data are valid. In order to collect the required data more accurately and to remove unnecessary monitoring data, the vehicle radar and vehicle camera data need to be checked in real time: when the vehicle radar is judged to have monitored an obstacle, the monitoring data currently collected by the vehicle radar are taken as the ultrasonic signal monitoring result, and when the vehicle camera is judged to have monitored an obstacle, the monitoring data currently collected by the vehicle camera are taken as the video monitoring result.
In practical use, when an obstacle is monitored, the monitoring data at the current moment are collected as the monitoring result. For example: the current time is 3:00, and at this moment it is judged from the monitoring data of the vehicle camera that the vehicle camera has monitored an obstacle; the monitoring data acquired at 3:00 are then taken as the video monitoring result.
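A minimal sketch of this gating step (the detection checks are hypothetical placeholders standing in for the radar clustering logic and the obstacle feature matching model; they are not interfaces defined by the patent):

```python
def radar_detects_obstacle(radar_data):
    """Hypothetical check: any echo shorter than 30 ms is treated as an obstacle."""
    return radar_data.get("echo_time_s", float("inf")) < 0.030

def camera_detects_obstacle(camera_data):
    """Hypothetical check standing in for the obstacle feature matching model."""
    return camera_data.get("obstacle_score", 0.0) > 0.5

def collect_monitoring_results(radar_data, camera_data):
    """Keep only data collected at the moment an obstacle is detected; discard the rest."""
    ultrasonic_result = radar_data if radar_detects_obstacle(radar_data) else None
    video_result = camera_data if camera_detects_obstacle(camera_data) else None
    return ultrasonic_result, video_result

# Example frame: the radar sees a close echo and the camera detector is confident.
print(collect_monitoring_results({"echo_time_s": 0.012}, {"obstacle_score": 0.9}))
```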
Step S20: fusing the ultrasonic signal monitoring result and the video monitoring result to determine a longitudinal projection distance between a current obstacle and a corresponding vehicle camera, and height information and boundary point coordinate information of the current obstacle;
it should be noted that, because of the actual radar installation method and the radar ultrasonic positioning principle, the vehicle radar has high accuracy in identifying the obstacle on the front of the vehicle, but has poor accuracy in identifying the obstacle on the side of the vehicle, and because of the imaging principle and other reasons, the vehicle camera has poor accuracy in identifying the obstacle on the front of the vehicle, but has high accuracy in identifying the obstacle on the side of the vehicle. Therefore, after the ultrasonic signal monitoring result and the video monitoring result are obtained, the results are fused, and the attribute information of the obstacle can be obtained more accurately, for example: distance and direction between the obstacle and the vehicle.
Further, to better explain the processing flow of the fusion processing, step S20 includes:
determining a longitudinal projection distance between a current obstacle and a corresponding vehicle camera and a key point coordinate of the current obstacle in the space coordinate system according to the ultrasonic signal monitoring result; determining the transverse direction information and the longitudinal direction information of the current obstacle according to the video monitoring result; determining the coordinates of the transverse boundary points of the current obstacle according to the transverse direction information and the coordinates of the key points; determining the longitudinal boundary point coordinates of the current obstacle according to the longitudinal direction information and the key point coordinates; combining the transverse boundary point coordinates and the longitudinal boundary point coordinates into boundary point coordinate information; and determining the height information of the current obstacle according to the longitudinal boundary point coordinates.
It should be noted that the current obstacle is an obstacle monitored by a vehicle radar and a vehicle camera, the corresponding vehicle camera is a vehicle camera which monitors the current obstacle, the longitudinal projection distance is a projection distance between the current obstacle and the corresponding vehicle camera, and the key point coordinate is a coordinate of an auxiliary point which is determined according to the longitudinal projection distance and used for auxiliary calculation.
It should be noted that a given frame of the monitored video acquired by the camera may be regarded as a plane when it is read; the transverse direction is the horizontal direction in that plane and the longitudinal direction is the vertical direction in that plane. Accordingly, the transverse direction information of the obstacle refers to the direction information of the obstacle in the horizontal direction of the monitored video, and the longitudinal direction information of the obstacle refers to the direction information of the obstacle in the vertical direction of the monitored video.
In actual use, vehicle radars can be divided into front radars and side radars according to their positions on the vehicle. Because the number of front radars on a vehicle is large, the vehicle radar has high accuracy and precision when identifying obstacles in front of the vehicle; because the number of side radars is small, its accuracy and precision are poor when identifying obstacles at the side of the vehicle.
In practical use, this embodiment obtains standard test data by performing a large number of experiments on the vehicle side radar, analyzes the echo variation law of the vehicle side radar from the test data, and fits a corresponding linear function; by using this linear function in combination with the monitoring result of the side radar in the calculation, higher precision and accuracy can be obtained when identifying obstacles at the side of the vehicle.
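One way to realize such a correction, assuming the test data relate raw side-radar distance readings to ground-truth distances (the linear fit is what the passage describes; the calibration numbers below are purely illustrative):

```python
import numpy as np

# Illustrative bench-test data for the side radar:
# raw echo-derived distances (m) versus ground-truth distances (m).
raw_readings = np.array([0.55, 1.10, 1.62, 2.18, 2.71])
true_distances = np.array([0.50, 1.00, 1.50, 2.00, 2.50])

# Fit the linear function d_true ≈ a * d_raw + b from the test data.
a, b = np.polyfit(raw_readings, true_distances, 1)

def correct_side_radar(raw_distance):
    """Apply the fitted linear correction to a side-radar reading."""
    return a * raw_distance + b

print(correct_side_radar(1.62))  # close to 1.50 m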
In actual use, the distance between each boundary point of the obstacle and the vehicle radar can be determined from the ultrasonic signal monitoring result. Since the distance between the vehicle radar and the corresponding vehicle camera is known, the distance between the corresponding vehicle camera and each boundary point can be calculated. When the key point coordinates, the projection distance, the distance between the corresponding vehicle camera and a boundary point, and the direction information of the boundary point and the key point are determined, the coordinates of the corresponding boundary point can be calculated. For example: the boundary point is B, the point corresponding to the vehicle camera is A, and the key point is O; the angle OAB is 45 degrees, the side length AO is the longitudinal projection distance of 2 m, and the side length AB, the distance between the corresponding vehicle camera and the boundary point, is 2√2 m. The side length OB obtained by calculation is then also 2 m, that is, the distance between the key point and the boundary point is 2 m; the coordinates of the key point are known and the direction of the boundary point is known, so the corresponding coordinates of the boundary point can be calculated.
In actual use, the height information of the current obstacle can be calculated according to the coordinates of the longitudinal boundary points, for example: if the coordinates of the highest longitudinal boundary point of the obstacle are (10, 0, 4) and the coordinates of the lowest longitudinal boundary point of the obstacle are (10, 0, 1), the actual height of the obstacle can be calculated to be 3m and the height above the ground to be 1 m.
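The boundary-point distance in the example above follows from plane triangle geometry (law of cosines), and the height follows from the z-coordinates of the longitudinal boundary points. A sketch of both computations, using the worked numbers from the two examples (illustrative only, not the patent's code):

```python
import math

def boundary_point_distance(ao, ab, angle_oab_deg):
    """Distance OB from key point O to boundary point B, by the law of cosines."""
    angle = math.radians(angle_oab_deg)
    return math.sqrt(ao**2 + ab**2 - 2.0 * ao * ab * math.cos(angle))

# Worked example: AO = 2 m, AB = 2*sqrt(2) m, angle OAB = 45 degrees -> OB = 2 m.
ob = boundary_point_distance(2.0, 2.0 * math.sqrt(2), 45.0)

def obstacle_height(longitudinal_boundary_points):
    """Obstacle height and ground clearance from longitudinal boundary point z-coordinates."""
    zs = [p[2] for p in longitudinal_boundary_points]
    return max(zs) - min(zs), min(zs)

# Worked example: highest point (10, 0, 4), lowest point (10, 0, 1) -> height 3 m, clearance 1 m.
height, clearance = obstacle_height([(10, 0, 4), (10, 0, 1)])
print(ob, height, clearance)
```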
Further, in order to accurately calculate a longitudinal projection distance and a key point coordinate, the step of determining the longitudinal projection distance between the current obstacle and the corresponding vehicle camera and the key point coordinate of the current obstacle in the spatial coordinate system according to the monitoring result of the ultrasonic signal includes:
constructing a space coordinate system by taking the center of the vehicle as a coordinate origin; dividing the space coordinate system into a first type area and a second type area; determining an obstacle area of the current obstacle in the space coordinate system according to the ultrasonic signal monitoring result and the target video monitoring result; when the obstacle area is the first type area, determining the longitudinal projection distance of the current obstacle and the corresponding vehicle camera on the X axis of the space coordinate system according to the monitoring result of the ultrasonic signal; and determining the coordinates of key points of the intersection points of the obstacles and the X axis of the space coordinate system according to the longitudinal projection distance.
Referring to fig. 3, a spatial coordinate system is constructed with the center of the vehicle as the origin of coordinates, with the Z axis perpendicular to both the X axis and the Y axis; as shown in fig. 3, the spatial coordinate system is divided into a first type region and a second type region.
In practical use, when the obstacle area is the first type area, the longitudinal projection distance is the projection distance between the obstacle and the corresponding camera along the X axis. If the obstacle intersects the X axis, the distance along the X axis between the intersection point and the corresponding camera is the longitudinal projection distance; if the obstacle does not intersect the X axis, an extension line can be drawn along the direction of the obstacle until it intersects the X axis, and the distance along the X axis between that intersection point and the corresponding camera is the longitudinal projection distance. Since the corresponding camera is arranged on the vehicle in advance, its coordinates are known, so the key point coordinates can be calculated from the longitudinal projection distance and the corresponding camera coordinates.
For example: if the distance between the intersection point of the obstacle with the X axis and the corresponding camera is 1 m and the corresponding camera coordinates are (1, 0, 0), the key point coordinates can be calculated as (2, 0, 0). If the obstacle does not intersect the X axis, an extension line is drawn along the direction of the obstacle; if the distance between the intersection point of the extension line with the X axis and the corresponding camera is 2 m and the corresponding camera coordinates are (1, 0, 0), the key point coordinates can be calculated as (3, 0, 0).
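A sketch of this first-type-region case, following the numbers above (the sign convention, with the key point lying in the positive X direction from the camera, is an assumption taken from the example coordinates):

```python
def key_point_on_x_axis(camera_xyz, longitudinal_projection_distance):
    """Key point where the obstacle (or its extension line) intersects the X axis.

    Assumes the intersection lies in the positive X direction from the camera,
    as in the worked example.
    """
    cx, _, _ = camera_xyz
    return (cx + longitudinal_projection_distance, 0.0, 0.0)

print(key_point_on_x_axis((1.0, 0.0, 0.0), 1.0))  # -> (2.0, 0.0, 0.0)
print(key_point_on_x_axis((1.0, 0.0, 0.0), 2.0))  # -> (3.0, 0.0, 0.0)
```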
Further, in order to accurately calculate the longitudinal projection distance and the coordinates of the key points even when the obstacle area is the second type area, after the step of determining the obstacle area of the current obstacle in the spatial coordinate system according to the ultrasonic signal monitoring result and the target video monitoring result, the method further includes:
when the obstacle area is the second type area, determining the longitudinal projection distance of the current obstacle and the corresponding vehicle camera on the Y axis of the space coordinate system according to the monitoring result of the ultrasonic signal; and determining the coordinate of a key point of the intersection point of the obstacle and the Y axis of the space coordinate system according to the longitudinal projection distance.
In practical use, when the obstacle area is the second type area, the longitudinal projection distance is the projection distance between the obstacle and the corresponding camera along the Y axis. The corresponding camera is first projected in parallel onto the Y axis to obtain a camera projection point. If the obstacle intersects the Y axis, the distance along the Y axis between the intersection point and the camera projection point is the longitudinal projection distance; if the obstacle does not intersect the Y axis, an extension line can be drawn along the direction of the obstacle until it intersects the Y axis, and the distance along the Y axis between that intersection point and the camera projection point is the longitudinal projection distance. The coordinates of the camera projection point are first calculated from the corresponding camera coordinates, and the corresponding key point coordinates are then calculated from the longitudinal projection distance and the camera projection point coordinates.
For example: if the distance between the intersection point of the obstacle with the Y axis and the camera projection point is 1 m, the corresponding camera coordinates are (1, 1, 0) and the projection point of the corresponding camera on the Y axis is (0, 1, 0), the key point coordinates can be calculated as (0, 2, 0). If the obstacle does not intersect the Y axis, an extension line is drawn along the direction of the obstacle; if the distance between the intersection point of the extension line with the Y axis and the camera projection point is 2 m, the corresponding camera coordinates are (1, 1, 0) and the projection point of the corresponding camera on the Y axis is (0, 1, 0), the key point coordinates can be calculated as (0, 3, 0).
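The second-type-region case is analogous, except that the camera is first projected onto the Y axis (again following the sign convention of the example above):

```python
def key_point_on_y_axis(camera_xyz, longitudinal_projection_distance):
    """Key point where the obstacle (or its extension line) intersects the Y axis."""
    _, cy, _ = camera_xyz
    # Parallel projection of the camera onto the Y axis gives (0, cy, 0).
    return (0.0, cy + longitudinal_projection_distance, 0.0)

print(key_point_on_y_axis((1.0, 1.0, 0.0), 1.0))  # -> (0.0, 2.0, 0.0)
print(key_point_on_y_axis((1.0, 1.0, 0.0), 2.0))  # -> (0.0, 3.0, 0.0)
```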
Step S30: and determining the position information of the current obstacle according to the longitudinal projection distance, the height information and the boundary point coordinate information.
It can be understood that after the longitudinal projection distance, the height information of the current obstacle and the boundary point coordinate information of the current obstacle are obtained, the direction of the current obstacle relative to the vehicle, the distance between the obstacle and the vehicle and the contour information of the obstacle can be obtained through calculation, so that the specific position information of the obstacle can be determined.
This embodiment acquires the ultrasonic signal monitoring result collected by the vehicle radar and the video monitoring result collected by the vehicle camera, and fuses the ultrasonic signal monitoring result and the video monitoring result according to the principle that the vehicle radar identifies obstacles in front of the vehicle with high accuracy while the vehicle camera identifies obstacles at the side of the vehicle with high accuracy, so that the position information of an obstacle can be acquired more accurately.
Referring to fig. 4, fig. 4 is a flowchart illustrating a vehicle obstacle locating method according to a second embodiment of the present invention.
Based on the first embodiment, before the step S20, the vehicle obstacle positioning method according to the embodiment further includes:
step 101: determining first position information of an obstacle monitored by the vehicle radar according to the monitoring result of the ultrasonic signal;
it should be noted that when the obstacle is located on the front side of the vehicle, the vehicle radar can directly locate the obstacle according to the law of direct echo and cross echo of the vehicle radar, and the position information of the obstacle is obtained as the first position information.
Step 102: determining second position information of the obstacle monitored by the vehicle camera according to the video monitoring result;
it can be understood that when the vehicle camera monitors the obstacle, the distance between the obstacle and the vehicle can be estimated by comparing the reference object distance with the obstacle distance, and then the obstacle is positioned by combining the distance with the direction information of the obstacle in the video, so that the position information of the obstacle is obtained as the second position information.
Step 103: judging whether the vehicle radar and the vehicle camera monitor the same obstacle or not according to the first position information and the second position information;
it can be understood that, the first position information and the second position information which are specifically identified may be analyzed to determine whether the first position information and the second position information are matched, and because there may be an error, a corresponding matching degree may be calculated according to the first position information and the second position information, and when the matching degree is greater than a preset matching degree threshold, it may be determined that the vehicle radar and the vehicle camera monitor the same obstacle.
In practical use, for example: the preset matching degree threshold is 90%. If the matching degree between the first position information and the second position information is 80%, it is determined that the vehicle radar and the vehicle camera have not monitored the same obstacle; if the matching degree is 95%, it is determined that the vehicle radar and the vehicle camera have monitored the same obstacle.
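A sketch of such a gate, with the matching degree computed from the positional error between the two estimates (the error-to-matching-degree mapping and its scale are assumptions made for illustration; the patent only specifies the thresholding step):

```python
import math

def matching_degree(pos1, pos2, scale=5.0):
    """Map the distance between two position estimates to a 0-1 matching degree.

    'scale' is a hypothetical distance (m) at which the matching degree falls to zero.
    """
    return max(0.0, 1.0 - math.dist(pos1, pos2) / scale)

def same_obstacle(pos1, pos2, threshold=0.90):
    """True if the radar and camera are judged to have monitored the same obstacle."""
    return matching_degree(pos1, pos2) >= threshold

# A small offset between the two estimates yields a matching degree above 90%.
print(same_obstacle((3.0, 1.0), (3.2, 1.1)))
```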
Step 104: and when the vehicle radar and the vehicle camera monitor the same obstacle, taking the monitored same obstacle as the current obstacle, and executing the fusion processing of the ultrasonic signal monitoring result and the video monitoring result to determine the longitudinal projection distance between the current obstacle and the corresponding vehicle camera, and the height information and the boundary point coordinate information of the current obstacle.
It can be understood that when the vehicle radar and the vehicle camera monitor the same obstacle, the obstacle is used as the current obstacle needing to be fused, and then the ultrasonic signal monitoring result and the video monitoring result are fused, so that the position information of the obstacle can be acquired more accurately.
After the step S30, the method further includes:
step S40: and performing three-dimensional reconstruction on the longitudinal distance, the height information and the boundary point coordinate information to obtain three-dimensional information of the obstacle and transmitting the three-dimensional information to a vehicle-mounted computer.
It can be understood that the vehicle-mounted computer is generally provided with a display screen. By performing three-dimensional reconstruction on the longitudinal distance, the height information and the boundary point coordinate information, a three-dimensional model of the current obstacle can be constructed; after the three-dimensional model is built, it is converted into corresponding three-dimensional information, which is transmitted to the vehicle-mounted computer and displayed by it. The obstacle information is thus fed back to the driver more intuitively, so that the driver can recognize the obstacle more intuitively and take corresponding action.
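A minimal sketch of turning the fused quantities into displayable three-dimensional information, here as an axis-aligned bounding box (the box representation is an illustrative assumption; the patent does not prescribe a specific model format):

```python
def obstacle_bounding_box(boundary_points):
    """Axis-aligned 3D bounding box built from fused boundary point coordinates.

    Returns (min_corner, max_corner): a compact piece of three-dimensional
    information that could be transmitted to the on-board computer for display.
    """
    xs, ys, zs = zip(*boundary_points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

points = [(10, 0, 1), (10, 2, 1), (11, 0, 4), (11, 2, 4)]
print(obstacle_bounding_box(points))  # -> ((10, 0, 1), (11, 2, 4))
```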
In this embodiment, the first position information is obtained by analyzing the ultrasonic signal monitoring result and the second position information is obtained by analyzing the video monitoring result; whether the first position information matches the second position information is judged by calculating the matching degree between them, and whether the vehicle radar and the vehicle camera have monitored the same obstacle is determined from the matching degree, which determines whether fusion processing is needed, effectively avoiding unnecessary fusion processing and saving computing resources. After the positioning information is obtained, the longitudinal distance, the height information and the boundary point coordinate information are three-dimensionally reconstructed to build a three-dimensional model of the current obstacle; the three-dimensional model is converted into corresponding three-dimensional information and transmitted to the vehicle-mounted computer, which displays it, so that the driver can recognize the obstacle more intuitively and take corresponding action.
Furthermore, an embodiment of the present invention further provides a storage medium, where a vehicle obstacle location program is stored, and the vehicle obstacle location program, when executed by a processor, implements the steps of the vehicle obstacle location method as described above.
Referring to fig. 5, fig. 5 is a block diagram illustrating a first embodiment of a vehicle obstacle locating device according to the present invention.
As shown in fig. 5, a vehicle obstacle positioning device according to an embodiment of the present invention includes:
a result obtaining module 501, configured to obtain an ultrasonic signal monitoring result acquired by a vehicle radar and a video monitoring result acquired by a vehicle camera;
the data processing module 502 is configured to perform fusion processing on the ultrasonic signal monitoring result and the video monitoring result to determine a longitudinal projection distance between a current obstacle and a corresponding vehicle camera, and height information and boundary point coordinate information of the current obstacle;
and an obstacle positioning module 503, configured to determine the position information of the current obstacle according to the longitudinal projection distance, the height information, and the boundary point coordinate information.
Further, in this embodiment, the result obtaining module 501 is further configured to detect monitoring data collected by a vehicle radar and a vehicle camera in real time, and determine whether the vehicle radar and the vehicle camera monitor an obstacle according to the monitoring data; when the vehicle radar monitors an obstacle, taking the current detection data of the vehicle radar as an ultrasonic signal monitoring result; and when the vehicle camera monitors the obstacle, taking the current detection data of the vehicle camera as a video monitoring result.
Further, in this embodiment, the data processing module 502 is further configured to determine first position information of an obstacle monitored by the vehicle radar according to the monitoring result of the ultrasonic signal; determining second position information of the obstacle monitored by the vehicle camera according to the video monitoring result; judging whether the vehicle radar and the vehicle camera monitor the same obstacle or not according to the first position information and the second position information; and when the vehicle radar and the vehicle camera monitor the same obstacle, performing fusion processing on the ultrasonic signal monitoring result and the video monitoring result by taking the monitored same obstacle as the current obstacle so as to determine the longitudinal projection distance between the current obstacle and the corresponding vehicle camera, and the height information and the boundary point coordinate information of the current obstacle.
Further, in this embodiment, the data processing module 502 is further configured to determine, according to the monitoring result of the ultrasonic signal, a longitudinal projection distance between a current obstacle and a corresponding vehicle camera, and a key point coordinate of the current obstacle in the spatial coordinate system; determine the transverse direction information and the longitudinal direction information of the current obstacle according to the video monitoring result; determine the coordinates of the transverse boundary points of the current obstacle according to the transverse direction information and the coordinates of the key points; determine the longitudinal boundary point coordinates of the current obstacle according to the longitudinal direction information and the key point coordinates; combine the transverse boundary point coordinates and the longitudinal boundary point coordinates into boundary point coordinate information; and determine the height information of the current obstacle according to the longitudinal boundary point coordinates.
Further, in this embodiment, the data processing module 502 is further configured to construct a spatial coordinate system with the vehicle center as a coordinate origin; dividing the space coordinate system into a first type area and a second type area; determining an obstacle area of the current obstacle in the space coordinate system according to the ultrasonic signal monitoring result and the target video monitoring result; when the obstacle area is the first type area, determining the longitudinal projection distance of the current obstacle and the corresponding vehicle camera on the X axis of the space coordinate system according to the monitoring result of the ultrasonic signal; and determining the coordinates of key points of the intersection points of the obstacles and the X axis of the space coordinate system according to the longitudinal projection distance.
Further, in this embodiment, the data processing module 502 is further configured to determine, according to the monitoring result of the ultrasonic signal, a longitudinal projection distance between the current obstacle and the corresponding vehicle camera on the Y axis of the spatial coordinate system when the obstacle area is the second type area; and determining the coordinate of a key point of the intersection point of the obstacle and the Y axis of the space coordinate system according to the longitudinal projection distance.
Further, in this embodiment, the obstacle positioning module 503 is further configured to perform three-dimensional reconstruction on the longitudinal distance, the height information, and the boundary point coordinate information, so as to obtain three-dimensional information of the obstacle and transmit the three-dimensional information to the vehicle-mounted computer.
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and in a specific application, a person skilled in the art may set the technical solution as needed, and the present invention is not limited thereto.
This embodiment acquires the ultrasonic signal monitoring result collected by the vehicle radar and the video monitoring result collected by the vehicle camera, and fuses the ultrasonic signal monitoring result and the video monitoring result according to the principle that the vehicle radar identifies obstacles in front of the vehicle with high accuracy while the vehicle camera identifies obstacles at the side of the vehicle with high accuracy, so that the position information of an obstacle can be acquired more accurately.
It should be noted that the above-described work flows are only exemplary, and do not limit the scope of the present invention, and in practical applications, a person skilled in the art may select some or all of them to achieve the purpose of the solution of the embodiment according to actual needs, and the present invention is not limited herein.
In addition, the technical details that are not elaborated in the embodiment may be referred to a vehicle obstacle positioning method provided by any embodiment of the present invention, and are not described herein again.
Further, it is to be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk), and includes several instructions for enabling a terminal device (e.g. a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A vehicle obstacle positioning method, characterized by comprising the steps of:
acquiring an ultrasonic signal monitoring result acquired by a vehicle radar and a video monitoring result acquired by a vehicle camera;
fusing the ultrasonic signal monitoring result and the video monitoring result to determine a longitudinal projection distance between a current obstacle and a corresponding vehicle camera, and height information and boundary point coordinate information of the current obstacle;
and determining the position information of the current obstacle according to the longitudinal projection distance, the height information and the boundary point coordinate information.
2. The vehicle obstacle locating method according to claim 1, wherein the step of obtaining the monitoring result of the ultrasonic signal collected by the vehicle radar and the video monitoring result collected by the vehicle camera includes:
detecting monitoring data collected by a vehicle radar and a vehicle camera in real time, and judging whether the vehicle radar and the vehicle camera monitor obstacles or not according to the monitoring data;
when the vehicle radar monitors an obstacle, taking the current detection data of the vehicle radar as an ultrasonic signal monitoring result;
and when the vehicle camera monitors the obstacle, taking the current detection data of the vehicle camera as a video monitoring result.
3. The method according to claim 1, wherein before the step of fusing the ultrasonic signal monitoring result and the video monitoring result to determine a longitudinal projection distance between the current obstacle and the corresponding vehicle camera, and height information and boundary point coordinate information of the current obstacle, the method further comprises:
determining first position information of an obstacle monitored by the vehicle radar according to the monitoring result of the ultrasonic signal;
determining second position information of the obstacle monitored by the vehicle camera according to the video monitoring result;
judging whether the vehicle radar and the vehicle camera monitor the same obstacle or not according to the first position information and the second position information;
and when the vehicle radar and the vehicle camera monitor the same obstacle, taking the monitored same obstacle as the current obstacle, and executing the fusion processing of the ultrasonic signal monitoring result and the video monitoring result to determine the longitudinal projection distance between the current obstacle and the corresponding vehicle camera, and the height information and the boundary point coordinate information of the current obstacle.
4. The method according to claim 1, wherein the step of fusing the ultrasonic signal monitoring result and the video monitoring result to determine a longitudinal projection distance between the current obstacle and the corresponding vehicle camera, and height information and boundary point coordinate information of the current obstacle comprises:
determining a longitudinal projection distance between a current obstacle and a corresponding vehicle camera and a key point coordinate of the current obstacle in the space coordinate system according to the ultrasonic signal monitoring result;
determining the transverse direction information and the longitudinal direction information of the current obstacle according to the video monitoring result;
determining the coordinates of the transverse boundary points of the current obstacle according to the transverse direction information and the coordinates of the key points;
determining the longitudinal boundary point coordinates of the current obstacle according to the longitudinal direction information and the key point coordinates;
combining the transverse boundary point coordinates and the longitudinal boundary point coordinates into boundary point coordinate information;
and determining the height information of the current obstacle according to the longitudinal boundary point coordinates.
5. The vehicle obstacle positioning method according to claim 4, wherein the step of determining the longitudinal projection distance between the current obstacle and the corresponding vehicle camera and the key point coordinates of the current obstacle in the spatial coordinate system according to the ultrasonic signal monitoring result includes:
constructing a space coordinate system by taking the center of the vehicle as a coordinate origin;
dividing the space coordinate system into a first type area and a second type area;
determining an obstacle area of the current obstacle in the space coordinate system according to the ultrasonic signal monitoring result and the target video monitoring result;
when the obstacle area is the first type area, determining the longitudinal projection distance between the current obstacle and the corresponding vehicle camera on the X axis of the space coordinate system according to the ultrasonic signal monitoring result;
and determining the key point coordinates of the intersection point of the current obstacle and the X axis of the space coordinate system according to the longitudinal projection distance.
6. The vehicle obstacle positioning method according to claim 5, wherein, after the step of determining the obstacle area of the current obstacle in the space coordinate system according to the ultrasonic signal monitoring result and the target video monitoring result, the method further comprises:
when the obstacle area is the second type area, determining the longitudinal projection distance between the current obstacle and the corresponding vehicle camera on the Y axis of the space coordinate system according to the ultrasonic signal monitoring result;
and determining the key point coordinates of the intersection point of the current obstacle and the Y axis of the space coordinate system according to the longitudinal projection distance.
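Claims 5 and 6 split the vehicle-centred coordinate system into two area types and project the ultrasonic distance onto the X axis or the Y axis accordingly. The sketch below is a non-claim illustration; the concrete classification rule (front/rear sectors versus side sectors) is only an assumption, since the claims do not define how the two area types are delimited.

import math

def classify_area(x: float, y: float) -> str:
    # Assumed rule: obstacles mainly ahead of or behind the vehicle fall in the
    # first type area, obstacles mainly beside it fall in the second type area.
    return "first" if abs(x) >= abs(y) else "second"

def key_point_from_projection(x: float, y: float, projection_distance_m: float):
    # Place the key point on the axis associated with the obstacle's area type.
    if classify_area(x, y) == "first":
        # First type area: projection onto the X axis (claim 5).
        return (math.copysign(projection_distance_m, x), 0.0)
    # Second type area: projection onto the Y axis (claim 6).
    return (0.0, math.copysign(projection_distance_m, y))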
7. The vehicle obstacle positioning method according to any one of claims 1 to 6, further comprising, after the step of determining the position information of the current obstacle according to the longitudinal projection distance, the height information, and the boundary point coordinate information:
performing three-dimensional reconstruction based on the longitudinal projection distance, the height information and the boundary point coordinate information to obtain three-dimensional information of the current obstacle, and transmitting the three-dimensional information to a vehicle-mounted computer.
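A non-claim sketch of claim 7 is given below: the fused quantities are packaged into a minimal three-dimensional obstacle description before transmission. The dictionary layout and the send_to_vehicle_computer stub are hypothetical, as the patent does not define a message format or bus interface.

def build_obstacle_3d(projection_distance_m: float, height_m: float,
                      boundary_points: dict) -> dict:
    # Assemble a minimal three-dimensional representation of the current obstacle.
    return {
        "projection_distance_m": projection_distance_m,
        "height_m": height_m,
        "boundary_points": boundary_points,
    }

def send_to_vehicle_computer(obstacle_3d: dict) -> None:
    # Stand-in for whatever on-board bus or IPC mechanism the vehicle uses.
    print("transmitting obstacle:", obstacle_3d)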
8. A vehicle obstacle positioning device, characterized by comprising:
the result acquisition module is used for acquiring an ultrasonic signal monitoring result acquired by a vehicle radar and a video monitoring result acquired by a vehicle camera;
the data processing module is used for carrying out fusion processing on the ultrasonic signal monitoring result and the video monitoring result so as to determine the longitudinal projection distance between the current obstacle and the corresponding vehicle camera, and the height information and the boundary point coordinate information of the current obstacle;
and the obstacle positioning module is used for determining the position information of the current obstacle according to the longitudinal projection distance, the height information and the boundary point coordinate information.
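The three modules of claim 8 can be pictured as the following non-claim Python skeleton; the method names and signatures are assumptions, since the claim only names the modules and their responsibilities.

class ResultAcquisitionModule:
    def acquire(self):
        # Return (ultrasonic_result, video_result) from the vehicle radar and camera.
        raise NotImplementedError

class DataProcessingModule:
    def fuse(self, ultrasonic_result, video_result):
        # Return (projection_distance, height, boundary_points) for the current obstacle.
        raise NotImplementedError

class ObstaclePositioningModule:
    def locate(self, projection_distance, height, boundary_points):
        # Return the position information of the current obstacle.
        raise NotImplementedError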
9. A vehicle obstacle positioning apparatus, characterized by comprising: a memory, a processor, and a vehicle obstacle positioning program stored on the memory and executable on the processor, wherein the vehicle obstacle positioning program, when executed by the processor, implements the steps of the vehicle obstacle positioning method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a vehicle obstacle positioning program is stored thereon, and the vehicle obstacle positioning program, when executed, implements the steps of the vehicle obstacle positioning method according to any one of claims 1 to 7.
CN202010902098.5A 2020-08-28 2020-08-28 Vehicle obstacle positioning method, device, equipment and storage medium Active CN112014845B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010902098.5A CN112014845B (en) 2020-08-28 2020-08-28 Vehicle obstacle positioning method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010902098.5A CN112014845B (en) 2020-08-28 2020-08-28 Vehicle obstacle positioning method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112014845A true CN112014845A (en) 2020-12-01
CN112014845B CN112014845B (en) 2024-01-30

Family

ID=73516189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010902098.5A Active CN112014845B (en) 2020-08-28 2020-08-28 Vehicle obstacle positioning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112014845B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1138142A (en) * 1997-07-23 1999-02-12 Denso Corp Obstacle-recognizing device for vehicle
US6678394B1 (en) * 1999-11-30 2004-01-13 Cognex Technology And Investment Corporation Obstacle detection system
US20060111841A1 (en) * 2004-11-19 2006-05-25 Jiun-Yuan Tseng Method and apparatus for obstacle avoidance with camera vision
WO2010044127A1 (en) * 2008-10-16 2010-04-22 三菱電機株式会社 Device for detecting height of obstacle outside vehicle
JP2010249613A (en) * 2009-04-14 2010-11-04 Toyota Motor Corp Obstacle recognition device and vehicle control unit
CN108444390A (en) * 2018-02-08 2018-08-24 天津大学 A kind of pilotless automobile obstacle recognition method and device
CN109085598A (en) * 2018-08-13 2018-12-25 吉利汽车研究院(宁波)有限公司 Detection system for obstacle for vehicle
CN109532821A (en) * 2018-11-09 2019-03-29 重庆长安汽车股份有限公司 Merge parking system
CN110488319A (en) * 2019-08-22 2019-11-22 重庆长安汽车股份有限公司 A kind of collision distance calculation method and system merged based on ultrasonic wave and camera
CN110940319A (en) * 2019-10-21 2020-03-31 广东互动电子网络媒体有限公司 Height limit detection method and device based on machine vision recognition
CN110861639A (en) * 2019-11-28 2020-03-06 安徽江淮汽车集团股份有限公司 Parking information fusion method and device, electronic equipment and storage medium
CN111198376A (en) * 2020-01-13 2020-05-26 广州小鹏汽车科技有限公司 Reachable space adjusting method and device in automatic parking process, vehicle and storage medium
CN111238472A (en) * 2020-01-20 2020-06-05 北京四维智联科技有限公司 Real-time high-precision positioning method and device for full-automatic parking
CN111220090A (en) * 2020-03-25 2020-06-02 宁波五维检测科技有限公司 Line focusing differential color confocal three-dimensional surface topography measuring system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王檀彬; 陈无畏; 焦俊; 汪明磊: "Research on Intelligent Vehicle Navigation Based on Multi-Sensor Fusion", China Mechanical Engineering, no. 11, pages 1381 - 1385 *
薄博文 et al.: "Research on Path Planning Methods for Autonomous Driving Based on UWB Positioning", Automation & Instrumentation, pages 13 - 16 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112660123A (en) * 2021-01-14 2021-04-16 北汽福田汽车股份有限公司 Vehicle trafficability prompting method and vehicle
CN112776797A (en) * 2021-02-27 2021-05-11 重庆长安汽车股份有限公司 Original parking space parking establishment method and system, vehicle and storage medium
CN113281759A (en) * 2021-04-28 2021-08-20 浙江吉利控股集团有限公司 Control method and system for ultrasonic sensor and computer storage medium
CN113246990A (en) * 2021-05-24 2021-08-13 广州小鹏自动驾驶科技有限公司 Method and device for determining position of obstacle and vehicle
CN113486837A (en) * 2021-07-19 2021-10-08 安徽江淮汽车集团股份有限公司 Automatic driving control method for low-pass obstacle
CN113486836A (en) * 2021-07-19 2021-10-08 安徽江淮汽车集团股份有限公司 Automatic driving control method for low-pass obstacle
CN113486836B (en) * 2021-07-19 2023-06-06 安徽江淮汽车集团股份有限公司 Automatic driving control method for low-pass obstacle
CN113581174A (en) * 2021-08-23 2021-11-02 安徽江淮汽车集团股份有限公司 Obstacle positioning method and obstacle positioning device for vehicle
CN113869432A (en) * 2021-09-28 2021-12-31 英博超算(南京)科技有限公司 Contour point distance similarity calculation method for automatic parking of ultrasonic sensor
CN115390079A (en) * 2022-10-28 2022-11-25 杭州枕石智能科技有限公司 Obstacle contour determination method and device based on ultrasonic distance signals
CN116883478A (en) * 2023-07-28 2023-10-13 广州瀚臣电子科技有限公司 Obstacle distance confirmation system and method based on automobile camera
CN116883478B (en) * 2023-07-28 2024-01-23 广州瀚臣电子科技有限公司 Obstacle distance confirmation system and method based on automobile camera

Also Published As

Publication number Publication date
CN112014845B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
CN112014845A (en) Vehicle obstacle positioning method, device, equipment and storage medium
CN106952303B (en) Vehicle distance detection method, device and system
CN110867132B (en) Environment sensing method, device, electronic equipment and computer readable storage medium
US10194059B2 (en) Image processing apparatus and image processing method
CN109094669A (en) Method and apparatus for assessing hinge angle
CN110909705B (en) Road side parking space sensing method and system based on vehicle-mounted camera
US10762782B2 (en) On-street parking map generation
CN111814752B (en) Indoor positioning realization method, server, intelligent mobile device and storage medium
CN110751012B (en) Target detection evaluation method and device, electronic equipment and storage medium
CN110850859B (en) Robot and obstacle avoidance method and obstacle avoidance system thereof
EP3441839B1 (en) Information processing method and information processing system
US11807269B2 (en) Method for vehicle avoiding obstacle, electronic device, and computer storage medium
JP4664141B2 (en) Peripheral other vehicle notification device
CN111994081A (en) Parking space detection method, equipment, storage medium and device
CN113030990A (en) Fusion ranging method and device for vehicle, ranging equipment and medium
CN112801024B (en) Detection information processing method and device
CN113537606A (en) Accident prediction method, accident prediction device and computer-readable storage medium
CN113536867B (en) Object identification method, device and system
EP4024330A1 (en) Object recognition method and object recognition device
CN111832347A (en) Method and device for dynamically selecting region of interest
JP5773334B2 (en) Optical flow processing apparatus and display radius map generation apparatus
CN114596706A (en) Detection method and device of roadside sensing system, electronic equipment and roadside equipment
CN114638947A (en) Data labeling method and device, electronic equipment and storage medium
JP7319541B2 (en) Work machine peripheral object position detection system, work machine peripheral object position detection program
CN108416305B (en) Pose estimation method and device for continuous road segmentation object and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant