CN113838125A - Target position determining method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN113838125A (application CN202111093181.3A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- target
- determining
- target object
- candidate point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/70—Determining position or orientation of objects or cameras
- G01S13/867—Combination of radar systems with cameras
- G01S13/931—Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
- G06F18/22—Matching criteria, e.g. proximity measures
- G06F18/23213—Non-hierarchical clustering techniques using statistics or function optimisation with fixed number of clusters, e.g. K-means clustering
- G06N3/04—Neural network architecture, e.g. interconnection topology
- G06T2207/10028—Range image; depth image; 3D point clouds
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20104—Interactive definition of region of interest [ROI]
Abstract
The invention discloses a target position determining method and device, electronic equipment and a storage medium, belonging to the technical field of automatic driving. The method comprises the following steps: determining a target area of a target object according to image data of the target object in the scene where a vehicle is located; determining candidate point clouds of the target object according to first point cloud data of the target object in the scene and the target area; determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds; and determining the position of the target object according to the target point cloud. This technical scheme improves the accuracy with which the position of the target object is determined, and provides a new approach for determining the positions of objects in the surrounding environment of a vehicle in automatic driving.
Description
Technical Field
The embodiment of the invention relates to the technical field of automatic driving, in particular to a target position determining method and device, electronic equipment and a storage medium.
Background
Driving safety requires that an automatic driving system have extremely high-precision sensing capability, with sensor fields of view that comprehensively cover the surroundings, so that dangerous traffic conditions and behaviors can be effectively perceived. The camera is one of the earliest sensors used by automatic driving systems and remains the first choice of many automobile manufacturers and researchers, but it is strongly affected by changes in the external environment, such as night, rainfall and heavy fog, which greatly degrade its perception. A lidar irradiates an object with many dense laser beams and receives the laser reflected by the object to obtain distance information; compared with a millimeter-wave radar, its data is larger and denser and its accuracy is higher, but its adaptability to extreme environments is much lower and it is expensive. Improvement is therefore urgently needed.
Disclosure of Invention
The invention provides a target position determining method, a target position determining device, electronic equipment and a storage medium, which are used for improving the accuracy of position determination of target objects around a vehicle.
In a first aspect, an embodiment of the present invention provides a method for determining a target position, where the method includes:
determining a target area of a target object according to image data of the target object in a scene where a vehicle is located;
determining candidate point clouds of the target object according to the first point cloud data of the target object in the scene where the vehicle is located and the target area;
determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds;
and determining the position of the target object according to the target point cloud.
In a second aspect, an embodiment of the present invention further provides a target position determining apparatus, where the apparatus includes:
the target area determining module is used for determining a target area of a target object according to image data of the target object in a scene where the vehicle is located;
the candidate point cloud determining module is used for determining candidate point clouds of the target object according to the first point cloud data of the target object in the scene where the vehicle is located and the target area;
the target point cloud determining module is used for determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds;
and the position determining module is used for determining the position of the target object according to the target point cloud.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the target position determining method provided by any embodiment of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the target position determining method according to any embodiment of the present invention.
According to the technical scheme of the embodiment of the invention, the target area of the target object is first determined according to image data of the target object in the scene where the vehicle is located; candidate point clouds of the target object are then determined according to the first point cloud data of the target object and the target area; the target point cloud is determined from the candidate point clouds according to the matching degree between the target area and the candidate point clouds; and finally the position of the target object is determined according to the target point cloud. By combining image data and point cloud data to determine the position of a target object in the scene where the vehicle is located, this scheme improves the accuracy of position determination and provides a new approach for determining the positions of objects in the surrounding environment of the vehicle in automatic driving.
Drawings
Fig. 1 is a flowchart of a target position determining method according to an embodiment of the present invention;
fig. 2 is a flowchart of a target position determining method according to a second embodiment of the present invention;
fig. 3 is a flowchart of a target position determining method according to a third embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a target position determining apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a target position determining method according to an embodiment of the present invention. The method is applicable to determining the positions of target objects around a vehicle during automatic driving, and is especially applicable in extreme environments (severe weather conditions or dark environments). The method may be performed by a target position determining apparatus, which may be implemented in software and/or hardware and may be integrated in an electronic device carrying the target position determining function, such as an on-board controller.
As shown in fig. 1, the method may specifically include:
S110, determining a target area of the target object according to image data of the target object in the scene where the vehicle is located.
The target object refers to an object in the scene where the vehicle is located, such as another vehicle. The target area refers to the area occupied by the target object in the image data of the scene around the vehicle captured by the image acquisition equipment. The image acquisition equipment may be cameras installed on the vehicle, for example four groups of cameras that acquire image data in the front, rear, left and right directions of the vehicle.
In this embodiment, the image data of the target object in the scene where the vehicle is located is acquired by the image acquisition device, and then the target area of the target object in the image data is obtained based on the target detection model. Wherein the target detection model may be a YOLO-v3 model.
It should be noted that after the image data of the target object is acquired, the image data may be corrected based on preset correction parameters. The preset correction parameters are set by those skilled in the art according to actual conditions. Further, target object detection is carried out on the corrected image data based on a target detection model, and a target area of the target object is obtained.
And S120, determining candidate point clouds of the target object according to the first point cloud data and the target area of the target object in the scene where the vehicle is located.
The first point cloud data refers to point cloud data of the target object in the scene where the vehicle is located, acquired by millimeter-wave radar. For example, four groups of millimeter-wave radars may be arranged on the vehicle to collect point cloud data of target objects in the front, rear, left and right directions; meanwhile, the acquisition times of the millimeter-wave radars and the image acquisition equipment need to be synchronized in advance, so that each frame of point cloud data collected by a radar corresponds to one frame of image data collected by the image acquisition equipment. A candidate point cloud is point cloud data related to the target area.
In this embodiment, the first point cloud data of the target object in the scene where the vehicle is located is obtained from the millimeter-wave radar, and is projected to the image plane coordinate system based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system to obtain second point cloud data. Candidate point clouds of the target object are then determined according to the positional relationship between the second point cloud data and the target area; specifically, the point cloud data falling within the target area in the second point cloud data is taken as the candidate point cloud of the target object.
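The projection-and-filtering step described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the projection matrix, bounding box, and radar points are invented, and a real system would use calibrated radar-to-camera extrinsics.

```python
import numpy as np

def project_points(points_xyz: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Project Nx3 radar-frame points to Nx2 pixel coordinates via a 3x4 matrix P."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # Nx4 homogeneous
    uvw = homo @ P.T                                               # Nx3
    return uvw[:, :2] / uvw[:, 2:3]                                # divide by depth

def candidate_points(points_xyz, P, box):
    """box = (u_min, v_min, u_max, v_max); keep points projecting inside it."""
    uv = project_points(points_xyz, P)
    u_min, v_min, u_max, v_max = box
    inside = (uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) & \
             (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max)
    return points_xyz[inside]

# Toy intrinsics-only projection (identity extrinsics assumed)
P = np.array([[1000.0, 0.0, 640.0, 0.0],
              [0.0, 1000.0, 360.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
pts = np.array([[0.0, 0.0, 20.0],    # straight ahead -> projects near image center
                [5.0, 0.0, 20.0]])   # offset to the side -> outside the box
print(candidate_points(pts, P, (600, 320, 680, 400)))
```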
And S130, determining the target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds.
Optionally, the distance between the candidate point cloud and the target area may be used as the matching degree between the target area and the candidate point cloud, and the target point cloud is determined from the candidate point cloud according to the matching degree.
The target point cloud refers to point cloud data which is most matched with the target area.
For example, the matching degrees may be sorted, and the target point cloud determined from the candidate point clouds according to the sorting result and a set threshold. Specifically, the matching degrees are sorted from small to large, and the candidate point clouds whose matching degree is greater than the set threshold are taken as the target point cloud. The set threshold may be set by those skilled in the art according to the actual situation.
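The threshold-based selection above can be sketched in a few lines; the matching-degree values here are invented for illustration.

```python
def select_target_clouds(candidates, scores, threshold=0.5):
    """Sort candidate point clouds by matching degree (ascending) and
    keep those whose score exceeds the set threshold."""
    ranked = sorted(zip(scores, candidates))          # ascending by score
    return [c for s, c in ranked if s > threshold]

clouds = ["cloud_a", "cloud_b", "cloud_c"]
scores = [0.9, 0.3, 0.7]
print(select_target_clouds(clouds, scores))  # ['cloud_c', 'cloud_a']
```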
And S140, determining the position of the target object according to the target point cloud.
In this embodiment, according to the target point cloud, basic information of a target object corresponding to the target point cloud collected in the millimeter wave radar is obtained, where the basic information includes information such as a position and a speed of the target object. Further, the position of the target object is determined by using a Kalman filtering algorithm according to the basic information of the target object.
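The Kalman filtering step can be sketched as follows. The patent only names the algorithm, so the constant-velocity state model, noise parameters, and measurement sequence below are illustrative assumptions.

```python
import numpy as np

def kalman_1d(measurements, dt=0.1, q=1e-2, r=0.5):
    """Smooth a sequence of 1D position measurements with a
    constant-velocity Kalman filter (state = [position, velocity])."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x
        P = F @ P @ F.T + Q                                 # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)               # update
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

smoothed = kalman_1d([10.0, 10.2, 10.1, 10.4, 10.3])
print(smoothed)
```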
According to the technical scheme of this embodiment, the target area of the target object is first determined according to image data of the target object in the scene where the vehicle is located; candidate point clouds of the target object are then determined according to the first point cloud data of the target object and the target area; the target point cloud is determined from the candidate point clouds according to the matching degree between the target area and the candidate point clouds; and finally the position of the target object is determined according to the target point cloud. By combining image data and point cloud data to determine the position of a target object in the scene where the vehicle is located, this scheme improves the accuracy of position determination and provides a new approach for determining the positions of objects in the surrounding environment of the vehicle in automatic driving.
Example two
Fig. 2 is a flowchart of a target position determining method provided in the second embodiment of the present invention. On the basis of the first embodiment, this embodiment further optimizes "determining candidate point clouds of the target object according to the first point cloud data of the target object and the target area" and provides an alternative implementation.
As shown in fig. 2, the method may specifically include:
s210, determining a target area of the target object according to the image data of the target object in the scene where the vehicle is located.
And S220, performing three-dimensional conversion on the target area to obtain a point cloud cone.
In this embodiment, the target area may be three-dimensionally converted based on the three-dimensional conversion model, so as to obtain a point cloud cone corresponding to the target area.
And S230, projecting the first point cloud data to the image plane coordinate system based on the conversion relation between the radar plane coordinate system and the image plane coordinate system to obtain second point cloud data.
In this embodiment, the first point cloud data is clustered to obtain clustered point cloud data. For example, the first point cloud data may be clustered based on the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm to obtain the clustered point cloud data.
Further, based on the conversion relation between the radar plane coordinate system and the image plane coordinate system, the clustered point cloud data is projected to the image plane coordinate system, and second point cloud data is obtained.
It can be understood that by clustering the first point cloud data, irrelevant point cloud data can be filtered out, so that the subsequent target position can be determined more accurately.
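The clustering step can be illustrated with a minimal, pure-Python DBSCAN sketch. The eps and min_pts values are invented for illustration; a production system would typically use a library implementation.

```python
import math

def dbscan(points, eps=1.0, min_pts=2):
    """Naive DBSCAN over 2D points. Returns a cluster label per point
    (-1 means noise)."""
    labels = [None] * len(points)   # None = unvisited
    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # noise (may be claimed by a cluster later)
            continue
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster # border point: attach, do not expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:  # core point: expand the cluster
                seeds.extend(jn)
        cluster += 1
    return labels

pts = [(0, 0), (0.5, 0), (0.4, 0.3), (10, 10), (10.2, 10.1), (50, 50)]
print(dbscan(pts))  # [0, 0, 0, 1, 1, -1]
```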
S240, determining candidate point clouds of the target object according to the position relation between the second point cloud data and the point cloud cone.
In this embodiment, the point cloud data falling into the point cloud cone in the second point cloud data is used as the candidate point cloud of the target object; and eliminating the second point cloud data falling outside the point cloud cone.
And S250, determining the target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds.
And S260, determining the position of the target object according to the target point cloud.
According to the technical scheme of this embodiment, the target area of the target object is determined according to image data of the target object in the scene where the vehicle is located, and the target area is three-dimensionally converted to obtain a point cloud cone. The first point cloud data is projected to the image plane coordinate system based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system to obtain second point cloud data; candidate point clouds of the target object are then determined according to the positional relationship between the second point cloud data and the point cloud cone; the target point cloud is determined from the candidate point clouds according to the matching degree between the target area and the candidate point clouds; and finally the position of the target object is determined according to the target point cloud. By combining image data and point cloud data, this scheme improves the accuracy of determining the position of the target object and provides a new approach for determining the positions of objects in the surrounding environment of the vehicle in automatic driving.
Example three
Fig. 3 is a flowchart of a target position determining method provided by the third embodiment of the present invention. On the basis of the above embodiments, this embodiment further optimizes "determining the target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds" and provides an alternative implementation.
As shown in fig. 3, the method may specifically include:
s310, determining a target area of the target object according to the image data of the target object in the scene where the vehicle is located.
S320, determining candidate point clouds of the target object according to the first point cloud data and the target area of the target object in the scene where the vehicle is located.
S330, aiming at each candidate point cloud, determining the matching degree of the candidate point cloud and the target area according to the point cloud coordinates of the candidate point cloud, the basic information of the target area and the focal length of the acquisition equipment.
The basic information comprises size information of the target area and center point coordinates of the target area; the size information includes the width and height of the target area. The focal length of the acquisition device comprises a horizontal focal length and a vertical focal length.
Optionally, the first distance between the target object and the vehicle is determined according to the point cloud coordinates of the candidate point cloud. For example, the first distance may be taken as the square root of the sum of the square of the abscissa and the square of the ordinate of the point cloud coordinates, i.e. r = sqrt(x^2 + y^2).
After determining the first distance, an estimated height and an estimated width of the target object may be determined based on the first distance, the width or height in the dimensional information, and the focal length of the acquisition device.
For example, the estimated height of the target object may be determined according to the first distance, the height in the size information, and the vertical focal length of the acquisition device, by the following formula:

h = |y2 - y1| * r / fy

where h represents the estimated height of the target object, fy represents the vertical focal length of the acquisition device, |y2 - y1| represents the height in the size information, and r represents the first distance.
For example, the estimated width of the target object may be determined according to the first distance, the width in the size information, and the horizontal focal length of the acquisition device, by the following formula:

w = |x2 - x1| * r / fx

where w represents the estimated width of the target object, fx represents the horizontal focal length of the acquisition device, |x2 - x1| represents the width in the size information, and r represents the first distance.
After determining the estimated height and width of the target object, a second distance between the target object and the vehicle may be determined based on the width or height of the target object, the focal length of the acquisition device, and the width or height in the dimensional information.
For example, if the height of the target object is known, the second distance between the target object and the vehicle is determined according to the height of the target object, the vertical focal length of the acquisition device, and the height in the size information, by the following formula:

r' = h' * fy / |y2 - y1|

where r' represents the second distance between the target object and the vehicle, h' represents the known height of the target object, |y2 - y1| represents the height in the size information, and fy represents the vertical focal length of the acquisition device.
For example, if the width of the target object is known, the second distance between the target object and the vehicle is determined according to the width of the target object, the horizontal focal length of the acquisition device, and the width in the size information, by the following formula:

r' = w' * fx / |x2 - x1|

where r' represents the second distance between the target object and the vehicle, w' represents the known width of the target object, |x2 - x1| represents the width in the size information, and fx represents the horizontal focal length of the acquisition device.
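The pinhole-model relations above can be checked with concrete numbers. The focal length, box height, and distances below are invented for illustration, and the helper functions are hypothetical, not the patent's code.

```python
def estimated_height(box_h_px, r, fy):
    """h = |y2 - y1| * r / fy  (real-world height from box height and range)."""
    return box_h_px * r / fy

def second_distance_from_height(h_known, box_h_px, fy):
    """r' = h' * fy / |y2 - y1|  (range from a known real-world height)."""
    return h_known * fy / box_h_px

fy = 1000.0      # vertical focal length in pixels (assumed)
box_h = 100.0    # |y2 - y1| in pixels (assumed)
r = 20.0         # first distance in meters (assumed)

h = estimated_height(box_h, r, fy)
r2 = second_distance_from_height(h, box_h, fy)
print(h, r2)  # 2.0 20.0 -- the two relations are inverses of each other
```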
Further, the first distance, the estimated height, the estimated width, the coordinates of the center point of the candidate point cloud, the second distance and the basic information are input into a neural network model to obtain the matching degree between the candidate point cloud and the target area. The neural network model may be a radial basis function (RBF) neural network.
Specifically, the first distance, the estimated height, the estimated width, the coordinates of the center point of the candidate point cloud, the second distance and the basic information are input into a neural network model, and the matching degree of the candidate point cloud and the target area is obtained through processing of the neural network model.
It can be understood that determining the matching degree between the candidate point cloud and the target area through the neural network model can enhance the accuracy of the matching degree.
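As a rough stand-in for the RBF neural network named above (whose trained weights the patent does not disclose), the following sketch scores a candidate with a single Gaussian radial basis function over an assumed feature vector; the features, the reference vector, and sigma are all illustrative assumptions.

```python
import math

def rbf_score(candidate_feats, target_feats, sigma=1.0):
    """Gaussian RBF similarity between two feature vectors;
    returns 1.0 for identical vectors, approaching 0 as they diverge."""
    d2 = sum((a - b) ** 2 for a, b in zip(candidate_feats, target_feats))
    return math.exp(-d2 / (2 * sigma ** 2))

# Hypothetical features: estimated height, estimated width, first distance
target = [2.0, 1.8, 20.0]
good = [2.1, 1.7, 20.5]   # candidate close to the target area's estimates
bad = [0.5, 0.4, 55.0]    # candidate far from them
print(rbf_score(good, target), rbf_score(bad, target))
```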
And S340, determining the target point cloud from the candidate point clouds according to the matching degree between each candidate point cloud and the target area.
And S350, determining the position of the target object according to the target point cloud.
According to the technical scheme of this embodiment, the target area of the target object is determined according to image data of the target object in the scene where the vehicle is located; candidate point clouds of the target object are then determined according to the first point cloud data of the target object and the target area; for each candidate point cloud, the matching degree between the candidate point cloud and the target area is determined according to the point cloud coordinates of the candidate point cloud, the basic information of the target area, and the focal length of the acquisition equipment; the target point cloud is then determined from the candidate point clouds according to the matching degree between each candidate point cloud and the target area; and finally the position of the target object is determined according to the target point cloud. By combining image data and point cloud data, this scheme improves the accuracy of determining the position of the target object and provides a new approach for determining the positions of objects in the surrounding environment of the vehicle in automatic driving.
Example four
Fig. 4 is a schematic structural diagram of a target position determining apparatus according to a fourth embodiment of the present invention, which is applicable to determining the position of a target object around a vehicle during automatic driving, and is particularly applicable to determining the position of a target object around a vehicle during automatic driving in extreme environments (severe weather conditions or dark environments). The apparatus may be implemented in software and/or hardware and may be integrated into an electronic device, such as an onboard controller, that carries the target position determination function.
As shown in fig. 4, the apparatus may specifically include a target area determination module 410, a candidate point cloud determination module 420, a target point cloud determination module 430, and a location determination module 440, wherein,
a target area determination module 410, configured to determine a target area of a target object according to image data of the target object in a scene where the vehicle is located;
the candidate point cloud determining module 420 is configured to determine a candidate point cloud of a target object according to first point cloud data and a target area of the target object in a scene where the vehicle is located;
a target point cloud determining module 430, configured to determine a target point cloud from the candidate point clouds according to a matching degree between the target area and the candidate point clouds;
and a position determining module 440, configured to determine a position of the target object according to the target point cloud.
According to the technical scheme of this embodiment, the target area of the target object is first determined according to image data of the target object in the scene where the vehicle is located; candidate point clouds of the target object are then determined according to the first point cloud data of the target object and the target area; the target point cloud is determined from the candidate point clouds according to the matching degree between the target area and the candidate point clouds; and finally the position of the target object is determined according to the target point cloud. By combining image data and point cloud data, this scheme improves the accuracy of determining the position of the target object and provides a new approach for determining the positions of objects in the surrounding environment of the vehicle in automatic driving.
Further, the candidate point cloud determining module 420 includes a point cloud cone obtaining unit, a second point cloud data obtaining unit, and a candidate point cloud determining unit, wherein,
the point cloud cone obtaining unit is used for performing three-dimensional conversion on the target area to obtain a point cloud cone;
a second point cloud data obtaining unit, configured to project the first point cloud data to the image plane coordinate system based on a conversion relationship between the radar plane coordinate system and the image plane coordinate system, so as to obtain second point cloud data;
and the candidate point cloud determining unit is used for determining the candidate point cloud of the target object according to the position relation between the second point cloud data and the point cloud cone.
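A minimal sketch of this unit's logic, under assumed conventions: a 3x4 camera-from-radar transform `T` and pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) stand in for the conversion relation between the radar plane coordinate system and the image plane coordinate system, and membership in the point cloud cone (frustum) is tested in pixel space by checking whether each projected point falls inside the 2D target area:

```python
# Illustrative sketch (not the patent's exact procedure): project first point
# cloud data from the radar frame into the image plane, then keep points
# whose projection lands inside the 2D target area (the frustum test).

def project_point(p, T, fx, fy, cx, cy):
    """p: (x, y, z) in the radar frame; T: assumed 3x4 camera-from-radar
    matrix. Returns (u, v, depth), or None if the point is behind the camera."""
    ph = (p[0], p[1], p[2], 1.0)
    xc = sum(T[0][i] * ph[i] for i in range(4))
    yc = sum(T[1][i] * ph[i] for i in range(4))
    zc = sum(T[2][i] * ph[i] for i in range(4))
    if zc <= 0:
        return None                      # behind the image plane
    return fx * xc / zc + cx, fy * yc / zc + cy, zc

def frustum_candidates(points, T, fx, fy, cx, cy, box):
    """box = (u_min, v_min, u_max, v_max); keep points projecting inside it."""
    u_min, v_min, u_max, v_max = box
    kept = []
    for p in points:
        proj = project_point(p, T, fx, fy, cx, cy)
        if proj and u_min <= proj[0] <= u_max and v_min <= proj[1] <= v_max:
            kept.append(p)
    return kept
```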
Further, the target point cloud determining module 430 includes a matching degree determining unit and a target point cloud determining unit, wherein,
the matching degree determining unit is used for determining the matching degree of each candidate point cloud and the target area according to the point cloud coordinates of the candidate point cloud, the basic information of the target area and the focal length of the acquisition equipment; the basic information comprises size information of the target area and center point coordinates of the target area;
and the target point cloud determining unit is used for determining the target point cloud from the candidate point clouds according to the matching degree between each candidate point cloud and the target area.
Further, the matching degree determination unit includes a first distance determination subunit, an estimation information determination subunit, a second distance determination subunit, and a matching degree determination subunit, wherein,
the first distance determining subunit is used for determining a first distance between the target object and the vehicle according to the point cloud coordinates of the candidate point cloud;
the estimation information determining subunit is used for determining the estimated height and the estimated width of the target object according to the first distance, the width or the height in the size information and the focal length of the acquisition equipment;
the second distance determining subunit is used for determining a second distance between the target object and the vehicle according to the width or height of the target object, the focal length of the acquisition equipment and the width or height in the size information;
and the matching degree determining subunit is used for inputting the first distance, the estimated height, the estimated width, the coordinates of the center point of the candidate point cloud, the second distance and the basic information into the neural network model to obtain the matching degree of the candidate point cloud and the target area.
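The feature computation described by these subunits can be sketched as follows, assuming a pinhole model (pixel size = focal length x physical size / distance). The physical-height prior `real_h` and the exact feature ordering are assumptions, and the neural network scorer itself is omitted; the patent allows either width or height for the second distance, and height is used here.

```python
import math

# Sketch of the match features listed above, using the pinhole relation
# h_px = f * H / d. The height prior and feature order are assumptions.

def match_features(point_cloud, box_center, box_w_px, box_h_px,
                   focal_px, real_h):
    # First distance: range from the vehicle to the candidate's centroid.
    n = len(point_cloud)
    cx = sum(p[0] for p in point_cloud) / n
    cy = sum(p[1] for p in point_cloud) / n
    cz = sum(p[2] for p in point_cloud) / n
    d1 = math.sqrt(cx * cx + cy * cy + cz * cz)
    # Estimated size: the box's pixel size back-projected to distance d1.
    est_w = d1 * box_w_px / focal_px
    est_h = d1 * box_h_px / focal_px
    # Second distance: the range implied by the object's known physical height.
    d2 = focal_px * real_h / box_h_px
    # Feature vector handed to the matching neural network (order assumed).
    return [d1, est_w, est_h, cx, cy, cz, d2, *box_center, box_w_px, box_h_px]
```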
Further, the candidate point cloud determination module 420 further comprises a clustered point cloud determination unit, wherein the clustered point cloud determination unit is configured to:
clustering the first point cloud data to obtain clustered point cloud data;
correspondingly, the second point cloud data obtaining unit is further configured to:
and projecting the clustered point cloud data to the image plane coordinate system based on the conversion relation between the radar plane coordinate system and the image plane coordinate system to obtain second point cloud data.
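The patent does not name a clustering algorithm, so as one possible stand-in, here is a minimal Euclidean radius clustering (flood fill over a distance threshold `eps`, whose value is an assumption) applied to the first point cloud data:

```python
import math

# Minimal Euclidean clustering: points closer than `eps` (directly or through
# a chain of neighbors) end up in the same cluster. A stand-in only; the
# patent leaves the clustering method unspecified.

def euclidean_clusters(points, eps=0.5):
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= eps]
            for j in near:
                unvisited.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append([points[i] for i in cluster])
    return clusters
```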
Further, the target point cloud determining module 430 is specifically configured to:
and sorting the matching degrees, and determining the target point cloud from the candidate point clouds according to a sorting result and a set threshold value.
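A minimal sketch of this sort-and-threshold selection; the default threshold value is an assumed placeholder for the "set threshold":

```python
# Sort candidates by matching degree and keep the best one only if its score
# clears the set threshold (0.5 here is an assumed placeholder value).

def select_target_cloud(candidates, scores, threshold=0.5):
    """candidates: list of point clouds; scores: matching degree for each."""
    if not candidates:
        return None
    ranked = sorted(zip(scores, range(len(candidates))), reverse=True)
    best_score, best_idx = ranked[0]
    return candidates[best_idx] if best_score >= threshold else None
```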
Further, the apparatus further includes a correction processing module, which is specifically configured to:
and performing correction processing on the image data based on preset correction parameters.
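The "preset correction parameters" are not specified; as an illustration, a Brown-Conrady radial model with two coefficients (`k1`, `k2` — assumed here) is a common choice for the forward distortion map used when rectifying camera images:

```python
# Illustrative radial-distortion model (Brown-Conrady, k1/k2 terms only).
# The coefficients are assumptions standing in for the patent's "preset
# correction parameters"; this forward model is typically used to build the
# remap table that rectifies the raw image.

def distort_normalized(x, y, k1, k2):
    """Map an undistorted normalized image point to its distorted position."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```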
The target position determining device can execute the target position determining method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
EXAMPLE five
Fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention; it shows a block diagram of an exemplary device suitable for implementing this embodiment. The device shown in fig. 5 is only an example and should not limit the functionality or scope of use of the embodiments of the present invention.
As shown in FIG. 5, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
The system memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of embodiments described herein.
The processing unit 16 executes various functional applications and data processing, such as implementing a target position determination method provided by an embodiment of the present invention, by executing programs stored in the system memory 28.
EXAMPLE six
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program (also referred to as computer-executable instructions) is stored; when executed by a processor, the computer program performs the target position determining method provided in the embodiments of the present invention, the method comprising:
determining a target area of a target object according to image data of the target object in a scene where a vehicle is located;
determining candidate point clouds of a target object according to first point cloud data and a target area of the target object in a scene where a vehicle is located;
determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds;
and determining the position of the target object according to the target point cloud.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the embodiments of the present invention have been described in more detail through the above embodiments, the embodiments of the present invention are not limited to the above embodiments, and many other equivalent embodiments may be included without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (10)
1. A method for determining a location of a target, comprising:
determining a target area of a target object according to image data of the target object in a scene where a vehicle is located;
determining candidate point clouds of the target object according to the first point cloud data of the target object in the scene where the vehicle is located and the target area;
determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds;
and determining the position of the target object according to the target point cloud.
2. The method of claim 1, wherein determining the candidate point cloud of the target object from the first point cloud data of the target object and the target area comprises:
performing three-dimensional conversion on the target area to obtain a point cloud cone;
based on a conversion relation between a radar plane coordinate system and an image plane coordinate system, projecting the first point cloud data to the image plane coordinate system to obtain second point cloud data;
and determining candidate point clouds of the target object according to the position relation between the second point cloud data and the point cloud cone.
3. The method of claim 1, wherein determining a target point cloud from the candidate point clouds according to a degree of match between the target region and the candidate point clouds comprises:
aiming at each candidate point cloud, determining the matching degree of the candidate point cloud and the target area according to the point cloud coordinates of the candidate point cloud, the basic information of the target area and the focal length of the acquisition equipment; the basic information comprises size information of the target area and center point coordinates of the target area;
and determining target point clouds from the candidate point clouds according to the matching degree between each candidate point cloud and the target area.
4. The method of claim 3, wherein determining the matching degree of the candidate point cloud with the target area according to the point cloud coordinates of the candidate point cloud, the basic information of the target area and the focal length of the acquisition device comprises:
determining a first distance between the target object and the vehicle according to the point cloud coordinates of the candidate point cloud;
determining an estimated height and an estimated width of the target object according to the first distance, the width or the height in the size information and the focal length of the acquisition equipment;
determining a second distance between the target object and the vehicle according to the width or height of the target object, the focal length of the acquisition device, and the width or height in the size information;
and inputting the first distance, the estimated height, the estimated width, the coordinates of the central point of the candidate point cloud, the second distance and the basic information into a neural network model to obtain the matching degree of the candidate point cloud and the target area.
5. The method of claim 2, wherein before projecting the first point cloud data to the image plane coordinate system based on a transformation relationship between the radar plane coordinate system and the image plane coordinate system to obtain the second point cloud data, further comprising:
clustering the first point cloud data to obtain clustered point cloud data;
correspondingly, based on the conversion relationship between the radar plane coordinate system and the image plane coordinate system, projecting the first point cloud data to the image plane coordinate system to obtain second point cloud data, which comprises:
and projecting the clustered point cloud data to an image plane coordinate system based on a conversion relation between a radar plane coordinate system and the image plane coordinate system to obtain second point cloud data.
6. The method of claim 1, wherein determining a target point cloud from the candidate point clouds according to a degree of match between the target region and the candidate point clouds comprises:
and sorting the matching degrees, and determining the target point cloud from the candidate point clouds according to a sorting result and a set threshold value.
7. The method of claim 1, further comprising:
and performing correction processing on the image data based on preset correction parameters.
8. A target position determining apparatus, comprising:
the target area determining module is used for determining a target area of a target object according to image data of the target object in a scene where the vehicle is located;
the candidate point cloud determining module is used for determining candidate point clouds of the target object according to the first point cloud data of the target object in the scene where the vehicle is located and the target area;
the target point cloud determining module is used for determining a target point cloud from the candidate point clouds according to the matching degree between the target area and the candidate point clouds;
and the position determining module is used for determining the position of the target object according to the target point cloud.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the target position determination method as recited in any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the target position determination method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111093181.3A CN113838125A (en) | 2021-09-17 | 2021-09-17 | Target position determining method and device, electronic equipment and storage medium |
PCT/CN2022/117770 WO2023040737A1 (en) | 2021-09-17 | 2022-09-08 | Target location determining method and apparatus, electronic device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111093181.3A CN113838125A (en) | 2021-09-17 | 2021-09-17 | Target position determining method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113838125A true CN113838125A (en) | 2021-12-24 |
Family
ID=78959810
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111093181.3A Pending CN113838125A (en) | 2021-09-17 | 2021-09-17 | Target position determining method and device, electronic equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113838125A (en) |
WO (1) | WO2023040737A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115641567A (en) * | 2022-12-23 | 2023-01-24 | 小米汽车科技有限公司 | Target object detection method and device for vehicle, vehicle and medium |
WO2023040737A1 (en) * | 2021-09-17 | 2023-03-23 | 中国第一汽车股份有限公司 | Target location determining method and apparatus, electronic device, and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109840448A (en) * | 2017-11-24 | 2019-06-04 | 百度在线网络技术(北京)有限公司 | Information output method and device for automatic driving vehicle |
CN110456363B (en) * | 2019-06-17 | 2021-05-18 | 北京理工大学 | Target detection and positioning method for three-dimensional laser radar point cloud and infrared image fusion |
CN111612841B (en) * | 2020-06-22 | 2023-07-14 | 上海木木聚枞机器人科技有限公司 | Target positioning method and device, mobile robot and readable storage medium |
CN111815707A (en) * | 2020-07-03 | 2020-10-23 | 北京爱笔科技有限公司 | Point cloud determining method, point cloud screening device and computer equipment |
CN112700552A (en) * | 2020-12-31 | 2021-04-23 | 华为技术有限公司 | Three-dimensional object detection method, three-dimensional object detection device, electronic apparatus, and medium |
CN113838125A (en) * | 2021-09-17 | 2021-12-24 | 中国第一汽车股份有限公司 | Target position determining method and device, electronic equipment and storage medium |
- 2021-09-17: CN application CN202111093181.3A filed; published as CN113838125A (status: pending)
- 2022-09-08: PCT application PCT/CN2022/117770 filed; published as WO2023040737A1
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023040737A1 (en) * | 2021-09-17 | 2023-03-23 | 中国第一汽车股份有限公司 | Target location determining method and apparatus, electronic device, and storage medium |
CN115641567A (en) * | 2022-12-23 | 2023-01-24 | 小米汽车科技有限公司 | Target object detection method and device for vehicle, vehicle and medium |
CN115641567B (en) * | 2022-12-23 | 2023-04-11 | 小米汽车科技有限公司 | Target object detection method and device for vehicle, vehicle and medium |
Also Published As
Publication number | Publication date |
---|---|
WO2023040737A1 (en) | 2023-03-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110163930B (en) | Lane line generation method, device, equipment, system and readable storage medium | |
CN109059902B (en) | Relative pose determination method, device, equipment and medium | |
CN109343061B (en) | Sensor calibration method and device, computer equipment, medium and vehicle | |
CN113486797B (en) | Unmanned vehicle position detection method, unmanned vehicle position detection device, unmanned vehicle position detection equipment, storage medium and vehicle | |
CN109188457B (en) | Object detection frame generation method, device, equipment, storage medium and vehicle | |
CN106951847B (en) | Obstacle detection method, apparatus, device and storage medium | |
CN109271944B (en) | Obstacle detection method, obstacle detection device, electronic apparatus, vehicle, and storage medium | |
US10964054B2 (en) | Method and device for positioning | |
CN106952303B (en) | Vehicle distance detection method, device and system | |
CN108629231B (en) | Obstacle detection method, apparatus, device and storage medium | |
CN107016705B (en) | Ground plane estimation in computer vision systems | |
JP2021523443A (en) | Association of lidar data and image data | |
CN112967283B (en) | Target identification method, system, equipment and storage medium based on binocular camera | |
CN110956137A (en) | Point cloud data target detection method, system and medium | |
KR101995223B1 (en) | System, module and method for detecting pedestrian, computer program | |
CN113838125A (en) | Target position determining method and device, electronic equipment and storage medium | |
CN112525147B (en) | Distance measurement method for automatic driving equipment and related device | |
CN112613424A (en) | Rail obstacle detection method, rail obstacle detection device, electronic apparatus, and storage medium | |
CN111913177A (en) | Method and device for detecting target object and storage medium | |
CN113743385A (en) | Unmanned ship water surface target detection method and device and unmanned ship | |
CN112650300A (en) | Unmanned aerial vehicle obstacle avoidance method and device | |
CN111382735A (en) | Night vehicle detection method, device, equipment and storage medium | |
CN110426714B (en) | Obstacle identification method | |
CN114219770A (en) | Ground detection method, ground detection device, electronic equipment and storage medium | |
CN115223135B (en) | Parking space tracking method and device, vehicle and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||