CN112017241A - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN112017241A
Authority
CN
China
Prior art keywords
camera
target object
radar
view
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010843958.2A
Other languages
Chinese (zh)
Inventor
董旭
刘兰个川
毛云翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Motors Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Motors Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Motors Technology Co Ltd filed Critical Guangzhou Xiaopeng Motors Technology Co Ltd
Priority to CN202010843958.2A priority Critical patent/CN112017241A/en
Publication of CN112017241A publication Critical patent/CN112017241A/en
Priority to PCT/CN2021/110287 priority patent/WO2022037403A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S 7/41 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The embodiment of the invention provides a data processing method and a data processing device, wherein the method comprises the following steps: acquiring camera data and radar data; performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object; displaying the camera target object and the radar target object in a camera plan view and in a perspective view; and labeling the matching relation between the camera target object and the radar target object according to the camera plan view and the perspective view. According to the embodiment of the invention, the matching relation between the radar target and the camera target is accurately labeled; by adopting a labeling method in which the camera plan view and the bird's eye view are cross-referenced and cross-checked, accurate matching-relation labeling of radar targets and camera targets can be carried out in a big-data setting.

Description

Data processing method and device
Technical Field
The present invention relates to the field of data processing, and in particular, to a method and an apparatus for data processing.
Background
At present, in automatic driving systems and advanced driver assistance systems, perception and target detection of the surrounding environment can be realized by configuring a camera and a millimeter-wave radar, and the respective advantages of the two sensors can be fully exploited by matching the camera-based target detection results with the radar-based target detection results.
However, owing to the large noise of radar target detection and the three-dimensional uncertainty of camera target detection, how to realize accurate matching-relation labeling is an urgent problem to be solved.
Disclosure of Invention
In view of the above, it is proposed to provide a method and apparatus for data processing that overcome, or at least partially solve, the above-mentioned problems, comprising:
a method of data processing, the method comprising:
acquiring camera data and radar data;
performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object;
displaying the camera target object and the radar target object in a camera plan view, and displaying the camera target object and the radar target object in a perspective view;
and labeling the matching relation between the camera target object and the radar target object according to the camera plan view and the perspective view.
Optionally, the displaying the camera target object in the perspective view includes:
determining three-dimensional information of the camera target object;
and displaying a view volume corresponding to the camera target object in the perspective view, and setting the boundary of the view volume according to the three-dimensional information.
Optionally, after the labeling the matching relationship between the camera target object and the radar target object according to the camera plan view and the perspective view, the method further includes:
and checking the matching relation between the camera target object and the radar target object for reasonableness.
Optionally, the performing a plausibility check on the matching relationship between the camera target object and the radar target object includes:
for a camera target object and a radar target object that have a matching relation, determining bottom-edge coordinate information of the camera target object in the perspective view and a radar distance of the radar target object;
and checking the rationality of the matching relation between the camera target object and the radar target object according to the bottom-edge coordinate information and the radar distance.
Optionally, the performing a rationality check on the matching relation between the camera target object and the radar target object includes:
determining the number of labeled camera target objects that have a matching relation with the same radar target object;
and checking the rationality of the matching relation between the camera target objects and the radar target object according to the number of camera target objects.
Optionally, before the labeling the matching relationship between the camera target object and the radar target object according to the camera plan view and the perspective view, the method further includes:
judging whether the display of the camera target object in the camera plan view is accurate;
when the display of the camera target object in the camera plan view is judged to be inaccurate, adjusting the camera target object, and updating the display of the camera target object in the camera plan view and the perspective view.
Optionally, before the performing the target detection on the camera data to obtain the camera target object and performing the target detection on the radar data to obtain the radar target object, the method further includes:
Aligning the camera data and the radar data.
Optionally, the perspective view comprises a bird's eye view.
An apparatus for data processing, the apparatus comprising:
the camera data and radar data acquisition module is used for acquiring camera data and radar data;
the target detection module is used for carrying out target detection on the camera data to obtain a camera target object and carrying out target detection on the radar data to obtain a radar target object;
a camera target object and radar target object display module for displaying the camera target object and the radar target object in a camera plan view and displaying the camera target object and the radar target object in a perspective view;
and the matching relation labeling module is used for labeling the matching relation between the camera target object and the radar target object according to the camera plan view and the perspective view.
A server comprising a processor, a memory and a computer program stored on the memory and capable of running on the processor, the computer program, when executed by the processor, implementing a method of data processing as described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of data processing as described above.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, camera data and radar data are acquired; target detection is performed on the camera data to obtain a camera target object and on the radar data to obtain a radar target object; the camera target object and the radar target object are then displayed in a camera plan view and in a perspective view; and the matching relation between the camera target object and the radar target object is labeled according to the camera plan view and the perspective view, so that the matching relation between the radar target and the camera target is accurately labeled.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed in the description of the present invention are briefly introduced below. It will be apparent that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings based on these drawings without inventive effort.
FIG. 1 is a flow chart illustrating steps of a method for data processing according to an embodiment of the present invention;
FIG. 2a is a schematic diagram of a camera target and radar target display example provided by an embodiment of the invention;
FIG. 2b is a schematic diagram of another camera target and radar target display example provided by an embodiment of the present invention;
FIG. 3 is a flow chart of steps in another method of data processing according to an embodiment of the invention;
FIG. 4 is a flow chart of steps in another method of data processing according to an embodiment of the invention;
FIG. 5 is a diagram illustrating an example of a matching relationship labeling process according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flowchart illustrating steps of a data processing method according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 101, acquiring camera data and radar data;
in the process of labeling radar targets and camera targets, the camera data and the radar data may be acquired so that target detection can be performed on each of them. For example, a camera and a millimeter-wave radar may be adopted to collect a camera signal and a radar signal, respectively, so that target detection can further be performed based on each signal.
In automatic driving and advanced driver assistance systems, perception and target detection of the surrounding environment can be realized by configuring a camera and a millimeter-wave radar. The camera-based target detection result has high accuracy and rich information content, but is limited to a two-dimensional detection result, so its value in practical application is limited; the radar-based target detection result can provide three-dimensional position and velocity information of the detected object, but is affected by the large noise and low accuracy of radar target detection. Matching radar-detected targets with camera-detected targets can fully exploit the respective advantages of the two sensors and yield accurate three-dimensional object information.
The traditional camera-radar target matching scheme is a fixed-rule algorithm built from manually summarized experience: matching logic is defined by hand, fusing rules summarized from multiple scenes into the algorithm through logic programming. In practical application, however, such an algorithm has considerable limitations and defects, such as low accuracy and poor robustness.
In an example, in order to solve the above problems and improve the performance of camera-radar target matching, a data-driven target matching algorithm based on machine learning may be adopted. A data-driven scheme needs to be based on a large amount of radar data and camera data in which the matching relations between camera targets and radar targets are accurately labeled.
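As a hedged illustration of what such a data-driven matcher might look like (not part of the patent; the feature choices, field layout, and toy data below are assumptions):

```python
# Sketch only: training a simple data-driven matcher on labeled
# camera-radar pairs. Feature choices and values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(cam_box, radar_uv, radar_range_m):
    """cam_box: (u1, v1, u2, v2) pixel bounding box; radar_uv: the radar point
    projected onto the camera plane; radar_range_m: measured radar distance."""
    u_mid = (cam_box[0] + cam_box[2]) / 2.0        # bottom-edge midpoint
    du = u_mid - radar_uv[0]                       # horizontal pixel offset
    dv = cam_box[3] - radar_uv[1]                  # vertical pixel offset
    return [du, dv, cam_box[3], radar_range_m]     # bottom edge co-varies with range

# Toy labeled set: ((camera box, projected radar pixel, radar range), is_match)
labeled = [
    (((300, 200, 360, 380), (332, 375), 22.0), 1),
    (((300, 200, 360, 380), (610, 300), 55.0), 0),
    (((600, 250, 640, 330), (618, 328), 48.0), 1),
    (((600, 250, 640, 330), (330, 370), 21.0), 0),
]
X = np.array([pair_features(*s) for s, _ in labeled])
y = np.array([label for _, label in labeled])
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X)[:, 1])  # match probability for each candidate pair
```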
Step 102, performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object;
after the camera data and the radar data are acquired, target detection can be performed on the camera data and the radar data respectively, and then a camera target object and a radar target object can be obtained.
For example, camera target detection may be completed through inference with a pre-trained model, or may be performed by manual labeling.
Step 103, displaying the camera target object and the radar target object in a camera plan view, and displaying the camera target object and the radar target object in a perspective view;
as an example, the perspective view may include a bird's eye view that may be used for cross-referenced, cross-checked annotation with the camera plan view.
After the camera target object and the radar target object are obtained, they can be displayed in a camera plan view and in a perspective view, so that the camera plan view and the perspective view can further be used to match and label the camera target object and the radar target object.
For example, a radar target (i.e., a radar target object) and a camera target (i.e., a camera target object) may be displayed in the camera plane and in the bird's eye view, respectively; in these views, the velocity of the radar target may be represented by an arrow and the position of the camera target by a view volume (view frustum).
In one example, the representation used to display the radar target in the camera plane or the bird's eye view may be adjusted, such as rendering the radar target in different colors or shapes (square, diamond, etc.); different upper and lower view-volume boundaries may be used to represent camera targets in the bird's eye view.
In yet another example, the view volume in the bird's eye view may be a triangle, and the target may be located using either the full view volume (a triangle without maximum and minimum distance limits) or a limited view volume (a trapezoid with maximum and minimum distance limits). Because of the uncertainty of the three-dimensional position of a camera-detected target (for example, in the depth direction), the camera target is difficult to show accurately in the bird's eye view, which affects the accurate labeling of camera-radar matching relations. The limited view volume is obtained by adding upper and lower boundaries to the view volume; this limited-view-volume representation helps reduce the three-dimensional uncertainty of the camera target and provides labeling clues to the greatest extent, so that accurate labeling can be realized.
In an embodiment of the present invention, the displaying the camera target object in the perspective view may include the following sub-steps:
a substep 11 of determining three-dimensional information of the camera target object;
after the camera target object is obtained, corresponding three-dimensional information may be acquired for the camera target object.
For example, the three-dimensional information of a target detected on the camera plane may be estimated. The three-dimensional information of the camera target object may be acquired in various ways: for example, it may be estimated based on the length and width of the detected bounding box, or the actual three-dimensional position of the detected target may be calculated from the bottom edge of the detected bounding box based on inverse perspective mapping (IPM), which may specifically be calculated in the following manner:
\[ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_{\mathrm{int}} \, M_{\mathrm{ext}} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \]
where M_ext and M_int may be the extrinsic matrix and the intrinsic matrix of the camera sensor, respectively, and Z may be set to 0 so as to project the camera-plane detection onto the ground, thereby eliminating the depth uncertainty of the camera-detected target.
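A minimal Python sketch of this ground-plane projection follows, assuming world coordinates with Z pointing up, a ground plane Z = 0, and extrinsics (R, t) that map world to camera coordinates; the calibration values are illustrative, not from the patent:

```python
import numpy as np

def ipm_ground_point(K, R, t, u, v):
    """Project pixel (u, v) -- e.g. the bottom-edge midpoint of a bounding
    box -- onto the ground plane Z = 0 via inverse perspective mapping."""
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))  # plane-to-image homography
    w = np.linalg.solve(H, np.array([u, v, 1.0]))   # invert the homography
    return w[0] / w[2], w[1] / w[2]                 # ground coordinates (X, Y)

K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])  # Z-up world
t = np.array([0.0, 1.5, 0.0])  # camera mounted 1.5 m above the ground
print(ipm_ground_point(K, R, t, 640.0, 480.0))  # ~ (0.0, 10.0): 10 m ahead
```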
And a substep 12 of displaying a view volume corresponding to the camera target object in the perspective view, and setting the boundary of the view volume according to the three-dimensional information.
After the three-dimensional information of the camera target object is determined, the camera target object may be displayed in the perspective view by displaying a view volume corresponding to the camera target object and setting the boundaries of the view volume according to the three-dimensional information.
For example, the view volume may be made a limited view volume by adding upper and lower boundaries, which may represent the actual size range of the detected object (such as 1 meter and 5 meters); in the bird's eye view, the limited view volume appears as a trapezoid (as shown in FIG. 2a).
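As an illustrative sketch (assuming a pinhole camera with focal length fx and principal point cx, and near/far bounds along the viewing rays, e.g. derived from plausible object sizes; all names and values are assumptions), the trapezoid corners of such a limited view volume in the bird's eye view could be computed as follows:

```python
import numpy as np

def pixel_to_azimuth(u, fx, cx):
    """Horizontal viewing angle (radians) of image column u from the optical axis."""
    return np.arctan2(u - cx, fx)

def bev_limited_view_volume(u_left, u_right, fx, cx, near_m, far_m):
    """Corner points (x lateral, y forward) of a limited view volume: the slice
    of the camera frustum between near_m and far_m, a trapezoid in the BEV."""
    az_l = pixel_to_azimuth(u_left, fx, cx)
    az_r = pixel_to_azimuth(u_right, fx, cx)
    ray = lambda az, r: (r * np.sin(az), r * np.cos(az))
    return [ray(az_l, near_m), ray(az_r, near_m), ray(az_r, far_m), ray(az_l, far_m)]

# Bounding box spanning image columns 300..360, bounded between 1 m and 5 m:
print(bev_limited_view_volume(300.0, 360.0, 800.0, 640.0, 1.0, 5.0))
```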
In one example, the layout of the camera plan view and the bird's eye view can be adjusted. As shown in FIG. 2a, the camera plan view may be displayed on the left and the bird's eye view on the right; as shown in FIG. 2b, the camera plan view may be displayed on top and the bird's eye view at the bottom. The present invention is not limited thereto, and in different display modes the two viewing angles may be switched, for example by a shortcut key.
Step 104, labeling the matching relation between the camera target object and the radar target object according to the camera plan view and the perspective view.
In a specific implementation, the camera target object and the radar target object are displayed in a camera plan view and a perspective view, and then the matching relationship between the camera target object and the radar target object can be labeled according to the camera plan view and the perspective view.
For the camera signal: since it is two-dimensional information obtained by projecting three-dimensional objects through perspective transformation, it carries a three-dimensional uncertainty (mainly in the depth direction) that stems from the camera imaging principle and is difficult to overcome; the camera-detected target therefore also has three-dimensional uncertainty.
For the radar signal: limited by the radar technologies on the market, radar-detected targets often contain noise; limited further by radar hardware configuration, there is uncertainty in the height of radar-detected targets. This noise and uncertainty cause ambiguity when labeling the matching relations of camera-radar targets, which affects accurate labeling. The ambiguity can manifest in the following aspects:
(1) radar targets generated by objects at different depths may be projected onto the same location on the camera plane;
(2) the projection of the radar target in the height direction of the camera plane is inaccurate;
(3) the projection of the camera object in the depth direction in the bird's eye view is inaccurate.
By matching and labeling the camera and radar sensing results using the camera plan view and the bird's eye view simultaneously, the two views can be cross-referenced and cross-checked. Combined with the geometric relations among the three-dimensional objects, the different projections onto the camera plane and the bird's eye view can be better exploited, presenting richer semantic and geometric relations as references for labeling, so that accurate matching-relation labeling can be realized.
In the embodiment of the invention, camera data and radar data are acquired; target detection is performed on the camera data to obtain a camera target object and on the radar data to obtain a radar target object; the camera target object and the radar target object are then displayed in a camera plan view and in a perspective view; and the matching relation between the camera target object and the radar target object is labeled according to the camera plan view and the perspective view, so that the matching relation between the radar target and the camera target is accurately labeled.
Referring to FIG. 3, a flowchart illustrating steps of another data processing method according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 301, acquiring camera data and radar data;
in the process of labeling radar targets and camera targets, the camera data and the radar data may be acquired so that target detection can be performed on each of them. For example, a camera and a millimeter-wave radar may be adopted to collect a camera signal and a radar signal, respectively, so that target detection can further be performed based on each signal.
Step 302, performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object;
after the camera data and the radar data are acquired, target detection can be performed on the camera data and the radar data respectively, and then a camera target object and a radar target object can be obtained.
Step 303, displaying the camera target object and the radar target object in a camera plan view, and displaying the camera target object and the radar target object in a perspective view;
After the camera target object and the radar target object are obtained, they can be displayed in a camera plan view and in a perspective view, so that the camera plan view and the perspective view can further be used to match and label the camera target object and the radar target object.
Step 304, labeling the matching relation between the camera target object and the radar target object according to the camera plan view and the perspective view;
in a specific implementation, the camera target object and the radar target object are displayed in a camera plan view and a perspective view, and then the matching relationship between the camera target object and the radar target object can be labeled according to the camera plan view and the perspective view.
Step 305, performing a rationality check on the matching relationship between the camera target object and the radar target object.
In a specific implementation, a rationality check (sanity check) may be performed on the matching relation between camera target objects and radar target objects. For example, after all camera targets are labeled, a labeling rationality check may be performed; its content may include two parts: first, judging whether the labeling results are self-contradictory; second, judging whether the labeling is complete.
In one example, self-contradiction may be judged in several ways: any two matched camera targets may be checked to determine whether the order of their bottom-edge coordinates matches the order of their radar distances; it may also be detected whether the same radar target has been linked multiple times with every link marked as a confirmed relation.
In yet another example, judging whether the labeling is complete may be done by detecting whether all camera targets have been labeled.
In an embodiment of the present invention, step 305 may include the following sub-steps:
for a camera target object and a radar target object that have a matching relation, determining bottom-edge coordinate information of the camera target object in the perspective view and a radar distance of the radar target object; and checking the rationality of the matching relation between the camera target object and the radar target object according to the bottom-edge coordinate information and the radar distance.
In the process of the rationality check, for a camera target object and a radar target object that have a matching relation, the bottom-edge coordinate information of the camera target object in the perspective view and the radar distance of the radar target object can be determined; according to the bottom-edge coordinate information and the radar distance, whether a contradiction exists in the labeling result can then be judged, so as to check the rationality of the matching relation between the camera target object and the radar target object.
For example, any two matched camera targets may be checked to determine whether the order of their bottom-edge coordinates matches the order of their radar distances: according to the principle of inverse perspective mapping, the larger the bottom-edge coordinate value, the closer the radar distance of the corresponding target; if any two targets do not satisfy this criterion, the algorithm may issue an error warning.
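A minimal sketch of this ordering check, assuming each confirmed pair is reduced to its bounding-box bottom-edge pixel coordinate and its measured radar range (this representation is an assumption):

```python
def check_bottom_edge_order(matches):
    """matches: list of (bbox_bottom_v, radar_range_m) for confirmed pairs in
    one frame. Under inverse perspective mapping a larger bottom-edge pixel
    value means a nearer object, so bottom_v and radar range must sort in
    opposite orders; index pairs that violate this are returned as errors."""
    errors = []
    for i in range(len(matches)):
        for j in range(i + 1, len(matches)):
            (v_i, r_i), (v_j, r_j) = matches[i], matches[j]
            if (v_i - v_j) * (r_i - r_j) > 0:  # same ordering in both -> contradiction
                errors.append((i, j))
    return errors

print(check_bottom_edge_order([(480, 10.0), (420, 25.0), (450, 30.0)]))  # [(1, 2)]
```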
In an embodiment of the present invention, step 305 may include the following sub-steps:
determining the number of labeled camera target objects that have a matching relation with the same radar target object; and checking the rationality of the matching relation between the camera target objects and the radar target object according to the number of camera target objects.
In the rationality-checking process, the number of labeled camera target objects that have a matching relation with the same radar target object can be determined; according to this number, whether the labeling is complete can be judged, so as to check the rationality of the matching relations between camera target objects and radar target objects.
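A minimal sketch of this multiplicity check, assuming confirmed matches are stored as (camera_id, radar_id) pairs and that one radar target normally corresponds to at most one camera target (both assumptions for illustration):

```python
from collections import Counter

def check_radar_link_counts(confirmed_pairs, max_per_radar=1):
    """confirmed_pairs: (camera_id, radar_id) matches marked as confirmed.
    Returns radar targets linked to more camera targets than the assumed
    threshold; the default of 1 is an assumption for the common case."""
    counts = Counter(radar_id for _, radar_id in confirmed_pairs)
    return {rid: n for rid, n in counts.items() if n > max_per_radar}

print(check_radar_link_counts([("c1", "r1"), ("c2", "r1"), ("c3", "r2")]))  # {'r1': 2}
```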
Because of the three-dimensional uncertainty of camera targets and the noise of radar targets, accurate labeling is difficult, and the rationality check can be a key step in realizing accurate labeling. Through a labeling rationality check based on inverse perspective mapping, geometric constraints such as ordinal order can be fully utilized to discover errors that appear in the labeling.
In the embodiment of the invention, camera data and radar data are acquired; target detection is performed on the camera data to obtain a camera target object and on the radar data to obtain a radar target object; the camera target object and the radar target object are then displayed in a camera plan view and in a perspective view; the matching relation between the camera target object and the radar target object is labeled according to the camera plan view and the perspective view, and the matching relation is checked for rationality. The matching relation between the radar target and the camera target is thus accurately labeled; by adopting a labeling method in which the camera plan view and the bird's eye view are cross-referenced and cross-checked, and by performing a labeling rationality check, radar targets and camera targets can be accurately labeled with matching relations in a big-data setting.
Referring to fig. 4, a flowchart illustrating steps of another data processing method according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 401, acquiring camera data and radar data;
in the process of labeling radar targets and camera targets, the camera data and the radar data may be acquired so that target detection can be performed on each of them. For example, a camera and a millimeter-wave radar may be adopted to collect a camera signal and a radar signal, respectively, so that target detection can further be performed based on each signal.
Step 402, aligning the camera data and the radar data;
after the camera data and the radar data are obtained, time alignment and space alignment processing can be performed on the camera data and the radar data, so that subsequent processing can be further performed by using the data after alignment calibration. For example, the radar detection results may be spatially and temporally aligned to the camera plane.
Step 403, performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object;
after the camera data and the radar data are acquired, target detection can be performed on the camera data and the radar data respectively, and then a camera target object and a radar target object can be obtained.
Step 404, displaying the camera target object and the radar target object in a camera plan view, and displaying the camera target object and the radar target object in a perspective view;
After the camera target object and the radar target object are obtained, they can be displayed in a camera plan view and in a perspective view, so that the camera plan view and the perspective view can further be used to match and label the camera target object and the radar target object.
Step 405, determining whether the display of the camera target object in the camera plan view is accurate;
after the camera target object and the radar target object are displayed in the camera plan view and displayed in the perspective view, whether the camera target object is accurately displayed in the camera plan view can be judged.
Step 406, when it is determined that the camera target object is displayed inaccurately in the camera plan view, adjusting the camera target object, and updating the display of the camera target object in the camera plan view and the stereo view;
in the process of determining whether the display of the camera target object in the camera plan view is accurate, the camera target object may be adjusted when its display is judged to be inaccurate, and the display of the camera target object in the camera plan view and the perspective view may be updated. By inspecting and labeling each camera target in turn, the radar points that correspond to and can be matched with each camera target can be selected.
For example, each camera target may be checked for accuracy; in case of inaccuracy, the target may be ignored or manually adjusted. When a target is adjusted, the view volume corresponding to the camera target object in the bird's eye view can be adjusted accordingly.
In an example, the annotation tool can be configured accordingly, for example as to whether the annotator is allowed to adjust the bounding box of a camera target object in the camera plan view.
In yet another example, when a camera target is inaccurate and is not adjusted, or the target is heavily occluded, or the target is far away, the target may be ignored. For each radar point that can be matched, whether the camera-radar target matching relation based on that radar point is confirmed may be selected.
Step 407, labeling the matching relationship between the camera target object and the radar target object according to the camera plan view and the perspective view.
In a specific implementation, the camera target object and the radar target object are displayed in a camera plan view and a perspective view, and then the matching relationship between the camera target object and the radar target object can be labeled according to the camera plan view and the perspective view.
In order to enable those skilled in the art to better understand the above steps, the embodiment of the present invention is illustrated below by way of example with reference to FIG. 5, with a condensed code sketch following the list; it should be understood, however, that the embodiment of the present invention is not limited thereto.
1. Collecting millimeter-wave radar data;
2. Target detection based on the millimeter-wave radar;
3. Spatial alignment and temporal alignment of the radar detection results to the camera plane;
4. Acquiring camera data;
5. Camera-based target detection;
6. Estimating three-dimensional information of the camera-detected targets;
7. Displaying the radar-detected targets on the camera plane, using a radar-point representation;
8. Displaying the camera-detected targets on the camera plane, using a bounding-box representation;
9. Displaying the radar-detected targets in the bird's eye view, using radar points and radar-velocity arrows;
10. Displaying the camera-detected targets in the bird's eye view, using object points and view volumes;
11. Inspecting and labeling each camera-detected target in turn;
12. Is the camera target inaccurate? If so, adjusting the camera target and updating the display of the adjusted target; if accurate, further checking whether the camera target can be ignored: if it can, ignoring it, otherwise proceeding to the next check;
13. Does the camera target have no matching radar point?
14. If there is no radar point, detecting whether all camera targets have been labeled;
15. If not all have been labeled, hiding the radar points whose matches with camera-detected targets have been confirmed;
16. Inspecting the next camera-detected target;
17. When the camera target has radar points, labeling the radar points matched with the camera target one by one, and selecting whether each matching point is confirmed;
18. Does the labeling result pass the rationality check? If not, returning to inspect and label each camera-detected target in turn; when the rationality check is satisfied, ending.
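The condensed sketch mentioned above (illustration only; the `ui` object stands in for a hypothetical labeling-tool interface, and every method on it is an assumption rather than part of the disclosed device):

```python
def annotate_frame(camera_targets, radar_points, ui):
    for cam in camera_targets:
        if not ui.display_is_accurate(cam):          # step 12
            if ui.wants_adjustment(cam):
                cam = ui.adjust(cam)                 # also refreshes the BEV view volume
            else:
                continue                             # ignore occluded/distant targets
        candidates = [r for r in radar_points if ui.is_candidate(cam, r)]
        if not candidates:
            continue                                 # step 13: no radar point
        for r in candidates:                         # step 17
            if ui.confirm_match(cam, r):
                ui.record_match(cam, r)
                ui.hide(r)                           # step 15: hide confirmed points
    return ui.sanity_check()                         # step 18: redo loop on failure
```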
In the embodiment of the invention, camera data and radar data are acquired and aligned; target detection is then performed on the camera data to obtain a camera target object and on the radar data to obtain a radar target object; the camera target object and the radar target object are displayed in a camera plan view and in a perspective view; whether the camera target object is displayed accurately in the camera plan view is judged, and when the display is judged to be inaccurate, the camera target object is adjusted and its display in the camera plan view and the perspective view is updated; the matching relation between the camera target object and the radar target object is then labeled according to the camera plan view and the perspective view. The matching relation between the radar target and the camera target is thus accurately labeled, and by adopting a labeling method in which the camera plan view and the bird's eye view are cross-referenced and cross-checked, radar targets and camera targets can be accurately labeled with matching relations in a big-data setting.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 6, a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention is shown, which may specifically include the following modules:
a camera data and radar data acquisition module 601 for acquiring camera data and radar data;
a target detection module 602, configured to perform target detection on the camera data to obtain a camera target object, and perform target detection on the radar data to obtain a radar target object;
a camera target object and radar target object display module 603 for displaying the camera target object and the radar target object in a camera plan view and displaying the camera target object and the radar target object in a perspective view;
a matching relationship labeling module 604, configured to label a matching relationship between the camera target object and the radar target object according to the camera plan view and the perspective view.
In an embodiment of the present invention, the camera target object and radar target object display module 603 includes:
a three-dimensional information determination sub-module for determining three-dimensional information of the camera target object;
and the view volume boundary setting submodule is used for displaying the view volume corresponding to the camera target object in the perspective view and setting the boundary of the view volume according to the three-dimensional information.
In an embodiment of the present invention, the apparatus further includes:
and the rationality checking module is used for performing a rationality check on the matching relation between the camera target object and the radar target object.
In an embodiment of the present invention, the rationality checking module includes:
the bottom-edge coordinate information and radar distance determining submodule is used for determining, for a camera target object and a radar target object that have a matching relation, the bottom-edge coordinate information of the camera target object in the perspective view and the radar distance of the radar target object;
and the first rationality checking sub-module is used for checking the rationality of the matching relation between the camera target object and the radar target object according to the bottom edge coordinate information and the radar distance.
In an embodiment of the present invention, the rationality checking module includes:
the camera target object quantity determining submodule is used for determining the number of labeled camera target objects that have a matching relation with the same radar target object;
and the second rationality checking submodule is used for checking the rationality of the matching relation between the camera target object and the radar target object according to the number of the camera target objects.
In an embodiment of the present invention, the apparatus further includes:
the judging module is used for judging whether the display of the camera target object in the camera plan view is accurate;
and the camera target object adjusting module is used for adjusting the camera target object and updating the display of the camera target object in the camera plan view and the perspective view when the display of the camera target object in the camera plan view is judged to be inaccurate.
In an embodiment of the present invention, the apparatus further includes:
an alignment module to align the camera data and the radar data.
In an embodiment of the invention, the perspective view comprises a bird's eye view.
In the embodiment of the invention, camera data and radar data are acquired; target detection is performed on the camera data to obtain a camera target object and on the radar data to obtain a radar target object; the camera target object and the radar target object are then displayed in a camera plan view and in a perspective view; and the matching relation between the camera target object and the radar target object is labeled according to the camera plan view and the perspective view, so that the matching relation between the radar target and the camera target is accurately labeled.
An embodiment of the present invention also provides a server, which may include a processor, a memory, and a computer program stored on the memory and capable of running on the processor, and when executed by the processor, the computer program implements the method for processing data as above.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the above data processing method.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method and apparatus for data processing provided by the present invention have been described in detail above. Specific examples are applied herein to illustrate the principles and embodiments of the present invention, and the above description of the embodiments is only used to help understand the method and core ideas of the present invention. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of the present specification should not be construed as a limitation of the present invention.

Claims (11)

1. A method of data processing, the method comprising:
acquiring camera data and radar data;
performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object;
displaying the camera target object and the radar target object in a camera plan view, and displaying the camera target object and the radar target object in a perspective view;
and labeling the matching relation between the camera target object and the radar target object according to the camera plan view and the perspective view.
2. The method of claim 1, wherein displaying the camera target object in the perspective view comprises:
determining three-dimensional information of the camera target object;
and displaying a view volume corresponding to the camera target object in the perspective view, and setting the boundary of the view volume according to the three-dimensional information.
3. The method of claim 1 or 2, further comprising, after said labeling the matching relationship of the camera target object and the radar target object according to the camera plan view and the perspective view:
and checking the matching relation between the camera target object and the radar target object for reasonableness.
4. The method according to claim 3, wherein the performing a rationality check on the matching relationship of the camera target object and the radar target object comprises:
for a camera target object and a radar target object which have a matching relation, determining bottom-edge coordinate information of the camera target object in the perspective view and a radar distance of the radar target object;
and checking the rationality of the matching relation between the camera target object and the radar target object according to the bottom edge coordinate information and the radar distance.
5. The method according to claim 3, wherein the performing a rationality check on the matching relationship of the camera target object and the radar target object comprises:
determining the number of camera target objects which are marked and have a matching relation with the same radar target object;
and according to the number of the camera target objects, carrying out rationality check on the matching relation between the camera target objects and the radar target objects.
6. The method of claim 1, further comprising, prior to said labeling the matching relationship between the camera target object and the radar target object according to the camera plan view and the perspective view:
judging whether the display of the camera target object in the camera plan view is accurate or not;
when the display of the camera target object in the camera plan view is judged to be inaccurate, adjusting the camera target object, and updating the display of the camera target object in the camera plan view and the perspective view.
7. The method of claim 1, further comprising, before the performing the target detection on the camera data to obtain a camera target object and performing the target detection on the radar data to obtain a radar target object:
aligning the camera data and the radar data.
8. The method of claim 1, wherein the perspective view comprises a bird's eye view.
9. An apparatus for data processing, the apparatus comprising:
the camera data and radar data acquisition module is used for acquiring camera data and radar data;
the target detection module is used for carrying out target detection on the camera data to obtain a camera target object and carrying out target detection on the radar data to obtain a radar target object;
a camera target object and radar target object display module for displaying the camera target object and the radar target object in a camera plan view and displaying the camera target object and the radar target object in a perspective view;
and the matching relation labeling module is used for labeling the matching relation between the camera target object and the radar target object according to the camera plan view and the perspective view.
10. A server comprising a processor, a memory and a computer program stored on the memory and capable of running on the processor, the computer program, when executed by the processor, implementing a method of data processing according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method of data processing according to any one of claims 1 to 8.
CN202010843958.2A 2020-08-20 2020-08-20 Data processing method and device Pending CN112017241A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010843958.2A CN112017241A (en) 2020-08-20 2020-08-20 Data processing method and device
PCT/CN2021/110287 WO2022037403A1 (en) 2020-08-20 2021-08-03 Data processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010843958.2A CN112017241A (en) 2020-08-20 2020-08-20 Data processing method and device

Publications (1)

Publication Number Publication Date
CN112017241A true CN112017241A (en) 2020-12-01

Family

ID=73505226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010843958.2A Pending CN112017241A (en) 2020-08-20 2020-08-20 Data processing method and device

Country Status (2)

Country Link
CN (1) CN112017241A (en)
WO (1) WO2022037403A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022037403A1 (en) * 2020-08-20 2022-02-24 广州小鹏汽车科技有限公司 Data processing method and apparatus

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011036807A1 (en) * 2009-09-28 2011-03-31 トヨタ自動車株式会社 Object detection device and object detection method
CN107871129A (en) * 2016-09-27 2018-04-03 北京百度网讯科技有限公司 Method and apparatus for handling cloud data
CN110197148A (en) * 2019-05-23 2019-09-03 北京三快在线科技有限公司 Mask method, device, electronic equipment and the storage medium of target object
CN110598743A (en) * 2019-08-12 2019-12-20 北京三快在线科技有限公司 Target object labeling method and device
CN110794405A (en) * 2019-10-18 2020-02-14 北京全路通信信号研究设计院集团有限公司 Target detection method and system based on camera and radar fusion
CN111324115A (en) * 2020-01-23 2020-06-23 北京百度网讯科技有限公司 Obstacle position detection fusion method and device, electronic equipment and storage medium
CN111353273A (en) * 2020-03-09 2020-06-30 深圳大学 Radar data labeling method, device, equipment and storage medium
CN111401194A (en) * 2020-03-10 2020-07-10 北京百度网讯科技有限公司 Data processing method and device for automatic driving vehicle

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107798010A (en) * 2016-08-31 2018-03-13 法乐第(北京)网络科技有限公司 A kind of annotation equipment of sensing data
CN107784038B (en) * 2016-08-31 2021-03-19 法法汽车(中国)有限公司 Sensor data labeling method
CN108764187B (en) * 2018-06-01 2022-03-08 百度在线网络技术(北京)有限公司 Method, device, equipment, storage medium and acquisition entity for extracting lane line
CN110942449B (en) * 2019-10-30 2023-05-23 华南理工大学 Vehicle detection method based on laser and vision fusion
CN112017241A (en) * 2020-08-20 2020-12-01 广州小鹏汽车科技有限公司 Data processing method and device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011036807A1 (en) * 2009-09-28 2011-03-31 トヨタ自動車株式会社 Object detection device and object detection method
CN107871129A (en) * 2016-09-27 2018-04-03 北京百度网讯科技有限公司 Method and apparatus for handling cloud data
CN110197148A (en) * 2019-05-23 2019-09-03 北京三快在线科技有限公司 Mask method, device, electronic equipment and the storage medium of target object
CN110598743A (en) * 2019-08-12 2019-12-20 北京三快在线科技有限公司 Target object labeling method and device
CN110794405A (en) * 2019-10-18 2020-02-14 北京全路通信信号研究设计院集团有限公司 Target detection method and system based on camera and radar fusion
CN111324115A (en) * 2020-01-23 2020-06-23 北京百度网讯科技有限公司 Obstacle position detection fusion method and device, electronic equipment and storage medium
CN111353273A (en) * 2020-03-09 2020-06-30 深圳大学 Radar data labeling method, device, equipment and storage medium
CN111401194A (en) * 2020-03-10 2020-07-10 北京百度网讯科技有限公司 Data processing method and device for automatic driving vehicle

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022037403A1 (en) * 2020-08-20 2022-02-24 广州小鹏汽车科技有限公司 Data processing method and apparatus

Also Published As

Publication number Publication date
WO2022037403A1 (en) 2022-02-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20201201)