WO2022037403A1 - Data processing method and apparatus - Google Patents

Data processing method and apparatus

Info

Publication number
WO2022037403A1
WO2022037403A1, PCT/CN2021/110287, CN2021110287W
Authority
WO
WIPO (PCT)
Prior art keywords
camera
target object
radar
data
view
Prior art date
Application number
PCT/CN2021/110287
Other languages
French (fr)
Chinese (zh)
Inventor
董旭
刘兰个川
毛云翔
Original Assignee
广州小鹏汽车科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州小鹏汽车科技有限公司 filed Critical 广州小鹏汽车科技有限公司
Publication of WO2022037403A1 publication Critical patent/WO2022037403A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/02 Details of systems according to group G01S13/00
    • G01S 7/41 Details of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • The present invention relates to the field of data processing, and in particular to a data processing method and apparatus.
  • the perception of the surrounding environment and target detection can be realized by configuring cameras and millimeter-wave radars.
  • By matching the camera-based target detection results with the radar-based target detection results, the respective advantages of both sensors can be fully exploited.
  • a method of data processing comprising:
  • in a camera plan view, the camera target object and the radar target object are displayed, and in a stereoscopic view, the camera target object and the radar target object are displayed;
  • the matching relationship between the camera target object and the radar target object is marked.
  • the displaying of the camera target object in the stereoscopic view includes:
  • the viewing volume corresponding to the camera target object is displayed, and the boundary of the viewing volume is set according to the three-dimensional information.
  • the method further includes:
  • a rationality check is performed on the matching relationship between the camera target object and the radar target object.
  • performing a rationality check on the matching relationship between the camera target object and the radar target object includes:
  • the matching relationship between the camera target object and the radar target object is checked for rationality.
  • performing a rationality check on the matching relationship between the camera target object and the radar target object includes:
  • the matching relationship between the camera target objects and the radar target objects is checked for rationality.
  • before the labeling of the matching relationship between the camera target object and the radar target object according to the camera plan view and the stereoscopic view, the method further includes:
  • the camera target object is adjusted, and the display of the camera target object in the camera plan view and the stereoscopic view is updated.
  • before performing target detection on the camera data to obtain a camera target object and performing target detection on the radar data to obtain a radar target object, the method further includes:
  • the camera data and the radar data are aligned.
  • the stereoscopic view includes a bird's-eye view.
  • a data processing device includes:
  • Camera data and radar data acquisition module for acquiring camera data and radar data
  • a target detection module configured to perform target detection on the camera data to obtain a camera target object, and perform target detection on the radar data to obtain a radar target object;
  • a camera target object and radar target object display module configured to display the camera target object and the radar target object in a camera plan view, and display the camera target object and the radar target object in a stereoscopic view;
  • the matching relationship labeling module is configured to label the matching relationship between the camera target object and the radar target object according to the camera plan view and the stereogram.
  • a server includes a processor, a memory, and a computer program stored on the memory and capable of running on the processor, and the computer program, when executed by the processor, implements the data processing method described above.
  • a computer-readable storage medium stores a computer program which, when executed by a processor, implements the above-mentioned data processing method.
  • In the present invention, by acquiring camera data and radar data, performing target detection on the camera data to obtain the camera target object and on the radar data to obtain the radar target object, displaying the camera target object and the radar target object in the camera plan view and in the stereoscopic view, and then marking the matching relationship between the camera target object and the radar target object according to the camera plan view and the stereoscopic view, the matching relationship between the radar target and the camera target is accurately marked.
  • the relationship between the radar target and the camera target in the context of big data can be accurately matched and labeled by using the labeling method of mutual reference and comparison between the camera plan and the bird's-eye view.
  • FIG. 1 is a flowchart of steps of a method for data processing provided by an embodiment of the present invention
  • FIG. 2a is a schematic diagram of an example of displaying a camera target and a radar target according to an embodiment of the present invention
  • FIG. 2b is a schematic diagram of another example of displaying a camera target and a radar target according to an embodiment of the present invention
  • FIG. 3 is a flowchart of steps of another data processing method provided by an embodiment of the present invention.
  • FIG. 4 is a flowchart of steps of yet another data processing method provided by an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an example of a matching relationship labeling process provided by an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention.
  • Referring to FIG. 1, a flowchart of steps of a data processing method provided by an embodiment of the present invention is shown, which may specifically include the following steps:
  • Step 101 acquiring camera data and radar data
  • the camera data and the radar data can be acquired to perform target detection for the camera data and the radar data respectively. Further, target detection can be performed based on camera signals and radar signals, respectively.
  • the perception of the surrounding environment and target detection can be achieved by configuring cameras and millimeter-wave radars.
  • Camera-based target detection results are highly accurate and information-rich, but they are limited to two-dimensional detection results, so their value in practical applications is limited; radar-based target detection can obtain the three-dimensional position and velocity of the detected object, but it is strongly affected by the large noise and low accuracy of radar target detection.
  • the respective advantages of the two sensors can be fully utilized, and accurate three-dimensional object information can be obtained.
  • Conventionally, the matching algorithm is mainly based on human-defined logic, formed by programming multi-scenario fusion rules summarized from human experience.
  • this algorithm has many limitations and shortcomings in practical applications, such as low accuracy and poor robustness.
  • Alternatively, a data-driven, machine-learning-based target matching algorithm can be used; such a data-driven solution, however, must be based on a large amount of radar data and camera data in which the matching relationship between camera targets and radar targets has been accurately marked.
  • The accurate marking tool of the present invention can realize accurate marking of the matching relationship between radar targets and camera targets in the context of big data, and can be applied to the implementation of a machine-learning-based camera-radar target fusion algorithm.
  • Step 102 performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object;
  • target detection can be performed on the camera data and the radar data respectively, and then the camera target object and the radar target object can be obtained.
  • the detection of the camera target can be completed through pre-trained model inference, or the detection can be performed by manual labeling.
  • the present invention can use the pre-trained model for automatic detection to improve labeling efficiency.
  • Step 103 displaying the camera target object and the radar target object in a camera plan view, and displaying the camera target object and the radar target object in a three-dimensional view;
  • The stereoscopic view may include a bird's-eye view, which can be used for mutual reference and comparison with the camera plan view during annotation.
  • The camera target object and the radar target object can be displayed in the camera plan view, and the camera target object and the radar target object can be displayed in the stereoscopic view, so that the camera plan view and the stereoscopic view can further be used to match and label the camera target object and the radar target object.
  • The radar target (that is, the radar target object) and the camera target (that is, the camera target object) are displayed, and a view frustum represents the position of the camera target.
  • The representation of radar targets displayed in the camera plane or the bird's-eye view can be adjusted, for example by using different colors and shapes (square, prism, etc.) to represent radar targets; view frustums with different upper and lower boundaries can be used to represent the camera target in the bird's-eye view.
  • The specific representation of the viewing volume in the bird's-eye view can also be a triangle: either the unbounded viewing volume (a triangle with no maximum/minimum distance restriction) or a limited viewing volume (a trapezoid with maximum and minimum distance limits) can be used in the bird's-eye view.
  • Due to the uncertainty of the three-dimensional position of the target detected by the camera (particularly in the depth direction), it is difficult to accurately represent the camera target in the bird's-eye view, which affects accurate annotation of the camera-radar matching relationship.
  • Obtaining the limited view volume and then using the limited-view-volume representation can help reduce the three-dimensional uncertainty of the camera target and provide as many labeling clues as possible, so that accurate labeling can be achieved.
  • the displaying of the camera target object in the stereoscopic view may include the following sub-steps:
  • Sub-step 11 determine the three-dimensional information of the camera target object
  • The three-dimensional information of the detected object in the camera plane can be estimated in various ways; for example, it can be estimated from the length and width of the detected bounding box, or obtained by inverse perspective transformation.
  • In the IPM (inverse perspective mapping) formula s·[u, v, 1]^T = K·[R | t]·[X, Y, Z, 1]^T, [R | t] and K are the extrinsic parameter matrix and the intrinsic parameter matrix of the camera sensor, respectively, and Z can be set to 0, so that the target detected in the camera plane is projected onto the ground, eliminating the depth uncertainty of the camera-detected target.
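The Z = 0 projection described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the intrinsic matrix K, the downward-looking rotation R, and the 10 m camera height are hypothetical calibration values chosen so that the resulting homography is invertible.

```python
import numpy as np

def ipm_ground_point(u, v, K, R, t):
    """Back-project pixel (u, v) onto the ground plane Z = 0.

    With Z fixed to 0, the pinhole projection reduces to the
    homography H = K [r1 r2 t], which can be inverted exactly,
    removing the depth uncertainty of the camera detection.
    """
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    g = np.linalg.solve(H, np.array([u, v, 1.0]))
    return g[:2] / g[2]  # ground-plane coordinates (X, Y)

# Hypothetical calibration: a camera 10 m above the origin looking
# straight down, so the ground plane Z = 0 fills the image.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])  # world-to-camera rotation (nadir view)
t = np.array([0.0, 0.0, 10.0])  # t = -R @ camera_center
```

With this geometry, a pixel at the principal point maps to the ground point directly below the camera, and a pixel offset of 80 px corresponds to a 1 m lateral offset on the ground.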
  • Sub-step 12 in the stereoscopic view, display the visual volume corresponding to the camera target object, and set the boundary of the visual volume according to the three-dimensional information.
  • In a specific implementation, the camera target object can be displayed in the stereoscopic view by displaying the view volume corresponding to the camera target object, with the boundary of the view volume set according to the three-dimensional information of the camera target object.
  • A limited viewing volume can be obtained by adding upper and lower boundaries to the viewing volume; these boundaries can represent the actual size range of the detected object, such as 1 meter and 5 meters (as shown in Figure 2a).
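The limited viewing volume (trapezoid) can be sketched as below. This is an illustrative reconstruction, since the patent gives no formulas: the near and far limits are derived from an assumed physical-width range of the detected object (the 1 m and 5 m bounds mentioned above) and a hypothetical intrinsic matrix.

```python
import numpy as np

def limited_frustum_bev(u_left, u_right, K, width_min=1.0, width_max=5.0):
    """Trapezoid corners in the bird's-eye view for one camera detection.

    The rays through the left/right edges of the 2-D bounding box bound
    the frustum laterally; assuming the object's real width lies in
    [width_min, width_max] metres pins a near and a far depth limit,
    turning the unbounded triangle into a bounded trapezoid.
    """
    fx, cx = K[0, 0], K[0, 2]
    px_width = u_right - u_left
    near = fx * width_min / px_width  # closest depth consistent with box
    far = fx * width_max / px_width   # farthest consistent depth
    s_left = (u_left - cx) / fx       # lateral ray slope x/z, left edge
    s_right = (u_right - cx) / fx     # lateral ray slope, right edge
    # Corners (x, z) on the ground plane, near edge first.
    return [(s_left * near, near), (s_right * near, near),
            (s_right * far, far), (s_left * far, far)]

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
corners = limited_frustum_bev(600.0, 680.0, K)
```

An 80-px-wide box under these intrinsics yields a trapezoid spanning depths 10 m to 50 m, exactly the range in which a 1 m to 5 m wide object could produce that box.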
  • The orientation of the camera plan view and the bird's-eye view can be adjusted: as shown in Figure 2a, the camera plan view can be displayed on the left and the bird's-eye view on the right; as shown in Figure 2b, the camera plan view can also be displayed on top with the bird's-eye view on the bottom, which is not limited in the present invention.
  • the two perspectives of the camera plan view and the bird's-eye view can be switched, for example, a shortcut key can be used to switch.
  • Step 104 Mark the matching relationship between the camera target object and the radar target object according to the camera plan view and the three-dimensional view.
  • the camera target object and the radar target object are displayed in the camera plan view and stereo map, and then the matching relationship between the camera target object and the radar target object can be marked according to the camera plan view and stereo map.
  • Since the camera signal is two-dimensional information obtained by projecting three-dimensional objects through a perspective transformation, it carries three-dimensional uncertainty (mainly in the depth direction); this is inherent in the camera imaging principle and difficult to overcome, resulting in three-dimensional uncertainty in the camera's detected targets.
  • For radar signals, due to limitations of radar technology on the market, radar detection targets often contain noise and are further limited by the radar hardware configuration, so radar detection targets are highly uncertain; this noise and uncertainty lead to ambiguity in labeling the camera-radar matching relationship, which affects accurate labeling.
  • the ambiguity can be manifested in the following aspects:
  • radar targets generated by objects of different depths may be projected to the same position
  • By mutually referring to and comparing the camera plan view and the bird's-eye view, the geometric relationships between three-dimensional objects can be better exploited when relating the camera plane to the bird's-eye view.
  • Different projections in the bird's-eye view can present richer semantic relations, geometric relations, etc. as the basis for labeling reference, so as to achieve accurate matching labeling.
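The depth ambiguity just described can be demonstrated in a few lines: two hypothetical objects lying on the same viewing ray but at different depths land on exactly the same pixel, so the camera plan view alone cannot distinguish them (the intrinsics here are illustrative).

```python
import numpy as np

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

def project(p_cam):
    """Pinhole projection of a 3-D point in the camera frame to pixels."""
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

near_obj = np.array([1.0, 0.5, 10.0])  # object 10 m away
far_obj = near_obj * 3.0               # same viewing ray, 30 m away
# project(near_obj) and project(far_obj) are the same pixel,
# which is exactly the ambiguity the bird's-eye view resolves.
```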
  • In the present invention, by acquiring camera data and radar data, performing target detection on the camera data to obtain the camera target object and on the radar data to obtain the radar target object, displaying the camera target object and the radar target object in the camera plan view and in the stereoscopic view, and then marking the matching relationship between the camera target object and the radar target object according to the camera plan view and the stereoscopic view, accurate marking of the matching relationship between the radar target and the camera target is realized.
  • the relationship between the radar target and the camera target in the context of big data can be accurately matched and labeled by using the method of mutual reference and comparison between the camera plan and the bird's-eye view.
  • Referring to FIG. 3, a flowchart of steps of another data processing method provided by an embodiment of the present invention is shown, which may specifically include the following steps:
  • Step 301 acquiring camera data and radar data
  • the camera data and the radar data can be acquired to perform target detection for the camera data and the radar data respectively. Further, target detection can be performed based on camera signals and radar signals, respectively.
  • Step 302 performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object;
  • target detection can be performed on the camera data and the radar data respectively, and then the camera target object and the radar target object can be obtained.
  • Step 303 displaying the camera target object and the radar target object in a camera plan view, and displaying the camera target object and the radar target object in a stereoscopic view;
  • the camera target object and the radar target object can be displayed in the camera plane view, and the camera target object and the radar target object can be displayed in the stereogram, so as to further adopt the camera plane view and stereogram , matching and labeling the camera target object and the radar target object.
  • Step 304 marking the matching relationship between the camera target object and the radar target object according to the camera plan view and the three-dimensional view;
  • the camera target object and the radar target object are displayed in the camera plan view and stereo map, and then the matching relationship between the camera target object and the radar target object can be marked according to the camera plan view and stereo map.
  • Step 305 Check the rationality of the matching relationship between the camera target object and the radar target object.
  • In a specific implementation, a rationality check can be performed on the matching relationship between the camera target object and the radar target object; for example, after the labeling of all camera targets is completed, a labeling rationality check can be performed.
  • the content can include two parts: one is to judge whether there is a self-contradiction in the marking result; the other is to judge whether the marking is complete.
  • Any two matching camera targets can be checked to determine whether the order of their bottom coordinates is consistent with the order of their radar distances; it can also be detected whether the same radar target is connected multiple times with a definite matching relationship.
  • To determine whether the annotation is complete, it can be checked whether all camera targets have been marked.
  • step 305 may include the following sub-steps:
  • For a camera target object and a radar target object with a matching relationship, determine the bottom coordinate information of the camera target object in the stereoscopic view and the radar distance of the radar target object; according to the bottom coordinate information and the radar distance, the matching relationship between the camera target object and the radar target object is checked for rationality.
  • In a specific implementation, the bottom coordinate information of the camera target object in the stereoscopic view and the radar distance of the radar target object can be determined; the bottom coordinate information and the radar distance are then used to judge whether the marking results are self-contradictory, thereby checking the rationality of the matching relationship between the camera target object and the radar target object.
  • Any two matching camera targets can be checked to determine whether the order of their bottom coordinates is consistent with the order of their radar distances: by the principle of inverse perspective transformation, the larger the bottom coordinate value, the smaller the corresponding target's radar distance, and the algorithm can issue a warning if any two targets do not meet this criterion.
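This pairwise check can be sketched as follows, under the stated assumption that a larger bottom-row pixel coordinate must correspond to a smaller radar distance; the function and data representation are hypothetical, not the patent's.

```python
def check_order_consistency(matches):
    """Return index pairs whose annotations contradict each other.

    `matches` lists one (bottom_v, radar_distance) tuple per matched
    camera/radar target pair.  By inverse perspective, a larger bottom
    coordinate implies a smaller radar distance, so both differences
    having the same sign marks a contradiction worth warning about.
    """
    warnings = []
    for i in range(len(matches)):
        for j in range(i + 1, len(matches)):
            (v_i, d_i), (v_j, d_j) = matches[i], matches[j]
            if (v_i - v_j) * (d_i - d_j) > 0:  # same sign: inconsistent
                warnings.append((i, j))
    return warnings

# Target 1 sits lower in the image (larger v) than target 2 yet is
# annotated farther away, so the pair (1, 2) is flagged.
flagged = check_order_consistency([(700, 8.0), (650, 15.0), (600, 12.0)])
```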
  • step 305 may include the following sub-steps:
  • In a specific implementation, the number of camera target objects marked as matching the same radar target object can be determined; the completeness of the labeling can then be judged from this number, so that the matching relationship between the camera target object and the radar target object is checked for rationality.
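The completeness side of the check can be sketched as follows; the dict-based representation of annotations is a hypothetical choice for illustration, not the patent's data model.

```python
from collections import Counter

def check_completeness(camera_targets, annotations):
    """Completeness and multiplicity check on the labeling result.

    `camera_targets` lists all camera-target ids; `annotations` maps a
    camera target to its matched radar target (None means 'ignored').
    Returns the camera targets still unlabelled and the radar targets
    claimed by more than one camera target, both of which need review.
    """
    unlabelled = sorted(c for c in camera_targets if c not in annotations)
    counts = Counter(r for r in annotations.values() if r is not None)
    multiply_matched = sorted(r for r, n in counts.items() if n > 1)
    return unlabelled, multiply_matched

unlabelled, dupes = check_completeness(
    ['c1', 'c2', 'c3', 'c4'],
    {'c1': 'r1', 'c2': 'r1', 'c3': None},  # c4 was never labelled
)
```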
  • In the embodiments of the present invention, by acquiring camera data and radar data, performing target detection on the camera data to obtain the camera target object and on the radar data to obtain the radar target object, displaying the camera target object and the radar target object in the camera plan view and in the stereoscopic view, marking the matching relationship between the camera target object and the radar target object according to the camera plan view and the stereoscopic view, and then performing a rationality check on that matching relationship, the matching relationship between the radar target and the camera target is accurately marked.
  • Referring to FIG. 4, a flowchart of steps of another data processing method provided by an embodiment of the present invention is shown, which may specifically include the following steps:
  • Step 401 acquiring camera data and radar data
  • the camera data and the radar data can be acquired to perform target detection for the camera data and the radar data respectively. Further, target detection can be performed based on camera signals and radar signals, respectively.
  • Step 402 aligning the camera data and the radar data
  • time alignment and space alignment processing can be performed on the camera data and the radar data, so as to further use the aligned and calibrated data for subsequent processing.
  • radar detection results can be spatially and temporally aligned to the camera plane.
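Spatial alignment can be sketched as projecting each radar detection into the camera image via calibrated extrinsics. The frame conventions (x right, z forward), the identity extrinsics, and the intrinsic values below are illustrative assumptions, not the patent's calibration.

```python
import numpy as np

def radar_to_pixel(rng, azimuth, K, R_rc, t_rc):
    """Project a radar detection (range, azimuth, height ~ 0) into the image.

    The radar point is formed in the radar frame (x right, z forward),
    moved into the camera frame with the radar-to-camera extrinsics
    (R_rc, t_rc), then projected with the intrinsic matrix K.
    """
    p_radar = np.array([rng * np.sin(azimuth), 0.0, rng * np.cos(azimuth)])
    p_cam = R_rc @ p_radar + t_rc
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
# Assume, for illustration, that the radar and camera frames coincide.
pixel = radar_to_pixel(10.0, 0.0, K, np.eye(3), np.zeros(3))
```

Temporal alignment can similarly pair each camera frame with the radar sweep whose timestamp is nearest.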
  • Step 403 performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object;
  • target detection can be performed on the camera data and the radar data respectively, and then the camera target object and the radar target object can be obtained.
  • Step 404 displaying the camera target object and the radar target object in a camera plan view, and displaying the camera target object and the radar target object in a three-dimensional view;
  • the camera target object and the radar target object can be displayed in the camera plane view, and the camera target object and the radar target object can be displayed in the stereogram, so as to further adopt the camera plane view and stereogram , matching and labeling the camera target object and the radar target object.
  • Step 405 judging whether the display of the camera target object in the camera plan view is accurate
  • Step 406 when it is determined that the camera target object is displayed inaccurately in the camera plan view, adjust the camera target object, and update the display of the camera target object in the camera plan view and the stereoscopic view;
  • In the process of judging whether the display of the camera target object in the camera plan view is accurate, the camera target object can be adjusted when the display is determined to be inaccurate, and the display of the camera target object in the camera plan view and the stereoscopic view can be updated.
  • the matching radar points corresponding to the camera target can be selected.
  • a camera target can be checked for accuracy, and in the event of inaccuracy, the target can be chosen to be ignored or manually adjusted.
  • the view volume corresponding to the camera target object in the bird's eye view can also be adjusted accordingly.
  • related configurations can be made for the annotation tool, for example, whether to allow the annotator to adjust the bounding box of the camera target object in the camera plan view.
  • When the camera target is inaccurate and no adjustment is selected, or the target is heavily occluded, or the target is far away, the target can be ignored. For each radar point that can be matched, it can be selected whether the camera-radar matching relationship based on that radar point is definite.
  • Step 407 Mark the matching relationship between the camera target object and the radar target object according to the camera plan view and the three-dimensional view.
  • the camera target object and the radar target object are displayed in the camera plan view and stereo map, and then the matching relationship between the camera target object and the radar target object can be marked according to the camera plan view and stereo map.
  • In the embodiments of the present invention, the camera target object and the radar target object are displayed in the camera plan view and in the stereoscopic view; it is then judged whether the display of the camera target object in the camera plan view is accurate, and when the display is inaccurate, the camera target object is adjusted and its display in the camera plan view and the stereoscopic view is updated. The matching relationship between the camera target object and the radar target object is then marked according to the camera plan view and the stereoscopic view and checked for rationality, so that the matching relationship between the radar target and the camera target can be accurately marked.
  • By using the labeling method of mutual reference and comparison between the camera plan view and the bird's-eye view, accurate matching relationships between radar targets and camera targets can be annotated in the context of big data.
  • Referring to FIG. 6, a schematic structural diagram of a data processing apparatus provided by an embodiment of the present invention is shown, which may specifically include the following modules:
  • a camera data and radar data acquisition module 601 for acquiring camera data and radar data
  • a target detection module 602 configured to perform target detection on the camera data to obtain a camera target object, and perform target detection on the radar data to obtain a radar target object;
  • a camera target object and radar target object display module 603 configured to display the camera target object and the radar target object in a camera plan view, and display the camera target object and the radar target object in a stereoscopic view;
  • the matching relationship labeling module 604 is configured to label the matching relationship between the camera target object and the radar target object according to the camera plan view and the three-dimensional view.
  • the camera target object and radar target object display module 603 includes:
  • a three-dimensional information determination submodule used for determining the three-dimensional information of the camera target object
  • the visual volume boundary setting sub-module is configured to display the visual volume corresponding to the camera target object in a stereoscopic image, and set the boundary of the visual volume according to the three-dimensional information.
  • the device further includes:
  • the rationality checking module is used for checking the rationality of the matching relationship between the camera target object and the radar target object.
  • the rationality checking module includes:
  • the bottom coordinate information and radar distance determination sub-module is used to determine, for a camera target object and a radar target object with a matching relationship, the bottom coordinate information of the camera target object in the stereoscopic view and the radar distance of the radar target object;
  • the first rationality checking sub-module is configured to perform a rationality check on the matching relationship between the camera target object and the radar target object according to the base coordinate information and the radar distance.
  • the rationality checking module includes:
  • the sub-module for determining the number of camera target objects is used to determine the number of camera target objects marked with a matching relationship with the same radar target object;
  • the second rationality checking sub-module is configured to perform a rationality check on the matching relationship between the camera target object and the radar target object according to the number of the camera target objects.
  • the device further includes:
  • a judgment module for judging whether the display of the camera target object in the camera plan view is accurate
  • a camera target object adjustment module configured to adjust the camera target object when it is determined that the camera target object is displayed inaccurately in the camera plan view, and to update the display of the camera target object in the camera plan view and the stereoscopic view.
  • the device further includes:
  • an alignment module configured to align the camera data and the radar data.
  • the three-dimensional view includes a bird's-eye view.
  • In the present invention, by acquiring camera data and radar data, performing target detection on the camera data to obtain the camera target object and on the radar data to obtain the radar target object, displaying the camera target object and the radar target object in the camera plan view and in the stereoscopic view, and then marking the matching relationship between the camera target object and the radar target object according to the camera plan view and the stereoscopic view, accurate marking of the matching relationship between the radar target and the camera target is realized.
  • the relationship between the radar target and the camera target in the context of big data can be accurately matched and labeled by using the labeling method of mutual reference and comparison between the camera plan and the bird's-eye view.
  • An embodiment of the present invention also provides a server, which may include a processor, a memory, and a computer program stored in the memory and capable of running on the processor; when the computer program is executed by the processor, the above data processing method is implemented.
  • An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the above data processing method is implemented.
  • embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media having computer-usable program code embodied therein, including but not limited to disk storage, CD-ROM, optical storage, and the like.
  • Embodiments of the present invention are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the present invention. It will be understood that each process and/or block in the flowchart illustrations and/or block diagrams, and combinations of processes and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.
  • These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing terminal equipment to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal equipment create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal equipment to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means, the instruction means implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

A data processing method and apparatus. The method comprises: obtaining camera data and radar data (101); performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object (102); displaying the camera target object and the radar target object in a camera plan view, and displaying them in a stereogram (103); and marking a matching relationship between the camera target object and the radar target object according to the camera plan view and the stereogram (104). By means of the method, accurate marking of the matching relationship between radar targets and camera targets is achieved; and, using a marking method in which the camera plan view and the aerial view reference and cross-check each other, the matching relationship between radar targets and camera targets can be precisely marked in a big-data context.

Description

A method and apparatus for data processing
The present invention claims the priority of the Chinese patent application filed with the Chinese Patent Office on August 20, 2020, with application number 202010843958.2 and titled "A method and apparatus for data processing", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of data processing, and in particular to a method and apparatus for data processing.
Background Art
At present, in autonomous driving and advanced driver assistance systems, perception of the surrounding environment and target detection can be realized by configuring a camera and a millimeter-wave radar. By matching camera-based target detection results with radar-based target detection results, the respective advantages of the two sensors can be fully exploited.
However, due to the heavy noise of radar target detection and the three-dimensional uncertainty of camera target detection, how to achieve accurate marking of the matching relationship is a problem that urgently needs to be solved.
SUMMARY OF THE INVENTION
In view of the above problems, a method and apparatus for data processing are proposed in order to overcome the above problems or at least partially solve them, including:
A method of data processing, the method comprising:
acquiring camera data and radar data;
performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object;
displaying the camera target object and the radar target object in a camera plan view, and displaying the camera target object and the radar target object in a stereoscopic view;
marking the matching relationship between the camera target object and the radar target object according to the camera plan view and the stereoscopic view.
Optionally, displaying the camera target object in the stereoscopic view includes:
determining three-dimensional information of the camera target object;
displaying, in the stereoscopic view, a view frustum corresponding to the camera target object, and setting the boundaries of the view frustum according to the three-dimensional information.
Optionally, after marking the matching relationship between the camera target object and the radar target object according to the camera plan view and the stereoscopic view, the method further includes:
performing a rationality check on the matching relationship between the camera target object and the radar target object.
Optionally, performing a rationality check on the matching relationship between the camera target object and the radar target object includes:
for a camera target object and a radar target object having a matching relationship, determining bottom-edge coordinate information of the camera target object in the stereoscopic view and the radar distance of the radar target object;
performing a rationality check on the matching relationship between the camera target object and the radar target object according to the bottom-edge coordinate information and the radar distance.
Optionally, performing a rationality check on the matching relationship between the camera target object and the radar target object includes:
determining the number of camera target objects marked as having a matching relationship with the same radar target object;
performing a rationality check on the matching relationship between the camera target objects and the radar target object according to the number of camera target objects.
Optionally, before marking the matching relationship between the camera target object and the radar target object according to the camera plan view and the stereoscopic view, the method further includes:
judging whether the display of the camera target object in the camera plan view is accurate;
when it is determined that the display of the camera target object in the camera plan view is inaccurate, adjusting the camera target object, and updating the display of the camera target object in the camera plan view and the stereoscopic view.
Optionally, before performing target detection on the camera data to obtain a camera target object and performing target detection on the radar data to obtain a radar target object, the method further includes:
aligning the camera data and the radar data.
Optionally, the stereoscopic view includes a bird's-eye view.
An apparatus for data processing, the apparatus including:
a camera data and radar data acquisition module, configured to acquire camera data and radar data;
a target detection module, configured to perform target detection on the camera data to obtain a camera target object, and to perform target detection on the radar data to obtain a radar target object;
a camera target object and radar target object display module, configured to display the camera target object and the radar target object in a camera plan view, and to display the camera target object and the radar target object in a stereoscopic view;
a matching relationship marking module, configured to mark the matching relationship between the camera target object and the radar target object according to the camera plan view and the stereoscopic view.
A server, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements the method of data processing described above.
A computer-readable storage medium, storing a computer program that, when executed by a processor, implements the method of data processing described above.
The embodiments of the present invention have the following advantages:
In the embodiments of the present invention, camera data and radar data are acquired; target detection is performed on the camera data to obtain a camera target object, and on the radar data to obtain a radar target object; the camera target object and the radar target object are then displayed in a camera plan view and in a stereoscopic view; and the matching relationship between the camera target object and the radar target object is marked according to the camera plan view and the stereoscopic view. This achieves accurate marking of the matching relationship between radar targets and camera targets. By using a marking method in which the camera plan view and the bird's-eye view reference and cross-check each other, the matching relationship between radar targets and camera targets can be accurately marked in a big-data context.
The above description is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented according to the content of the specification, and in order to make the above and other objects, features and advantages of the present invention more apparent, specific embodiments of the present invention are given below.
Brief Description of the Drawings
In order to describe the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of the steps of a method of data processing provided by an embodiment of the present invention;
Fig. 2a is a schematic diagram of an example of displaying camera targets and radar targets provided by an embodiment of the present invention;
Fig. 2b is a schematic diagram of another example of displaying camera targets and radar targets provided by an embodiment of the present invention;
Fig. 3 is a flowchart of the steps of another method of data processing provided by an embodiment of the present invention;
Fig. 4 is a flowchart of the steps of another method of data processing provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of an example of a matching relationship marking process provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an apparatus for data processing provided by an embodiment of the present invention.
Specific Embodiments
In order to make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, a flowchart of the steps of a method of data processing provided by an embodiment of the present invention is shown, which may specifically include the following steps:
Step 101: acquiring camera data and radar data;
In the process of marking radar targets and camera targets, camera data and radar data can be acquired so that target detection can be performed on each. For example, a camera and a millimeter-wave radar can be used to collect camera signals and radar signals respectively, so that target detection can further be performed based on the camera signals and the radar signals respectively.
In autonomous driving and advanced driver assistance systems, perception of the surrounding environment and target detection can be realized by configuring a camera and a millimeter-wave radar. Camera-based target detection results are highly accurate and rich in information, but are limited to two-dimensional detection results, so their value in practical applications is limited; radar-based target detection results can provide the three-dimensional position and velocity of the detected object, but are limited by the heavy noise and low accuracy of radar target detection. By matching radar-detected targets with camera-detected targets, the respective advantages of the two sensors can be fully exploited, and accurate three-dimensional object information can be obtained.
Since the traditional camera-radar target matching scheme is a fixed-rule algorithm derived from human empirical summaries, it is dominated by manually defined logic: fusion rules for multiple scenarios, summarized from human experience, are turned into an algorithm by logic programming in order to perform matching. However, this algorithm has considerable limitations and shortcomings in practical applications, such as low accuracy and poor robustness.
In one example, to solve the above problems and improve the performance of camera-radar target matching, a data-driven, machine-learning-based target matching algorithm can be adopted. A data-driven scheme requires a large amount of radar data and camera data, in which the matching relationships between camera targets and radar targets are accurately marked. With the accurate marking tool of the present invention, the matching relationship between radar targets and camera targets can be accurately marked in a big-data context, and the result can then be applied to the implementation of a machine-learning-based camera-radar target fusion algorithm.
Step 102: performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object;
After the camera data and the radar data are acquired, target detection can be performed on the camera data and the radar data respectively, so that the camera target object and the radar target object are obtained.
For example, detection of camera targets can be completed through inference by a pre-trained model, or detection can be performed by manual labeling; the present invention can use automatic detection with a pre-trained model in order to improve labeling efficiency.
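As an illustration of what the two detection stages might output (the types and field names below are assumptions made for this sketch, not the patent's data model): a camera detector yields 2D bounding boxes in the image plane, while a radar detector yields points with position and velocity.

```python
from dataclasses import dataclass

@dataclass
class CameraTarget:
    # bounding box in pixel coordinates
    x1: float
    y1: float
    x2: float
    y2: float
    label: str
    score: float

@dataclass
class RadarTarget:
    x: float   # lateral position, metres
    y: float   # longitudinal (range) position, metres
    vx: float  # velocity components, m/s
    vy: float

# A pre-trained image detector and a radar processing stage would each
# produce one list per aligned frame (the detectors themselves are omitted):
camera_targets = [CameraTarget(400, 200, 520, 310, "car", 0.92)]
radar_targets = [RadarTarget(x=1.2, y=35.0, vx=0.0, vy=-3.5)]
```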
Step 103: displaying the camera target object and the radar target object in a camera plan view, and displaying the camera target object and the radar target object in a stereoscopic view;
As an example, the stereoscopic view can include a bird's-eye view, which can be used for marking with mutual reference to, and cross-checking against, the camera plan view.
After the camera target object and the radar target object are obtained, they can be displayed in the camera plan view, and they can be displayed in the stereoscopic view, so that matching and marking can further be performed for the camera target object and the radar target object using the camera plan view and the stereoscopic view.
For example, the radar targets (i.e., radar target objects) and the camera targets (i.e., camera target objects) can be displayed in the camera plane and in the bird's-eye view respectively; in the bird's-eye view, an arrow can be used to indicate the velocity of a radar target, and a view frustum can be used to indicate the position of a camera target.
In one example, the representation used to display radar targets in the camera plane or the bird's-eye view can be adjusted; for example, different colors or shapes (square, diamond, etc.) can be used to represent radar targets, and different upper and lower frustum boundaries can be used to represent camera targets in the bird's-eye view.
In yet another example, the view frustum in the bird's-eye view can also appear as a triangle; in the bird's-eye view, a view frustum (a triangle without maximum and minimum distance limits) or a finite view frustum (a trapezoid with maximum and minimum distance limits) can be used to locate objects. Because of the uncertainty of the three-dimensional position of the camera-detected target (e.g., in the depth direction), it is difficult to represent the camera target accurately in the bird's-eye view, which in turn affects the accurate marking of the camera-radar matching relationship. By adding upper and lower boundaries to the view frustum, a finite view frustum is obtained; this finite-view-frustum representation can help reduce the three-dimensional uncertainty of the camera target and provide as many marking clues as possible, so that accurate marking can be achieved.
In an embodiment of the present invention, displaying the camera target object in the stereoscopic view may include the following sub-steps:
Sub-step 11: determining three-dimensional information of the camera target object;
After the camera target object is obtained, the corresponding three-dimensional information can be acquired for the camera target object.
For example, the three-dimensional information of the target detected in the camera plane can be estimated. There are various specific methods for obtaining the three-dimensional information of the camera target object: for example, it can be estimated based on the length and width of the detected bounding box, or the actual three-dimensional position of the detected target can be calculated from the bottom edge of the detection bounding box based on the inverse perspective transform (IPM). Specifically, the inverse perspective transform can be calculated in the following way:
s · [u, v, 1]ᵀ = K · [R | t] · [X, Y, Z, 1]ᵀ  (standard pinhole notation)
Here, the two matrices in this formula can be the extrinsic parameter matrix and the intrinsic parameter matrix of the camera sensor, respectively; Z can be set to 0 so that the target detected in the camera plane is projected onto the ground, which eliminates the uncertainty of the camera-detected target in the depth direction.
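A sketch of this inverse perspective transform under the standard pinhole model (the names K for the intrinsic matrix and R, t for the extrinsic parameters are conventional assumptions, not taken from the patent): with Z = 0 the projection collapses to a 3×3 homography, which can be inverted to recover the ground-plane position of a bounding-box bottom-edge pixel.

```python
import numpy as np

def ipm_ground_point(u, v, K, R, t):
    """Back-project pixel (u, v) onto the ground plane Z = 0.

    K: 3x3 intrinsic matrix; R, t: world-to-camera rotation and
    translation, i.e. X_cam = R @ X_world + t. Returns (X, Y) on the ground.
    """
    # With Z = 0:  s * [u, v, 1]^T = K @ [r1 r2 t] @ [X, Y, 1]^T,
    # so [X, Y, 1] is proportional to H^-1 @ [u, v, 1].
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    w = np.linalg.solve(H, np.array([u, v, 1.0]))
    return w[0] / w[2], w[1] / w[2]

# Round-trip check with an assumed camera mounted 1.5 m above the ground,
# world frame: X right, Y forward, Z up; camera frame: x right, y down, z forward.
K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
R = np.array([[1.0, 0, 0], [0, 0, -1.0], [0, 1.0, 0]])
t = np.array([0.0, 1.5, 0.0])
p = K @ (R @ np.array([3.0, 10.0, 0.0]) + t)  # project ground point (3, 10, 0)
u, v = p[0] / p[2], p[1] / p[2]
X, Y = ipm_ground_point(u, v, K, R, t)        # recovers (3.0, 10.0)
```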
Sub-step 12: displaying, in the stereoscopic view, the view frustum corresponding to the camera target object, and setting the boundaries of the view frustum according to the three-dimensional information.
After the three-dimensional information of the camera target object is determined, the camera target object can be displayed in the stereoscopic view by displaying the view frustum corresponding to the camera target object and setting the boundaries of the view frustum according to the three-dimensional information of the camera target object.
For example, upper and lower boundaries can be added to the view frustum to obtain a finite view frustum, whose upper and lower boundaries can respectively represent the actual size range of the detected object (for example, 1 meter and 5 meters); in the bird's-eye view, the view frustum can then appear specifically as a trapezoid (as shown in Fig. 2a).
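One possible construction of the finite-view-frustum trapezoid in the bird's-eye view (a simplified flat-ground sketch; the function name and the near/far distance limits are illustrative assumptions): the left and right pixel columns of the bounding box define two bearing rays from the camera centre, and clipping those rays at a near and a far distance yields the four trapezoid corners.

```python
import numpy as np

def bev_trapezoid(u_left, u_right, K, d_near, d_far):
    """Corners of the finite view frustum in the bird's-eye view.

    u_left / u_right: pixel columns of the bounding-box edges; K: 3x3
    intrinsics; d_near / d_far: distance limits in metres. Returns four
    (lateral, forward) corners: near-left, near-right, far-right, far-left.
    """
    fx, cx = K[0, 0], K[0, 2]
    corners = []
    for d in (d_near, d_far):
        for u in (u_left, u_right):
            # bearing of the pixel column: lateral offset per metre of depth
            corners.append(((u - cx) / fx * d, d))
    nl, nr, fl, fr = corners
    return [nl, nr, fr, fl]

K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
# box edges at pixel columns 560 and 720, clipped between 8 m and 13 m
trap = bev_trapezoid(560, 720, K, d_near=8.0, d_far=13.0)
```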
In one example, the positions of the camera plan view and the bird's-eye view can be adjusted. As shown in Fig. 2a, the display can place the camera plan view on the left and the bird's-eye view on the right; as shown in Fig. 2b, the camera plan view can also be displayed on top with the bird's-eye view below. The present invention does not limit this. In the different display modes, it is possible to switch between the two perspectives of the camera plan view and the bird's-eye view, for example by using a shortcut key.
Step 104: marking the matching relationship between the camera target object and the radar target object according to the camera plan view and the stereoscopic view.
In a specific implementation, by displaying the camera target object and the radar target object in the camera plan view and the stereoscopic view, the matching relationship between the camera target object and the radar target object can then be marked according to the camera plan view and the stereoscopic view.
Regarding the camera signal: since the camera signal is two-dimensional plane information obtained by projecting three-dimensional objects through a perspective transformation, the camera signal has three-dimensional uncertainty (mainly in the depth direction). This uncertainty is caused by the imaging principle of the camera and is difficult to overcome, so the camera-detected target also has three-dimensional uncertainty.
Regarding the radar signal: limited by the radar technology on the market, radar-detected targets often contain noise; also limited by the radar hardware configuration, radar-detected targets are uncertain in height. The noise and uncertainty lead to ambiguity when marking the matching relationship between camera and radar targets, which in turn affects accurate marking. The ambiguity is specifically reflected in the following aspects:
(1) On the camera plane, radar targets generated by objects at different depths may be projected to the same position;
(2) The projection of a radar target in the height direction of the camera plane is inaccurate;
(3) The projection of a camera target in the depth direction of the bird's-eye view is inaccurate.
By simultaneously using the camera plan view and the bird's-eye view in the method for matching and marking the camera and radar perception results, the camera plan view and the bird's-eye view can reference and cross-check each other. The geometric relationships between three-dimensional objects can thus be better combined with their different projections in the camera plane and the bird's-eye view, presenting richer semantic and geometric relationships as a basis for marking, so that accurate matching-relationship marking can be achieved.
In the embodiments of the present invention, camera data and radar data are acquired; target detection is performed on the camera data to obtain a camera target object, and on the radar data to obtain a radar target object; the camera target object and the radar target object are then displayed in a camera plan view and in a stereoscopic view; and the matching relationship between the camera target object and the radar target object is marked according to the camera plan view and the stereoscopic view. This achieves accurate marking of the matching relationship between radar targets and camera targets. By using a marking method in which the camera plan view and the bird's-eye view reference and cross-check each other, the matching relationship between radar targets and camera targets can be accurately marked in a big-data context.
参照图3,示出了本发明一实施例提供的一种数据处理的方法的步骤流程图,具体可以包括如下步骤:Referring to FIG. 3, a flowchart of steps of a data processing method provided by an embodiment of the present invention is shown, which may specifically include the following steps:
步骤301,获取相机数据和雷达数据; Step 301, acquiring camera data and radar data;
在标注雷达目标和相机目标的过程中,可以通过获取相机数据和雷达数据,以分别针对相机数据和雷达数据进行目标检测,例如,可以采用相机和毫米波雷达分别采集相机信号和雷达信号,以进一步可以分别基于相机信号和雷达信号进行目标检测。In the process of labeling the radar target and the camera target, the camera data and the radar data can be acquired to perform target detection for the camera data and the radar data respectively. Further, target detection can be performed based on camera signals and radar signals, respectively.
步骤302,对所述相机数据进行目标检测,得到相机目标对象,并对所述雷达数据进行目标检测,得到雷达目标对象; Step 302, performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object;
在获取相机数据和雷达数据后,可以分别对相机数据和雷达数据进行目标检测,进而可以得到相机目标对象和雷达目标对象。After acquiring the camera data and the radar data, target detection can be performed on the camera data and the radar data respectively, and then the camera target object and the radar target object can be obtained.
步骤303,在相机平面图中,显示所述相机目标对象和所述雷达目标对象,并在立体图中,显示所述相机目标对象和所述雷达目标对象; Step 303, displaying the camera target object and the radar target object in a camera plan view, and displaying the camera target object and the radar target object in a stereoscopic view;
在得到相机目标对象和雷达目标对象后，可以将相机目标对象和雷达目标对象在相机平面图中进行显示，并可以将相机目标对象和雷达目标对象在立体图中进行显示，以进一步采用相机平面图和立体图，针对相机目标对象和雷达目标对象进行匹配标注。After the camera target objects and the radar target objects are obtained, they can be displayed in the camera plan view and in the stereoscopic view, so that the two views can then be used together to annotate the matches between the camera target objects and the radar target objects.
步骤304,根据所述相机平面图和所述立体图,标注所述相机目标对象和所述雷达目标对象的匹配关系; Step 304, marking the matching relationship between the camera target object and the radar target object according to the camera plan view and the three-dimensional view;
在具体实现中,通过将相机目标对象和雷达目标对象在相机平面图和立体图中进行显示,进而可以根据相机平面图和立体图,标注相机目标对象和雷达目标对象的匹配关系。In the specific implementation, the camera target object and the radar target object are displayed in the camera plan view and stereo map, and then the matching relationship between the camera target object and the radar target object can be marked according to the camera plan view and stereo map.
步骤305,对所述相机目标对象和所述雷达目标对象的匹配关系进行合理性检查。Step 305: Check the rationality of the matching relationship between the camera target object and the radar target object.
在具体实现中，可以对相机目标对象和雷达目标对象的匹配关系进行合理性检查（sanity check），例如，可以针对全部相机目标的标注完成后，进行标注合理性检查，该合理性检查的检查内容可以包括两部分：一是可以对标记结果判断是否存在自相矛盾；二是可以针对标注判断是否完整。In a specific implementation, a sanity check can be performed on the matching relationship between the camera target objects and the radar target objects. For example, after all camera targets have been annotated, an annotation sanity check can be performed. The check can cover two aspects: first, judging whether the annotation results contain any self-contradiction; second, judging whether the annotation is complete.
在一示例中，判断是否存在自相矛盾的方法可以为多种，可以对任意两个匹配上的相机目标进行检查，以判断其底边坐标顺序和雷达距离是否吻合；可以检测是否有同一个雷达目标被连接多次，且均被标注为确定关系以进行判断。In one example, self-contradictions can be detected in several ways: any two matched camera targets can be checked to judge whether the order of their bottom-edge coordinates is consistent with the order of their radar distances; and it can be checked whether the same radar target has been linked multiple times with every link marked as a confirmed relationship.
在又一示例中，判断标注是否完整，可以检测是否全部的相机目标均被标注以进行判断。In yet another example, to judge whether the annotation is complete, it can be checked whether every camera target has been annotated.
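A minimal sketch of this completeness check follows; the data layout (a list of camera target ids and a per-target annotation mapping) is a hypothetical assumption for illustration, not part of the embodiment:

```python
def find_unlabeled(camera_targets, annotations):
    """Return camera targets that carry no annotation yet.

    annotations: hypothetical mapping from camera target id to its label
    (matched radar ids, "ignored", or "no_radar"); every camera target
    must appear in it for the annotation to count as complete.
    """
    return [t for t in camera_targets if t not in annotations]
```

Running `find_unlabeled` after a labeling pass yields exactly the targets the annotator must revisit before the sanity check can pass.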
在本发明一实施例中,步骤305可以包括如下子步骤:In an embodiment of the present invention, step 305 may include the following sub-steps:
对于具有匹配关系的相机目标对象和雷达目标对象，确定所述相机目标对象在所述立体图中的底边坐标信息和所述雷达目标对象的雷达距离；根据所述底边坐标信息和所述雷达距离，对所述相机目标对象和所述雷达目标对象的匹配关系进行合理性检查。For a camera target object and a radar target object that have a matching relationship, determine the bottom-edge coordinate information of the camera target object in the stereoscopic view and the radar distance of the radar target object; then perform a sanity check on the matching relationship between the camera target object and the radar target object according to the bottom-edge coordinate information and the radar distance.
在合理性检查的过程中，对于具有匹配关系的相机目标对象和雷达目标对象，可以确定相机目标对象在立体图中的底边坐标信息和雷达目标对象的雷达距离，进而可以根据底边坐标信息和雷达距离，针对标记结果是否存在自相矛盾进行判断，以对相机目标对象和雷达目标对象的匹配关系进行合理性检查。During the sanity check, for camera target objects and radar target objects that have matching relationships, the bottom-edge coordinate information of each camera target object in the stereoscopic view and the radar distance of each radar target object can be determined; whether the annotation results contain any self-contradiction can then be judged from the bottom-edge coordinate information and the radar distances, thereby checking the plausibility of the matching relationships between the camera target objects and the radar target objects.
例如，可以对任意两个匹配上的相机目标进行检查，以判断其底边坐标顺序和雷达距离是否吻合，由于基于逆透视变换的原则，底边坐标数值越大，对应的目标雷达距离越近，则可以在任意两个目标不符合此标准时，算法可以发出错误警告。For example, any two matched camera targets can be checked to judge whether the order of their bottom-edge coordinates agrees with the order of their radar distances. By the principle of inverse perspective transformation, the larger the bottom-edge coordinate value, the nearer the corresponding target's radar distance; whenever any two targets violate this criterion, the algorithm can issue an error warning.
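The ordinal check above can be sketched as follows; representing each matched camera target as a (bottom-edge image row, radar distance) pair is an assumption made for this example:

```python
def ordinal_conflicts(matches):
    """Flag pairs of matched targets whose bottom-edge order contradicts
    their radar-distance order.

    matches: list of (bottom_v, radar_dist) pairs, one per matched camera
    target, where bottom_v is the bottom-edge row of the camera bounding box.
    Under inverse perspective, a larger bottom_v implies a nearer target,
    so bottom_v order and radar-distance order must be opposite; any pair
    ordered the same way in both is a self-contradiction.
    """
    conflicts = []
    for i in range(len(matches)):
        for j in range(i + 1, len(matches)):
            (v_i, d_i), (v_j, d_j) = matches[i], matches[j]
            if (v_i - v_j) * (d_i - d_j) > 0:  # same ordering -> contradiction
                conflicts.append((i, j))
    return conflicts
```

An empty result means the annotation is ordinally consistent; each returned index pair identifies two matches for which the tool would raise an error warning.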
在本发明一实施例中,步骤305可以包括如下子步骤:In an embodiment of the present invention, step 305 may include the following sub-steps:
确定被标注与同一雷达目标对象具有匹配关系的相机目标对象数量;根据所述相机目标对象数量,对所述相机目标对象和所述雷达目标对象的匹配关系进行合理性检查。Determine the number of camera target objects marked with a matching relationship with the same radar target object; according to the number of camera target objects, perform a rationality check on the matching relationship between the camera target object and the radar target object.
在合理性检查的过程中，可以通过确定被标注与同一雷达目标对象具有匹配关系的相机目标对象数量，进而可以根据相机目标对象数量，针对标注判断是否完整，以对相机目标对象和雷达目标对象的匹配关系进行合理性检查。During the sanity check, the number of camera target objects annotated as matching the same radar target object can be determined; based on this number, the annotation can be judged for completeness and consistency, thereby checking the plausibility of the matching relationships between the camera target objects and the radar target objects.
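One way to count how many camera targets are linked to the same radar target is sketched below; the annotation layout of (camera id, radar id, confirmed) triples is hypothetical and chosen only for this illustration:

```python
from collections import Counter

def duplicated_confirmed(annotations):
    """Return radar ids that are linked to more than one camera target
    with every such link marked as confirmed.

    annotations: hypothetical list of (camera_id, radar_id, confirmed)
    triples produced by the labeling tool.
    """
    counts = Counter(r for _, r, confirmed in annotations if confirmed)
    return sorted(r for r, n in counts.items() if n > 1)
```

A non-empty result points at radar targets whose multiple confirmed links the annotator should re-examine.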
由于相机目标存在的三维不确定性和雷达目标的噪声问题，使得精准标注较难，而合理性检查可以是实现精准标注的关键步骤，通过根据逆透视变换的标注合理性检查，在合理性检查中可以充分利用顺序（ordinal）等几何约束条件，以发现标注中出现的错误。Because camera targets carry three-dimensional uncertainty and radar targets are noisy, accurate annotation is difficult, and the sanity check can be a key step in achieving it. Through an annotation sanity check based on inverse perspective transformation, geometric constraints such as ordinal relationships can be fully exploited to uncover errors in the annotation.
在本发明实施例中，通过获取相机数据和雷达数据，对相机数据进行目标检测，得到相机目标对象，并对雷达数据进行目标检测，得到雷达目标对象，然后在相机平面图中，显示相机目标对象和雷达目标对象，并在立体图中，显示相机目标对象和雷达目标对象，进而根据相机平面图和立体图，标注相机目标对象和雷达目标对象的匹配关系，并对相机目标对象和雷达目标对象的匹配关系进行合理性检查，实现了准确标注出雷达目标和相机目标的匹配关系，通过采用相机平面图和鸟瞰图相互参考、相互对照的标注方法，并进行标注合理性检查，能够针对大数据背景下雷达目标和相机目标进行精确地匹配关系标注。In the embodiment of the present invention, camera data and radar data are acquired; target detection is performed on the camera data to obtain camera target objects, and on the radar data to obtain radar target objects; the camera target objects and the radar target objects are then displayed both in the camera plan view and in the stereoscopic view; the matching relationship between the camera target objects and the radar target objects is annotated based on the two views, and a sanity check is performed on that matching relationship. This accurately annotates the matching relationships between radar targets and camera targets: by using the camera plan view and the bird's-eye view as mutual references that are cross-checked against each other, together with the annotation sanity check, matching relationships between radar targets and camera targets can be annotated precisely in a big-data setting.
参照图4,示出了本发明一实施例提供的另一种数据处理的方法的步骤流程图,具体可以包括如下步骤:Referring to FIG. 4, a flowchart of steps of another data processing method provided by an embodiment of the present invention is shown, which may specifically include the following steps:
步骤401,获取相机数据和雷达数据; Step 401, acquiring camera data and radar data;
在标注雷达目标和相机目标的过程中，可以通过获取相机数据和雷达数据，以分别针对相机数据和雷达数据进行目标检测，例如，可以采用相机和毫米波雷达分别采集相机信号和雷达信号，以进一步可以分别基于相机信号和雷达信号进行目标检测。In the process of annotating radar targets and camera targets, camera data and radar data can be acquired so that target detection can be performed on each of them separately. For example, a camera and a millimeter-wave radar can be used to collect camera signals and radar signals respectively, and target detection can then be performed based on the camera signals and the radar signals, respectively.
步骤402,对所述相机数据和所述雷达数据进行对齐; Step 402, aligning the camera data and the radar data;
在得到相机数据和雷达数据后,可以针对相机数据和雷达数据进行时间对齐和空间对齐处理,以进一步采用对齐校准后的数据进行后续处理。例如,可以将雷达检测结果向相机平面进行空间对齐和时间对齐。After the camera data and radar data are obtained, time alignment and space alignment processing can be performed on the camera data and the radar data, so as to further use the aligned and calibrated data for subsequent processing. For example, radar detection results can be spatially and temporally aligned to the camera plane.
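A minimal sketch of the two alignment steps, under the assumption of a calibrated pinhole camera model (the extrinsic rotation R, translation t, and intrinsic matrix K below are illustrative placeholders; real values must come from the actual sensor calibration):

```python
import numpy as np

def project_radar_to_image(radar_xyz, R, t, K):
    """Spatial alignment sketch: map radar detections (N x 3, radar frame)
    into pixel coordinates via an assumed extrinsic (R, t) and pinhole
    intrinsic K, with the camera z-axis pointing forward."""
    pts_cam = radar_xyz @ R.T + t        # radar frame -> camera frame
    homo = pts_cam @ K.T                 # pinhole projection
    return homo[:, :2] / homo[:, 2:3]    # divide by depth to get pixels

def nearest_camera_frame(radar_stamp, camera_stamps):
    """Time alignment sketch: index of the camera frame closest in time
    to a radar measurement timestamp."""
    return int(np.argmin(np.abs(np.asarray(camera_stamps) - radar_stamp)))
```

With an identity extrinsic, a radar point 10 m straight ahead projects to the principal point of the image, and each radar sweep is paired with whichever camera frame minimizes the timestamp gap.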
步骤403,对所述相机数据进行目标检测,得到相机目标对象,并对所述雷达数据进行目标检测,得到雷达目标对象; Step 403, performing target detection on the camera data to obtain a camera target object, and performing target detection on the radar data to obtain a radar target object;
在获取相机数据和雷达数据后,可以分别对相机数据和雷达数据进行目标检测,进而可以得到相机目标对象和雷达目标对象。After acquiring the camera data and the radar data, target detection can be performed on the camera data and the radar data respectively, and then the camera target object and the radar target object can be obtained.
步骤404,在相机平面图中,显示所述相机目标对象和所述雷达目标对象,并在立体图中,显示所述相机目标对象和所述雷达目标对象; Step 404, displaying the camera target object and the radar target object in a camera plan view, and displaying the camera target object and the radar target object in a three-dimensional view;
在得到相机目标对象和雷达目标对象后，可以将相机目标对象和雷达目标对象在相机平面图中进行显示，并可以将相机目标对象和雷达目标对象在立体图中进行显示，以进一步采用相机平面图和立体图，针对相机目标对象和雷达目标对象进行匹配标注。After the camera target objects and the radar target objects are obtained, they can be displayed in the camera plan view and in the stereoscopic view, so that the two views can then be used together to annotate the matches between the camera target objects and the radar target objects.
步骤405,判断所述相机目标对象在所述相机平面图中的显示是否准确; Step 405, judging whether the display of the camera target object in the camera plan view is accurate;
在相机平面图中显示相机目标对象和雷达目标对象,并在立体图中显示相机目标对象和雷达目标对象后,可以判断相机目标对象在相机平面图中的显示是否准确。After displaying the camera target object and the radar target object in the camera plan view, and displaying the camera target object and the radar target object in the stereo view, it can be judged whether the display of the camera target object in the camera plan view is accurate.
步骤406,在判定所述相机目标对象在所述相机平面图中显示不准确时,对所述相机目标对象进行调整,并更新所述相机目标对象在所述相机平面图和所述立体图中的显示; Step 406, when it is determined that the camera target object is displayed inaccurately in the camera plan view, adjust the camera target object, and update the display of the camera target object in the camera plan view and the stereoscopic view;
针对相机目标对象在相机平面图中的显示判断是否准确的过程中，可以在判定相机目标对象在相机平面图中显示不准确时，对相机目标对象进行调整，并可以更新相机目标对象在相机平面图和立体图中的显示。通过逐次标注每个相机目标，进而可以选出相机目标对应的可以匹配的雷达点。In the process of judging whether the display of a camera target object in the camera plan view is accurate, the camera target object can be adjusted when its display in the camera plan view is judged to be inaccurate, and its display in both the camera plan view and the stereoscopic view can be updated. By annotating each camera target one by one, the radar points that can match each camera target can then be selected.
例如，可以针对相机目标检查是否准确，在不准确时，可以选择忽略该目标或者可以选择进行手动调整。在选择调整目标时，可以对鸟瞰图中相机目标对象对应的视景体也进行相应调整。For example, each camera target can be checked for accuracy; when it is inaccurate, the target can either be ignored or adjusted manually. When a target is selected for adjustment, the view frustum corresponding to the camera target object in the bird's-eye view can also be adjusted accordingly.
在一示例中,可以针对标注工具进行相关配置,例如,是否允许标注员在相机平面图中针对相机目标对象进行边界框的调整。In an example, related configurations can be made for the annotation tool, for example, whether to allow the annotator to adjust the bounding box of the camera target object in the camera plan view.
在又一示例中，当相机目标不准确且选择不进行调整时，或者目标被严重遮挡，或者目标位置较远时，可以忽略该目标。针对每一个可以被匹配的雷达点，可以选择基于该雷达点的相机雷达目标匹配关系是否确定。In yet another example, a target can be ignored when the camera target is inaccurate and no adjustment is chosen, when the target is heavily occluded, or when the target is far away. For each radar point that can be matched, whether the camera-radar target matching relationship based on that radar point is confirmed can be selected.
步骤407,根据所述相机平面图和所述立体图,标注所述相机目标对象和所述雷达目标对象的匹配关系。Step 407: Mark the matching relationship between the camera target object and the radar target object according to the camera plan view and the three-dimensional view.
在具体实现中,通过将相机目标对象和雷达目标对象在相机平面图和立体图中进行显示,进而可以根据相机平面图和立体图,标注相机目标对象和雷达目标对象的匹配关系。In the specific implementation, the camera target object and the radar target object are displayed in the camera plan view and stereo map, and then the matching relationship between the camera target object and the radar target object can be marked according to the camera plan view and stereo map.
为了使本领域技术人员能够更好地理解上述步骤,以下结合图5对本发明实施例加以示例性说明,但应当理解的是,本发明实施例并不限于此。In order to enable those skilled in the art to better understand the above steps, the embodiment of the present invention is exemplified below with reference to FIG. 5 , but it should be understood that the embodiment of the present invention is not limited thereto.
1、毫米波雷达数据采集;1. Millimeter wave radar data collection;
2、基于毫米波雷达的目标检测;2. Target detection based on millimeter wave radar;
3、雷达检测结果向相机平面的空间对齐和时间对齐;3. Spatial and temporal alignment of radar detection results to the camera plane;
4、相机数据采集;4. Camera data collection;
5、基于相机的目标检测;5. Camera-based target detection;
6、相机被检测目标的三维信息估计;6. Three-dimensional information estimation of the detected target of the camera;
7、雷达检测目标在相机平面的显示:采用雷达点表示法;7. The display of radar detection target on the camera plane: using radar point representation;
8、相机检测目标在相机平面的显示:采用边界框表示法;8. The display of the camera detection target on the camera plane: using the bounding box representation;
9、雷达检测目标在鸟瞰图下的显示:采用雷达点和雷达速度箭头表示法;9. The display of the radar detection target under the bird's-eye view: using the radar point and radar speed arrow representation;
10、相机检测目标在鸟瞰图下的显示:采用物体点和视景体表示法;10. The display of the camera detection target under the bird's-eye view: using object point and visual volume representation;
11、逐次考察标注每一个相机检测目标;11. Check and label each camera detection target one by one;
12、该相机目标是否不准确？在不准确时，对该相机目标进行调整，并且更新调整以后的相机目标的显示；在准确时，进一步检测该相机目标是否可以忽略？可以忽略时，忽略该目标，在不能忽略时，进行下一步检测；12. Is the camera target inaccurate? If it is inaccurate, adjust the camera target and update the display of the adjusted camera target; if it is accurate, further check whether the camera target can be ignored: if it can, ignore the target; if it cannot, proceed to the next check;
13、该相机目标是否没有雷达点?13. Does the camera target have no radar point?
14、在没有雷达点时,检测是否所有的相机目标都被标注?14. When there are no radar points, check if all camera targets are labeled?
15、若否,则可以隐藏该相机检测目标已经被匹配确定的雷达点;15. If not, you can hide the radar point that the camera detects that the target has been matched and determined;
16、考察下一个相机检测目标;16. Investigate the next camera detection target;
17、在相机目标有雷达点时,逐次标出跟该相机目标匹配的雷达点,并且对每一个匹配点选择是否确定;17. When the camera target has radar points, mark the radar points that match the camera target one by one, and select whether to confirm each matching point;
18、标注结果是否满足合理性检查？在不满足时，返回逐次考察标注每一个相机检测目标；在满足合理性检查时，结束。18. Does the annotation result pass the sanity check? If not, return to examining and annotating each camera detection target one by one; if the sanity check is satisfied, end.
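The outer loop of the flow above (annotate every camera target, then repeat until the sanity check passes) can be sketched as follows; `get_candidates`, `annotate`, and `sanity_ok` are hypothetical stand-ins for the interactive labeling tool, not part of the embodiment:

```python
def run_labeling_pass(camera_targets, get_candidates, annotate, sanity_ok):
    """One rendition of the loop in Fig. 5.

    get_candidates(target) lists the radar points that could match a camera
    target; annotate(target, points) returns the operator's decision (matched
    radar ids, "ignored", or "no_radar"); sanity_ok(annotations) implements
    the rationality check of step 18. The pass repeats until the check holds.
    """
    while True:
        annotations = {}
        for target in camera_targets:
            annotations[target] = annotate(target, get_candidates(target))
        if sanity_ok(annotations):
            return annotations
```

In a real tool the callbacks would be backed by the UI and the ordinal/completeness checks; here they can be stubbed to exercise the control flow.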
在本发明实施例中，通过获取相机数据和雷达数据，对相机数据和雷达数据进行对齐，然后对相机数据进行目标检测，得到相机目标对象，并对雷达数据进行目标检测，得到雷达目标对象，在相机平面图中，显示相机目标对象和雷达目标对象，并在立体图中，显示相机目标对象和雷达目标对象，进而判断相机目标对象在相机平面图中的显示是否准确，在判定相机目标对象在相机平面图中显示不准确时，对相机目标对象进行调整，并更新相机目标对象在相机平面图和立体图中的显示，根据相机平面图和立体图，标注相机目标对象和雷达目标对象的匹配关系，并对相机目标对象和雷达目标对象的匹配关系进行合理性检查，实现了准确标注出雷达目标和相机目标的匹配关系，通过采用相机平面图和鸟瞰图相互参考、相互对照的标注方法，能够针对大数据背景下雷达目标和相机目标进行精确地匹配关系标注。In the embodiment of the present invention, camera data and radar data are acquired and aligned; target detection is performed on the camera data to obtain camera target objects, and on the radar data to obtain radar target objects; the camera target objects and the radar target objects are displayed in the camera plan view and in the stereoscopic view; whether the display of a camera target object in the camera plan view is accurate is then judged, and when it is inaccurate, the camera target object is adjusted and its display in both views is updated; the matching relationship between the camera target objects and the radar target objects is annotated based on the camera plan view and the stereoscopic view, and a sanity check is performed on that matching relationship. This accurately annotates the matching relationships between radar targets and camera targets: by using the camera plan view and the bird's-eye view as mutual references that are cross-checked against each other, matching relationships between radar targets and camera targets can be annotated precisely in a big-data setting.
需要说明的是，对于方法实施例，为了简单描述，故将其都表述为一系列的动作组合，但是本领域技术人员应该知悉，本发明实施例并不受所描述的动作顺序的限制，因为依据本发明实施例，某些步骤可以采用其他顺序或者同时进行。其次，本领域技术人员也应该知悉，说明书中所描述的实施例均属于优选实施例，所涉及的动作并不一定是本发明实施例所必须的。It should be noted that, for simplicity of description, the method embodiments are all expressed as a series of action combinations; however, those skilled in the art should appreciate that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention, certain steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also appreciate that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
参照图6,示出了本发明一实施例提供的一种数据处理的装置的结构示意图,具体可以包括如下模块:Referring to FIG. 6 , a schematic structural diagram of a data processing apparatus provided by an embodiment of the present invention is shown, which may specifically include the following modules:
相机数据和雷达数据获取模块601,用于获取相机数据和雷达数据;a camera data and radar data acquisition module 601 for acquiring camera data and radar data;
目标检测模块602,用于对所述相机数据进行目标检测,得到相机目标对象,并对所述雷达数据进行目标检测,得到雷达目标对象;A target detection module 602, configured to perform target detection on the camera data to obtain a camera target object, and perform target detection on the radar data to obtain a radar target object;
相机目标对象和雷达目标对象显示模块603,用于在相机平面图中,显示所述相机目标对象和所述雷达目标对象,并在立体图中,显示所述相机目标对象和所述雷达目标对象;a camera target object and radar target object display module 603, configured to display the camera target object and the radar target object in a camera plan view, and display the camera target object and the radar target object in a stereoscopic view;
匹配关系标注模块604,用于根据所述相机平面图和所述立体图,标注所述相机目标对象和所述雷达目标对象的匹配关系。The matching relationship labeling module 604 is configured to label the matching relationship between the camera target object and the radar target object according to the camera plan view and the three-dimensional view.
在本发明一实施例中,所述相机目标对象和雷达目标对象显示模块603包括:In an embodiment of the present invention, the camera target object and radar target object display module 603 includes:
三维信息确定子模块,用于确定所述相机目标对象的三维信息;a three-dimensional information determination submodule, used for determining the three-dimensional information of the camera target object;
视景体的边界设置子模块,用于在立体图中,显示所述相机目标对象对应的视景体,并根据所述三维信息,设置所述视景体的边界。The visual volume boundary setting sub-module is configured to display the visual volume corresponding to the camera target object in a stereoscopic image, and set the boundary of the visual volume according to the three-dimensional information.
在本发明一实施例中,所述装置还包括:In an embodiment of the present invention, the device further includes:
合理性检查模块,用于对所述相机目标对象和所述雷达目标对象的匹配关系进行合理性检查。The rationality checking module is used for checking the rationality of the matching relationship between the camera target object and the radar target object.
在本发明一实施例中,所述合理性检查模块包括:In an embodiment of the present invention, the rationality checking module includes:
底边坐标信息和雷达距离确定子模块,用于对于具有匹配关系的相机目标对象和雷达目标对象,确定所述相机目标对象在所述立体图中的底边坐标信息和所述雷达目标对象的雷达距离;The bottom coordinate information and radar distance determination sub-module is used to determine the bottom coordinate information of the camera target object in the stereo image and the radar target object of the radar target object for the camera target object and the radar target object with a matching relationship distance;
第一合理性检查子模块,用于根据所述底边坐标信息和所述雷达距离,对所述相机目标对象和所述雷达目标对象的匹配关系进行合理性检查。The first rationality checking sub-module is configured to perform a rationality check on the matching relationship between the camera target object and the radar target object according to the base coordinate information and the radar distance.
在本发明一实施例中,所述合理性检查模块包括:In an embodiment of the present invention, the rationality checking module includes:
相机目标对象数量确定子模块,用于确定被标注与同一雷达目标对象具有匹配关系的相机目标对象数量;The sub-module for determining the number of camera target objects is used to determine the number of camera target objects marked with a matching relationship with the same radar target object;
第二合理性检查子模块,用于根据所述相机目标对象数量,对所述相机目标对象和所述雷达目标对象的匹配关系进行合理性检查。The second rationality checking sub-module is configured to perform a rationality check on the matching relationship between the camera target object and the radar target object according to the number of the camera target objects.
在本发明一实施例中,所述装置还包括:In an embodiment of the present invention, the device further includes:
判断模块,用于判断所述相机目标对象在所述相机平面图中的显示是否准确;a judgment module for judging whether the display of the camera target object in the camera plan view is accurate;
相机目标对象调整模块,用于在判定所述相机目标对象在所述相机平面图中显示不准确时,对所述相机目标对象进行调整,并更新所述相机目标对象在所述相机平面图和所述立体图中的显示。A camera target object adjustment module, configured to adjust the camera target object when it is determined that the camera target object is displayed inaccurately in the camera plan view, and update the camera target object in the camera plan view and the camera target object Display in the stereogram.
在本发明一实施例中,所述装置还包括:In an embodiment of the present invention, the device further includes:
对齐模块,用于对所述相机数据和所述雷达数据进行对齐。an alignment module, configured to align the camera data and the radar data.
在本发明一实施例中,所述立体图包括鸟瞰图。In an embodiment of the present invention, the three-dimensional view includes a bird's-eye view.
在本发明实施例中，通过获取相机数据和雷达数据，对相机数据进行目标检测，得到相机目标对象，并对雷达数据进行目标检测，得到雷达目标对象，然后在相机平面图中，显示相机目标对象和雷达目标对象，并在立体图中，显示相机目标对象和雷达目标对象，进而根据相机平面图和立体图，标注相机目标对象和雷达目标对象的匹配关系，实现了准确标注出雷达目标和相机目标的匹配关系，通过采用相机平面图和鸟瞰图相互参考、相互对照的标注方法，能够针对大数据背景下雷达目标和相机目标进行精确地匹配关系标注。In the embodiment of the present invention, camera data and radar data are acquired; target detection is performed on the camera data to obtain camera target objects, and on the radar data to obtain radar target objects; the camera target objects and the radar target objects are then displayed both in the camera plan view and in the stereoscopic view, and the matching relationship between the camera target objects and the radar target objects is annotated based on the camera plan view and the stereoscopic view. This accurately annotates the matching relationships between radar targets and camera targets: by using the camera plan view and the bird's-eye view as mutual references that are cross-checked against each other, matching relationships between radar targets and camera targets can be annotated precisely in a big-data setting.
本发明一实施例还提供了一种服务器,可以包括处理器、存储器及存储在存储器上并能够在处理器上运行的计算机程序,计算机程序被处理器执行时实现如上数据处理的方法。An embodiment of the present invention also provides a server, which may include a processor, a memory, and a computer program stored in the memory and capable of running on the processor. When the computer program is executed by the processor, the above data processing method is implemented.
本发明一实施例还提供了一种计算机可读存储介质,计算机可读存储介质上存储计算机程序,计算机程序被处理器执行时实现如上数据处理的方法。An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the above data processing method is implemented.
对于装置实施例而言,由于其与方法实施例基本相似,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。As for the apparatus embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for related parts.
本说明书中的各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似的部分互相参见即可。The various embodiments in this specification are described in a progressive manner, and each embodiment focuses on the differences from other embodiments, and the same and similar parts between the various embodiments may be referred to each other.
本领域内的技术人员应明白,本发明实施例可提供为方法、装置、或计算机程序产品。因此,本发明实施例可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。It should be understood by those skilled in the art that the embodiments of the present invention may be provided as a method, an apparatus, or a computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media having computer-usable program code embodied therein, including but not limited to disk storage, CD-ROM, optical storage, and the like.
本发明实施例是参照根据本发明实施例的方法、终端设备（系统）、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理终端设备的处理器以产生一个机器，使得通过计算机或其他可编程数据处理终端设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理终端设备以特定方式工作的计算机可读存储器中，使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品，该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means, which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
这些计算机程序指令也可装载到计算机或其他可编程数据处理终端设备上，使得在计算机或其他可编程终端设备上执行一系列操作步骤以产生计算机实现的处理，从而在计算机或其他可编程终端设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。These computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, such that a series of operation steps are executed on the computer or other programmable terminal device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
尽管已描述了本发明实施例的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例做出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本发明实施例范围的所有变更和修改。Although preferred embodiments of the embodiments of the present invention have been described, additional changes and modifications to these embodiments may be made by those skilled in the art once the basic inventive concepts are known. Therefore, the appended claims are intended to be construed to include the preferred embodiments as well as all changes and modifications that fall within the scope of the embodiments of the present invention.
最后，还需要说明的是，在本文中，诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来，而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且，术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含，从而使得包括一系列要素的过程、方法、物品或者终端设备不仅包括那些要素，而且还包括没有明确列出的其他要素，或者是还包括为这种过程、方法、物品或者终端设备所固有的要素。在没有更多限制的情况下，由语句“包括一个……”限定的要素，并不排除在包括所述要素的过程、方法、物品或者终端设备中还存在另外的相同要素。Finally, it should also be noted that, herein, relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, such that a process, method, article, or terminal device comprising a list of elements includes not only those elements, but also other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that includes the element.
以上对所提供的一种数据处理的方法和装置，进行了详细介绍，本文中应用了具体个例对本发明的原理及实施方式进行了阐述，以上实施例的说明只是用于帮助理解本发明的方法及其核心思想；同时，对于本领域的一般技术人员，依据本发明的思想，在具体实施方式及应用范围上均会有改变之处，综上所述，本说明书内容不应理解为对本发明的限制。The data processing method and apparatus provided above have been introduced in detail. Specific examples are used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (11)

  1. 一种数据处理的方法,其特征在于,所述方法包括:A method for data processing, characterized in that the method comprises:
    获取相机数据和雷达数据;Obtain camera data and radar data;
    对所述相机数据进行目标检测,得到相机目标对象,并对所述雷达数据进行目标检测,得到雷达目标对象;Perform target detection on the camera data to obtain a camera target object, and perform target detection on the radar data to obtain a radar target object;
    在相机平面图中,显示所述相机目标对象和所述雷达目标对象,并在立体图中,显示所述相机目标对象和所述雷达目标对象;In the camera plan view, the camera target object and the radar target object are displayed, and in the stereoscopic view, the camera target object and the radar target object are displayed;
    根据所述相机平面图和所述立体图,标注所述相机目标对象和所述雷达目标对象的匹配关系。According to the camera plan view and the three-dimensional view, the matching relationship between the camera target object and the radar target object is marked.
  2. The method according to claim 1, characterized in that the displaying the camera target object in a stereoscopic view comprises:
    determining three-dimensional information of the camera target object;
    displaying, in the stereoscopic view, a viewing volume corresponding to the camera target object, and setting a boundary of the viewing volume according to the three-dimensional information.
  3. The method according to claim 1 or 2, characterized in that, after the annotating the matching relationship between the camera target object and the radar target object according to the camera plan view and the stereoscopic view, the method further comprises:
    performing a plausibility check on the matching relationship between the camera target object and the radar target object.
  4. The method according to claim 3, characterized in that the performing a plausibility check on the matching relationship between the camera target object and the radar target object comprises:
    for a camera target object and a radar target object having a matching relationship, determining bottom-edge coordinate information of the camera target object in the stereoscopic view and a radar distance of the radar target object;
    performing a plausibility check on the matching relationship between the camera target object and the radar target object according to the bottom-edge coordinate information and the radar distance.
  5. The method according to claim 3, characterized in that the performing a plausibility check on the matching relationship between the camera target object and the radar target object comprises:
    determining the number of camera target objects annotated as having a matching relationship with a same radar target object;
    performing a plausibility check on the matching relationship between the camera target objects and the radar target object according to the number of camera target objects.
  6. The method according to claim 1, characterized in that, before the annotating the matching relationship between the camera target object and the radar target object according to the camera plan view and the stereoscopic view, the method further comprises:
    judging whether the display of the camera target object in the camera plan view is accurate;
    when it is determined that the display of the camera target object in the camera plan view is inaccurate, adjusting the camera target object, and updating the display of the camera target object in the camera plan view and the stereoscopic view.
  7. The method according to claim 1, characterized in that, before the performing target detection on the camera data to obtain a camera target object and performing target detection on the radar data to obtain a radar target object, the method further comprises:
    aligning the camera data and the radar data.
  8. The method according to claim 1, characterized in that the stereoscopic view comprises a bird's-eye view.
  9. A data processing apparatus, characterized in that the apparatus comprises:
    a camera data and radar data acquisition module, configured to acquire camera data and radar data;
    a target detection module, configured to perform target detection on the camera data to obtain a camera target object, and perform target detection on the radar data to obtain a radar target object;
    a camera target object and radar target object display module, configured to display the camera target object and the radar target object in a camera plan view, and display the camera target object and the radar target object in a stereoscopic view;
    a matching relationship annotation module, configured to annotate a matching relationship between the camera target object and the radar target object according to the camera plan view and the stereoscopic view.
  10. A server, characterized by comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the data processing method according to any one of claims 1 to 8.
  11. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the data processing method according to any one of claims 1 to 8.
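The bottom-edge plausibility check of claim 4 can be illustrated with a short sketch. This is not part of the application: it assumes a pinhole camera mounted at a known height over a flat ground plane, so that the image row of a camera target's bounding-box bottom edge implies a ground distance that can be compared against the radar range. The function names and the default intrinsics (`f_y`, `c_y`), camera height, and tolerance are all hypothetical.

```python
import math

def bottom_edge_distance(v_bottom, f_y, c_y, cam_height):
    # Ground distance implied by the image row (v_bottom) of a bounding
    # box's bottom edge, under a flat-ground, pinhole-camera assumption:
    # distance = cam_height * f_y / (v_bottom - c_y).
    dv = v_bottom - c_y
    if dv <= 0:
        # Bottom edge at or above the horizon row: no finite ground distance.
        return float("inf")
    return cam_height * f_y / dv

def check_match_plausible(v_bottom, radar_range, f_y=1000.0, c_y=540.0,
                          cam_height=1.5, tolerance=0.3):
    # Flag a camera/radar match as implausible when the camera-implied
    # distance and the radar range disagree by more than `tolerance`
    # (relative error). All defaults are illustrative, not calibrated.
    cam_dist = bottom_edge_distance(v_bottom, f_y, c_y, cam_height)
    if math.isinf(cam_dist):
        return False
    return abs(cam_dist - radar_range) / radar_range <= tolerance
```

With the defaults above, a bottom edge at image row 640 implies a distance of 1.5 × 1000 / (640 − 540) = 15 m, which is consistent with a 15 m radar range but not with a 25 m one.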
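The duplicate-match check of claim 5 reduces to counting how many camera targets were annotated against each radar target. A minimal sketch, assuming annotations are recorded as (camera_id, radar_id) pairs and that one radar target should normally match at most one camera target — both assumptions are illustrative, not taken from the specification:

```python
from collections import Counter

def find_overmatched_radar_targets(matches, max_cameras_per_radar=1):
    # matches: iterable of (camera_id, radar_id) annotation pairs.
    # Returns {radar_id: count} for every radar target matched to more
    # camera targets than max_cameras_per_radar; such matches need review.
    counts = Counter(radar_id for _, radar_id in matches)
    return {rid: n for rid, n in counts.items() if n > max_cameras_per_radar}
```

Any radar ID that appears in the returned dictionary has been annotated against several camera targets, which the plausibility check of claim 5 would flag for re-inspection.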
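The alignment step of claim 7 is not spelled out in the claims; one common realization for camera and radar streams captured at different rates is nearest-timestamp pairing. The sketch below assumes both streams carry sorted timestamps in seconds and uses a hypothetical 50 ms pairing window:

```python
import bisect

def align_frames(camera_ts, radar_ts, max_gap=0.05):
    # Pair each camera timestamp with its nearest radar timestamp
    # (both lists sorted ascending); drop pairs whose gap exceeds max_gap.
    pairs = []
    for t in camera_ts:
        i = bisect.bisect_left(radar_ts, t)
        # Nearest neighbor is either the element at the insertion point
        # or the one just before it.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(radar_ts)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(radar_ts[k] - t))
        if abs(radar_ts[j] - t) <= max_gap:
            pairs.append((t, radar_ts[j]))
    return pairs
```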
PCT/CN2021/110287 2020-08-20 2021-08-03 Data processing method and apparatus WO2022037403A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010843958.2 2020-08-20
CN202010843958.2A CN112017241A (en) 2020-08-20 2020-08-20 Data processing method and device

Publications (1)

Publication Number Publication Date
WO2022037403A1 true WO2022037403A1 (en) 2022-02-24

Family

ID=73505226

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/110287 WO2022037403A1 (en) 2020-08-20 2021-08-03 Data processing method and apparatus

Country Status (2)

Country Link
CN (1) CN112017241A (en)
WO (1) WO2022037403A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017241A (en) * 2020-08-20 2020-12-01 广州小鹏汽车科技有限公司 Data processing method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107784038A (en) * 2016-08-31 2018-03-09 法乐第(北京)网络科技有限公司 A kind of mask method of sensing data
CN107798010A (en) * 2016-08-31 2018-03-13 法乐第(北京)网络科技有限公司 A kind of annotation equipment of sensing data
US20190370565A1 (en) * 2018-06-01 2019-12-05 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for extracting lane line and computer readable storage medium
CN110794405A (en) * 2019-10-18 2020-02-14 北京全路通信信号研究设计院集团有限公司 Target detection method and system based on camera and radar fusion
CN110942449A (en) * 2019-10-30 2020-03-31 华南理工大学 Vehicle detection method based on laser and vision fusion
CN112017241A (en) * 2020-08-20 2020-12-01 广州小鹏汽车科技有限公司 Data processing method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011036807A1 (en) * 2009-09-28 2011-03-31 トヨタ自動車株式会社 Object detection device and object detection method
CN107871129B (en) * 2016-09-27 2019-05-10 北京百度网讯科技有限公司 Method and apparatus for handling point cloud data
CN110197148B (en) * 2019-05-23 2020-12-01 北京三快在线科技有限公司 Target object labeling method and device, electronic equipment and storage medium
CN110598743A (en) * 2019-08-12 2019-12-20 北京三快在线科技有限公司 Target object labeling method and device
CN111324115B (en) * 2020-01-23 2023-09-19 北京百度网讯科技有限公司 Obstacle position detection fusion method, obstacle position detection fusion device, electronic equipment and storage medium
CN111353273B (en) * 2020-03-09 2023-09-26 深圳大学 Radar data labeling method, device, equipment and storage medium
CN111401194B (en) * 2020-03-10 2023-09-22 北京百度网讯科技有限公司 Data processing method and device for automatic driving vehicle

Also Published As

Publication number Publication date
CN112017241A (en) 2020-12-01

Similar Documents

Publication Publication Date Title
US11727593B1 (en) Automated data capture
US11632536B2 (en) Method and apparatus for generating three-dimensional (3D) road model
CN108362295B (en) Vehicle path guiding apparatus and method
US20200005447A1 (en) Computer aided rebar measurement and inspection system
EP3517997A1 (en) Method and system for detecting obstacles by autonomous vehicles in real-time
US8483442B2 (en) Measurement apparatus, measurement method, and feature identification apparatus
CN103604417B (en) The multi-view images bi-directional matching strategy that object space is information constrained
CN113240734B (en) Vehicle cross-position judging method, device, equipment and medium based on aerial view
Li et al. 3D triangulation based extrinsic calibration between a stereo vision system and a LIDAR
CN111536990A (en) On-line external reference mis-calibration detection between sensors
CN112241978A (en) Data processing method and device
WO2022037403A1 (en) Data processing method and apparatus
CN112686951A (en) Method, device, terminal and storage medium for determining robot position
CN104166995B (en) Harris-SIFT binocular vision positioning method based on horse pace measurement
Yang et al. Vision system of mobile robot combining binocular and depth cameras
CN101782386B (en) Non-visual geometric camera array video positioning method and system
CN114662587A (en) Three-dimensional target sensing method, device and system based on laser radar
CN117197775A (en) Object labeling method, object labeling device and computer readable storage medium
KR101154436B1 (en) Line matching method based on intersection context
WO2022237210A1 (en) Obstacle information generation
KR102568111B1 (en) Apparatus and method for detecting road edge
CN113014899B (en) Binocular image parallax determination method, device and system
Lee et al. Semi-automatic framework for traffic landmark annotation
CN112598736A (en) Map construction based visual positioning method and device
JP6546898B2 (en) Three-dimensional space identification apparatus, method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21857498

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.07.2023); ADDRESS OF THE ADDRESSEE COULD NOT BE ESTABLISHED

122 Ep: pct application non-entry in european phase

Ref document number: 21857498

Country of ref document: EP

Kind code of ref document: A1