CN115856829B - Image data identification method and system for radar three-dimensional data conversion


Info

Publication number
CN115856829B
Authority
CN
China
Prior art keywords
data
point cloud
coordinates
converted
radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310065712.0A
Other languages
Chinese (zh)
Other versions
CN115856829A (en)
Inventor
李峰
王意
许西论
聂春梅
都丰林
李瑞东
董毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANDONG MATRIX SOFTWARE ENGINEERING CO LTD
Original Assignee
SHANDONG MATRIX SOFTWARE ENGINEERING CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANDONG MATRIX SOFTWARE ENGINEERING CO LTD
Priority to CN202310065712.0A
Publication of CN115856829A
Application granted
Publication of CN115856829B
Legal status: Active (current)
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to the technical field of image recognition, and in particular to an image data identification method and system for radar three-dimensional data conversion. The method comprises the steps of acquiring radar point cloud data and the data types to be converted; selecting a key information area from the radar point cloud data; performing standardization processing on the point cloud data in the key information area to obtain the point cloud coordinates in the area; determining, according to the data types to be converted, the coordinates related to the data types to be converted in the point cloud coordinates; and converting the point cloud coordinates into pixel coordinates according to the number of coordinates related to the data types to be converted, so as to obtain image data. According to the technical scheme, various efficient and accurate two-dimensional image recognition algorithms can be effectively applied to three-dimensional targets for target discrimination and identification.

Description

Image data identification method and system for radar three-dimensional data conversion
Technical Field
The invention relates to the technical field of image recognition, in particular to an image data recognition method and system for radar three-dimensional data conversion.
Background
With the increasing popularity of lidar technology, radar devices are used in more and more industries and application scenarios to locate, track and measure targets. However, because of certain inherent limitations of point cloud algorithms and radar devices, such as insufficient accuracy (the accuracy of a radar device is generally about ±2 cm), it is difficult to complete the identification of some targets, and in practical applications a camera device often has to be used in combination, which clearly increases the application cost.
Secondly, conventional camera equipment usually needs suitable visible-light intensity to capture the target, so that under some extreme conditions a supplementary lighting device has to be added to keep the system working normally. If the site does not allow a supplementary lighting device to be installed, or if its angle or intensity deviates, capture of the target object may fail, so the whole system can hardly work as expected.
In the prior art, lidar point cloud information can be converted into ordinary image data, but there is no scheme that further analyses and identifies the target by means of image recognition technology. In addition, in the process of converting a point cloud into pixel data, besides ensuring that the effective identification area is converted, the invalid area at the edge of the point cloud needs to be filled with a blank color rather than continuing to store other point clouds. Existing image data identification for radar three-dimensional data conversion suffers from low accuracy and low efficiency, so an image data identification method and system for radar three-dimensional data conversion are needed.
Disclosure of Invention
The invention provides an image data identification method and system for radar three-dimensional data conversion, aiming to solve the problems of low accuracy and low efficiency in image data identification for radar three-dimensional data conversion.
In a first aspect, the present invention provides an image data identification method for radar three-dimensional data conversion, which adopts the following technical scheme:
an image data recognition method of radar three-dimensional data conversion, comprising:
S1, acquiring radar point cloud data and the data types to be converted;
S2, selecting a key information area from the radar point cloud data;
S3, performing standardization processing on the point cloud data in the key information area to obtain the point cloud coordinates in the area;
S4, determining, according to the data types to be converted, the coordinates related to the data types to be converted in the point cloud coordinates;
S5, converting the point cloud coordinates into pixel coordinates according to the number of coordinates related to the data types to be converted, so as to obtain image data.
Further, in the acquired radar point cloud data and data types to be converted, the data types to be converted include target distance data detected by the radar, radar wave reflection intensity data and heat radiation data.
Further, the selecting a key information area in the radar point cloud data includes selecting an area needing image conversion according to an image generated by the radar point cloud data as the key information area.
Further, converting the point cloud coordinates into pixel coordinates according to the number of coordinates related to the data type to be converted includes: when the number of coordinates related to the data type to be converted is 1, making the coordinate value related to the data type to be converted in the point cloud coordinates correspond to any one color value in RGB and performing numerical conversion.
Further, converting the point cloud coordinates into pixel coordinates according to the number of coordinates related to the data type to be converted includes: when the number of coordinates related to the data type to be converted is 2, dividing the RGB values into the three combinations RG, RB and GB, making one combination correspond to the coordinate values related to the data type to be converted in the point cloud coordinates, and performing numerical conversion.
Further, converting the point cloud coordinates into pixel coordinates according to the number of coordinates related to the data type to be converted includes: when the number of coordinates related to the data type to be converted is 3, making the three color values in RGB correspond respectively to the coordinate values related to the data type to be converted in the point cloud coordinates and performing numerical conversion.
Further, performing numerical conversion includes converting the coordinate values in the point cloud coordinates and the RGB color values in equal proportion.
In a second aspect, an image data recognition system for radar three-dimensional data conversion includes:
the data acquisition module is configured to acquire radar point cloud data and the data types to be converted;
the selecting module is configured to select a key information area from the radar point cloud data;
the computing module is configured to perform standardized processing on the point cloud data in the key information area to obtain point cloud coordinates in the area; according to the data type to be converted, determining coordinates related to the data type to be converted in the point cloud coordinates;
and the conversion module is configured to convert the point cloud coordinates into pixel coordinates according to the number of coordinates related to the data type to be converted, so as to obtain image data.
In a third aspect, the present invention provides a computer-readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to perform the image data recognition method of radar three-dimensional data conversion.
In a fourth aspect, the present invention provides a terminal device, including a processor and a computer readable storage medium, where the processor is configured to implement instructions; the computer readable storage medium is for storing a plurality of instructions adapted to be loaded by a processor and to perform the method of image data recognition for radar three-dimensional data conversion.
In summary, the invention has the following beneficial technical effects:
according to the method for recognizing the image data through the three-dimensional data conversion of the radar, the radar point cloud data image is converted into the color image, the difference of the height, the position and the like of the target recognition object is converted into the difference of the image colors, and finally the output image data is a color region set capable of completely showing the difference information.
According to the technical scheme, various efficient and accurate two-dimensional image recognition algorithms can be effectively utilized on the three-dimensional target to carry out target discrimination and identification, and recognition capability can be further enhanced through inherent geometric features of three-dimensional textures, for example, the problem of recognizing photos and faces is solved.
Drawings
Fig. 1 is a flowchart of an image data recognition method for radar three-dimensional data conversion according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Example 1
Referring to fig. 1, an image data identification method for radar three-dimensional data conversion according to this embodiment includes:
acquiring radar point cloud data and the data types to be converted;
selecting a key information area from the radar point cloud data;
carrying out standardization processing on the point cloud data in the key information area to obtain point cloud coordinates in the area;
according to the data type to be converted, determining coordinates related to the data type to be converted in the point cloud coordinates;
and converting the point cloud coordinates into pixel coordinates according to the number of coordinates related to the data types to be converted, and obtaining image data.
The method specifically comprises the following steps:
Firstly, radar point cloud data and the data types to be converted are acquired. The point cloud data are acquired by the radar device; the radar point cloud data are a set of float values in the form (x, y, z), and a radar point cloud image of the target object is obtained from the point cloud data. The data types to be converted are then determined, including target distance data detected by the lidar, radar wave reflection intensity data, heat radiation data and the like, the specific types being selected according to actual requirements.
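As an illustration only (not part of the disclosed embodiment), the following Python sketch shows how point cloud data in the (x, y, z) float form described above might be loaded; the CSV file format and the function name are assumptions.

```python
# A minimal sketch, assuming the point cloud is exported as CSV rows of x, y, z floats.
import csv

def load_point_cloud(path: str) -> list[tuple[float, float, float]]:
    """Return the radar point cloud as a list of (x, y, z) float triples."""
    points = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            x, y, z = (float(v) for v in row[:3])
            points.append((x, y, z))
    return points
```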
Secondly, a key information area is selected from the radar point cloud data, that is, according to the image generated from the radar point cloud data, the area that needs image conversion is selected as the key information area.
The key information area can be selected from the radar point cloud data manually or in other ways, such as model matching, in order to circle out the area on which color conversion will be performed next. For example, if color conversion is to be applied to a truck in the radar point cloud, that is, if the point cloud image of the truck is to be converted into a colored image, the area containing the whole outline of the truck is selected from the point cloud image.
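Purely as an illustration (the patent does not prescribe this implementation), the key information area can be modelled as an axis-aligned bounding region; the helper below is a hypothetical sketch of cropping the point cloud to such a region.

```python
# A minimal sketch, assuming the key information area is an axis-aligned box,
# e.g. one drawn around the whole outline of the truck mentioned above.
def crop_region(points, x_range, y_range, z_range):
    (x0, x1), (y0, y1), (z0, z1) = x_range, y_range, z_range
    return [(x, y, z) for (x, y, z) in points
            if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1]
```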
Thirdly, standardization processing is performed on the point cloud data in the key information area to obtain the point cloud coordinates in the area. After the region needing color conversion is selected, the point clouds in the region are first standardized: the point cloud data are unified under the same coordinate system, and the coordinate values of the point clouds are obtained. The specific method follows the multidimensional information fusion method based on multiple types of data acquisition devices disclosed in Chinese patent publication CN115359333A (publication date 2022-11-18), which comprises the following steps.
Point cloud data of the radar device are captured, and the spatial range of the target object is obtained and delimited from the point cloud data; the spatial range is preset by environment variables, that is, the region between the maximum and minimum coordinates of the point cloud data is defined as the spatial range. The resolution of the radar device, namely the density of the point cloud data, is set in advance according to the detection requirements: the greater the density of the point cloud data, the finer the division of the space coordinate system scale, and the optimal space coordinate system scale is sought while ensuring that as much point cloud data as possible falls on the scale. Owing to the hardware limitations of the radar itself, however, the minimum space coordinate system scale is 1 cm.
After the key area range is selected, the estimated data range and coordinate scale refer to a custom data range and coordinate scale set by the user according to the specific working environment; for example, the scale is 1 cm by default but may also be 5 cm or 10 cm, and the data range may be, for example, 10 meters long, 15 meters wide and 2 meters high, without limitation.
Setting the size of the image to be converted means performing unit calculation directly from the estimated data range and the coordinate scale; for example, an image size of 1920 × 1080 pixels corresponds to 19.2 meters by 10.8 meters at a 1 cm scale.
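As a worked illustration of this unit calculation (the function name is a hypothetical choice, not from the patent):

```python
# A minimal sketch: one pixel per coordinate scale division, as described above.
def image_size_px(range_x_m: float, range_y_m: float, scale_m: float = 0.01) -> tuple[int, int]:
    """Return (width, height) in pixels for a given spatial range and scale."""
    return round(range_x_m / scale_m), round(range_y_m / scale_m)

# Example from the description: 19.2 m x 10.8 m at a 1 cm scale -> (1920, 1080) pixels.
print(image_size_px(19.2, 10.8))
```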
The origin, X-axis, Y-axis and Z-axis are set in the spatial range, and a multi-dimensional data space, i.e., a spatial coordinate system, is created with the spatial coordinate system scale as the X-axis, Y-axis and Z-axis scales. Wherein the total length of the X axis, the Y axis and the Z axis does not exceed the space range.
The point cloud data are loaded into the multidimensional data space to generate a binary data set of the point cloud data: the point cloud data are loaded into the space coordinate system so that as much point cloud data as possible fall on the scales of the space coordinate system.
If point cloud data are not on a scale of the space coordinate system, the point cloud data are corrected: the trend of the point cloud data is determined from the surrounding point cloud data, and after measurement and calculation according to the measuring and calculating principle the point is placed on the nearest scale of the space coordinate system. The measuring and calculating principle requires that the normal is not changed and that the set spatial range is not exceeded.
For any point cloud datum that is not on a scale of the space coordinate system, the peripheral point cloud data of that point, namely a front value and a rear value, are obtained, and the trend of the point cloud data is calculated from the front value and the rear value.
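The trend formula itself appears only as inline images in the source text; the sketch below therefore assumes a simple trend estimate (the mean of the front and rear values) purely for illustration and is not the patented formula.

```python
# A minimal sketch, assuming the trend of an off-scale point is the mean of its
# front and rear neighbour values; the point is then snapped to the nearest scale
# division without leaving the preset spatial range.
def snap_to_scale(values, scale, v_min, v_max):
    snapped = list(values)
    for i in range(1, len(values) - 1):
        offset = (values[i] - v_min) % scale
        if min(offset, scale - offset) > 1e-9:              # point is not on a scale line
            trend = 0.5 * (values[i - 1] + values[i + 1])   # assumed trend estimate
            k = round((trend - v_min) / scale)              # nearest scale index
            snapped[i] = min(max(v_min + k * scale, v_min), v_max)
    return snapped
```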
And if the point cloud data exist on the scale of the space coordinate system, the point cloud data are assigned to be 1, and if the point cloud data do not exist, the point cloud data are assigned to be 0.
Specifically, a mode of assigning values to the space coordinate system scale is used for distinguishing a point cloud coverage area from a blank area, namely, the value of 1 is assigned when the point cloud data exist on the space coordinate system scale, and the value of 0 is assigned when the point cloud data do not exist; the area with the value of 1 is a point cloud coverage area, and the area with the value of 0 is a blank area. In addition, 1 is used for filling the closed space region.
When the radar apparatus scans the target object and the inside of the target object cannot be scanned, the area between the point cloud on the upper layer and the bottom of the target object is a closed space area.
Determining the unidirectional binary data length of the data space according to the space coordinate system scale and the set total length;
and outputting binary data sets in the unidirectional binary data length according to the assignment of the space coordinate system scale.
Specifically, based on the set space coordinate system scale m and the total length n, the unidirectional binary data length of the data space is L = n / m. For example, if the total length in the Z-axis direction is 3.2 m and the space coordinate system scale is 1 cm, the unidirectional binary data length is 320, that is, 320 scale divisions exist in the Z-axis direction. According to whether point cloud data exist on each scale division, ten 32-bit groups of binary data can be output, for example: 0000 0111 0000 1110 1111 1111 1111 0000.
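By way of illustration only (function and variable names are assumptions, not from the patent), the occupancy encoding described above can be sketched as follows, with the unidirectional length computed as n / m:

```python
# A minimal sketch: assign 1 to each scale division on which point cloud data fall
# and 0 otherwise, along a single axis measured from the axis origin.
def binary_profile(z_values, total_len_m: float, scale_m: float = 0.01) -> str:
    n_bins = round(total_len_m / scale_m)        # e.g. 3.2 m / 1 cm = 320 divisions
    bits = [0] * n_bins
    for z in z_values:
        idx = round(z / scale_m)
        if 0 <= idx < n_bins:
            bits[idx] = 1                        # a point falls on this scale division
    return "".join(map(str, bits))

# Example: points at 5-7 cm and 12-14 cm along a 0.32 m axis reproduce the first
# four 4-digit groups of the pattern quoted above.
profile = binary_profile([0.05, 0.06, 0.07, 0.12, 0.13, 0.14], total_len_m=0.32)
print(profile[:16])  # 0000011100001110
```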
Fourthly, the coordinates related to the data type to be converted are determined in the point cloud coordinates according to the data type to be converted.
Specifically:
The data types to be converted include target distance data detected by the lidar, radar wave reflection intensity data, heat radiation data and the like. If, for example, the height of the coal load on a freight train is to be converted into a color difference for display (much as terrain height is shown with different color values on a topographic map), only the height value in the point cloud coordinates is used; it corresponds only to the z value in the coordinate (x, y, z), and the x and y values are set to 0.
When the position of the freight train is to be converted into a color difference for display (similar to automotive remote sensing, where a nearby object is displayed in red and a distant object in green, so that the distances of different targets are distinguished by color), the x and y values in the coordinate (x, y, z) are used and the z value is set to 0.
If both height and position are to be converted, the x, y and z values in the coordinate (x, y, z) must be converted simultaneously. The coordinates can therefore calibrate single distance information, or simultaneously calibrate fused information such as target distance, signal intensity and heat.
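As a purely illustrative sketch of this component selection (the data-type names are hypothetical, not terms defined by the patent):

```python
# A minimal sketch: keep the coordinate components related to the chosen data type
# and zero out the rest, as described above.
def select_components(point, data_type):
    x, y, z = point
    if data_type == "height":                 # e.g. coal-load height: only z is relevant
        return (0.0, 0.0, z)
    if data_type == "position":               # e.g. train position: x and y are relevant
        return (x, y, 0.0)
    if data_type == "height_and_position":    # both: x, y and z are converted together
        return (x, y, z)
    raise ValueError(f"unknown data type: {data_type}")
```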
Fifthly, the point cloud coordinates are converted into pixel coordinates according to the number of coordinates related to the data types to be converted, and the image data are obtained.
As can be seen from the fourth step, the coordinates related to the data type to be converted are either one of x, y and z, a pairwise combination of them, or all three coordinate values, so the number of coordinates related to the data type to be converted is at least 1 and at most 3.
When the number of coordinates related to the data type to be converted equals 1, three different areas can be selected to correspond to the R, G and B values respectively (that is, only one channel is varied while the others are kept at 0). For example, when the lidar is used for vehicle identification, the distance data of three different targets can be converted to the R, G and B color values respectively, and the final display is the color graphic information of three automobiles in red, green and blue. In this way the image can easily be segmented and processed during image analysis.
There is no specific requirement on the correspondence, which can be customized as needed; that is, x may correspond to R, G or B, and whichever correspondence is chosen determines the color that is displayed.
When the number of coordinates related to the data type to be converted is equal to 2, the RGB values can be divided into three combinations of RG, RB and GB, and the three combinations correspond to the point cloud coordinates in pairs.
The RGB values and the point cloud coordinates (x, y, z) have no fixed correspondence, and xy may correspond to RG or GB.
When the number of coordinates related to the data type to be converted is equal to 3, the different RGB values correspond respectively to the coordinate values related to the data type to be converted, and there is only one combination relationship.
After the key information and the colors are matched one by one, the conversion ratio between each value and the color value (0-255) is determined from the ratio of the coordinate axis scale to the maximum value range. For example, if the key information data to be converted is the distance value z, the scale is 1 cm and the maximum value range is 2.55 m, the conversion relation between the key information data and the color value is 1:1.
In the same way, equal-proportion conversion is performed between the x and y position information and the pixel coordinates to determine the pixel coordinates of each piece of key information; image pixels not corresponding to key point information can be set to null or to a specific value (such as RGB(0, 0, 0)) according to the requirements or the image format.
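The following sketch is one possible illustration of this equal-proportion mapping; the fixed channel order and the default 2.55 m range are assumptions taken from the example above, and the real correspondence is freely configurable as the description states.

```python
# A minimal sketch: map 1 to 3 coordinate values related to the data type onto RGB
# channels in equal proportion (value / max_range scaled to 0-255); unused channels stay 0.
def to_rgb(related_coords, max_range_m: float = 2.55) -> tuple[int, int, int]:
    channels = [0, 0, 0]
    for i, value in enumerate(related_coords[:3]):
        channels[i] = max(0, min(255, round(value / max_range_m * 255)))
    return tuple(channels)

# With a 1 cm scale and a 2.55 m maximum range the conversion is 1:1 per centimetre:
print(to_rgb([1.28]))        # (128, 0, 0)  - one related coordinate, one channel
print(to_rgb([0.50, 1.00]))  # (50, 100, 0) - two related coordinates, an RG-style pair
```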
And finishing the conversion work and outputting the image data.
Example 2
The embodiment provides an image data identification system for radar three-dimensional data conversion, which comprises:
the data acquisition module is configured to acquire radar point cloud data and the data types to be converted;
the selecting module is configured to select a key information area from the radar point cloud data;
the computing module is configured to perform standardized processing on the point cloud data in the key information area to obtain point cloud coordinates in the area; according to the data type to be converted, determining coordinates related to the data type to be converted in the point cloud coordinates;
and the conversion module is configured to convert the point cloud coordinates into pixel coordinates according to the number of coordinates related to the data type to be converted, so as to obtain image data.
A computer readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to perform the image data recognition method of radar three-dimensional data conversion.
A terminal device comprising a processor and a computer readable storage medium, the processor configured to implement instructions; the computer readable storage medium is for storing a plurality of instructions adapted to be loaded by a processor and to perform the method of image data recognition for radar three-dimensional data conversion.
The above embodiments are not intended to limit the scope of the present invention; therefore, all equivalent changes made according to the structure, shape and principle of the invention shall fall within the protection scope of the invention.

Claims (7)

1. An image data recognition method for radar three-dimensional data conversion, comprising:
S1, acquiring radar point cloud data and the data types to be converted;
S2, selecting a key information area from the radar point cloud data;
S3, performing standardization processing on the point cloud data in the key information area to obtain the point cloud coordinates in the area;
S4, determining, according to the data type to be converted, the coordinates related to the data type to be converted in the point cloud coordinates;
S5, converting the point cloud coordinates into pixel coordinates according to the number of coordinates related to the data types to be converted, so as to obtain image data;
the data types to be converted are acquired, and the data types comprise target distance data detected by a radar, radar wave reflection intensity data and heat radiation data;
selecting a key information area from the radar point cloud data, wherein the key information area comprises an area which is required to be subjected to image conversion and is selected according to an image generated by the radar point cloud data and is used as the key information area;
and converting the point cloud coordinates into pixel coordinates according to the number of coordinates related to the data type to be converted, wherein, when the number of coordinates related to the data type to be converted is 1, the coordinate value related to the data type to be converted in the point cloud coordinates is made to correspond to any one color value in RGB and is subjected to numerical conversion.
2. The image data recognition method for radar three-dimensional data conversion according to claim 1, wherein converting the point cloud coordinates into pixel coordinates according to the number of coordinates related to the data type to be converted comprises: when the number of coordinates related to the data type to be converted is 2, dividing the RGB values into the three combinations RG, RB and GB, making one combination correspond to the coordinate values related to the data type to be converted in the point cloud coordinates, and performing numerical conversion.
3. The image data recognition method for radar three-dimensional data conversion according to claim 2, wherein converting the point cloud coordinates into pixel coordinates according to the number of coordinates related to the data type to be converted comprises: when the number of coordinates related to the data type to be converted is 3, making the three color values in RGB correspond respectively to the coordinate values related to the data type to be converted in the point cloud coordinates and performing numerical conversion.
4. The image data recognition method for radar three-dimensional data conversion according to claim 3, wherein performing numerical conversion comprises converting the coordinate values in the point cloud coordinates and the RGB color values in equal proportion.
5. An image data recognition system for radar three-dimensional data conversion, which implements the method according to any one of claims 1 to 4, comprising:
the data acquisition module is configured to acquire radar point cloud data and the data types to be converted;
the selecting module is configured to select a key information area from the radar point cloud data;
the computing module is configured to perform standardized processing on the point cloud data in the key information area to obtain point cloud coordinates in the area; according to the data type to be converted, determining coordinates related to the data type to be converted in the point cloud coordinates;
and the conversion module is configured to convert the point cloud coordinates into pixel coordinates according to the number of coordinates related to the data type to be converted, so as to obtain image data.
6. A computer readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to perform an image data recognition method of radar three-dimensional data conversion according to claim 1.
7. A terminal device comprising a processor and a computer readable storage medium, the processor configured to implement instructions; a computer readable storage medium for storing a plurality of instructions adapted to be loaded by a processor and to perform an image data recognition method of radar three-dimensional data conversion as claimed in claim 1.
CN202310065712.0A 2023-02-06 2023-02-06 Image data identification method and system for radar three-dimensional data conversion Active CN115856829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310065712.0A CN115856829B (en) 2023-02-06 2023-02-06 Image data identification method and system for radar three-dimensional data conversion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310065712.0A CN115856829B (en) 2023-02-06 2023-02-06 Image data identification method and system for radar three-dimensional data conversion

Publications (2)

Publication Number Publication Date
CN115856829A (en) 2023-03-28
CN115856829B (en) 2023-05-16

Family

ID=85657602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310065712.0A Active CN115856829B (en) 2023-02-06 2023-02-06 Image data identification method and system for radar three-dimensional data conversion

Country Status (1)

Country Link
CN (1) CN115856829B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385571B (en) * 2023-06-01 2023-09-15 山东矩阵软件工程股份有限公司 Point cloud compression method and system based on multidimensional dynamic variable resolution
CN117745537B (en) * 2024-02-21 2024-05-17 微牌科技(浙江)有限公司 Tunnel equipment temperature detection method, device, computer equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104181546A (en) * 2014-08-25 2014-12-03 中国科学院武汉物理与数学研究所 Color information acquisition and display method of color three-dimensional scanning laser radar
WO2022022694A1 (en) * 2020-07-31 2022-02-03 北京智行者科技有限公司 Method and system for sensing automated driving environment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706278A (en) * 2019-09-20 2020-01-17 异起(上海)智能科技有限公司 Object identification method and device based on laser radar and camera
CN111435162B (en) * 2020-03-03 2021-10-08 深圳市镭神智能系统有限公司 Laser radar and camera synchronization method, device, equipment and storage medium
CN114529610A (en) * 2022-01-11 2022-05-24 浙江零跑科技股份有限公司 Millimeter wave radar data labeling method based on RGB-D camera
CN115359333B (en) * 2022-10-24 2023-03-24 山东矩阵软件工程股份有限公司 Multi-dimensional information fusion method based on multiple types of data acquisition equipment


Also Published As

Publication number Publication date
CN115856829A (en) 2023-03-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant