CN113436273A - 3D scene calibration method, calibration device and calibration application thereof - Google Patents
- Publication number
- CN113436273A (application CN202110716862.4A)
- Authority
- CN
- China
- Prior art keywords
- scene
- calibration
- scaling
- transformation parameter
- transformation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration; G06T7/85 — Stereo camera calibration
- G06F18/2135 — Feature extraction by transforming the feature space, based on approximation criteria, e.g. principal component analysis
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/30204 — Marker
Abstract
The invention discloses a 3D scene calibration method, a calibration device, and a calibration application thereof, relating to image processing technology. The 3D scene calibration method comprises the following steps: scene area detection — acquiring a 3D scene image to be calibrated and detecting the position areas of the calibration targets; scene global calibration registration — rapidly computing the full set of candidate transformation parameters for transforming the bounding box of the 3D scene into a bounding box in 2D space; scene accurate calibration registration — optimizing the transformation parameter sets; scene scheme selection — computing the final values of the transformation parameters by non-maximum suppression. The device for calibrating the 3D scene comprises a camera, a plurality of calibration boards, a calibration board detection module, a transformation parameter calculation module, an optimization calculation module, and a non-maximum suppression calculation module. An application of 3D scene calibration is also provided. The method helps improve scene understanding accuracy and the efficiency of 3D scene labeling.
Description
Technical Field
The invention belongs to the field of image processing technology, and particularly relates to a 3D scene calibration method, a calibration device and a calibration application thereof.
Background
With the development of science and technology and the emergence of artificial intelligence, unmanned intelligent devices have entered people's daily lives, and their ability to recognize scenes is particularly important. In unmanned driving, for example, scene images are generally acquired in advance, each image is labeled with its corresponding scene, and a model is trained on the labeled image set. During actual driving, the trained model is applied to images of the surroundings to recognize the scene in which the vehicle is located, so that a driving strategy can be decided according to that scene.
Existing scene labeling requires repeated labeling operations on scene images, making the work tedious, time-consuming, and inefficient.
Disclosure of Invention
The invention aims to remedy the defects of the prior art by providing a 3D scene calibration method, a calibration device and a calibration application thereof.
In order to achieve the purpose, the invention adopts the following technical scheme:
A 3D scene calibration method comprises the following steps:
S1, scene area detection: acquiring a 3D scene image to be calibrated, in which a plurality of calibration boards are distributed, and detecting the position areas of the calibration boards from the 3D scene image;
S2, scene global calibration registration: based on the calibration board position information from step S1, rapidly computing, from the range data and camera data of the calibration boards, the full set of candidate transformation parameters for transforming the bounding box of the 3D scene into a bounding box in 2D space;
S3, scene accurate calibration registration: introducing the data of all points on the calibration boards into the calculation and optimizing the transformation parameter sets from step S2 so as to minimize the point-to-point distance, obtaining optimized transformation parameter sets;
S4, scene scheme selection: computing the final values of the transformation parameters by non-maximum suppression.
Further, in step S1, the 3D scene image is converted to grayscale before calibration, yielding a grayscale 3D scene image.
Further, in step S1, the method for detecting the calibration boards comprises the following steps:
step 101: calculating the normal vector of each point of the 3D scene image using PCA (principal component analysis) dimensionality reduction;
step 102: judging whether different points belong to the same region using random seeds combined with the normal vector information;
step 103: filtering the generated regions to obtain the position information of the calibration boards.
Further, in step S2, the method for rapidly generating the transformation parameter sets comprises the following steps:
step 201: randomly selecting groups of three points on the calibration board areas in the range data and the camera data;
step 202: calculating the corresponding transformation parameters from the center and normal vector information of the calibration boards in the three-point groups, and computing a score for each parameter set that measures calibration accuracy;
step 203: selecting the parameter sets whose scores exceed a threshold as the initial transformation parameter set.
Further, in step S4, when multiple sets of parameters remain as candidate final values, the final value is selected by human intervention.
The invention also provides a device for calibrating a 3D scene, comprising a camera, a plurality of calibration boards, a calibration board detection module, a transformation parameter calculation module, an optimization calculation module, and a non-maximum suppression calculation module;
the camera is used for acquiring a 3D scene image;
the plurality of calibration boards are distributed and fixed in the scene to serve as targets for detection and calibration, and a plurality of calibration points are arranged in an array on each board;
the calibration plate detection module is used for detecting all calibration plate position areas in a 3D scene;
the transformation parameter calculation module is used for calculating and collecting all transformation parameters for transforming the bounding box of the 3D scene into a bounding box in 2D space, realizing global calibration registration of the scene;
the optimization calculation module is used for calculating the transformation parameter information of all the points on the selected calibration plate and collecting the transformation parameter information;
and the non-maximum suppression calculation module is used for calculating the final values of the screened transformation parameters.
Furthermore, the calibration board detection module comprises a dimensionality reduction calculation module, a region judgment module and a filtering module; the dimensionality reduction calculation module calculates the normal vector of each point of the 3D scene image, the region judgment module partitions region positions according to whether the normal vector information of points in the scene belongs to the same region, and the filtering module filters the regions to obtain the calibration board positions.
Further, the device also comprises an image graying module for converting a color 3D scene image to grayscale.
The invention also provides an application of 3D scene calibration to the reconstruction of street-view scenes: the bounding box of a 3D scene labeled in point cloud space directly yields the bounding box of the object in 2D space through the parameter transformation, and the segmentation labeling result in 2D space is then obtained in combination with manual judgment.
The 3D scene calibration method, calibration device and calibration application thereof have the following beneficial effects:
(1) The method transforms points measured by one sensor into the coordinate system of another sensor and computes the correspondence between points in the 3D point cloud data and points in the 2D image data, which helps improve scene understanding accuracy; a 3D bounding box labeled in point cloud space directly yields the bounding box of the object in 2D space through the transformation parameters.
(2) By labeling objects in the 3D scene, the method effectively avoids repeated labeling of the same object across frames, directly yields the object's trajectory information, and improves the efficiency of 3D scene labeling.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of the calibration method in an embodiment of the present invention;
FIG. 2 is a system block diagram of the calibration device in an embodiment of the present invention;
FIG. 3 is a schematic diagram of grayscale processing in the calibration application in an embodiment of the present invention;
FIG. 4 is a schematic diagram after calibration with the calibration boards in the calibration application in an embodiment of the present invention;
FIG. 5 is a schematic diagram of the calibration application in an embodiment of the present invention.
Detailed Description
The invention is further illustrated by the following examples.
Referring to fig. 1, a 3D scene calibration method includes the following steps:
step 1, carrying out region detection on a 3D scene: the method comprises the steps of obtaining a 3D scene image to be calibrated, wherein a plurality of calibration plates are distributed in the 3D scene image to be calibrated, the position area of each calibration plate is determined from the 3D scene image, and a plurality of calibration points are arranged on the calibration plates in an array mode. The 3D scene image needs to be subjected to graying before scaling, and a grayscale 3D scene image is obtained.
To further explain the method for detecting the calibration boards, it specifically comprises the following steps: 1. calculating the normal vector of each point of the 3D scene image using PCA (principal component analysis) dimensionality reduction; 2. judging whether different points belong to the same region using random seeds combined with the normal vector information; 3. filtering the generated regions to obtain the position information of the calibration boards.
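The PCA normal estimation in step 1 can be sketched as follows (an illustrative reconstruction: the normal of a point is the direction of least variance of its local neighbourhood; the function name and the neighbourhood input are assumptions, since the patent does not give the exact formulation):

```python
import numpy as np

def point_normal(neighborhood: np.ndarray) -> np.ndarray:
    """Estimate a point's normal from its local neighbourhood (k x 3):
    the eigenvector of the neighbourhood covariance matrix with the
    smallest eigenvalue, i.e. the direction of least variance."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]

# Points sampled on the z = 0 plane: the normal should be +/-(0, 0, 1).
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0]], float)
n = point_normal(pts)
```

Region growing from random seeds would then merge neighbouring points whose normals agree, and the resulting planar regions are filtered by size and shape to isolate the boards.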
Step 2, global transformation parameter registration of the 3D scene: based on the calibration board position information from step 1, the full set of candidate transformation parameters for transforming the bounding box of the 3D scene into a bounding box in 2D space is rapidly computed from the range data and camera data of the calibration boards.
To further explain the method for rapidly generating the transformation parameter sets, it specifically comprises the following steps: 1. randomly selecting groups of three points on the calibration board areas from the range data and the camera data; 2. calculating the corresponding transformation parameters from the center and normal vector information of the calibration boards in the three-point groups, and computing a score for each parameter set that measures calibration accuracy; 3. selecting the parameter sets whose scores exceed a threshold as the initial transformation parameter set.
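One standard way to realize step 2 — estimating a rigid transform from a matched group of three (or more) points and scoring it — is the Kabsch/SVD method. The patent does not name the algorithm, so the following is a hedged sketch; `rigid_transform`, `score` and the inlier tolerance are illustrative names and values:

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Estimate rotation R and translation t with dst ~ R @ src + t
    from paired points, via the Kabsch/SVD method."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def score(R, t, src, dst, tol=0.05):
    """Score a candidate transform by the fraction of points it maps
    to within `tol` of their counterparts (higher is better)."""
    err = np.linalg.norm((R @ src.T).T + t - dst, axis=1)
    return (err < tol).mean()

src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 deg about z
dst = (Rz @ src.T).T + np.array([1.0, 2.0, 3.0])
R, t = rigid_transform(src, dst)
```

Candidates whose score exceeds the threshold form the initial transformation parameter set, as in step 3 above.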
Step 3, accurate transformation parameter registration of the 3D scene: the data of all points on the calibration boards are introduced into the calculation, and the transformation parameter sets from step 2 are optimized so as to minimize the point-to-point distance, obtaining optimized transformation parameter sets.
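The point-to-point minimization of step 3 resembles an ICP-style refinement. The patent only states the objective, so the optimizer below is an assumption: it alternates nearest-neighbour matching with a closed-form (Kabsch) re-estimation of the transform:

```python
import numpy as np

def icp_refine(src, dst, R, t, iters=10):
    """ICP-style refinement: repeatedly match each transformed source
    point to its nearest target point, then re-estimate the rigid
    transform in closed form to shrink the point-to-point error."""
    for _ in range(iters):
        moved = (R @ src.T).T + t
        # brute-force nearest-neighbour correspondences (fine for a sketch)
        idx = np.argmin(((moved[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        matched = dst[idx]
        src_c, dst_c = src.mean(0), matched.mean(0)
        H = (src - src_c).T @ (matched - dst_c)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst_c - R @ src_c
    return R, t

src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dst = src + np.array([0.1, 0.05, 0.0])   # ground truth: a pure translation
R_opt, t_opt = icp_refine(src, dst, np.eye(3), np.zeros(3))
```

Starting from each coarse candidate of step 2, such a loop refines the transform using all board points rather than just three.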
Step 4, 3D scene scheme selection: the final values of the transformation parameters are computed by non-maximum suppression. When multiple sets of parameters remain, the final value is selected by human intervention.
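Non-maximum suppression over the candidate transforms can be sketched as a greedy selection; the Euclidean distance between parameter vectors and the `min_dist` threshold are illustrative choices, not given in the patent:

```python
import numpy as np

def nms_transforms(params, scores, min_dist):
    """Greedy non-maximum suppression over candidate transforms:
    keep the highest-scoring candidate, drop every candidate closer
    than `min_dist` to one already kept, and repeat."""
    order = np.argsort(scores)[::-1]  # indices by descending score
    kept = []
    for i in order:
        if all(np.linalg.norm(params[i] - params[j]) >= min_dist for j in kept):
            kept.append(i)
    return kept

params = np.array([[0.0, 0.0], [0.01, 0.0], [5.0, 5.0]])  # toy parameter vectors
scores = np.array([0.9, 0.8, 0.7])
kept = nms_transforms(params, scores, min_dist=0.5)       # second candidate suppressed
```

If more than one candidate survives suppression, the final choice falls to human intervention, as the text above describes.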
Referring to fig. 2, the present invention further provides a 3D scene calibration device, which includes a camera, a plurality of calibration boards, a calibration board detection module, a transformation parameter calculation module, an optimization calculation module, and a non-maximum suppression calculation module.
The camera is used for acquiring 3D scene images. The calibration boards are distributed and fixed in the scene to serve as targets for detection and calibration, and a plurality of calibration points are arranged in an array on each board.
The calibration board detection module detects all calibration board position areas in the 3D scene; it comprises a dimensionality reduction calculation module, a region judgment module and a filtering module. The dimensionality reduction calculation module calculates the normal vector of each point of the 3D scene image, the region judgment module partitions region positions according to whether the normal vector information of points in the scene belongs to the same region, and the filtering module filters the regions to obtain the calibration board positions.
The transformation parameter calculation module calculates and collects all transformation parameters for transforming the bounding box of the 3D scene into a bounding box in 2D space, realizing global calibration registration of the scene.
The optimization calculation module calculates and collects the transformation parameter information of all points on the selected calibration boards.
The non-maximum suppression calculation module calculates the final values of the screened transformation parameters.
The device also comprises an image graying module for converting a color 3D scene image to grayscale.
Referring to figs. 3-5, the invention also discloses an application of 3D scene calibration to the reconstruction of street-view scenes: the bounding box of a 3D scene labeled in point cloud space directly yields the bounding box of the object in 2D space through the parameter transformation, and the segmentation labeling result in 2D space is then obtained in combination with manual judgment.
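Obtaining a 2D bounding box from a labeled 3D bounding box via the transformation parameters can be illustrated with a pinhole projection. The intrinsics `K` and extrinsics `R`, `t` here are assumed for illustration; the patent does not state the camera model:

```python
import numpy as np

def box_3d_to_2d(corners_3d, K, R, t):
    """Project the 8 corners of a 3D bounding box into the image with
    extrinsics (R, t) and intrinsics K, then take the axis-aligned
    extent of the projections as the 2D bounding box."""
    cam = (R @ corners_3d.T).T + t   # world frame -> camera frame
    uvw = (K @ cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]    # perspective divide
    return uv.min(0), uv.max(0)      # (u_min, v_min), (u_max, v_max)

K = np.array([[100.0, 0, 64], [0, 100.0, 64], [0, 0, 1]])
# A unit cube 4-5 m in front of the camera.
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (4, 5)], float)
lo, hi = box_3d_to_2d(corners, K, np.eye(3), np.zeros(3))
```

The projected extent is what a human annotator would then refine when producing the final 2D segmentation labels.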
The 3D scene calibration method, calibration device and calibration application thereof transform points measured by one sensor into the coordinate system of another sensor and compute the correspondence between points in the 3D point cloud data and points in the 2D image data, helping improve scene understanding accuracy. A 3D bounding box labeled in point cloud space directly yields the bounding box of the object in 2D space through the transformation parameters; by labeling objects in the 3D scene, repeated labeling of the same object across frames is effectively avoided, the object's trajectory information is obtained directly, and the efficiency of 3D scene labeling is improved.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (9)
1. A 3D scene calibration method, characterized by comprising the following steps:
S1, scene area detection: acquiring a 3D scene image to be calibrated, in which a plurality of calibration boards are distributed, and detecting the position areas of the calibration boards from the 3D scene image;
S2, scene global calibration registration: based on the calibration board position information from step S1, rapidly computing, from the range data and camera data of the calibration boards, the full set of candidate transformation parameters for transforming the bounding box of the 3D scene into a bounding box in 2D space;
S3, scene accurate calibration registration: introducing the data of all points on the calibration boards into the calculation and optimizing the transformation parameter sets from step S2 so as to minimize the point-to-point distance, obtaining optimized transformation parameter sets;
S4, scene scheme selection: computing the final values of the transformation parameters by non-maximum suppression.
2. The 3D scene calibration method according to claim 1, characterized in that in step S1, the 3D scene image is converted to grayscale before calibration, yielding a grayscale 3D scene image.
3. The 3D scene calibration method according to claim 1, characterized in that in step S1, the method for detecting the calibration boards comprises the following steps:
step 101: calculating the normal vector of each point of the 3D scene image using PCA (principal component analysis) dimensionality reduction;
step 102: judging whether different points belong to the same region using random seeds combined with the normal vector information;
step 103: filtering the generated regions to obtain the position information of the calibration boards.
4. The 3D scene calibration method according to claim 1, characterized in that in step S2, the method for rapidly generating the transformation parameter sets comprises the following steps:
step 201: randomly selecting groups of three points on the calibration board areas in the range data and the camera data;
step 202: calculating the corresponding transformation parameters from the center and normal vector information of the calibration boards in the three-point groups, and computing a score for each parameter set that measures calibration accuracy;
step 203: selecting the parameter sets whose scores exceed a threshold as the initial transformation parameter set.
5. The 3D scene calibration method according to claim 1, characterized in that in step S4, when multiple sets of parameters remain as candidate final values, the final value is selected by human intervention.
6. A 3D scene calibration device, characterized by comprising a camera, a plurality of calibration boards, a calibration board detection module, a transformation parameter calculation module, an optimization calculation module and a non-maximum suppression calculation module;
the camera is used for acquiring a 3D scene image;
the plurality of calibration boards are distributed and fixed in the scene to serve as targets for detection and calibration, and a plurality of calibration points are arranged in an array on each board;
the calibration plate detection module is used for detecting all calibration plate position areas in a 3D scene;
the transformation parameter calculation module is used for calculating and collecting all transformation parameters for transforming the bounding box of the 3D scene into a bounding box in 2D space, realizing global calibration registration of the scene;
the optimization calculation module is used for calculating the transformation parameter information of all the points on the selected calibration plate and collecting the transformation parameter information;
and the non-maximum suppression calculation module is used for calculating the final values of the screened transformation parameters.
7. The 3D scene calibration device according to claim 6, characterized in that the calibration board detection module comprises a dimensionality reduction calculation module, a region judgment module and a filtering module; the dimensionality reduction calculation module is configured to calculate the normal vector of each point of the 3D scene image, the region judgment module is configured to partition region positions according to whether the normal vector information of points in the scene belongs to the same region, and the filtering module is configured to filter the regions to obtain the calibration board positions.
8. The 3D scene calibration device according to claim 6, characterized by further comprising an image graying module for converting a color 3D scene image to grayscale.
9. An application of 3D scene calibration, characterized in that the application is applied to the reconstruction of street-view scenes: the bounding box of a 3D scene labeled in point cloud space directly yields the bounding box of the object in 2D space through the parameter transformation, and the segmentation labeling result in 2D space is then obtained in combination with manual judgment.
Priority Applications (1)
- CN202110716862.4A, filed 2021-06-28 (priority date 2021-06-28): 3D scene calibration method, calibration device and calibration application thereof
Publications (1)
- CN113436273A, published 2021-09-24 (status: Pending)
Family: ID=77754828
Citations (19)
- JP2007280387A (2006-03-31; Aisin Seiki Co., Ltd.): Method and device for detecting object movement
- US20080201101A1 (2005-03-11; Creaform Inc.): Auto-referenced system and apparatus for three-dimensional scanning
- US20100295948A1 (2009-05-21; Vimicro Corporation): Method and device for camera calibration
- US20150181198A1 (2012-01-13; Softkinetic Software): Automatic scene calibration
- CN105046710A (2015-07-23; Beijing Forestry University): Depth image partitioning and agent geometry based virtual and real collision interaction method and apparatus
- CN108197569A (2017-12-29; UISEE Technologies (Beijing) Co., Ltd.): Obstacle recognition method, device, computer storage media and electronic equipment
- CN109100741A (2018-06-11; Chang'an University): Object detection method based on 3D laser radar and image data
- US20200097758A1 (2017-06-09; Mectho S.R.L.): Method and system for object detection and classification
- CN110930454A (2019-11-01; Beihang University): Six-degree-of-freedom pose estimation algorithm based on keypoint localization outside the bounding box
- CN111160169A (2019-12-18; Ping An Life Insurance Company of China, Ltd.): Face detection method, device, equipment and computer readable storage medium
- CN111739087A (2020-06-24; Suning Cloud Computing Co., Ltd.): Method and system for generating scene mask
- CN111989709A (2018-04-05; Koito Manufacturing Co., Ltd.): Arithmetic processing device, object recognition system, object recognition method, automobile, and vehicle lamp
- US10859684B1 (2019-11-12; Huawei Technologies Co., Ltd.): Method and system for camera-lidar calibration
- CN112433193A (2020-11-06; Shandong Industry Research Institute of Information and Artificial Intelligence Integration Co., Ltd.): Multi-sensor-based mold position positioning method and system
- US20210110202A1 (2019-10-15; Bentley Systems, Incorporated): 3D object detection from calibrated 2D images
- CN112802202A (2019-11-14; Beijing Samsung Telecommunication Technology Research Co., Ltd.): Image processing method, image processing device, electronic equipment and computer storage medium
- CN112907676A (2019-11-19; Zhejiang SenseTime Technology Development Co., Ltd.): Calibration method, device and system of sensor, vehicle, equipment and storage medium
- CN112955897A (2018-09-12; TuSimple, Inc.): System and method for three-dimensional (3D) object detection
- WO2021093240A1 (2019-11-12; Huawei Technologies Co., Ltd.): Method and system for camera-lidar calibration
Non-Patent Citations (3)
- Jun Xie et al.: "Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer", 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3688-3697
- Wu Peiliang; Liu Haidong; Kong Lingfu: "A 3D scene object annotation algorithm based on rich visual information learning", Journal of Chinese Computer Systems, no. 01, pp. 154-159
- Li Suiwei; Li Gangzhu: "Design and implementation of a real-time pedestrian dynamic monitoring system", Information Technology, no. 05, pp. 15-20
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination