CN111145261A - Method for identifying a calibration point and storage medium - Google Patents

Method for identifying a calibration point and storage medium

Info

Publication number
CN111145261A
CN111145261A
Authority
CN
China
Prior art keywords
pixel
point
calibration
preset
marker
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910816012.4A
Other languages
Chinese (zh)
Inventor
袁超峰
刘福明
韩雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Starcart Technology Co ltd
Original Assignee
Guangdong Starcart Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Starcart Technology Co ltd
Priority to CN201910816012.4A
Publication of CN111145261A
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30248: Vehicle exterior or interior

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to the technical field of image processing and discloses a method for identifying a calibration point, comprising the following steps: processing image data to generate a circumscribed rectangle of the calibration object in the image; extending the lower side of the circumscribed rectangle downward by a preset pixel coordinate length to generate a region of interest; traversing the region of interest to generate the marked region of a preset marker associated with the calibration object; and traversing the marked region to generate a center line, wherein the pixel coordinate of the upper endpoint of the center line is the pixel coordinate of the calibration point. Some technical effects of this disclosure: manual interaction is reduced, and effective automatic identification of the calibration point is achieved.

Description

Method for identifying a calibration point and storage medium
Technical Field
The invention relates to the field of image processing, and in particular to techniques for identifying a calibration point.
Background
Vision is an important means by which humans observe and understand the world, accounting for about 70% of the information humans obtain from the external environment. The eyes receive light reflected or emitted by surrounding objects; the light forms an image on the retina, which is conveyed to the brain through nerve fibers; the brain processes and interprets this visual information, finally forming vision. Computer vision simulates the function of human vision: a camera acquires images of the surrounding environment, and a computer processes those images. Computer vision can accomplish tasks that human vision cannot, such as accurate measurement of the size and distance of an object. Computer vision technology is widely applicable to fields such as surveying and mapping, visual inspection, and autonomous driving.
One of the basic tasks of computer vision is to compute the geometric information of objects in three-dimensional space from the image information acquired by a camera, and thereby reconstruct or recognize those objects and, further, the real world. Camera calibration is a necessary step in accomplishing this task: the coordinate information of the calibration point is acquired by recognizing the calibration object. Camera calibration methods can be divided into traditional calibration methods and self-calibration methods, according to whether a calibration object is required. The traditional method takes a calibration object of known shape and size as the camera's subject, processes the captured images, and solves for the intrinsic and extrinsic parameters of the camera model through a series of mathematical transformations. The self-calibration method requires no calibration object and calibrates directly from the correspondences between points across multiple images. To date, self-calibration is flexible, but because too many unknown parameters are involved, stable results are difficult to obtain. By contrast, the traditional calibration method is mature, yields high-precision results, and is widely applied.
For traditional camera calibration, extracting the coordinates of the feature points on the calibration object is an indispensable step, and the positioning precision of the feature points has an important influence on the final calibration result. As the application range of camera calibration technology expands, field environments diversify; for example, factory environments, large outdoor backgrounds, or the coexistence of multiple calibration objects can make feature point extraction insufficiently accurate, or prevent it from being completed at all. In the traditional camera calibration method, the calibration object is an essential component. In general, a calibration object serving as a reference should satisfy two basic requirements: first, its image features should be easy to identify, i.e., the reference should contrast distinctly with the background environment; second, its feature parts should be easy to extract during image processing. Calibration references generally fall into two broad categories: three-dimensional calibration objects and two-dimensional planar calibration objects. A three-dimensional calibration object is typically a solid, single-colored small cube. The traditional camera calibration method also has a usability problem: currently popular methods can only be completed with manual interaction. The manual steps mainly involve manually marking points and manually measuring the calibration object; repeatability is low, and the manual steps must be repeated for every calibration.
In summary, the prior art lacks a solution for automatically, easily, and conveniently identifying the calibration point during camera calibration.
Disclosure of Invention
In order to realize simple and feasible identification of the calibration point, the present disclosure provides a method for identifying a calibration point, with the following technical scheme:
the method for identifying the calibration point comprises the following steps: processing image data to generate a circumscribed rectangle of the calibration object in the image; extending the lower side of the circumscribed rectangle downward by a preset pixel coordinate length to generate a region of interest; traversing the region of interest to generate the marked region of a preset marker associated with the calibration object; and traversing the marked region to generate a center line, wherein the pixel coordinate of the upper endpoint of the center line is the pixel coordinate of the calibration point.
Preferably, the image data are processed by a deep-learning semantic segmentation method, and the circumscribed rectangle of the calibration object is generated by a bounding-rectangle method.
Preferably, the preset pixel coordinate length satisfies: 10 pixels ≤ preset pixel coordinate length ≤ 20 pixels.
Preferably, the preset pixel coordinate length is 15 pixels.
Preferably, the preset marker associated with the calibration object is a rectangular strip; the upper edge of the rectangular strip adjoins the lower edge of the calibration object, the midpoint of the strip's upper edge coincides with the calibration point, and the left and right edges of the strip do not extend beyond the left and right edges of the calibration object.
Preferably, the method of generating the marked region of the preset marker associated with the calibration object comprises: let I(x, y) be any pixel point in the region of interest, and let I(x-δ, y) and I(x+δ, y) be the two pixel points horizontally symmetric about I(x, y), where δ is the pixel coordinate length of the preset marker in the image;
then:
d1 = I(x, y) - I(x-δ, y)
d2 = I(x, y) - I(x+δ, y)
where d1 and d2 are the pixel differences between the pixel point and its corresponding symmetric pixel points;
D = d1 + d2 - |I(x+δ, y) - I(x-δ, y)|
where D, the sum of the two pixel differences minus the absolute difference between the two symmetric pixel points, characterizes how strongly the pixel point differs from both of its symmetric neighbors;
let L(x, y) be the binarization function of the pixel value at the pixel point: when d1 > 0, d2 > 0 and D > T are all satisfied, L(x, y) = 1; otherwise L(x, y) = 0. That is,
L(x, y) = 1, if d1 > 0, d2 > 0 and D > T
L(x, y) = 0, otherwise
where the binarization threshold T = α × I(x, y) and α is a threshold coefficient.
Preferably, the threshold coefficient satisfies 0.3 ≤ α ≤ 0.8.
The present disclosure also discloses a readable storage medium on which a computer program is stored; the computer program executes the foregoing calibration point identification method.
One aspect of the present disclosure provides a solution for automatically and easily identifying a calibration point, which reduces manual interaction and realizes effective and automatic identification of the calibration point.
Drawings
For a better understanding of the technical solution of the present invention, reference is made to the following drawings, which assist in describing the prior art or the embodiments. These drawings selectively illustrate articles or methods related to the prior art or to some embodiments of the invention. Basic information on these figures is as follows:
FIG. 1 is a flow diagram of a method for identifying calibration points, according to an embodiment.
Fig. 2 is a schematic diagram of the installation position of the calibration object in one embodiment.
Fig. 3 is a schematic view of the road-surface layout of the calibration objects in one embodiment.
FIG. 4 is a schematic diagram illustrating the placement of the preset marker in one embodiment.
Detailed Description
The technical means and technical effects of the present invention are further described below. Obviously, the examples provided are only some embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art, without inventive effort, from the embodiments of the present invention and from what they explicitly or implicitly suggest, fall within the scope of the present invention.
In general terms, the present disclosure discloses a method for identifying a calibration point, comprising the following steps: processing image data to generate a circumscribed rectangle of the calibration object in the image; extending the lower side of the circumscribed rectangle downward by a preset pixel coordinate length to generate a region of interest; traversing the region of interest to generate the marked region of a preset marker associated with the calibration object; and traversing the marked region to generate a center line, wherein the pixel coordinate of the upper endpoint of the center line is the pixel coordinate of the calibration point.
Accordingly, in an embodiment, the present disclosure also provides a readable storage medium storing a computer program that executes the foregoing calibration point identification method.
Some technical effects of this disclosure: the image data are semantically segmented by convolutional neural network techniques, separating out the rough position of the calibration object in the image and generating its circumscribed rectangle; a region of interest is generated from the circumscribed rectangle; the marked region of the preset marker is generated from the region of interest; and a center line is generated from the marked region, yielding the pixel coordinates of the calibration point. With the solution provided by the disclosure, the pixel coordinates of the calibration point can be obtained effectively by identifying the preset marker, realizing automatic and simple identification of the calibration point.
In some embodiments, the single-color small cube of a three-dimensional calibration object is conventionally placed next to the calibration point. Conventional placement means that one edge of the cube passes through the calibration point. Generally, the conventional technical solution strictly requires that a particular right-angle vertex of the cube coincide with the calibration point.
In some embodiments, the calibration point coincides with the midpoint of one edge of the single-color small cube of the three-dimensional calibration object.
In some embodiments, the single-color small cube of the three-dimensional calibration object is parallel to the centerline of the camera's optical axis. This arrangement facilitates subsequent data processing.
In some embodiments, a vehicle-mounted positioning point is set on the vehicle and used to acquire the vehicle's positioning data in real time or non-real time. Generally, the vehicle-mounted positioning point may be the vehicle's existing navigation device, or another positioning device installed at another position. In one embodiment, the center of the roof is chosen as the vehicle-mounted positioning point and a positioning device is installed there.
In some embodiments, the calibration points are symmetrically arranged on the two sides of the road surface, and the line connecting any two calibration points on the same side is parallel to the road centerline. As shown in fig. 2 and 3, this arrangement facilitates subsequent data processing.
In some embodiments, a positioning device is disposed at the calibration point and configured to acquire and transmit the positioning data of the calibration point.
In some embodiments, the positioning data of all positioning points can be obtained by prior measurement and stored for subsequent use.
It is understood that the operations of the above embodiments may be one-off: once set up for the first time, they need not be repeated unless the environment changes; when calibration is performed again, these steps can be omitted.
In some embodiments, as shown in fig. 4, a rectangular strip is provided as the preset marker and placed according to the following rules: the midpoint of one side of the rectangular strip coincides with the calibration point and adjoins the calibrated edge of the single-color small cube of the three-dimensional calibration object; the two side edges of the strip do not extend beyond the cube. The strip can be made of ordinary single-color paper with an arbitrary aspect ratio, as long as its length and width are smaller than the side length of the cube. Specifically, the color of the strip may be chosen according to the environment in which the calibration point is identified; black is generally selected. The strip can be fixed beside the calibration point by gluing or similar means.
It can be understood that the preset marker may also be set once: after the first setup, the rectangular strip can be retained without resetting; when camera calibration is performed again, the step of setting the rectangular strip can be omitted.
In some embodiments, the camera is operated to capture video or pictures ahead of it within its working radius, obtaining the image data.
In some embodiments, the image data are processed to generate a circumscribed rectangle of the calibration object in the image; the lower side of the circumscribed rectangle is extended downward by a preset pixel coordinate length to generate a region of interest; the region of interest is traversed to generate the marked region of the preset marker associated with the calibration object; and the marked region is traversed to generate a center line, wherein the pixel coordinate of the upper endpoint of the center line is the pixel coordinate of the calibration point.
In some embodiments, the image data are processed by a deep-learning semantic segmentation method. In general, the following steps may be taken: collect video data containing the recognition target; convert the video data into pictures; annotate the targets in the pictures with an annotation tool to generate sample data; train a network model with the sample data; and call the model to recognize the target.
In some embodiments, the deep-learning neural network model is built as follows: first, starting from a pre-trained model obtained by training VGG16, train and output an FCN-32s model; take FCN-32s as the pre-trained model, train with new samples, and output a trained FCN-16s model; take FCN-16s as the pre-trained model, train with new samples, and output a trained FCN-8s model; finally, take FCN-8s as the pre-trained model, train with new samples, and output the trained FCN-4s model as the target model. Here, FCN stands for fully convolutional network, and VGG for the Visual Geometry Group network. It should be noted that models trained with other deep neural networks, such as GoogLeNet, can also be used.
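By way of illustration only (the patent itself supplies no code), the staged fine-tuning pattern described above can be sketched in PyTorch as follows; torchvision's FCN-ResNet50 is used here as a stand-in for the VGG16-based FCN-32s/16s/8s/4s variants, and the data, epoch count, and learning rate are assumed values:

```python
# Illustrative sketch of staged fine-tuning: each stage starts from the
# previous stage's weights and continues training with new samples.
# Assumes a recent torchvision (>= 0.13) API.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

def train_stage(model, loader, epochs=2, lr=1e-4):
    """Fine-tune one stage of the cascade and return the trained model."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, masks in loader:       # masks: (N, H, W) class indices
            opt.zero_grad()
            out = model(images)["out"]     # (N, num_classes, H, W)
            loss_fn(out, masks).backward()
            opt.step()
    return model

# Synthetic stand-in samples so the sketch runs end to end.
stage1_loader = [(torch.rand(2, 3, 64, 64), torch.randint(0, 2, (2, 64, 64)))]
stage2_loader = [(torch.rand(2, 3, 64, 64), torch.randint(0, 2, (2, 64, 64)))]

# Two classes: background vs. calibration object.
model = fcn_resnet50(weights=None, weights_backbone=None, num_classes=2)

# Stage 1 (analogous to training FCN-32s from the VGG16 pre-trained model).
model = train_stage(model, stage1_loader)

# Later stages reuse the previous stage's weights and train with new samples
# (analogous to FCN-16s, FCN-8s, and finally FCN-4s as the target model).
model = train_stage(model, stage2_loader)
```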
In some embodiments, the image data are processed with contour extraction (for example, OpenCV's findContours) to obtain the circumscribed quadrilaterals of the small-cube image of the three-dimensional calibration object, and the circumscribed rectangle with the minimum area is output.
In some embodiments, the region of interest is generated by extending the lower edge of the output circumscribed rectangle downward, in pixel coordinates, by the preset pixel coordinate length. Generally, the main factor in choosing this length is the pixel error incurred when obtaining the circumscribed rectangle of the small-cube image, i.e., the pixel difference between the theoretical and actual positions of the small-cube image.
In some embodiments, the preset pixel coordinate length satisfies: 10 pixels ≤ preset pixel coordinate length ≤ 20 pixels.
In some embodiments, the preset pixel coordinate length is 15, i.e., the lower edge of the circumscribed rectangle is extended downward by 15 pixels.
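A minimal OpenCV sketch of the two steps above, assuming `mask` is the binary segmentation mask of the calibration object produced by the network (the synthetic mask, image size, and the reading of the ROI as the extended rectangle are illustrative assumptions):

```python
import cv2
import numpy as np

# Synthetic binary mask standing in for the network's segmentation output.
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.rectangle(mask, (300, 200), (360, 260), 255, -1)  # calibration-object blob

# findContours on the mask (OpenCV 4.x return signature), then take the
# bounding rectangle of minimum area among all contours, as described above.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rects = [cv2.boundingRect(c) for c in contours]          # (x, y, w, h) each
x, y, w, h = min(rects, key=lambda r: r[2] * r[3])

# Extend the rectangle's lower edge downward by the preset pixel coordinate
# length (10-20 pixels; 15 preferred) to form the region of interest.
extend = 15
img_h = mask.shape[0]
roi = (x, y, w, min(h + extend, img_h - y))              # (x, y, w, h) of ROI
```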
In some embodiments, as shown in fig. 4, the preset marker associated with the calibration object is a rectangular strip; the upper edge of the strip adjoins the lower edge of the calibration object, the midpoint of the strip's upper edge coincides with the calibration point, and the strip's side lengths are smaller than those of the calibration object, so that in the image the left and right sides of the strip do not extend beyond the left and right sides of the calibration object.
In some embodiments, with the three-dimensional calibration object parallel to the camera's optical axis, the region of interest is traversed to generate the marked region of the preset marker associated with the calibration object.
In some embodiments, the marked region of the preset marker associated with the calibration object is generated as follows: let I(x, y) be any pixel point in the region of interest, and let I(x-δ, y) and I(x+δ, y) be the two pixel points horizontally symmetric about I(x, y), where δ is the pixel coordinate length of the preset marker in the image;
then:
d1 = I(x, y) - I(x-δ, y)
d2 = I(x, y) - I(x+δ, y)
where d1 and d2 are the pixel differences between the pixel point and its corresponding symmetric pixel points;
D = d1 + d2 - |I(x+δ, y) - I(x-δ, y)|
where D, the sum of the two pixel differences minus the absolute difference between the two symmetric pixel points, characterizes how strongly the pixel point differs from both of its symmetric neighbors;
let L(x, y) be the binarization function of the pixel value at the pixel point: when d1 > 0, d2 > 0 and D > T are all satisfied, L(x, y) = 1; otherwise L(x, y) = 0. That is,
L(x, y) = 1, if d1 > 0, d2 > 0 and D > T
L(x, y) = 0, otherwise
Here, the binarization threshold T = α × I(x, y), and α is a threshold coefficient.
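Transcribed directly from the formulas above, a NumPy sketch of this test might look as follows; `roi` is assumed to be a grayscale crop of the region of interest, `delta` the marker's pixel length, and the default α an assumed value within the range stated next:

```python
import numpy as np

def mark_region(roi: np.ndarray, delta: int, alpha: float = 0.5) -> np.ndarray:
    """Binarize the ROI per the test above: L(x, y) = 1 where the pixel value
    exceeds both horizontally symmetric neighbors at distance delta (d1 > 0,
    d2 > 0) and the combined difference D exceeds T = alpha * I(x, y)."""
    I = roi.astype(np.float64)            # note: numpy indexes as I[y, x]
    h, w = I.shape
    L = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(delta, w - delta):
            d1 = I[y, x] - I[y, x - delta]
            d2 = I[y, x] - I[y, x + delta]
            D = d1 + d2 - abs(I[y, x + delta] - I[y, x - delta])
            if d1 > 0 and d2 > 0 and D > alpha * I[y, x]:
                L[y, x] = 1
    return L
```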
In some embodiments, the threshold coefficient satisfies 0.3 ≤ α ≤ 0.8.
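The remaining step, traversing the marked region to generate a center line whose upper endpoint gives the calibration point, is stated but not elaborated in the embodiments; one plausible NumPy sketch (the row-wise centering strategy is an assumption) is:

```python
import numpy as np

def calibration_point(L: np.ndarray):
    """Return the (x, y) pixel coordinate, in ROI coordinates, of the upper
    endpoint of the center line of the marked region L; None if L is empty."""
    ys, xs = np.nonzero(L)                 # L: binary array from mark_region
    if ys.size == 0:
        return None
    top = int(ys.min())                    # topmost marked row
    center_x = xs[ys == top].mean()        # center of the marked run there
    return (int(round(center_x)), top)
```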
Accordingly, in some embodiments, a readable storage medium is provided, on which a computer program is stored that performs the calibration point identification method of the foregoing embodiments.
The various embodiments or features mentioned herein may, where they do not conflict, be combined with one another as additional alternative embodiments. The limited set of alternative embodiments formed by such combinations of features, even where not listed above, remains within the scope of the disclosed technology, as those skilled in the art will understand or infer from the figures and the description above.
Finally, it is emphasized that the above embodiments, which are typical and preferred embodiments of the present disclosure, are provided only to explain the technical solutions of the present disclosure in detail for the reader's convenience, and are not intended to limit the protection scope or applications of the present disclosure.
Therefore, any modifications, equivalents, improvements and the like made within the spirit and principle of the present disclosure should be covered within the scope of the present disclosure.

Claims (8)

1. A method for identifying a calibration point, characterized by comprising the following steps:
processing image data to generate a circumscribed rectangle of the calibration object in the image;
extending the lower side of the circumscribed rectangle downward by a preset pixel coordinate length to generate a region of interest;
traversing the region of interest to generate the marked region of a preset marker associated with the calibration object;
and traversing the marked region to generate a center line, wherein the pixel coordinate of the upper endpoint of the center line is the pixel coordinate of the calibration point.
2. The method of claim 1, wherein: the image data are processed by a deep-learning semantic segmentation method, and the circumscribed rectangle of the calibration object is generated by a bounding-rectangle method.
3. The method of claim 1, wherein: the preset pixel coordinate length satisfies: 10 pixels ≤ preset pixel coordinate length ≤ 20 pixels.
4. The method of claim 1, wherein: the preset pixel coordinate length is 15 pixels.
5. The method of claim 1, wherein: the preset marker associated with the calibration object is a rectangular strip; the upper edge of the rectangular strip adjoins the lower edge of the calibration object, the midpoint of the strip's upper edge coincides with the calibration point, and in the image the left and right sides of the strip do not extend beyond the left and right sides of the calibration object.
6. The method of claim 1, wherein:
the method for generating the marked region of the preset marker associated with the calibration object comprises: let I(x, y) be any pixel point in the region of interest, and let I(x-δ, y) and I(x+δ, y) be the two pixel points horizontally symmetric about I(x, y), where δ is the pixel coordinate length of the preset marker in the image;
then:
d1 = I(x, y) - I(x-δ, y)
d2 = I(x, y) - I(x+δ, y)
where d1 and d2 are the pixel differences between the pixel point and its corresponding symmetric pixel points;
D = d1 + d2 - |I(x+δ, y) - I(x-δ, y)|
where D, the sum of the two pixel differences minus the absolute difference between the two symmetric pixel points, characterizes how strongly the pixel point differs from both of its symmetric neighbors;
let L(x, y) be the binarization function of the pixel value at the pixel point: when d1 > 0, d2 > 0 and D > T are all satisfied, L(x, y) = 1; otherwise L(x, y) = 0; that is,
L(x, y) = 1, if d1 > 0, d2 > 0 and D > T
L(x, y) = 0, otherwise
where the binarization threshold T = α × I(x, y) and α is a threshold coefficient.
7. The method of claim 6, wherein the threshold coefficient satisfies 0.3 ≤ α ≤ 0.8.
8. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which executes the method according to any one of claims 1 to 7.
CN201910816012.4A 2019-08-30 2019-08-30 Method for identifying a calibration point and storage medium Withdrawn CN111145261A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910816012.4A CN111145261A (en) 2019-08-30 2019-08-30 Method for identifying a calibration point and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910816012.4A CN111145261A (en) 2019-08-30 2019-08-30 Method for identifying a calibration point and storage medium

Publications (1)

Publication Number Publication Date
CN111145261A 2020-05-12

Family

ID=70516798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910816012.4A Withdrawn CN111145261A (en) 2019-08-30 2019-08-30 Method for identifying index point and storage medium

Country Status (1)

Country Link
CN (1) CN111145261A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991375A (en) * 2021-02-08 2021-06-18 上海通办信息服务有限公司 Method and system for reshaping arbitrary-shaped image area into N rectangular areas
CN112991375B (en) * 2021-02-08 2024-01-23 上海通办信息服务有限公司 Method and system for remolding image area with arbitrary shape into N rectangular areas
CN113449745A (en) * 2021-08-31 2021-09-28 北京柏惠维康科技有限公司 Method, device and equipment for identifying marker in calibration object image and readable medium
CN113449745B (en) * 2021-08-31 2021-11-19 北京柏惠维康科技有限公司 Method, device and equipment for identifying marker in calibration object image and readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
Application publication date: 20200512