CN113345016A - Positioning pose judgment method for binocular recognition - Google Patents


Info

Publication number
CN113345016A
CN113345016A (application CN202110436188.4A)
Authority
CN
China
Prior art keywords
image
cognitive
binocular
dimensional coordinate
coordinate space
Prior art date
Legal status (assumption, not a legal conclusion): Pending
Application number
CN202110436188.4A
Other languages
Chinese (zh)
Inventor
傅进
周刚
黄赟
朱奕琦
穆国平
许路广
陆飞
黄杰
申志成
沈熙辰
吴侃
Current Assignee (the listed assignee may be inaccurate)
Jiaxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
Jiaxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority date / Filing date: 2021-04-22; Publication date: 2021-09-03
Application filed by Jiaxing Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority to CN202110436188.4A
Publication of CN113345016A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 — Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T 3/02
    • G06T 5/00 — Image enhancement or restoration
    • G06T 5/40 — Image enhancement or restoration by the use of histogram techniques
    • G06T 5/70
    • G06T 7/60 — Analysis of geometric attributes
    • G06T 7/90 — Determination of colour characteristics
    • G06T 7/97 — Determining parameters from multiple pictures
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10028 — Range image; Depth image; 3D point clouds

Abstract

The invention discloses a method for judging binocular recognition positioning poses, which comprises the following steps: collecting images P_An and P_Bn under different illumination; performing illumination normalization on P_An and P_Bn to construct illumination reference images P_A0 and P_B0; preprocessing the reference images P_A0 and P_B0 and carrying out cognitive determination; confirming the targeted cognitive features of the acquisition object; performing parallax calculation and acquiring a point cloud; constructing a three-dimensional coordinate space; and judging whether the three-dimensional coordinate space meets the recognition accuracy requirement and the error requirement. In this scheme, distance is measured from the parallax between images collected by a binocular camera, and an accurate three-dimensional coordinate space is constructed for precise control of a substation robot. Because image changes caused by changes in ambient light would otherwise disturb that control, the depth camera is combined with illumination compensation so that the collected images are standardized and judged uniformly, which facilitates all-weather operation.

Description

Positioning pose judgment method for binocular recognition
Technical Field
The invention relates to the technical field of image recognition, in particular to a method for judging binocular recognition positioning poses.
Background
With the development of society, the demand for electricity in production and daily life keeps growing, and so do the demands placed on power lines and equipment. Under this rising load, facilities built earlier to transmit power, such as transmission lines and substations, require more precise power operation and maintenance. Monitoring of power equipment is usually achieved by retrofitting the equipment as a whole, adding auxiliary acquisition and regulation devices for Internet-of-Things control; but retrofitting an early-stage substation is costly, and the substation can hardly operate normally during the retrofit. Using a substation operation robot instead enables operation monitoring of the substation without interrupting its work.
Binocular stereo vision is an important form of machine vision. Based on the parallax principle, it uses imaging devices to acquire two images of the object under test from different positions, computes the positional deviation between corresponding image points, and combines this with the camera parameters to obtain the three-dimensional geometry of the object and the distance information of the spatial scene. Using binocular stereo vision measurement to let a substation operation robot precisely manipulate substation equipment offers high efficiency, adequate accuracy, a simple system structure and low cost, and is well suited to online, non-contact detection.
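The parallax principle described above reduces to the relation Z = f·B/d between depth Z, focal length f, baseline B and disparity d. A minimal sketch, with illustrative camera values that are not taken from the patent:

```python
# Hedged sketch of binocular ranging: depth Z = f * B / d, where f is
# the focal length in pixels, B the baseline between the two cameras
# in metres, and d the disparity in pixels. The numbers below are
# illustrative assumptions, not values from this patent.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in metres of a point observed with the given disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A camera pair with f = 700 px and a 0.12 m baseline:
z = depth_from_disparity(700.0, 0.12, 42.0)  # 700 * 0.12 / 42 = 2.0 m
```

Larger disparity means a nearer point, which is why accurate disparity computation (step S5 below) directly bounds the ranging error.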
Chinese patent document CN111982074A discloses a "sitting posture identification method based on binocular stereo vision". The device comprises a lamp base, a lighting fixture and a lampshade; an electronic control device with a power supply circuit and a processor is arranged in the lamp base; the processor is connected to a binocular stereo vision device and a buzzer; the binocular stereo vision device is mounted on the lamp base, pointing obliquely upward at an angle of theta degrees; and the processor runs a sitting posture recognition algorithm comprising the following steps: (1) acquire a depth image hi(x, y) with the binocular stereo vision device; (2) convert the depth image hi(x, y) into horizontal distance information di(x, y); (3) binarize di(x, y) to obtain D0i(x, y); (4) apply erosion and dilation to D0i(x, y) to obtain D1i(x, y); (5) count the number Ni of non-zero pixels in D1i(x, y), and if Ni > M·N·r, judge the sitting posture incorrect. That scheme judges sitting posture from depth of field using binocular recognition alone and does not consider the influence of illumination conditions on the depth information, so the images drift and the judgment result is affected.
Disclosure of Invention
The invention mainly solves the technical problems of the prior art scheme — the influence of illumination is not considered and the judgment condition is single, so the judgment result is affected — and provides a method for judging binocular recognition positioning poses.
The above technical problem of the invention is mainly solved by the following technical scheme. The invention comprises the following steps:
S1, collecting images P_An and P_Bn under different illumination through two cameras with fixed relative positions;
S2, performing illumination normalization on image P_An and image P_Bn to construct illumination reference images P_A0 and P_B0;
S3, preprocessing the illumination reference images P_A0 and P_B0 and carrying out cognitive determination;
S4, confirming, through the cognitive determination, the targeted cognitive features of the acquisition object based on the deep-learning cognitive features of the image;
S5, performing parallax calculation according to the binocular stereo imaging principle and acquiring a point cloud;
S6, constructing a three-dimensional coordinate space from the targeted cognitive features of the acquisition object and the point cloud;
S7, judging whether the three-dimensional coordinate space meets the recognition accuracy requirement and the error requirement;
S8, if the requirements are satisfied, outputting the three-dimensional coordinate space; otherwise, returning to step S4.
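The S4-S8 loop above can be sketched as plain control flow. All helper names (build_coordinate_space, select_features, localize) and the toy error model are illustrative assumptions, not the patent's algorithms:

```python
# Control-flow sketch of steps S4-S8: feature selection is repeated
# until the coordinate space meets the error requirement. The helpers
# are stand-ins; the "error" here simply shrinks as more targeted
# cognitive features are used.

def build_coordinate_space(features, cloud):
    # Stand-in for S6: pretend error falls with the feature count.
    return {"features": features, "error": 1.0 / len(features)}

def select_features(candidates, k):
    # Stand-in for S4: take the first k candidate features.
    return candidates[:k]

def localize(candidates, cloud, max_error=0.3):
    for k in range(1, len(candidates) + 1):          # S4: (re)confirm features
        space = build_coordinate_space(select_features(candidates, k), cloud)  # S6
        if space["error"] <= max_error:              # S7: error requirement met?
            return space                             # S8: output the space
    raise RuntimeError("no feature subset met the error requirement")

space = localize(["contour", "texture", "color", "geometry"], cloud=None)
```

With the toy model, one feature gives error 1.0 and four features give 0.25, so the loop returns only after all four features are selected — mirroring how S8 loops back to S4 when discrimination is insufficient.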
Preferably, the illumination normalization of step S2 specifically includes:
S2.1, inputting images P_An(x_An, y_An) and P_Bn(x_Bn, y_Bn) and taking the logarithm;
S2.2, calculating the shadow-layer image;
S2.3, calculating the reflection-layer image and applying an exponential transformation;
S2.4, selecting a sample image g_A(x_An, y_An) of image P_An(x_An, y_An) and calculating its histogram, and taking a sample image g_B(x_Bn, y_Bn) of image P_Bn(x_Bn, y_Bn) and calculating its histogram;
S2.5, normalizing the reflection-layer images by histogram matching to obtain images r_A(x_A, y_A) and r_B(x_B, y_B) corrected by the illumination normalization method.
Preferably, the preprocessing includes filtering, noise reduction, white balance, warping and affine transformation. Filtering passes or rejects specific frequency bands in the signal to suppress and prevent interference; noise reduction eliminates interference factors; white balance corrects the color temperature and restores the color of the captured subject, so that pictures taken under different light sources resemble the colors seen by the human eye and the camera image accurately reflects the color of the photographed object; and an affine transformation is a linear transformation from one two-dimensional coordinate system to another.
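The affine step can be illustrated concretely: an affine map sends a point p to p' = A·p + t, a linear part A plus a translation t. The matrix and points below are illustrative, not parameters from the patent:

```python
import numpy as np

# Sketch of an affine transformation between two 2-D coordinate
# systems: p' = A @ p + t. A is the linear part, t the translation.

def affine_apply(points: np.ndarray, A: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply p' = A @ p + t to an (n, 2) array of points."""
    return points @ A.T + t

A = np.array([[0.0, -1.0],     # 90-degree counter-clockwise rotation...
              [1.0,  0.0]])
t = np.array([5.0, 0.0])       # ...followed by a shift of 5 along x
pts = np.array([[1.0, 0.0], [0.0, 1.0]])
out = affine_apply(pts, A, t)  # [[5., 1.], [4., 0.]]
```

Composing such maps keeps the transformation linear in homogeneous coordinates, which is why the preprocessing can fold warping and affine correction into one step.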
Preferably, the cognitive determination of step S3 specifically includes:
S3.1, determining the general cognitive features of images P_A0 and P_B0;
S3.2, creating the matching relation between P_A0 and P_B0;
S3.3, recognizing the cognitive attributes of the acquisition object.
Preferably, the general cognitive features of step S3.1 include texture, contour and color, and the general cognitive features contain the targeted cognitive features of step S4. The general cognitive features are a general mode of image recognition, so selecting the targeted cognitive features after the cognitive determination gives a better recognition effect.
Preferably, the method of step S3.1 for determining the general cognitive features of images P_A0 and P_B0 includes: determining the type of the figure, the geometric length of the lines composing the figure, the colors of the different characteristic regions composing the figure, the connection relations of the lines composing the figure, the geometric relation between the acquisition object and other general figures, and the length proportions of the contours composing the figure.
Preferably, the specific types of cognitive attributes in step S3.3 include color, contour, surface texture and the geometric structure of the contour. The appearance characteristics of the acquired image are determined by recognizing its color, contour, surface texture and contour geometry, so as to achieve accurate recognition.
Preferably, if the recognition accuracy requirement and the error requirement of the three-dimensional coordinate space cannot be satisfied in step S7, it is judged that the selected targeted cognitive features lack discrimination; the process returns to step S4 to reconfirm the targeted cognitive features of the acquisition object based on the deep-learning cognitive features of the image and to continue constructing the three-dimensional coordinate space, until the recognition accuracy requirement and the error requirement are satisfied.
The invention has the beneficial effects that: distance is measured from the parallax between images collected by the binocular camera, and an accurate three-dimensional coordinate space is constructed for precise control of the substation robot; because image changes caused by ambient-light changes would otherwise disturb that control, the depth camera is combined with illumination compensation so that the collected images are standardized and judged uniformly, facilitating all-weather operation.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical scheme of the invention is further described below through embodiments and the accompanying drawings. Embodiment: as shown in fig. 1, the method for determining a binocular recognition positioning pose according to this embodiment includes the following steps:
S1, collecting images P_An and P_Bn under different illumination through two cameras with fixed relative positions;
S2, performing illumination normalization on image P_An and image P_Bn to construct illumination reference images P_A0 and P_B0. The normalization specifically includes:
S2.1, inputting images P_An(x_An, y_An) and P_Bn(x_Bn, y_Bn) and taking the logarithm;
S2.2, calculating the shadow-layer image. An edge-preserving filter optimized by weighted least mean squares is used to obtain the corresponding shadow-layer image, so that on the one hand it stays as close as possible to the input image and on the other hand it is as smooth as possible where gradients are small; that is, the shadow-layer image is required to be smooth everywhere while retaining the basic features of the original image.
The differences of the resulting images are then computed separately in the x direction (the difference between the gray values of horizontally adjacent pixels) and in the y direction (the difference between the gray values of vertically adjacent pixels).
Each element T_{i,j} of the matrix T is then calculated. Consistent with the weighted-least-squares edge-preserving filter and the constants below, the direction-wise elements take the form

T^x_{i,j} = λ / (|l_x(i, j)|^a + ε),  T^y_{i,j} = λ / (|l_y(i, j)|^a + ε),

where l_x and l_y are the x- and y-direction differences above, the constant ε = 0.00001, the parameter λ > 0, and the parameter a = 1.0-1.8.
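The element formulas are not reproduced in the text, so the sketch below assumes the standard weighted-least-squares edge-preserving weight w = λ / (|gradient|^a + ε), which matches the constants quoted (ε = 0.00001, λ > 0, a between 1.0 and 1.8):

```python
import numpy as np

# Hedged sketch of the direction-wise smoothness weights of the
# weighted-least-squares edge-preserving filter. The exact per-element
# formula for T is not given in the text; this follows the standard
# WLS form as an assumption, using the stated constants.

def wls_weights(log_img: np.ndarray, lam: float = 1.0, a: float = 1.2,
                eps: float = 1e-5):
    gx = np.diff(log_img, axis=1)        # x-direction: horizontal neighbour differences
    gy = np.diff(log_img, axis=0)        # y-direction: vertical neighbour differences
    tx = lam / (np.abs(gx) ** a + eps)   # large weight where the gradient is small
    ty = lam / (np.abs(gy) ** a + eps)   #   -> smoothing; small weight at edges
    return tx, ty

tx, ty = wls_weights(np.log1p(np.random.rand(8, 8)))
```

Weights are large in flat regions (forcing smoothness of the shadow layer) and small across edges (preserving the basic features of the original image), which is exactly the behaviour S2.2 asks for.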
S2.3, calculating the reflection-layer image and performing the exponential transformation. The illumination-independent reflection layers are computed in the logarithmic domain as

R_A(x_A, y_A) = l_A(x_A, y_A) − s_A(x_A, y_A),  R_B(x_B, y_B) = l_B(x_B, y_B) − s_B(x_B, y_B),

where l denotes the log-domain input image and s the shadow-layer image; the exponential transformation exp(R_A(x_A, y_A)), exp(R_B(x_B, y_B)) then yields the images free of illumination influence.
S2.4 selecting image PAn(xAn,yAn) Sample image g ofA(xAn,yAn) And calculating a histogram thereof
Figure BDA0003033184590000065
Get picture PBn(xBn,yBn) Sample image g ofB(xBn,yBn) And calculating a histogram thereof
Figure BDA0003033184590000066
S2.5, normalizing the reflection-layer images by histogram matching to obtain the images r_A(x_A, y_A) and r_B(x_B, y_B) corrected by the illumination normalization method, specifically:
Find the luminance distribution histograms of the reflection-layer images and of the sample images g_A(x_An, y_An) and g_B(x_Bn, y_Bn),

p(i) = m_i / m,

where m_i is the number of pixels in the image with gray level i and m is the total number of pixels in the image.
Match the histogram of the A-side reflection layer to that of g_A, and the histogram of the B-side reflection layer to that of g_B; that is, transform the gray values of the pixels so that the transformed image r_A(x_A, y_A) has the same histogram as the sample image g_A(x_An, y_An), and r_B(x_B, y_B) has the same histogram as g_B(x_Bn, y_Bn). The resulting r_A(x_A, y_A) and r_B(x_B, y_B) are the images to be processed of step S1, corrected by the illumination normalization method.
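Step S2.5's histogram matching can be sketched with the usual cumulative-distribution remapping; the array contents below are illustrative, not the patent's data:

```python
import numpy as np

# Sketch of histogram matching for S2.5: remap the gray levels of a
# reflectance image so its cumulative histogram follows that of the
# sample image. Uses p(i) = m_i / m accumulated into a CDF.

def match_histogram(src: np.ndarray, ref: np.ndarray, levels: int = 256) -> np.ndarray:
    src_hist, _ = np.histogram(src, bins=levels, range=(0, levels))
    ref_hist, _ = np.histogram(ref, bins=levels, range=(0, levels))
    src_cdf = np.cumsum(src_hist) / src.size   # cumulative p(i) = m_i / m
    ref_cdf = np.cumsum(ref_hist) / ref.size
    # For each source level, pick the reference level with the nearest CDF.
    mapping = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1)
    return mapping[src]

rng = np.random.default_rng(0)
src = rng.integers(0, 128, size=(32, 32))      # darker reflectance image
ref = rng.integers(64, 256, size=(32, 32))     # brighter sample image
out = match_histogram(src, ref)
```

After the remap, the transformed image's gray-level distribution tracks the sample image's, which is what makes images collected under different illumination comparable in the later judgment steps.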
S3, preprocessing the illumination reference images P_A0 and P_B0 and carrying out cognitive determination. The preprocessing includes filtering, which passes or rejects specific frequency bands in the signal to suppress and prevent interference; noise reduction, which eliminates interference factors; white balance, which corrects the color temperature and restores the color of the captured subject so that pictures taken under different light sources resemble the colors seen by the human eye and the camera image accurately reflects the color of the photographed object; and warping and affine transformation, an affine transformation being a linear transformation from one two-dimensional coordinate system to another.
The cognitive determination specifically comprises:
S3.1, determining the general cognitive features of images P_A0 and P_B0; the general cognitive features include texture, contour and color. The method for determining the general cognitive features of P_A0 and P_B0 includes: determining the type of the figure, the geometric length of the lines composing the figure, the colors of the different characteristic regions composing the figure, the connection relations of the lines composing the figure, the geometric relation between the acquisition object and other general figures, and the length proportions of the contours composing the figure.
S3.2, creating the matching relation between images P_A0 and P_B0;
S3.3, recognizing the cognitive attributes of the acquisition object; the specific types of cognitive attributes include color, contour, surface texture and the geometric structure of the contour. The appearance characteristics of the acquired image are determined by recognizing these attributes, so as to achieve accurate recognition.
S4, confirming, through the cognitive determination, the targeted cognitive features of the acquisition object based on the deep-learning cognitive features of the image; the targeted cognitive features are contained in the general cognitive features. The general cognitive features are a general mode of image recognition, so selecting the targeted cognitive features after the cognitive determination gives a better recognition effect.
S5, performing parallax calculation according to the binocular stereo imaging principle and acquiring a point cloud picture;
s6, constructing a three-dimensional coordinate space according to the pertinence cognitive features and the point cloud pictures of the collected objects;
S7, judging whether the three-dimensional coordinate space meets the recognition accuracy requirement and the error requirement; if the requirements cannot be met, judging that the selected targeted cognitive features lack discrimination;
S8, if the requirements are satisfied, outputting the three-dimensional coordinate space; otherwise, returning to step S4 to reconfirm the targeted cognitive features of the acquisition object based on the deep-learning cognitive features of the image and to continue constructing the three-dimensional coordinate space, until the recognition accuracy requirement and the error requirement are met.
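Steps S5-S6 — turning a dense disparity map into a three-dimensional coordinate space — can be sketched with the pinhole reprojection X = (u − cx)·Z/f, Y = (v − cy)·Z/f, Z = f·B/d. The camera parameters below are illustrative assumptions:

```python
import numpy as np

# Sketch of S5/S6: reproject a dense disparity map into a 3-D point
# cloud. f, baseline, cx, cy are illustrative camera parameters, not
# values from the patent.

def disparity_to_cloud(disp, f=700.0, baseline=0.12, cx=None, cy=None):
    h, w = disp.shape
    cx = w / 2 if cx is None else cx
    cy = h / 2 if cy is None else cy
    v, u = np.mgrid[0:h, 0:w].astype(float)     # pixel coordinates
    valid = disp > 0                            # skip unmatched pixels
    z = np.where(valid, f * baseline / np.where(valid, disp, 1.0), np.nan)
    x = (u - cx) * z / f                        # back-project through the pinhole
    y = (v - cy) * z / f
    return np.dstack([x, y, z])                 # (h, w, 3) coordinate space

# A flat target at constant disparity 42 px sits at Z = 700*0.12/42 = 2.0 m:
cloud = disparity_to_cloud(np.full((4, 4), 42.0))
```

The resulting (h, w, 3) array is the per-pixel three-dimensional coordinate space whose error requirement step S7 then checks.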
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments, or alternatives may be employed, by those skilled in the art without departing from the spirit or scope of the invention as defined in the appended claims.
Although terms such as illumination normalization and cognitive determination are used frequently herein, the possibility of using other terms is not excluded. These terms are used merely to describe and explain the nature of the invention more conveniently; construing them as imposing any additional limitation would be contrary to the spirit of the invention.

Claims (8)

1. A method for judging binocular recognition positioning poses, characterized by comprising the following steps:
S1, collecting images P_An and P_Bn under different illumination through two cameras with fixed relative positions;
S2, performing illumination normalization on image P_An and image P_Bn to construct illumination reference images P_A0 and P_B0;
S3, preprocessing the illumination reference images P_A0 and P_B0 and carrying out cognitive determination;
S4, confirming, through the cognitive determination, the targeted cognitive features of the acquisition object based on the deep-learning cognitive features of the image;
S5, performing parallax calculation according to the binocular stereo imaging principle and acquiring a point cloud;
S6, constructing a three-dimensional coordinate space from the targeted cognitive features of the acquisition object and the point cloud;
S7, judging whether the three-dimensional coordinate space meets the recognition accuracy requirement and the error requirement;
S8, if the requirements are satisfied, outputting the three-dimensional coordinate space; otherwise, returning to step S4.
2. The binocular recognition positioning pose determination method according to claim 1, characterized in that the illumination normalization of step S2 specifically includes:
S2.1, inputting images P_An(x_An, y_An) and P_Bn(x_Bn, y_Bn) and taking the logarithm;
S2.2, calculating the shadow-layer image;
S2.3, calculating the reflection-layer image and applying an exponential transformation;
S2.4, selecting a sample image g_A(x_An, y_An) of image P_An(x_An, y_An) and calculating its histogram, and taking a sample image g_B(x_Bn, y_Bn) of image P_Bn(x_Bn, y_Bn) and calculating its histogram;
S2.5, normalizing the reflection-layer images by histogram matching to obtain images r_A(x_A, y_A) and r_B(x_B, y_B) corrected by the illumination normalization method.
3. The positioning pose determination method for binocular recognition according to claim 1, characterized in that the preprocessing of step S3 includes filtering, noise reduction, white balance, warping and affine transformation.
4. The method for determining the binocular recognition positioning pose according to claim 1, characterized in that the cognitive determination of step S3 specifically comprises:
S3.1, determining the general cognitive features of images P_A0 and P_B0;
S3.2, creating the matching relation between P_A0 and P_B0;
S3.3, recognizing the cognitive attributes of the acquisition object.
5. The method for binocular recognition positioning pose judgment according to claim 4, characterized in that the general cognitive features of step S3.1 comprise texture, contour and color, and the general cognitive features contain the targeted cognitive features of step S4.
6. The binocular recognition positioning pose determination method according to claim 4, characterized in that the method of step S3.1 for determining the general cognitive features of images P_A0 and P_B0 comprises: determining the type of the figure, the geometric length of the lines composing the figure, the colors of the different characteristic regions composing the figure, the connection relations of the lines composing the figure, the geometric relation between the acquisition object and other general figures, and the length proportions of the contours composing the figure.
7. The binocular recognition positioning pose determination method according to claim 4, characterized in that the specific types of cognitive attributes in step S3.3 comprise color, contour, surface texture and the geometric structure of the contour.
8. The binocular recognition positioning pose determination method according to claim 1, characterized in that if the recognition accuracy requirement and the error requirement of the three-dimensional coordinate space cannot be met in step S7, it is judged that the selected targeted cognitive features lack discrimination, and the process returns to step S4 to reconfirm the targeted cognitive features of the acquisition object based on the deep-learning cognitive features of the image and to continue constructing the three-dimensional coordinate space until the recognition accuracy requirement and the error requirement are met.
Application CN202110436188.4A, filed 2021-04-22: Positioning pose judgment method for binocular recognition. Status: Pending.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110436188.4A CN113345016A (en) 2021-04-22 2021-04-22 Positioning pose judgment method for binocular recognition


Publications (1)

Publication number: CN113345016A — Publication date: 2021-09-03

Family

ID=77468374


Country Status (1)

CN: CN113345016A

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103295010A (en) * 2013-05-30 2013-09-11 西安理工大学 Illumination normalization method for processing face images
CN106022304A (en) * 2016-06-03 2016-10-12 浙江大学 Binocular camera-based real time human sitting posture condition detection method
CN107545247A (en) * 2017-08-23 2018-01-05 北京伟景智能科技有限公司 Three-dimensional cognitive approach based on binocular identification
CN107957246A (en) * 2017-11-29 2018-04-24 北京伟景智能科技有限公司 Article geometrical size measuring method on conveyer belt based on binocular vision
CN111897332A (en) * 2020-07-30 2020-11-06 国网智能科技股份有限公司 Semantic intelligent substation robot humanoid inspection operation method and system
CN111982074A (en) * 2020-08-21 2020-11-24 杭州晶一智能科技有限公司 Sitting posture identification method based on binocular stereo vision



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210903)