CN110728715B - Intelligent inspection robot camera angle self-adaptive adjustment method


Info

Publication number: CN110728715B (application CN201910831148.2A)
Authority
CN
China
Prior art keywords: camera, matrix, points, image, images
Legal status: Active
Application number: CN201910831148.2A
Other languages: Chinese (zh)
Other versions: CN110728715A
Inventors: 路绳方, 高阳, 陈烨, 陈庆, 焦良葆
Current Assignee: Nanjing Institute of Technology
Original Assignee: Nanjing Institute of Technology
Application filed by Nanjing Institute of Technology
Priority application: CN201910831148.2A
Published as CN110728715A; granted as CN110728715B
Legal status: Active


Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00 Image analysis › G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00 Image analysis › G06T7/70 Determining position or orientation of objects or cameras › G06T7/73 using feature-based methods › G06T7/74 involving reference images or patches

Abstract

The invention discloses an intelligent inspection robot camera angle self-adaptive adjustment method, which comprises the following steps: establishing a monocular mobile vision measurement model; acquiring internal reference calibration data of the camera; determining the image target points acquired by the robot at subsequent moments according to the positions of the initial image target points; solving a homography matrix and searching for the image matching points corresponding to the target points in the initial images; recovering, by the triangulation principle, the three-dimensional pose information of the same target point acquired by the robot at different moments under the camera coordinate system; and finally obtaining the deflection angle of the camera. The method solves the problem that the target deviates from the visual field center of the robot camera when the intelligent inspection robot works. It can adaptively adjust the camera angle even when positioning errors and cradle head rotation errors exist, realizes accurate positioning and accurate identification of the target, and completes intelligent inspection, fault diagnosis, identification and early warning of the robot target.

Description

Intelligent inspection robot camera angle self-adaptive adjustment method
Technical Field
The invention belongs to the technical field of machine vision measurement, and particularly relates to an intelligent inspection robot camera angle self-adaptive adjustment method.
Background
The intelligent inspection robot replaces manual inspection in a special environment, so that the field inspection efficiency is improved, the field maintenance cost is reduced, the limitation of manual inspection is reduced, and the application of the artificial intelligent technology in the special environment is expanded. In the running process of the intelligent inspection robot, equipment information and surrounding environment information to be detected are obtained by carrying a holder and a camera, and analysis and judgment of a special environment target state are realized by utilizing an image processing and pattern recognition technology. In special environments such as transformer substations, important machine room occasions and the like, the intelligent inspection robot is put into use, and good effects are achieved on intelligent detection, fault judgment and surrounding environment abnormality early warning of related equipment in the special environments.
In the inspection process, an intelligent inspection robot based on monocular vision has certain navigation and positioning errors when it walks and stops, and the cradle head it carries also has a certain rotation error. As a result, the object to be inspected deviates from the center of the visual field imaged by the camera; in severe cases the object deviates completely from the imaging visual field of the camera and cannot be imaged at all, which brings certain difficulties to the subsequent intelligent detection of objects and equipment fault early warning judgment.
Disclosure of Invention
The invention aims to: aiming at the defect that a certain rotation error exists in a cradle head carried by an intelligent inspection robot in the prior art, the invention discloses a self-adaptive adjustment method for the camera angle of the intelligent inspection robot, which solves the problem that a target deviates from the center position of the camera vision of the robot when the intelligent inspection robot works, helps the robot to image the same target point at a fixed position of an image acquired by the camera, and realizes accurate positioning and accurate identification of the target.
The technical scheme is as follows: the invention discloses an intelligent inspection robot camera angle self-adaptive adjustment method, which is characterized in that: the method comprises the following steps:
step A, a monocular mobile measurement system model is established by using a camera calibration method of plane square points according to a pinhole imaging model of a camera;
step B, according to the mapping relation between the three-dimensional space coordinate point and the plane two-dimensional coordinate point, an internal parameter matrix of the camera is obtained; wherein the internal parameter matrix refers to parameters related to the characteristics of the camera, including the focal length and pixel size of the camera;
step C, obtaining two images shot by the robot at two different moments at the same position through the camera of the intelligent inspection robot, and obtaining expressions of a basic matrix and an essential matrix; the basic matrix represents the inherent projective relation of the two-view epipolar geometry, and the essential matrix is the basic matrix under the normalized image coordinates;
step D, extracting the position information of the characteristic points of the two images according to the projection equation of the two images obtained in the step C, wherein the position information comprises two-dimensional coordinate values of the characteristic points;
step E, combining the image characteristic points extracted in the step D, solving a basic matrix by utilizing an 8-point algorithm, and then combining the internal parameter matrix obtained in the step B to solve an essential matrix, and decomposing the essential matrix to obtain an external parameter matrix of the camera; the extrinsic matrix realizes the conversion of points from a world coordinate system to a camera coordinate system; wherein the camera external parameter matrix comprises a rotation matrix and a translation matrix;
step F, solving a homography matrix between the two images according to at least 4 pairs of matching characteristic points between the two images and combining an SVD algorithm; wherein the homography matrix refers to projection mapping from one image plane to another image plane;
step G, utilizing the image positions corresponding to the characteristic points of the two images acquired at the front and rear moments by the robot, and solving the angle through which the camera must rotate according to the homography matrix solved in the step F, combining the binocular stereoscopic vision measurement principle with the internal parameters and the external structural parameters of the camera solved in the step E; the angle through which the camera must rotate is the rotation angle of the robot's cradle head. This angle is expressed in the camera coordinate system and can be decomposed along the x and y axes of the camera coordinate system, i.e., into the left-right and up-down rotation angles of the camera; after decomposition the angles are transmitted to the cradle head of the camera, and the adjustment of the camera angle can be realized.
Preferably, the step D further includes:
d1: extracting and matching SIFT feature points in an overlapping area of the images;
d2: and removing mismatching points in the image pairs by using a RANSAC algorithm, and realizing accurate registration of SIFT feature points between the two images.
Preferably, the step F further includes:
f1: according to a relational expression between an image coordinate system and a world coordinate system of a matching point between images at two moments of a perspective projection model of the camera;
f2: and solving a homography matrix between the two images by utilizing SVD (singular value decomposition) according to at least 4 pairs of matching characteristic points.
Preferably, the expression of the basic matrix and the essential matrix in the step C is:
F = A_r^{-T} S R A_l^{-1}
E=SR
wherein F is the basic matrix, E is the essential matrix, R is the rotation matrix, S is an antisymmetric matrix, and A_r and A_l are the camera internal parameter matrices at the two different moments.
Preferably, the angle expression of the camera rotation in the step G is:
θ = ∠P o_2 P′ = arccos( (P · P′) / (‖P‖ ‖P′‖) )
wherein P is the spatial characteristic point corresponding to one characteristic point in a group of matching characteristic points of the two images, and P′ is the virtual spatial three-dimensional point calculated according to the step G; the angle calculated under the camera coordinate system is decomposed along the x and y axes of the camera coordinate system, the decomposed angles respectively representing the left-right and up-down rotation angles of the camera, and the decomposed angles are transmitted to the cradle head of the camera to realize the adjustment of the camera angle.
The beneficial effects are that: the invention discloses an intelligent inspection robot camera angle self-adaptive adjustment method, which combines a monocular vision technology, utilizes two images shot by a robot at the same position and different moments to realize the extraction and matching of the features of the two images, and utilizes a mobile monocular vision positioning technology to complete the three-dimensional reconstruction of a target point. And the self-adaptive adjustment of the camera angle carried by the robot under the condition that various positioning errors exist is realized through the solved image coordinates of the two groups of corresponding points and the three-dimensional coordinates under the world coordinate system. The intelligent inspection robot solves the problem that the target deviates from the center position of the visual field of the robot camera when the intelligent inspection robot works, can adaptively adjust the angle of the camera under the condition that the positioning error of the robot and the rotation error of the cradle head exist, realizes the accurate positioning and accurate identification of the target, and completes the intelligent inspection, fault diagnosis, identification and early warning of the robot target.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of a monocular mobile intelligent inspection robot according to the present invention;
the intelligent inspection robot comprises: 1, camera; 2, cradle head; 3, laser radar; 4, ditch detection sensor; 5, four-wheel drive chassis; 6, anti-collision switch;
FIG. 3 is a diagram of the extraction and matching of features of the present invention;
fig. 4 is a diagram showing a binocular vision measurement system and a camera angle deflection calculation for a monocular mobile robot according to the present invention.
Detailed Description
The invention discloses an intelligent inspection robot camera angle self-adaptive adjustment method, which is characterized in that: the method comprises the following steps:
step A, a monocular mobile measurement system model is established by using a camera calibration method of plane square points according to the pinhole imaging model of the camera; the mobile monocular vision measuring system virtually forms a plurality of cameras by moving a single camera. The invention takes two images shot by the robot at a certain position at two different moments, forming a two-view vision measurement, as an example, and analyzes the principle of the binocular stereo vision measurement system constituted by mobile monocular vision.
Step B, obtaining an internal parameter matrix of the camera according to the mapping relation between the three-dimensional space coordinate point and the plane two-dimensional coordinate point. Assume the homogeneous coordinates of a three-dimensional point of the target plane are X̃ = (X, Y, Z, 1)^T, and the homogeneous coordinates of the corresponding two-dimensional point of the image plane are x̃ = (u, v, 1)^T. The projective relationship between the two is

s x̃ = A [R t] X̃   (1)

wherein s is an arbitrary non-zero scale factor; [R t] is a matrix of 3 rows and 4 columns, called the camera external parameter matrix; R is called the rotation matrix; t = (t_1, t_2, t_3)^T is called the translation matrix; and

A = | α_x   r    u_0 |
    |  0   α_y   v_0 |
    |  0    0     1  |

is referred to as the internal parameter (intrinsic) matrix of the camera. α_x and α_y are the scale factors of the u-axis and v-axis, (u_0, v_0) are the principal point coordinates, and r is the non-perpendicularity (skew) factor of the u and v axes. The internal parameter matrix A of the camera can be obtained by the Zhang plane calibration method.
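The pinhole relation of formula (1) can be sketched numerically. In the minimal Python example below, the intrinsic values (scale factors 800, principal point (320, 240), zero skew) and the camera pose are illustrative assumptions, not values from the invention:

```python
import numpy as np

# Internal parameter matrix A of formula (1): alpha_x, alpha_y scale factors,
# (u0, v0) principal point, skew r assumed 0.  All values are illustrative.
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# External parameter matrix [R t]: identity rotation, translation of 1 m along
# the optical axis (an assumed pose, for demonstration only).
Rt = np.hstack([np.eye(3), [[0.0], [0.0], [1.0]]])

def project(A, Rt, Xw):
    """Apply s * x = A [R t] X (formula (1)) and divide out the scale s."""
    x = A @ Rt @ Xw
    return x[:2] / x[2]

# A world point on the optical axis projects exactly to the principal point.
u, v = project(A, Rt, np.array([0.0, 0.0, 1.0, 1.0]))
```

Here the homogeneous world point (0, 0, 1, 1) sits on the optical axis, so its projection lands on the principal point (320, 240), as the model predicts.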
Step C, obtaining two images shot by the robot at two different moments at a certain position through the camera of the intelligent inspection robot, and obtaining the expressions of the basic matrix and the essential matrix. Assume the world three-dimensional homogeneous coordinate of a point P in space is X_W, and the two-dimensional image homogeneous coordinates in the two images shot at the two moments are p_1 and p_2. Then the projection equations of the cameras at the two moments can be obtained from formula (1) as

s_1 p_1 = A_1 [I 0] X_W,   s_2 p_2 = A_2 [R t] X_W   (2)

wherein s_1 and s_2 are the non-zero scale factors of the two cameras, and A_1 and A_2 are the camera internal parameter matrices at moment 1 and moment 2 respectively; because the camera only moves rigidly and its internal structural parameters do not change, A_1 = A_2.
Preferably, in the step C, combining the epipolar geometry constraint relationship, the expressions of the basic matrix F and the essential matrix E can be obtained from formula (2), respectively, as

F = A_2^{-T} S R A_1^{-1}   (3)

E = S R   (4)

wherein S is the antisymmetric matrix of the translation vector t,

S = |  0    -t_3   t_2 |
    |  t_3    0   -t_1 |
    | -t_2   t_1    0  |

As can be seen from formula (3), the basic matrix F is related only to the internal parameters of the two cameras and the external structural parameters of the system; since the mounted camera performs only rigid motion due to the rotation of the pan-tilt, the internal parameters of the camera are unchanged. Thus the essential matrix E of formula (4) can be obtained. It can be seen that E is related only to the external parameters of the vision system, and E can be decomposed to find the external structural parameters R and t between the two view models of the mobile monocular vision measurement system.
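The construction of S, E and F in formulas (3) and (4) can be checked on synthetic data. In the sketch below the rigid motion (R, t) and the intrinsics are assumptions chosen for demonstration; the epipolar residual p_2^T F p_1 of a true correspondence should vanish:

```python
import numpy as np

def skew(t):
    """Antisymmetric matrix S of formula (4): skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Assumed rigid motion of the camera between the two moments (pan of ~5.7 deg).
th = 0.1
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([0.2, 0.0, 0.05])

# A1 = A2 because the camera only moves rigidly (illustrative intrinsics).
A1 = A2 = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

E = skew(t) @ R                                   # essential matrix, formula (4)
F = np.linalg.inv(A2).T @ E @ np.linalg.inv(A1)   # basic matrix, formula (3)

# Epipolar check: a point seen in both views must satisfy p2^T F p1 = 0.
Xc1 = np.array([0.3, -0.2, 2.0])   # point in camera-1 coordinates
Xc2 = R @ Xc1 + t                  # same point in camera-2 coordinates
p1 = A1 @ Xc1; p1 = p1 / p1[2]
p2 = A2 @ Xc2; p2 = p2 / p2[2]
residual = float(p2 @ F @ p1)      # vanishes up to floating-point error
```

The residual is zero analytically, which confirms that the two formulas are mutually consistent for any rigid motion.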
Step D, extracting two image feature points according to the projection equation of the two images obtained in the step C;
preferably, the step D further includes:
d1: extracting and matching SIFT feature points in an overlapping area of the images;
d2: and removing mismatching points in the image pairs by using a RANSAC algorithm, and realizing accurate registration of SIFT feature points between the two images.
The invention utilizes the property of SIFT (scale-invariant feature transform) feature points, namely that SIFT features are invariant to image rotation, scale scaling, brightness change and the like, to perform SIFT feature extraction on the image pair with an overlapping area captured by the robot camera at a certain position at two different moments, eliminates mismatching points in the image pair by using the RANSAC algorithm, and realizes accurate registration of SIFT feature points between the two images. As shown in fig. 3, at a certain position, because of the navigation positioning error and pan-tilt angle rotation error of the inspection robot, the carried camera images the same target at different moments and two images with an overlapping area are obtained; SIFT feature points are extracted and matched in the overlapping area of the images. By combining formulas (3) and (4), the external structural parameters R and t between the two-view camera coordinate systems can be obtained by the 8-point algorithm analyzed above; these external parameters play an important role in reconstructing the three-dimensional points of the space.
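The mismatch rejection of step D2 can be illustrated with a small RANSAC loop. SIFT extraction itself requires an image library (in practice, e.g., OpenCV), so the sketch below runs RANSAC on synthetic point matches and uses a 3-point affine model as a stand-in for the full pipeline; all data and model choices here are assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_affine(src, dst):
    """Least-squares 2D affine map M (2x3) such that dst ~= [x, y, 1] @ M.T."""
    X = np.hstack([src, np.ones((len(src), 1))])
    B, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return B.T

def ransac_affine(src, dst, iters=200, thresh=2.0):
    """Keep the largest consensus set; mismatches fall outside the threshold."""
    best_inliers = np.zeros(len(src), dtype=bool)
    Xh = np.hstack([src, np.ones((len(src), 1))])
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)   # minimal affine sample
        M = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(Xh @ M.T - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic matches: 40 correct pairs under a known affine map, 10 gross mismatches.
src = rng.uniform(0, 640, (50, 2))
M_true = np.array([[0.98, -0.05, 12.0], [0.05, 0.98, -7.0]])
dst = np.hstack([src, np.ones((50, 1))]) @ M_true.T
dst[40:] += rng.uniform(50, 200, (10, 2))   # corrupt the last 10 pairs

inliers = ransac_affine(src, dst)
```

In practice the same consensus idea is applied to the epipolar or homography model of the patent; libraries such as OpenCV expose it directly through their RANSAC-enabled estimators.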
Step E, combining the image characteristic points extracted in the step D, solving a basic matrix by utilizing an 8-point algorithm, and then combining the internal parameter matrix obtained in the step B to solve an essential matrix, and decomposing the essential matrix to obtain the external parameter matrix between the two images. From the epipolar geometry constraint relation and the definition of the essential matrix, the basic matrix is a matrix with 7 degrees of freedom and rank 2; through extraction and matching of the characteristic points of the two images, the basic matrix F between the two view images can be obtained by the 8-point algorithm, and the essential matrix E can then be obtained by combining the internal parameters of the camera. By decomposing the essential matrix E, the external structural parameters R and t between the two views can finally be determined.
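An unnormalized 8-point estimation of F can be sketched as follows; coordinates are used raw for brevity, whereas practical implementations first normalize them (Hartley normalization). The motion, intrinsics and space points below are synthetic assumptions used only to exercise the solver:

```python
import numpy as np

def eight_point(p1, p2):
    """Estimate the basic matrix F (rank 2, up to scale) from >= 8 pixel matches."""
    rows = [[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
            for (u1, v1), (u2, v2) in zip(p1, p2)]
    f = np.linalg.svd(np.asarray(rows))[2][-1]   # right null vector of the system
    F = f.reshape(3, 3)
    U, s, Vt = np.linalg.svd(F)                  # enforce the rank-2 constraint
    return U @ np.diag([s[0], s[1], 0.0]) @ Vt

# Synthetic two-view data from an assumed rigid motion and intrinsics.
rng = np.random.default_rng(1)
A = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
c, s_ = np.cos(0.05), np.sin(0.05)
R = np.array([[c, 0.0, s_], [0.0, 1.0, 0.0], [-s_, 0.0, c]])
t = np.array([0.3, 0.05, 0.02])

X = rng.uniform([-1.0, -1.0, 3.0], [1.0, 1.0, 6.0], (12, 3))  # 12 space points
h1 = (A @ X.T).T;             p1 = h1[:, :2] / h1[:, 2:]
h2 = (A @ (X @ R.T + t).T).T; p2 = h2[:, :2] / h2[:, 2:]

F = eight_point(p1, p2)
residuals = [abs(np.r_[b, 1.0] @ F @ np.r_[a, 1.0]) for a, b in zip(p1, p2)]
```

With noise-free synthetic matches the epipolar residuals of the estimate are negligible; on real SIFT matches the RANSAC filtering of step D2 is what keeps this linear solve well behaved.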
Step F, according to at least 4 pairs of matching points between the two images, utilizing SVD (singular value decomposition) to solve a homography matrix between the two images;
preferably, the step F further includes:
f1: according to a relational expression between an image coordinate system and a world coordinate system of a matching point between images at two moments of a perspective projection model of the camera;
f2: and solving the homography matrix according to the SVD decomposition method.
Combining the information of the matching feature points of the two images, the perspective projection model of the camera can be used to obtain the relational expressions between the image coordinates and the world coordinates of the matching points in the images at the two moments:

s_a p_a = A_1 [R_1 t_1] X_W   (5)

s_b p_b = A_2 [R_2 t_2] X_W   (6)

From equations (5) and (6) it can be derived that

s p_b = H p_a   (7)

Let

H = | h_11  h_12  h_13 |
    | h_21  h_22  h_23 |
    | h_31  h_32  h_33 |

H is a 3×3 matrix that reflects the mapping relation between the feature points of the two images; as shown in FIG. 3, H is defined as the homography matrix between the two planes. Assuming p_a = (u_a, v_a, 1)^T and p_b = (u_b, v_b, 1)^T, substitution into equation (7) yields

s (u_b, v_b, 1)^T = H (u_a, v_a, 1)^T   (8)

From equation (8),

u_b = (h_11 u_a + h_12 v_a + h_13) / (h_31 u_a + h_32 v_a + h_33)
v_b = (h_21 u_a + h_22 v_a + h_23) / (h_31 u_a + h_32 v_a + h_33)   (9)

wherein (u_a, v_a) and (u_b, v_b) are a matching point pair on the two images.
As can be seen from formula (9), each pair of feature points yields two equations, and the H matrix has 8 degrees of freedom (a 3×3 matrix defined only up to scale), so at least 4 pairs of matching points are needed to solve the homography matrix H of the two planes. One common method of solving the homography matrix H is the SVD decomposition method.
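The SVD solution of the homography from at least 4 matching pairs (equation (9), rearranged into a linear system) can be sketched with a standard DLT. The matrix H_true and the four corner points below are assumptions used only to check the solver:

```python
import numpy as np

def homography_svd(pts_a, pts_b):
    """Solve H (up to scale) from >= 4 point pairs via SVD of the DLT system."""
    rows = []
    for (ua, va), (ub, vb) in zip(pts_a, pts_b):
        # Two equations per pair, from equation (9), rearranged to M h = 0.
        rows.append([ua, va, 1, 0, 0, 0, -ub*ua, -ub*va, -ub])
        rows.append([0, 0, 0, ua, va, 1, -vb*ua, -vb*va, -vb])
    M = np.asarray(rows, float)
    # h is the right singular vector of the smallest singular value.
    h = np.linalg.svd(M)[2][-1]
    return h.reshape(3, 3) / h[-1]      # normalize so h_33 = 1

# Check on four pairs generated by a known homography.
H_true = np.array([[1.1, 0.02, 5.0], [-0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
pts_a = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], float)
q = np.hstack([pts_a, np.ones((4, 1))]) @ H_true.T
pts_b = q[:, :2] / q[:, 2:]

H = homography_svd(pts_a, pts_b)
```

With exactly 4 non-degenerate pairs the system has a one-dimensional null space, so the recovered H matches the generating homography up to numerical precision.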
Step G, according to the homography matrix obtained in the step F, obtaining the virtual space three-dimensional points corresponding to the characteristic points of the two images, and combining the binocular stereoscopic vision measurement principle and the system external parameter matrix, obtaining the rotation angle of the camera. This angle is expressed in the camera coordinate system and can be decomposed along the x and y axes of the camera coordinate system, i.e., into the left-right and up-down rotation angles of the camera; after decomposition the angles are transmitted to the cradle head of the camera, and the adjustment of the camera angle can be realized.
Preferably, the angular expression of the camera rotation in the step G is:
θ = ∠P o_2 P′ = arccos( (P · P′) / (‖P‖ ‖P′‖) )
wherein P is the spatial characteristic point corresponding to one characteristic point in a group of matching characteristic points of the two images, and P′ is the virtual spatial three-dimensional point calculated according to the step G; the angle calculated under the camera coordinate system is decomposed along the x and y axes of the camera coordinate system, the decomposed angles respectively representing the left-right and up-down rotation angles of the camera, and the decomposed angles are transmitted to the cradle head of the camera to realize the adjustment of the camera angle.
After the homography matrix H of the two images and the external structural parameters R and t of the two-view camera coordinate system are solved, the first image is taken as the reference image. A group of matching feature points p and q is selected from the matching feature points of the two images; the corresponding spatial feature point is P, and the image coordinates in the two images are p = (x_a, y_a) and q = (x_b, y_b), as shown in fig. 4. By the binocular stereoscopic vision measurement principle, knowing the image coordinates of the two points and the accurately calibrated external structural parameters R and t of the two-view camera measurement system, the three-dimensional world coordinates of the spatial point P can be accurately calculated; the world coordinate system is established under the coordinate system of camera 2 at this moment, which facilitates calculating the rotation angle of the camera. Let the world three-dimensional coordinates of the spatial point P calculated at this time be P = (X_1, Y_1, Z_1).
Because of the navigation and positioning errors of the robot, there is a deviation between the image coordinates of the matching feature points of the two images photographed at the different moments; as shown in fig. 4, in the matching feature point pair p and q, one lies on the right side of the principal point and the other on the left. Assuming the robot navigation and positioning were accurate, this error would not exist: with the image coordinates in image 1 being p = (x_a, y_a), the position and angle of the camera at moment 1 would be the same as at moment 2, so in image 2 the image coordinates of the matching point corresponding to the p point of image 1 would be q′ = (x_a, y_a). In fact, the camera position and angle at moment 2 deviate from those at moment 1 due to the presence of errors.
According to the invention, when a navigation error exists, the position of the inspection robot is not changed; instead, the shooting position of the camera is adjusted by rotating the angle of the cradle head, so that the target does not deviate from the center position of the image. The rotation angle of the cradle head is obtained from the angle error calculated by the camera.
From the above analysis, if the target point is to have the same image coordinates and position on the image taken at moment 2 as on the image taken at moment 1, the camera at moment 2 should be rotated through the angle ∠q o_2 q′ while the robot remains stationary.
In practice, from the homography matrix H obtained from the above two images, the matching point p′ in image 1 corresponding to the point q′ = (x_a, y_a) in image 2 can be obtained, as shown in fig. 4, i.e.

p′ = H^{-1} q′ = H^{-1} (x_a, y_a, 1)^T   (10)
wherein p′ and q′ are a virtual image matching point pair and the corresponding spatial point is P′, i.e. a virtual spatial three-dimensional point. Similarly, knowing the image coordinates of the corresponding point pair p′ and q′ and the external structural parameters R and t of the system, the three-dimensional world coordinates of the virtual spatial point P′ can be obtained by the binocular stereo vision measurement principle; let the calculated coordinates be P′ = (X′_1, Y′_1, Z′_1). From the analysis of fig. 4, because the three-dimensional coordinates of the spatial points are established under the camera 2 coordinate system, ∠q o_2 q′ = ∠P o_2 P′, so the angle through which camera 2 should be rotated can be calculated by
θ = ∠P o_2 P′ = arccos( (P · P′) / (‖P‖ ‖P′‖) )   (11)
By decomposing the expression (11), the angle at which the camera should rotate, that is, the angle at which the robot pan/tilt should rotate, can be obtained.
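Formula (11) and the decomposition of the deflection into left-right and up-down components can be sketched as below; decomposing via arctangents of the x/z and y/z ratios is one common convention and is stated here as an assumption, not as the invention's exact decomposition:

```python
import numpy as np

def pan_tilt_correction(P, P_prime):
    """Angle between P and P' (formula (11)) plus a pan/tilt decomposition."""
    P, P_prime = np.asarray(P, float), np.asarray(P_prime, float)
    theta = np.arccos(P @ P_prime / (np.linalg.norm(P) * np.linalg.norm(P_prime)))
    # Decompose along the camera x (left-right / pan) and y (up-down / tilt)
    # axes using the angles of each vector against the optical (z) axis.
    pan  = np.arctan2(P_prime[0], P_prime[2]) - np.arctan2(P[0], P[2])
    tilt = np.arctan2(P_prime[1], P_prime[2]) - np.arctan2(P[1], P[2])
    return theta, pan, tilt

# Example: target currently 0.5 m right of the optical axis at 5 m depth;
# the virtual point P' places it back on the axis (values are illustrative).
theta, pan, tilt = pan_tilt_correction([0.5, 0.0, 5.0], [0.0, 0.0, 5.0])
```

For this purely horizontal offset the total deflection equals arctan(0.5/5), the pan component carries all of it (negative, i.e. rotate left), and the tilt component is zero, which is exactly what would be sent to the two axes of the cradle head.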
The foregoing is only a preferred embodiment of the invention, it being noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the invention.

Claims (5)

1. The intelligent inspection robot camera angle self-adaptive adjustment method is characterized by comprising the following steps of:
step A, according to a pinhole imaging model of a camera, a camera calibration method of plane square lattice points is utilized, namely, a mapping relation between three-dimensional space coordinate points and plane two-dimensional coordinate points is utilized, and a monocular mobile measurement system model is established;
the camera calibration method of the plane square points in the step A is to establish a mapping relation between object points in a scene and image points on an image, namely predefining a camera perspective model, and solving various parameters in the camera perspective model by knowing the corresponding relation between world coordinates of characteristic points and image coordinates by referring to the model; wherein the mapping relation in the calibration method is that
s x̃ = A [R t] X̃

wherein X̃ = (X, Y, Z, 1)^T is the homogeneous coordinates of the three-dimensional point of the target plane, x̃ = (u, v, 1)^T is the homogeneous coordinates of the two-dimensional point of the image plane, s is any non-zero scale factor, and A is the internal parameter matrix of the camera; [R t] is the camera external reference matrix, a matrix of 3 rows and 4 columns; specifically, R is the rotation matrix, describing the direction of the coordinate axes of the world coordinate system relative to the coordinate axes of the camera; t is the translation matrix, describing the position of the spatial origin under the camera coordinate system;
step B, according to the mapping relation between the three-dimensional space coordinate point and the plane two-dimensional coordinate point, an internal parameter matrix of the camera is obtained; wherein the internal parameter matrix refers to parameters related to the characteristics of the camera, including the focal length and pixel size of the camera;
step C, obtaining two images shot by the inspection robot at two different moments at the same position through the camera of the intelligent inspection robot, and solving expressions of a basic matrix F and an essential matrix E; the basic matrix F represents the inherent projective relation of the two-view epipolar geometry, and the essential matrix E is the basic matrix under the normalized image coordinates;
step D, extracting the position information of the characteristic points of the two images according to the projection equation of the two images obtained in the step C, wherein the position information comprises two-dimensional coordinate values of the characteristic points;
step E, combining the image characteristic points extracted in the step D, solving a basic matrix F by using an 8-point algorithm, then combining the internal parameter matrix obtained in the step B, solving an essential matrix E, and decomposing the essential matrix E by using an SVD algorithm to obtain a camera external parameter matrix; the extrinsic matrix realizes the conversion of points from a world coordinate system to a camera coordinate system; the camera external parameter matrix comprises a rotation matrix R and a translation matrix t;
step F, solving a homography matrix between the two images by utilizing at least 4 pairs of matching points and combining an SVD algorithm according to the matched characteristic points; wherein the homography matrix represents the projection mapping relation from one image to another image;
step G, according to the homography matrix obtained in the step F, obtaining virtual space three-dimensional points corresponding to the two image feature points, and combining the binocular stereo vision measurement principle and the camera external reference matrix obtained in the step E, obtaining the rotation angle of the camera, wherein the rotation angle of the camera is the rotation angle of the robot cradle head;
the rotation angle of the camera in the step G, that is, the rotation angle expression of the robot holder is:
θ = ∠P o_2 P′ = arccos( (P · P′) / (‖P‖ ‖P′‖) )
wherein P is the spatial characteristic point corresponding to one characteristic point in a group of matching characteristic points of the two images, and P′ is the virtual space three-dimensional point calculated in the step G; the angle obtained under the camera coordinate system is decomposed along the x and y axes of the camera coordinate system, respectively representing the left-right and up-down rotation angles of the camera, and after decomposition is transmitted to the cradle head of the camera to realize adjustment of the camera angle.
2. The intelligent inspection robot camera angle self-adaptive adjustment method according to claim 1, wherein the method comprises the following steps: the step D further includes:
d1: extracting and matching SIFT feature points in an overlapping area of the images;
d2: and removing mismatching points in the image pairs by using a RANSAC algorithm, and realizing accurate registration of SIFT feature points between the two images.
3. The intelligent inspection robot camera angle self-adaptive adjustment method according to claim 1, wherein step F further comprises:
F1: establishing, according to the perspective projection model of the camera, the relational expression between the image coordinate system and the world coordinate system for the matched points between the images at the two moments;
F2: solving the homography matrix by the SVD decomposition method.
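The SVD-based homography solve in step F2 can be sketched with the standard direct linear transform (DLT): each point pair contributes two linear equations in the nine entries of H, and the solution is the right singular vector of the stacked system with the smallest singular value. A minimal numpy sketch, with illustrative names (point normalization, which a robust implementation would add, is omitted):

```python
import numpy as np

def homography_svd(src, dst):
    """Solve the homography H (dst ~ H @ src in homogeneous coordinates)
    from >= 4 point correspondences.  Each pair ((x, y), (u, v)) yields
    two rows of the DLT system A h = 0; h is the right singular vector
    of A with the smallest singular value."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the scale ambiguity: H[2, 2] = 1
```

With exact correspondences the smallest singular value is zero and H is recovered up to scale, which the final normalization removes.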
4. The intelligent inspection robot camera angle self-adaptive adjustment method according to claim 1, wherein the fundamental matrix F and the essential matrix E in step C are expressed as:
F = Ar^(-T) · E · Al^(-1)
E=SR
wherein F is the fundamental matrix, E is the essential matrix, R is the rotation matrix, S is the antisymmetric matrix of the translation vector, and Ar and Al are the camera intrinsic parameter matrices at the two different times.
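The decomposition E = SR can be checked numerically: with S the skew-symmetric matrix of the translation t, every corresponding pair of normalized image points satisfies the epipolar constraint x2ᵀ E x1 = 0. A minimal sketch, assuming the common convention that a point P1 in the first camera frame maps to P2 = R·P1 + t in the second (the function names are illustrative):

```python
import numpy as np

def skew(t):
    """Antisymmetric matrix S of t, so that S @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential(R, t):
    """Essential matrix E = S R for the relative pose P2 = R @ P1 + t."""
    return skew(t) @ R
```

The fundamental matrix then follows by wrapping E with the two intrinsic matrices, F = Ar^(-T) E Al^(-1), so F operates directly on pixel coordinates while E operates on normalized camera coordinates.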
5. The intelligent inspection robot camera angle self-adaptive adjustment method according to claim 1, wherein the mobile monocular vision measurement system in step A forms a virtual binocular stereo vision system through the movement of a single camera, and the offset angles from the camera at the two different positions and moments to the same target point are calculated in combination with the homography matrix between the two images.
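The virtual binocular system described in claim 5 ultimately reduces to triangulating the target from the camera's two poses. A minimal linear (DLT) triangulation sketch under assumed conventions (3×4 projection matrices mapping homogeneous world points to pixels; the function name is illustrative):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation.  P1, P2 are the 3x4 projection matrices of
    the camera at its two positions; x1, x2 are the (u, v) pixel
    coordinates of the same target point in the two images.  Each view
    contributes two rows of A X = 0; the 3-D point is the null vector."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # dehomogenise
```

With the target point in hand, the offset angles of claim 5 follow from the direction of the point in each camera frame, which is what drives the pan-tilt correction.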
CN201910831148.2A 2019-09-06 2019-09-06 Intelligent inspection robot camera angle self-adaptive adjustment method Active CN110728715B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910831148.2A CN110728715B (en) 2019-09-06 2019-09-06 Intelligent inspection robot camera angle self-adaptive adjustment method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910831148.2A CN110728715B (en) 2019-09-06 2019-09-06 Intelligent inspection robot camera angle self-adaptive adjustment method

Publications (2)

Publication Number Publication Date
CN110728715A CN110728715A (en) 2020-01-24
CN110728715B true CN110728715B (en) 2023-04-25

Family

ID=69218886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910831148.2A Active CN110728715B (en) 2019-09-06 2019-09-06 Intelligent inspection robot camera angle self-adaptive adjustment method

Country Status (1)

Country Link
CN (1) CN110728715B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311681A (en) * 2020-02-14 2020-06-19 北京云迹科技有限公司 Visual positioning method, device, robot and computer readable storage medium
CN113538587A (en) * 2020-04-16 2021-10-22 深圳先进技术研究院 Camera coordinate transformation method, terminal and storage medium
CN111409085B (en) * 2020-04-27 2023-05-23 浙江库科自动化科技有限公司 Intelligent inspection robot with function of closing angle cock and inspection method thereof
CN111611913A (en) * 2020-05-20 2020-09-01 北京海月水母科技有限公司 Human-shaped positioning technology of monocular face recognition probe
CN111914715B (en) * 2020-07-24 2021-07-16 廊坊和易生活网络科技股份有限公司 Intelligent vehicle target real-time detection and positioning method based on bionic vision
CN111932623A (en) * 2020-08-11 2020-11-13 北京洛必德科技有限公司 Face data automatic acquisition and labeling method and system based on mobile robot and electronic equipment thereof
CN112184834A (en) * 2020-10-07 2021-01-05 浙江港创智能机器人有限公司 Autonomous inspection method for overhead transmission line
WO2022121911A1 (en) * 2020-12-07 2022-06-16 北京达美盛软件股份有限公司 Virtual inspection system and visualized factory system in augmented reality environment
CN112634433A (en) * 2020-12-07 2021-04-09 北京达美盛软件股份有限公司 Real-time control and visualization system of digital factory
CN112714287A (en) * 2020-12-23 2021-04-27 广东科凯达智能机器人有限公司 Pan-tilt target conversion control method, device, equipment and storage medium
CN112598743B (en) * 2021-02-08 2023-10-13 智道网联科技(北京)有限公司 Pose estimation method and related device for monocular vision image
CN112949478A (en) * 2021-03-01 2021-06-11 浙江国自机器人技术股份有限公司 Target detection method based on holder camera
CN113758499A (en) * 2021-03-18 2021-12-07 北京京东乾石科技有限公司 Method, device and equipment for determining assembly deviation compensation parameters of positioning sensor
CN114170306B (en) * 2021-11-17 2022-11-04 埃洛克航空科技(北京)有限公司 Image attitude estimation method, device, terminal and storage medium
CN114549282B (en) * 2022-01-11 2023-12-12 深圳昱拓智能有限公司 Method and system for realizing multi-meter reading based on affine transformation
CN114862969A (en) * 2022-05-27 2022-08-05 国网江苏省电力有限公司电力科学研究院 Onboard holder camera angle self-adaptive adjusting method and device of intelligent inspection robot
CN115958609B (en) * 2023-03-16 2023-07-14 山东卓朗检测股份有限公司 Instruction data safety early warning method based on intelligent robot automatic control system
CN116563336A (en) * 2023-04-03 2023-08-08 国网江苏省电力有限公司南通供电分公司 Self-adaptive positioning algorithm for digital twin machine room target tracking
CN116896608B (en) * 2023-09-11 2023-12-12 山东省地震局 Virtual seismic scene presentation system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011143813A1 (en) * 2010-05-19 2011-11-24 深圳泰山在线科技有限公司 Object projection method and object projection system
CN104376552B (en) * 2014-09-19 2017-12-29 四川大学 A kind of virtual combat method of 3D models and two dimensional image
CN104596502B (en) * 2015-01-23 2017-05-17 浙江大学 Object posture measuring method based on CAD model and monocular vision
CN106679648B (en) * 2016-12-08 2019-12-10 东南大学 Visual inertia combination SLAM method based on genetic algorithm
KR101890612B1 (en) * 2016-12-28 2018-08-23 (주)에이다스원 Method and apparatus for detecting object using adaptive roi and classifier
CN109102525B (en) * 2018-07-19 2021-06-18 浙江工业大学 Mobile robot following control method based on self-adaptive posture estimation

Also Published As

Publication number Publication date
CN110728715A (en) 2020-01-24

Similar Documents

Publication Publication Date Title
CN110728715B (en) Intelligent inspection robot camera angle self-adaptive adjustment method
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
AU2016313849A1 (en) Mapping a space using a multi-directional camera
CN109559355B (en) Multi-camera global calibration device and method without public view field based on camera set
CN110288656A (en) A kind of object localization method based on monocular cam
EP2932191A2 (en) Apparatus and method for three dimensional surface measurement
CN112362034B (en) Solid engine multi-cylinder section butt joint guiding measurement method based on binocular vision
CN107038753B (en) Stereoscopic vision three-dimensional reconstruction system and method
Cvišić et al. Recalibrating the KITTI dataset camera setup for improved odometry accuracy
US20230351625A1 (en) A method for measuring the topography of an environment
CN113724337B (en) Camera dynamic external parameter calibration method and device without depending on tripod head angle
CN113379848A (en) Target positioning method based on binocular PTZ camera
KR20050061115A (en) Apparatus and method for separating object motion from camera motion
Su et al. Obtaining obstacle information by an omnidirectional stereo vision system
WO2022078437A1 (en) Three-dimensional processing apparatus and method between moving objects
JP7033294B2 (en) Imaging system, imaging method
CN113240749B (en) Remote binocular calibration and ranging method for recovery of unmanned aerial vehicle facing offshore ship platform
CN112304250B (en) Three-dimensional matching equipment and method between moving objects
CN112257535B (en) Three-dimensional matching equipment and method for avoiding object
CN115147495A (en) Calibration method, device and system for vehicle-mounted system
CN114862969A (en) Onboard holder camera angle self-adaptive adjusting method and device of intelligent inspection robot
CN111829489B (en) Visual positioning method and device
CN113223163A (en) Point cloud map construction method and device, equipment and storage medium
CN115082570B (en) Calibration method for laser radar and panoramic camera
CN111553955B (en) Multi-camera three-dimensional system and calibration method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200124

Assignee: Nanjing Jiuli Electronic Technology Co.,Ltd.

Assignor: NANJING INSTITUTE OF TECHNOLOGY

Contract record no.: X2024980001819

Denomination of invention: A method of adaptive adjustment of camera angle for intelligent inspection robots

Granted publication date: 20230425

License type: Common License

Record date: 20240204