CN111985420B - Unmanned inspection method for power distribution station based on machine vision - Google Patents
- Publication number
- CN111985420B CN111985420B CN202010861785.7A CN202010861785A CN111985420B CN 111985420 B CN111985420 B CN 111985420B CN 202010861785 A CN202010861785 A CN 202010861785A CN 111985420 B CN111985420 B CN 111985420B
- Authority
- CN
- China
- Prior art keywords
- dimensional code
- camera
- power distribution
- small
- mechanical arm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000007689 inspection Methods 0.000 title claims abstract description 22
- 238000000034 method Methods 0.000 title claims abstract description 21
- 230000009466 transformation Effects 0.000 claims description 15
- 238000004364 calculation method Methods 0.000 claims description 9
- 230000000007 visual effect Effects 0.000 claims description 9
- 238000007781 pre-processing Methods 0.000 claims description 8
- 230000003321 amplification Effects 0.000 claims description 6
- 238000000605 extraction Methods 0.000 claims description 3
- 239000011159 matrix material Substances 0.000 claims description 3
- 238000010606 normalization Methods 0.000 claims description 3
- 230000007246 mechanism Effects 0.000 abstract description 2
- 238000010586 diagram Methods 0.000 description 5
- 230000004048 modification Effects 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 230000005611 electricity Effects 0.000 description 1
- 230000002708 enhancing effect Effects 0.000 description 1
- 230000036541 health Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J5/00—Manipulators mounted on wheels or on carriages
- B25J5/007—Manipulators mounted on wheels or on carriages mounted on wheels
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K17/00—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
- G06K17/0022—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
- G06K17/0025—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device the arrangement consisting of a wireless interrogation device in combination with a device for optically marking the record carrier
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1408—Methods for optical code recognition the method being specifically adapted for the type of code
- G06K7/1417—2D bar codes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/02—Affine transformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/02—Recognising information on displays, dials, clocks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Electromagnetism (AREA)
- General Health & Medical Sciences (AREA)
- Toxicology (AREA)
- Artificial Intelligence (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Engineering & Computer Science (AREA)
- Manipulator (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a machine-vision-based unmanned inspection method for a power distribution station. The system comprises a mobile trolley, a camera mechanical arm, an identification camera with known intrinsic parameters and a microcomputer; each power distribution cabinet in the station carries a large two-dimensional code, a small two-dimensional code and a recognizable instrument panel. Through accurate control of the pose of the camera mechanical arm, the instrument panel of the power distribution cabinet can be read accurately without any other operation. With the three-level recognition mechanism of large two-dimensional code, small two-dimensional code and instrument panel, a worker only inputs an operation code; the robot verifies the operation code against the large two-dimensional code to identify the designated small two-dimensional code, and obtains the position of the target instrument panel by recognizing the small two-dimensional code. No operator participation is needed during the process, and the robot does not have to be trained to recognize the appearance of the instrument panel, so adaptability is strong. The power distribution station needs no modification: it suffices to affix the specific large and small two-dimensional code stickers.
Description
Technical Field
The invention relates to the field of mobile robots, in particular to a machine vision-based unmanned inspection method for a power distribution station.
Background
Power distribution stations are often set up in open fields, where they readily attract lightning, so working inside them is dangerous for staff; because of the high-voltage electricity present, the personal safety of workers can never be fully guaranteed. At the same time, a power distribution station covers a wide working area: manual inspection demands a great deal of time to memorize the relevant knowledge and operating rules, and many values must be recorded and verified. When the instrument panels on a power distribution cabinet are numerous and complex, simultaneous observation and data entry cannot be achieved on site, so the efficiency of manual inspection falls far below that of robot inspection.
In the prior art, robot inspection often depends on control by a remote operator. Even when the robot runs autonomously, its movement and positioning depend on auxiliary fittings installed in the power distribution station, such as guide markings on the ground, baffles shielding through-beam devices, and labels arranged in a matrix, which entail heavy modification of the station and high implementation cost.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to automatically identify a designated instrument panel on a designated power distribution cabinet. To this end, a machine-vision-based unmanned inspection method for a power distribution station is provided. It employs a mobile trolley, a camera mechanical arm, an identification camera with known intrinsic parameters and a microcomputer, and the power distribution cabinets in the station carry a large two-dimensional code, a small two-dimensional code and a recognizable instrument panel. The method specifically comprises the following steps:
step one: the mobile trolley moves to the front of the power distribution cabinet of the transformer substation by means of a positioning system, and the camera mechanical arm initializes the pose;
step two: the staff remotely inputs an operation code and transmits the operation code to the mobile trolley;
step three: the identification camera scans and decodes the large two-dimensional code on the power distribution cabinet;
step four: the microcomputer matches the information decoded by the large two-dimensional code with the operation code input remotely to determine the small two-dimensional code and the position information thereof required by the next operation;
step five: the microcomputer performs motion planning and motion control on the camera mechanical arm according to the matched result, and moves the identification camera to a far observation point C2 in front of the small two-dimensional code;
step six: the microcomputer sends instructions to control the movement of the identification camera at the end of the camera mechanical arm; the identification camera reaches the near observation point C3 in front of the small two-dimensional code, recognizes and decodes the small two-dimensional code, and obtains the position of the instrument panel from the decoded information;
step seven: and performing motion planning and motion control on the camera mechanical arm, moving to the front of the target instrument panel, and reading data of the target instrument panel.
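The seven steps above amount to a simple sequential controller. A hedged sketch follows, with the hardware hidden behind injected callables; every name here is illustrative, not an API defined by the patent.

```python
def run_inspection(operation_code, scan_big_code, match_code, move_to,
                   scan_small_code, read_panel):
    """One inspection cycle for steps three to seven. The hardware sits
    behind injected callables; all names are hypothetical stand-ins."""
    big_info = scan_big_code()                      # step three: decode big code
    target = match_code(big_info, operation_code)   # step four: verify + select
    if target is None:
        raise ValueError("operation code not matched in the big two-dimensional code")
    move_to(target["far_point"])                    # step five: move to C2
    panel_pose = scan_small_code(target)            # step six: approach C3, decode
    move_to(panel_pose)                             # step seven: face the panel
    return read_panel()                             # read the instrument value
```

A dry run with stub callables shows the control flow: the trolley visits C2, then the decoded panel pose, and finally returns the reading.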
Further, in step three, the identification camera performs preliminary acquisition and preprocessing of the large two-dimensional code, then judges its features; if the judgment condition is not met, the camera mechanical arm performs an M1 pose transformation and the acquisition and preprocessing are repeated until the condition is met. The specific steps are:
s1: the recognition camera collects the large two-dimensional code for the first time in the initial pose;
s2: preprocessing the acquired image;
s3: performing contour extraction on the processed large two-dimensional code image;
s4: judging according to the outline characteristics of the two-dimensional code, if the two-dimensional code does not meet the judging condition, carrying out M1 pose transformation through a camera mechanical arm, and repeating S1-S4 until the two-dimensional code meets the judging condition;
s5: and identifying and decoding the two-dimensional code.
Further, in the step S5, the specific steps of identifying and decoding the two-dimensional code include:
t1: extracting four-corner point pixel coordinates of the two-dimensional code through image processing;
t2: using the pixel coordinates of the four corners, the homography matrix H of the affine correction transformation is obtained from formula (1-1), i.e. the standard four-point constraint s·[u v 1]^T = H·[x y 1]^T relating each observed corner (x, y) to its corrected position (u, v); transforming the whole image with H eliminates the projective distortion of the two-dimensional code perspective image;
t3: and decoding the obtained two-dimensional code to obtain decoding information.
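The correction in T2 can be sketched with a generic direct linear transform over the four corner correspondences. This is the textbook four-point homography estimate, not necessarily the patent's formula (1-1) itself, and the corner values below are invented for illustration.

```python
import numpy as np

def homography_from_corners(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous
    coordinates), from four point correspondences, via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(A, dtype=float)
    # The solution is the null vector of A: last row of V^T from the SVD.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Map an observed, perspective-skewed quadrilateral back to an axis-aligned square.
observed = [(10, 12), (108, 20), (112, 118), (8, 110)]   # hypothetical pixels
square   = [(0, 0), (100, 0), (100, 100), (0, 100)]      # corrected target
H = homography_from_corners(observed, square)
```

With four exact correspondences the system has an exact solution, so each observed corner maps precisely onto its target square corner after the perspective divide.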
In the fourth step, the decoding information is matched with the operation code input in the second step, and the specification and the position information of the target small two-dimensional code are obtained; and simultaneously, calculating and reconstructing three-dimensional coordinates.
Further, in the fourth step, the decoding information includes the small two-dimensional code specification and the position information of the small two-dimensional code relative to the large two-dimensional code; the three-dimensional coordinate calculation and reconstruction specifically comprises the following steps:
K1: through image processing, extract the four corner pixel coordinates p_1, p_2, p_3, p_4 of the two-dimensional code;
K2: using the camera intrinsic matrix K and formula (2-1), p_ni = K^(-1)·p_i, obtain the homogeneous coordinates p_n1, p_n2, p_n3, p_n4 of the four corners on the normalized plane;
K3: keep p_n1 on the normalized plane with amplification factor t_1 = 1; using the diagonal-bisection property of the square together with formula (3-1), compute the amplification factors t_2, t_3, t_4 of the other three points, obtaining four new points p_b1 (= p_n1), p_b2, p_b3, p_b4 which define a reference plane on which the four points form a parallelogram;
[p_b1 p_b2 p_b3 p_b4] = [p_n1 t_2·p_n2 t_3·p_n3 t_4·p_n4] (3-1)
K4: from the four points of the reference plane, compute the side lengths l_1, l_2, l_3, l_4, their mean L and the respective deviations e_1, e_2, e_3, e_4; move each point by e_i/2 along its side direction so as to approach the equal-side condition of a square, obtaining four corrected reference points p_b1', p_b2', p_b3', p_b4';
K5: if the mean deviation e is greater than 10^-6, the quadrilateral on the reference plane is considered not yet a standard square; normalize the corrected reference points back to p_n1, p_n2, p_n3, p_n4 and return to step K3;
K6: if the mean deviation e is less than 10^-6, the quadrilateral on the reference plane is accepted as a standard square; the spatial coordinates p_s1, p_s2, p_s3, p_s4 are obtained from formula (6-1), i.e. by scaling with the ratio of the real two-dimensional code width D to the mean side length L;
K7: obtain the pose of the center point of the two-dimensional code;
K8: from the pose of the base of the camera mechanical arm, determine the coordinates of the current arm end relative to the base coordinate system and the coordinates of the large two-dimensional code in the base coordinate system;
K9: using the large two-dimensional code coordinates obtained in K8, convert the position of the small two-dimensional code relative to the large two-dimensional code (from the decoded information) into small two-dimensional code coordinates in the base coordinate system, and determine the coordinates of the near observation point C3 a fixed distance in front of the small two-dimensional code.
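Steps K1-K6 can be sketched as follows. The parallelogram solve and the equal-side correction are one plausible reading of the description (the original formulas (2-1), (3-1) and (6-1) are available only as images in the source), so treat this as an assumption-laden sketch; `reconstruct_square` and its arguments are invented names.

```python
import numpy as np

def reconstruct_square(p_n, side_len, tol=1e-6, max_iter=50):
    """Sketch of K3-K6. p_n: 4x3 array of homogeneous corner coordinates on
    the normalized image plane (z = 1), ordered around the marker. side_len
    is the real code width D. Returns 4x3 camera-frame corner coordinates,
    assuming the physical marker is a square."""
    p_n = np.asarray(p_n, dtype=float)
    for _ in range(max_iter):
        # K3: diagonal-bisection (parallelogram) constraint with t1 = 1:
        #     p_n1 + t3*p_n3 = t2*p_n2 + t4*p_n4  ->  solve for t2, t3, t4.
        A = np.column_stack([p_n[1], -p_n[2], p_n[3]])
        t2, t3, t4 = np.linalg.solve(A, p_n[0])
        p_b = np.array([p_n[0], t2 * p_n[1], t3 * p_n[2], t4 * p_n[3]])
        # K4: side lengths, their mean L, and the per-side deviations e_i.
        sides = np.array([np.linalg.norm(p_b[i] - p_b[(i + 1) % 4])
                          for i in range(4)])
        L = sides.mean()
        if np.abs(sides - L).mean() < tol:      # K6: accepted as a square
            return p_b * (side_len / L)         # metric scale from D / L
        # K4/K5: nudge corners toward equal side lengths (here: equalize the
        # distances to the centroid), then renormalize back to z = 1.
        c = p_b.mean(axis=0)
        r = np.linalg.norm(p_b - c, axis=1)
        p_b = c + (p_b - c) * (r.mean() / r)[:, None]
        p_n = p_b / p_b[:, 2:3]
    return p_b * (side_len / L)
```

For clean input (an undistorted square facing the camera) the parallelogram solve already yields equal sides, so the loop exits on the first pass with the corners at their true metric depth.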
Further, in steps five and seven, the microcomputer's motion planning for the end of the camera mechanical arm consists of a path planning program and a trajectory planning program; the motion control is a program based on a PID control algorithm.
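The patent does not disclose its control gains or structure, so the following is only a minimal textbook PID loop of the kind such a motion-control program could be based on; the class name and all numbers are illustrative.

```python
class PID:
    """Discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Driving a simple integrator joint model (position rate equals the command) with this controller brings the joint to its setpoint.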
Further, in step six, the microcomputer controls the recognition camera to move close to the small two-dimensional code through a feedback control program, that is, from the far observation point C2 to the near observation point C3 in front of the small two-dimensional code, the feedback control program specifically includes:
p1: according to the residual, control the camera mechanical arm to continue moving toward the front of the small two-dimensional code;
p2: judging whether the two-dimensional code is in the center of the visual field of the identification camera, if the two-dimensional code is not in the center of the visual field of the identification camera, calculating residual errors of the two-dimensional code and the center of the visual field of the identification camera, and repeating the steps P1-P2; if the two-dimensional code is in the center of the field of view of the recognition camera, P3 is executed;
p3: and controlling the camera mechanical arm to horizontally move to a position of a closer observation point C3 which is a fixed distance in front of the small two-dimensional code.
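The loop P1-P3 can be sketched as a proportional visual-centering controller; `detect_code`, `move_camera`, the image center and the pixel gain are hypothetical stand-ins for the robot's real interfaces.

```python
def center_then_approach(detect_code, move_camera, image_center=(320, 240),
                         deadband_px=2.0, gain=0.005, max_steps=200):
    """detect_code() -> (u, v) pixel center of the small code in the current
    frame; move_camera(dx, dy) re-aims the wrist by small increments.
    Returns True once the code sits within deadband_px of the view center,
    i.e. once step P3 (the straight move to C3) may begin."""
    for _ in range(max_steps):
        u, v = detect_code()
        ru, rv = image_center[0] - u, image_center[1] - v   # P2: residual
        if (ru * ru + rv * rv) ** 0.5 <= deadband_px:
            return True                                     # ready for P3
        move_camera(gain * ru, gain * rv)                   # P1: re-aim
    return False
```

Against a simulated camera whose image motion responds linearly to wrist commands, the residual halves each step and the code reaches the center of view within a handful of iterations.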
Still further, the small two-dimensional code on the power distribution cabinet is a QR Code or an AprilTag.
The beneficial effects are as follows. According to the invention: (1) accurate control of the identification camera is achieved through the camera mechanical arm, so the designated instrument panel of the power distribution cabinet can be read accurately without requiring precise positioning capability from the mobile trolley;
(2) with the three-level recognition mechanism of large two-dimensional code, small two-dimensional code and instrument panel, a worker need only input the operation code; the robot verifies the operation code against the large two-dimensional code to identify the designated small two-dimensional code, and obtains the position of the designated instrument panel from the small two-dimensional code. No operator participation is needed during the process, the robot need not be trained to recognize the appearance of instrument panels, the method is convenient to use and highly adaptable, and no major modification of the power distribution station is required: affixing the specific large and small two-dimensional code stickers suffices;
(3) the information in the large and small two-dimensional codes is easy to enter and very convenient to change, which suits power distribution stations with complex power distribution cabinets;
(4) because the camera mechanical arm has already established the large two-dimensional code coordinates, the small two-dimensional code coordinates and the target instrument panel coordinates during its work, the mobile trolley is extensible: it can further carry additional detection mechanical arms, operation mechanical arms and other components, and with the cooperation of the identification camera and the established coordinate parameters these extension components achieve higher precision and efficiency.
Drawings
FIG. 1 is a flow diagram of the machine-vision-based unmanned inspection method for a power distribution station;
FIG. 2 is a schematic diagram of M1 pose transformation in step three;
FIG. 3 is a schematic diagram of M2 pose transformation in step five;
FIG. 4 is a schematic diagram of M3 pose transformation in step six;
FIG. 5 is a schematic diagram of M4 pose transformation in step seven;
in the figures: QR1 — large two-dimensional code coordinate frame; Code2 — small two-dimensional code coordinate frame; Panel2 — target instrument panel coordinate frame.
Detailed Description
The present invention will be further described in detail below with reference to examples and drawings in order to aid understanding. The examples are provided only to illustrate the invention and are not to be construed as limiting its scope.
As shown in FIGS. 1-2, an unmanned inspection method for a power distribution station based on machine vision uses a mobile trolley, a camera mechanical arm, an identification camera with known intrinsic parameters and a microcomputer; a power distribution cabinet in the station carries a large two-dimensional code, a small two-dimensional code and a recognizable instrument panel. The method specifically comprises the following steps:
step one: the mobile trolley moves to the front of the substation power distribution cabinet by means of the positioning system, the camera mechanical arm initializes its pose, and the identification camera is located at the initial pose C0;
step two: the staff remotely inputs an operation code and transmits the operation code to the mobile trolley;
step three: the identification camera performs preliminary acquisition and preprocessing of the large two-dimensional code on the power distribution cabinet, then judges its features; if the judgment condition is not met, the camera mechanical arm performs an M1 pose transformation and the acquisition and preprocessing are repeated until the condition is met. The specific steps are:
s1: the recognition camera collects the large two-dimensional code for the first time in the initial pose;
s2: preprocessing the acquired image, in the following order:
(1) adaptive adjustment of brightness and contrast;
(2) gray-scale processing, converting the acquired image into a gray-scale image.
S3: performing contour extraction on the processed large two-dimensional code image;
s4: judging according to the contour features of the two-dimensional code; if the judgment condition is not met, perform the M1 pose transformation with the camera mechanical arm and repeat S1-S4 until the condition is met;
s5: and identifying and decoding the two-dimensional code.
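The preprocessing of S2 could be sketched in pure NumPy as follows, assuming a simple min-max contrast stretch and BT.601 luminance weights; the patent does not specify the exact adjustment or gray-scale algorithms, so both choices here are assumptions.

```python
import numpy as np

def preprocess(rgb):
    """rgb: HxWx3 uint8 frame. Returns a contrast-stretched grayscale
    float image in [0, 1], mirroring steps (1)-(2) of S2."""
    img = rgb.astype(float)
    # (1) brightness/contrast adjustment: stretch to the full [0, 1] range
    lo, hi = img.min(), img.max()
    img = (img - lo) / max(hi - lo, 1e-9)
    # (2) grayscale via ITU-R BT.601 luminance weights
    gray = img @ np.array([0.299, 0.587, 0.114])
    return gray
```

The output keeps the spatial shape of the frame while dropping the channel axis, ready for the contour extraction in S3.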
In this embodiment, when the judgment condition in this step is finally satisfied, the identification camera is located at observation point C1;
in this embodiment, in step S5, the specific steps of identifying and decoding the two-dimensional code include:
t1: extracting four-corner point pixel coordinates of the two-dimensional code through image processing;
t2: using the pixel coordinates of the four corners, the homography matrix H of the affine correction transformation is obtained from formula (1-1), i.e. the standard four-point constraint s·[u v 1]^T = H·[x y 1]^T relating each observed corner (x, y) to its corrected position (u, v); transforming the whole image with H eliminates the projective distortion of the two-dimensional code perspective image;
t3: and decoding the obtained two-dimensional code to obtain decoding information.
Step four: the decoded information includes the small two-dimensional code specification and the position of the small two-dimensional code relative to the large two-dimensional code. The microcomputer decodes the large two-dimensional code and matches the decoded information against the operation code input in step two, obtaining the specification and position of the target small two-dimensional code required for the next operation; at the same time, three-dimensional coordinate calculation and reconstruction are performed.
In this embodiment, the calculation and reconstruction of the three-dimensional coordinates specifically includes the following steps:
K1: through image processing, extract the four corner pixel coordinates p_1, p_2, p_3, p_4 of the two-dimensional code;
K2: using the camera intrinsic matrix K and formula (2-1), p_ni = K^(-1)·p_i, obtain the homogeneous coordinates p_n1, p_n2, p_n3, p_n4 of the four corners on the normalized plane;
K3: keep p_n1 on the normalized plane with amplification factor t_1 = 1; using the diagonal-bisection property of the square together with formula (3-1), compute the amplification factors t_2, t_3, t_4 of the other three points, obtaining four new points p_b1 (= p_n1), p_b2, p_b3, p_b4 which define a reference plane on which the four points form a parallelogram;
[p_b1 p_b2 p_b3 p_b4] = [p_n1 t_2·p_n2 t_3·p_n3 t_4·p_n4] (3-1)
K4: from the four points of the reference plane, compute the side lengths l_12, l_23, l_34, l_41, their mean L and the respective deviations e_1, e_2, e_3, e_4; move each point by e_i/2 along its side direction so as to approach the equal-side condition of a square, obtaining four corrected reference points p_b1', p_b2', p_b3', p_b4';
K5: if the mean deviation e is greater than 10^-6, the quadrilateral on the reference plane is considered not yet a standard square; normalize the corrected reference points back to p_n1, p_n2, p_n3, p_n4 and return to step K3;
K6: if the mean deviation e is less than 10^-6, the quadrilateral on the reference plane is accepted as a standard square; the spatial coordinates p_s1, p_s2, p_s3, p_s4 are obtained from formula (6-1), i.e. by scaling with the ratio of the real two-dimensional code width D to the mean side length L;
K7: obtain the pose of the center point of the two-dimensional code;
K8: from the pose of the base of the camera mechanical arm, determine the coordinates of the current arm end relative to the base coordinate system and the coordinates of the large two-dimensional code in the base coordinate system;
K9: using the large two-dimensional code coordinates obtained in K8, convert the position of the small two-dimensional code relative to the large two-dimensional code (from the decoded information) into small two-dimensional code coordinates (the frame Code2 in the figure) in the base coordinate system, and determine the coordinates of the near observation point C3 a fixed distance in front of the small two-dimensional code.
Step five: according to the matching result, the microcomputer performs motion planning and motion control of the camera mechanical arm; by controlling the arm, the M2 pose transformation is executed as the target of the trajectory plan for the arm end, moving the identification camera from observation point C1 of the large two-dimensional code to the far observation point C2 a fixed distance in front of the small two-dimensional code;
in the embodiment, in the fifth step, the motion planning of the microcomputer to the end of the mechanical arm of the camera is composed of a path planning program and a track planning program; the motion control is a program based on a PID control algorithm;
step six: through a feedback control program, the microcomputer controls the camera mechanical arm to perform the M3 pose transformation so that the identification camera approaches the small two-dimensional code, i.e. moves from the far observation point C2 in front of the small two-dimensional code to the near observation point C3; there it recognizes and decodes the code, and obtains the position of the target instrument panel from the decoded information;
in this embodiment, the feedback control program specifically includes:
p1: according to the residual, control the camera mechanical arm to continue moving toward the front of the small two-dimensional code;
p2: judging whether the two-dimensional code is in the center of the visual field of the identification camera, if the two-dimensional code is not in the center of the visual field of the identification camera, calculating residual errors of the two-dimensional code and the center of the visual field of the identification camera, and repeating the steps P1-P2; if the two-dimensional code is in the center of the field of view of the recognition camera, P3 is executed;
p3: and controlling the camera mechanical arm to horizontally move to a position of a closer observation point C3 which is a fixed distance in front of the small two-dimensional code.
Step seven: perform motion planning and motion control of the camera mechanical arm, executing the M4 pose transformation; move the identification camera to observation point C4 in front of the target instrument panel (the frame Panel2 in the figure) and read the data of the target instrument panel.
In this embodiment, in step seven, the microcomputer's motion planning for the end of the camera mechanical arm consists of a path planning program and a trajectory planning program; the motion control is a program based on a PID control algorithm;
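The text names only "a program based on a PID control algorithm" without giving its structure or gains. A textbook discrete PID of the kind presumably meant, with illustrative gains and a toy first-order joint model:

```python
class PID:
    """Discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def simulate(setpoint=1.0, steps=400, dt=0.01):
    """Drive a toy integrator joint (pos' = u) toward a setpoint."""
    pid, pos = PID(kp=8.0, ki=2.0, kd=0.1, dt=dt), 0.0
    for _ in range(steps):
        pos += pid.step(setpoint - pos) * dt  # velocity command, integrated
    return pos
```

The gains and plant are stand-ins; the patent does not specify either.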
in this embodiment, the small two-dimensional code on the power distribution cabinet is a QR Code or an AprilTag.
In this embodiment, in the fourth step, the calculation performed by the microcomputer includes the following:
(1) The pose of the observation point {C1} relative to the base coordinate system {B}, T_B_C1, is known and can be obtained from the mechanical arm model;
(2) The pose of the large two-dimensional code frame {QR1} relative to the observation point {C1}, T_C1_QR1, is acquired from the image;
(3) Reading the two-dimensional code information gives the pose of the small two-dimensional code frame {Code2} relative to {QR1}, T_QR1_Code2, and the pose of the instrument panel relative to the small two-dimensional code, T_Code2_PA2;
(4) The pose of {Code2} relative to the base coordinate system {B} is obtained by calculation: T_B_Code2 = T_B_C1 · T_C1_QR1 · T_QR1_Code2.
In the fifth step, the calculation performed by the microcomputer includes the following:
(5) The camera is moved by the mechanical arm to the observation point {C2}, whose pose relative to the small two-dimensional code {Code2}, T_Code2_C2, is known; the camera pose relative to the base coordinate system is then T_B_C2 = T_B_Code2 · T_Code2_C2.
In the sixth step, the calculation performed by the microcomputer includes the following:
(6) On reaching the observation point {C3}, the camera pose relative to the base coordinate system is T_B_C3;
(7) The small two-dimensional code frame {Code2} is measured accurately, giving its pose relative to the camera, T_C3_Code2; the pose of the instrument panel relative to the small two-dimensional code, T_Code2_PA2, is known from the decoded information;
(8) The pose of the target instrument panel {Panel2} (abbreviated PA2) relative to the base coordinate system {B} is obtained by calculation: T_B_PA2 = T_B_C3 · T_C3_Code2 · T_Code2_PA2.
In the seventh step, the calculation performed by the microcomputer includes the following:
(9) The camera is moved by the mechanical arm to the observation point {C4}, whose pose relative to the target instrument panel {Panel2}, T_PA2_C4, is known; the camera pose relative to the base coordinate system is then T_B_C4 = T_B_PA2 · T_PA2_C4.
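Steps (1)-(9) are compositions of 4x4 homogeneous transforms. A numpy sketch of the step (1)-(4) chain, with frame names taken from the text and made-up numeric transforms for illustration:

```python
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Step (1): arm forward kinematics gives T_B_C1 (camera at C1 in base frame).
T_B_C1 = transform(np.eye(3), [0.5, 0.0, 0.8])
# Step (2): detecting the large code from C1 gives T_C1_QR1.
T_C1_QR1 = transform(np.eye(3), [0.0, 0.0, 0.6])
# Step (3): the decoded payload gives T_QR1_Code2 (small code w.r.t. large).
T_QR1_Code2 = transform(np.eye(3), [0.3, -0.2, 0.0])
# Step (4): compose to place the small code in the base frame.
T_B_Code2 = T_B_C1 @ T_C1_QR1 @ T_QR1_Code2
# Steps (7)-(8) chain the same way after re-measuring from C3:
#   T_B_PA2 = T_B_C3 @ T_C3_Code2 @ T_Code2_PA2
```

Identity rotations keep the example readable; a real run would use the rotations recovered from the code's corner geometry.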
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.
Claims (8)
1. An unmanned inspection method for a power distribution station based on machine vision, characterized in that it employs a mobile trolley, a camera mechanical arm, a recognition camera with known intrinsic parameters, and a microcomputer, wherein a large two-dimensional code, small two-dimensional codes and recognizable instrument panels are arranged on a power distribution cabinet in the power distribution station; the method specifically comprises the following steps:
step one: the mobile trolley moves to the front of the power distribution cabinet of the transformer substation by means of a positioning system, and the camera mechanical arm initializes the pose;
step two: the staff remotely inputs an operation code and transmits the operation code to the mobile trolley;
step three: the identification camera scans and decodes the large two-dimensional code on the power distribution cabinet;
step four: the microcomputer matches the information decoded by the large two-dimensional code with the operation code input remotely to determine the small two-dimensional code and the position information thereof required by the next operation;
step five: the microcomputer performs motion planning and motion control on the camera mechanical arm according to the matched result, and moves the identification camera to a far observation point C2 in front of the small two-dimensional code;
step six: the microcomputer sends an instruction to move the recognition camera to the near observation point C3 in front of the small two-dimensional code, where it recognizes and decodes the code and obtains the position of the instrument panel from the decoded information;
step seven: and performing motion planning and motion control on the camera mechanical arm, moving to the front of the target instrument panel, and reading data of the target instrument panel.
2. The machine vision-based power distribution station unmanned inspection method according to claim 1, wherein in step three the recognition camera performs preliminary acquisition and preprocessing of the large two-dimensional code and judges its characteristics; if the judgment condition is not met, the camera mechanical arm performs the M1 pose transformation and the acquisition and preprocessing actions are repeated until the condition is met; the specific steps are:
S1: the recognition camera collects the large two-dimensional code for the first time in the initial pose;
S2: preprocess the acquired image;
S3: perform contour extraction on the processed large two-dimensional code image;
S4: judge according to the contour characteristics of the two-dimensional code; if the judgment condition is not met, perform the M1 pose transformation with the camera mechanical arm and repeat S1-S4 until it is met;
S5: recognize and decode the two-dimensional code.
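The patent does not spell out the "contour characteristics" judged in S4. For QR codes the usual criterion is the finder patterns, whose black/white run lengths along any scan line through a finder center follow a 1:1:3:1:1 ratio; the sketch below implements that ratio test on one binarized scan line, offered as one plausible reading of the judgment condition:

```python
def run_lengths(row):
    """Collapse a binary scan line (0 = white, 1 = black) into [value, length] runs."""
    runs = []
    for px in row:
        if runs and runs[-1][0] == px:
            runs[-1][1] += 1
        else:
            runs.append([px, 1])
    return runs

def has_finder_ratio(row, tol=0.5):
    """True if some 5 consecutive runs (black first) fit the 1:1:3:1:1 ratio."""
    runs = run_lengths(row)
    for i in range(len(runs) - 4):
        window = runs[i:i + 5]
        if window[0][0] != 1:  # a finder pattern starts with a black run
            continue
        unit = sum(r[1] for r in window) / 7.0  # 1+1+3+1+1 = 7 module widths
        expect = [1, 1, 3, 1, 1]
        if all(abs(r[1] - e * unit) <= tol * unit for r, e in zip(window, expect)):
            return True
    return False
```

A full detector would run this test along rows and columns and intersect the hits to locate the three finder centers.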
3. The machine vision-based power distribution station unmanned inspection method according to claim 2, wherein in step S5 the specific steps of recognizing and decoding the two-dimensional code comprise:
T1: extracting the four corner-point pixel coordinates of the two-dimensional code through image processing;
T2: using the four corner pixel coordinates, obtaining the homography matrix H of the perspective-correction transformation according to formula (1-1) and transforming the whole image with it, thereby eliminating the projective distortion of the two-dimensional code's perspective image;
T3: decoding the corrected two-dimensional code to obtain the decoded information.
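Formula (1-1) itself is not reproduced in this text; the standard way to obtain H from four corner correspondences is the direct linear transform (DLT). A numpy sketch under the assumption that H should map the distorted corner pixels onto an ideal square:

```python
import numpy as np

def homography_from_points(src, dst):
    """DLT: solve for the 3x3 H with dst ~ H @ src from 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the 8x9 system A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null space of A (smallest right singular vector) holds the 9 entries of H.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, pt):
    """Apply H to a 2D point, normalizing the homogeneous coordinate."""
    q = H @ np.array([pt[0], pt[1], 1.0])
    return q[:2] / q[2]
```

Warping the whole image with this H (step T2) removes the projective distortion before the decoder runs.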
4. The machine vision-based power distribution station unmanned inspection method according to claim 1, wherein in step four the decoded information is matched with the operation code input in step two to obtain the specification and position information of the target small two-dimensional code; at the same time, three-dimensional coordinate calculation and reconstruction are performed.
5. The machine vision-based power distribution station unmanned inspection method according to claim 4, wherein in step four the decoded information contains the small two-dimensional code specification and the position of the small two-dimensional code relative to the large two-dimensional code; the three-dimensional coordinate calculation and reconstruction specifically comprises the following steps:
K1: through image processing, extract the four corner-point pixel coordinates of the two-dimensional code, p1, p2, p3, p4;
K2: using the camera intrinsic parameters together with formula (2-1), obtain the homogeneous coordinates of the four points on the normalized plane, p_n1, p_n2, p_n3, p_n4;
K3: keep p_n1 on the normalized plane with amplification factor t1 = 1, then use the principle that the diagonals of a square bisect each other, together with formula (3-1), to solve the amplification factors t2, t3, t4 of the other three points and obtain four new points p_b1 (= p_n1), p_b2, p_b3, p_b4 forming a reference plane, the four points forming a parallelogram;
[p_b1 p_b2 p_b3 p_b4] = [p_n1  t2·p_n2  t3·p_n3  t4·p_n4]  (3-1)
K4: from the four reference-plane points, calculate the four side lengths l1, l2, l3, l4, their average L, and the respective differences e1, e2, e3, e4; move each point by a distance ei/2 along its connecting line, modifying in the direction that satisfies the equal-sides condition of a square, finally obtaining four modified reference points p_b1', p_b2', p_b3', p_b4';
K5: if the average difference e is greater than 10⁻⁶, the quadrilateral on the reference plane is considered not yet a standard square; normalize the modified reference points back to p_n1, p_n2, p_n3, p_n4 and return to step K3;
K6: if the average difference e is less than 10⁻⁶, the quadrilateral on the reference plane is considered a standard square, and the spatial coordinates p_s1, p_s2, p_s3, p_s4 are obtained from formula (6-1), i.e. by the ratio of the real two-dimensional code width D to the four-side average L;
K7: obtain the pose of the two-dimensional code's center point;
K8: from the pose of the camera mechanical arm base, determine the coordinates of the current arm end relative to the base coordinate system and the coordinates of the large two-dimensional code in the base coordinate system;
K9: using the large two-dimensional code coordinates obtained in K7, convert the position of the small two-dimensional code relative to the large one in the decoded information into small two-dimensional code coordinates in the base coordinate system, and determine the coordinates of the near observation point C3 at a fixed distance in front of the small two-dimensional code.
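Steps K1-K6 recover the corners' camera-frame coordinates from a single image of a square of known width D. A noise-free numpy sketch: back-project with K⁻¹ (formula (2-1)), solve the diagonal-bisection constraint of (3-1) for the amplification factors, and scale by the D/L ratio of (6-1). The iterative refinement K4-K5 is omitted here, since with exact pixel coordinates one pass already yields a square:

```python
import numpy as np

def square_corners_3d(pixels, K, D):
    """Recover camera-frame corners of a square of side D from 4 pixel corners.

    Corner order must run around the square, so that (1,3) and (2,4) are the
    diagonals used by the bisection constraint.
    """
    Kinv = np.linalg.inv(K)
    # K2 / (2-1): back-project the pixels onto the normalized (z = 1) plane.
    pn = [Kinv @ np.array([u, v, 1.0]) for u, v in pixels]
    # K3 / (3-1): with t1 = 1, the diagonals of a square bisect each other:
    #   pn1 + t3*pn3 = t2*pn2 + t4*pn4   ->  a 3x3 linear system in t2, t3, t4.
    A = np.column_stack([pn[1], -pn[2], pn[3]])
    t2, t3, t4 = np.linalg.solve(A, pn[0])
    pb = [pn[0], t2 * pn[1], t3 * pn[2], t4 * pn[3]]
    # K6 / (6-1): scale the parallelogram so its mean side length equals D.
    sides = [np.linalg.norm(pb[i] - pb[(i + 1) % 4]) for i in range(4)]
    scale = D / np.mean(sides)
    return [scale * p for p in pb]
```

The amplification factors it solves for are the depth ratios t_i = z_i / z1, which is why scaling the result by D/L lands the corners at their true camera-frame positions.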
6. The machine vision-based power distribution station unmanned inspection method according to claim 1, wherein in the fifth step and the seventh step, the motion planning of the microcomputer to the terminal of the mechanical arm of the camera is composed of a path planning program and a track planning program; the motion control is a program based on a PID control algorithm.
7. The machine vision-based power distribution station unmanned inspection method according to claim 1, wherein in step six the microcomputer, through a feedback control program, controls the recognition camera to move close to the small two-dimensional code, from the far observation point C2 to the near observation point C3 in front of it; the feedback control program specifically comprises:
P1: according to the residual, controlling the camera mechanical arm to keep moving toward the front of the small two-dimensional code;
P2: judging whether the two-dimensional code is at the center of the recognition camera's field of view; if not, calculating the residual between the code and the field-of-view center and repeating P1-P2; if it is, executing P3;
P3: controlling the camera mechanical arm to move horizontally to the near observation point C3 at a fixed distance in front of the small two-dimensional code.
8. The machine vision-based power distribution station unmanned inspection method according to claim 1, wherein the small two-dimensional code on the power distribution cabinet is a QR Code or an AprilTag.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010861785.7A CN111985420B (en) | 2020-08-25 | 2020-08-25 | Unmanned inspection method for power distribution station based on machine vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111985420A CN111985420A (en) | 2020-11-24 |
CN111985420B true CN111985420B (en) | 2023-08-22 |
Family
ID=73443681
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010861785.7A Active CN111985420B (en) | 2020-08-25 | 2020-08-25 | Unmanned inspection method for power distribution station based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111985420B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112975975A (en) * | 2021-03-02 | 2021-06-18 | 路邦康建有限公司 | Robot control interface correction method and hospital clinical auxiliary robot thereof |
CN113510712A (en) * | 2021-08-04 | 2021-10-19 | 国网浙江省电力有限公司嘉兴供电公司 | Mechanical arm path planning method for transformer substation operation robot |
CN113689096B (en) * | 2021-08-11 | 2024-02-27 | 深圳市佳康捷科技有限公司 | Storage sorting method and system for full two-dimension code real-time positioning |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004171073A (en) * | 2002-11-18 | 2004-06-17 | Canon Electronics Inc | Two-dimensional code reading device |
CN106323294A (en) * | 2016-11-04 | 2017-01-11 | 新疆大学 | Positioning method and device for patrol robot of transformer substation |
CN106991460A (en) * | 2017-01-23 | 2017-07-28 | 中山大学 | A kind of quick detection and localization algorithm of QR codes |
CN107507174A (en) * | 2017-08-16 | 2017-12-22 | 杭州意能电力技术有限公司 | Power plant's instrument equipment drawing based on hand-held intelligent inspection is as recognition methods and system |
CN109035474A (en) * | 2018-07-27 | 2018-12-18 | 国网江苏省电力有限公司苏州供电分公司 | Method for inspecting and system based on two dimensional code |
CN110163912A (en) * | 2019-04-29 | 2019-08-23 | 达泊(东莞)智能科技有限公司 | Two dimensional code pose scaling method, apparatus and system |
CN110673612A (en) * | 2019-10-21 | 2020-01-10 | 重庆邮电大学 | Two-dimensional code guide control method for autonomous mobile robot |
Non-Patent Citations (1)
Title |
---|
Fan Liuyi. Research on vision-based autonomous navigation and path-following technology for inspection UAVs. China Master's Theses Full-text Database (Engineering Science and Technology II), 2020, p. C031-817. *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111985420B (en) | Unmanned inspection method for power distribution station based on machine vision | |
Ling et al. | Dual-arm cooperation and implementing for robotic harvesting tomato using binocular vision | |
CN113610921B (en) | Hybrid workpiece gripping method, apparatus, and computer readable storage medium | |
CN106853639A (en) | A kind of battery of mobile phone automatic assembly system and its control method | |
CN110281231B (en) | Three-dimensional vision grabbing method for mobile robot for unmanned FDM additive manufacturing | |
CN114289934B (en) | Automatic welding system and method for large structural part based on three-dimensional vision | |
CN110293559B (en) | Installation method for automatically identifying, positioning and aligning | |
CN114714365B (en) | Disordered workpiece grabbing method and system based on cloud platform | |
CN107123145B (en) | Elevator button positioning and identifying method based on artificial mark and geometric transformation | |
CN112561886A (en) | Automatic workpiece sorting method and system based on machine vision | |
CN114474056A (en) | Grabbing operation-oriented monocular vision high-precision target positioning method | |
CN113319859B (en) | Robot teaching method, system and device and electronic equipment | |
CN111452045B (en) | Reinforcing steel bar identification marking system and method based on stereoscopic vision | |
CN111005163B (en) | Automatic leather sewing method, device, equipment and computer readable storage medium | |
CN109591010A (en) | Industrial robot kinematics parameter based on space vector method obtains and method of calibration | |
CN113664838A (en) | Robot positioning placement control method and device, electronic equipment and storage medium | |
CN210386980U (en) | Machine vision-based intelligent cooling bed control system | |
CN116542914A (en) | Weld joint extraction and fitting method based on 3D point cloud | |
CN106926241A (en) | A kind of the tow-armed robot assembly method and system of view-based access control model guiding | |
CN110530289A (en) | A kind of mechanical hand three-dimensional self-scanning device and scan method based on camera anticollision | |
CN117506931A (en) | Groove cutting path planning and correcting equipment and method based on machine vision | |
JP2015136764A (en) | Control device, robot system, robot and robot control method | |
CN110421565B (en) | Robot global positioning and measuring system and method for practical training | |
CN116175582A (en) | Intelligent mechanical arm control system and control method based on machine vision | |
Kalitsios et al. | Vision-enhanced system for human-robot disassembly factory cells: introducing a new screw dataset |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20231029 Address after: B-001, 4th Floor, Building A, Dongsheng Building, No. 8 Zhongguancun East Road, Haidian District, Beijing, 100083 Patentee after: BEIJING HUACHING INTELLIGENT TECHNOLOGY CO.,LTD. Address before: Room SA31, Flat C, Dongsheng Building, No. 8 Zhongguancun Road, Haidian District, Beijing, 100190 Patentee before: Beijing otereb Technology Co.,Ltd. |