CN115609591B - Visual positioning method and system based on 2D Marker and compound robot

Info

Publication number
CN115609591B
CN115609591B
Authority
CN
China
Prior art keywords
marker
point
corner
pose
points
Prior art date
Legal status
Active
Application number
CN202211463733.XA
Other languages
Chinese (zh)
Other versions
CN115609591A (en)
Inventor
王益亮
陆蕴凡
石岩
李华伟
沈锴
陈忠伟
赵越
Current Assignee
Shanghai Xiangong Intelligent Technology Co ltd
Original Assignee
Shanghai Xiangong Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Xiangong Intelligent Technology Co ltd filed Critical Shanghai Xiangong Intelligent Technology Co ltd
Priority to CN202211463733.XA
Publication of CN115609591A
Application granted
Publication of CN115609591B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661: Programme controls characterised by task planning, object-oriented languages
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/136: Segmentation; Edge detection involving thresholding
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/20: Special algorithmic details
    • G06T2207/20112: Image segmentation details
    • G06T2207/20164: Salient point detection; Corner detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a visual positioning method and system based on a 2D Marker, and a compound robot. The method comprises the following steps: fixing the 2D Marker beside the position of the target object; acquiring a 2D Marker image with a hand-eye-calibrated camera, performing threshold segmentation to obtain a binary image, performing contour searching to obtain all contours in the image, then performing corner detection and circular reference detection to identify the coordinates of the inner and outer corner points of the frame feature and the radius and circle-center coordinates of each circular feature, and ordering the points with the corner point closest to the circle centers as the starting point; after sub-pixelating each point, calculating the 2D Marker plane pose; and then, according to the acquired plane pose, teaching the conversion relation between the target object and the 2D Marker so as to obtain the pose of the target object under the base coordinate system, thereby improving the pose calculation accuracy of the 2D Marker.

Description

Visual positioning method and system based on 2D Marker and compound robot
Technical Field
The invention relates to visual positioning technology, and in particular to a visual positioning method and system based on a 2D Marker, and a compound robot.
Background
With the development of intelligence and digitization in the warehouse logistics industry, AMRs (Autonomous Mobile Robots) for handling and robotic arms for gripping are widely used in industrial production in various fields. In recent years, the rapid development of smart factories has continuously raised the technical requirements of the intelligent logistics field, and a pure AMR or a fixed mechanical arm alone can no longer meet the need for automatic transfer of parts between complex production lines; hence the compound robot, combining an AMR and a mechanical arm, came into being. Affected by factors such as positioning and navigation technology and the factory environment, the positioning accuracy of the compound robot alone cannot achieve accurate grabbing or placing of goods, so a vision system must be mounted at the end of the compound robot's arm to compensate and correct the robot's positioning errors and realize accurate grabbing and placing by the compound robot.
The end vision system of a compound robot falls into 3D and 2D schemes. The 3D scheme carries a 3D camera and collects point cloud data for recognition; although it can identify the target object directly, it is limited by the imaging accuracy of the 3D camera, its recognition error is usually larger, and the 3D camera has the drawbacks of high price, large volume, heavy weight and a slower recognition cycle, which restrict its application in compound robots. The 2D scheme carries a 2D camera and has the advantages of low price, small volume, light weight, fast recognition cycle and high accuracy, so it has wider application scenarios in compound robots.
In general, a 2D camera cannot directly identify the three-dimensional spatial pose of a target object and must recognize it by means of a specific Marker, exploiting the fact that the spatial positional relationship between the Marker and the grabbing target is fixed to realize accurate grabbing and placing by the compound robot.
For example, CN111516006B discloses a compound robot operation method based on a specific mark, which uses an ArUco tag or a custom simple tag for positioning to obtain the coordinates of the tag in the coordinate system of the mechanical arm. However, that technique can only obtain the coordinates of the tag itself, which means the mechanical arm can only grasp at the position of the tag; when there is a large distance between the tag position and the grasping position, grasping cannot be performed directly.
That technical scheme is therefore greatly limited by the usage scenario: the pose of the tag is constrained to coincide with the grabbing pose, otherwise grasping fails. Secondly, the tag used in that technique calculates the pose from only 4 control points, and the identified control points are not refined to sub-pixel accuracy, so the overall pose calculation error is large and accurate grabbing by the compound robot is difficult to realize.
Disclosure of Invention
The invention mainly aims to provide a visual positioning method and system based on a 2D Marker and a compound robot so as to improve the pose calculation precision of the 2D Marker.
In order to achieve the above object, according to a first aspect of the present invention, there is provided a 2D Marker-based visual positioning method, comprising the following steps:
step S100: fixing the 2D Marker beside the position of the target object, where the marking features of the 2D Marker include: a rectangular frame feature with obvious color distinction and several circular features of different radii, the circular features being clustered inside the frame feature near the same corner point;
step S200: acquiring a 2D Marker image with a hand-eye-calibrated camera, performing threshold segmentation to obtain a binary image, performing contour searching to obtain all contours in the image, then performing corner detection and circular reference detection to identify the coordinates of the inner and outer corner points of the frame feature and the radius and circle-center coordinates of each circular feature, and ordering the points with the corner point closest to the circle centers as the starting point;
step S300: after sub-pixelating each point, calculating the 2D Marker plane pose, which includes: according to the known physical size of the 2D Marker, establishing a space coordinate system Ow with the 2D Marker as the XOY plane; matching the corner and circle-center points acquired in step S200 with the three-dimensional space on the Ow coordinate system; and, with known camera internal parameters, calculating the plane pose of the 2D Marker through a PnP algorithm;
step S400: teaching the conversion relation between the target object and the 2D Marker according to the plane pose obtained in step S300, so as to obtain the pose of the target object under the base coordinate system.
In a possibly preferred embodiment, the step S200 further includes an image denoising step: and carrying out noise reduction treatment on the acquired 2D Marker image by adopting a Gaussian smoothing algorithm.
In a possibly preferred embodiment, in the step S200, the acquired 2D Marker image is subjected to adaptive thresholding using a maximum inter-class variance method, so as to obtain a binary image.
In a possibly preferred embodiment, the step in S200 of obtaining all contours in the image and performing corner detection includes: performing polygon fitting on the contour: connect the first and last points on the contour curve into a straight line, compute the distances from all points on the contour to the line, and find the maximum distance value $d_{max}$; define a tolerance $D$; if $d_{max} < D$, all intermediate contour points between the two endpoints are discarded; if $d_{max} > D$, the coordinate point corresponding to $d_{max}$ is retained and the contour is divided into two sub-contours at that point; the method is repeated on the sub-contours, and the retained coordinate points finally serve as the vertices of the fitted polygon; quadrilaterals are then screened from the polygons, the deviation of each angle from a right angle is calculated, and when the deviation meets a preset condition the quadrilateral is considered a rectangular frame, with each vertex of the polygon being a corner point.
In a possibly preferred embodiment, the circular reference detection step includes: performing convexity, inertia-ratio and roundness judgment calculations on the polygons to locate the corresponding contours in the image, thereby obtaining the radius and center point coordinates of each circular feature.
In a possibly preferred embodiment, the sub-pixelation processing step includes:
Let $q$ be the sub-pixel point, and let $p_i$ be a point in the neighborhood of $q$ with known coordinates; $g_i$ is the gray gradient at $p_i$. If $p_i$ lies on a pixel edge, the gradient direction of the pixel at $p_i$ is perpendicular to the edge direction; since the vector $q-p_i$ coincides with the edge direction, the dot product of the vector $q-p_i$ and the gradient vector at the point is 0:

$$g_i^{T}(q-p_i)=0$$

The equation is expanded and solved:

$$g_i^{T}q=g_i^{T}p_i$$

A number of points $p_i$ are collected around the corner point and each is given a weight $w_i$ according to its distance from the center; a system of equations is constructed according to the above and solved for $q$ using least squares:

$$q=\Bigl(\sum_i w_i\,g_i g_i^{T}\Bigr)^{-1}\sum_i w_i\,g_i g_i^{T}\,p_i$$
In order to achieve the above object, according to a second aspect of the present invention, there is also provided a 2D Marker-based visual positioning system for identifying a 2D Marker as described above, wherein the visual positioning system comprises:
a storage unit, for storing a program comprising the steps of the 2D Marker-based visual positioning method according to any one of claims 1 to 6, for timely retrieval and execution by the control unit; a camera; a mechanical arm; a processing unit; and an information output unit;
wherein the camera is arranged at the end of the mechanical arm, and the control unit is used to coordinate the following:
after the camera has been hand-eye calibrated, the mechanical arm drives the camera to acquire a 2D Marker image;
the processing unit is used to denoise the 2D Marker image, perform threshold segmentation to obtain a binary image, perform contour searching to obtain all contours in the image, perform corner detection and circular reference detection to identify the coordinates of the inner and outer corner points of the frame feature and the radius and circle-center coordinates of each circular feature, and order the points with the corner point closest to the circle centers as the starting point; then, after sub-pixelating each point, calculate the 2D Marker plane pose, which includes: according to the known physical size of the 2D Marker, establishing a space coordinate system Ow with the 2D Marker as the XOY plane; establishing matched key points between the previously acquired corner and circle-center points and the three-dimensional space on the Ow coordinate system; and, with known camera internal parameters, calculating the plane pose of the 2D Marker through a PnP algorithm, for teaching the conversion relation between the target object and the 2D Marker and obtaining the pose of the target object under the mechanical arm base coordinate system;
and the information output unit is used for outputting the pose of the target object under the mechanical arm base coordinate system.
In a possibly preferred embodiment, the step of obtaining all contours in the image and performing corner detection includes: performing polygon fitting on the contour: connect the first and last points on the contour curve into a straight line, compute the distances from all points on the contour to the line, and find the maximum distance value $d_{max}$; define a tolerance $D$; if $d_{max} < D$, all intermediate contour points between the two endpoints are discarded; if $d_{max} > D$, the coordinate point corresponding to $d_{max}$ is retained and the contour is divided into two sub-contours at that point; the method is repeated on the sub-contours, and the retained coordinate points finally serve as the vertices of the fitted polygon; quadrilaterals are then screened from the polygons, the deviation of each angle from a right angle is calculated, and when the deviation meets a preset condition the quadrilateral is considered a rectangular frame, with each vertex of the polygon being a corner point.
In a possibly preferred embodiment, the circular reference detection step includes: performing convexity, inertia-ratio and roundness judgment calculations on the polygons to locate the corresponding contours in the image, thereby obtaining the radius and center point coordinates of each circular feature.
In order to achieve the above object, according to a third aspect of the present invention, there is also provided a compound robot, comprising: an autonomous mobile robot and a vision grabbing unit, wherein the vision grabbing unit is any one of the above 2D Marker-based visual positioning systems.
Through the visual positioning method and system based on the 2D Marker and the compound robot provided by the invention, the specially designed 2D Marker features and the corresponding recognition method significantly improve the pose calculation accuracy of the 2D Marker, and thereby the accuracy with which the compound robot grabs/places the target object. In addition, the conversion relation between the 2D Marker and the object to be grabbed is calculated by teaching, so the 2D Marker can be installed at any position and angle near the object to be grabbed, solving the problem that the installation position of a traditional 2D Marker is heavily restricted by the scene.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention. In the drawings:
FIG. 1 is a diagram of the steps of a method according to a first embodiment of the present invention;
FIG. 2 is a schematic logic diagram of a first embodiment of the present invention;
FIG. 3 is a schematic diagram of a 2D Marker pose recognition flow in the first embodiment of the present invention;
FIG. 4 is a schematic diagram of a 2D Marker structure according to a first embodiment of the present invention;
FIG. 5 is a schematic diagram of a 2D Marker corner sequence in a first embodiment of the present invention;
FIG. 6 is a schematic diagram of a configuration of a compound robot performing a grabbing task based on a 2D Marker visual positioning method according to a first embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a 2D Marker-based visual positioning system according to a second embodiment of the present invention.
Detailed Description
In order that those skilled in the art can better understand the technical solutions of the present invention, the following description will clearly and completely describe the specific technical solutions of the present invention in conjunction with the embodiments to help those skilled in the art to further understand the present invention. It will be apparent that the embodiments described herein are merely some, but not all embodiments of the invention. It should be noted that embodiments and features of embodiments in this application may be combined with each other by those of ordinary skill in the art without departing from the inventive concept and conflict. All other embodiments, which are derived from the embodiments herein without creative effort for a person skilled in the art, shall fall within the disclosure and the protection scope of the present invention.
Furthermore, the terms "first," "second," "S100," "S200," and the like in the description and in the claims and drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those described herein. Also, the terms "comprising" and "having" and any variations thereof herein are intended to cover a non-exclusive inclusion. Unless specifically stated or limited otherwise, the terms "disposed," "configured," "mounted," "connected," "coupled" and "connected" are to be construed broadly, e.g., as being either permanently connected, removably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the terms in this case will be understood by those skilled in the art in view of the specific circumstances and in combination with the prior art.
The following takes visual positioning and grabbing of a target object by a compound robot as an example. The compound robot in this example comprises: an AMR (Autonomous Mobile Robot), a mechanical arm and a computer vision system, where the AMR is responsible for movement, the mechanical arm is responsible for picking and placing goods, and the computer vision system is equivalent to the eye of the mechanical arm, adjusting the arm so that it picks and places accurately. Compared with a single-function transfer robot, the compound robot can accomplish complex collaborative tasks such as freight transport and goods pick-and-place.
(I)
As shown in fig. 1 to 6, in a first aspect of the present invention, the steps of the 2D Marker-based visual positioning method include:
In step S100, the 2D Marker is fixed beside the position of the target object, such that the relative positional relationship between the 2D Marker (hereinafter Marker/mark) and the target remains unchanged.
In step S200, a hand-eye-calibrated camera acquires a 2D Marker image, performs threshold segmentation to obtain a binary image, performs contour searching to obtain all contours in the image, then performs corner detection and circular reference detection to identify the coordinates of the inner and outer corner points of the frame feature and the radius and circle-center coordinates of each circular feature, and orders the points with the corner point closest to the circle centers as the starting point.
In step S300, after sub-pixelating each point, the 2D Marker plane pose is calculated, which includes: according to the known physical size of the 2D Marker, establishing a space coordinate system Ow with the 2D Marker as the XOY plane; matching the corner and circle-center points acquired in step S200 with the three-dimensional space on the Ow coordinate system; and, with known camera internal parameters, calculating the plane pose of the 2D Marker through a PnP algorithm.
Step S400 teaches the conversion relation between the target object and the 2D Marker according to the plane pose obtained in step S300, so as to obtain the pose of the target object under the base coordinate system.
Specifically, the visual grabbing goal of the compound robot is to obtain the pose of the target object under the mechanical arm base coordinate system. The pose consists of two parts: the three-dimensional space coordinates $t=(x,y,z)^{T}$ and the spatial rotation $R$. The spatial rotation can be represented in more than one way (Euler angles, quaternions, rotation matrices, rotation vectors, etc.); for convenience of calculation the rotation is converted into rotation-matrix form, and the space coordinates and rotation are combined into a homogeneous matrix

$$T=\begin{bmatrix}R & t\\ 0 & 1\end{bmatrix}.$$

The pose of the target object under the base coordinate system, ${}^{base}T_{target}$, is:

$${}^{base}T_{target}={}^{base}T_{end}\cdot{}^{end}T_{cam}\cdot{}^{cam}T_{target}$$

where ${}^{base}T_{end}$ represents the pose of the flange center at the end of the mechanical arm under the base coordinate system, ${}^{end}T_{cam}$ represents the conversion from the camera to the end of the mechanical arm, also called the hand-eye matrix, and ${}^{cam}T_{target}$ represents the pose of the target object in the camera coordinate system.
From the above formula, to obtain the pose ${}^{base}T_{target}$ of the target object under the base coordinate system, ${}^{base}T_{end}$, ${}^{end}T_{cam}$ and ${}^{cam}T_{target}$ must all be known. Among them, ${}^{base}T_{end}$, the pose data of the end of the mechanical arm, can be read directly from the collaborative mechanical arm and can be regarded as a known quantity, while ${}^{end}T_{cam}$, the conversion relation from the camera to the end flange, needs to be obtained through calibration.
For example, the compound robot of the present technical scheme adopts the "eye on hand" configuration, i.e., the camera is mounted at the end of the mechanical arm. Calibration of the hand-eye system is a precondition for visual recognition and grabbing, and calibration accuracy plays a decisive role in grabbing accuracy. Calibration of a 2D camera hand-eye system is divided into two steps: internal reference calibration and external parameter calibration.
1. Internal reference calibration
The intrinsic properties of the camera are called the camera internal parameters, which consist of the focal lengths $f_x, f_y$, the principal point offsets $c_x, c_y$ and a series of distortion coefficients; the internal parameters are fixed once the focal length and focusing ring of the camera are fixed. According to the camera pinhole imaging model:

$$s\begin{bmatrix}u\\v\\1\end{bmatrix}=K\begin{bmatrix}X\\Y\\Z\end{bmatrix},\qquad K=\begin{bmatrix}f_x&0&c_x\\0&f_y&c_y\\0&0&1\end{bmatrix}$$

where $(u,v)$ is the pixel coordinate point and $(X,Y,Z)$ is the spatial coordinate point. The internal reference calibration process usually requires specific calibration objects, which commonly take forms such as checkerboard calibration plates, circular calibration plates and ChArUco calibration plates. The calibration plate is a plane $\pi_1$ in three-dimensional space; the imaging plane $\pi_2$ is obtained according to the projection imaging relationship shown above, and the homography matrix $H$ between the two planes can be determined from the coordinates of corresponding points on the two planes. Since the specification and size of the calibration plate are known and the corner points in the image can be extracted by a corner detection algorithm, one obtains $p = H\,P$, where $p$ is a point coordinate in the image, $P$ is the corresponding point coordinate on the calibration plate, and $H$ represents the homography matrix of the two planes. When several calibration plate images are acquired, several groups of homography matrices $H$ are obtained, and decomposing $H$ yields the camera internal parameter matrix $K$.
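As an illustration only, the internal reference calibration described above can be sketched with OpenCV as follows; the board dimensions, square size and image file names are assumed for the example:

```python
import cv2
import numpy as np

pattern = (9, 6)      # inner corners per row/column of the board (assumed)
square = 0.025        # board square size in metres (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for path in ["board_00.png", "board_01.png", "board_02.png"]:   # assumed files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)

# Each view contributes one plane-to-image homography; calibrateCamera
# decomposes them into the intrinsic matrix K and the distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
```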
2. External parameter calibration
The external parameter calibration computes the positional relationship of the camera at the end of the mechanical arm, called the hand-eye matrix; when the camera is mounted at the end of the gripper, the configuration is called "eye on hand". In this configuration the calibration plate is placed at a fixed position, and ${}^{cam}T_{board}$ represents the conversion relation between the calibration plate and the camera. Calibration plate images are acquired by moving the end position of the mechanical arm; since the relation between the mechanical arm base coordinates and the calibration plate is fixed while the relation between the camera and the arm end is what is sought, the following relation is obtained:

$${}^{base}T_{end,i}\cdot{}^{end}T_{cam}\cdot{}^{cam}T_{board,i}={}^{base}T_{end,j}\cdot{}^{end}T_{cam}\cdot{}^{cam}T_{board,j}$$

Converting it into the form $AX=XB$, several groups of correspondences are obtained by moving the end position of the mechanical arm, and the Tsai two-step method is used to solve the hand-eye matrix ${}^{end}T_{cam}$.
The remaining quantity, ${}^{cam}T_{target}$, the pose of the target in the camera coordinate system, is obtained through recognition.
The inventor notes that the Marker recognition process computes the conversion relation ${}^{cam}T_{marker}$ from the Marker to the camera coordinate system, so Marker recognition accuracy plays a vital role in the grabbing accuracy of the compound robot. The principle of the Markers used in the prior art is to regard the Marker as a plane, establish a space coordinate system (world coordinate system) on that plane, extract the corner points on the Marker to obtain their world coordinates and pixel coordinates, and, with known camera internal parameters, compute the conversion ${}^{cam}T_{marker}$ from the planar world coordinate system to the camera coordinate system.
Because a checkerboard has central symmetry, the orientation of a coordinate system established on a checkerboard Marker is not unique, so the checkerboard cannot be used for compound robot recognition. ArUco and AprilTag are two-dimensional codes with specific patterns whose orientation depends on the four corner points of the Marker frame, but with so few corner points the orientation error is larger. Moreover, in 2016 Alberto et al. demonstrated that the positioning error of a single AprilTag Marker increases with the distance of the Marker from the camera and with the angle between the Marker and the camera plane, so with the ArUco scheme the tag cannot be placed far from the capture position.
The above defects of the prior art come down to the low pose calculation accuracy of existing 2D Marker schemes, which is why the existing ArUco and AprilTag tag schemes cannot meet the requirement of accurate grabbing by the compound robot.
Therefore, the invention designs a 2D Marker and a corresponding recognition method, which effectively increase the recognition rate and improve the pose calculation accuracy. The marking features of the 2D Marker include: a rectangular frame feature with obvious color distinction and several circular features of different radii, the circular features being clustered inside the frame near the same corner point. As shown in figs. 4-5, in this example the frame feature is a rectangular black frame, while the circular features are two circles of different radii arranged near the lower right corner of the frame; the two circles do not overlap and are clearly separated.
It should be noted that the design concept of the 2D Marker derives from the observation that AprilTag and ArUco determine a unique Marker through specific black-and-white codes (similar to the black-and-white blocks in a two-dimensional code) which represent a unique ID; extracting and decoding the coded black-and-white blocks is a time-consuming operation, yet in compound robot applications the ID information is unnecessary, since only the Marker corner coordinates are needed to calculate the pose. Existing Markers therefore carry redundant information.
The black frame and circular fiducials allow the Marker to be located quickly, with no decoding step needed. Experimental data show that on the same equipment and with the same image size, where AprilTag and ArUco recognition takes 200-300 milliseconds, detection of the proposed Marker takes 40-50 milliseconds.
Further, after the 2D Marker is fixed beside the position of the target object, the hand-eye-calibrated camera acquires the 2D Marker image, and the corresponding recognition method then proceeds with the following steps:
step S210, denoising the image:
For the acquired Marker image, noise from the ambient light source, clutter and other sources would cause errors if image processing were applied directly to the original image, so noise reduction is required. Since this type of noise conforms to Gaussian noise, the image is denoised by Gaussian smoothing; the two-dimensional Gaussian smoothing function is:

$$G(x,y)=\frac{1}{2\pi\sigma_x\sigma_y}\exp\!\left(-\frac{(x-\mu_x)^2}{2\sigma_x^2}-\frac{(y-\mu_y)^2}{2\sigma_y^2}\right)$$

where $\mu_x$ and $\mu_y$ are the means of the Gaussian kernel, and $\sigma_x^2$ and $\sigma_y^2$ are the variances of the Gaussian kernel.
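For illustration, the denoising step could be implemented with OpenCV's Gaussian blur; the kernel size, sigma and file name below are assumed values, not ones specified by the patent:

```python
import cv2

img = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)   # assumed input image
denoised = cv2.GaussianBlur(img, (5, 5), 1.0)          # 5x5 kernel, sigma = 1.0
```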
Step S220 threshold segmentation:
the image is adaptively thresholded using a maximum inter-class variance method. First according to the maximum inter-class variance
Figure 173278DEST_PATH_IMAGE078
Calculate threshold +.>
Figure 712844DEST_PATH_IMAGE080
:
Figure 235092DEST_PATH_IMAGE081
Wherein:
Figure 543714DEST_PATH_IMAGE082
obtaining a threshold value
Figure 493215DEST_PATH_IMAGE080
Then, the image is segmented:
Figure 988918DEST_PATH_IMAGE083
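As a sketch, OpenCV's Otsu flag performs exactly this maximum inter-class variance thresholding (the file name is an assumption):

```python
import cv2

img = cv2.imread("marker_denoised.png", cv2.IMREAD_GRAYSCALE)  # assumed input
# With THRESH_OTSU the threshold argument (0 here) is ignored and T is
# chosen automatically to maximize the inter-class variance.
T, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```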
step S230, contour searching:
and (3) screening the binary image obtained by threshold segmentation, taking out some noise connected domains with smaller areas, and then carrying out contour searching on the rest connected domains. For example: for a pixel point with a pixel value of 1, if a pixel with a pixel value of 0 is found in the 4-neighborhood or 8-neighborhood of the pixel, the pixel is defined as a contour point, the image is traversed, and all contours are found.
Step S240, corner detection:
the inner and outer corner points of the black frame where the Marker is located are key points for calculating the pose, and the inner and outer corner points of the black frame are searched on the basis of contour searching.
The contours are first polygon-fitted using the Douglas-Peucker algorithm:
Connect the first and last points of the contour curve with a straight line, compute the distances from all points on the contour to this line, and find the maximum distance value $d_{max}$. Define a tolerance $D$: if $d_{max} < D$, all intermediate contour points between the two endpoints are discarded; if $d_{max} > D$, the coordinate point corresponding to $d_{max}$ is retained and the contour is split into two sub-contours at that point. The method is repeated on the sub-contours, and the retained coordinate points finally serve as the vertices of the fitted polygon.
The fitted polygons are then screened. Since the inner and outer contours of the black frame are square, quadrilaterals are first selected from the polygons; and since each angle of a square is 90°, the cosine value of each angle of the quadrilateral is calculated. Considering the influence of the shooting angle, if the maximum cosine magnitude among the four corners is below a preset threshold, the quadrilateral is considered a rectangle and each vertex of the polygon is a corner point.
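The corner-detection step can be sketched as follows, assuming OpenCV; approxPolyDP implements the Douglas-Peucker fitting, and the epsilon ratio and cosine tolerance are assumed values:

```python
import cv2
import numpy as np

def angle_cosine(p0, p1, p2):
    # Cosine of the angle at p1 formed by segments p1->p0 and p1->p2.
    d1, d2 = p0 - p1, p2 - p1
    return float(np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12))

def find_rectangles(contours, eps_ratio=0.02, cos_tol=0.3):
    rects = []
    for c in contours:
        poly = cv2.approxPolyDP(c, eps_ratio * cv2.arcLength(c, True), True)
        if len(poly) == 4 and cv2.isContourConvex(poly):
            pts = poly.reshape(4, 2).astype(np.float64)
            cosines = [abs(angle_cosine(pts[(i - 1) % 4], pts[i], pts[(i + 1) % 4]))
                       for i in range(4)]
            if max(cosines) < cos_tol:     # every corner close to 90 degrees
                rects.append(pts)
    return rects
```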
Step S250 circular reference detection:
in the Marker, besides a black frame, two circular references at the lower right corner are also important, for the detected polygon, the Convexity (consistency) is firstly judged, the Convexity is defined as the degree that the polygon is close to a convex polygon, the Convexity of the convex polygon is 1, and the Convexity calculation formula is as follows:
Figure DEST_PATH_IMAGE097
where S represents the area of the region enclosed by the outline and H represents the area of the smallest convex polygon enclosed by all vertices of the corresponding outline polygon. When convex=1, the contour is a convex polygon.
Inertia ratio (inertia ratio), which represents the degree of deviation of an elliptical orbit from an ideal circular shape, is in the range of
Figure DEST_PATH_IMAGE099
The closer the inertia rate is to 0, the flatter the graph is, the closer the inertia rate is to 1, the more the graph is round, and the calculation formula of the inertia rate i is as follows:
Figure DEST_PATH_IMAGE100
wherein c represents the semi-focal distance of the ellipse, a represents the semi-major axis of the ellipse,
roundness (Circularity), which means the degree of fullness of a pattern close to a circle, is in the range of
Figure DEST_PATH_IMAGE101
The closer the value is to 0, the closer the graph is to an infinitely elongated rectangle, the closer is to 1, the closer is to a circle, and the calculation formula of the roundness is:
Figure DEST_PATH_IMAGE102
where S represents the area of the pattern and C represents the perimeter of the pattern.
The position of the circular reference can be accurately positioned through judging three parameters of convexity, inertia rate and roundness, and the coordinates and the radius of the center point where the reference is located are obtained.
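For illustration, the three tests can be computed directly from a contour as sketched below; the acceptance thresholds, and the inertia ratio written as the semi-minor/semi-major axis ratio, are assumptions consistent with the definitions above:

```python
import cv2
import numpy as np

def is_circular_fiducial(contour, min_conv=0.95, min_inertia=0.8, min_round=0.8):
    if len(contour) < 5:                      # fitEllipse needs >= 5 points
        return False
    S = cv2.contourArea(contour)
    H = cv2.contourArea(cv2.convexHull(contour))
    convexity = S / H if H > 0 else 0.0
    (_, _), (w, h), _ = cv2.fitEllipse(contour)
    a, b = max(w, h) / 2.0, min(w, h) / 2.0   # semi-major / semi-minor axes
    inertia = b / a if a > 0 else 0.0         # equals sqrt(a^2 - c^2) / a
    perim = cv2.arcLength(contour, True)
    roundness = 4.0 * np.pi * S / (perim ** 2) if perim > 0 else 0.0
    return convexity >= min_conv and inertia >= min_inertia and roundness >= min_round
```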
Step S260, point ordering:
After the coordinates of the inner and outer corner points of the black frame and the center coordinates of the two circular references are obtained, the points are ordered, as shown in fig. 5: first the center coordinate of the circular reference with the larger radius, then the center coordinate of the circular reference with the smaller radius, and finally the corner coordinates of the inner and outer frames, where the corner point closest to the two circular references is taken as the starting point and the remaining corner points are arranged clockwise. This yields an ordered sequence of ten points on the image and fixes the orientation of the corner coordinate system.
Step S310 sub-pixelation:
furthermore, the corner detection can only obtain coordinates of a pixel level, the obtained corner coordinate values are integers, and for the high-precision positioning of the compound robot, the error caused by the corner of the pixel level is large, so that the corner coordinates need to be sub-pixelized in order to obtain the corner position coordinates with higher precision.
Assume that point $q$ is the true sub-pixel point with unknown coordinates, and that $p_i$ is a point in the neighborhood of $q$ with known coordinates; let $g_i$ be the gray gradient at $p_i$. If $p_i$ lies on a pixel edge, the gradient direction of the pixel at $p_i$ is perpendicular to the edge direction; and since the vector $q-p_i$ coincides with the edge direction, the dot product of the vector $q-p_i$ and the gradient vector at the point is 0:

$$g_i^{T}(q-p_i)=0$$

The equation is expanded and solved:

$$g_i^{T}q=g_i^{T}p_i$$

A number of points $p_i$ can be collected near the initial corner point, each given a weight $w_i$ according to its distance from the center. A system of equations is constructed according to the above and solved for $q$ using least squares:

$$q=\Bigl(\sum_i w_i\,g_i g_i^{T}\Bigr)^{-1}\sum_i w_i\,g_i g_i^{T}\,p_i$$
Through sub-pixelation, the pose calculation accuracy of the Marker can be effectively improved, enabling the operation accuracy of the compound robot to reach the millimeter level.
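As a sketch, OpenCV's cornerSubPix implements this gradient-orthogonality least-squares refinement; the window size and termination criteria below are assumed values:

```python
import cv2
import numpy as np

def refine_corners(gray, corners_px):
    # corners_px: initial integer corner coordinates, refined in a 5x5 window.
    corners = np.asarray(corners_px, np.float32).reshape(-1, 1, 2)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 40, 0.001)
    return cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
```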
Step S320, solving the pose:
assuming that the physical size of a Marker is known, the Marker is regarded as a space plane, and a space coordinate system is established by taking the Marker as an XOY plane
Figure DEST_PATH_IMAGE117
Corner points and center points are +.>
Figure 9295DEST_PATH_IMAGE117
The three-dimensional space coordinates on the coordinate system are known. Sub-pixel coordinates of the 10 key points in the image are obtained through recognition, the pose of the Marker in a camera coordinate system is calculated through the (2D-3D) coordinates of the corresponding Point pairs, and the pose of the Marker plane can be calculated through a PnP (permanent-n-Point) algorithm under the condition that the camera is known as an internal reference.
The solving modes of the PnP algorithm are various, and a direct linear transformation method (Direct Linear Transform, DLT), an EPnP method and a minimum reprojection error method are common, and the example of the scheme takes the minimum reprojection error method as an example to solve the Marker pose.
For example, according to the principle of pinhole imaging, the projection from the world coordinate system to the pixel coordinate system is:

$$s\,p=K\,T\,P_w$$

where $p=(u,v,1)^{T}$ are the pixel coordinates, $K$ is the camera internal parameter matrix, $T$ is the homogeneous matrix composed of the external parameters (i.e., the pose) from the Marker to the camera coordinate system, $P_w$ represents the homogeneous coordinates in the world coordinate system, and $s$ represents the depth of the feature point in the camera coordinate system.
The optimal external parameters $T^{*}$ are solved by minimizing the reprojection error:

$$T^{*}=\arg\min_{T}\;\frac{1}{2}\sum_{i=1}^{n}\Bigl\|\,p_i-\frac{1}{s_i}K\,T\,P_{w,i}\Bigr\|_2^{2}$$

thereby obtaining the pose of the Marker plane.
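A hedged sketch of the pose solve with OpenCV: the SOLVEPNP_ITERATIVE flag minimizes reprojection error as described, and the input arrays are assumed to hold the ten matched points:

```python
import cv2
import numpy as np

def marker_pose(object_pts, image_pts, K, dist):
    # object_pts: Nx3 points on the Marker plane (Z = 0 in the Ow frame)
    # image_pts:  Nx2 sub-pixel coordinates of the same points
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_pts, np.float64), np.asarray(image_pts, np.float64),
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)          # rotation vector -> rotation matrix
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T                            # cam_T_marker
```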
Step S410 teaches the conversion relationship between the target object and the 2D Marker:
by identifying the pose of Marker under the mechanical arm base standard system only
Figure DEST_PATH_IMAGE132
The object materials beside the Marker are required to be grabbed, and finally the pose of the object under the base mark system is required to be obtained>
Figure DEST_PATH_IMAGE133
. Therefore, the conversion relation between the object and the Marker is also required->
Figure DEST_PATH_IMAGE135
The conversion relation is obtained through teaching:
after recognizing Mark, the conversion relation from Mark to base target can be obtained
Figure 962448DEST_PATH_IMAGE132
:
Figure DEST_PATH_IMAGE136
Then the position of the tail end of the mechanical arm is the conversion relation from the target point target to the base coordinates of the mechanical arm by moving the tail end position of the mechanical arm to the grabbing position of the target object
Figure DEST_PATH_IMAGE137
When the relation between the target and the Marker is fixed, the conversion relation from the target point to the Marker can be calculated>
Figure 857723DEST_PATH_IMAGE135
Figure DEST_PATH_IMAGE139
Thereby, the pose of the target object under the basic coordinate system can be obtained.
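To illustrate the teaching step, a minimal sketch (matrix names are illustrative) of recovering and later reusing the fixed marker-to-target offset:

```python
import numpy as np

def teach_offset(base_T_marker, base_T_target):
    # marker_T_target = (base_T_marker)^-1 @ base_T_target, recorded once
    # while the arm end is held at the taught grabbing pose.
    return np.linalg.inv(base_T_marker) @ base_T_target

def grab_pose(base_T_marker, marker_T_target):
    # At run time: base_T_target = base_T_marker @ marker_T_target
    return base_T_marker @ marker_T_target
```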
In actual use, as shown in fig. 6, when the compound robot executes a grabbing task it is first scheduled to the designated site, and the mechanical arm is then moved to the recognition position to ensure that the Marker is within the camera's field of view. ${}^{base}T_{marker}$ is obtained from the recognized Marker, and together with the taught ${}^{marker}T_{target}$, the pose ${}^{base}T_{target}$ of the grabbing point target under the mechanical arm base system is obtained:

$${}^{base}T_{target}={}^{base}T_{marker}\cdot{}^{marker}T_{target}$$

so that the mechanical arm of the compound robot can accurately grasp the target object.
Thus, the scheme calculates the transformation between the Marker and the target to be grabbed by teaching, so no manual measurement or special workpiece is needed to guarantee the offset relationship; the installation of the Marker is not constrained by the material, machine station or other environmental factors, which improves the deployment flexibility of the compound robot and effectively shortens the deployment period and difficulty.
On the other hand, an AMR gradually accumulates a certain positioning error during autonomous movement, and the scheme effectively compensates the error introduced by AMR positioning, greatly improving the pick-and-place accuracy of the compound robot.
In addition, compared with 3D Marker recognition schemes, a compound robot using the 2D Marker recognition scheme has the advantages of fast cycle time, high accuracy and low cost. As noted, AprilTag and ArUco determine a unique Marker through specific black-and-white codes (similar to the black-and-white blocks of a two-dimensional code) representing a unique ID, and extracting and decoding the coded blocks is time-consuming; in compound robot applications the ID information is unnecessary, since only the Marker corner coordinates are needed to calculate the pose, so there is information redundancy. The scheme of the invention adopts a black frame and circular references, so the Marker can be located very quickly and no decoding step is needed: experimental data show that on the same equipment with the same image size, AprilTag and ArUco recognition takes 200-300 milliseconds while detection of the proposed Marker reaches 40-50 milliseconds.
(II)
As shown in fig. 7, corresponding to the first embodiment, the present invention further provides a 2D Marker-based visual positioning system for identifying a 2D Marker as described above, wherein the visual positioning system includes:
the storage unit is used for storing a program comprising the steps of the visual positioning method based on the 2D Marker in the embodiment I, so that the control unit, the camera, the mechanical arm, the processing unit and the information output unit can be timely adjusted and executed;
wherein the camera sets up at the arm end, the control unit for coordinate:
after the camera is calibrated by hands and eyes, the camera is driven by the mechanical arm to acquire a 2D Marker image;
the processing unit is used for carrying out image denoising processing on the 2D Marker image, carrying out threshold segmentation to obtain a binary image, carrying out contour searching to obtain all contours in the image, respectively carrying out corner detection and circular reference detection to identify the coordinates of the inner corner and the outer corner of the frame feature and the corresponding radius and circle center point coordinates of each circular feature, and carrying out point ordering by taking the corner closest to each circle center point as a starting point; and then, after sub-pixelating each point, calculating the 2D Marker plane pose, wherein the method comprises the following steps: according to the known physical size of the 2D Marker, a space coordinate system Ow is established by taking the 2D Marker as an XOY plane, key points of matching are established between each angular point and a center point which are acquired before and a three-dimensional space on the Ow coordinate system, under the condition that an internal reference of a camera is known, the plane pose of the 2D Marker is calculated through a PnP algorithm so as to be used for teaching the conversion relation between a target object and the 2D Marker, and the pose of the target object under the mechanical arm base coordinate system is acquired;
and the information output unit is used for outputting the pose of the target object under the mechanical arm base coordinate system.
In a preferred embodiment, the processing unit performs noise reduction processing on the acquired 2D Marker image by using a gaussian smoothing algorithm, and performs adaptive threshold segmentation on the acquired 2D Marker image by using a maximum inter-class variance method, so as to acquire a binary image.
Wherein, in a preferred embodiment, the sub-pixelation processing step includes:
Let $q$ be the sub-pixel point, and let $p_i$ be a point in the neighborhood of $q$ with known coordinates; $g_i$ is the gray gradient at $p_i$. If $p_i$ lies on a pixel edge, the gradient direction of the pixel at $p_i$ is perpendicular to the edge direction; since the vector $q-p_i$ coincides with the edge direction, the dot product of the vector $q-p_i$ and the gradient vector at the point is 0:

$$g_i^{T}(q-p_i)=0$$

The equation is expanded and solved:

$$g_i^{T}q=g_i^{T}p_i$$

A number of points $p_i$ are collected around the corner point and each is given a weight $w_i$ according to its distance from the center; a system of equations is constructed according to the above and solved for $q$ using least squares:

$$q=\Bigl(\sum_i w_i\,g_i g_i^{T}\Bigr)^{-1}\sum_i w_i\,g_i g_i^{T}\,p_i$$
Wherein, in a preferred embodiment, the corner detection step includes: performing polygon fitting on the contour: connect the first and last points on the contour curve into a straight line, compute the distances from all points on the contour to the line, and find the maximum distance value $d_{max}$; define a tolerance $D$; if $d_{max} < D$, all intermediate contour points between the two endpoints are discarded; if $d_{max} > D$, the coordinate point corresponding to $d_{max}$ is retained and the contour is divided into two sub-contours at that point; the method is repeated on the sub-contours, and the retained coordinate points finally serve as the vertices of the fitted polygon; quadrilaterals are then screened from the polygons, the deviation of each angle from a right angle is calculated, and when the deviation meets a preset condition the quadrilateral is considered a rectangular frame, with each vertex of the polygon being a corner point.
Wherein, in a preferred embodiment, the circular reference detection step includes: performing convexity, inertia-ratio and roundness judgment calculations on the polygons to locate the corresponding contours in the image, thereby obtaining the radius and center point coordinates of each circular feature.
Through the above 2D Marker-based visual positioning system, the transformation between the Marker and the target to be grabbed is obtained by teaching, so no manual measurement or special workpiece is needed to guarantee the offset relationship, and the installation of the Marker is not constrained by the material, machine station or other environmental factors. The system is therefore particularly suitable for deployment on a compound robot, and its flexibility effectively shortens the deployment period and difficulty.
On the other hand, an AMR gradually accumulates a certain positioning error during autonomous movement; the system effectively compensates the error introduced by AMR positioning and greatly improves the pick-and-place accuracy of the compound robot.
(III)
With reference to the first and second embodiments, a third aspect of the present invention further provides a compound robot, comprising: an autonomous mobile robot and a vision grabbing unit, wherein the vision grabbing unit is the 2D Marker-based visual positioning system of the second embodiment.
In summary, through the 2D Marker-based visual positioning method and system and the compound robot provided by the invention, the specially designed 2D Marker features and the corresponding recognition method significantly improve the pose calculation accuracy of the 2D Marker, and thereby the accuracy with which the compound robot grabs/places the target object. In addition, the conversion relation between the 2D Marker and the object to be grabbed is calculated by teaching, so the 2D Marker can be installed at any position and angle near the object to be grabbed, solving the problem that the installation position of a traditional 2D Marker is heavily restricted by the scene.
The preferred embodiments of the invention disclosed above are intended only to assist in the explanation of the invention. The preferred embodiments are not exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. The invention is to be limited only by the following claims and their full scope and equivalents, and any modifications, equivalents, improvements, etc., which fall within the spirit and principles of the invention are intended to be included within the scope of the invention.
It will be appreciated by those skilled in the art that the system, apparatus and their respective modules provided by the present invention may be implemented entirely by logic programming method steps, in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc., except for implementing the system, apparatus and their respective modules provided by the present invention in a purely computer readable program code. Therefore, the system, the apparatus, and the respective modules thereof provided by the present invention may be regarded as one hardware component, and the modules included therein for implementing various programs may also be regarded as structures within the hardware component; modules for implementing various functions may also be regarded as being either software programs for implementing the methods or structures within hardware components.
Furthermore, all or part of the steps in implementing the methods of the embodiments described above may be implemented by a program, where the program is stored in a storage medium and includes several instructions for causing a single-chip microcomputer, chip or processor (processor) to perform all or part of the steps in the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In addition, the various embodiments of the present invention may be combined arbitrarily, provided the combination does not depart from the concept of the embodiments; such combinations should likewise be regarded as disclosed by the present invention.

Claims (10)

1. A visual positioning method based on a 2D Marker is characterized by comprising the following steps:
step S100: fixing the 2D Marker beside the position of the target object, where the marking features of the 2D Marker comprise: a rectangular frame feature with clear color contrast and a plurality of circular features of different radii, the circular features being clustered inside the frame near one and the same corner point;
step S200: acquiring a 2D Marker image with a hand-eye-calibrated camera, performing threshold segmentation to obtain a binary image, performing contour search to obtain all contours in the image, performing corner detection and circular fiducial detection respectively to identify the coordinates of the inner and outer corner points of the frame feature and the radius and center coordinates of each circular feature, and ordering the points taking the corner point closest to the circle centers as the starting point;
step S300: after sub-pixelating each point, calculating the pose of the 2D Marker plane, which comprises: from the known physical dimensions of the 2D Marker, establishing a spatial coordinate system Ow with the 2D Marker as its XOY plane; combining each corner point and center point obtained in step S200 with the Ow coordinate system to establish matching key points in three-dimensional space; and, with the camera intrinsics known, calculating the plane pose of the 2D Marker through a PnP algorithm;
step S400: teaching the conversion relation between the target object and the 2D Marker on the basis of the plane pose obtained in step S300, so as to obtain the pose of the target object in the base coordinate system (a code sketch covering steps S200 to S400 follows this claim).
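As an illustrative, non-authoritative sketch of how steps S200 to S400 might be realized with OpenCV, assuming a calibrated camera with intrinsics K and distortion coefficients dist; the helper detect_and_order_points and all parameter values are hypothetical stand-ins for the patent's detection and ordering steps, not part of the claims:

```python
import cv2
import numpy as np

def estimate_marker_pose(image, object_points, K, dist):
    """Steps S200-S300: segment, find contours, detect and order key
    points, refine them to sub-pixel accuracy, then solve PnP."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Threshold segmentation to a binary image
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    # Hypothetical helper: corner + circular-fiducial detection and
    # point ordering from the corner nearest the circle centers;
    # assumed to return an (N, 1, 2) float32 array matching
    # object_points row for row.
    image_points = detect_and_order_points(contours)
    # Sub-pixel refinement of the ordered key points
    cv2.cornerSubPix(gray, image_points, (5, 5), (-1, -1),
                     (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER,
                      30, 0.01))
    # object_points: (N, 3) key points on the marker's XOY plane Ow
    # (z = 0), built from the known physical size of the 2D Marker
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    T_cam_marker = np.eye(4)
    T_cam_marker[:3, :3], T_cam_marker[:3, 3] = R, tvec.ravel()
    return T_cam_marker

# Step S400, given a hand-eye result T_base_cam and a taught, fixed
# transform T_marker_object between the marker and the target:
#     T_base_object = T_base_cam @ T_cam_marker @ T_marker_object
```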
2. The 2D Marker-based visual positioning method according to claim 1, further comprising an image denoising step in step S200: performing noise reduction on the acquired 2D Marker image using a Gaussian smoothing algorithm.
3. The 2D Marker-based visual positioning method according to claim 1, wherein in step S200 the acquired 2D Marker image is adaptively thresholded using the maximum inter-class variance (Otsu) method to obtain the binary image.
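A minimal preprocessing sketch for claims 2 and 3 together, assuming OpenCV; the 5x5 kernel size is an illustrative choice, not taken from the patent:

```python
import cv2

def preprocess(gray):
    # Claim 2: Gaussian smoothing to suppress image noise
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)
    # Claim 3: adaptive threshold chosen by maximum inter-class
    # variance (Otsu); t is the threshold value Otsu selected
    t, binary = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    return t, binary
```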
4. The 2D Marker-based visual positioning method according to claim 1, wherein the step of obtaining all contours in the image for corner detection in step S200 comprises: performing polygon fitting on each contour: connect the first and last points of the contour curve with a straight line, compute the distance from every contour point to that line, and find the maximum distance value d_max; define a tolerance D; if d_max < D, discard all intermediate contour points between the two end points; if d_max > D, retain the coordinate point corresponding to d_max and split the contour into two sub-contours at that point; repeat the procedure on the sub-contours, the retained coordinate points finally serving as the vertices of the fitted polygon; then screen quadrilaterals from the fitted polygons, calculate the deviation of each angle from a right angle, and regard a quadrilateral as the rectangular frame when the deviations satisfy a preset condition, each vertex of the polygon being a corner point.
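The splitting procedure in claim 4 reads as the classic Ramer-Douglas-Peucker fit; a NumPy sketch under that reading, where D is the tolerance named in the claim and the 10-degree bound in the rectangle screen is an illustrative threshold:

```python
import numpy as np

def fit_polygon(points, D):
    """Keep only contour points farther than D from the chord joining
    the first and last points; recurse on the two sub-contours."""
    p0, p1 = points[0].astype(float), points[-1].astype(float)
    chord = p1 - p0
    norm = np.linalg.norm(chord)
    if norm == 0:
        d = np.linalg.norm(points - p0, axis=1)
    else:
        # perpendicular distance of each point to the chord
        d = np.abs(chord[0] * (points[:, 1] - p0[1])
                   - chord[1] * (points[:, 0] - p0[0])) / norm
    i = int(np.argmax(d))
    if d[i] < D:                      # d_max < D: drop interior points
        return [p0, p1]
    left = fit_polygon(points[:i + 1], D)  # d_max > D: split at point i
    right = fit_polygon(points[i:], D)
    return left[:-1] + right

def is_rectangle(quad, max_dev_deg=10.0):
    """Screen a 4-vertex fit: every angle within max_dev_deg of 90 deg."""
    quad = np.asarray(quad, dtype=float)
    for k in range(4):
        u = quad[k - 1] - quad[k]
        v = quad[(k + 1) % 4] - quad[k]
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if abs(ang - 90.0) > max_dev_deg:
            return False
    return True
```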
5. The 2D Marker-based visual positioning method according to claim 4, wherein the circular fiducial detection step comprises: performing convexity judgment, probability judgment, and roundness judgment on the fitted polygons so as to locate the corresponding contours in the image, thereby obtaining the radius and center coordinates of each circular feature.
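A sketch of the convexity and roundness screens, assuming OpenCV; the circularity measure 4*pi*A/P^2 tends to 1 for a perfect circle, the 0.85 cutoff is an illustrative value, and the claim's "probability judgment" is not specified further here, so it is omitted:

```python
import cv2
import numpy as np

def find_circle_features(contours, min_circularity=0.85):
    found = []
    for c in contours:
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, True)
        if area <= 0 or perim <= 0:
            continue
        # convexity judgment on a light polygonal approximation
        if not cv2.isContourConvex(cv2.approxPolyDP(c, 2, True)):
            continue
        # roundness judgment: 4*pi*A / P^2 equals 1 for an ideal circle
        if 4.0 * np.pi * area / (perim * perim) < min_circularity:
            continue
        (x, y), r = cv2.minEnclosingCircle(c)
        found.append(((x, y), r))  # center coordinates and radius
    return found
```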
6. The 2D Marker-based visual positioning method according to claim 1, wherein the sub-pixelation processing step comprises:

Let $q$ be the sub-pixel point to be solved, let $p_i$ be a point with known coordinates in the neighborhood of $q$, and let $G_i$ be the gray gradient at $p_i$. If $p_i$ lies on a pixel edge, the gradient direction at $p_i$ is perpendicular to the edge direction; hence, when the vector $q - p_i$ points along the edge direction, the dot product of that vector with the gradient vector at $p_i$ is zero:

$$G_i^{T}(q - p_i) = 0$$

Expanding and solving the equation gives:

$$G_i^{T} q = G_i^{T} p_i$$

A plurality of neighborhood points $p_i$ are collected, and each is assigned a weight $w_i$ according to its distance from the window center; a system of equations is constructed as above and solved for $q$ by least squares:

$$q = \Big(\sum_i w_i \, G_i G_i^{T}\Big)^{-1} \sum_i w_i \, G_i G_i^{T} \, p_i$$
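Under the formulation just given, a minimal NumPy sketch of the weighted least-squares solve; the gradient images grad_x and grad_y, the Gaussian weight, and the window half-size win are illustrative assumptions, since the claim fixes only the normal equations:

```python
import numpy as np

def refine_corner(grad_x, grad_y, cx, cy, win=5):
    """Solve q = (sum w_i G_i G_i^T)^-1 * sum w_i G_i G_i^T p_i over a
    (2*win+1)^2 neighborhood of the integer corner estimate (cx, cy)."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for y in range(cy - win, cy + win + 1):
        for x in range(cx - win, cx + win + 1):
            g = np.array([grad_x[y, x], grad_y[y, x]], dtype=float)
            # weight w_i decays with distance from the window center
            w = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * win ** 2))
            M = w * np.outer(g, g)           # w_i * G_i G_i^T
            A += M
            b += M @ np.array([x, y], dtype=float)
    return np.linalg.solve(A, b)             # the sub-pixel point q
```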
7. A 2D Marker-based visual positioning system for identifying a 2D Marker as claimed in claim 1, comprising:
a storage unit for storing a program comprising the steps of the 2D Marker-based visual positioning method according to any one of claims 1 to 6, for timely retrieval and execution by a control unit; and a camera, a robotic arm, a processing unit, and an information output unit;
wherein the camera is mounted at the end of the robotic arm, and the control unit is configured to coordinate the following:
after the camera has been hand-eye calibrated, driving the camera by the robotic arm to acquire a 2D Marker image;
the processing unit is configured to perform image denoising on the 2D Marker image, carry out threshold segmentation to obtain a binary image, then perform contour search to obtain all contours in the image, perform corner detection and circular fiducial detection respectively to identify the coordinates of the inner and outer corner points of the frame feature and the radius and center coordinates of each circular feature, and order the points taking the corner point closest to the circle centers as the starting point; then, after sub-pixelating each point, calculate the 2D Marker plane pose as follows: from the known physical size of the 2D Marker, establish a spatial coordinate system Ow with the 2D Marker as its XOY plane; combine each previously acquired corner point and center point with the Ow coordinate system to establish matching key points in three-dimensional space; and, with the camera intrinsics known, calculate the plane pose of the 2D Marker through a PnP algorithm, so that the pose of the target in the robotic-arm base coordinate system is obtained by teaching the conversion relation between the target and the 2D Marker;
and the information output unit is configured to output the pose of the target object in the robotic-arm base coordinate system.
8. The 2D Marker-based visual positioning system according to claim 7, wherein the step of obtaining all contours in the image for corner detection (step S200) comprises: performing polygon fitting on each contour: connect the first and last points of the contour curve with a straight line, compute the distance from every contour point to that line, and find the maximum distance value d_max; define a tolerance D; if d_max < D, discard all intermediate contour points between the two end points; if d_max > D, retain the coordinate point corresponding to d_max and split the contour into two sub-contours at that point; repeat the procedure on the sub-contours, the retained coordinate points finally serving as the vertices of the fitted polygon; then screen quadrilaterals from the fitted polygons, calculate the deviation of each angle from a right angle, and regard a quadrilateral as the rectangular frame when the deviations satisfy a preset condition, each vertex of the polygon being a corner point.
9. The 2D Marker-based visual positioning system of claim 8, wherein the circular fiducial detection step comprises: performing convexity judgment, probability judgment, and roundness judgment on the fitted polygons so as to locate the corresponding contours in the image, thereby obtaining the radius and center coordinates of each circular feature.
10. A compound robot, comprising: a vision grabbing unit and an autonomous mobile robot, characterized in that the vision grabbing unit is a 2D Marker-based visual positioning system according to any one of claims 7 to 9.
CN202211463733.XA 2022-11-17 2022-11-17 Visual positioning method and system based on 2D Marker and compound robot Active CN115609591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211463733.XA CN115609591B (en) 2022-11-17 2022-11-17 Visual positioning method and system based on 2D Marker and compound robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211463733.XA CN115609591B (en) 2022-11-17 2022-11-17 Visual positioning method and system based on 2D Marker and compound robot

Publications (2)

Publication Number Publication Date
CN115609591A CN115609591A (en) 2023-01-17
CN115609591B (en) 2023-04-28

Family

ID=84877747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211463733.XA Active CN115609591B (en) 2022-11-17 2022-11-17 Visual positioning method and system based on 2D Marker and compound robot

Country Status (1)

Country Link
CN (1) CN115609591B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116060269B (en) * 2022-12-08 2024-06-14 中晟华越(郑州)智能科技有限公司 Spraying method for loop-shaped product
CN116000942B (en) * 2023-03-22 2023-06-27 深圳市大族机器人有限公司 Semiconductor manufacturing system based on multi-axis cooperative robot
CN116245877B (en) * 2023-05-08 2023-11-03 济南达宝文汽车设备工程有限公司 Material frame detection method and system based on machine vision
CN116423526B (en) * 2023-06-12 2023-09-19 上海仙工智能科技有限公司 Automatic calibration method and system for mechanical arm tool coordinates and storage medium
CN116766183B (en) * 2023-06-15 2023-12-26 山东中清智能科技股份有限公司 Mechanical arm control method and device based on visual image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108562274A (en) * 2018-04-20 2018-09-21 南京邮电大学 A kind of noncooperative target pose measuring method based on marker
CN109363771A (en) * 2018-12-06 2019-02-22 安徽埃克索医疗机器人有限公司 The fracture of neck of femur Multiple tunnel of 2D planning information plants nail positioning system in a kind of fusion
CN111612794A (en) * 2020-04-15 2020-09-01 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Multi-2D vision-based high-precision three-dimensional pose estimation method and system for parts
CN113084808A (en) * 2021-04-02 2021-07-09 上海智能制造功能平台有限公司 Monocular vision-based 2D plane grabbing method for mobile mechanical arm
WO2022034032A1 (en) * 2020-08-11 2022-02-17 Ocado Innovation Limited A selector for robot-retrievable items

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017092809A1 (en) * 2015-12-03 2017-06-08 Abb Schweiz Ag A method for teaching an industrial robot to pick parts

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108562274A (en) * 2018-04-20 2018-09-21 南京邮电大学 A kind of noncooperative target pose measuring method based on marker
CN109363771A (en) * 2018-12-06 2019-02-22 安徽埃克索医疗机器人有限公司 The fracture of neck of femur Multiple tunnel of 2D planning information plants nail positioning system in a kind of fusion
CN111612794A (en) * 2020-04-15 2020-09-01 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Multi-2D vision-based high-precision three-dimensional pose estimation method and system for parts
WO2022034032A1 (en) * 2020-08-11 2022-02-17 Ocado Innovation Limited A selector for robot-retrievable items
CN113084808A (en) * 2021-04-02 2021-07-09 上海智能制造功能平台有限公司 Monocular vision-based 2D plane grabbing method for mobile mechanical arm

Also Published As

Publication number Publication date
CN115609591A (en) 2023-01-17

Similar Documents

Publication Publication Date Title
CN115609591B (en) Visual positioning method and system based on 2D Marker and compound robot
Romero-Ramirez et al. Speeded up detection of squared fiducial markers
CN111775152B (en) Method and system for guiding mechanical arm to grab scattered stacked workpieces based on three-dimensional measurement
CN109785317B (en) Automatic pile up neatly truss robot&#39;s vision system
Azad et al. Stereo-based 6d object localization for grasping with humanoid robot systems
CN111627072A (en) Method and device for calibrating multiple sensors and storage medium
CN114494045A (en) Large-scale straight gear geometric parameter measuring system and method based on machine vision
CN112686950B (en) Pose estimation method, pose estimation device, terminal equipment and computer readable storage medium
CN112560704B (en) Visual identification method and system for multi-feature fusion
CN112132907A (en) Camera calibration method and device, electronic equipment and storage medium
CN104460505A (en) Industrial robot relative pose estimation method
CN111964680A (en) Real-time positioning method of inspection robot
CN113160075A (en) Processing method and system for Apriltag visual positioning, wall-climbing robot and storage medium
CN114888805B (en) Robot vision automatic acquisition method and system for character patterns of tire mold
CN113643380A (en) Mechanical arm guiding method based on monocular camera vision target positioning
CN116843748B (en) Remote two-dimensional code and object space pose acquisition method and system thereof
Li et al. Vision-based target detection and positioning approach for underwater robots
CN114037595A (en) Image data processing method, image data processing device, electronic equipment and storage medium
CN114092428A (en) Image data processing method, image data processing device, electronic equipment and storage medium
WO2024021803A1 (en) Mark hole positioning method and apparatus, assembly device, and storage medium
CN115112098B (en) Monocular vision one-dimensional two-dimensional measurement method
CN116594351A (en) Numerical control machining unit system based on machine vision
CN113807116A (en) Robot six-dimensional pose positioning method based on two-dimensional code
WO2023082417A1 (en) Grabbing point information obtaining method and apparatus, electronic device, and storage medium
CN115953465A (en) Three-dimensional visual random grabbing processing method based on modular robot training platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A 2D Marker based visual positioning method and system, composite robot

Effective date of registration: 20230828

Granted publication date: 20230428

Pledgee: Bank of Communications Ltd. Shanghai New District Branch

Pledgor: Shanghai Xiangong Intelligent Technology Co.,Ltd.

Registration number: Y2023310000491
