CN113129383A - Hand-eye calibration method and device, communication equipment and storage medium

Info

Publication number
CN113129383A
Authority
CN
China
Prior art keywords
camera
global
photo
mechanical arm
coordinate system
Prior art date
Legal status
Pending
Application number
CN202110276024.XA
Other languages
Chinese (zh)
Inventor
田璐璐
苏世龙
樊则森
雷俊
丁沛然
马栓鹏
Current Assignee
China Construction Science and Technology Group Co Ltd
Original Assignee
China Construction Science and Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by China Construction Science and Technology Group Co Ltd filed Critical China Construction Science and Technology Group Co Ltd
Priority to CN202110276024.XA
Publication of CN113129383A
Legal status: Pending


Classifications

    • G06T 7/85: Stereo camera calibration (analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration)
    • G06N 3/02: Neural networks (computing arrangements based on biological models)
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/0004: Industrial image inspection (inspection of images, e.g. flaw detection)
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V 20/10: Terrestrial scenes (scenes; scene-specific elements)
    • G06T 2200/32: Indexing scheme involving image mosaicing
    • G06T 2207/10004: Still image; photographic image (image acquisition modality)
    • G06T 2207/30108: Industrial image inspection (subject of image; context of image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The application is applicable to the field of computer technology and provides a hand-eye calibration method, a hand-eye calibration device, a communication device, and a storage medium. The hand-eye calibration method comprises the following steps: acquiring a photo group shot by a camera group, wherein the camera group comprises at least two cameras and is used for cooperatively shooting a target view area, and the photo group comprises the photos respectively shot by each camera in the camera group; splicing the photos in the photo group to obtain a global photo corresponding to the target view area; and calibrating the corresponding relation between the pixel coordinate system of the global photo and the mechanical arm coordinate system according to the global photo to obtain a hand-eye calibration result. In this way, hand-eye calibration based on multiple cameras can be achieved conveniently and accurately.

Description

Hand-eye calibration method and device, communication equipment and storage medium
Technical Field
The present application belongs to the field of computer technology, and in particular relates to a hand-eye calibration method and device, a communication device, and a storage medium.
Background
As the degree of automation of manufacturing and industrial production lines increases, production steps that used to be completed manually, such as sorting, feeding, palletizing, and quality inspection, are being replaced on factory lines by mechanical arms working with cameras. Generally, a photo shot by a camera is first acquired and recognized to complete object positioning; then, according to the spatial coordinate relation between the camera and the mechanical arm, the pixel coordinates of the object in the photo are converted into mechanical arm coordinates in the mechanical arm coordinate system, so as to instruct the mechanical arm to grab the object according to those coordinates.
In order to grab an object accurately, the visual coordinates of the camera (specifically, the pixel coordinates in a photo shot by the camera) and the mechanical arm coordinates need to be calibrated in advance; this process is called hand-eye calibration. Existing hand-eye calibration methods generally calibrate the visual coordinates of a single camera against the mechanical arm coordinates, so it is difficult to perform hand-eye calibration conveniently and accurately for a vision system based on multiple cameras.
Disclosure of Invention
In view of this, embodiments of the present application provide a hand-eye calibration method, a hand-eye calibration device, a communication device, and a storage medium, so as to solve the problem in the prior art of how to conveniently and accurately implement multi-camera-based hand-eye calibration.
A first aspect of an embodiment of the present application provides a hand-eye calibration method, including:
acquiring a photo group shot by a camera group, wherein the camera group comprises at least two cameras and is used for cooperatively shooting a target view area; the photo group comprises the photos respectively shot by each camera in the camera group;
splicing all the photos in the photo group to obtain a global photo corresponding to the target view area;
and calibrating the corresponding relation between the pixel coordinate system of the global photo and the mechanical arm coordinate system according to the global photo to obtain a hand-eye calibration result.
A second aspect of the embodiments of the present application provides a hand-eye calibration device, including:
the photo group acquisition unit is used for acquiring a photo group shot by the camera group, wherein the camera group comprises at least two cameras and is used for cooperatively shooting a target view area; the photo group comprises the photos respectively shot by each camera in the camera group;
the splicing unit is used for splicing the photos in the photo group to obtain a global photo corresponding to the target view area;
and the calibration unit is used for calibrating the corresponding relation between the pixel coordinate system of the global photo and the mechanical arm coordinate system according to the global photo to obtain a hand-eye calibration result.
A third aspect of embodiments of the present application provides a communication device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, causes the communication device to implement the steps of the hand-eye calibration method.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes a communication device to implement the steps of the hand-eye calibration method described above.
A fifth aspect of embodiments of the present application provides a computer program product, which, when running on a communication device, causes the communication device to execute the hand-eye calibration method according to the first aspect.
Compared with the prior art, the embodiments of the present application have the following advantages: for a camera group comprising at least two cameras, a photo group shot by the camera group can be acquired, the photos in the photo group can be spliced to obtain a corresponding global photo, and the corresponding relation between the pixel coordinate system of the global photo and the mechanical arm coordinate system can then be calibrated according to the global photo to obtain a hand-eye calibration result. Because hand-eye calibration can be carried out based on a global photo spliced from the photos shot by multiple cameras, hand-eye calibration based on multiple cameras can be realized conveniently and effectively, whereas the prior art is limited to hand-eye calibration between a single camera and a mechanical arm; the actual requirements of industrial production can thus be met.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below.
Fig. 1 is a schematic view of an application scenario corresponding to a hand-eye calibration method provided in an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating an implementation of a hand-eye calibration method according to an embodiment of the present application;
FIG. 3 is an overall view of a photo group provided by an embodiment of the present application before splicing;
FIG. 4 is a schematic diagram of a global photo provided by an embodiment of the present application;
FIG. 5 is an exemplary illustration of a preset calibration object provided in an embodiment of the present application;
fig. 6 is a schematic view of an application scenario corresponding to another hand-eye calibration method provided in the embodiment of the present application;
FIG. 7 is a schematic view of a hand-eye calibration device provided in an embodiment of the present application;
fig. 8 is a schematic diagram of a communication device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
At present, most vision frames on industrial production lines are deployed with only one camera, whose field of view is usually limited; the size of the objects that can be recognized is therefore limited, and large-sized objects cannot be recognized accurately. In order to recognize large-sized objects, multiple cameras can be deployed on the vision frame, which expands the recognition field of view. However, current hand-eye calibration methods are generally limited to calibration between a single camera and a mechanical arm. Even where multiple cameras are calibrated against a mechanical arm, one camera is selected from the multiple cameras, and after that camera and the mechanical arm are calibrated, the relative positions of the other cameras with respect to the calibrated camera are determined. That is, even if there are multiple cameras in the vision system, the calibration parameters of the cameras are determined on the basis of calibration between a single camera and the mechanical arm; synchronous calibration cannot be achieved, the process is cumbersome to operate, and certain errors may exist in determining the relative positions between the multiple cameras. The prior art therefore has the technical problem that hand-eye calibration based on multiple cameras is difficult to realize conveniently and accurately. In order to solve this technical problem, embodiments of the present application provide a hand-eye calibration method and device, a communication device, and a storage medium: for a camera group comprising at least two cameras, a photo group shot by the camera group can be acquired, the photos in the photo group can be spliced to obtain a corresponding global photo, and the corresponding relation between the pixel coordinate system of the global photo and the mechanical arm coordinate system can then be calibrated according to the global photo to obtain a hand-eye calibration result. Because hand-eye calibration can be carried out based on a global photo spliced from photos shot by multiple cameras, hand-eye calibration based on multiple cameras can be realized conveniently and effectively, whereas the prior art is limited to hand-eye calibration between a single camera and a mechanical arm; the actual requirements of industrial production can thus be met.
Fig. 1 shows an application scenario corresponding to the hand-eye calibration method provided by an embodiment of the present application. A visual frame is deployed over a conveyor chain used for conveying objects, and the visual frame carries at least two cameras (the visual frame shown in fig. 1 carries 4 cameras), forming a camera group. The cameras in the camera group are arranged at different positions in the same plane on top of the visual frame (the plane being parallel to the plane of the conveyor chain) and cooperatively shoot the view area inside the visual frame, i.e. the target view area. Furthermore, an independent fill light can be arranged in the visual frame to ensure sufficient brightness for shooting.
The first embodiment is as follows:
Fig. 2 is a schematic flowchart of the hand-eye calibration method provided in an embodiment of the present application. The execution subject of the hand-eye calibration method may be a communication device, such as a computer or a server. The method is detailed as follows:
in S201, acquiring a group of pictures taken by a camera group, wherein the camera group at least comprises two cameras for cooperatively taking a target view area; the photo group comprises photos which are respectively shot by each camera in the camera group.
The camera group in the embodiment of the present application is used for cooperatively shooting a target view area, such as the inner area of the visual frame shown in fig. 1, and comprises at least two cameras. Cooperatively shooting the target view area means that different cameras in the camera group are each responsible for shooting sub-areas at different positions of the target view area. Furthermore, an overlapping area exists between adjacent sub-areas; this overlapping area gives the camera group a common-view area when shooting, and the information of the common-view area facilitates the subsequent photo splicing. The photo group in the embodiment of the application specifically comprises the photos shot by each camera in the camera group. For example, if the camera group includes four cameras, namely camera A, camera B, camera C, and camera D, the photo group includes four photos, namely photo A, photo B, photo C, and photo D.
Specifically, in the embodiment of the application, each camera in the camera group may be synchronously triggered to shoot by an infrared sensor installed at the entrance of the target view area, and the photo generated by each camera is acquired to obtain the photo group. In another embodiment, the photo group shot by the camera group may be stored in a designated memory space, and the photo group is obtained by accessing that memory space.
Further, in order to ensure consistent shooting quality across the photos in the photo group, software and hardware parameters such as exposure level and signal debouncing are kept consistent across the cameras in the camera group in the embodiment of the application. Furthermore, in order to ensure that the multiple cameras in the camera group shoot synchronously, the cameras can be started synchronously by an external hard trigger, and the photo from each camera can be stored in a multi-thread picture container; this ensures that the photos in the photo group are generated and stored at almost the same moment (within microseconds), which in turn ensures calibration accuracy. In one example, if the camera group uses a multi-thread picture container for storage, the running speed of the conveyor chain in the application scenario shown in fig. 1 is limited to a maximum of 45000 rpm in actual use; that is, when the running speed of the conveyor chain is less than or equal to 45000 rpm, no cross ghosting occurs in the photos, while at speeds above 45000 rpm cross ghosting occurs.
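For illustration, the synchronized shooting described above can be sketched as follows. This is a minimal Python sketch, not the patent's implementation: the camera objects and their grab() method are hypothetical placeholders for an actual camera SDK, the external hard trigger is modeled with a threading.Event, and the multi-thread picture container is modeled with a thread-safe queue.

```python
# Minimal sketch of synchronized multi-camera capture into a multi-thread
# picture container. camera.grab() is a hypothetical SDK call; the external
# hard trigger is modeled as a threading.Event shared by all capture threads.
import threading
import queue

def shoot_photo_group(cameras):
    container = queue.Queue()          # multi-thread picture container
    trigger = threading.Event()        # stands in for the external hard trigger

    def capture(cam_id, camera):
        trigger.wait()                 # every camera waits on the same trigger
        container.put((cam_id, camera.grab()))  # hypothetical SDK call

    workers = [threading.Thread(target=capture, args=(i, c))
               for i, c in enumerate(cameras)]
    for w in workers:
        w.start()
    trigger.set()                      # release all cameras at the same moment
    for w in workers:
        w.join()

    photos = dict(container.queue)     # drain: cam_id -> photo
    return [photos[i] for i in sorted(photos)]
```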
In S202, the photos in the photo group are spliced to obtain a global photo corresponding to the target view area.
In the embodiment of the application, splicing the photos in the photo group means splicing the photos corresponding to different sub-areas of the target view area and fusing the images of the common-view areas, thereby obtaining the global photo corresponding to the target view area; the global photo is a photo that completely presents the image information of the target view area. Specifically, during calibration, a plurality of feature pictures are placed in the common-view areas between the cameras, so that each photo in the photo group shot by the camera group contains information of the feature pictures; taking the information of the feature pictures as a reference, the photos can then be spliced.
For example, if the four photos shot by the four cameras of the visual frame shown in fig. 1 are arranged according to their positional relationship, the overall view of the four photos before splicing (photo A, photo B, photo C, and photo D) is as shown in fig. 3, where the area of each photo is divided into two parts by the dotted lines in fig. 3, and the part containing the information of the feature pictures is the common-view area. According to the information of the feature pictures in the common-view areas, the four photos shown in fig. 3 are spliced and the images of the common-view areas are fused, yielding the spliced global photo shown in fig. 4. In one embodiment, for each photo in the photo group, the common-view area occupies at least one third of the total area of the photo, which ensures the subsequent splicing effect.
In S203, the corresponding relation between the pixel coordinate system of the global photo and the mechanical arm coordinate system is calibrated according to the global photo, and a hand-eye calibration result is obtained.
After the global photo is obtained, the corresponding relation between the pixel coordinate system of the global photo and the mechanical arm coordinate system can be calibrated based on the global photo and taken as the hand-eye calibration result, so that once the pixel coordinates of a recognized object to be grabbed are determined, the corresponding mechanical arm coordinates can be determined from this corresponding relation and the object can be grabbed accurately. In one embodiment, a plurality of calibration points may be preset in the target view area, and a coordinate transformation equation may be established and solved from the pixel coordinates of the calibration points in the global photo and the mechanical arm coordinates output when the mechanical arm moves to the calibration points, yielding the corresponding relation between the pixel coordinate system of the global photo and the mechanical arm coordinate system.
Optionally, the step S202 specifically includes:
S20201: performing feature point matching and image registration on every two adjacent photos in the photo group, and determining the transformation matrices between adjacent photos in the photo group;
S20202: splicing the photos in the photo group according to the transformation matrices between adjacent photos in the photo group to obtain the global photo.
In the embodiment of the application, a feature point is a point where the image gray value changes sharply, or a point of relatively large curvature on an image edge; intuitively, feature points are points that can be extracted robustly from each photo when the same scene is shot from different angles. Illustratively, the feature points in the embodiment of the application are the pixel points corresponding to the feature pictures contained in the common-view areas shown in fig. 3.
In S20201, feature points may be extracted from the common-view area of each photo in the photo group. Feature point extraction may be implemented by a commonly used algorithm, such as the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, or the Oriented FAST and Rotated BRIEF (ORB) algorithm. After the feature points are extracted, the feature points of each two adjacent photos in the photo group are matched to obtain feature point pairs; image registration is then carried out according to the feature point pairs, and the transformation matrix between the two photos is determined. For each two adjacent photos in the photo group, the transformation matrix between them can be determined through this feature point matching and image registration processing, and finally the transformation matrices between all adjacent photos in the photo group are determined.
Specifically, the process of feature point matching and image registration for two adjacent photos can be realized by the following steps:
A1: matching the feature points extracted from the two photos to obtain a specified number of feature point pairs. Specifically, feature point matching means matching the extracted feature points using feature descriptors, so that the same feature points extracted from different photos are matched with each other. For example, in the SIFT algorithm, each feature point can be described by a 128-dimensional feature vector, and whether two feature points from two photos match is determined by comparing their feature vectors, i.e., calculating the Euclidean distance between them. After feature point matching yields preliminary feature point pairs, the pairs may be further screened to obtain a specified number (e.g., 16 pairs) of valid feature point pairs. During screening, feature point pairs of poor quality can be removed, or only one pair can be kept among several pairs that lie close to one another.
A2: carrying out image registration according to the specified number of feature point pairs to obtain the transformation matrix between the two photos. After the feature point pairs are obtained, the relative position of the images needs to be derived from them; this process is called image registration. Specifically, in the embodiment of the application the relative position between two photos is represented by the transformation matrix between them, which can be obtained from the following formula:
$$\begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = H \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix}$$

wherein $(u_1, v_1, 1)^T$ is the homogeneous representation of the pixel coordinates of the first photograph, and $(u_2, v_2, 1)^T$ is the homogeneous representation of the pixel coordinates of the second photograph; the first photograph and the second photograph are the two photographs described above. $H$ is the transformation matrix, also called the homography matrix; it is a $3 \times 3$ matrix with $h_{3,3} = 1$, so the transformation matrix is an eight-parameter matrix. Substituting the pixel coordinates of the feature points of each feature point pair into the above formula and solving the resulting equations yields the transformation matrix between the two photos, realizing registration between the images. Specifically, since each feature point has two pixel coordinates, x and y, four feature point pairs are needed to solve for the eight-parameter matrix; meanwhile, considering that the feature point pairs obtained by feature-vector matching may contain mismatches, a Random Sample Consensus (RANSAC) algorithm can be used to obtain an optimal transformation matrix.
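For illustration, steps A1 and A2 can be sketched with OpenCV (an assumed library; the patent does not name one). ORB features are used here for concreteness, but SIFT or SURF work the same way; RANSAC rejects mismatched pairs when solving the eight-parameter homography:

```python
# A sketch of A1 (descriptor matching and screening) and A2 (registration
# via a RANSAC-estimated homography) under an OpenCV assumption.
import cv2
import numpy as np

def estimate_homography(photo1, photo2, max_pairs=16):
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(photo1, None)
    kp2, des2 = orb.detectAndCompute(photo2, None)

    # A1: match descriptors, then screen to a specified number of pairs
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    matches = matches[:max_pairs]          # keep only the best pairs

    # A2: solve the eight-parameter homography H (h33 = 1) with RANSAC
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H   # maps pixel coordinates of photo2 into photo1
```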
In S20202, the transformation matrices between all adjacent photos in the photo group are obtained through the processing of step S20201, and the photos in the photo group can be position-transformed in turn according to these transformation matrices and then spliced to obtain the global photo. In one embodiment, one photo in the photo group may be designated as the reference photo, and the other photos in the photo group are transformed one by one into the pixel coordinate system of the reference photo for splicing according to the transformation matrices between adjacent photos, thereby obtaining the global photo.
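A minimal sketch of this reference-photo splicing, under the same OpenCV and NumPy assumptions; canvas sizing and the composition rule are deliberately simplified:

```python
# Warp a neighbouring photo into the reference photo's pixel coordinate
# system using its transformation matrix H, then lay it onto a global canvas.
import cv2
import numpy as np

def stitch_into_reference(reference, neighbour, H, canvas_size):
    width, height = canvas_size
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    canvas[:reference.shape[0], :reference.shape[1]] = reference
    warped = cv2.warpPerspective(neighbour, H, canvas_size)
    # naive composition: take warped pixels only where the canvas is empty
    empty = canvas.sum(axis=2) == 0
    canvas[empty] = warped[empty]
    return canvas
```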
Further, in order to reduce error accumulation in the photo splicing process, the estimated fixed matrices can be optimized jointly using bundle adjustment. In addition, after the global photo is obtained by splicing, it can be corrected horizontally or vertically according to the actual splicing effect.
In the embodiment of the application, the transformation matrices between adjacent photos in the photo group can be determined accurately through feature point matching and image registration, so that the photos can be spliced accurately according to these transformation matrices to obtain an accurate global photo; this provides an accurate basis for the subsequent calibration process and further improves the accuracy of the hand-eye calibration method.
Optionally, the step S20202 includes:
B1: performing projection transformation on each photo in the photo group according to the transformation matrices between adjacent photos in the photo group to obtain a preliminary spliced photo;
B2: performing seam fusion processing on the preliminary spliced photo to obtain the global photo.
In B1, after the transformation matrices between adjacent photos in the photo group are obtained, the relative positional relationship of the cameras in the camera group is known. If the photos were spliced directly according to these transformation matrices, the consistency of the fields of view might be damaged, making the spliced global photo incoherent; therefore each photo in the photo group is first subjected to projection transformation according to the transformation matrices, so that all photos are mapped consistently, and the preliminary spliced photo is obtained. Further, if obvious light-dark variation exists in the preliminary spliced photo, exposure compensation processing can be carried out on it so that the overall brightness of the processed preliminary spliced photo is consistent.
In B2, in the preliminary spliced photo obtained by projection transformation, obvious transition traces may exist in the regions corresponding to the common-view areas of the photos before splicing (i.e., the overlapping regions of the photos); these transition traces are the splicing seams. In the embodiment of the application, the process of applying a fusion algorithm to the pixels near a splicing seam to remove feature points that appear repeatedly in the preliminary spliced photo is called seam fusion processing, also called pixel seam beautifying. Through seam fusion processing, defects such as image dislocation and artifacts in the global photo can be removed effectively, yielding a global photo with a low feature point repetition rate and high splicing quality.
Further, the above-mentioned seam fusion processing may consist of a seam determination algorithm and a fusion algorithm. The seam determination algorithm finds the splicing seam in the preliminary spliced photo, for example by a point-by-point method, dynamic programming, or graph cut; illustratively, determining the seam by the point-by-point method is relatively simple and can improve processing efficiency. The fusion algorithm then performs fusion calculation on the pixels near the splicing seam once the seam has been determined, and may be, for example, a feathering fusion algorithm or a Laplacian fusion algorithm. Feathering fusion computes a weight for each position near the seam according to its distance from the seam and performs weighted fusion; the Laplacian fusion algorithm effectively decomposes the images into components of different frequencies and fuses them frequency by frequency.
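As an illustration of feathering fusion, the following sketch blends two photos across a vertical splicing seam whose position is assumed to have been found already (e.g., by the point-by-point method); the weights fall off linearly with distance from the seam:

```python
# Feathering fusion across a vertical seam at column seam_x: within a band
# of the given width, the left photo's weight drops linearly from 1 to 0.
import numpy as np

def feather_blend(left_img, right_img, seam_x, width=32):
    out = left_img.copy()
    out[:, seam_x:] = right_img[:, seam_x:]
    x0, x1 = max(seam_x - width, 0), min(seam_x + width, out.shape[1])
    w = np.linspace(1.0, 0.0, x1 - x0)[None, :, None]   # per-column weights
    blend = w * left_img[:, x0:x1] + (1 - w) * right_img[:, x0:x1]
    out[:, x0:x1] = blend.astype(out.dtype)
    return out
```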
In the embodiment of the application, projection transformation and seam fusion processing further improve the photo splicing quality and the accuracy of the global photo, and thus the accuracy of the subsequent calibration.
In some embodiments, the virtual focal length and rotation-translation matrix corresponding to each camera in the camera group (i.e., the virtual intrinsic and extrinsic parameters of each camera) may be estimated from the transformation matrices between adjacent photos obtained by image registration (optionally combined with the parameters used in splicing optimization steps such as seam fusion processing) to form fixed matrices, which are then stored. When global photos need to be obtained again later and the camera group has not changed, the photos in a photo group can be transformed and spliced directly according to the stored fixed matrices to obtain the global photo.
Optionally, the target view area contains a preset calibration object, and the preset calibration object contains a preset number of calibration points. In this case, the step S203 includes:
C1: acquiring global camera parameters corresponding to the global photo; the global camera parameters at least comprise the estimated intrinsic parameters of the global camera;
C2: determining the global camera coordinates corresponding to each calibration point according to the pixel coordinates of each calibration point in the pixel coordinate system of the global photo and the global camera parameters; the global camera coordinates are the estimated camera coordinates of the global camera;
C3: determining the corresponding relation between the global camera coordinate system and the mechanical arm coordinate system according to the global camera coordinates of each calibration point and the read mechanical arm coordinates corresponding to each calibration point; the mechanical arm coordinates are the coordinates of the calibration point in the mechanical arm coordinate system, and the global camera coordinate system is the coordinate system in which the global camera coordinates lie;
C4: determining the corresponding relation between the pixel coordinate system of the global photo and the mechanical arm coordinate system according to the global camera parameters and the corresponding relation between the global camera coordinate system and the mechanical arm coordinate system, thereby obtaining the hand-eye calibration result.
In the embodiment of the present application, before the hand-eye calibration method is executed, a preset calibration object is placed in the target view area in advance, the preset calibration object containing a preset number of calibration points. In one embodiment, the preset calibration object may be a sheet of paper or a calibration plate containing the preset number of calibration points. In another embodiment, the preset calibration object may comprise n sheets of paper or n calibration plates with m calibration points on each, so that the calibration object formed by combining them contains n × m calibration points, where n and m are positive integers greater than 0 and n × m is the preset number. For example, as shown in fig. 5, 16 sheets of paper may be laid in the target view area to form the preset calibration object; if each sheet carries 2 cross-shaped points serving as 2 calibration points, there are 16 × 2 = 32 calibration points in the target view area.
In C1, the global camera parameters corresponding to the global photo are acquired; they at least include the estimated intrinsic parameters of the global camera, and may also include its estimated extrinsic parameters. The global camera is the estimated camera behind the global photo's field of view, specifically a virtual camera suspended at some spatial position in the visual frame; it does not coincide with the position of any actual camera in the camera group. Because the intrinsic and extrinsic parameters of the cameras in the camera group differ slightly and their installation precision varies, the global photo is used directly as the calibration basis for the camera parameters of the global field of view, rather than a photo shot independently by a single camera, which improves the accuracy of the hand-eye calibration method. The camera parameters in the embodiment of the application specifically include the camera intrinsic parameters and the camera extrinsic parameters: the intrinsic parameters are parameters related to the camera's own characteristics, such as its focal length and pixel size; the extrinsic parameters are parameters in the world coordinate system, such as the camera's position and rotation direction.
In the embodiment of the present application, the global camera parameters, i.e., the estimated intrinsic and extrinsic parameters of the global camera, may be calibrated in advance under the current installation condition of the camera group by placing a calibration object (for example, a checkerboard or the preset calibration object) in the target view area, shooting the target view area and splicing to obtain a specified number (for example, 80) of global photos, and then calibrating from those global photos. Specifically, let the coordinates of a calibration point of the calibration object (or another calibration plate) in the world coordinate system be $(X_w, Y_w, Z_w)$ and its coordinates in the global camera coordinate system be $(X_c, Y_c, Z_c)$; the following relationship exists between the two:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = T \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}, \qquad T = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

where the rotational displacement matrix $T$ constitutes the extrinsic parameters of the global camera, $r_{11}$ to $r_{33}$ are the elements describing the rotary motion, and $t_x, t_y, t_z$ are the elements describing the translational motion. Specifically, a global photo in which the calibration object lies at the pixel center can be selected as the calibration reference, from which the extrinsic parameters of the global camera, i.e., the rotational displacement matrix $T$, are obtained.
Assuming that the coordinates in the pixel coordinate system of the global photo are $(u, v)$, the following relationship (referred to as the pixel-camera relational expression) exists between the global camera coordinate system and the pixel coordinate system of the global photo:

$$Z_{deep} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & C_x \\ 0 & f/dy & C_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}$$

where $Z_{deep}$ is the depth-of-field parameter of the global camera, $f$ is the focal length of the global camera, $dy$ is the pixel imaging size in the vertical direction, $dx$ is the pixel imaging size in the horizontal direction, $C_x$ is the optical center coordinate of the global camera's imaging in the horizontal direction, and $C_y$ is the optical center coordinate in the vertical direction; these parameters are the intrinsic parameters of the global camera. By taking the pixel coordinates of a specified number of calibration points in the global photo, the coordinates of the calibration points in the world coordinate system, and the extrinsic parameters already determined, the two formulas above can be solved jointly for the intrinsic parameters of the global camera, thereby determining the corresponding relation between the pixel coordinate system of the global photo and the global camera coordinate system. As a possible implementation, the intrinsic and extrinsic parameters of the global camera can be calibrated with the calibration assistant of HALCON (a standard machine vision software package); when they are obtained this way, the accuracy of the calibration can be evaluated through the residual projection parameters in HALCON.
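For illustration, the back-projection implied by the pixel-camera relational expression can be sketched as follows; the parameter names mirror the formula above, and packaging the computation into a function is an assumption for illustration, not the patent's code:

```python
# Back-project a pixel coordinate (u, v) of the global photo to global
# camera coordinates using the calibrated intrinsics. From the third row of
# the pixel-camera relation, Zc equals Z_deep; the first two rows then give
# Xc and Yc.
def pixel_to_camera(u, v, Z_deep, f, dx, dy, Cx, Cy):
    Xc = (u - Cx) * dx / f * Z_deep
    Yc = (v - Cy) * dy / f * Z_deep
    Zc = Z_deep
    return (Xc, Yc, Zc)
```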
In C2, after the pre-calibrated global camera parameters are acquired, the coordinates of each calibration point in the estimated global camera coordinate system, i.e., the global camera coordinates corresponding to each calibration point, are determined through the pixel-camera relational expression from the calibrated intrinsic parameters of the global camera and the pixel coordinates of each calibration point in the pixel coordinate system of the global photo. That is, from the calibrated global camera intrinsic parameters, the corresponding relation between the pixel coordinate system of the global photo and the global camera coordinate system can be determined, completing the conversion from pixel coordinates to global camera coordinates and yielding the global camera coordinates corresponding to each calibration point.
In C3, the mechanical arm coordinates are the coordinates of the calibration point in the mechanical arm coordinate system. The following relational expression (camera-mechanical arm relational expression for short) exists between the global camera coordinate system corresponding to the global photo and the mechanical arm coordinate system:
$$\begin{bmatrix} X_r \\ Y_r \\ Z_r \\ 1 \end{bmatrix} = R_1^{-1} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}$$

where $(X_c, Y_c, Z_c)$ are coordinates in the global camera coordinate system, $(X_r, Y_r, Z_r)$ are coordinates in the mechanical arm coordinate system, and $R_1^{-1}$ is the inverse of the 4 × 4 rotation-translation matrix between the global camera coordinate system and the mechanical arm coordinate system.
In the embodiment of the application, after the global camera coordinates of each calibration point in the global camera coordinate system are determined and the mechanical arm coordinates corresponding to each calibration point output by the mechanical arm are read, the two sets of coordinates of each calibration point are substituted into the camera-mechanical arm relational expression and the equations are solved, so that the transformation matrix $R_1^{-1}$ between the global camera coordinate system and the mechanical arm coordinate system can be calibrated and the corresponding relation between the global camera coordinate system and the mechanical arm coordinate system determined.
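One standard way to solve the camera-mechanical arm relational expression from the calibration point pairs is least-squares rigid registration (the Kabsch/Umeyama method). The patent does not prescribe a solver, so the following NumPy sketch is only one possible approach:

```python
# Estimate the 4x4 matrix R1_inv mapping global camera coordinates to
# mechanical arm coordinates from paired calibration points, by SVD-based
# least-squares rigid registration.
import numpy as np

def solve_camera_to_arm(cam_pts, arm_pts):
    P = np.asarray(cam_pts, dtype=float)   # N x 3, global camera coordinates
    Q = np.asarray(arm_pts, dtype=float)   # N x 3, mechanical arm coordinates
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    R1_inv = np.eye(4)
    R1_inv[:3, :3], R1_inv[:3, 3] = R, t
    return R1_inv                          # camera -> arm, homogeneous 4x4
```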
In C4, the intrinsic parameters ($Z_{deep}$, $f$, $dy$, $dx$, $C_x$, $C_y$) among the calibrated global camera parameters acquired in step C1 are substituted as known coefficients into the pixel-camera relational expression, which determines the corresponding relation between the pixel coordinate system of the current global photo and the global camera coordinate system; the matrix $R_1^{-1}$ obtained in step C3 is substituted as a known coefficient into the camera-mechanical arm relational expression, which determines the corresponding relation between the current global camera coordinate system and the mechanical arm coordinate system. Combining the two corresponding relations determines the corresponding relation from the pixel coordinate system of the global photo to the mechanical arm coordinate system. In this embodiment, the pixel-camera relational expression and the camera-mechanical arm relational expression with their coefficients determined may be taken as the hand-eye calibration result.
In the embodiment of the application, by means of the preset calibration object, the global camera coordinates of each calibration point can be determined from its pixel coordinates in the global photo and the global camera parameters, and the corresponding relation between the global camera coordinate system and the mechanical arm coordinate system can be obtained from the global camera coordinates and the mechanical arm coordinates of the calibration points. The corresponding relation from the pixel coordinate system of the global photo to the global camera coordinate system and the corresponding relation from the global camera coordinate system to the mechanical arm coordinate system are thus both determined, and combining the two accurately determines the corresponding relation from the pixel coordinate system of the global photo to the mechanical arm coordinate system, so that hand-eye calibration is completed accurately.
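Putting C1 to C4 together, the hand-eye calibration result can be viewed as a single mapping from global-photo pixel coordinates to mechanical arm coordinates. The sketch below combines the earlier pixel_to_camera() and solve_camera_to_arm() sketches; the packaging into one function is an assumption for illustration:

```python
# Hand-eye calibration result as a function: pixel -> camera -> arm.
# intrinsics is a dict with keys Z_deep, f, dx, dy, Cx, Cy; R1_inv is the
# 4x4 matrix estimated by solve_camera_to_arm().
import numpy as np

def pixel_to_arm(u, v, intrinsics, R1_inv):
    Xc, Yc, Zc = pixel_to_camera(u, v, **intrinsics)   # pixel-camera relation
    arm = R1_inv @ np.array([Xc, Yc, Zc, 1.0])         # camera-arm relation
    return arm[:3]
```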
Further, after the corresponding relation from the pixel coordinate system of the global photo to the global camera coordinate system and the corresponding relation from the global camera coordinate system to the mechanical arm coordinate system are determined, the accuracy of the hand-eye calibration result may be verified by placing a fixed object in the application scene shown in fig. 1 and starting the cameras, the conveyor chain, and the mechanical arm. Further, if verification shows that a certain height error exists in the hand-eye calibration result, the depth-of-field parameter $Z_{deep}$ among the global camera intrinsic parameters can be fine-tuned to adjust the calibration accuracy and error.
Optionally, in the preset calibration object, an interval between the calibration points is smaller than or equal to a preset threshold.
In the embodiment of the present application, the preset number of calibration points are distributed uniformly over the whole target view area, with the interval between calibration points no larger than a preset threshold. For example, the lateral distance between calibration points may be at most 15 cm and the vertical distance at most 12 cm, which improves calibration accuracy. Specifically, the preset number is determined by the size of the target view area under the constraint that the interval between calibration points is less than or equal to the preset threshold (for example, 5 to 8 calibration points per square meter). Under this constraint the preset number is usually greater than 9, so the accuracy of the hand-eye calibration method of the embodiment of the present application is higher than that of the conventional nine-point calibration method, and the influence of distortion errors that may exist in the global photo on accuracy can be compensated.
Optionally, after the step S203, the method further includes:
D1: if a target object is detected entering the target view area, starting the camera group to shoot to obtain a target photo group;
D2: splicing the target photo group to obtain a target global photo containing the complete image information of the target object;
D3: determining the target mechanical arm coordinates corresponding to the target object according to the target global photo and the hand-eye calibration result; the target mechanical arm coordinates are the coordinates of the grabbing point of the target object in the mechanical arm coordinate system.
In D1, when a target object is detected entering the target view area, all cameras in the camera group are started synchronously to shoot, obtaining the target photo group. Specifically, the target object in the embodiment of the application is a large-sized object to be grabbed (i.e., an object larger than the preset size recognizable by a single camera; it may be, for example, a rebar), and each photo in the target photo group contains part of the image information of the target object. In one embodiment, when a trigger signal is detected at the infrared sensor at the entrance of the target view area, it is determined that a target object has entered the target view area, and all cameras in the camera group are started synchronously to shoot, obtaining the target photo group.
In D2, the photos in the target photo group are spliced to obtain the target global photo, which contains the complete image information of the target object. Specifically, the relevant splicing parameters determined in step S202 of the calibration process (for example, the transformation matrices between adjacent photos and the parameters used in projection transformation and seam fusion processing) can be reused to splice the target photo group directly.
In D3, after the target global photo is obtained, the pixel coordinates of the grabbing point of the target object in the target global photo may be determined with a preset object recognition model (e.g., a pre-trained neural network model). Then, according to the hand-eye calibration result (i.e., the corresponding relation between the pixel coordinate system of the global photo and the mechanical arm coordinate system), the pixel coordinates of the grabbing point are converted into the corresponding coordinates in the mechanical arm coordinate system, i.e., the target mechanical arm coordinates. The target mechanical arm coordinates are then sent to the mechanical arm, instructing it to move according to them, position the grabbing point of the target object accurately, and grab the target object accurately.
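A runtime sketch of D1 to D3, reusing the earlier sketches: shoot_photo_group() is the capture sketch above, while stitch_group() and detect_grab_point() are hypothetical placeholders for splicing with the stored parameters and for the preset object recognition model respectively:

```python
# D1-D3 at runtime, built on the earlier sketches. stitch_group() and
# detect_grab_point() are hypothetical placeholders.
def grab_target(cameras, intrinsics, R1_inv, send_to_arm):
    photo_group = shoot_photo_group(cameras)          # D1: synchronized shooting
    global_photo = stitch_group(photo_group)          # D2: reuse stored splicing parameters
    u, v = detect_grab_point(global_photo)            # recognition model (placeholder)
    target = pixel_to_arm(u, v, intrinsics, R1_inv)   # D3: hand-eye calibration result
    send_to_arm(target)                               # instruct the arm to grab
```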
In the embodiment of the application, after the hand-eye calibration result is obtained, when a target object is detected, the target photo group shot by the camera group can be spliced into the target global photo, and the target mechanical arm coordinates corresponding to the target object can be determined accurately from the target global photo and the hand-eye calibration result, so that the mechanical arm can be instructed to grab the target object accurately.
Optionally, before the step C3, the method further includes:
instructing the mechanical arm to carry a calibration tool of a preset length and touch each calibration point in turn, and reading the coordinates output by the mechanical arm when the tip of the calibration tool is at the position of each calibration point, thereby obtaining the mechanical arm coordinates corresponding to the calibration points; the coordinates output by the mechanical arm are coordinates in the mechanical arm base coordinate system;
correspondingly, after the step D3, the method further includes:
sending the target mechanical arm coordinates to the mechanical arm to instruct the mechanical arm to grab the target object according to the target mechanical arm coordinates and the mechanical arm preset parameters; the mechanical arm preset parameters include the information of the calibration tool and the start point coordinates of the conveyor chain.
In the embodiment of the application, the mechanical arm base is at a certain distance from the visual frame, i.e., from the target view area, so during hand-eye calibration the mechanical arm needs to carry a calibration tool of a preset length in order to reach the calibration points of the preset calibration object in the target view area. For example, the calibration tool of preset length may be a 1-meter long-pointed tool (i.e., a tool one meter long with a pointed probe at its end), which may be mounted on the flange of the mechanical arm.
After the calibration tool of preset length is mounted on the mechanical arm, the mechanical arm is instructed to carry it and touch each calibration point on the preset calibration object in turn; when the tip of the calibration tool is at the position of a calibration point, the coordinates output by the mechanical arm are taken as the mechanical arm coordinates corresponding to that calibration point. Specifically, the coordinates output by the mechanical arm are the mechanical arm base coordinates output by the mechanical arm to its teach pendant, and they can be acquired by establishing a connection with the teach pendant of the mechanical arm.
Correspondingly, in step D3 of the embodiment of the application, the target mechanical arm coordinates determined from the target global photo and the hand-eye calibration result are the coordinates of the grabbing point of the target object in the mechanical arm base coordinate system. After step D3, the target mechanical arm coordinates are sent to the mechanical arm, which is instructed to calculate, from the target mechanical arm coordinates and its preset parameters, the coordinate position in the conveyor chain coordinate system to which it actually needs to move (the conveyor chain target coordinates for short), and to move to the conveyor chain target coordinates to grab the target object. Specifically, the mechanical arm preset parameters include the information of the calibration tool and the start point coordinates of the conveyor chain. The information of the calibration tool may include indication information on whether the mechanical arm carries the calibration tool and information such as the type and length of the tool. The start point coordinates of the conveyor chain are the coordinates of the visual frame in the conveyor chain coordinate system, i.e., the calibrated coordinates when an object in the visual frame has not yet started to be conveyed.
Specifically, in the embodiment of the application, the mechanical arm determines, from its preset parameters, the internal conversion relationship between the mechanical arm base coordinate system, the mechanical arm tool coordinate system and the conveyor chain coordinate system. Thus, after the mechanical arm acquires the target mechanical arm coordinates in the mechanical arm base coordinate system, it can calculate, through this internal conversion relationship, the coordinates of the grabbing point of the target object in the conveyor chain coordinate system, accurately move to the position of the grabbing point, and accurately grab the target object. Specifically, with $R_2^{-1}$ denoting the inverse of the 4 x 4 rotation-translation matrix from the mechanical arm coordinate system to the conveyor chain coordinate system, the internal conversion relationship "mechanical arm base coordinate system - mechanical arm tool coordinate system - conveyor chain coordinate system" can be expressed in matrix form as:

$$\begin{bmatrix} X_{conv} \\ Y_{conv} \\ Z_{conv} \\ 1 \end{bmatrix} = R_2^{-1} \begin{bmatrix} X_r \\ Y_r \\ Z_r \\ 1 \end{bmatrix} = R_2^{-1} R_1^{-1} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}$$

where $(X_{conv}, Y_{conv}, Z_{conv})$ are coordinates in the conveyor chain coordinate system, $(X_r, Y_r, Z_r)$ are coordinates in the mechanical arm coordinate system, $(X_c, Y_c, Z_c)$ are coordinates in the global camera coordinate system, $R_1^{-1}$ is the inverse of the 4 x 4 rotation-translation matrix between the global camera coordinate system and the mechanical arm coordinate system, and $R_2^{-1}$ is the inverse of the 4 x 4 rotation-translation matrix from the mechanical arm coordinate system to the conveyor chain coordinate system. For convenience of description, the above equation is referred to as the mechanical arm-conveyor chain relational expression.
That is, in the embodiment of the present application, after the target photo group is obtained and spliced into the target global photo, the pixel coordinates of the grabbing point identified in the target global photo can be converted, based on the pixel-camera relational expression and the camera-mechanical arm relational expression, into the target mechanical arm coordinates in the mechanical arm coordinate system. After the target mechanical arm coordinates are sent to the mechanical arm, the mechanical arm further calculates, using the mechanical arm-conveyor chain relational expression, the coordinates of the grabbing point in the conveyor chain coordinate system. Together, these steps realize the spatial coordinate chain conversion of pixel coordinate system - global camera coordinate system - mechanical arm base coordinate system - conveyor chain coordinate system, so that the mechanical arm accurately locates the grabbing point of the target object and accurately grabs it.
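For illustration, the full coordinate chain conversion might be sketched in Python as follows, assuming the hand-eye calibration has already produced the two inverse rotation-translation matrices and that the depth of the grabbing point is known (for example, the fixed height of the conveyor plane); the intrinsic matrix and all numeric values are placeholders.

```python
import numpy as np

def to_homogeneous(p):
    """Append 1 so that 4x4 rotation-translation matrices can be applied."""
    return np.append(np.asarray(p, dtype=float), 1.0)

def pixel_to_camera(u, v, depth, K):
    """Back-project pixel (u, v) at a known depth Z into the global camera
    frame via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def camera_to_conveyor(p_cam, R1_inv, R2_inv):
    """Chain the two inverse rotation-translation matrices:
    camera frame -> arm base frame (R1_inv), then arm base -> conveyor (R2_inv)."""
    p_arm = R1_inv @ to_homogeneous(p_cam)   # mechanical arm base coordinates
    p_conv = R2_inv @ p_arm                  # conveyor chain coordinates
    return p_arm[:3], p_conv[:3]

# Placeholder values for illustration only.
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
R1_inv = np.eye(4)  # would come from the hand-eye calibration result
R2_inv = np.eye(4)  # would come from the arm's preset parameters
grab_cam = pixel_to_camera(u=1024, v=512, depth=1.5, K=K)
grab_arm, grab_conv = camera_to_conveyor(grab_cam, R1_inv, R2_inv)
print(grab_arm, grab_conv)
```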
In the embodiment of the application, when the visual frame is far from the mechanical arm, hand-eye calibration can still be achieved conveniently and accurately through the calibration tool of the preset length. Correspondingly, in such a scene, after the target mechanical arm coordinates are subsequently obtained, they can be sent to the mechanical arm, instructing it to accurately locate the grabbing point of the target object according to the target mechanical arm coordinates and the preset parameters containing the calibration tool information and the conveyor chain start-point coordinates, so as to accurately grab the target object.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Example two:
fig. 7 shows a schematic structural diagram of a hand-eye calibration device provided in an embodiment of the present application, and for convenience of description, only parts related to the embodiment of the present application are shown:
this hand-eye calibration device includes: a photo group acquiring unit 71, a splicing unit 72 and a calibration unit 73.
Wherein:
A photo group acquiring unit 71, configured to acquire a photo group captured by a camera group, where the camera group includes at least two cameras and is used to cooperatively capture a target visual field area; the photo group includes the photos respectively captured by each camera in the camera group.
A splicing unit 72, configured to splice the photos in the photo group to obtain a global photo corresponding to the target visual field area.
A calibration unit 73, configured to calibrate, according to the global photo, the correspondence between the pixel coordinate system of the global photo and the mechanical arm coordinate system, to obtain a hand-eye calibration result.
Optionally, the splicing unit 72 includes a transformation matrix determining module and a splicing module:
the transformation matrix determining module is used for performing feature point matching and image registration on every two adjacent photos in the photo group and determining a transformation matrix between the adjacent photos in the photo group;
and the splicing module is used for splicing the photos in the photo group according to the transformation matrix between the adjacent photos in the photo group, to obtain the global photo.
Optionally, the splicing module is specifically configured to perform projection transformation on each photo in the photo group according to the transformation matrix between adjacent photos in the photo group, to obtain a preliminary spliced photo, and to perform splicing and fusion processing on the preliminary spliced photo to obtain the global photo.
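For illustration, a minimal Python/OpenCV sketch of such a splicing module might look as follows; the choice of ORB features, RANSAC, and the naive overlap handling are assumptions, since the patent does not prescribe a specific feature detector or fusion method.

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    """Splice two adjacent photos: match feature points, register the images
    by estimating a homography with RANSAC, apply the projection
    transformation, and overlay to form a preliminary stitched photo."""
    gray_l = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(gray_l, None)
    kp2, des2 = orb.detectAndCompute(gray_r, None)

    # Feature point matching between the two adjacent photos.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Transformation matrix between the adjacent photos (image registration).
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Projection transformation of the right photo into the left photo's frame.
    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (2 * w, h))
    canvas[:h, :w] = img_left  # naive overlay; real fusion would blend the seam
    return canvas
```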
Optionally, the target visual field area includes a preset calibration object, and the preset calibration object includes a preset number of calibration points. The calibration unit 73 is specifically configured to: acquire global camera parameters corresponding to the global photo, the global camera parameters at least comprising predicted intrinsic parameters of the global camera; determine the global camera coordinates corresponding to each calibration point according to the pixel coordinates of each calibration point in the pixel coordinate system of the global photo and the global camera parameters, where the global camera coordinates are coordinates in a global camera coordinate system and the global camera coordinate system is the coordinate system in which the global camera is located; determine the correspondence between the global camera coordinate system and the mechanical arm coordinate system according to the global camera coordinates of each calibration point and the read mechanical arm coordinates corresponding to each calibration point, where the mechanical arm coordinates are the coordinates of the calibration point in the mechanical arm coordinate system; and determine the correspondence between the pixel coordinate system of the global photo and the mechanical arm coordinate system according to the global camera parameters and the correspondence between the global camera coordinate system and the mechanical arm coordinate system, to obtain the hand-eye calibration result.
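For illustration, one standard way to estimate the correspondence between the global camera coordinate system and the mechanical arm coordinate system from the matched calibration points is the SVD-based Kabsch method, sketched below in Python; the patent does not mandate this particular solver, so it is an assumed choice.

```python
import numpy as np

def estimate_rigid_transform(cam_pts, arm_pts):
    """Estimate the 4x4 rotation-translation matrix mapping global camera
    coordinates to mechanical arm coordinates from N >= 3 matched
    calibration points, using the SVD-based Kabsch method."""
    cam_pts = np.asarray(cam_pts, dtype=float)   # (N, 3) camera frame
    arm_pts = np.asarray(arm_pts, dtype=float)   # (N, 3) arm base frame
    cam_c = cam_pts.mean(axis=0)
    arm_c = arm_pts.mean(axis=0)

    # Covariance of the centered point sets, then SVD for the rotation.
    H = (cam_pts - cam_c).T @ (arm_pts - arm_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = arm_c - R @ cam_c

    T = np.eye(4)                 # homogeneous rotation-translation matrix
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

In the notation of the equation above, the returned matrix plays the role of the camera-to-arm transform in the coordinate chain.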
Optionally, the interval between the calibration points in the target visual field area is less than or equal to a preset threshold.
Optionally, the hand-eye calibration apparatus further includes:
the target photo group acquiring unit is used for starting the camera group to shoot to obtain a target photo group if a target object is detected entering the target visual field area;
the target global photo determining unit is used for splicing the target photo group to obtain a target global photo containing complete image information of the target object;
the target mechanical arm coordinate determining unit is used for determining the target mechanical arm coordinates corresponding to the target object according to the target global photo and the hand-eye calibration result; the target mechanical arm coordinates are the coordinates of the grabbing point of the target object in the mechanical arm coordinate system.
Optionally, the hand-eye calibration apparatus further includes:
the first indicating unit is used for instructing the mechanical arm to carry a calibration tool of a preset length and touch each calibration point in sequence, and for reading the coordinates output by the mechanical arm when the tip of the calibration tool is at the position of each calibration point, to obtain the mechanical arm coordinates corresponding to the calibration point; the coordinates output by the mechanical arm are coordinates in the mechanical arm base coordinate system;
the second indicating unit is used for sending the target mechanical arm coordinates to the mechanical arm to instruct the mechanical arm to grab the target object according to the target mechanical arm coordinates and the preset parameters of the mechanical arm; the preset parameters of the mechanical arm include information of the calibration tool and the start-point coordinates of the conveyor chain.
It should be noted that the information interaction between the above devices/units, their execution processes, and their specific functions and technical effects are based on the same concept as the method embodiments of the present application; for details, reference may be made to the method embodiment section, which is not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Example three:
fig. 8 is a schematic diagram of a communication device according to an embodiment of the present application. As shown in fig. 8, the communication device 8 of this embodiment includes: a processor 80, a memory 81, and a computer program 82, such as a hand-eye calibration program, stored in the memory 81 and executable on the processor 80. When executing the computer program 82, the processor 80 implements the steps in the above hand-eye calibration method embodiments, such as steps S201 to S203 shown in fig. 2, or implements the functions of the modules/units in the above device embodiments, such as the functions of the photo group acquiring unit 71 to the calibration unit 73 shown in fig. 7.
Illustratively, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 82 in the communication device 8.
The communication device 8 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The communication device may include, but is not limited to, a processor 80, a memory 81. Those skilled in the art will appreciate that fig. 8 is merely an example of a communication device 8 and does not constitute a limitation of communication device 8 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the communication device may also include input-output devices, network access devices, buses, etc.
The processor 80 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 81 may be an internal storage unit of the communication device 8, such as a hard disk or a memory of the communication device 8. The memory 81 may also be an external storage device of the communication device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the communication device 8. Further, the memory 81 may also include both an internal storage unit of the communication device 8 and an external storage device. The memory 81 is used for storing the computer programs and other programs and data required by the communication device. The memory 81 may also be used to temporarily store data that has been output or is to be output.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/communication device and method may be implemented in other ways. For example, the above-described apparatus/communication device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the above embodiments may be realized by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A hand-eye calibration method is characterized by comprising the following steps:
acquiring a photo group captured by a camera group, wherein the camera group comprises at least two cameras and is used for cooperatively capturing a target visual field area; the photo group comprises photos respectively captured by each camera in the camera group;
splicing the photos in the photo group to obtain a global photo corresponding to the target visual field area;
and calibrating, according to the global photo, the correspondence between the pixel coordinate system of the global photo and the mechanical arm coordinate system, to obtain a hand-eye calibration result.
2. The hand-eye calibration method of claim 1, wherein the splicing the photos in the photo group to obtain the global photo corresponding to the target visual field area comprises:
performing feature point matching and image registration on every two adjacent photos in the photo group, and determining a transformation matrix between the adjacent photos in the photo group;
and splicing the photos in the photo group according to the transformation matrix between the adjacent photos in the photo group to obtain the global photo.
3. The hand-eye calibration method of claim 2, wherein the splicing the photos in the photo group according to the transformation matrix between the adjacent photos in the photo group to obtain the global photo comprises:
performing projection transformation on each photo in the photo group according to the transformation matrix between the adjacent photos in the photo group to obtain a preliminary spliced photo;
and performing splicing and fusion processing on the preliminary spliced photo to obtain the global photo.
4. The hand-eye calibration method according to any one of claims 1 to 3, wherein the target visual field area includes a preset calibration object, the preset calibration object includes a preset number of calibration points, and the calibrating, according to the global photo, the correspondence between the pixel coordinate system of the global photo and the mechanical arm coordinate system to obtain the hand-eye calibration result comprises:
acquiring global camera parameters corresponding to the global photo; the global camera parameters at least comprise predicted intrinsic parameters of the global camera;
determining global camera coordinates corresponding to each calibration point according to the pixel coordinates of each calibration point in the pixel coordinate system of the global photo and the global camera parameters; the global camera coordinates are coordinates in a global camera coordinate system, and the global camera coordinate system is the coordinate system in which the global camera is located;
determining the correspondence between the global camera coordinate system and the mechanical arm coordinate system according to the global camera coordinates of each calibration point and the read mechanical arm coordinates corresponding to each calibration point; the mechanical arm coordinates are the coordinates of the calibration point in the mechanical arm coordinate system;
and determining the correspondence between the pixel coordinate system of the global photo and the mechanical arm coordinate system according to the global camera parameters and the correspondence between the global camera coordinate system and the mechanical arm coordinate system, to obtain the hand-eye calibration result.
5. The hand-eye calibration method according to claim 4, wherein the interval between the calibration points in the target visual field area is less than or equal to a preset threshold.
6. The hand-eye calibration method according to claim 4, further comprising, after obtaining the hand-eye calibration result:
if a target object is detected entering the target visual field area, starting the camera group to shoot to obtain a target photo group;
splicing the target photo group to obtain a target global photo containing complete image information of the target object;
determining the target mechanical arm coordinates corresponding to the target object according to the target global photo and the hand-eye calibration result; the target mechanical arm coordinates are the coordinates of the grabbing point of the target object in the mechanical arm coordinate system.
7. The hand-eye calibration method according to claim 6, before the determining the correspondence between the global camera coordinate system and the mechanical arm coordinate system according to the global camera coordinates of each calibration point and the read mechanical arm coordinates corresponding to each calibration point, further comprising:
instructing the mechanical arm to carry a calibration tool of a preset length and touch each calibration point in sequence, and reading the coordinates output by the mechanical arm when the tip of the calibration tool is at the position of each calibration point, to obtain the mechanical arm coordinates corresponding to the calibration point; the coordinates output by the mechanical arm are coordinates in the mechanical arm base coordinate system;
correspondingly, after the target mechanical arm coordinates corresponding to the target object are determined according to the target global photo and the hand-eye calibration result, the method further comprises:
sending the target mechanical arm coordinates to the mechanical arm to instruct the mechanical arm to grab the target object according to the target mechanical arm coordinates and preset parameters of the mechanical arm; the preset parameters of the mechanical arm comprise information of the calibration tool and start-point coordinates of the conveyor chain.
8. A hand-eye calibration device, comprising:
a photo group acquiring unit, configured to acquire a photo group captured by a camera group, wherein the camera group comprises at least two cameras and is used for cooperatively capturing a target visual field area; the photo group comprises photos respectively captured by each camera in the camera group;
a splicing unit, configured to splice the photos in the photo group to obtain a global photo corresponding to the target visual field area;
and a calibration unit, configured to calibrate, according to the global photo, the correspondence between the pixel coordinate system of the global photo and the mechanical arm coordinate system, to obtain a hand-eye calibration result.
9. A communication device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the computer program, when executed by the processor, causes the communication device to carry out the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, causes a communication device to carry out the steps of the method according to any one of claims 1 to 7.
CN202110276024.XA 2021-03-15 2021-03-15 Hand-eye calibration method and device, communication equipment and storage medium Pending CN113129383A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110276024.XA CN113129383A (en) 2021-03-15 2021-03-15 Hand-eye calibration method and device, communication equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113129383A true CN113129383A (en) 2021-07-16

Family

ID=76773080

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110276024.XA Pending CN113129383A (en) 2021-03-15 2021-03-15 Hand-eye calibration method and device, communication equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113129383A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018076154A1 (en) * 2016-10-25 2018-05-03 成都通甲优博科技有限责任公司 Spatial positioning calibration of fisheye camera-based panoramic video generating method
CN110193849A (en) * 2018-02-27 2019-09-03 北京猎户星空科技有限公司 A kind of method and device of Robotic Hand-Eye Calibration
CN110587600A (en) * 2019-08-20 2019-12-20 南京理工大学 Point cloud-based autonomous path planning method for live working robot
CN110880159A (en) * 2019-11-05 2020-03-13 浙江大华技术股份有限公司 Image splicing method and device, storage medium and electronic device
CN111738923A (en) * 2020-06-19 2020-10-02 京东方科技集团股份有限公司 Image processing method, apparatus and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JUTAMANEE AUYSAKUL, ET AL.: "Development of Hemi-Cylinder Plane for Panorama View in Around View Monitor Applications", 2016 International Conference on Computational Intelligence and Applications (ICCIA), 31 December 2016, pages 26-30 *
CHI Longyun et al.: "Research on Image Stitching and Localization Algorithm Based on Local Homography Matrix", Navigation Positioning and Timing, vol. 7, no. 3, pages 62-69 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313771A (en) * 2021-07-19 2021-08-27 山东捷瑞数字科技股份有限公司 Omnibearing measuring method for industrial complex equipment
CN114332249A (en) * 2022-03-17 2022-04-12 常州铭赛机器人科技股份有限公司 Camera vision internal segmentation type hand-eye calibration method
CN115439555A (en) * 2022-08-29 2022-12-06 佛山职业技术学院 Multi-phase machine external parameter calibration method without public view field
CN116524022A (en) * 2023-04-28 2023-08-01 北京优酷科技有限公司 Offset data calculation method, image fusion device and electronic equipment
CN116524022B (en) * 2023-04-28 2024-03-26 神力视界(深圳)文化科技有限公司 Offset data calculation method, image fusion device and electronic equipment

Similar Documents

Publication Publication Date Title
CN113129383A (en) Hand-eye calibration method and device, communication equipment and storage medium
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
EP3028252B1 (en) Rolling sequential bundle adjustment
CN111627072B (en) Method, device and storage medium for calibrating multiple sensors
US11544916B2 (en) Automated gauge reading and related systems, methods, and devices
CN111862224B (en) Method and device for determining external parameters between camera and laser radar
CN105608671A (en) Image connection method based on SURF algorithm
WO2023060926A1 (en) Method and apparatus for guiding robot positioning and grabbing based on 3d grating, and device
CN110926330B (en) Image processing apparatus, image processing method, and program
WO2021184302A1 (en) Image processing method and apparatus, imaging device, movable carrier, and storage medium
JP5297779B2 (en) Shape measuring apparatus and program
CN108734738B (en) Camera calibration method and device
WO2020228680A1 (en) Dual camera image-based splicing method and apparatus, and electronic device
CN112307912A (en) Method and system for determining personnel track based on camera
CN114283079A (en) Method and equipment for shooting correction based on graphic card
CN107680035B (en) Parameter calibration method and device, server and readable storage medium
CN111445537A (en) Calibration method and system of camera
CN112257713A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
JP2011165007A (en) Inspection system, method, and program
CN114998447A (en) Multi-view vision calibration method and system
CN112184544B (en) Image stitching method and device
CN111260574A (en) Seal photo correction method, terminal and computer readable storage medium
CN115880220A (en) Multi-view-angle apple maturity detection method
CN116051652A (en) Parameter calibration method, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination