CN114092335A - Image splicing method, device and equipment based on robot calibration and storage medium


Publication number
CN114092335A
Authority
CN
China
Prior art keywords
point cloud
cloud data
calibration
data set
image
Prior art date
Legal status: Granted
Application number
CN202111443204.9A
Other languages: Chinese (zh)
Other versions: CN114092335B
Inventor
代勇
陈方
刘聪
姚绪松
蓝猷凤
Current Assignee
Qunbin Intelligent Manufacturing Technology Suzhou Co ltd
Original Assignee
Shenzhen Qb Precision Industrial Co ltd
Application filed by Shenzhen Qb Precision Industrial Co ltd
Priority: CN202111443204.9A
Publication of CN114092335A; application granted and published as CN114092335B
Legal status: Active

Classifications

    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2200/32: Indexing scheme for image data processing or generation involving image mosaicing
    • G06T 2207/10028: Range image; depth image; 3D point clouds


Abstract

The invention discloses an image splicing method, device and equipment based on robot calibration, and a storage medium, relating to the technical field of image processing. The method comprises: acquiring a first point cloud data set collected by a 3D camera on a calibration block under a plurality of postures, and obtaining a calibration data set of the manipulator carrying the 3D camera through pose calculation; obtaining a point cloud data difference relation according to the first point cloud data set and the calibration data set; acquiring a second point cloud data set collected by the 3D camera on a target object under a plurality of postures, and acquiring the corresponding set of manipulator position information recorded when the point cloud data of the target object are collected; and performing three-dimensional splicing on the second point cloud data set according to the point cloud data difference relation and the manipulator position information set to obtain a target spliced image. The invention can obtain product images from different azimuth angles, acquire a large amount of point cloud data and realize image splicing.

Description

Image splicing method, device and equipment based on robot calibration and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to an image splicing method, device and equipment based on robot calibration and a storage medium.
Background
A common 3D vision application can scan only one visible surface of a target object at a time, and even if images of all the visible surfaces are obtained by shooting from multiple angles, the images are difficult to splice spatially into a complete reconstruction. In the prior-art planar splicing technique for 3D cameras, when a product is larger than the scanning space of the 3D camera, the product is translated so that different regions of the current surface are scanned in turn, yielding a complete image of that surface. This mode is suitable only for planar splicing; if the product is placed at different angles, the existing planar splicing technique cannot realize three-dimensional splicing of the product.
Disclosure of Invention
The invention aims to provide an image splicing method, device and equipment based on robot calibration and a storage medium, so as to solve the problem that image splicing of products is difficult to realize when the products are placed at different angles.
In order to achieve the above object, an embodiment of the present invention provides an image stitching method based on robot calibration, including:
acquiring a first point cloud data set acquired by a 3D camera on a calibration block under a plurality of postures, and acquiring a calibration data set of a manipulator carrying the 3D camera through pose calculation; the first point cloud data set comprises corresponding first point cloud data under a plurality of postures;
obtaining a point cloud data difference relation according to the first point cloud data set and the calibration data set;
acquiring a second point cloud data set acquired by the 3D camera on the target object under a plurality of postures, and acquiring a corresponding manipulator position information set when the point cloud data of the target object is acquired; the second point cloud data set comprises corresponding second point cloud data under a plurality of postures;
and performing three-dimensional splicing on the second point cloud data set according to the point cloud data difference relation and the manipulator position information set to obtain a target spliced image.
Preferably, the acquiring a second point cloud data set acquired by the 3D camera on the target object under a plurality of postures, and acquiring a corresponding manipulator position information set when acquiring the point cloud data of the target object, includes:
obtaining a calibration matrix of the pose of the manipulator according to the position information of the manipulator;
obtaining a spatial position relation of the second point cloud data relative to the manipulator according to the calibration matrix;
and obtaining a corresponding manipulator position information set when the point cloud data of the target object is acquired according to the spatial position relation.
Preferably, the acquiring a first point cloud data set acquired by the 3D camera for the calibration block in a plurality of postures, and simultaneously obtaining a calibration data set of the manipulator carrying the 3D camera through pose calculation includes:
acquiring internal parameters of a 3D camera, a first image of a calibration block, a depth image corresponding to the calibration block and characteristic information of the calibration block;
and performing three-dimensional reconstruction according to the first image, the depth image and the 3D camera internal parameters to obtain first point cloud data.
Preferably, the three-dimensional stitching is performed on the second point cloud data set according to the point cloud data difference relationship and the manipulator position information set to obtain a target stitched image, and the method includes:
determining relative attitude information of each group of second point cloud data and the appointed point cloud data in the second point cloud data set;
and adjusting the second point cloud data according to the relative attitude information to ensure that the coordinate system of each group of adjusted second point cloud data is consistent with the coordinate system of the appointed point cloud data, so as to complete the splicing of the plurality of groups of second point cloud data and obtain a target spliced image.
The embodiment of the invention also provides an image splicing device based on robot calibration, which comprises:
the first acquisition module is used for acquiring a first point cloud data set acquired by the 3D camera on the calibration block under a plurality of postures and obtaining a calibration data set of the manipulator carrying the 3D camera through pose calculation; the first point cloud data set comprises corresponding first point cloud data under a plurality of postures;
the data analysis module is used for obtaining a point cloud data difference relation according to the first point cloud data set and the calibration data set;
the second acquisition module is used for acquiring a second point cloud data set acquired by the 3D camera on the target object under a plurality of postures and acquiring a corresponding manipulator position information set when the point cloud data of the target object is acquired; the second point cloud data set comprises corresponding second point cloud data under a plurality of postures;
and the data processing module is used for carrying out three-dimensional splicing on the second point cloud data set according to the point cloud data difference relation and the manipulator position information set to obtain a target spliced image.
Preferably, the second obtaining module is further configured to:
obtaining a calibration matrix of the pose of the manipulator according to the position information of the manipulator;
obtaining a spatial position relation of the second point cloud data relative to the manipulator according to the calibration matrix;
and obtaining a corresponding manipulator position information set when the point cloud data of the target object is acquired according to the spatial position relation.
Preferably, the first obtaining module is further configured to:
acquiring internal parameters of a 3D camera, a first image of a calibration block, a depth image corresponding to the calibration block and characteristic information of the calibration block;
and performing three-dimensional reconstruction according to the first image, the depth image and the 3D camera internal parameters to obtain first point cloud data.
Preferably, the data processing module is further configured to:
determining relative attitude information of each group of second point cloud data and the appointed point cloud data in the second point cloud data set;
and adjusting the second point cloud data according to the relative attitude information to ensure that the coordinate system of each group of adjusted second point cloud data is consistent with the coordinate system of the appointed point cloud data, so as to complete the splicing of the plurality of groups of second point cloud data and obtain a target spliced image.
The embodiment of the invention also provides a computer terminal device, which comprises one or more processors and a memory. The memory is coupled to the processor and is used for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the image stitching method based on robot calibration described in any of the embodiments above.
The embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the image stitching method based on robot calibration according to any of the above embodiments is implemented.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides an image splicing method based on robot calibration, which comprises the steps of acquiring a first point cloud data set acquired by a 3D camera on a calibration block under a plurality of postures, and acquiring a calibration data set of a manipulator carrying the 3D camera through pose calculation; the first point cloud data set comprises corresponding first point cloud data under a plurality of postures; obtaining a point cloud data difference relation according to the first point cloud data set and the calibration data set; acquiring a second point cloud data set acquired by the 3D camera on the target object under a plurality of postures, and acquiring a corresponding manipulator position information set when the point cloud data of the target object is acquired; the second point cloud data set comprises corresponding second point cloud data under a plurality of postures; and performing three-dimensional splicing on the second point cloud data set according to the point cloud data difference relation and the manipulator position information set to obtain a target spliced image. The invention can obtain product images of different azimuth angles, obtain a large amount of point cloud data and realize image splicing.
Drawings
In order to illustrate the technical solution of the present invention more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image stitching method based on robot calibration according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an image stitching apparatus based on robot calibration according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a computer terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be understood that the step numbers used herein are for convenience of description only and are not used as limitations on the order in which the steps are performed.
It is to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The terms "comprises" and "comprising" indicate the presence of the described features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term "and/or" refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic flowchart of an image stitching method based on robot calibration according to an embodiment of the present invention. In this embodiment, the image stitching method based on robot calibration includes the following steps:
s110, acquiring a first point cloud data set acquired by the 3D camera on the calibration block under a plurality of postures, and acquiring a calibration data set of a manipulator carrying the 3D camera through pose calculation; the first point cloud data set comprises corresponding first point cloud data under a plurality of postures;
s120, obtaining a point cloud data difference relation according to the first point cloud data set and the calibration data set;
s130, acquiring a second point cloud data set acquired by the 3D camera on the target object under a plurality of postures, and acquiring a corresponding manipulator position information set when the point cloud data of the target object is acquired; the second point cloud data set comprises corresponding second point cloud data under a plurality of postures;
and S140, performing three-dimensional splicing on the second point cloud data set according to the point cloud data difference relation and the manipulator position information set to obtain a target spliced image.
In the present embodiment, the point cloud data may be 3D point cloud data. Each set of point cloud data may be a collection of single points, each containing data of spatial coordinates and color, or spatial coordinates and gray scale information.
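As a hedged illustration of this data layout (the array shape and the sample values here are assumptions for the sketch, not taken from the patent), one set of point cloud data can be held as an N x 4 array whose rows combine spatial coordinates with a gray-scale intensity:

```python
import numpy as np

# Hypothetical layout: each row is one point, holding spatial
# coordinates (x, y, z) in metres plus a gray-scale intensity.
xyz = np.array([[0.10, 0.02, 0.55],
                [0.11, 0.02, 0.55],
                [0.12, 0.03, 0.56]])
grey = np.array([[128.0], [130.0], [127.0]])

point_cloud = np.hstack([xyz, grey])   # shape (3, 4)
```

A color variant would simply widen each row to (x, y, z, r, g, b), giving an N x 6 array.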
In one embodiment, step S130, acquiring a second point cloud data set acquired by the 3D camera for the target object under several postures, and acquiring a corresponding manipulator position information set when acquiring the point cloud data of the target object, includes: obtaining a calibration matrix of the pose of the manipulator according to the position information of the manipulator; obtaining a spatial position relation of the second point cloud data relative to the manipulator according to the calibration matrix; and obtaining a corresponding manipulator position information set when the point cloud data of the target object is acquired according to the spatial position relation.
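A minimal sketch of this step, assuming the manipulator pose is available as a 4x4 homogeneous matrix `T_base_tool` and the calibration produced a camera-to-flange matrix `T_tool_cam` (both names are hypothetical): the spatial position relation of the second point cloud relative to the manipulator follows by chaining the two transforms.

```python
import numpy as np

def cloud_in_robot_space(cloud_cam, T_base_tool, T_tool_cam):
    """Express camera-frame points in the manipulator base frame.

    cloud_cam   : (N, 3) points measured in the camera frame
    T_base_tool : 4x4 manipulator pose (flange w.r.t. base), from the
                  recorded manipulator position information
    T_tool_cam  : 4x4 hand-eye calibration matrix (camera w.r.t. flange)
    """
    T_base_cam = T_base_tool @ T_tool_cam
    homo = np.hstack([cloud_cam, np.ones((len(cloud_cam), 1))])
    return (T_base_cam @ homo.T).T[:, :3]
```

With an identity hand-eye matrix and a manipulator pose translated by 0.5 m along x, a camera point (0, 0, 1) lands at (0.5, 0, 1) in the base frame, which is the spatial position relation the text describes.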
In one embodiment, step S110, acquiring a first point cloud data set acquired by the 3D camera for the calibration block in a plurality of postures and obtaining a calibration data set of the manipulator carrying the 3D camera through pose calculation, includes: acquiring the internal parameters of the 3D camera, a first image of the calibration block, a depth image corresponding to the calibration block and characteristic information of the calibration block; and performing three-dimensional reconstruction according to the first image, the depth image and the 3D camera internal parameters to obtain the first point cloud data.
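The three-dimensional reconstruction from the depth image and the camera internal parameters can be sketched with the standard pinhole back-projection; the parameter names fx, fy, cx, cy are the usual intrinsic notation, assumed here rather than taken from the patent:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres per pixel) into a
    camera-frame point cloud using pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx          # lateral offset from the optical axis
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # keep only pixels with valid depth
```

A pixel at the principal point (cx, cy) back-projects onto the optical axis at (0, 0, depth), which is a quick sanity check for the intrinsics.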
In one embodiment, step S140, performing stereo stitching on the second point cloud data set according to the point cloud data difference relationship and the manipulator position information set to obtain a target stitched image, includes: determining relative attitude information of each group of second point cloud data and the appointed point cloud data in the second point cloud data set; and adjusting the second point cloud data according to the relative attitude information to ensure that the coordinate system of each group of adjusted second point cloud data is consistent with the coordinate system of the appointed point cloud data, so as to complete the splicing of the plurality of groups of second point cloud data and obtain a target spliced image.
It is to be understood that the specified point cloud data may be included in a plurality of sets of point cloud data; or may not be included in the sets of point cloud data.
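Under the assumption that the relative attitude information of each group of second point cloud data with respect to the appointed point cloud data is already expressed as a 4x4 homogeneous transform, the adjustment and splicing step can be sketched as:

```python
import numpy as np

def stitch_to_designated(clouds, rel_poses):
    """Adjust every cloud into the appointed cloud's coordinate
    system and concatenate the groups into one stitched point set.

    clouds    : list of (N_i, 3) point arrays
    rel_poses : list of 4x4 transforms, each mapping its cloud into
                the appointed coordinate system (the relative
                attitude information)
    """
    aligned = []
    for pts, T in zip(clouds, rel_poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        aligned.append((T @ homo.T).T[:, :3])
    return np.vstack(aligned)
```

After this adjustment every group shares one coordinate system, so concatenating the arrays is the splicing itself; any further refinement (e.g. ICP) is outside what the text specifies.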
In a specific embodiment, the calibrating and then the splicing are performed, which specifically includes:
when a first point cloud data set acquired by the 3D camera for the calibration block under a plurality of postures is acquired:
(1) a camera (the 3D camera) is installed on the manipulator, so that the camera moves as the manipulator moves; because the camera is mounted on the manipulator, the camera's scanning area is no longer fixed, and spatial movement and area scanning of the camera are realized through the multi-axis movement of the manipulator;
(2) placing a calibration block in a scanning space;
(3) controlling the manipulator to move and deflect into several different postures so that the camera scans the calibration block under each of these poses and collects the corresponding calibration-block point cloud data;
(4) recording the scanning posture and position provided by the manipulator, and recording the calibration-block point cloud data collected by the camera at each posture and position;
(5) obtaining the difference relation between the different manipulator postures by means of a pose calculation tool, comparing it with the difference relation of the calibration-block point cloud data collected by the 3D camera under those postures, and calculating, with a calibration calculation tool, a group of camera calibration data corresponding to the manipulator poses;
and according to the point cloud data difference relation and the manipulator position information set, performing three-dimensional splicing on the second point cloud data set to obtain a target spliced image:
(6) as in step (1), the camera is installed on the manipulator and moves with it, so that the scanning area is not fixed and the camera's spatial movement and area scanning are realized through the multi-axis movement of the manipulator;
(7) according to the point cloud data actually required, moving the manipulator to each position from which that data can be collected; the number of acquisitions is not limited;
(8) collecting a point cloud image under the pose of the manipulator;
(9) acquiring the position of a manipulator during scanning;
(10) calculating the position of the point cloud image in the manipulator space through a calibration matrix of the camera corresponding to the manipulator pose;
(11) completing the spatial transformation of each image, thereby automatically achieving the three-dimensional splicing of point cloud images from different spaces.
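Steps (6) to (11) above can be condensed into one sketch, assuming the calibration phase (1) to (5) already yielded a camera-to-flange matrix `T_tool_cam` (a hypothetical name) and that each acquisition records the manipulator pose as a 4x4 matrix:

```python
import numpy as np

def stitch_acquisitions(acquisitions, T_tool_cam):
    """Steps (8)-(11), sketched: for every (cloud, manipulator pose)
    pair, place the camera-frame cloud into the shared manipulator
    space via the calibration matrix, then merge all clouds.

    acquisitions : list of (cloud, T_base_tool) pairs, where cloud is
                   an (N, 3) array and T_base_tool is the 4x4 pose
                   recorded at scan time (step (9))
    T_tool_cam   : 4x4 camera-to-flange calibration matrix from the
                   calibration phase
    """
    merged = []
    for cloud, T_base_tool in acquisitions:
        T_base_cam = T_base_tool @ T_tool_cam            # step (10)
        homo = np.hstack([cloud, np.ones((len(cloud), 1))])
        merged.append((T_base_cam @ homo.T).T[:, :3])    # step (11)
    return np.vstack(merged)
```

Because every cloud ends up in the same manipulator space, scans taken at arbitrary placement angles merge without any planar-translation constraint, which is the point of the embodiment.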
Referring to fig. 2, fig. 2 is a schematic structural diagram of an image stitching device based on robot calibration according to an embodiment of the present invention. In this embodiment, the image stitching apparatus based on robot calibration includes:
the first acquisition module 210 is configured to acquire a first point cloud data set acquired by the 3D camera for the calibration block in a plurality of postures, and obtain a calibration data set of a manipulator carrying the 3D camera through pose calculation; the first point cloud data set comprises corresponding first point cloud data under a plurality of postures;
the data analysis module 220 is configured to obtain a point cloud data difference relationship according to the first point cloud data set and the calibration data set;
a second obtaining module 230, configured to obtain a second point cloud data set acquired by the 3D camera for the target object in a plurality of postures, and obtain a corresponding manipulator position information set when acquiring the point cloud data of the target object; the second point cloud data set comprises corresponding second point cloud data under a plurality of postures;
and the data processing module 240 is configured to perform stereo splicing on the second point cloud data set according to the point cloud data difference relationship and the manipulator position information set to obtain a target spliced image.
In an embodiment, the second obtaining module 230 is further configured to: obtaining a calibration matrix of the pose of the manipulator according to the position information of the manipulator; obtaining a spatial position relation of the second point cloud data relative to the manipulator according to the calibration matrix; and obtaining a corresponding manipulator position information set when the point cloud data of the target object is acquired according to the spatial position relation.
In an embodiment, the first obtaining module 210 is further configured to: acquiring internal parameters of a 3D camera, a first image of a calibration block, a depth image corresponding to the calibration block and characteristic information of the calibration block; and performing three-dimensional reconstruction according to the first image, the depth image and the 3D camera internal parameters to obtain first point cloud data.
In an embodiment, the data processing module 240 is further configured to: determining relative attitude information of each group of second point cloud data and the appointed point cloud data in the second point cloud data set; and adjusting the second point cloud data according to the relative attitude information to ensure that the coordinate system of each group of adjusted second point cloud data is consistent with the coordinate system of the appointed point cloud data, so as to complete the splicing of the plurality of groups of second point cloud data and obtain a target spliced image.
For specific definition of the image stitching device based on robot calibration, reference may be made to the above definition of the image stitching method based on robot calibration, and details are not repeated here. The modules in the image splicing device based on robot calibration can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
Referring to fig. 3, an embodiment of the invention provides a computer terminal device, which includes one or more processors and a memory. The memory is coupled to the processor for storing one or more programs, which when executed by the one or more processors, cause the one or more processors to implement the method for image stitching based on robot calibration as in any of the embodiments described above.
The processor is used for controlling the overall operation of the computer terminal equipment so as to complete all or part of the steps of the image stitching method based on the robot calibration. The memory is used to store various types of data to support the operation at the computer terminal device, which data may include, for example, instructions for any application or method operating on the computer terminal device, as well as application-related data. The Memory may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk, or optical disk.
In an exemplary embodiment, the computer terminal device may be implemented by one or more Application Specific Integrated Circuits (ASIC), Digital Signal Processors (DSP), Digital Signal Processing Devices (DSPD), Programmable Logic Devices (PLD), Field Programmable Gate Arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, and is configured to perform the above image stitching method based on robot calibration and achieve technical effects consistent with the above method.
In another exemplary embodiment, a computer readable storage medium is also provided, which comprises a computer program, which when executed by a processor, performs the steps of the image stitching method based on robot calibration in any of the above embodiments. For example, the computer readable storage medium may be the above-mentioned memory including a computer program, and the above-mentioned computer program may be executed by a processor of a computer terminal device to implement the above-mentioned image stitching method based on robot calibration, and achieve the technical effects consistent with the above-mentioned method.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. An image stitching method based on robot calibration is characterized by comprising the following steps:
acquiring a first point cloud data set acquired by a 3D camera on a calibration block under a plurality of postures, and acquiring a calibration data set of a manipulator carrying the 3D camera through pose calculation; the first point cloud data set comprises corresponding first point cloud data under a plurality of postures;
obtaining a point cloud data difference relation according to the first point cloud data set and the calibration data set;
acquiring a second point cloud data set acquired by the 3D camera on the target object under a plurality of postures, and acquiring a corresponding manipulator position information set when the point cloud data of the target object is acquired; the second point cloud data set comprises corresponding second point cloud data under a plurality of postures;
and performing three-dimensional splicing on the second point cloud data set according to the point cloud data difference relation and the manipulator position information set to obtain a target spliced image.
2. The image stitching method based on robot calibration according to claim 1, wherein the acquiring a second point cloud data set acquired by the 3D camera for the target object under a plurality of poses and simultaneously acquiring a corresponding manipulator position information set when acquiring point cloud data of the target object comprises:
obtaining a calibration matrix of the pose of the manipulator according to the position information of the manipulator;
obtaining a spatial position relation of the second point cloud data relative to the manipulator according to the calibration matrix;
and obtaining a corresponding manipulator position information set when the point cloud data of the target object is acquired according to the spatial position relation.
3. The image stitching method based on robot calibration according to claim 1, wherein acquiring the first point cloud data set collected by the 3D camera on the calibration block under a plurality of postures, and obtaining, through pose calculation, the calibration data set of the manipulator carrying the 3D camera, comprises:
acquiring internal parameters of the 3D camera, a first image of the calibration block, a depth image corresponding to the calibration block, and characteristic information of the calibration block;
and performing three-dimensional reconstruction according to the first image, the depth image and the internal parameters of the 3D camera to obtain the first point cloud data.
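In the usual pinhole model, the three-dimensional reconstruction of claim 3 is a back-projection of each depth pixel through the camera intrinsics: X = (u − cx)·Z/fx, Y = (v − cy)·Z/fy. A minimal sketch under that assumption (the function name and the zero-depth filtering are illustrative choices, not taken from the patent):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an HxW depth image into an Nx3 point cloud
    using pinhole intrinsics: X=(u-cx)Z/fx, Y=(v-cy)Z/fy, Z=depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels
```

The color values of the first image can be carried along per pixel in the same pass to obtain a colored cloud.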
4. The image stitching method based on robot calibration according to claim 1, wherein performing three-dimensional stitching on the second point cloud data set according to the point cloud data difference relation and the manipulator position information set to obtain a target stitched image comprises:
determining relative attitude information between each group of second point cloud data in the second point cloud data set and designated point cloud data;
and adjusting each group of second point cloud data according to the relative attitude information, so that the coordinate system of each group of adjusted second point cloud data is consistent with the coordinate system of the designated point cloud data, thereby completing the stitching of the plurality of groups of second point cloud data and obtaining a target stitched image.
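The adjustment described in claim 4 amounts to re-expressing every group of second point cloud data in the coordinate system of the designated (reference) capture and concatenating the results. A hedged numpy sketch, assuming each capture comes with a 4x4 transform derived from the manipulator position information (an assumption about the data layout, not a claim of the patented procedure):

```python
import numpy as np

def stitch_point_clouds(clouds, poses, ref_idx=0):
    """Transform every cloud into the coordinate frame of the
    designated (reference) capture and concatenate them.
    clouds: list of Nx3 arrays, each in its own capture frame
    poses:  list of 4x4 capture-frame-to-base transforms"""
    T_ref_inv = np.linalg.inv(poses[ref_idx])
    merged = []
    for pts, T in zip(clouds, poses):
        T_rel = T_ref_inv @ T  # this capture's pose relative to the reference
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        merged.append((T_rel @ pts_h.T).T[:, :3])
    return np.vstack(merged)
```

Because every cloud ends up in the designated cloud's frame, no feature matching is needed; the manipulator poses alone carry the registration.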
5. An image stitching device based on robot calibration, characterized by comprising:
the first acquisition module is used for acquiring a first point cloud data set collected by the 3D camera on the calibration block under a plurality of postures, and for obtaining, through pose calculation, a calibration data set of the manipulator carrying the 3D camera; the first point cloud data set comprises first point cloud data corresponding to each of the plurality of postures;
the data analysis module is used for obtaining a point cloud data difference relation according to the first point cloud data set and the calibration data set;
the second acquisition module is used for acquiring a second point cloud data set acquired by the 3D camera on the target object under a plurality of postures and acquiring a corresponding manipulator position information set when the point cloud data of the target object is acquired; the second point cloud data set comprises corresponding second point cloud data under a plurality of postures;
and the data processing module is used for performing three-dimensional stitching on the second point cloud data set according to the point cloud data difference relation and the manipulator position information set to obtain a target stitched image.
6. The image stitching device based on robot calibration of claim 5, wherein the second acquiring module is further configured to:
obtaining a calibration matrix of the manipulator pose according to the manipulator position information;
obtaining a spatial position relation of the second point cloud data relative to the manipulator according to the calibration matrix;
and obtaining, according to the spatial position relation, the corresponding manipulator position information set at the time the point cloud data of the target object is collected.
7. The image stitching device based on robot calibration of claim 5, wherein the first obtaining module is further configured to:
acquiring internal parameters of the 3D camera, a first image of the calibration block, a depth image corresponding to the calibration block, and characteristic information of the calibration block;
and performing three-dimensional reconstruction according to the first image, the depth image and the internal parameters of the 3D camera to obtain the first point cloud data.
8. The image stitching device based on robot calibration of claim 5, wherein the data processing module is further configured to:
determining relative attitude information between each group of second point cloud data in the second point cloud data set and designated point cloud data;
and adjusting each group of second point cloud data according to the relative attitude information, so that the coordinate system of each group of adjusted second point cloud data is consistent with the coordinate system of the designated point cloud data, thereby completing the stitching of the plurality of groups of second point cloud data and obtaining a target stitched image.
9. A computer terminal device, comprising:
one or more processors;
a memory coupled to the one or more processors and configured to store one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image stitching method based on robot calibration according to any one of claims 1-4.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the image stitching method based on robot calibration according to any one of claims 1 to 4.
CN202111443204.9A 2021-11-30 2021-11-30 Image splicing method, device and equipment based on robot calibration and storage medium Active CN114092335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111443204.9A CN114092335B (en) 2021-11-30 2021-11-30 Image splicing method, device and equipment based on robot calibration and storage medium


Publications (2)

Publication Number Publication Date
CN114092335A true CN114092335A (en) 2022-02-25
CN114092335B CN114092335B (en) 2023-03-10

Family

ID=80305867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111443204.9A Active CN114092335B (en) 2021-11-30 2021-11-30 Image splicing method, device and equipment based on robot calibration and storage medium

Country Status (1)

Country Link
CN (1) CN114092335B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111590593A (en) * 2020-06-19 2020-08-28 浙江大华技术股份有限公司 Calibration method, device and system of mechanical arm and storage medium
CN111637850A (en) * 2020-05-29 2020-09-08 南京航空航天大学 Self-splicing surface point cloud measuring method without active visual marker
WO2021005135A1 (en) * 2019-07-09 2021-01-14 Pricer Ab Stitch images
CN112767479A (en) * 2021-01-13 2021-05-07 深圳瀚维智能医疗科技有限公司 Position information detection method, device and system and computer readable storage medium
WO2021185219A1 (en) * 2020-03-16 2021-09-23 左忠斌 3d collection and dimension measurement method used in space field
CN113532311A (en) * 2020-04-21 2021-10-22 广东博智林机器人有限公司 Point cloud splicing method, device, equipment and storage equipment


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DING Shaowen et al., "Design of a multi-view reconstruction measurement method with rigidly connected dual cameras", Journal of National University of Defense Technology *
JIAO Enzhang et al., "Research on a robot operating system based on visual servoing", Modular Machine Tool & Automatic Manufacturing Technique *
CHEN Dan et al., "A new hand-eye calibration method for a four-degree-of-freedom SCARA robot", Transducer and Microsystem Technologies *

Also Published As

Publication number Publication date
CN114092335B (en) 2023-03-10

Similar Documents

Publication Publication Date Title
CN110136208B (en) Joint automatic calibration method and device for robot vision servo system
DE102015101710B4 (en) A method of calibrating a moveable gripping member using a remote digital camera
US6816755B2 (en) Method and apparatus for single camera 3D vision guided robotics
KR20170017786A (en) System and method for tying together machine vision coordinate spaces in a guided assembly environment
CN110728715A (en) Camera angle self-adaptive adjusting method of intelligent inspection robot
Rakprayoon et al. Kinect-based obstacle detection for manipulator
US20120268567A1 (en) Three-dimensional measurement apparatus, processing method, and non-transitory computer-readable storage medium
JPWO2018143263A1 (en) Imaging control apparatus, imaging control method, and program
CN112171666B (en) Pose calibration method and device for visual robot, visual robot and medium
CN110722558B (en) Origin correction method and device for robot, controller and storage medium
CN111590593B (en) Calibration method, device and system of mechanical arm and storage medium
JPH1079029A (en) Stereoscopic information detecting method and device therefor
CN113280209A (en) System for detecting pipeline excess, use method of system and detection method
CN114092335B (en) Image splicing method, device and equipment based on robot calibration and storage medium
KR20170020629A (en) Apparatus for registration of cloud points
CN116469101A (en) Data labeling method, device, electronic equipment and storage medium
CN114516051B (en) Front intersection method and system for three or more degrees of freedom robot vision measurement
Fröhlig et al. Three-dimensional pose estimation of deformable linear object tips based on a low-cost, two-dimensional sensor setup and AI-based evaluation
CN115147495A (en) Calibration method, device and system for vehicle-mounted system
CN111625001B (en) Robot control method and device and industrial robot
CN110675454A (en) Object positioning method, device and storage medium
Chen et al. Camera calibration via stereo vision using Tsai's method
CN116418967B (en) Color restoration method and device for laser scanning of underwater dynamic environment
Chen et al. A new robotic hand/eye calibration method by active viewing of a checkerboard pattern
Clarke et al. High accuracy 3-D measurement using multiple camera views

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20221026
Address after: Room E406-2, No. 388 Ruoshui Road, Suzhou Industrial Park, Suzhou Area, China (Jiangsu) Pilot Free Trade Zone, Suzhou City, Jiangsu Province, 215000
Applicant after: Qunbin Intelligent Manufacturing Technology (Suzhou) Co.,Ltd.
Address before: 518000 room 314, 3 / F, 39 Queshan new village, Gaofeng community, Dalang street, Longhua District, Shenzhen City, Guangdong Province
Applicant before: SHENZHEN QB PRECISION INDUSTRIAL CO.,LTD.
GR01 Patent grant