CN116912302B - High-precision imaging method and system based on depth image registration network

High-precision imaging method and system based on depth image registration network

Info

Publication number
CN116912302B
Authority
CN
China
Prior art keywords
image
imaging
axis
module
target product
Prior art date
Legal status
Active
Application number
CN202311170392.1A
Other languages
Chinese (zh)
Other versions
CN116912302A
Inventor
方遒
蒋天健
朱青
毛建旭
吴成中
周振
黄嘉男
罗越凡
袁宇豪
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date
Filing date
Publication date
Application filed by Hunan University
Priority to CN202311170392.1A
Publication of CN116912302A
Application granted
Publication of CN116912302B


Classifications

    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Image registration using feature-based methods
    • G06T7/38 Registration of image sequences
    • G06T11/60 Editing figures and text; Combining figures or text
    • G06N3/045 Combinations of networks
    • G01B11/30 Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces
    • G01B11/306 Measuring arrangements characterised by the use of optical techniques for measuring evenness
    • G01N21/01 Arrangements or apparatus for facilitating the optical investigation
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N2021/0106 General arrangement of respective parts
    • G01N2021/0112 Apparatus in one mechanical, optical or electronic block

Abstract

The invention discloses a high-precision imaging method and system based on a depth image registration network. A high-precision imaging system is built, comprising a triaxial motion platform, an imaging module and a carrier; the carrier and the imaging module are fixedly arranged on the Y axis and the Z axis of the triaxial motion platform respectively, and a target product is fixed by the carrier. A shooting path of the imaging module is determined by adjusting the X, Y and Z axes, and the target product is photographed locally to obtain a plurality of local images. All adjacent images among the local images are processed by a depth image registration network to obtain a transformation matrix for each group of adjacent images. The local images are converted through the transformation matrices, the converted local images are sequentially filled into a pre-designed blank large image, and adjacent images in the blank large image are fused to obtain a high-precision complete image of the target product. The method saves the feature-registration time for two adjacent local images, and the system cost is low.

Description

High-precision imaging method and system based on depth image registration network
Technical Field
The invention relates to the technical field of industrial vision, in particular to a high-precision imaging method and system based on a depth image registration network.
Background
In today's competitive manufacturing environment, enterprises need to continuously improve product quality, reduce costs, and maintain production efficiency and reliability. The application of high precision imaging techniques provides important support for achieving these goals. The high-precision imaging technology can provide accurate images and data and help industrial enterprises monitor the manufacturing process in real time. By means of the high-resolution imaging equipment, key links on the production line can be accurately recorded and detected, so that products can be ensured to meet quality standards. For example, in the production and processing of electronic and precision mechanical parts, high precision imaging can provide detailed images of the surface quality of objects for evaluating features such as surface finish, flatness, imperfections, and texture, thereby ensuring quality and consistency of each part and assembly.
At present, high-precision imaging techniques typically require expensive equipment and complex image processing algorithms, which increases application cost. Most implementations use X-ray equipment for imaging; however, X-ray imaging uses ionizing radiation, and long or excessive exposure may pose potential health risks to the human body and the environment. Moreover, high-precision imaging requires higher image resolution and quality, which may require an increased radiation dose and therefore a higher radiation-exposure risk, so the radiation dose must be effectively controlled and monitored during imaging. The images produced by X-ray imaging are typically gray-scale images and may be difficult for a non-professional to interpret. In addition, some current high-precision imaging techniques may require a long time to acquire and process image data, which is unsuitable for application scenarios requiring real-time decision-making and feedback. For example, where problems need to be found and corrected in time on a production line, a fast image processing speed is important.
The invention aims to provide a high-precision imaging method and system based on a depth image registration network for realizing rapid high-precision imaging of a product to be detected. The depth image registration network rapidly registers the captured high-precision images, reducing the computation required by conventional registration methods on high-resolution images, increasing the high-precision imaging speed, and improving the robustness of the registration process.
Disclosure of Invention
Aiming at the problems of high cost, low imaging speed and low imaging quality of the imaging equipment, the invention provides a high-precision imaging method and a high-precision imaging system based on a depth image registration network, which are used for realizing rapid high-precision imaging of a product to be detected.
In one aspect, the invention provides a high-precision imaging method based on a depth image registration network, which comprises the following steps:
s1, building a high-precision imaging system, wherein the system comprises a triaxial moving platform, an imaging module and a carrier, the carrier is fixedly arranged on a Y-axis of the triaxial moving platform, the imaging module is fixedly arranged on a Z-axis of the triaxial moving platform, and a target product is fixed through the carrier;
S2, adjusting the X axis, the Y axis and the Z axis until the imaging module can clearly image the target product, determining the shooting path of the imaging module on the basis that the target product can be clearly imaged, and locally shooting the target product with the imaging module according to the shooting path, thereby obtaining a plurality of local image sequences with overlapping areas;
S3, presetting a depth image registration network and training to obtain a trained depth image registration network, and processing a plurality of local image sequences with overlapping areas by adopting the trained depth image registration network to obtain a plurality of transformation matrixes;
s4, converting a plurality of partial image sequences with overlapping areas through a plurality of transformation matrixes to obtain a plurality of converted partial images under the same coordinates, sequentially filling the plurality of converted partial images into a blank large image designed in advance, and fusing the overlapping areas of adjacent images in the blank large image by using a weighted fusion method to obtain a high-precision complete image of a target product.
Preferably, in S2, adjusting the X axis, the Y axis and the Z axis until the imaging module can clearly image the target product, and determining the shooting path of the imaging module on that basis, specifically includes:
s21, enabling a target product to appear in the field of view of the imaging module by adjusting an X axis and a Y axis in the triaxial motion platform;
s22, enabling the imaging module to clearly image the target product by adjusting a Z axis in the triaxial motion platform, and recording the height of the Z axis at the moment;
S23, with the Z-axis height determined, adjusting the X axis and the Y axis of the triaxial motion platform, determining the shooting start point and end point of the imaging module, and recording the start coordinates $(x_s, y_s)$ and end coordinates $(x_e, y_e)$;
S24, calibrating the imaging module by using a grid calibration method, and calculating the actual physical size corresponding to the photographed local image with the overlapping area;
S25, planning the shooting path of the imaging module from the start coordinates $(x_s, y_s)$, the end coordinates $(x_e, y_e)$ and the actual physical size.
Preferably, S25 specifically includes:
S251, calculating the two-dimensional lengths of the region to be shot from the start coordinates $(x_s, y_s)$ and end coordinates $(x_e, y_e)$;
s252, dividing the two-dimensional length through the actual physical size, and determining the number of points to be shot;
S253, calculating the points to be shot from the start coordinates $(x_s, y_s)$ to obtain the coordinates of the intermediate process points;
S254, the start coordinates $(x_s, y_s)$, the intermediate process point coordinates and the end coordinates $(x_e, y_e)$ constitute the shooting path of the imaging module.
Preferably, the coordinates of the intermediate process points in S253 are calculated as follows:

$$P_{i,j} = \left(x_s + j \cdot \mathrm{Length},\; y_s + i \cdot \mathrm{Length}\right)$$

wherein

$$\mathrm{Length} = \frac{\mathrm{Pixels}}{\mathrm{Units}} \times 5120$$

where $(x_s, y_s)$ denotes the start coordinates, $(x_e, y_e)$ denotes the end coordinates, $P_{i,j}$ denotes the coordinates of the intermediate process point in the $i$-th row and $j$-th column, $i \in \{1, \dots, N_y\}$, $j \in \{1, \dots, N_x\}$, $N_x$ and $N_y$ denote the numbers of intermediate process points on the $X$ and $Y$ axes, Length denotes the actual physical size corresponding to a captured local image, Units denotes the side length of one calibration grid square in pixels, and Pixels denotes the actual side length of one grid square.
Preferably, in S3, a depth image registration network is preset and trained; the depth image registration network comprises a feature extraction module, a correlation estimation Transformer module and a direct linear transformation module connected in sequence, wherein the feature extraction module extracts the correlation feature between the two images of an image pair, the correlation estimation Transformer module segments and linearly maps the correlation feature and estimates the offset between the two images of the image pair, and the direct linear transformation module converts the offset into the transformation matrix of the image pair.
Preferably, in S3, the trained depth image registration network is used to process a plurality of local image sequences with overlapping areas to obtain a plurality of transformation matrices, which specifically includes:
s31, preprocessing and grouping a plurality of local image sequences with overlapping areas to obtain a plurality of image pairs;
s32, sequentially selecting one image pair from a plurality of image pairs, and inputting the image pairs into a trained depth image registration network;
S33, the feature extraction module extracts features of the two images in the selected image pair respectively to obtain the vector feature and mask feature corresponding to each image, multiplies the vector feature and mask feature corresponding to each image to obtain the feature matrix of each image's overlapping area, and concatenates the feature matrices of the two images in the selected image pair by dimension to obtain the correlation feature $F$ of the selected image pair;
S34, the correlation estimation Transformer module performs segmentation processing and linear mapping on the correlation feature $F$ to obtain an initial feature sequence, processes the initial feature sequence, and outputs the offset vector of the selected image pair;
S35, the direct linear transformation module calculates the transformation matrix of the selected image pair from the offset vector;
s36, sequentially selecting another image pair from the plurality of image pairs until the plurality of image pairs are all selected, and obtaining a transformation matrix of the plurality of image pairs through processing in steps S32 to S35.
Preferably, the initial feature sequence in S34 can be formulated as:

$$Z_0 = \left[x_{\mathrm{class}};\; x_p^1 E;\; x_p^2 E;\; \dots;\; x_p^N E\right] + E_{\mathrm{pos}}$$

where $Z_0$ denotes the initial feature sequence, $E$ denotes the weights of the linear mapping in the image embedding operation, $E_{\mathrm{pos}}$ denotes a learnable one-dimensional position embedding vector, $x_{\mathrm{class}}$ denotes a learnable class embedding vector, and $x_p^i$ denotes the $i$-th feature map in the sequence, $i \in \{1, \dots, N\}$.
Preferably, S4 specifically includes:
s41, dividing a transformation matrix of a plurality of image pairs into a plurality of column transformation matrices and a plurality of row transformation matrices, and calculating transformation matrices of other images in a plurality of partial image sequences with overlapping areas relative to the first image according to the plurality of column transformation matrices and the plurality of row transformation matrices;
s42, transforming other images in the local image sequences with the overlapping areas onto a main coordinate system by taking the coordinate system of the first image in the local image sequences with the overlapping areas as the main coordinate system through transformation matrixes corresponding to the first image respectively to obtain a transformed local image sequence;
s43, generating a blank large image by taking the upper left corner of the first image in a plurality of partial image sequences with overlapping areas as a starting point and the lower right corner of the last image in the transformed partial image sequences as an ending point;
s44, sequentially filling the first image and the transformed partial image sequence in the partial image sequences with the overlapping areas into the blank large image;
S45, carrying out linear weighted fusion on the overlapping areas of the adjacent images in the blank large image to obtain a high-precision complete image of the target product.
Preferably, in S41, the transformation matrices of the other images in the local image sequences with overlapping areas relative to the first image are calculated from the column transformation matrices and row transformation matrices, which can be expressed as:

$$H_{i,j} = \left(\prod_{k=1}^{i-1} H^{c}_{k}\right)\left(\prod_{l=1}^{j-1} H^{r}_{i,l}\right), \qquad I'_{i,j} = H_{i,j}\, I_{i,j}$$

where $H_{i,j}$ denotes the transformation matrix of the image $I_{i,j}$ in the $i$-th row and $j$-th column relative to the first image $I_{1,1}$; $H^{c}_{k}$ denotes the column transformation matrix between the two adjacent first-column images in rows $k$ and $k+1$; $H^{r}_{i,l}$ denotes the row transformation matrix between the two adjacent images in columns $l$ and $l+1$ of the $i$-th row; $I_{i,j}$ denotes the image in the $i$-th row and $j$-th column before transformation, and $I'_{i,j}$ denotes the transformed image in the $i$-th row and $j$-th column.
In another aspect, the present invention provides a high-precision imaging system based on a depth image registration network, which performs imaging by the above high-precision imaging method based on the depth image registration network. The high-precision imaging system comprises a high-precision imaging device connected with a computer system, the computer system being provided with the depth image registration network. The high-precision imaging device comprises a base, a protective cover, a triaxial motion platform, an imaging module, a carrier and a control panel; the protective cover is arranged above the base, and the protective cover and the base enclose a semi-enclosed space; the triaxial motion platform is fixedly arranged on the base and located in the semi-enclosed space; the carrier is fixedly arranged on the Y axis of the triaxial motion platform; the imaging module is fixedly arranged on the Z axis of the triaxial motion platform; the control panel is arranged on the side of the base close to the carrier; and the target product to be shot is located on the carrier, wherein:
The control panel is used for controlling the triaxial movement platform to move along X, Y and Z axes;
the three-axis motion platform drives the imaging module arranged on the three-axis motion platform to make relative motion with the target product so as to acquire a plurality of shooting positions;
the imaging module shoots a target product at a plurality of shooting positions to obtain a plurality of local image sequences with overlapping areas;
the computer system acquires the plurality of local image sequences with overlapping areas, processes them through the depth image registration network arranged on it, and outputs the high-precision complete image of the target product.
With the high-precision imaging method and system based on a depth image registration network, a high-precision imaging system is first built; the imaging system comprises a triaxial motion platform, an imaging module and a carrier, wherein the carrier is fixedly arranged on the Y axis of the triaxial motion platform, the imaging module is fixedly arranged on the Z axis of the triaxial motion platform, and the target product is fixed by the carrier. Then, by adjusting the X, Y and Z axes of the triaxial motion platform, the imaging module is made to image the target product clearly; on this basis the shooting path of the imaging module is determined, and the imaging module locally shoots the target product along the shooting path, obtaining a plurality of local image sequences with overlapping areas. A depth image registration network is preset and trained, and the trained depth image registration network processes the local image sequences with overlapping areas to obtain a plurality of transformation matrices. Finally, the local image sequences with overlapping areas are converted through the transformation matrices to obtain converted local images under the same coordinates; the converted local images are sequentially filled into a pre-designed blank large image, and adjacent images in the blank large image are fused by the weighted fusion method to obtain the high-precision complete image of the target product. The imaging module in the method comprises a high-precision camera and a telecentric lens, which, together with the triaxial motion platform, greatly reduces equipment cost compared with existing imaging systems. In addition, the depth image registration network can accelerate imaging by reducing image resolution while the imaging quality is unchanged, and saves feature-registration time by estimating the transformation matrix of each image pair formed by adjacent images instead of relying on the traditional approach of first detecting feature points and then registering.
Drawings
FIG. 1 is a flow chart of a high-precision imaging method based on a depth image registration network in an embodiment of the invention;
FIG. 2 is a schematic diagram of the shooting path of an imaging module according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a network architecture of a depth image registration network in accordance with an embodiment of the present invention;
FIG. 4 is a comparison between the local images with overlapping areas of a target product and the fused high-precision complete image according to an embodiment of the present invention, where (a) shows the local images with overlapping areas of the target product captured by the imaging module, and (b) shows the fused high-precision complete image of the target product;
FIG. 5 is a schematic diagram of a system architecture of a high-precision imaging system according to an embodiment of the invention.
Reference numerals illustrate:
1. a base; 2. a protective cover; 3. a triaxial motion platform; 4. an imaging module; 5. a carrier; 6. a control panel;
41. a high-precision camera; 42. a telecentric lens; 43. a polychromatic light source.
Detailed Description
In order to make the technical scheme of the present invention better understood by those skilled in the art, the present invention will be further described in detail with reference to the accompanying drawings.
A high-precision imaging method based on a depth image registration network specifically comprises the following steps:
S1, building a high-precision imaging system, wherein the system comprises a triaxial moving platform, an imaging module and a carrier, the carrier is fixedly arranged on a Y-axis of the triaxial moving platform, the imaging module is fixedly arranged on a Z-axis of the triaxial moving platform, and a target product is fixed through the carrier;
S2, adjusting the X axis, the Y axis and the Z axis until the imaging module can clearly image the target product, determining the shooting path of the imaging module on the basis that the target product can be clearly imaged, and locally shooting the target product with the imaging module according to the shooting path, thereby obtaining a plurality of local image sequences with overlapping areas;
s3, presetting a depth image registration network and training to obtain a trained depth image registration network, and processing a plurality of local image sequences with overlapping areas by adopting the trained depth image registration network to obtain a plurality of transformation matrixes;
s4, converting a plurality of partial image sequences with overlapping areas through a plurality of transformation matrixes to obtain a plurality of converted partial images under the same coordinates, sequentially filling the plurality of converted partial images into a blank large image designed in advance, and fusing the overlapping areas of adjacent images in the blank large image by using a weighted fusion method to obtain a high-precision complete image of a target product.
Specifically, referring to fig. 1, fig. 1 is a flowchart of a high-precision imaging method based on a depth image registration network according to an embodiment of the present invention.
Firstly, a high-precision imaging system is built; the imaging system comprises a triaxial motion platform, an imaging module and a carrier, wherein the carrier is fixedly arranged on the Y axis of the triaxial motion platform, the imaging module is fixedly arranged on the Z axis of the triaxial motion platform, and the target product is fixed by the carrier. By adjusting the X, Y and Z axes, the imaging module is made to image the target product clearly; on this basis, the shooting path of the imaging module is determined, and the imaging module locally shoots the target product along the shooting path to obtain a plurality of local image sequences of the target product with overlapping areas;
the imaging system further comprises a computer, a depth image registration network is preset on the computer and trained to obtain a trained depth image registration network, a plurality of local image sequences with overlapping areas are preprocessed and grouped to obtain a plurality of image pairs, and each image pair is processed by adopting the trained depth image registration network to obtain a transformation matrix of each image pair; converting a plurality of partial images with overlapping areas shot by an imaging module through a corresponding transformation matrix to obtain a plurality of converted partial images with overlapping areas under the same coordinate, sequentially filling the converted partial images with overlapping areas into a blank large image designed in advance, and fusing adjacent images in the blank large image by using a weighted fusion method to obtain a high-precision complete image of a target product.
In one embodiment, in S2, adjusting the X axis, the Y axis and the Z axis until the imaging module can clearly image the target product, and determining the shooting path of the imaging module on that basis, specifically includes:
s21, enabling a target product to appear in the field of view of the imaging module by adjusting an X axis and a Y axis in the triaxial motion platform;
s22, enabling the imaging module to clearly image the target product by adjusting a Z axis in the triaxial motion platform, and recording the height of the Z axis at the moment;
S23, with the Z-axis height determined, adjusting the X axis and the Y axis of the triaxial motion platform, determining the shooting start point and end point of the imaging module, and recording the start coordinates $(x_s, y_s)$ and end coordinates $(x_e, y_e)$;
S24, calibrating the imaging module by using a grid calibration method, and calculating the actual physical size corresponding to the photographed local image with the overlapping area;
S25, planning the shooting path of the imaging module from the start coordinates $(x_s, y_s)$, the end coordinates $(x_e, y_e)$ and the actual physical size.
In one embodiment, S25 specifically includes:
S251, calculating the two-dimensional lengths of the region to be shot from the start coordinates $(x_s, y_s)$ and end coordinates $(x_e, y_e)$;
s252, dividing the two-dimensional length through the actual physical size, and determining the number of points to be shot;
S253, calculating the points to be shot from the start coordinates $(x_s, y_s)$ to obtain the coordinates of the intermediate process points;
S254, the start coordinates $(x_s, y_s)$, the intermediate process point coordinates and the end coordinates $(x_e, y_e)$ constitute the shooting path of the imaging module.
In one embodiment, the coordinates of the intermediate process points in S253 are calculated as follows:

$$P_{i,j} = \left(x_s + j \cdot \mathrm{Length},\; y_s + i \cdot \mathrm{Length}\right)$$

wherein

$$\mathrm{Length} = \frac{\mathrm{Pixels}}{\mathrm{Units}} \times 5120$$

where $(x_s, y_s)$ denotes the start coordinates, $(x_e, y_e)$ denotes the end coordinates, $P_{i,j}$ denotes the coordinates of the intermediate process point in the $i$-th row and $j$-th column, $i \in \{1, \dots, N_y\}$, $j \in \{1, \dots, N_x\}$, $N_x$ and $N_y$ denote the numbers of intermediate process points on the $X$ and $Y$ axes, Length denotes the actual physical size corresponding to a captured local image, Units denotes the side length of one calibration grid square in pixels, and Pixels denotes the actual side length of one grid square.
Specifically, referring to fig. 2, fig. 2 is a schematic view of a photographing path of an imaging module according to an embodiment of the present invention.
Planning a shooting path of an imaging module comprises the following steps:
1) Adjusting the X axis and the Y axis of the triaxial moving platform, and moving the target product into the field of view of the imaging module, so that an image of the target product exists in the field of view of the imaging module in the focusing process;
2) Adjusting the Z-axis height of the triaxial moving platform, enabling an imaging module fixed on the Z-axis to clearly image a target product, and recording the Z-axis height at the moment;
3) With the Z-axis height determined, the X axis and Y axis of the triaxial motion platform are adjusted again to determine the shooting start point and end point of the imaging module when the target product is locally shot, and the start coordinates $(x_s, y_s)$ and end coordinates $(x_e, y_e)$ are recorded.
4) The imaging module is calibrated by the grid calibration method. Specifically, the side length Units of one grid square (unit: pixels) and the actual side length Pixels of one grid square (unit: mm) are measured, and the actual physical size corresponding to a captured local image is obtained from them:

$$\mathrm{Length} = \frac{\mathrm{Pixels}}{\mathrm{Units}} \times 5120$$

where Length is the actual physical size corresponding to a local image, Units is the side length of one grid square in pixels, Pixels is the actual side length of one grid square, and 5120 is the resolution of the high-precision camera in the imaging module, which is also the resolution of each local image with an overlapping area.
5) The two-dimensional lengths (unit: mm) of the region to be shot are calculated from the start coordinates $(x_s, y_s)$ and end coordinates $(x_e, y_e)$ by the specific formula:

$$L_x = \left|x_e - x_s\right|, \qquad L_y = \left|y_e - y_s\right|$$
6) The numbers of points the imaging module needs to shoot on the X axis and the Y axis are calculated respectively:

$$N_x = \left\lfloor \frac{L_x}{\mathrm{Length}} \right\rfloor, \qquad N_y = \left\lfloor \frac{L_y}{\mathrm{Length}} \right\rfloor$$

where $\lfloor \cdot \rfloor$ denotes rounding down;
7) The coordinates of the intermediate process points are calculated from the start coordinates $(x_s, y_s)$ and the numbers of points to be shot:

$$P_{i,j} = \left(x_s + j \cdot \mathrm{Length},\; y_s + i \cdot \mathrm{Length}\right)$$

where $i$ denotes the $i$-th row, $i \in \{1, \dots, N_y\}$, and $j$ denotes the $j$-th column, $j \in \{1, \dots, N_x\}$, thus obtaining the $N_x \times N_y$ coordinates of the intermediate shooting points.
8) The moving route is planned from the start coordinates, the intermediate process point coordinates and the end coordinates, giving the shooting path of the imaging module shown in fig. 2. The imaging module sequentially acquires local-area images of the target product at each shooting point along the planned shooting path, yielding a sequence of $n$ high-precision local images of the target product with overlapping regions.
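As an illustration of the arithmetic in steps 4) to 7), the sketch below condenses the path planning into one function. It is a minimal sketch rather than the patented implementation: the function and argument names (plan_shooting_path, units_px, grid_mm) and the row-major visiting order are assumptions; only the Length formula, the rounded-down point counts and the grid construction follow the description above.

```python
import math

def plan_shooting_path(start, end, units_px, grid_mm, sensor_px=5120):
    """Plan a raster shooting path from grid-calibration data.

    start, end : (x, y) start/end coordinates of the region, in mm
    units_px   : side length of one calibration grid square, in pixels
    grid_mm    : actual side length of one grid square, in mm
    sensor_px  : camera resolution (5120 x 5120 in the embodiment)
    """
    # Actual physical size covered by one local image: Length = Pixels / Units * 5120
    length_mm = grid_mm / units_px * sensor_px

    # Two-dimensional extent of the region to be shot
    lx, ly = abs(end[0] - start[0]), abs(end[1] - start[1])

    # Numbers of intermediate points per axis, rounded down
    nx, ny = math.floor(lx / length_mm), math.floor(ly / length_mm)

    # Start point, intermediate process points, end point
    path = [start]
    for i in range(ny + 1):
        for j in range(nx + 1):
            if (i, j) != (0, 0):
                path.append((start[0] + j * length_mm, start[1] + i * length_mm))
    path.append(end)
    return path

# Example: a 1 mm grid square spanning 256 pixels gives Length = 20 mm,
# so a 60 mm x 40 mm region yields intermediate points spaced 20 mm apart.
print(plan_shooting_path((0.0, 0.0), (60.0, 40.0), units_px=256, grid_mm=1.0))
```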
In one embodiment, in S3, a depth image registration network is preset and trained; the depth image registration network comprises a feature extraction module, a correlation estimation Transformer module and a direct linear transformation module connected in sequence, wherein the feature extraction module extracts the correlation feature between the two images of an image pair, the correlation estimation Transformer module segments and linearly maps the correlation feature and estimates the offset between the two images of the image pair, and the direct linear transformation module converts the offset into the transformation matrix of the image pair.
In one embodiment, in S3, the trained depth image registration network is used to process a plurality of local image sequences with overlapping areas to obtain a plurality of transformation matrices, which specifically includes:
S31, preprocessing and grouping a plurality of local image sequences with overlapping areas to obtain a plurality of image pairs;
s32, sequentially selecting one image pair from a plurality of image pairs, and inputting the image pairs into a trained depth image registration network;
s33, respectively extracting the characteristics of the two images in the selected image pair by the characteristic extraction module to obtain vectors corresponding to each imageThe feature and mask feature are obtained by multiplying the vector feature and mask feature corresponding to each image to obtain a feature matrix of each image overlapping region, and the feature matrices of the two images in the selected image pair are spliced according to dimensions to obtain the related feature of the selected image pair
S34, correlation estimation transducer module pairs correlation featuresPerforming segmentation processing and linear mapping to obtain an initial feature sequence, processing the initial feature sequence, and outputting an offset vector of the selected image pair;
s35, the direct linear transformation module calculates a transformation matrix of the selected image pair according to the offset vector;
s36, sequentially selecting another image pair from the plurality of image pairs until the plurality of image pairs are all selected, and obtaining a transformation matrix of the plurality of image pairs through processing in steps S32 to S35.
In one embodiment, the initial feature sequence in S34 may be formulated as:

$$Z_0 = \left[x_{\mathrm{class}};\; x_p^1 E;\; x_p^2 E;\; \dots;\; x_p^N E\right] + E_{\mathrm{pos}}$$

where $Z_0$ denotes the initial feature sequence, $E$ denotes the weights of the linear mapping in the image embedding operation, $E_{\mathrm{pos}}$ denotes a learnable one-dimensional position embedding vector, $x_{\mathrm{class}}$ denotes a learnable class embedding vector, and $x_p^i$ denotes the $i$-th feature map in the sequence, $i \in \{1, \dots, N\}$.
Specifically, referring to fig. 3, fig. 3 is a network structure schematic diagram of the depth image registration network.
The depth image registration network in fig. 3 comprises a feature extraction module, a correlation estimation Transformer module and a direct linear transformation module connected in sequence; the feature extraction module extracts the correlation feature between the two images of an image pair, the correlation estimation Transformer module segments and linearly maps the correlation feature and estimates the offset between the two images of the image pair, and the direct linear transformation module converts the offset into the transformation matrix of the image pair.
Processing a plurality of local image sequences with overlapping areas by adopting a depth image registration network to obtain a plurality of transformation matrixes, wherein the specific process is as follows:
1) The $n$ captured local images with overlapping regions are preprocessed. Specifically, the resolution of each local image with an overlapping region is first adjusted to a preset size, for example 512×512, to reduce the computation of the depth image registration network; each resized local image is then normalized, i.e., using the empirical means [118.93, 113.97, 102.60] and empirical variances [69.85, 68.81, 72.45], the pixel values of the local image are normalized from the range 0-255 to the range 0-1, thereby obtaining $n$ preprocessed local images.

The $n$ preprocessed local images are grouped by row: every two adjacent preprocessed images in the same row are divided into one group, forming image pairs; the first preprocessed image of each row is likewise paired with the first image of the adjacent row, forming further image pairs. Through the above operations, the sequence of $n$ preprocessed local images is converted into $m$ image pairs.
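The preprocessing and pairing rule just described can be sketched as follows. This is an illustrative sketch: the empirical means and variances are the values quoted above, while the helper names and the assumption that the images arrive as a row-by-row grid of numpy arrays are mine.

```python
import numpy as np

# Empirical per-channel statistics quoted in the embodiment
MEAN = np.array([118.93, 113.97, 102.60])
STD = np.array([69.85, 68.81, 72.45])

def preprocess(img):
    """Resize one local image to 512x512 and normalize it channel-wise."""
    import cv2  # local import so the pairing demo below runs without OpenCV
    img = cv2.resize(img, (512, 512))
    return (img.astype(np.float32) - MEAN) / STD

def group_into_pairs(grid):
    """grid[i][j] is the preprocessed image in row i, column j.

    Pairs every two horizontally adjacent images within a row, plus the
    vertically adjacent first-column images, per the grouping rule above.
    """
    pairs = []
    for i, row in enumerate(grid):
        for j in range(len(row) - 1):      # adjacent pairs within a row
            pairs.append((row[j], row[j + 1]))
        if i + 1 < len(grid):              # first-column pairs across rows
            pairs.append((grid[i][0], grid[i + 1][0]))
    return pairs

grid = [[np.zeros((512, 512, 3)) for _ in range(3)] for _ in range(2)]
print(len(group_into_pairs(grid)))  # 2 rows x 2 row-pairs + 1 column-pair = 5
```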
2) One image pair at a time is selected from the $m$ image pairs and input to the feature extraction module. The feature extraction module comprises a feature extraction submodule and a mask generation submodule arranged in parallel. The feature extraction submodule is a 4-layer convolution module, each layer consisting of a convolution layer, a batch normalization layer and an activation function layer, and outputs vector features $F_A$ and $F_B$, both of size [1, 512, 512]. The mask generation submodule is a 5-layer convolution module, likewise composed of convolution layers, batch normalization layers and activation function layers, and generates vector masks $M_A$ and $M_B$ of size [1, 512, 512] whose values are 0 or 1, where a 0 in the matrix indicates that the feature at that position is invalid and a 1 indicates that the feature at that position is valid; the vector masks are used to mask out the features of non-overlapping regions. The feature matrix of each image in the input image pair is multiplied by its mask matrix to obtain the feature matrix of each image: $G_A = F_A \cdot M_A$ and $G_B = F_B \cdot M_B$. Finally, the feature matrices $G_A$ and $G_B$ of the two images in the input image pair are concatenated by dimension to form the correlation feature $F$ of the input image pair as a [2, 512, 512] matrix.
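A structural sketch of this feature extraction module in PyTorch follows. The parallel 4-layer feature branch and 5-layer mask branch, the 0/1 mask, the element-wise multiplication and the concatenation into a [2, 512, 512] correlation feature follow the description; the channel widths, the single-channel input and the sigmoid-plus-threshold producing the binary mask are assumptions (training would need a soft or straight-through mask to keep gradients flowing).

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # one layer: convolution + batch normalization + activation function
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class FeatureExtraction(nn.Module):
    def __init__(self):
        super().__init__()
        # feature extraction submodule: 4 convolution layers
        self.feat = nn.Sequential(conv_block(1, 16), conv_block(16, 32),
                                  conv_block(32, 16), conv_block(16, 1))
        # mask generation submodule: 5 convolution layers
        self.mask = nn.Sequential(conv_block(1, 16), conv_block(16, 32),
                                  conv_block(32, 16), conv_block(16, 8),
                                  nn.Conv2d(8, 1, 3, padding=1))

    def forward(self, img_a, img_b):
        fa, fb = self.feat(img_a), self.feat(img_b)
        # 0/1 masks that blank out the features of non-overlapping regions
        ma = (torch.sigmoid(self.mask(img_a)) > 0.5).float()
        mb = (torch.sigmoid(self.mask(img_b)) > 0.5).float()
        # concatenate the masked feature matrices into the correlation feature
        return torch.cat([fa * ma, fb * mb], dim=1)

x = torch.randn(1, 1, 512, 512)
print(FeatureExtraction()(x, x).shape)  # torch.Size([1, 2, 512, 512])
```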
3) The correlation feature $F$, a [2, 512, 512] matrix, is input to the correlation estimation Transformer module, which comprises a feature segmentation submodule and a Transformer encoder submodule. The feature segmentation submodule receives the correlation feature $F$ and divides it into a feature map sequence of a predetermined number and size, for example $N = 8$ feature maps $x_p^i$ of size [2, 64, 64]. The feature map sequence is first linearly mapped and then superimposed with position information to form the initial feature sequence, which can be expressed as:

$$Z_0 = \left[x_{\mathrm{class}};\; x_p^1 E;\; x_p^2 E;\; \dots;\; x_p^N E\right] + E_{\mathrm{pos}}$$

where $Z_0$ denotes the initial feature sequence, $E$ denotes the weights of the linear mapping in the image embedding operation, $E_{\mathrm{pos}}$ denotes a learnable one-dimensional position embedding vector, $x_{\mathrm{class}}$ denotes a learnable class embedding vector, and $x_p^i$ denotes the $i$-th feature map in the sequence, $i \in \{1, \dots, N\}$.
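This embedding is the familiar patch-embedding step and can be sketched in PyTorch as below. The patch count N = 8 and the [2, 64, 64] patch size follow the example; the embedding dimension of 256 and flattening each patch before the linear map E are assumptions.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Builds Z0 = [x_class; x_p^1 E; ...; x_p^N E] + E_pos."""
    def __init__(self, n_patches=8, patch_dim=2 * 64 * 64, dim=256):
        super().__init__()
        self.proj = nn.Linear(patch_dim, dim)                        # weights E
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))              # x_class
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))  # E_pos

    def forward(self, patches):  # patches: [batch, N, patch_dim]
        tokens = self.proj(patches)
        cls = self.cls.expand(tokens.size(0), -1, -1)
        return torch.cat([cls, tokens], dim=1) + self.pos

corr = torch.randn(1, 8, 2 * 64 * 64)  # 8 flattened [2, 64, 64] feature maps
print(PatchEmbedding()(corr).shape)    # torch.Size([1, 9, 256])
```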
4) The initial feature sequence $Z_0$ is input to the Transformer encoder submodule, which consists of $L$ serially connected Transformer modules; each Transformer module comprises a first layer normalization layer, a multi-head attention module (MSA), a second layer normalization layer and a feed-forward network module connected in sequence. The multi-head attention module enhances the attention mechanism by adding multiple different subspaces so that each subspace focuses on a different subset of the information; each multi-head attention module (MSA) comprises several self-attention modules (SA), which aggregate the initial feature sequence to extract correlation features.
Specifically, each element of the feature sequence $Z_{l-1}$ input to the $l$-th Transformer module is multiplied by three different learnable parameter weight matrices $W_l^Q$, $W_l^K$ and $W_l^V$ to obtain the query matrix $Q_l$, key matrix $K_l$ and value matrix $V_l$ of the current module:

$$Q_l = Z_{l-1} W_l^Q, \qquad K_l = Z_{l-1} W_l^K, \qquad V_l = Z_{l-1} W_l^V$$

where $Q_l$ is the query matrix of the $l$-th Transformer module, $K_l$ is the key matrix of the $l$-th Transformer module, $V_l$ is the value matrix of the $l$-th Transformer module, $Z_{l-1}$ is the feature sequence output by the $(l-1)$-th Transformer module, and $W_l^Q$, $W_l^K$, $W_l^V$ are the weight matrices of the three different learnable parameters of the $l$-th Transformer module.
Then, through the dot product between the query matrix $Q_l$ and the key matrix $K_l$ of the $l$-th Transformer module, the attention weight $A_l$ of the $l$-th Transformer module is calculated:

$$A_l = \mathrm{softmax}\!\left(\frac{Q_l K_l^{\top}}{\sqrt{d_k}}\right)$$

where $A_l$ is the attention weight and $d_k$ is the dimension of the key vectors (the softmax and $\sqrt{d_k}$ scaling follow the standard self-attention formulation).

The attention weight $A_l$ and the value matrix $V_l$ are then combined by dot product to obtain the value of the self-attention module:

$$\mathrm{SA}(Z_{l-1}) = A_l V_l$$
The multi-head attention module (MSA) repeats the above self-attention module (SA) 4 times, concatenates the context vectors output by each head, and projects them back into an 8-dimensional context vector by a linear transformation, which can be expressed as:

$$\mathrm{MSA}(Z_{l-1}) = \left[\mathrm{SA}_1;\; \mathrm{SA}_2;\; \mathrm{SA}_3;\; \mathrm{SA}_4\right] W_l^O$$

where $W_l^O$ denotes the weight matrix of the linear transformation in the multi-head attention module and $\mathrm{SA}_i$ denotes the output matrix of the $i$-th self-attention module of the $l$-th Transformer module.
The Transformer encoder submodule concatenates $L$ multi-head attention modules (MSA) using a residual structure and inputs the calculated vectors to a feed-forward network to estimate the offset vector of 8 offsets of the 4 vertices of the two images of the input image pair, which can be expressed as:

$$Z_l = \mathrm{MSA}\!\left(\mathrm{LN}(Z_{l-1})\right) + Z_{l-1}, \qquad \Delta = \mathrm{MLP}\!\left(\mathrm{LN}(Z_L)\right)$$

where $\mathrm{LN}(\cdot)$ denotes the layer normalization operation, $\mathrm{MLP}(\cdot)$ denotes using a multi-layer perceptron to output the result, $\mathrm{MSA}(\cdot)$ denotes passing through a multi-head attention module, $Z_l$ denotes the output of the $l$-th Transformer module, $L$ denotes the number of serially connected multi-head attention modules, $Z_L$ denotes the output of the last one, and $\Delta$ denotes the predicted offset vector.
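The encoder in step 4) maps onto standard PyTorch modules as in the sketch below. The layer-normalization / 4-head MSA / residual / feed-forward ordering and the 8-offset output follow the description; the model width of 256, the depth L = 6, the GELU feed-forward and regressing the offsets from the class token are assumptions.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One Transformer module: LN -> MSA -> residual, LN -> FFN -> residual."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.msa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, z):
        h = self.norm1(z)
        z = z + self.msa(h, h, h, need_weights=False)[0]  # MSA + residual
        return z + self.ffn(self.norm2(z))

class OffsetHead(nn.Module):
    """L stacked Transformer modules plus an MLP that regresses the
    8 offsets (4 vertices x 2 coordinates) of the input image pair."""
    def __init__(self, dim=256, depth=6):
        super().__init__()
        self.blocks = nn.Sequential(*[EncoderBlock(dim) for _ in range(depth)])
        self.head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 8))

    def forward(self, z0):
        z = self.blocks(z0)
        return self.head(z[:, 0])  # regress from the class token

print(OffsetHead()(torch.randn(1, 9, 256)).shape)  # torch.Size([1, 8])
```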
5) The offset vector $\Delta$ is processed by the direct linear transformation (DLT) module to calculate the transformation matrix $H$ of the input image pair.
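Because four corner correspondences fully determine a homography, the direct linear transformation step can be illustrated with OpenCV's four-point solver; this is a sketch in which the corner ordering and the 512x512 working resolution are assumptions.

```python
import numpy as np
import cv2

def dlt_homography(offsets, size=512):
    """Turn the predicted 8-vector of corner offsets into a 3x3 homography.

    offsets: (8,) array holding (dx, dy) for the 4 corners of the image.
    """
    src = np.float32([[0, 0], [size - 1, 0],
                      [size - 1, size - 1], [0, size - 1]])
    dst = src + np.float32(offsets).reshape(4, 2)
    # with exactly 4 correspondences this is the classic DLT solution
    return cv2.getPerspectiveTransform(src, dst)

H = dlt_homography(np.array([3, -2, 1, 0, -1, 2, 0, 1], dtype=np.float32))
print(H.shape)  # (3, 3)
```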
In one embodiment, S4 specifically includes:
s41, dividing a transformation matrix of a plurality of image pairs into a plurality of column transformation matrices and a plurality of row transformation matrices, and calculating transformation matrices of other images in a plurality of partial image sequences with overlapping areas relative to the first image according to the plurality of column transformation matrices and the plurality of row transformation matrices;
s42, transforming other images in the local image sequences with the overlapping areas onto a main coordinate system by taking the coordinate system of the first image in the local image sequences with the overlapping areas as the main coordinate system through transformation matrixes corresponding to the first image respectively to obtain a transformed local image sequence;
S43, generating a blank large image by taking the upper left corner of the first image in a plurality of partial image sequences with overlapping areas as a starting point and the lower right corner of the last image in the transformed partial image sequences as an ending point;
s44, sequentially filling the first image and the transformed partial image sequence in the partial image sequences with the overlapping areas into the blank large image;
s45, carrying out linear weighted fusion on the overlapping areas of the adjacent images in the blank large image to obtain a high-precision complete image of the target product.
In one embodiment, in S41, the transformation matrices of the other images in the local image sequences with overlapping areas relative to the first image are calculated from the column transformation matrices and row transformation matrices, which can be expressed as:

$$H_{i,j} = \left(\prod_{k=1}^{i-1} H^{c}_{k}\right)\left(\prod_{l=1}^{j-1} H^{r}_{i,l}\right), \qquad I'_{i,j} = H_{i,j}\, I_{i,j}$$

where $H_{i,j}$ denotes the transformation matrix of the image $I_{i,j}$ in the $i$-th row and $j$-th column relative to the first image $I_{1,1}$; $H^{c}_{k}$ denotes the column transformation matrix between the two adjacent first-column images in rows $k$ and $k+1$; $H^{r}_{i,l}$ denotes the row transformation matrix between the two adjacent images in columns $l$ and $l+1$ of the $i$-th row; $I_{i,j}$ denotes the image in the $i$-th row and $j$-th column before transformation, and $I'_{i,j}$ denotes the transformed image in the $i$-th row and $j$-th column.
Specifically, the method for calculating the high-precision complete image of the target product according to the transformation matrix of the plurality of image pairs comprises the following steps:
1) The $m$ transformation matrices of the image pairs are divided into row transformation matrices and column transformation matrices, wherein:

the row transformation matrices are $H^{r}_{i,l}$, the transformation matrix between the two adjacent images in columns $l$ and $l+1$ of the $i$-th row;

the column transformation matrices are $H^{c}_{k}$, the transformation matrix between the two adjacent first-column images in rows $k$ and $k+1$.

The "row transformation matrix" above means the transformation matrix between the images of two adjacent columns within a given row; for example, $H^{r}_{1,1}$ denotes the transformation matrix between the first-column and second-column images of the first row. The "column transformation matrix" means the transformation matrix between the images of two adjacent rows of the first column; for example, $H^{c}_{1}$ denotes the transformation matrix between the first-row and second-row images of the first column.
2) The transformation matrices between the other images and the first image among the local images with overlapping areas are calculated from the row transformation matrices and column transformation matrices, with the specific formula:

$$H_{i,j} = \left(\prod_{k=1}^{i-1} H^{c}_{k}\right)\left(\prod_{l=1}^{j-1} H^{r}_{i,l}\right), \qquad I'_{i,j} = H_{i,j}\, I_{i,j}$$

where $H_{i,j}$ denotes the transformation matrix of the image $I_{i,j}$ in the $i$-th row and $j$-th column relative to the first image $I_{1,1}$, and $I'_{i,j}$ is the image of the $i$-th row and $j$-th column after being transformed into alignment.
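Chaining the pairwise matrices into transformations relative to the first image is plain matrix multiplication, as sketched below with 0-based indices. Whether each pairwise matrix maps the later image onto the earlier one or vice versa depends on the ordering of the image pairs fed to the network, so the composition direction here is an assumption.

```python
import numpy as np

def matrix_to_first(i, j, H_col, H_row):
    """Transformation of the image in row i, column j relative to image (0, 0).

    H_col[k]    : homography between first-column images of rows k and k+1
    H_row[i][l] : homography between the images of columns l and l+1 in row i
    """
    H = np.eye(3)
    for k in range(i):      # walk down the first column to row i
        H = H @ H_col[k]
    for l in range(j):      # walk along row i to column j
        H = H @ H_row[i][l]
    return H

# Identity pairwise transforms compose to the identity, as a sanity check
H_col = [np.eye(3)] * 2
H_row = [[np.eye(3)] * 2 for _ in range(3)]
print(matrix_to_first(2, 2, H_col, H_row))
```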
And transforming other images (images except the first image) in the partial images with the overlapping areas onto the main coordinate system through the transformation matrix of the images and the first image by taking the coordinate system of the first partial image in the partial image sequence with the overlapping areas shot by the imaging module as the main coordinate system, so as to obtain a transformed partial image sequence. After the transformation, the local image sequence with the overlapping area shot by the imaging module takes the first image as a reference and is aligned in the coordinate system of the first image.
3) A blank large image is generated with the upper-left corner of the first image $I_{1,1}$ of the local images with overlapping areas as the starting point and the lower-right corner of the last image $I'_{R,C}$ in the transformed local image sequence as the end point. The first image $I_{1,1}$ and the transformed local image sequence $I'_{i,j}$ are sequentially filled into the blank large image, and weighted fusion is applied to the overlapping areas of adjacent images on the blank large image to eliminate inaccurate registration. The weighted fusion can be formulated as follows:

$$I(x, y) = w_1\, I_1(x, y) + w_2\, I_2(x, y), \qquad w_1 + w_2 = 1$$

where $I(x, y)$ denotes the region filled in sequence by two adjacent images in the blank large image, $I_1(x, y)$ and $I_2(x, y)$ denote the pixel values at $(x, y)$ of the first and second of the two adjacent images in the main coordinate system, and $w_1$ and $w_2$ denote the fusion weights of the two adjacent images at $(x, y)$ in the fusion region. After linear weighted fusion of the overlapping areas of all adjacent images, the high-precision complete image of the target product is finally obtained. Referring to FIG. 4, which compares (a) the local images with overlapping areas of the target product captured by the imaging module and (b) the fused high-precision complete image of the target product: the 35 captured local images with overlapping areas of the target product were processed by the above method to obtain the fused high-precision complete image of the target product.
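The linear weighted fusion of one overlap region can be sketched as follows; the description only requires weights with $w_1 + w_2 = 1$, so the linear ramp across the overlap used here is an assumption (a common choice that feathers the seam).

```python
import numpy as np

def blend_overlap(img1, img2, axis=1):
    """Linearly weighted fusion of two aligned, equal-shape overlap regions.

    The weight of the first image ramps from 1 to 0 across the overlap and
    the second image gets 1 minus that weight, so w1 + w2 = 1 everywhere.
    """
    n = img1.shape[axis]
    w1 = np.linspace(1.0, 0.0, n)
    shape = [1] * img1.ndim
    shape[axis] = n
    w1 = w1.reshape(shape)
    return w1 * img1 + (1.0 - w1) * img2

a = np.full((4, 10), 100.0)
b = np.full((4, 10), 200.0)
print(blend_overlap(a, b)[0])  # ramps smoothly from 100 to 200
```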
A high-precision imaging system based on a depth image registration network performs imaging by the high-precision imaging method based on the depth image registration network described above. The high-precision imaging system comprises a base, a protective cover, a triaxial motion platform, an imaging module, a carrier and a control panel. The protective cover is arranged above the base, and the protective cover and the base enclose a semi-enclosed space; the triaxial motion platform is fixedly arranged on the base and located in the semi-enclosed space; the carrier is fixedly arranged on the Y axis of the triaxial motion platform; the imaging module is fixedly arranged on the Z axis of the triaxial motion platform; the control panel is arranged on the side of the base close to the carrier; and the target product to be shot is located on the carrier, wherein:
the control panel is used for controlling the triaxial movement platform to move along X, Y and Z axes;
the three-axis motion platform drives the imaging module arranged on the three-axis motion platform to make relative motion with the target product so as to acquire a plurality of shooting positions;
the imaging module shoots the target product at a plurality of shooting positions to obtain a plurality of partial images with overlapping areas.
The computer system acquires the plurality of local images with overlapping areas, processes them through the depth image registration network arranged on it, and outputs the high-precision complete image of the target product.
Specifically, referring to fig. 5, fig. 5 is a schematic system configuration diagram of a high-precision imaging system according to an embodiment of the invention.
The high-precision imaging system comprises a high-precision imaging device and a computer system (not shown in fig. 5), wherein the high-precision imaging device is connected with the computer system, and a depth image registration network is arranged in the computer system.
The high-precision imaging device comprises a base 1, a protective cover 2, a triaxial moving platform 3, an imaging module 4, a carrier 5 and a control panel 6, wherein the protective cover 2 is arranged above the base 1, the protective cover 2 and the base 1 are encircled to form a semi-enclosed space, the triaxial moving platform 3 is fixedly arranged on the base 1 and located in the semi-enclosed space, the carrier 5 is fixedly arranged on a Y-axis of the triaxial moving platform, the imaging module 4 is fixedly arranged on a Z-axis of the triaxial moving platform, the control panel 6 is arranged on the side face of the base and close to one side of the carrier 5, and a target product to be photographed is located on the carrier 5.
The control panel 6 controls the triaxial motion platform 3, which drives the imaging module 4 arranged on it to move relative to the target product so as to acquire a plurality of shooting positions; the imaging module 4 shoots the target product at the plurality of shooting positions to obtain a plurality of local images with overlapping areas; the computer system acquires the local images with overlapping areas, processes them through the depth image registration network arranged on it, and outputs the high-precision complete image of the target product.
Further, the imaging module 4 comprises a high-precision camera 41, a telecentric lens 42 and a polychromatic light source 43: the high-precision camera 41 is a 25-megapixel Gigabit Ethernet area-array RGB camera with a resolution of 5120x5120 that outputs RGB three-channel images at a maximum frame rate of 41.5 FPS; the telecentric lens 42 is a double telecentric lens with an optical distortion of 0.023%, a magnification of 0.634, a depth of field of 12.5 mm, a resolution of 6.8 μm and an object field of 36.3 mm, allowing a local area of the product to be observed clearly and without distortion; the polychromatic light source 43 is an AOI light source with RGB three-color channels that illuminates the product uniformly to reduce the influence of uneven brightness.
Further, high-precision electromagnetic motion modules are used for the X axis and the Y axis of the triaxial motion platform 3, guaranteeing accuracy when the target product is moved for shooting, and a lead-screw motion module is used for the Z axis of the triaxial motion platform 3, reducing shake of the camera module.
Further, the base 1 is an integrally formed aluminum alloy base and is used for fixing the triaxial moving platform 3 so as to reduce shake of each axis of the triaxial moving platform 3 in the moving process and improve imaging precision.
Further, the protective cover 2 is an iron plate formed by sheet-metal bending, and the space enclosed by the protective cover 2 and the base 1 houses the imaging device and the triaxial motion platform arranged inside the protective cover 2.
With the high-precision imaging method and system based on a depth image registration network, a high-precision imaging system is first built; the imaging system comprises a triaxial motion platform, an imaging module and a carrier, wherein the carrier is fixedly arranged on the Y axis of the triaxial motion platform, the imaging module is fixedly arranged on the Z axis of the triaxial motion platform, and the target product is fixed by the carrier. Then, by adjusting the X, Y and Z axes of the triaxial motion platform, the imaging module is made to image the target product clearly; on this basis the shooting path of the imaging module is determined, and the imaging module locally shoots the target product along the shooting path, obtaining a plurality of local image sequences with overlapping areas. A depth image registration network is preset and trained, and the trained depth image registration network processes the local image sequences with overlapping areas to obtain a plurality of transformation matrices. Finally, the local image sequences with overlapping areas are converted through the transformation matrices to obtain converted local images under the same coordinates; the converted local images are sequentially filled into a pre-designed blank large image, and adjacent images in the blank large image are fused by the weighted fusion method to obtain the high-precision complete image of the target product. The imaging module in the method comprises a high-precision camera and a telecentric lens, which, together with the triaxial motion platform, greatly reduces equipment cost compared with existing imaging systems. In addition, the depth image registration network can accelerate imaging by reducing image resolution while the imaging quality is unchanged, and saves feature-registration time by estimating the transformation matrix of each image pair formed by adjacent images instead of relying on the traditional approach of first detecting feature points and then registering.
The high-precision imaging method and system based on a depth image registration network provided by the invention have been described in detail above. The principles and embodiments of the invention have been explained with specific examples, and these descriptions are intended only to aid understanding of the core concepts of the invention. It should be noted that those skilled in the art may make various modifications and adaptations to the invention without departing from its principles, and such modifications and adaptations are intended to fall within the scope of the invention as defined by the following claims.

Claims (8)

1. A depth image registration network-based high-precision imaging method, the method comprising:
S1, building a high-precision imaging system, wherein the system comprises a triaxial motion platform, an imaging module and a carrier, the carrier is fixedly arranged on the Y axis of the triaxial motion platform, the imaging module is fixedly arranged on the Z axis of the triaxial motion platform, and a target product is fixed by the carrier;
S2, adjusting the X axis, the Y axis and the Z axis until the imaging module clearly images the target product, determining a shooting path of the imaging module on the basis of the clear imaging, and photographing the target product locally with the imaging module along the shooting path to obtain a plurality of local image sequences with overlapping areas;
S3, presetting a depth image registration network and training it to obtain a trained depth image registration network, and processing the plurality of local image sequences with overlapping areas with the trained depth image registration network to obtain a plurality of transformation matrices;
S4, transforming the plurality of local image sequences with overlapping areas through the plurality of transformation matrices to obtain a plurality of transformed local images under the same coordinate system, sequentially filling the transformed local images into a pre-designed blank large image, and fusing the overlapping areas of adjacent images in the blank large image by a weighted fusion method to obtain a high-precision complete image of the target product;
wherein in S2, adjusting the X axis, the Y axis and the Z axis until the imaging module clearly images the target product and determining the shooting path of the imaging module on the basis of the clear imaging specifically comprises:
S21, adjusting the X axis and the Y axis of the triaxial motion platform so that the target product appears in the field of view of the imaging module;
S22, adjusting the Z axis of the triaxial motion platform until the imaging module clearly images the target product, and recording the Z-axis height at that moment;
S23, with the Z-axis height fixed, adjusting the X axis and the Y axis of the triaxial motion platform to determine the shooting start point and end point of the imaging module, and recording the start-point coordinates $(x_s, y_s)$ and the end-point coordinates $(x_e, y_e)$;
S24, calibrating the imaging module by a grid calibration method, and calculating the actual physical size corresponding to a photographed local image with an overlapping area;
S25, planning the shooting path of the imaging module from the start-point coordinates $(x_s, y_s)$, the end-point coordinates $(x_e, y_e)$ and the actual physical size;
the step S25 specifically includes:
S251, calculating the two-dimensional length of the region to be photographed from the start-point coordinates $(x_s, y_s)$ and the end-point coordinates $(x_e, y_e)$;
S252, dividing the two-dimensional length according to the actual physical size to determine the number of points to be photographed;
S253, calculating the points to be photographed from the start-point coordinates $(x_s, y_s)$ to obtain the coordinates of the intermediate process points;
S254, the start-point coordinates $(x_s, y_s)$, the intermediate process point coordinates $(x_{ij}, y_{ij})$ and the end-point coordinates $(x_e, y_e)$ together constitute the shooting path of the imaging module.
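Illustrative only: a minimal Python sketch of the raster path planning in S251–S254, assuming a row-by-row scan. The explicit overlap ratio is an added assumption (the formula in claim 2 steps by exactly one frame size); all names are hypothetical:

```python
import math
import numpy as np

def plan_shooting_path(start, end, length_mm, overlap=0.2):
    """Waypoints from start to end so that adjacent frames overlap."""
    step = length_mm * (1.0 - overlap)               # step < Length -> guaranteed overlap
    nx = math.ceil((end[0] - start[0]) / step) + 1   # S252: points needed along X
    ny = math.ceil((end[1] - start[1]) / step) + 1   # ... and along Y
    xs = np.linspace(start[0], end[0], nx)
    ys = np.linspace(start[1], end[1], ny)
    return [(float(x), float(y)) for y in ys for x in xs]  # S253/S254: raster path

waypoints = plan_shooting_path((0.0, 0.0), (100.0, 60.0), length_mm=36.3)
print(len(waypoints), waypoints[:3])
```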
2. The high-precision imaging method based on a depth image registration network according to claim 1, wherein the coordinates of the intermediate process points in S253 are calculated as:

$$x_{ij} = x_s + j \cdot \mathrm{Length}, \qquad y_{ij} = y_s + i \cdot \mathrm{Length}, \qquad i = 1, \dots, m, \quad j = 1, \dots, n,$$

wherein

$$\mathrm{Length} = W \cdot \frac{\mathrm{Units}}{\mathrm{Pixels}},$$

in which $(x_s, y_s)$ denotes the start-point coordinates, $(x_e, y_e)$ the end-point coordinates, $(x_{ij}, y_{ij})$ the coordinates of the intermediate process point in the $i$-th row and $j$-th column, $m$ the number of intermediate process points along the Y axis, $n$ the number of intermediate process points along the X axis, $\mathrm{Length}$ the actual physical size corresponding to a photographed local image, $W$ the side length of the local image in pixels, $\mathrm{Units}$ the actual length of one calibration-grid square, and $\mathrm{Pixels}$ the side length of one grid square in image pixels.
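As an illustrative reading of this scale factor (all input values below are invented for the example; only the resulting ~36.3 mm field matches the lens specification given earlier):

```python
# Worked example of claim 2's Length, under the reading that one
# calibration-grid square spans Units millimetres and Pixels image pixels.
units_mm  = 1.0     # assumed physical side length of one grid square
pixels_px = 141.0   # assumed side length of the same square in the image
image_px  = 5120    # side length of a captured local image in pixels

mm_per_px = units_mm / pixels_px
length_mm = image_px * mm_per_px   # physical size covered by one local image
print(f"Length ~ {length_mm:.1f} mm per frame")  # ~36.3 mm, matching the lens field
```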
3. The high-precision imaging method based on a depth image registration network according to claim 2, wherein the depth image registration network preset and trained in S3 comprises a feature extraction module, a correlation-estimation Transformer module and a direct linear transformation module connected in sequence; the feature extraction module is used for extracting the correlation features between the two images of an image pair, the correlation-estimation Transformer module is used for segmenting and linearly mapping the correlation features and estimating the offset between the two images of the image pair, and the direct linear transformation module is used for converting the offset into the transformation matrix of the image pair.
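For orientation, a schematic PyTorch sketch of this three-stage layout is given below. The backbone depth, token dimensions and output head are illustrative assumptions; only the module ordering (feature extraction, correlation-estimation Transformer, then a direct linear transform on the predicted offsets) follows the claim:

```python
import torch
import torch.nn as nn

class RegistrationNet(nn.Module):
    def __init__(self, dim=128, heads=8, layers=4):
        super().__init__()
        self.backbone = nn.Sequential(                    # shared per-image features
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.embed = nn.Linear(2 * 64, dim)               # E: linear patch embedding
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))   # x_class token (cf. claim 5)
        self.pos = nn.Parameter(torch.zeros(1, 32 * 32 + 1, dim))  # E_pos
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=layers)
        self.head = nn.Linear(dim, 8)                     # offsets of 4 corners (dx, dy)

    def forward(self, img_a, img_b):                      # each: (B, 1, 128, 128)
        fa, fb = self.backbone(img_a), self.backbone(img_b)
        corr = torch.cat([fa, fb], dim=1)                 # correlation feature of the pair
        tokens = self.embed(corr.flatten(2).transpose(1, 2))  # (B, 1024, dim)
        z0 = torch.cat([self.cls.expand(tokens.size(0), -1, -1), tokens], dim=1) + self.pos
        z = self.transformer(z0)
        return self.head(z[:, 0])                         # offset vector of the image pair

net = RegistrationNet()
off = net(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
print(off.shape)  # torch.Size([2, 8])
```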
4. The high-precision imaging method based on a depth image registration network according to claim 3, wherein processing the plurality of local image sequences with overlapping areas with the trained depth image registration network in S3 to obtain a plurality of transformation matrices specifically comprises:
S31, preprocessing and grouping the plurality of local image sequences with overlapping areas to obtain a plurality of image pairs;
S32, selecting one image pair in turn from the plurality of image pairs and inputting it into the trained depth image registration network;
S33, the feature extraction module performs feature extraction on the two images of the selected image pair respectively to obtain the vector features and mask features corresponding to each image, multiplies the vector features and mask features of each image to obtain a feature matrix of each image's overlapping area, and splices the feature matrices of the two images of the selected image pair by dimension to obtain the correlation feature of the selected image pair;
S34, the correlation-estimation Transformer module performs segmentation processing and linear mapping on the correlation feature to obtain an initial feature sequence, processes the initial feature sequence, and outputs the offset vector of the selected image pair;
S35, the direct linear transformation module calculates the transformation matrix of the selected image pair from the offset vector;
S36, selecting another image pair in turn from the plurality of image pairs until all of the image pairs have been selected, and obtaining the transformation matrices of the plurality of image pairs through the processing of steps S32 to S35.
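One common realization of the direct linear transform in S35 (an implementation choice, not dictated by the patent) reads the 8-value offset vector as (dx, dy) shifts of the four image corners and solves the homography from the resulting four point correspondences:

```python
import cv2
import numpy as np

def offsets_to_homography(offsets, w, h):
    """4-point DLT: corner offsets -> 3x3 perspective transformation matrix."""
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])   # original corners
    dst = src + np.float32(offsets).reshape(4, 2)        # shifted corners
    return cv2.getPerspectiveTransform(src, dst)

H = offsets_to_homography([2, 1, -1, 0, 3, 2, 0, -2], w=640, h=640)
print(H.round(4))
```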
5. The depth image registration network-based high-precision imaging method of claim 4, wherein the initial feature sequence in S34 is formulated as:

$$z_0 = \left[\, x_{\mathrm{class}};\ x_p^1 E;\ x_p^2 E;\ \dots;\ x_p^N E \,\right] + E_{\mathrm{pos}},$$

in which $z_0$ denotes the initial feature sequence, $E$ the weights of the linear mapping in the image embedding operation, $E_{\mathrm{pos}}$ a learnable one-dimensional position embedding vector, $x_{\mathrm{class}}$ a learnable class embedding vector, and $x_p^i$ the $i$-th feature-map patch sequence, $i = 1, \dots, N$.
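A tiny numeric illustration of this equation (shapes chosen arbitrarily): the $N$ patch vectors are linearly mapped by $E$, the class token is prepended, and the position embedding is added:

```python
import torch

N, d_patch, d_model = 4, 16, 8
x_p = torch.randn(N, d_patch)        # x_p^1 ... x_p^N: flattened feature-map patches
E = torch.randn(d_patch, d_model)    # linear embedding weights
x_class = torch.randn(1, d_model)    # learnable class embedding vector
E_pos = torch.randn(N + 1, d_model)  # learnable 1-D position embedding

z0 = torch.cat([x_class, x_p @ E], dim=0) + E_pos
print(z0.shape)  # torch.Size([5, 8]): class token plus N embedded patches
```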
6. The depth image registration network-based high-precision imaging method according to claim 5, wherein S4 specifically comprises:
S41, dividing the transformation matrices of the plurality of image pairs into a number of column transformation matrices and a number of row transformation matrices, and calculating from these the transformation matrices of the other images in the plurality of local image sequences with overlapping areas relative to the first image;
S42, taking the coordinate system of the first image in the plurality of local image sequences with overlapping areas as the main coordinate system, and transforming the other images in the sequences onto the main coordinate system through their respective transformation matrices relative to the first image, to obtain a transformed local image sequence;
S43, generating a blank large image by taking the upper-left corner of the first image in the local image sequences with overlapping areas as the starting point and the lower-right corner of the last image in the transformed local image sequence as the end point;
S44, sequentially filling the first image of the local image sequences with overlapping areas and the transformed local image sequence into the blank large image;
S45, performing linear weighted fusion on the overlapping areas of adjacent images in the blank large image to obtain a high-precision complete image of the target product.
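A minimal sketch of S43–S45 for two horizontally adjacent, already-warped images; the canvas geometry and the linear weight ramp across the overlap are the only assumptions:

```python
import numpy as np

def fuse_pair(left, right, overlap_px):
    """Paste two overlapping frames into a blank canvas and blend the seam."""
    h, w = left.shape[:2]
    canvas = np.zeros((h, 2 * w - overlap_px, 3), np.float32)  # S43: blank large image
    canvas[:, :w] = left                                       # S44: fill in order
    canvas[:, w:] = right[:, overlap_px:]
    alpha = np.linspace(1.0, 0.0, overlap_px)[None, :, None]   # S45: linear weights
    seam = left[:, w - overlap_px:] * alpha + right[:, :overlap_px] * (1 - alpha)
    canvas[:, w - overlap_px:w] = seam
    return canvas.astype(np.uint8)

a = np.full((100, 200, 3), 80, np.uint8)
b = np.full((100, 200, 3), 160, np.uint8)
pano = fuse_pair(a, b, overlap_px=40)
print(pano.shape)  # (100, 360, 3)
```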
7. The high-precision imaging method based on a depth image registration network according to claim 6, wherein in S41 the transformation matrix of the other images in the local image sequences with overlapping areas relative to the first image is calculated from the column transformation matrices and the row transformation matrices, and can be expressed as:

$$H_{ij} = \left( \prod_{k=2}^{i} H^{\mathrm{col}}_{k-1,k} \right) \left( \prod_{l=2}^{j} H^{\mathrm{row}}_{i,\,l-1,l} \right), \qquad I'_{ij} = H_{ij}\, I_{ij},$$

in which $H_{ij}$ denotes the transformation matrix of the image $I_{ij}$ in the $i$-th row and $j$-th column relative to the first image $I_{11}$, $H^{\mathrm{col}}_{k-1,k}$ the column transformation matrix of the two adjacent images in the $(k-1)$-th and $k$-th rows of the first column, $H^{\mathrm{row}}_{i,\,l-1,l}$ the row transformation matrix of the two adjacent images in the $(l-1)$-th and $l$-th columns of the $i$-th row, and $I'_{ij}$ the transformed image in the $i$-th row and $j$-th column.
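A small sketch of this chaining with 0-based indices; the composition order depends on the direction convention of the pairwise estimates, which the claim leaves implicit, and identity matrices stand in here for the network's outputs:

```python
import numpy as np

# H_col[k] maps row k+1 to row k within the first column; H_row[i][l] maps
# column l+1 to column l within row i. Both are stand-ins for the pairwise
# matrices estimated by the registration network.
def chain_to_first(H_col, H_row, i, j):
    H = np.eye(3)
    for k in range(i):              # walk up the first column: row i -> row 0
        H = H @ H_col[k]
    for l in range(j):              # walk along row i: column j -> column 0
        H = H @ H_row[i][l]
    return H                        # H_ij: image (i, j) -> first image's frame

H_col = [np.eye(3) for _ in range(3)]
H_row = [[np.eye(3) for _ in range(3)] for _ in range(4)]
print(chain_to_first(H_col, H_row, i=2, j=1))
```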
8. A depth image registration network-based high-precision imaging system for imaging using the depth image registration network-based high-precision imaging method according to any one of claims 1 to 7, the high-precision imaging system comprising a high-precision imaging device and a computer system connected to it, wherein a depth image registration network is provided in the computer system; the high-precision imaging device comprises a base, a protective cover, a triaxial motion platform, an imaging module, a carrier and a control panel, the protective cover is arranged above the base so that the protective cover and the base enclose a semi-closed space, the triaxial motion platform is fixedly arranged on the base inside the semi-closed space, the carrier is fixedly arranged on the Y axis of the triaxial motion platform, the imaging module is fixedly arranged on the Z axis of the triaxial motion platform, the control panel is arranged on the side face of the base on the side close to the carrier, and the target product to be photographed is placed on the carrier, wherein:
the control panel is used for controlling the triaxial motion platform to move along the X, Y and Z axes;
the triaxial motion platform drives the imaging module arranged on it to move relative to the target product so as to reach a plurality of shooting positions;
the imaging module photographs the target product at the plurality of shooting positions to obtain a plurality of local image sequences with overlapping areas;
and the computer system acquires the plurality of local image sequences with overlapping areas, processes them through the depth image registration network provided in it, and outputs a high-precision complete image of the target product.
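For illustration, a hedged sketch of the acquisition loop that the hardware of claim 8 implies; the Stage and Camera classes are hypothetical stand-ins for the real motion-controller and GigE-camera SDKs, which the patent does not name:

```python
import numpy as np

class Stage:
    """Hypothetical stand-in for the X/Y motion-controller interface."""
    def move_xy(self, x, y):
        print(f"stage -> ({x:.1f}, {y:.1f}) mm")

class Camera:
    """Hypothetical stand-in for the GigE camera SDK."""
    def grab(self):
        return np.zeros((512, 512, 3), np.uint8)  # placeholder frame

def acquire(stage, camera, waypoints):
    frames = []
    for x, y in waypoints:
        stage.move_xy(x, y)           # control panel drives the X/Y axes
        frames.append(camera.grab())  # imaging module shoots at each position
    return frames

frames = acquire(Stage(), Camera(), [(0, 0), (25, 0), (50, 0)])
print(len(frames), frames[0].shape)
```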