CN116912467A - Image stitching method, device, equipment and storage medium - Google Patents


Info

Publication number
CN116912467A
CN116912467A
Authority
CN
China
Prior art keywords
image
spliced
corner
initial
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211658143.2A
Other languages
Chinese (zh)
Inventor
叶晓倩
王千
闫敏
杜瞻
柳欣
冯俊兰
邓超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Ltd Research Institute
Priority to CN202211658143.2A
Publication of CN116912467A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/16 Image acquisition using multiple overlapping images; Image stitching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image stitching method, apparatus, device and storage medium. The method comprises: acquiring an initial stitched image, where the initial stitched image is obtained by performing feature extraction, feature matching and image transformation on a first image and a second image to be stitched; and converting the initial stitched image into a target stitched image based on a trained self-supervised optimization model, where the trained self-supervised optimization model determines an optimization residual for the initial stitched image and adds the residual to the initial stitched image to obtain the target stitched image. Because the introduced self-supervised optimization model can determine the optimization residual of the initial stitched image, and the target stitched image is obtained by adding that residual to the initial stitched image, the influence of inaccurate homography estimation, scene depth and illumination changes across viewpoints on stitching quality is effectively mitigated, stitching quality is improved, and the stitched image better meets the needs of downstream applications.

Description

Image stitching method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image stitching method, apparatus, device, and storage medium.
Background
Image stitching is a widely studied problem in computer vision and computer graphics. By stitching and fusing multiple images of the same scene, a larger field of view and more scene information are presented in a single image, effectively overcoming the limitation that a single ordinary image cannot show enough of the scene. Image stitching is widely applied in fields such as virtual reality, medical image processing and autonomous driving.
In the related art, image stitching generally comprises four steps: feature extraction, feature matching, image transformation and image fusion. Feature extraction extracts features from the images to be stitched; feature matching computes a homography matrix between the two images to be stitched based on the extracted features; image transformation applies the homography so that the two images lie in the same reference coordinate system; and image fusion searches for optimal seams or smooth transitions to reduce visual anomalies such as blurring, ghosting and breaks. However, owing to illumination changes, scene depth and inaccurate homography estimation, conventional image fusion still struggles to remove these visual anomalies, which seriously degrades stitching quality and limits downstream applications of stitching algorithms.
Disclosure of Invention
In view of this, the embodiments of the present application provide an image stitching method, apparatus, device and storage medium, which aim to improve image stitching quality.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an image stitching method, including:
acquiring an initial stitched image, wherein the initial stitched image is obtained by performing feature extraction, feature matching and image transformation on a first image and a second image to be stitched;
converting the initial stitched image into a target stitched image based on a trained self-supervised optimization model; the trained self-supervised optimization model is used to determine an optimization residual for the initial stitched image and add the optimization residual to the initial stitched image to obtain the target stitched image.
In the above scheme, the method further comprises:
training the self-supervised optimization model to be optimized based on training samples;
determining that the number of training iterations of the self-supervised optimization model has reached a set count, or that the loss value of the loss function has converged, to obtain the trained self-supervised optimization model;
wherein the training samples comprise paired images to be stitched and the initial stitched images corresponding to them; the loss function is determined based on a content loss value and a gradient loss value, the content loss value characterizing the pixel difference between the optimized stitched image and the corresponding images to be stitched, and the gradient loss value characterizing the pixel-gradient difference between the optimized stitched image and the corresponding image to be stitched.
In the above scheme, the self-supervised optimization model is a CNN (Convolutional Neural Network) with a self-encoder structure that includes skip connections.
In the above scheme, the acquiring of the initial stitched image includes:
performing feature extraction on the first image and the second image to be stitched, respectively;
determining a homography matrix between the first image and the second image based on the extracted features;
and mapping the first image and the second image into the same image coordinate system based on the homography matrix to obtain the initial stitched image.
In the above scheme, the performing of feature extraction on the first image and the second image to be stitched respectively includes:
performing Harris corner extraction on the first image and the second image to be stitched, respectively, to obtain a first corner set of the first image and a second corner set of the second image;
performing feature extraction on the first image and the second image based on a parameter-shared CNN to obtain a first feature map of the first image and a second feature map of the second image;
performing position drilling on the first feature map based on each corner in the first corner set to obtain a feature vector for each corner in the first corner set;
and performing position drilling on the second feature map based on each corner in the second corner set to obtain a feature vector for each corner in the second corner set.
In the above aspect, the determining of the homography matrix between the first image and the second image based on the extracted features includes:
performing feature matching based on the feature vectors of the corners in the first corner set and the feature vectors of the corners in the second corner set to obtain a matching feature set between the first image and the second image;
and determining the homography matrix between the first image and the second image based on the matching feature set, using the random sample consensus (RANSAC) algorithm.
In the above scheme, the performing of feature matching based on the feature vectors of the corners in the first corner set and the feature vectors of the corners in the second corner set to obtain the matching feature set between the first image and the second image includes:
traversing each corner in the first corner set and finding, for each, its matching corner in the second corner set;
obtaining the matching feature set between the first image and the second image based on the corner pairs matched between the first corner set and the second corner set;
wherein a matched corner pair is one whose two corners' feature vectors are at the shortest distance from each other.
In a second aspect, an embodiment of the present application provides an image stitching apparatus, including:
the acquisition module is used for acquiring an initial stitched image, wherein the initial stitched image is obtained by performing feature extraction, feature matching and image transformation on a first image and a second image to be stitched;
the optimization module is used for converting the initial stitched image into a target stitched image based on a trained self-supervised optimization model; the trained self-supervised optimization model is used to determine an optimization residual for the initial stitched image and add the optimization residual to the initial stitched image to obtain the target stitched image.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory for storing a computer program capable of running on the processor, wherein the processor is adapted to perform the steps of the method according to the first aspect of the embodiments of the application when the computer program is run.
In a fourth aspect, embodiments of the present application provide a computer storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of the method according to the first aspect of the embodiments of the present application.
According to the technical solution provided by the embodiments of the application, an initial stitched image is acquired, the initial stitched image being obtained by performing feature extraction, feature matching and image transformation on a first image and a second image to be stitched; the initial stitched image is then converted into a target stitched image based on a trained self-supervised optimization model, which determines an optimization residual for the initial stitched image and adds the residual to the initial stitched image to obtain the target stitched image. Because the introduced self-supervised optimization model can determine the optimization residual of the initial stitched image, and the target stitched image is obtained by adding that residual to the initial stitched image, the influence of inaccurate homography estimation, scene depth and illumination changes across viewpoints on stitching quality is effectively mitigated, stitching quality is improved, and downstream application requirements on the stitched image are better met.
Drawings
FIG. 1 is a flow chart of an image stitching method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an image stitching method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of random depth descriptor generation in an embodiment of the application;
FIG. 4 is a schematic diagram of a self-supervised optimization module in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an image stitching device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the application.
Detailed Description
The present application will be described in further detail with reference to the accompanying drawings and examples.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
The embodiment of the application provides an image stitching method that can be applied to electronic devices with data processing capability, such as terminal devices and servers, or implemented by a terminal device and a server cooperating with each other. The terminal device may be a computer, a smartphone, a personal digital assistant (PDA) and the like; the server may be an application server or a Web server, and in actual deployment may be an independent server or a cluster server.
Illustratively, as shown in fig. 1, the image stitching method includes:
step 101, obtaining an initial spliced image, wherein the initial spliced image is a spliced image obtained by extracting features, matching features and transforming images of a first image and a second image to be spliced.
Step 102, converting the initial spliced image into a target spliced image based on a trained self-supervision optimization model; the trained self-supervision optimization model is used for determining an optimization residual error of the initial mosaic image, and adding the optimization residual error and the initial mosaic image to obtain the target mosaic image.
According to the image stitching method, the self-supervision optimization model is introduced, so that the optimization residual error of the initial stitched image can be determined, and the target stitched image is obtained by adding the optimization residual error and the initial stitched image, so that the influence of inaccuracy in homography matrix calculation, scene depth, illumination changes of different visual angles and the like on the image stitching quality is effectively relieved, the image stitching quality is further improved, and the subsequent application requirements of the stitched image are favorably met.
It should be noted that conventional image fusion methods are complex. For example, if images are stitched directly, obvious seams often appear and the overlapping area may be blurred or distorted, owing to differences in shooting angle, illumination and shooting environment; in the related art, Alpha blending is therefore used to realize image fusion. Alpha blending involves an important concept, the Alpha channel: a normally captured image has only the three RGB channels, and the additional channel that represents the transparency of each pixel, beyond the three primary-color channels describing a digital image, is called the Alpha channel. In practice, however, the fused stitched image still exhibits visual anomalies such as breaks, blurring and ghosting, and looks unnatural, which seriously affects subsequent applications.
According to the image stitching method, the trained self-supervised optimization model determines the optimization residual of the initial stitched image, and this residual characterizes the difference between the initial stitched image and the true stitched image. Here, a residual is the direct difference between a predicted value and the actual value: given a mapping f(x) = b, when x = x_0 the quantity b − f(x_0) is the residual. In this way, the target stitched image, obtained by adding the optimization residual to the initial stitched image, approaches the true stitched image as closely as possible, achieving a natural, high-quality stitching result that can satisfy downstream applications of the stitched image, for example video tasks such as VR (Virtual Reality), 360-degree panoramic video and 3D reconstruction. In addition, the image stitching method of the embodiments is general and flexible, avoids the complex quantization process of traditional image fusion, improves stitching efficiency and quality, and has broad application prospects.
It can be appreciated that the method of the embodiments of the present application relies on a trained self-supervised optimization model. Based on this, the method further includes:
training the self-supervised optimization model to be optimized based on training samples;
determining that the number of training iterations of the self-supervised optimization model has reached a set count, or that the loss value of the loss function has converged, to obtain the trained self-supervised optimization model;
wherein the training samples comprise paired images to be stitched and the initial stitched images corresponding to them; the loss function is determined based on a content loss value and a gradient loss value, the content loss value characterizing the pixel difference between the optimized stitched image and the corresponding images to be stitched, and the gradient loss value characterizing the pixel-gradient difference between the optimized stitched image and the corresponding image to be stitched.
Here, the initial stitched image corresponding to each pair of images to be stitched is obtained through the aforementioned feature extraction, feature matching and image transformation; see the detailed description below.
Here, the optimized stitched image is the stitched image output by the self-supervised optimization model when it is fed the initial stitched image of the paired images to be stitched, and the content loss value and gradient loss value are determined from the optimized stitched image and the paired images. The content loss value includes a first content loss value corresponding to the first image to be stitched and a second content loss value corresponding to the second image to be stitched, and the gradient loss value may be determined from the pixel-gradient difference between the optimized stitched image and whichever image of the pair serves as the reference coordinate system. For example, suppose the pair comprises images I_1 and I_2 to be stitched and I_1 serves as the reference coordinate system, i.e. I_2 is transformed into I_1's coordinate system to form the initial stitched image; then the gradient loss value may be determined from the pixel-gradient difference between the optimized stitched image and I_1.
Here, loss-value convergence means that the loss value is less than or equal to a set threshold. It can be understood that through this training the self-supervised optimization model learns the optimization residual of the training samples and adjusts its parameters based on the loss function, yielding the trained self-supervised optimization model. Because the loss function is determined from the content loss value and the gradient loss value, training is constrained by self-supervised losses for both content reconstruction (the pixel difference) and gradient reconstruction (the pixel-gradient difference), so the model can effectively reduce image artifacts and stitching errors in the stitched region, improve stitched-image quality, and sidestep the difficulty of obtaining ground-truth stitched images.
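As an illustration of this training procedure, the following is a minimal PyTorch-style sketch; it is not code from the patent. The data-loader contract, names and hyperparameters are assumptions, and loss_fn stands in for the content-plus-gradient self-supervised loss sketched later in the self-supervised optimization section.

import torch

def train_refiner(model, loader, loss_fn, max_epochs=200, tol=1e-6, lr=1e-4):
    """Train until the set epoch count is reached or the loss converges."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    prev = float("inf")
    for epoch in range(max_epochs):                # stop at the set iteration count...
        total = 0.0
        for I1, Iw, M1, M2, P_init in loader:      # images (assumed zero-padded to the canvas),
            P = model(P_init)                      # masks, and the initial stitched image
            loss = loss_fn(P, I1, Iw, M1, M2)      # content + gradient self-supervised loss
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += loss.item()
        if abs(prev - total) < tol:                # ...or when the loss value converges
            break
        prev = total
    return model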
Illustratively, the self-supervised optimization model is a CNN (convolutional neural network) with a self-encoder structure that includes skip connections, so that feature fusion can be performed through the skip-connection self-encoder structure and the optimization residual of the training samples can be learned more comprehensively.
In an application example, the self-supervised optimization model takes the sum of the optimization residual and the initial stitched image as its output:

P = R + P_i = f_cnn(P_i) + P_i

where P is the optimized stitched image (corresponding to the target stitched image), R is the optimization residual, P_i is the initial stitched image, and f_cnn(·) is the transfer function that infers the optimization residual from the initial stitched image, so that R = f_cnn(P_i).
Considering that ground-truth stitched images of real scenes are difficult to obtain, the loss function of the self-supervised optimization model is, for example:

L = L_1 + L_2 + α·L_g

where L is the loss value of the self-supervised optimization model, L_1 is the content loss value corresponding to the first image to be stitched, L_2 is the content loss value corresponding to the second image to be stitched, L_g is the gradient loss value between the optimized stitched image and the image to be stitched that serves as the reference coordinate system, and α is a weight coefficient.
Illustratively, the acquiring of the initial stitched image includes:
performing feature extraction on the first image and the second image to be stitched, respectively;
determining a homography matrix between the first image and the second image based on the extracted features;
and mapping the first image and the second image into the same image coordinate system based on the homography matrix to obtain the initial stitched image.
Here, the homography matrix describes the positional mapping of an object between the world coordinate system and the pixel coordinate system.
In the related art, feature-point positions and the corresponding feature descriptors are often extracted by hand in the image domain, which is inefficient and easily affected by noise and illumination changes, in turn degrading the accuracy of subsequent feature matching.
Based on this, in some embodiments, the performing of feature extraction on the first image and the second image to be stitched respectively includes:
performing Harris corner extraction on the first image and the second image to be stitched, respectively, to obtain a first corner set of the first image and a second corner set of the second image;
performing feature extraction on the first image and the second image based on a parameter-shared CNN to obtain a first feature map of the first image and a second feature map of the second image;
performing position drilling on the first feature map based on each corner in the first corner set to obtain a feature vector for each corner in the first corner set;
and performing position drilling on the second feature map based on each corner in the second corner set to obtain a feature vector for each corner in the second corner set.
Here, Harris corner extraction is based on the autocorrelation matrix and was developed by Chris Harris and Mike Stephens on the basis of the H. Moravec algorithm. Through Harris corner extraction, the first corner set C_1 of the first image and the second corner set C_2 of the second image can be obtained.
In the embodiment of the application, feature extraction is performed on the first and second images to be stitched with a parameter-shared CNN to obtain the first feature map of the first image and the second feature map of the second image, so that the CNN's mapping rule can be used to map low-dimensional image features to high-dimensional depth features. In addition, position drilling based on the corners in the first and second corner sets yields local high-dimensional depth features on the first and second images, which improves the robustness and distinctiveness of subsequent feature matching.
Here, position drilling refers to reading the feature vector (also called a feature descriptor) at a corner's position out of a feature map: for each corner in the first corner set, the corresponding feature vector is read from the first feature map at that corner's coordinates, and likewise for each corner in the second corner set on the second feature map.
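To make the operation concrete, here is a minimal NumPy sketch of position drilling (the function name is illustrative). It assumes the feature map has the same spatial resolution as the image, which holds for the stride-1 CNN described later in the application example.

import numpy as np

def position_drill(feature_map, corners):
    """feature_map: (C, H, W) array; corners: [(x, y), ...] pixel coordinates.
    Returns an (N, C) array with one feature descriptor per corner."""
    return np.stack([feature_map[:, y, x] for (x, y) in corners])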
Illustratively, the determining of the homography matrix between the first image and the second image based on the extracted features includes:
performing feature matching based on the feature vectors of the corners in the first corner set and the feature vectors of the corners in the second corner set to obtain a matching feature set between the first image and the second image;
and determining the homography matrix between the first image and the second image based on the matching feature set, using the random sample consensus (RANSAC) algorithm.
Illustratively, the performing of feature matching based on the feature vectors of the corners in the first corner set and the feature vectors of the corners in the second corner set to obtain the matching feature set between the first image and the second image includes:
traversing each corner in the first corner set and finding, for each, its matching corner in the second corner set;
obtaining the matching feature set between the first image and the second image based on the corner pairs matched between the first corner set and the second corner set;
wherein a matched corner pair is one whose two corners' feature vectors are at the shortest distance from each other.
In an application example, a BF (Brute Force) matching algorithm may be used to perform feature matching and obtain the matching feature set between the first image and the second image, and the RANSAC algorithm may then be used to compute the homography matrix between the first image and the second image. The idea of the RANSAC algorithm can be understood as follows: randomly select n samples from the matching feature set and compute a homography matrix H_i; based on the current H_i, compute the error of each remaining sample; if the error is below a threshold t, count the sample as an inlier, otherwise as an outlier, and record the inlier count N_i. This process is repeated until the specified number of iterations is reached, after which a new homography matrix is recomputed from all inlier samples of the iteration with the largest inlier count and used as the final homography matrix H.
The application is described in further detail below in connection with an application example.
The application embodiment provides an image stitching method based on random depth features and self-supervised optimization. As shown in fig. 2, the method comprises the processing steps of feature extraction, homography calculation, image mapping and self-supervised optimization, detailed below:
1. Feature extraction
Here, feature extraction may be performed by a feature extraction module, which extracts features from the two input images to be stitched (image 1 and image 2) using Harris corner detection and a random-depth-descriptor feature extraction method. Since the process is identical for the two images, image 1 is used as the example below:
1) Harris corner extraction
According to the Harris corner definition, the input image is filtered with horizontal and vertical difference operators to obtain the horizontal gradient map I_x and the vertical gradient map I_y, and the M matrix is computed according to formula (1):

M = | I_x²      I_x·I_y |
    | I_x·I_y   I_y²    |    (1)

The M matrix is smoothed with a Gaussian filter to obtain the M′ matrix, and the corner response matrix R is computed from M′ according to formula (2):
R = det(M′) − k·(trace(M′))²    (2)

where det(M′) denotes the determinant of the M′ matrix, trace(M′) denotes its trace, and k is a set constant; illustratively, k = 0.05.
Pixel positions whose value in the corner response matrix R exceeds a set threshold are judged to be corner positions, and the Harris corners are thereby extracted. The threshold can be set as required; for example, it may be 0.1 times the maximum of the corner response matrix R (i.e. max(R)×0.1).
Illustratively, to avoid overly concentrated corner distributions, a non-maximum-suppression algorithm may be used: the pixel position with the maximum response in each local region is kept as the final corner output, and the remaining corners in its neighborhood are discarded. In this way, the detected first corner set C_1 and second corner set C_2 are obtained for the input image 1 and image 2, respectively.
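For reference, this corner-extraction step can be sketched with OpenCV as follows; it is a sketch under the stated parameters (k = 0.05, a max(R)×0.1 threshold, dilation-based non-maximum suppression), the window sizes are assumptions, and note that cv2.cornerHarris uses a box window rather than the Gaussian smoothing described above.

import cv2
import numpy as np

def harris_corners(img_bgr, k=0.05, rel_thresh=0.1, nms_size=5):
    gray = np.float32(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY))
    R = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=k)        # corner response matrix R
    local_max = cv2.dilate(R, np.ones((nms_size, nms_size), np.uint8))
    keep = (R > rel_thresh * R.max()) & (R >= local_max)         # threshold + non-maximum suppression
    ys, xs = np.nonzero(keep)
    return list(zip(xs.tolist(), ys.tolist()))                   # corner set [(x, y), ...]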
2) Generation of random depth descriptors (the feature vectors corresponding to the corners)
For subsequent feature matching and homography computation, a feature descriptor must be generated for each detected corner position. Traditional feature descriptors are constructed by hand from the local image region and are generally sensitive to noise and illumination changes. Existing deep-learning semantic feature extraction methods are usually task-specific and trained on supervised data, while ground-truth stitched images of real scenes are usually difficult to obtain.
To address these problems, the application embodiment proposes a CNN-based method for generating random depth descriptors, as shown in fig. 3. Based on a parameter-shared Siamese CNN, features are extracted from image 1 and image 2 to be stitched (denoted I_1 and I_2), yielding the corresponding feature maps F_1 and F_2. In the corner extraction step, the first corner set C_1 (the detected corner set 1 in fig. 3) and the second corner set C_2 (the detected corner set 2 in fig. 3) were obtained on I_1 and I_2, respectively. Taking C_1 as an example, it can be written as C_1 = [(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)], where (x_n, y_n) is the coordinate of the n-th Harris corner of I_1; C_2 is analogous.
To compute the feature descriptor for each detected corner, position drilling is performed on feature maps F_1 and F_2 using the corner sets C_1 and C_2, respectively. Taking C_1 as an example, position drilling is given by formula (3):

d_i = F_1(C_1[i])    (3)

where d_i is the feature descriptor corresponding to the i-th corner position in C_1. Descriptor extraction for the corners in C_2 follows the same process and is not repeated here.
It should be noted that the Siamese CNN only requires parameter sharing: an untrained CNN with random parameters can be used, or a feature-extraction CNN pre-trained on any task, and the network structure is not rigidly specified. In this application embodiment, a 6-layer network of 3×3 convolutions with stride 1 is used as an example, but any CNN as described above may be used in practice. Based on this feature extraction, the same CNN mapping rule and local information are used to map low-dimensional image features to high-dimensional depth features, improving the robustness and distinctiveness of subsequent feature matching.
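A minimal PyTorch sketch of this descriptor generator follows. The channel width and padding are assumptions; the text fixes only the 6-layer 3×3 stride-1 structure and the parameter sharing. With padding=1 the feature map keeps the image resolution, so corner coordinates index it directly.

import torch
import torch.nn as nn

def random_cnn(layers=6, width=64, in_ch=3):
    blocks, c = [], in_ch
    for _ in range(layers):
        blocks += [nn.Conv2d(c, width, kernel_size=3, stride=1, padding=1), nn.ReLU()]
        c = width
    return nn.Sequential(*blocks).eval()        # random weights; never trained

@torch.no_grad()
def depth_descriptors(cnn, img, corners):
    """img: (3, H, W) float tensor; corners: [(x, y), ...] -> (N, C) descriptors."""
    fmap = cnn(img.unsqueeze(0))[0]             # (C, H, W), same H and W as the image
    return torch.stack([fmap[:, y, x] for (x, y) in corners])

cnn = random_cnn()                              # one instance shared by both images (Siamese)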
2. Homography calculation
Here, homography calculation may be performed by a homography calculation module. The feature extraction module has extracted, from image 1 and image 2, the first corner set C_1 and the second corner set C_2 together with the corresponding feature descriptor sets D_1 (for C_1) and D_2 (for C_2). Based on these features, the homography calculation module performs feature matching with the brute-force (BF) algorithm and computes the homography matrix with the RANSAC algorithm to characterize the transformation between the two images to be stitched.
For the i-th feature position c_i on image 1, with descriptor d_i, the BF algorithm computes the descriptor distances to all feature points on image 2 and returns the feature with the smallest distance as the best match. Traversing all features on image 1 yields the matching feature set between the two images.
After the matching feature set between the two images is obtained, the homography matrix is further computed with the RANSAC algorithm. The idea of RANSAC is: randomly select n samples from the matching feature set and compute a homography matrix H_i; using the current H_i, compute the error of each remaining sample; if the error is below the threshold t the sample is counted as an inlier, otherwise as an outlier, and the inlier count N_i is recorded. This process is repeated until the specified number of iterations is reached, after which a new homography matrix is recomputed from all inlier samples of the iteration with the largest inlier count and used as the final homography matrix H.
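These two steps map directly onto OpenCV primitives. A minimal sketch follows; the threshold t and iteration count are illustrative values, and cv2.findHomography's built-in RANSAC plays the role of the loop described above.

import cv2
import numpy as np

def compute_homography(corners1, desc1, corners2, desc2, t=3.0, iters=2000):
    bf = cv2.BFMatcher(cv2.NORM_L2)                             # brute-force descriptor matching
    matches = bf.match(np.float32(desc1), np.float32(desc2))
    src = np.float32([corners2[m.trainIdx] for m in matches])   # matched points on image 2
    dst = np.float32([corners1[m.queryIdx] for m in matches])   # their counterparts on image 1
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC,
                                        ransacReprojThreshold=t, maxIters=iters)
    return H                                                    # maps image 2 into image 1's frame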
3. Image mapping
Here, image mapping may be performed by an image mapping module. The homography calculation module has already computed the homography matrix between image 1 and image 2 (taking image 1 as the reference coordinate system, for example). The image mapping module maps image 2 into the reference image coordinate system using the computed homography matrix H and fuses it with image 1 to obtain the initial stitched image. The image mapping module can be implemented with the warpPerspective function in OpenCV.
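A minimal sketch of this module is shown below; the doubled canvas width is a simplifying assumption, since in general the canvas size would be derived from the warped corners of image 2.

import cv2

def initial_stitch(img1, img2, H):
    h, w = img1.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (2 * w, h))   # map image 2 into image 1's coordinates
    canvas[0:h, 0:w] = img1                             # reference image occupies P[0:W, 0:H]
    return canvas                                       # initial stitched image P_i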
4. Self-supervised optimization
Here, self-supervised optimization may be performed by a self-supervised optimization module, which converts the initial stitched image into the target stitched image based on the trained self-supervised optimization model, thereby optimizing the image quality of the stitched region.
Illustratively, the principle of the self-supervised optimization module is shown in fig. 4. For an initial stitched image P_i, the optimization residual R is first learned by a CNN that uses a self-encoder structure with skip connections. The predicted optimization residual R is then added to the initial stitched image P_i, and the resulting optimized stitched image P is taken as the output, as in formula (4):

P = R + P_i = f_cnn(P_i) + P_i    (4)
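A minimal sketch of such a network follows; the depth and channel counts are assumptions, since the text specifies only a CNN self-encoder with skip connections whose predicted residual is added back onto P_i.

import torch
import torch.nn as nn

class Refiner(nn.Module):
    """Skip-connection encoder-decoder that predicts the residual R."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU())
        self.head = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, p_init):                      # p_init: (B, 3, H, W), H and W even
        e1 = self.enc1(p_init)
        d1 = self.dec1(self.enc2(e1))
        r = self.head(torch.cat([d1, e1], dim=1))   # skip connection from the encoder
        return p_init + r                           # P = P_i + f_cnn(P_i), formula (4)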
because the true value of image stitching of a real scene is difficult to acquire, the application embodiment designs a self-supervision loss for network training, as shown in the following formula (5):
L=L 1 +L 2 +αL g (5)
wherein L is a loss value of the self-supervision optimization model, L 1 For the content loss value, L, corresponding to the first image to be spliced 2 For the content loss value, L, corresponding to the second image to be spliced g And alpha is a weight coefficient for the gradient loss value between the optimized spliced image and the image to be spliced serving as a reference coordinate system. L (L) 1 And L 2 It can be understood that the images I to be stitched 1 And I 2 Corresponding content reconstruction loss, respectively restraining the pixel on the optimized spliced image P and the image I to be spliced 1 And I 2 The corresponding pixel has consistent color, L g The method can be understood as gradient constraint loss, and the spliced image P and the image I to be spliced after constraint optimization 1 The corresponding pixel gradients are uniform.
Illustratively, with the smooth-L1 loss S as the base loss function, defined in formula (6), the L_1 and L_2 loss terms are defined in formulas (7) and (8), respectively:

S(x) = 0.5·x², if |x| < 1; |x| − 0.5, otherwise    (6)

L_1 = S(I_1 − M_1⊙P)    (7)

L_2 = S(I_w − M_2⊙P)    (8)
where x is the argument of the base loss function and ⊙ denotes element-wise multiplication. I_w is the mapped image obtained by homography-transforming the image I_2 to be stitched (corresponding to the processing of the image mapping module above), computed as in formula (9). P is the optimized stitched image output by the self-supervised optimization module, and M_1 and M_2 are the masks of the positions that I_1 and I_2 occupy on P, respectively, with M_2 computed as in formula (10). Since I_1 is the reference coordinate system of the homography transformation, M_1 is obtained directly by setting the region P[0:W, 0:H] to 1 and the rest to 0, where W is the width of I_1 and H is its height.

I_w = W(H, I_2)    (9)

M_2 = W(H, E)    (10)

where W(·,·) is the homography warping function, which can be implemented with the warpPerspective function in OpenCV, and E is an all-ones matrix, so that M_2 has the same size as I_w.
The gradient loss L_g is defined in formula (11):

L_g = S(G_I1 − M_1⊙G_P)    (11)

where G_P is the gradient map of the optimized stitched image P and G_I1 is the gradient map of image 1 to be stitched. Illustratively, the gradient maps are obtained by filtering the image with Sobel operators along the x and y directions and weighting the two filtered results.
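Putting formulas (5) through (11) together, here is a minimal PyTorch sketch of the loss. It assumes I_1 and I_w are zero-padded to the canvas size of P so that shapes match, and it weights the two Sobel directions equally; both are assumptions, not details fixed by the text.

import torch
import torch.nn.functional as F

def gradient_map(x):
    """Sobel filtering along x and y on the intensity channel, results weighted equally."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device, dtype=x.dtype).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    g = x.mean(dim=1, keepdim=True)
    return 0.5 * F.conv2d(g, kx, padding=1).abs() + 0.5 * F.conv2d(g, ky, padding=1).abs()

def stitch_loss(P, I1, Iw, M1, M2, alpha=1.0):
    l1 = F.smooth_l1_loss(M1 * P, I1)                              # formula (7)
    l2 = F.smooth_l1_loss(M2 * P, Iw)                              # formula (8)
    lg = F.smooth_l1_loss(M1 * gradient_map(P), gradient_map(I1))  # formula (11)
    return l1 + l2 + alpha * lg                                    # formula (5)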
It can be understood that the image stitching method of the application embodiment achieves high-quality stitching through four steps: random depth feature extraction, homography calculation, image mapping and self-supervised optimization. The method is easy to implement, is robust to illumination changes and noise across different stitched images, and avoids the ground-truth annotation problem of deep-learning stitching algorithms.
In this application embodiment, random depth feature extraction combines corner detection with depth descriptors: feature points are extracted as Harris corners, and a Siamese CNN constructs a random depth vector for each detected feature point as its descriptor for feature matching. This feature extraction is simple to construct and easy to implement, avoiding the inefficiency of hand-crafted descriptors. Moreover, by introducing local image features and high-dimensional depth feature representations, it alleviates the sensitivity of low-dimensional pixel features to illumination changes and noise and improves the robustness of subsequent feature matching.
In this application embodiment, a CNN with a skip-connection self-encoder structure serves as the self-supervised optimization model to learn the optimization residual, and self-supervised losses covering content reconstruction and gradient reconstruction constrain network training. This effectively reduces image artifacts and stitching errors in the stitched region, improves stitched-image quality, and sidesteps the difficulty of obtaining stitching ground truth. It further resolves illumination artifacts and stitching breaks in the stitched region, mitigates the influence of inaccurate homography estimation, scene depth and illumination changes across viewpoints on stitching quality, and thus better meets the downstream application requirements of the stitched image.
In order to implement the method according to the embodiment of the present application, the embodiment of the present application further provides an image stitching device, where the image stitching device corresponds to the image stitching method, and each step in the embodiment of the image stitching method is also completely applicable to the embodiment of the image stitching device.
As shown in fig. 5, the image stitching apparatus includes: an acquisition module 501 and an optimization module 502. The acquisition module 501 is configured to acquire an initial stitched image, wherein the initial stitched image is obtained by performing feature extraction, feature matching and image transformation on a first image and a second image to be stitched; the optimization module 502 is configured to convert the initial stitched image into a target stitched image based on a trained self-supervised optimization model; the trained self-supervised optimization model is used to determine an optimization residual for the initial stitched image and add the optimization residual to the initial stitched image to obtain the target stitched image.
In some embodiments, the optimization module 502 is further configured to:
train the self-supervised optimization model to be optimized based on training samples;
determine that the number of training iterations of the self-supervised optimization model has reached a set count, or that the loss value of the loss function has converged, to obtain the trained self-supervised optimization model;
wherein the training samples comprise paired images to be stitched and the initial stitched images corresponding to them; the loss function is determined based on a content loss value and a gradient loss value, the content loss value characterizing the pixel difference between the optimized stitched image and the corresponding images to be stitched, and the gradient loss value characterizing the pixel-gradient difference between the optimized stitched image and the corresponding image to be stitched.
In some embodiments, the self-supervised optimization model is a CNN with a self-encoder structure that includes skip connections.
In some embodiments, the acquisition module 501 is specifically configured to:
perform feature extraction on the first image and the second image to be stitched, respectively;
determine a homography matrix between the first image and the second image based on the extracted features;
and map the first image and the second image into the same image coordinate system based on the homography matrix to obtain the initial stitched image.
In some embodiments, the acquisition module 501 performs feature extraction on the first image and the second image to be stitched respectively by:
performing Harris corner extraction on the first image and the second image to be stitched, respectively, to obtain a first corner set of the first image and a second corner set of the second image;
performing feature extraction on the first image and the second image based on a parameter-shared CNN to obtain a first feature map of the first image and a second feature map of the second image;
performing position drilling on the first feature map based on each corner in the first corner set to obtain a feature vector for each corner in the first corner set;
and performing position drilling on the second feature map based on each corner in the second corner set to obtain a feature vector for each corner in the second corner set.
In some embodiments, the acquisition module 501 determines the homography matrix between the first image and the second image based on the extracted features by:
performing feature matching based on the feature vectors of the corners in the first corner set and the feature vectors of the corners in the second corner set to obtain a matching feature set between the first image and the second image;
and determining the homography matrix between the first image and the second image based on the matching feature set, using the random sample consensus (RANSAC) algorithm.
In some embodiments, the acquisition module 501 performs feature matching based on the feature vectors of the corners in the first corner set and the feature vectors of the corners in the second corner set to obtain the matching feature set between the first image and the second image by:
traversing each corner in the first corner set and finding, for each, its matching corner in the second corner set;
obtaining the matching feature set between the first image and the second image based on the corner pairs matched between the first corner set and the second corner set;
wherein a matched corner pair is one whose two corners' feature vectors are at the shortest distance from each other.
In practical applications, the acquisition module 501 (corresponding to the feature extraction, homography calculation and image mapping modules in fig. 2) and the optimization module 502 (corresponding to the self-supervised optimization module in fig. 2) may be implemented by a processor in the image stitching device. Of course, the processor needs to run a computer program in memory to realize their functions.
It should be noted that: in the image stitching device provided in the above embodiment, only the division of each program module is used for illustration, and in practical application, the processing allocation may be performed by different program modules according to needs, that is, the internal structure of the device is divided into different program modules, so as to complete all or part of the processing described above. In addition, the image stitching device and the image stitching method provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the image stitching device and the image stitching method are detailed in the method embodiments and are not repeated herein.
Based on the hardware implementation of the program modules, and in order to implement the method of the embodiment of the present application, the embodiment of the present application further provides an electronic device, which is configured to execute the foregoing image stitching method. Fig. 6 shows only an exemplary structure of the electronic device, not all of which may be implemented as needed.
As shown in fig. 6, an electronic device 600 provided in an embodiment of the present application includes: at least one processor 601, a memory 602, a user interface 603 and at least one network interface 604. The various components in the electronic device 600 are coupled together by a bus system 605. It is understood that the bus system 605 is used to enable connected communications between these components. The bus system 605 includes a power bus, a control bus, and a status signal bus in addition to a data bus. But for clarity of illustration the various buses are labeled as bus system 605 in fig. 6.
The user interface 603 may include, among other things, a display, keyboard, mouse, trackball, click wheel, keys, buttons, touch pad, or touch screen, etc.
The memory 602 in embodiments of the application is used to store various types of data to support the operation of the electronic device. Examples of such data include: any computer program for operating on an electronic device.
The image stitching method disclosed by the embodiment of the application can be applied to the processor 601 or realized by the processor 601. The processor 601 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the image stitching method may be performed by integrated logic circuitry of hardware in the processor 601 or instructions in the form of software. The processor 601 may be a general purpose processor, a digital signal processor (DSP, digital Signal Processor), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. Processor 601 may implement or perform the methods, steps and logic blocks disclosed in embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in the embodiment of the application can be directly embodied in the hardware of the decoding processor or can be implemented by combining hardware and software modules in the decoding processor. The software module may be located in a storage medium, where the storage medium is located in the memory 602, and the processor 601 reads information in the memory 602, and in combination with its hardware, performs the steps of the image stitching method provided in the embodiment of the present application.
In an exemplary embodiment, the electronic device may be implemented by one or more application specific integrated circuits (ASIC, application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, programmable Logic Device), complex programmable logic devices (CPLD, complex Programmable Logic Device), field programmable gate arrays (FPGA, field Programmable Gate Array), general purpose processors, controllers, microcontrollers (MCU, micro Controller Unit), microprocessors (Microprocessor), or other electronic components for performing the aforementioned methods.
It is to be appreciated that the memory 602 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory. The nonvolatile memory may be Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Ferroelectric Random Access Memory (FRAM), Flash Memory, magnetic surface memory, an optical disk, or Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk memory or tape memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory described in the embodiments of the present application is intended to comprise, without being limited to, these and any other suitable types of memory.
In an exemplary embodiment, the present application further provides a computer storage medium, which may be a computer-readable storage medium, for example, the memory 602 storing a computer program, where the computer program may be executed by the processor 601 of the electronic device to perform the steps described in the method of the embodiments of the present application. The computer-readable storage medium may be a ROM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic surface memory, an optical disk, or a CD-ROM.
It should be noted that: "first," "second," etc. are used to distinguish similar objects and not necessarily to describe a particular order or sequence.
In addition, the embodiments of the present application may be combined arbitrarily, provided no conflict arises.
The foregoing is merely a specific implementation of the present application, and the present application is not limited thereto; any person skilled in the art can readily conceive of variations or substitutions within the technical scope disclosed herein, which shall fall within the protection scope of the present application. Therefore, the protection scope of the application shall be subject to the protection scope of the claims.

Claims (10)

1. An image stitching method, comprising:
acquiring an initial stitched image, wherein the initial stitched image is a stitched image obtained by performing feature extraction, feature matching and image transformation on a first image and a second image to be stitched;
converting the initial stitched image into a target stitched image based on a trained self-supervised optimization model; wherein the trained self-supervised optimization model is used for determining an optimization residual of the initial stitched image, and adding the optimization residual to the initial stitched image to obtain the target stitched image.
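As an illustrative aside (not part of the claimed disclosure), the refinement step of claim 1 reduces to a few lines: the trained model predicts an optimization residual that is added back onto the initial stitched image. A minimal PyTorch sketch, assuming a model whose output has the same shape as its input; the names refine and model are hypothetical:

```python
import torch

def refine(initial_stitched: torch.Tensor, model: torch.nn.Module) -> torch.Tensor:
    # initial_stitched: (N, C, H, W); the model is assumed to predict an
    # optimization residual of the same shape, per claim 1's description
    with torch.no_grad():
        residual = model(initial_stitched)
    # target stitched image = initial stitched image + optimization residual
    return initial_stitched + residual
```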
2. The method according to claim 1, wherein the method further comprises:
training a self-supervised optimization model to be optimized based on training samples;
obtaining the trained self-supervised optimization model in a case where the number of training iterations of the self-supervised optimization model reaches a set number or the loss value of the loss function converges;
wherein the training samples comprise paired images to be stitched and initial stitched images corresponding to the paired images to be stitched; the loss function is determined based on a content loss value and a gradient loss value, the content loss value characterizing the pixel difference between the optimized stitched image and the corresponding images to be stitched, and the gradient loss value characterizing the pixel gradient difference between the optimized stitched image and the corresponding images to be stitched.
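For illustration only, the loss of claim 2 can be sketched as a weighted sum of a content (pixel-difference) term and a gradient-difference term. The sketch below assumes L1 distances, a validity mask marking where a warped source image covers the stitched canvas, and weights w_c and w_g; all of these are assumptions, not specifics from the patent:

```python
import torch
import torch.nn.functional as F

def image_gradients(x: torch.Tensor):
    # forward differences along width and height
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    return dx, dy

def stitch_loss(optimized, warped_source, mask, w_c=1.0, w_g=1.0):
    # content loss: pixel difference inside the region the source image covers
    content = F.l1_loss(optimized * mask, warped_source * mask)
    # gradient loss: pixel gradient difference over the same region
    dx_o, dy_o = image_gradients(optimized * mask)
    dx_s, dy_s = image_gradients(warped_source * mask)
    gradient = F.l1_loss(dx_o, dx_s) + F.l1_loss(dy_o, dy_s)
    return w_c * content + w_g * gradient
```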
3. The method according to claim 2, wherein
the self-supervised optimization model is a CNN with an autoencoder structure including skip connections.
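One plausible reading of claim 3, sketched below, is a small U-Net-style encoder-decoder; the layer sizes are illustrative guesses, since the claim specifies nothing beyond an autoencoder CNN with skip connections:

```python
import torch.nn as nn

class SkipAutoencoder(nn.Module):
    # minimal autoencoder-style CNN with one skip connection; assumes
    # inputs with even height and width so shapes line up after upsampling
    def __init__(self, ch: int = 3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.out = nn.Conv2d(32, ch, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)           # full-resolution features
        e2 = self.enc2(e1)          # downsampled bottleneck
        d1 = self.dec1(e2)          # upsampled back to full resolution
        return self.out(d1 + e1)    # skip connection: add encoder features
```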
4. The method of claim 1, wherein the acquiring the initial stitched image comprises:
performing feature extraction on the first image and the second image to be stitched, respectively;
determining a homography matrix between the first image and the second image based on the extracted features;
and mapping the first image and the second image to the same image coordinate system based on the homography matrix to obtain the initial stitched image.
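By way of a hedged example, the mapping step of claim 4 can be realized with OpenCV's warpPerspective: warp the second image into the first image's coordinate system using the homography H, then place the first image on the shared canvas. The canvas-size heuristic and the overwrite compositing are assumptions for illustration only:

```python
import cv2
import numpy as np

def initial_stitch(img1: np.ndarray, img2: np.ndarray, H: np.ndarray) -> np.ndarray:
    # H is assumed to map img2 coordinates into img1's coordinate system
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (w1 + w2, max(h1, h2)))
    canvas[:h1, :w1] = img1   # naive overwrite; seams are left to later refinement
    return canvas
```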
5. The method of claim 4, wherein the performing feature extraction on the first image and the second image to be stitched respectively comprises:
extracting Harris corner points from the first image and the second image to be stitched, respectively, to obtain a first corner point set of the first image and a second corner point set of the second image;
performing feature extraction on the first image and the second image to be stitched based on a parameter-sharing CNN to obtain a first feature map of the first image and a second feature map of the second image;
sampling the first feature map at the position of each corner point in the first corner point set to obtain a feature vector of each corner point in the first corner point set;
and sampling the second feature map at the position of each corner point in the second corner point set to obtain a feature vector of each corner point in the second corner point set.
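A sketch of claim 5's two-branch extraction, under the assumption that sampling the feature map at each corner position means indexing the CNN's dense output at the (down-scaled) corner coordinates; the Harris threshold and the stride are illustrative values:

```python
import cv2
import numpy as np

def harris_corners(gray: np.ndarray, thresh: float = 0.01) -> np.ndarray:
    # Harris response; keep locations above a fraction of the peak response
    r = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(r > thresh * r.max())
    return np.stack([xs, ys], axis=1)        # (N, 2) corner coordinates (x, y)

def corner_descriptors(feature_map: np.ndarray, corners: np.ndarray, stride: int = 1) -> np.ndarray:
    # feature_map: (C, H', W') dense CNN features; stride converts image
    # coordinates to feature-map coordinates (assumes corners stay in bounds)
    xs = corners[:, 0] // stride
    ys = corners[:, 1] // stride
    return feature_map[:, ys, xs].T          # (N, C): one feature vector per corner
```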
6. The method of claim 5, wherein determining a homography matrix between the first image and the second image based on the extracted features comprises:
performing feature matching based on the feature vector of each corner point in the first corner point set and the feature vector of each corner point in the second corner point set to obtain a matched feature set between the first image and the second image;
and determining the homography matrix between the first image and the second image based on the matched feature set by using a random sample consensus (RANSAC) algorithm.
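The RANSAC step of claim 6 maps directly onto OpenCV's findHomography; a short sketch follows, with the 3.0-pixel reprojection threshold being an illustrative choice rather than a value from the patent:

```python
import cv2
import numpy as np

def estimate_homography(pts1: np.ndarray, pts2: np.ndarray):
    # pts1, pts2: matched (N, 2) corner coordinates in the two images
    H, inlier_mask = cv2.findHomography(
        np.float32(pts1), np.float32(pts2),
        method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return H, inlier_mask
```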
7. The method according to claim 6, wherein the performing feature matching based on the feature vector of each corner point in the first corner point set and the feature vector of each corner point in the second corner point set to obtain a matched feature set between the first image and the second image comprises:
traversing each corner point in the first corner point set and finding, for each, the matching corner point in the second corner point set;
obtaining the matched feature set between the first image and the second image based on the matched corner point pairs between the first corner point set and the second corner point set;
wherein a matched corner point pair is a pair of corner points whose feature vectors are at the shortest distance from each other.
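As a sketch of claim 7's matching rule, each corner in the first set is paired with the corner in the second set whose feature vector is nearest in Euclidean distance; a brute-force version, one-directional as the claim describes (no mutual-consistency check is assumed):

```python
import numpy as np

def match_by_nearest(desc1: np.ndarray, desc2: np.ndarray):
    # desc1: (N1, C), desc2: (N2, C) corner feature vectors
    # pairwise squared Euclidean distances, shape (N1, N2)
    d = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(axis=-1)
    nearest = d.argmin(axis=1)               # index into the second corner set
    return [(i, int(j)) for i, j in enumerate(nearest)]
```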
8. An image stitching device, comprising:
an acquisition module, configured to acquire an initial stitched image, wherein the initial stitched image is a stitched image obtained by performing feature extraction, feature matching and image transformation on a first image and a second image to be stitched;
an optimization module, configured to convert the initial stitched image into a target stitched image based on a trained self-supervised optimization model; wherein the trained self-supervised optimization model is used for determining an optimization residual of the initial stitched image, and adding the optimization residual to the initial stitched image to obtain the target stitched image.
9. An electronic device, comprising: a processor and a memory for storing a computer program capable of running on the processor, wherein,
the processor being configured to perform the steps of the method of any one of claims 1 to 7 when running the computer program.
10. A computer storage medium having a computer program stored thereon, which, when executed by a processor, implements the steps of the method according to any of claims 1 to 7.
CN202211658143.2A 2022-12-22 2022-12-22 Image stitching method, device, equipment and storage medium Pending CN116912467A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211658143.2A CN116912467A (en) 2022-12-22 2022-12-22 Image stitching method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116912467A true CN116912467A (en) 2023-10-20

Family

ID=88361486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211658143.2A Pending CN116912467A (en) 2022-12-22 2022-12-22 Image stitching method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116912467A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117764825A (en) * 2024-02-22 2024-03-26 南京派光智慧感知信息技术有限公司 multi-tunnel image stitching algorithm, electronic equipment and readable storage medium
CN117764825B (en) * 2024-02-22 2024-04-26 南京派光智慧感知信息技术有限公司 Multi-tunnel image stitching algorithm, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
US11232286B2 (en) Method and apparatus for generating face rotation image
Zhao et al. Alike: Accurate and lightweight keypoint detection and descriptor extraction
US11663691B2 (en) Method and apparatus for restoring image
CN111402130B (en) Data processing method and data processing device
Liao et al. Model-free distortion rectification framework bridged by distortion distribution map
CN111860398B (en) Remote sensing image target detection method and system and terminal equipment
CN111368717B (en) Line-of-sight determination method, line-of-sight determination device, electronic apparatus, and computer-readable storage medium
CN111915483B (en) Image stitching method, device, computer equipment and storage medium
CN110706262B (en) Image processing method, device, equipment and storage medium
CN113674146A (en) Image super-resolution
CN111080699B (en) Monocular vision odometer method and system based on deep learning
CN113724135A (en) Image splicing method, device, equipment and storage medium
Wang et al. Bifuse++: Self-supervised and efficient bi-projection fusion for 360 depth estimation
CN116912467A (en) Image stitching method, device, equipment and storage medium
Shen et al. Distortion-tolerant monocular depth estimation on omnidirectional images using dual-cubemap
CN115661336A (en) Three-dimensional reconstruction method and related device
CN113516697B (en) Image registration method, device, electronic equipment and computer readable storage medium
Hutchcroft et al. CoVisPose: Co-visibility pose transformer for wide-baseline relative pose estimation in 360∘ indoor panoramas
US20230098437A1 (en) Reference-Based Super-Resolution for Image and Video Enhancement
CN112711984B (en) Fixation point positioning method and device and electronic equipment
CN110689609B (en) Image processing method, image processing device, electronic equipment and storage medium
CN115482285A (en) Image alignment method, device, equipment and storage medium
Mao et al. A deep learning approach to track Arabidopsis seedlings’ circumnutation from time-lapse videos
Chen et al. Screen image segmentation and correction for a computer display
Zhang et al. Image mosaic of bionic compound eye imaging system based on image overlap rate prior

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination