CN114821712A - Face recognition image fusion method - Google Patents

Face recognition image fusion method

Info

Publication number
CN114821712A
CN114821712A
Authority
CN
China
Prior art keywords
image
resolution image
resolution
gray
histogram
Prior art date
Legal status
Pending
Application number
CN202210359544.1A
Other languages
Chinese (zh)
Inventor
王文峰 (Wang Wenfeng)
王玉莹 (Wang Yuying)
张晶晶 (Zhang Jingjing)
Current Assignee
Shanghai Institute of Technology
Original Assignee
Shanghai Institute of Technology
Priority date
Filing date
Publication date
Application filed by Shanghai Institute of Technology filed Critical Shanghai Institute of Technology
Priority to CN202210359544.1A priority Critical patent/CN114821712A/en
Publication of CN114821712A publication Critical patent/CN114821712A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/251 Fusion techniques of input or preprocessed data

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face recognition image fusion method comprising the following steps. S1: acquire a plurality of images to be recognized as low-spatial-resolution images, and acquire an image from a pre-stored image library as a high-resolution image. S2: perform a PCA (principal component analysis) transform on the low-spatial-resolution images to obtain a principal component image group. S3: perform gray-scale stretching on the high-resolution image, and replace the first component image in the principal component image group with the gray-stretched high-resolution image to obtain a replacement image group; the gray-stretched high-resolution image has the same mean gray value as the principal component image group. S4: perform an inverse PCA transform on the replacement image group to obtain a fused image. Because variance is used to compare the amount of information during image fusion, the method is not easily disturbed by outside interference and has high accuracy. The principal components of the image data are mutually orthogonal, which cancels out mutual interference among the original data. The method is simple to operate, computationally light and low in time cost, so image fusion can be performed quickly for subsequent face recognition.

Description

Face recognition image fusion method
Technical Field
The invention belongs to the field of image recognition, and particularly relates to a face recognition image fusion method.
Background
Existing face recognition systems mainly use big data and artificial intelligence to compare the faces in collected static images or videos against the face data in a database. This imposes high requirements on the captured face image, which must not be heavily occluded. In particular, under epidemic conditions, because people wear masks, prior-art face recognition systems have limited recognition capability.
It is therefore necessary to address the problem that, when a user wears a mask, most facial features are lost: more than half of the face is occluded by the mask, only part of the facial features remain available as input to the computer, and the execution efficiency of machine vision is correspondingly low.
Disclosure of Invention
The invention aims to provide a face recognition image fusion method to solve the problem of low recognition efficiency caused by few facial features.
In order to solve the problems, the technical scheme of the invention is as follows:
A face recognition image fusion method comprises the following steps:
S1: acquiring a plurality of images to be recognized as low-spatial-resolution images, and acquiring an image from a pre-stored image library as a high-resolution image;
S2: carrying out a PCA (principal component analysis) transform on the low-spatial-resolution images to obtain a principal component image group;
S3: carrying out gray-scale stretching on the high-resolution image, and replacing the first component image in the principal component image group with the gray-stretched high-resolution image to obtain a replacement image group;
the gray-stretched high-resolution image having the same mean gray value as the principal component image group;
S4: carrying out an inverse PCA transform on the replacement image group to obtain a fused image.
Step S2 specifically comprises the following steps (a code sketch follows the list):
S21: reading the RGB values of the low-spatial-resolution image and constructing the corresponding three-dimensional column vectors, wherein each three-dimensional column vector comprises a pixel position of the low-spatial-resolution image and the corresponding RGB values;
S22: calculating the average value of the three-dimensional column vectors;
S23: calculating a covariance matrix from the average value of the three-dimensional column vectors;
S24: extracting the eigenvectors of the covariance matrix to form the coefficient matrix of the PCA transform;
S25: multiplying the coefficient matrix by the three-dimensional column vectors to obtain a first component image, a second component image and a third component image, wherein the principal component image group comprises the first, second and third component images.
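For illustration only, a minimal Python/NumPy sketch of S21-S25 follows; the array layout, the (N-1) normalization of the covariance, and the function name pca_transform are assumptions, not part of the patent text.

```python
import numpy as np

def pca_transform(low_res_rgb):
    """Sketch of S21-S25: PCA transform of an RGB image (uint8, shape (h, w, 3))."""
    h, w, _ = low_res_rgb.shape
    # S21: one three-dimensional column vector [R, G, B] per pixel position (i, j)
    X = low_res_rgb.reshape(-1, 3).astype(np.float64).T      # shape (3, h*w)
    # S22: average value of the three-dimensional column vectors
    mean = X.mean(axis=1, keepdims=True)
    # S23: covariance matrix of the centered vectors
    Xc = X - mean
    cov = Xc @ Xc.T / (Xc.shape[1] - 1)
    # S24: eigenvectors of the covariance matrix form the coefficient matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    V = eigvecs[:, np.argsort(eigvals)[::-1]]                # largest variance first
    # S25: coefficient matrix times column vectors gives the three component images
    pcs = (V.T @ Xc).reshape(3, h, w)
    return pcs, V, mean
```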
In step S3, the gray-scale stretching of the high-resolution image specifically comprises the following steps (a code sketch follows the list):
S31: reading the high-resolution image and drawing the corresponding histogram;
S32: setting a linear stretching function and its coefficients to gray-stretch the high-resolution image;
S33: drawing corresponding histograms for the high-resolution image and for the gray-stretched high-resolution image;
the histograms comprise the gray values and the occurrence probability of each gray level.
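A minimal sketch of S31-S33, assuming an 8-bit grayscale image and treating the stretch coefficients a and b of the linear function ax + b (described further below) as given set values:

```python
import numpy as np

def gray_stretch(gray, a, b):
    """Sketch of S31-S33: linear gray-scale stretch y = a*x + b on an 8-bit image."""
    # S31: histogram of the input image (occurrence probability of each gray level)
    hist_in = np.bincount(gray.ravel(), minlength=256) / gray.size
    # S32: apply the linear stretching function to every pixel, clamped to [0, 255]
    stretched = np.clip(a * gray.astype(np.float64) + b, 0, 255).astype(np.uint8)
    # S33: histogram of the stretched image, for comparison with hist_in
    hist_out = np.bincount(stretched.ravel(), minlength=256) / stretched.size
    return stretched, hist_in, hist_out
```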
In step S3, replacing the first component image in the principal component image group with the gray-stretched high-resolution image to obtain the replacement image group specifically comprises the following steps (a code sketch follows the list):
S34: acquiring and normalizing the first component image to obtain a histogram to be matched;
S35: accumulating the histogram to be matched to obtain a cumulative histogram to be matched;
S36: acquiring the histogram of the gray-stretched high-resolution image, calculating its average value, and then accumulating to obtain a cumulative histogram to be processed;
S37: finding the mapping points with the shortest distance between the cumulative histogram to be matched and the cumulative histogram to be processed;
S38: mapping the gray-stretched high-resolution image based on the mapping points to obtain new gray levels, thereby replacing the first component image.
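A sketch of S34-S38 under the assumption of standard histogram specification; the nearest-CDF-point search and the quantization of the (possibly real-valued) first component to 8-bit levels are illustrative choices, not taken from the patent text.

```python
import numpy as np

def histogram_match(stretched, first_component):
    """Sketch of S34-S38: map the gray-stretched image onto the first component's histogram."""
    # S34-S35: normalized, then cumulative, histogram of the first component (target);
    # PCA components may be real-valued, so they are quantized to 0..255 here (assumption)
    fc = np.clip(first_component, 0, 255).astype(np.uint8)
    cdf_target = np.cumsum(np.bincount(fc.ravel(), minlength=256) / fc.size)
    # S36: cumulative histogram of the gray-stretched high-resolution image (source)
    cdf_source = np.cumsum(np.bincount(stretched.ravel(), minlength=256) / stretched.size)
    # S37: for each source gray level, find the target level with the closest CDF value
    mapping = np.argmin(np.abs(cdf_source[:, None] - cdf_target[None, :]), axis=1)
    # S38: apply the mapping to obtain the new gray levels that replace the first component
    return mapping[stretched].astype(np.float64)
```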
A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements a face recognition image fusion method as in any one of the above.
A computer device comprising a memory, a processor, and a computer program stored on the memory and callable by the processor, wherein the processor, when executing the computer program, implements the face recognition image fusion method as described above.
Due to the adoption of the above technical scheme, the invention has the following advantages and positive effects compared with the prior art:
1) variance is used to compare the amount of information during image fusion, so the method is not easily disturbed by outside interference and has high accuracy;
2) the principal components of the image data are mutually orthogonal, which cancels out mutual interference among the original data;
3) the method is simple to operate, computationally light and low in time cost, so image fusion can be performed quickly for subsequent face recognition.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
FIG. 1 is a flow chart of a face recognition image fusion method of the present invention;
FIG. 2 is a structural block diagram of a face recognition image fusion method of the present invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention, and they do not represent the actual structure as a product. Moreover, in the interest of brevity and understanding, only one of the components having the same structure or function is illustrated schematically or designated in some of the drawings. In this document, "one" means not only "only one" but also a case of "more than one".
The following describes a face recognition image fusion method according to the present invention in further detail with reference to the accompanying drawings and specific embodiments. Advantages and features of the present invention will become apparent from the following description and from the claims.
Example 1
Referring to fig. 1 and fig. 2, the embodiment provides a face recognition image fusion method, including the following steps:
first, in step S1, a number of images to be recognized are acquired as low spatial resolution images, and an image is acquired from a library of pre-stored images as a high resolution image.
Then, in step S2, a PCA transform is performed on the low-spatial-resolution image to obtain the principal component image group. Specifically, the RGB values of the low-spatial-resolution image are read and the corresponding three-dimensional column vectors are constructed, where each three-dimensional column vector contains a pixel position of the low-spatial-resolution image and the corresponding RGB values, written as $[\mathrm{low\_R}(i,j), \mathrm{low\_G}(i,j), \mathrm{low\_B}(i,j)]^T$, with $(i, j)$ a coordinate point of the low-spatial-resolution image and low_R, low_G, low_B denoting the different color channels. The three-dimensional column vectors of all pixels are summed and divided by the number of pixels to obtain the average value of the three-dimensional column vectors. The covariance matrix is then calculated from the three-dimensional column vectors and their average value using the formula $\mathrm{cov}(x, y) = E(xy) - E(x)E(y)$. Next, the eigenvectors of the covariance matrix are extracted to form the coefficient matrix of the PCA transform, with the corresponding eigenvalues giving the ordering of the components. Finally, the coefficient matrix is multiplied by the three-dimensional column vectors to obtain the first, second and third component images, which together form the principal component image group; the first component image is the one corresponding to the largest eigenvalue.
The PCA transform, also called the K-L (Karhunen-Loeve) transform or Hotelling transform, is explained here. Its central idea is to project the original data onto new coordinates by way of a linear projection. The projected principal components are then uncorrelated, and the new components are ordered by the amount of information they carry: principal component number 1 carries the most information, and each subsequent component carries less. The basic idea of the fusion is to transform the image and then substitute the stretched high-resolution image for the former first principal component before applying the inverse transform. The fused picture then has higher resolution and contains more detail. In one sentence: PCA is a method of projecting data into a low-dimensional subspace by a linear transformation with minimal loss of data.
The PCA transform can be derived as follows. First define:

sample $x_i = [x_{i1}, x_{i2}, x_{i3}, \dots, x_{iM}]_{1 \times M}$

dataset $X = [x_1, x_2, x_3, \dots, x_N]_{N \times M}$

sample $v_i^T = [v_{i1}, v_{i2}, v_{i3}, \dots, v_{iM}]_{1 \times M}$

matrix $V^T = [v_1, v_2, v_3, \dots, v_J]_{M \times J}$

Assuming that the dataset $X$ has been centered (decentralized), the linear transformation is expressed as the dot product of $V^T$ and $X$, and the variance is calculated as

$$S = (V^T X)(V^T X)^T = V^T X X^T V$$

To obtain the largest variance while requiring that $V$ consist of unit vectors, we can write

$$\max_V \; V^T X X^T V \quad \text{s.t.} \quad V^T V = 1$$

By the Lagrange multiplier method we obtain

$$L(V, \lambda) = V^T X X^T V - \lambda (V^T V - 1)$$

Setting the derivative with respect to $V$ to zero gives

$$\frac{\partial L}{\partial V} = 2 X X^T V - 2 \lambda V = 0$$

which resolves to $X X^T V = \lambda V$.

This is exactly the equation for finding eigenvalues and eigenvectors, where $\lambda$ is an eigenvalue and $V$ is an eigenvector of $X X^T$. Left-multiplying both sides by $V^T$ gives

$$V^T X X^T V = \lambda V^T V = \lambda$$

It follows that the eigenvalue $\lambda$ represents the variance $S$, the eigenvector of $X X^T$ is $V$, and the target is the linear transformation rule $V$ (a numerical check of this conclusion follows).
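The conclusion that the eigenvalue λ equals the variance S can be verified numerically; a small NumPy check under the same centering assumption (random data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 1000))
X -= X.mean(axis=1, keepdims=True)          # center the dataset, as assumed above
eigvals, eigvecs = np.linalg.eigh(X @ X.T)  # solves XX^T V = lambda V
v = eigvecs[:, -1]                          # unit eigenvector of the largest eigenvalue
S = v @ X @ X.T @ v                         # S = V^T X X^T V
print(np.isclose(S, eigvals[-1]))           # True: the variance equals lambda
```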
Next, the process proceeds to step S3, where the previously obtained high-resolution image is gray-stretched so that the mean gray value of the gray-stretched high-resolution image is the same as that of the principal component image group. The first component image in the principal component image group is then replaced by the gray-stretched high-resolution image to obtain the replacement image group. Gray stretching is an image enhancement algorithm and one of the linear point operations. Gray stretching, also known as contrast stretching, is a simple linear point operation that expands the histogram of the image to fill the entire gray-scale range.
The procedure is as follows. First the high-resolution image is read and its size measured; a vector for storing the occurrence probability of each gray level is constructed, the occurrence probability of each level is calculated and stored in the vector, and the corresponding histogram is drawn once the calculation is finished. Then a linear stretching function ax + b is set, where the coefficients a and b are preset values and x is the gray value of a pixel; each pixel of the high-resolution image is gray-stretched accordingly. Corresponding histograms are drawn for the high-resolution image and for the gray-stretched high-resolution image; the histograms contain information such as the gray values and the occurrence probability of each gray level.
Then the first component image is acquired and normalized to obtain the histogram to be matched, which is accumulated to obtain the cumulative histogram to be matched. Similarly, the histogram of the gray-stretched high-resolution image is acquired, its average value is calculated, and it is then accumulated to obtain the cumulative histogram to be processed. The mapping points with the shortest distance between the cumulative histogram to be matched and the cumulative histogram to be processed are found; the gray-stretched high-resolution image is mapped based on these mapping points to obtain new gray levels, so that the first component image of the low-resolution panchromatic image group is replaced by a high-resolution one.
Finally, an inverse PCA transform is performed on the replacement image group to obtain the fused image. The derivation of the inverse PCA transform starts from the relation between the covariance matrix and the correlation coefficient matrix:
the covariance matrix of the standardized matrix (mean 0 and variance 1) is the correlation coefficient matrix of the original matrix (this follows from the defining formulas of the two matrices; a short numerical check follows).
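This relation can be checked directly; a short NumPy sketch (rows are variables, matching np.corrcoef's default convention; random data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 500))
# standardize each row to zero mean and unit variance
Z = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
cov_Z = Z @ Z.T / Z.shape[1]       # covariance matrix of the standardized matrix
corr_X = np.corrcoef(X)            # correlation coefficient matrix of the original
print(np.allclose(cov_Z, corr_X))  # True
```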
In essence, PCA finds a transformation matrix such that the data of the original space, projected into the transformed space, have larger variance (the larger the variance, the more information is contained). In outline, PCA sequentially finds a set of mutually orthogonal axes in the original space: the first axis is the direction that maximizes variance; the second axis is the direction that maximizes variance within the plane orthogonal to the first axis; the third axis maximizes variance within the plane orthogonal to the first and second axes; and so on. In an N-dimensional space we can thus find N such axes, and we keep the first r of them to approximate the space, compressing an N-dimensional space into an r-dimensional one while choosing the r axes so that the spatial compression minimizes the loss of data.
As can be seen from the above, the transformed data are uncorrelated across these axes because the axes are orthogonal (or the data are one-dimensional), so the covariance matrix (or correlation coefficient matrix) of the transformed matrix can be regarded as a diagonal matrix, i.e. every entry off the diagonal is 0. The derivation of the PCA rotation transformation matrix is based on this idea.
Assuming that the matrix $X$ has a transformation matrix $P$ and the transformed matrix is $Y = PX$, then

$$Y = PX$$

$$(n-1)\,S(Y) = Y Y^T$$

$$Y Y^T = (PX)(PX)^T = P X X^T P^T$$

With the eigendecomposition $X X^T = Q D Q^T$, this becomes

$$(n-1)\,S(Y) = P X X^T P^T = P Q D Q^T P^T = (PQ)\,D\,(PQ)^T$$

which is diagonal when

$$P = Q^T$$

where $X X^T = Q D Q^T$ is the eigendecomposition of $X X^T$. The transformation matrix is therefore the transpose of the eigenvector matrix $Q$ of the covariance matrix of $X$; with this, the rotation transformation matrix of PCA has been found (a code sketch of the inverse transform follows).
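Under these assumptions the inverse transform is just multiplication by the transpose (since P = Q^T is orthogonal, X = P^T Y); a minimal sketch of step S4 that consumes the (pcs, V, mean) produced by the pca_transform sketch above, with the function name pca_inverse being an illustrative choice:

```python
import numpy as np

def pca_inverse(pcs, V, mean):
    """Sketch of S4: inverse PCA transform, X = V Y + mean (V orthogonal)."""
    n_components, h, w = pcs.shape
    Y = pcs.reshape(n_components, -1)
    # V's columns are orthonormal eigenvectors, so the inverse of V^T is V itself
    X = V @ Y + mean
    rgb = X.T.reshape(h, w, n_components)
    return np.clip(rgb, 0, 255).astype(np.uint8)
```

In the fusion of step S4 this would be called after overwriting pcs[0] with the histogram-matched, gray-stretched high-resolution image, which mirrors the replacement image group described above.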
Example 2
Based on the same inventive concept as Embodiment 1, this embodiment further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the face recognition image fusion method of Embodiment 1.
The computer-readable storage medium in this embodiment stores a computer program that can be executed by a processor. When executed, the program first acquires a number of images to be recognized as low-spatial-resolution images and acquires an image from a pre-stored image library as the high-resolution image. Then a PCA transform is performed on the low-spatial-resolution images to obtain the principal component image group. Next, the high-resolution image is gray-stretched, and the first component image in the principal component image group is replaced with the gray-stretched high-resolution image to obtain the replacement image group. Finally, an inverse PCA transform is performed on the replacement image group to obtain the fused image.
Because variance is used to compare the amount of information during image fusion, the method is not easily disturbed by outside interference and has high accuracy; the principal components of the image data are mutually orthogonal, which cancels out mutual interference among the original data; and the method is simple to operate, computationally light and low in time cost, so image fusion can be performed quickly for subsequent face recognition.
Example 3
Based on the same inventive concept as Embodiment 1, this embodiment further provides a computer device comprising a memory, a processor, and a computer program stored on the memory and callable by the processor; when the processor executes the computer program, the face recognition image fusion method of Embodiment 1 is implemented.
While executing the face recognition image fusion method, the processor of the computer device in this embodiment first acquires a number of images to be recognized as low-spatial-resolution images and acquires an image from a pre-stored image library as the high-resolution image. Then a PCA transform is performed on the low-spatial-resolution images to obtain the principal component image group. Next, the high-resolution image is gray-stretched, and the first component image in the principal component image group is replaced with the gray-stretched high-resolution image to obtain the replacement image group. Finally, an inverse PCA transform is performed on the replacement image group to obtain the fused image.
Because variance is used to compare the amount of information during image fusion, the method is not easily disturbed by outside interference and has high accuracy; the principal components of the image data are mutually orthogonal, which cancels out mutual interference among the original data; and the method is simple to operate, computationally light and low in time cost, so image fusion can be performed quickly for subsequent face recognition.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments. Various changes may still be made to the present invention; they remain within its scope as long as they fall within the scope of the claims of the present invention and their equivalents.

Claims (6)

1. A face recognition image fusion method, characterized by comprising the following steps:
S1: acquiring a plurality of images to be recognized as low-spatial-resolution images, and acquiring an image from a pre-stored image library as a high-resolution image;
S2: carrying out a PCA transform on the low-spatial-resolution images to obtain a principal component image group;
S3: carrying out gray-scale stretching on the high-resolution image, and replacing the first component image in the principal component image group with the gray-stretched high-resolution image to obtain a replacement image group;
the gray-stretched high-resolution image having the same mean gray value as the principal component image group;
S4: carrying out an inverse PCA transform on the replacement image group to obtain a fused image.
2. The face recognition image fusion method according to claim 1, wherein step S2 specifically comprises the following steps:
S21: reading the RGB values of the low-spatial-resolution image and constructing the corresponding three-dimensional column vectors, wherein each three-dimensional column vector comprises a pixel position of the low-spatial-resolution image and the corresponding RGB values;
S22: calculating the average value of the three-dimensional column vectors;
S23: calculating a covariance matrix from the average value of the three-dimensional column vectors;
S24: extracting the eigenvectors of the covariance matrix to form the coefficient matrix of the PCA transform;
S25: multiplying the coefficient matrix by the three-dimensional column vectors to obtain the first component image, a second component image and a third component image, wherein the principal component image group comprises the first, second and third component images.
3. The face recognition image fusion method according to claim 1, wherein in step S3 the gray-scale stretching of the high-resolution image specifically comprises the following steps:
S31: reading the high-resolution image and drawing the corresponding histogram;
S32: setting a linear stretching function and its coefficients to gray-stretch the high-resolution image;
S33: drawing corresponding histograms for the high-resolution image and for the gray-stretched high-resolution image;
the histograms comprising the gray values and the occurrence probability of each gray level.
4. The face recognition image fusion method according to claim 3, wherein in step S3 replacing the first component image in the principal component image group with the gray-stretched high-resolution image to obtain the replacement image group specifically comprises the following steps:
S34: acquiring and normalizing the first component image to obtain a histogram to be matched;
S35: accumulating the histogram to be matched to obtain a cumulative histogram to be matched;
S36: acquiring the histogram of the gray-stretched high-resolution image, calculating its average value, and then accumulating to obtain a cumulative histogram to be processed;
S37: finding the mapping points with the shortest distance between the cumulative histogram to be matched and the cumulative histogram to be processed;
S38: mapping the gray-stretched high-resolution image based on the mapping points to obtain new gray levels, thereby replacing the first component image.
5. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the face recognition image fusion method according to any one of claims 1 to 4.
6. A computer device comprising a memory, a processor, and a computer program stored on the memory and callable by the processor, wherein the processor, when executing the computer program, implements the face recognition image fusion method according to any one of claims 1-4.
CN202210359544.1A 2022-04-07 2022-04-07 Face recognition image fusion method Pending CN114821712A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210359544.1A CN114821712A (en) 2022-04-07 2022-04-07 Face recognition image fusion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210359544.1A CN114821712A (en) 2022-04-07 2022-04-07 Face recognition image fusion method

Publications (1)

Publication Number Publication Date
CN114821712A 2022-07-29

Family

ID=82534391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210359544.1A Pending CN114821712A (en) 2022-04-07 2022-04-07 Face recognition image fusion method

Country Status (1)

Country Link
CN (1) CN114821712A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050094887A1 (en) * 2003-11-05 2005-05-05 Cakir Halil I. Methods, systems and computer program products for fusion of high spatial resolution imagery with lower spatial resolution imagery using correspondence analysis
CN102063710A (en) * 2009-11-13 2011-05-18 烟台海岸带可持续发展研究所 Method for realizing fusion and enhancement of remote sensing image
CN108875623A (en) * 2018-06-11 2018-11-23 辽宁工业大学 A kind of face identification method based on multi-features correlation technique

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Wei et al., "Comparison of the characteristics of different high-resolution remote sensing image fusion techniques", Donghai Marine Science (《东海海洋》), vol. 23, no. 1, pages 23-31 *

Similar Documents

Publication Publication Date Title
US7369687B2 (en) Method for extracting face position, program for causing computer to execute the method for extracting face position and apparatus for extracting face position
CN110097609B (en) Sample domain-based refined embroidery texture migration method
US20120121142A1 (en) Ultra-low dimensional representation for face recognition under varying expressions
CN107341776B (en) Single-frame super-resolution reconstruction method based on sparse coding and combined mapping
Tuzel et al. Global-local face upsampling network
Li et al. Local spectral similarity preserving regularized robust sparse hyperspectral unmixing
CN112967174A (en) Image generation model training method, image generation device and storage medium
Chowdhary et al. Singular value decomposition–principal component analysis-based object recognition approach
Zhang et al. Morphable model space based face super-resolution reconstruction and recognition
CN108537752B (en) Image processing method and device based on non-local self-similarity and sparse representation
Han et al. Local sparse structure denoising for low-light-level image
Gao et al. Face image super-resolution with pose via nuclear norm regularized structural orthogonal procrustes regression
CN107944497A (en) Image block method for measuring similarity based on principal component analysis
JP2006285570A (en) Similar image retrieval method, and similar image retrieval device
Su et al. Efficient and accurate face alignment by global regression and cascaded local refinement
Dutta et al. Weighted low rank approximation for background estimation problems
Chen et al. A novel face super resolution approach for noisy images using contour feature and standard deviation prior
Meng et al. A general framework for understanding compressed subspace clustering algorithms
CN114821712A (en) Face recognition image fusion method
CN110503606B (en) Method for improving face definition
US20160292529A1 (en) Image collation system, image collation method, and program
JP7301589B2 (en) Image processing device, image processing method, and program
CN110856014A (en) Moving image generation method, moving image generation device, electronic device, and storage medium
CN113506212A (en) Improved POCS-based hyperspectral image super-resolution reconstruction method
CN106780331B (en) Novel super-resolution method based on neighborhood embedding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20220729)