CN112163999A - Image reconstruction method and device, electronic equipment and readable storage medium - Google Patents

Image reconstruction method and device, electronic equipment and readable storage medium

Info

Publication number
CN112163999A
Authority
CN
China
Prior art keywords
image
noise reduction
pixel point
resolution
reduction weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011020356.3A
Other languages
Chinese (zh)
Other versions
CN112163999B (en)
Inventor
冯进亨 (Feng Jinheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oppo Chongqing Intelligent Technology Co Ltd
Original Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo Chongqing Intelligent Technology Co Ltd
Priority claimed from CN202011020356.3A
Publication of CN112163999A
Application granted
Publication of CN112163999B
Legal status: Active (granted)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4023Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image reconstruction method and apparatus, an electronic device and a readable storage medium. At least two frames of continuously acquired first-resolution images are obtained. The noise reduction weight of each pixel point in each image area is determined according to the type of the image area on each frame of first-resolution image and a preset mapping relation; the type of the image area comprises at least one of a flat area, an edge area and a corner area, and the mapping relation comprises correspondences between different image areas and different noise reduction weights. The at least two frames of first-resolution images are then up-sampled according to the noise reduction weight of each pixel point in each image area to generate a second-resolution image corresponding to the at least two frames of first-resolution images, wherein the resolution of the second-resolution image is higher than that of the first-resolution images. This solves the problems of image blurring and low image quality caused by excessive noise reduction that arise when a uniform noise reduction weight is used for up-sampling during image reconstruction.

Description

Image reconstruction method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image reconstruction method and apparatus, an electronic device, and a readable storage medium.
Background
With the development of mobile terminal technology, in order to meet the user's shooting requirement for high resolution images, the shooting technology in mobile terminals has also been rapidly developed. The digital zoom technology is widely applied to the mobile terminal, and the super-resolution image can be reconstructed through the digital zoom technology. The super-resolution image is a high-resolution image constructed by a digital zoom technology based on a low-resolution image, and the super-resolution image has higher pixel density and can provide more detailed features. Therefore, the super-resolution image can meet the higher and higher photographing requirements of users.
However, in the process of reconstructing a super-resolution image, the conventional method often has the problems of image blurring and low image quality caused by excessive noise reduction.
Disclosure of Invention
The embodiments of the application provide an image reconstruction method and apparatus, an electronic device and a readable storage medium, which can avoid the image blurring and low image quality caused by excessive noise reduction during image reconstruction.
A method of image reconstruction, the method comprising:
acquiring at least two continuously acquired first resolution images;
determining the noise reduction weight of each pixel point in each image area according to the type of the image area on each frame of first-resolution image and a preset mapping relation; wherein the type of the image area comprises at least one of a flat area, an edge area and a corner area; the mapping relation comprises corresponding relations between different image areas and different noise reduction weights;
according to the noise reduction weight of each pixel point in each image area, performing up-sampling on the at least two frames of first resolution images to generate second resolution images corresponding to the at least two frames of first resolution images; wherein the second resolution image has a higher resolution than the first resolution image.
An image reconstruction apparatus, characterized in that the apparatus comprises:
the first resolution image acquisition module is used for acquiring at least two frames of first resolution images which are continuously acquired;
the noise reduction weight determining module is used for determining the noise reduction weight of each pixel point in each image area according to the type of the image area on each frame of first resolution image and a preset mapping relation; wherein the type of the image area comprises at least one of a flat area, an edge area and a corner area; the mapping relation comprises corresponding relations between different image areas and different noise reduction weights;
the second resolution image generation module is used for performing up-sampling on the at least two frames of first resolution images according to the noise reduction weight of each pixel point in each image area to generate second resolution images corresponding to the at least two frames of first resolution images; wherein the second resolution image has a higher resolution than the first resolution image.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the image reconstruction method as described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image reconstruction method as described above.
The image reconstruction method and device, the electronic equipment and the readable storage medium acquire at least two continuously acquired first-resolution images. Determining the noise reduction weight of each pixel point in each image area according to the type of the image area on each frame of first-resolution image and a preset mapping relation; the type of the image area comprises at least one of a flat area, an edge area and a corner area; the mapping includes correspondence between different image regions and different noise reduction weights. According to the noise reduction weight of each pixel point in each image area, at least two frames of first resolution images are up-sampled, and second resolution images corresponding to the at least two frames of first resolution images are generated; wherein the second resolution image has a higher resolution than the first resolution image.
Because the user visually has different image quality requirements for the flat region, the edge region and the corner region of an image, the noise reduction weight of each pixel point in each image region is determined according to the type of the image region on each frame of first-resolution image and the preset mapping relation, so as to meet those requirements. Thus, different noise reduction weights are used for different image regions during up-sampling, and a second-resolution image corresponding to the at least two frames of first-resolution images is generated. This solves the problems of image blurring and low image quality caused by excessive noise reduction that arise when a uniform noise reduction weight is used for up-sampling during image reconstruction.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram of an exemplary embodiment of an image reconstruction method;
FIG. 2 is a flow diagram of a method for image reconstruction in one embodiment;
FIG. 3 is a diagram showing the correspondence between the two eigenvalues of the structure tensor of a pixel point and the image region category;
FIG. 4 is a flowchart illustrating a method for calculating noise reduction weights for pixels in each image region of the first resolution image shown in FIG. 2;
FIG. 5 is a flowchart of a method for upsampling the at least two frames of low resolution images to generate a second resolution image of FIG. 2;
FIG. 6 is a schematic illustration of image registration in one embodiment;
FIG. 7 is a flowchart of the method for generating a second resolution image of FIG. 5 based on the determined noise reduction weights for each sampled pixel in the neighborhood and the pixel values of the sampled pixels for upsampling;
FIG. 8 is a flow chart of a method of image reconstruction in an exemplary embodiment;
FIG. 9 is a block diagram showing the structure of an image reconstructing apparatus according to an embodiment;
FIG. 10 is a schematic diagram of an internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Fig. 1 is a view of an application scenario of an image reconstruction method in an embodiment. As shown in fig. 1, the application environment includes an electronic device 120. Acquiring at least two continuously acquired first-resolution images by using an image reconstruction method in the application; determining the noise reduction weight of each pixel point in each image area according to the type of the image area on each frame of first-resolution image and a preset mapping relation; the type of the image area comprises at least one of a flat area, an edge area and a corner area; the mapping relation comprises corresponding relations between different image areas and different noise reduction weights; according to the noise reduction weight of each pixel point in each image area, at least two frames of first resolution images are up-sampled, and second resolution images corresponding to the at least two frames of first resolution images are generated; wherein the second resolution image has a higher resolution than the first resolution image. Here, the electronic device 120 may be any terminal device such as a mobile phone, a tablet computer, a PDA (personal digital assistant), a wearable device, and the like.
FIG. 2 is a flow chart of an image reconstruction method in one embodiment. The image reconstruction method in the present embodiment is described by taking the electronic device 120 in fig. 1 as an example. As shown in fig. 2, the image reconstruction method includes steps 220 to 260. Wherein,
step 220, at least two frames of first resolution images are acquired.
The first resolution image may be a low resolution image, wherein the low resolution image is defined with respect to the high resolution image, i.e. the image resolution of the high resolution image is higher than the image resolution of the low resolution image. The super-resolution image is a high-resolution image constructed by a digital zoom technology based on a low-resolution image, and the super-resolution image has higher pixel density and can provide more detailed features. Image resolution generally refers to the ability of an imaging or display system to resolve details, representing the amount of information stored in an image.
In the super-resolution image reconstruction process, the electronic device first needs to acquire at least two continuously acquired first-resolution images. Since the super-resolution image reconstruction process is generally performed on images of the same scene, it is necessary to continuously acquire images of the same scene and then acquire at least two continuously acquired images, which are low-resolution images. Here, two frames of images may be acquired, or more than two frames of images may be acquired, which is not limited in the present application.
Step 240, determining the noise reduction weight of each pixel point in each image area according to the type of the image area on each frame of first resolution image and a preset mapping relation; the type of the image area comprises at least one of a flat area, an edge area and a corner area; the mapping includes correspondence between different image regions and different noise reduction weights.
In the super-resolution image reconstruction process, the electronic device needs to perform noise reduction processing on the image. Noise reduction refers to the process of reducing noise in a digital image. The acquired low-resolution image may be divided into different image regions according to different image region types, for example, when the type of the image region includes at least one of a flat region, an edge region, and a corner region, the acquired low-resolution image may be divided into the flat region, the edge region, or the corner region.
Since a preset mapping relationship is stored in the electronic device in advance, the mapping relationship includes correspondence between different image regions and different noise reduction weights. Therefore, the noise reduction weight of each pixel point in the image area can be determined according to the type of the image area on each frame of low-resolution image and the preset mapping relation.
In the process of determining the noise reduction weight of each pixel point in the image area, because the image quality requirements of the user on the flat area, the edge area and the corner area on the image are different visually, in order to meet the requirements of the user, different noise reduction weights can be configured for different image areas through preset mapping relations, that is, the noise reduction weight of each pixel point in different image areas is determined. For example, if the user has a low requirement on the image quality of the flat region, a large noise reduction weight can be set for the pixel points in the flat region; the user has high requirements on the image quality of the edge region and the corner region, and can set low noise reduction weight for pixel points in the edge region and the corner region. Therefore, the problem that the visual impression of a user is influenced due to the fact that the image blurring and the like occur in the edge region and the corner region of the image caused by excessive noise reduction of the edge region and the corner region can be avoided.
Step 260, according to the noise reduction weight of each pixel point in each image area, performing up-sampling on at least two frames of first resolution images to generate second resolution images corresponding to the at least two frames of first resolution images; wherein the second resolution image has a higher resolution than the first resolution image.
After determining the noise reduction weight of each pixel point in each image region in the above steps, the electronic device may perform upsampling on at least two frames of first resolution images according to the noise reduction weight of each pixel point in each image region, and generate a second resolution image corresponding to at least two frames of first resolution images. Wherein the second resolution image has a higher resolution than the first resolution image. The second resolution image can also be called a super-resolution image, and the super-resolution image is a high-resolution image constructed by a digital zoom technology based on a low-resolution image, and has higher pixel density and can provide more detailed features.
In the process of up-sampling at least two frames of first-resolution images and generating a second-resolution image corresponding to the at least two frames of first-resolution images, specifically, the at least two frames of low-resolution images are subjected to image alignment, and then a new pixel value is calculated based on the pixel values of the aligned pixel points on the low-resolution images of different frames and the noise reduction weight, and is used as the pixel value of the corresponding position of the obtained super-resolution image. In this way, a super-resolution image is generated based on at least two frames of low-resolution images.
In the embodiment of the application, because the user visually has different image quality requirements for the flat region, the edge region and the corner region of an image, the noise reduction weight of each pixel point in each image region is determined according to the type of the image region on each frame of first-resolution image and the preset mapping relation, so as to meet those requirements. Thus, different noise reduction weights are used for different image regions during up-sampling, and a second-resolution image corresponding to the at least two frames of first-resolution images is generated. This solves the problems of image blurring and low image quality caused by excessive noise reduction that arise when a uniform noise reduction weight is used for up-sampling during image reconstruction.
In one embodiment, determining the noise reduction weight of each pixel point in the image region according to the type of the image region on each frame of the first resolution image and a preset mapping relationship includes:
for each frame of first resolution image, calculating the characteristic value of the structure tensor of each pixel point in the first resolution image; the eigenvalue size of the structure tensor represents the image area to which the pixel point belongs;
and calculating the noise reduction weight of the pixel points in each image area of the first resolution image based on the characteristic value of the structure tensor of each pixel point and the preset mathematical correspondence between the characteristic value of the structure tensor and the noise reduction weight.
Specifically, in the process of determining the noise reduction weight of each pixel point in an image area according to the type of the image area on each frame of first resolution image and a preset mapping relation, firstly, for each frame of first resolution image, the structure tensor of each pixel point in the first resolution image is calculated; secondly, the characteristic values of the structure tensor of each pixel point are calculated; and finally, the noise reduction weight of the pixel points in each image area of the first resolution image is calculated based on the characteristic value of the structure tensor of each pixel point and the preset mathematical correspondence between the characteristic value of the structure tensor and the noise reduction weight.
The structure tensor is a structure matrix about an image, and a specific calculation formula of the structure tensor T is as follows:
T = \begin{bmatrix} I_{xx} & I_{xy} \\ I_{xy} & I_{yy} \end{bmatrix}
wherein I_{xx} is the second-order partial derivative of the pixel point on the image in the x direction, I_{yy} is the second-order partial derivative of the pixel point on the image in the y direction, and I_{xy} is the mixed partial derivative of the pixel point on the image with respect to the x direction and the y direction. According to the structure tensor of a pixel point, it can be distinguished whether the pixel point is located in the flat area, the edge area or the corner area of the image.
After the structure tensor of each pixel point in the first resolution image is calculated, the two eigenvalues λ1 and λ2 of the structure tensor of each pixel point can be further calculated. FIG. 3 is a diagram of the correspondence between the two eigenvalues of the structure tensor of a pixel point and the image region category. As can be seen from FIG. 3, when both eigenvalues λ1 and λ2 are large, the pixel point is located in a corner area of the image; when only one of the two eigenvalues λ1 and λ2 is large, the pixel point is located in an edge area of the image; when both eigenvalues λ1 and λ2 are small, the pixel point is located in a flat area of the image.
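As an illustration of the computation just described, the NumPy sketch below derives the two eigenvalue maps per pixel and applies the FIG. 3 decision rule. Building the tensor from Gaussian-smoothed products of first derivatives (via `scipy.ndimage.gaussian_filter`) and the threshold `t` are assumptions of this sketch rather than details taken from the patent text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_eigenvalues(img, sigma=1.0):
    """Per-pixel eigenvalues (lam1 >= lam2) of a 2x2 structure tensor.

    Built from Gaussian-smoothed products of first derivatives, a common
    construction; the patent describes the tensor only in terms of its
    entries Ixx, Iyy, Ixy.
    """
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)                      # first derivatives
    Ixx = gaussian_filter(Ix * Ix, sigma)          # tensor entries
    Iyy = gaussian_filter(Iy * Iy, sigma)
    Ixy = gaussian_filter(Ix * Iy, sigma)
    # Closed-form eigenvalues of the symmetric matrix [[Ixx, Ixy], [Ixy, Iyy]]
    trace = Ixx + Iyy
    root = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    lam1 = trace / 2.0 + root
    lam2 = trace / 2.0 - root
    return lam1, lam2

def classify_region(lam1, lam2, t=0.01):
    """Flat / edge / corner decision following FIG. 3; threshold t is illustrative."""
    corner = (lam1 > t) & (lam2 > t)
    edge = (lam1 > t) & ~corner
    flat = ~corner & ~edge
    return flat, edge, corner
```

Because the tensor is a symmetric 2 × 2 matrix, its eigenvalues are available in closed form from the trace and the discriminant, which avoids a per-pixel call to a general eigensolver.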
Under general conditions, users have low image quality requirements for a flat area, so a large noise reduction weight can be set for pixel points in the flat area; users have high image quality requirements for the edge region and the corner region, so a low noise reduction weight can be set for pixel points in the edge region and the corner region. Therefore, after determining which region of the image a pixel point is located in, the corresponding noise reduction weight can be configured for it. Specifically, the noise reduction weight of the pixel points in each image region of the first-resolution image may be calculated based on the characteristic value of the structure tensor of each pixel point and the preset mathematical correspondence between the characteristic value of the structure tensor and the noise reduction weight.
In the embodiment of the application, for each frame of first-resolution image, the structure tensor of each pixel point in the first-resolution image is calculated, and then the eigenvalues of the structure tensor of each pixel point are calculated. The area in which each pixel point lies can be judged from the eigenvalues of its structure tensor, and correspondences exist between different image areas and noise reduction weights. Therefore, the noise reduction weight of a pixel point can be obtained from the eigenvalues of its structure tensor. Specifically, a preset mathematical correspondence exists between the eigenvalues of the structure tensor and the noise reduction weight, so the noise reduction weight of the pixel points in each image region of the first-resolution image can be calculated more accurately based on this correspondence. Thus, different noise reduction weights are used for different image regions during up-sampling, and a second-resolution image corresponding to the at least two frames of first-resolution images is generated. This solves the problems of image blurring and low image quality caused by excessive noise reduction that arise when a uniform noise reduction weight is used for up-sampling during image reconstruction.
In one embodiment, calculating the eigenvalue of the structure tensor of each pixel point in the first resolution image comprises:
carrying out semi-positive definite processing on the structure tensor of the pixel point to obtain a semi-positive definite structure tensor;
and carrying out eigenvalue decomposition on the semi-positive structure tensor to obtain the eigenvalue of the structure tensor of the pixel point.
Specifically, only when the structure tensor is a semi-positive definite matrix can its eigenvalues be guaranteed to be greater than or equal to 0, which makes it convenient to calculate the noise reduction weight from the eigenvalues. However, the calculated structure tensor of a pixel point is not necessarily semi-positive definite, so the structure tensor of the pixel point is subjected to semi-positive definite processing to obtain a semi-positive definite matrix, namely the semi-positive definite structure tensor. The semi-positive definite condition is as follows: a real symmetric matrix B is called semi-positive definite if X'BX ≥ 0 holds for every non-zero real vector X, where X' denotes the transpose of X.
Then, eigenvalue decomposition is performed on the semi-positive definite structure tensor to obtain the eigenvalues of the structure tensor of the pixel point, and these eigenvalues are greater than or equal to 0. For example, if the matrix B is a semi-positive definite matrix and X is a column vector, λ is obtained from BX = λX, where λ is an eigenvalue of the matrix B and X is the eigenvector of B belonging to the eigenvalue λ. Here the matrix B is the structure tensor
T = \begin{bmatrix} I_{xx} & I_{xy} \\ I_{xy} & I_{yy} \end{bmatrix}
Since the matrix B is a two-dimensional matrix, performing eigenvalue decomposition on B yields two eigenvalues λ1 and λ2 and two eigenvectors e1 and e2.
In the embodiment of the application, only when the structure tensor is a semi-positive definite matrix can its eigenvalues be guaranteed to be greater than or equal to 0, which makes it convenient to calculate the noise reduction weight from the eigenvalues subsequently. Therefore, the structure tensor of the pixel point is subjected to semi-positive definite processing to obtain a semi-positive definite structure tensor, and eigenvalue decomposition is then performed on the semi-positive definite structure tensor to obtain the eigenvalues of the structure tensor of the pixel point. In this way, the noise reduction weight of the pixel point can be accurately calculated based on the eigenvalues of the structure tensor.
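For a single pixel, one possible realisation of the semi-positive definite processing is to symmetrize the 2 × 2 tensor and clamp any negative eigenvalue to zero; the patent does not spell out the exact procedure, so the sketch below is an assumption.

```python
import numpy as np

def make_psd_2x2(T):
    """Semi-positive definite processing of a 2x2 structure tensor (one guess
    at the procedure): symmetrize, eigen-decompose, clamp negative eigenvalues
    to zero, and rebuild. Returns the PSD tensor plus (lam1, lam2) with
    lam1 >= lam2 >= 0 and the corresponding eigenvectors e1, e2."""
    S = 0.5 * (T + T.T)                      # real symmetric part
    vals, vecs = np.linalg.eigh(S)           # ascending order: vals[0] <= vals[1]
    vals = np.clip(vals, 0.0, None)          # enforce eigenvalues >= 0
    T_psd = vecs @ np.diag(vals) @ vecs.T
    lam1, lam2 = vals[1], vals[0]
    e1, e2 = vecs[:, 1], vecs[:, 0]
    return T_psd, (lam1, lam2), (e1, e2)

# Example at one pixel, with Ixx, Iyy, Ixy as in the formula above:
# T_psd, (lam1, lam2), (e1, e2) = make_psd_2x2(np.array([[Ixx, Ixy], [Ixy, Iyy]]))
```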
In one embodiment, the preset mathematical correspondence between the eigenvalues of the structure tensor and the noise reduction weight comprises relational expression (1) (reproduced only as an image in the original publication);
calculating the noise reduction weight of the pixel points in each image area of the first resolution image based on the eigenvalue of the structure tensor of each pixel point and the preset mathematical correspondence between the eigenvalue of the structure tensor and the noise reduction weight includes:
calculating the noise reduction weight of the pixel points in each image area of the first resolution image according to the relational expression, based on the eigenvalues of the structure tensor of each pixel point; wherein A is an eigen enhancement coefficient calculated from the eigenvalues of the structure tensor of the pixel point, λ1 and λ2 are the two eigenvalues of the structure tensor of the pixel point, and λ1 is greater than λ2.
Specifically, the preset mathematical correspondence between the eigenvalues of the structure tensor and the noise reduction weight comprises relational expression (1). For example, the mathematical correspondence is specifically given by equations (1-2) and (1-3) (both reproduced only as images in the original publication), wherein DN is the noise reduction weight, A is an eigen enhancement coefficient calculated based on the eigenvalues of the structure tensor of the pixel point, and λ1 and λ2 are the two eigenvalues of the structure tensor of the pixel point, with λ1 greater than λ2. Of course, equations (1-2) and (1-3) are only examples; the equations may be modified, or other equations may be used to calculate DN, which is not limited in this application. A Clamp function restricts an arbitrarily varying value to the given interval [0, 1]: if the value lies between the minimum and the maximum, the function returns the value itself; if the value is greater than the maximum, the function returns the maximum; and if the value is less than the minimum, the function returns the minimum.
For each frame of low-resolution image, the eigenvalues λ1 and λ2 of the structure tensor of each pixel point are calculated. λ1 and λ2 are then substituted into equation (1-3) to calculate the eigen enhancement coefficient A, and the eigen enhancement coefficient A and the eigenvalue λ1 are substituted into equation (1-2) to calculate the noise reduction weight DN of each pixel point.
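Since equations (1-2) and (1-3) appear only as images in the original publication, the sketch below uses stand-in expressions that merely follow the described behaviour (a feature enhancement coefficient A derived from λ1 and λ2, a weight that shrinks as A·λ1 grows, and a Clamp to [0, 1]); the exact formulas and the function name are assumptions of this text, not the patent's.

```python
import numpy as np

def noise_reduction_weight(lam1, lam2, eps=1e-6):
    """Illustrative DN map per pixel; NOT the patent's equations (1-2)/(1-3).

    Behaviour follows the description: flat regions (both eigenvalues small)
    get DN close to 1, edge/corner regions (lam1 large) get a small DN,
    and the result is clamped to [0, 1].
    """
    # Stand-in 'eigen enhancement coefficient' A: grows with edge/corner strength.
    A = 1.0 + (lam1 - lam2) / (lam1 + lam2 + eps)
    # Stand-in DN: decreases as A * lam1 grows, then clamp to [0, 1].
    dn = 1.0 / (1.0 + A * lam1)
    return np.clip(dn, 0.0, 1.0)
```

With the eigenvalue maps from the earlier structure-tensor sketch, `noise_reduction_weight(lam1, lam2)` yields a per-pixel weight map that stays close to 1 in flat areas and drops toward 0 along edges and at corners.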
In the embodiment of the application, based on the lower image quality requirement of the user on the flat area, a larger noise reduction weight can be set for the pixel points in the flat area; the user has high requirements on the image quality of the edge region and the corner region, and can set a low noise reduction weight principle for the pixel points in the edge region and the corner region, and establish a mathematical correspondence between the characteristic value of the structure tensor and the noise reduction weight. Therefore, the relation between the eigenvalue of the structure tensor and the noise reduction weight is quantized, so that different noise reduction weights are given to the pixel points with different eigenvalues, and the phenomenon that the noise is excessively reduced to different pixel points is accurately avoided. Thus, the image quality of the finally obtained super-resolution image is improved.
In an embodiment, as shown in fig. 4, calculating the noise reduction weight of the pixel point in each image region in the first resolution image based on the eigenvalue of the structure tensor of each pixel point and the mathematical correspondence between the eigenvalue of the preset structure tensor and the noise reduction weight includes:
step 420, if the pixel point is located in a flat area in the first resolution image, calculating a noise reduction weight of the pixel point based on a mathematical correspondence between an eigenvalue of a structure tensor of the pixel point and an eigenvalue of a preset structure tensor and the noise reduction weight, and obtaining the noise reduction weight of the pixel point as a first noise reduction weight;
step 440, if the pixel point is located in the edge region of the first resolution image, calculating a noise reduction weight of the pixel point based on the eigenvalue of the structure tensor of the pixel point and the mathematical correspondence between the eigenvalue of the preset structure tensor and the noise reduction weight, and obtaining the noise reduction weight of the pixel point as a second noise reduction weight;
step 460, if the pixel point is located in the corner region of the first resolution image, calculating the noise reduction weight of the pixel point based on the eigenvalue of the structure tensor of the pixel point and the mathematical correspondence between the eigenvalue of the preset structure tensor and the noise reduction weight, and obtaining the noise reduction weight of the pixel point as a third noise reduction weight; the first noise reduction weight is greater than the second noise reduction weight, and the first noise reduction weight is greater than the third noise reduction weight.
A preset mathematical correspondence exists between the eigenvalues of the structure tensor and the noise reduction weight, and the noise reduction weight of a pixel point can be calculated by substituting the eigenvalues of its structure tensor into this correspondence. The correspondence is such that the calculated first noise reduction weight of a pixel point in the flat area is larger than the second noise reduction weight of a pixel point in the edge area, and also larger than the third noise reduction weight of a pixel point in the corner area. The relative size of the second noise reduction weight and the third noise reduction weight is not limited, because the user's visual requirements on both the edge region and the corner region are high.
In the embodiment of the application, for the pixel points located in different areas in the first resolution image, the noise reduction weight of the pixel point is calculated based on the characteristic value of the structure tensor of the pixel point and the mathematical correspondence between the characteristic value of the preset structure tensor and the noise reduction weight, so that the first noise reduction weight of the pixel point in the flat area is greater than the second noise reduction weight of the pixel point in the edge area and is also greater than the third noise reduction weight of the pixel point in the corner area. Because the visual requirements of the user on the edge region and the corner region are high, the two regions are endowed with smaller noise reduction weights, so that excessive noise reduction processing on the edge region and the corner region is avoided. Finally, the image quality and the definition of the image can be improved.
In an embodiment, as shown in fig. 5, in step 260, upsampling at least two frames of low resolution images according to the noise reduction weight of each pixel point in each image region, and generating a second resolution image corresponding to at least two frames of first resolution images includes:
step 262, performing image registration on at least two frames of the first resolution images to obtain a registration pixel point set, where the registration pixel point set includes multiple groups of mutually aligned pixel points.
When at least two frames of low-resolution images are up-sampled according to the noise reduction weight of each pixel point in each image area, at first, image registration is required to be carried out on at least two frames of first-resolution images. The image registration may select one image from at least two frames of low resolution images as a reference image to which the other frames of low resolution images are registered. Image registration is a typical problem and technical difficulty in the field of image processing research, and aims to compare or fuse images acquired under different conditions for the same object, for example, images of the same object under different conditions, such as different acquisition devices, different times, different shooting perspectives, and the like, are fused. Specifically, for two frames of images in a set of image data sets, one frame of image is mapped to the other frame of image by finding a spatial transformation, so that points corresponding to the same object in space in the two frames of images are in one-to-one correspondence, thereby achieving the purpose of information fusion.
The image registration method in the application can adopt a matching method based on features, firstly, the features of the image are extracted, then a feature descriptor is generated, and finally, the features of the multi-frame image are matched according to the similarity degree of the descriptor, so that conversion matrixes of different image frames are generated. The features of the image may be mainly classified into features such as points, lines (edges), areas (faces), and the like, or may be classified into local features and global features. The extraction of the region (surface) features is relatively troublesome and takes a long time, so that point features and edge features are mainly adopted for image registration.
The pixel points of the same target on other image frames can be converted to the positions of the pixel points of the same target on the reference image frame through the conversion matrix. After the pixel points of the same target on other image frames are converted into the coordinate system of the reference image frame, the coordinates of the pixel points of the same target on the other image frames are consistent with the coordinates of the pixel points of the same target on the reference image frame. Each set of pixels at the same location on the same object in these different image frames constitutes a set of registration pixels. For example, if image registration is performed on three image frames, each group of pixel points at the same position on the same target in the three image frames is obtained, and thus a group of registration pixel points is formed. That is, each group of alignment pixels includes three pixels, and each pixel is located in one frame of image frame.
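A sketch of the feature-based matching described above, using OpenCV; the specific choices (ORB keypoints, brute-force Hamming matching, a RANSAC-estimated homography as the conversion matrix, 8-bit single-channel input) are assumptions of this sketch, since the patent only requires that features be extracted, described and matched to produce a conversion matrix per frame.

```python
import cv2
import numpy as np

def register_to_reference(ref_img, img, max_features=2000):
    """Estimate a transform mapping `img` onto `ref_img` via feature matching.

    Uses ORB keypoints, Hamming brute-force matching and a RANSAC homography;
    these concrete choices are illustrative, not mandated by the patent.
    """
    orb = cv2.ORB_create(max_features)
    kp1, des1 = orb.detectAndCompute(ref_img, None)
    kp2, des2 = orb.detectAndCompute(img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H  # maps coordinates of `img` into the reference frame

# Warping a frame into the reference coordinate system (assumed usage):
# aligned = cv2.warpPerspective(img, H, (ref_img.shape[1], ref_img.shape[0]))
```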
Step 264, based on the registration pixel point set, determining the neighborhood of each group of registration pixel points in the registration pixel point set.
After the image registration, a plurality of groups of registered pixel points are obtained, and the registered pixel points form a registered pixel point set. And determining the neighborhood of each group of registration pixel points in the registration pixel point set based on the registration pixel point set. Namely, each group of alignment pixel points is traversed, and the neighborhood of each pixel point in each group of alignment pixel points in the image frame is respectively obtained. For example, as shown in fig. 6, a schematic diagram is formed after image registration is performed on a first frame low-resolution image 602, a second frame low-resolution image 604, and a third frame low-resolution image 606. And the pixel points at the same position on the same target in different image frames have the same coordinates (x, y), and for each group of alignment pixel points, the neighborhood of each pixel point in each group of alignment pixel points in the image frame is respectively obtained. For example, a 3 × 3 neighborhood (lattice region in the figure) of a pixel point at (x, y) coordinates is acquired on the first frame low resolution image 602, a 3 × 3 neighborhood of a pixel point at (x, y) coordinates is acquired on the second frame low resolution image 604, and a 3 × 3 neighborhood of a pixel point at (x, y) coordinates is acquired on the third frame low resolution image 606.
Step 266, obtaining the noise reduction weight of each sampling pixel point in the determined neighborhood from the noise reduction weight of each pixel point in each image area according to the determined coordinates of the sampling pixel points in the neighborhood;
after the neighborhood of the pixel point at the (x, y) coordinate on each frame of image frame is obtained, the coordinates of the sampling pixel point in the neighborhood are determined based on the (x, y) coordinate and the size of the neighborhood (e.g., 3 × 3). And correspondingly acquiring the noise reduction weight of each sampling pixel point in the determined neighborhood from the noise reduction weight of the pixel point on each frame of image frame based on the determined coordinates of the sampling pixel points in the neighborhood.
And 268, performing upsampling based on the determined denoising weight of each sampling pixel point in the neighborhood and the pixel value of the sampling pixel point, and generating a second resolution image corresponding to the at least two frames of the first resolution images.
For each group of registration pixel points on the collected image frame, the neighborhood of each pixel point in each group of registration pixel points is respectively determined. And then, obtaining the noise reduction weight of each sampling pixel point in the determined neighborhood, and finally performing up-sampling based on the noise reduction weight of each sampling pixel point in the determined neighborhood and the pixel value of the sampling pixel point to generate a second resolution image corresponding to at least two frames of the first resolution image.
In the embodiment of the application, for each group of registration pixel points on the collected image frame, the neighborhood of each pixel point in each group of registration pixel points is respectively determined. And then obtaining the noise reduction weight of each sampling pixel point in the determined neighborhood, and finally performing up-sampling based on the noise reduction weight of each sampling pixel point in the determined neighborhood and the pixel value of the sampling pixel point to generate a second resolution image (super-resolution image) corresponding to at least two frames of the first resolution image (low resolution image). Because each group of registration pixel points on the collected low-resolution image frame is traversed, the neighborhood of each pixel point in each group of registration pixel points is respectively determined, the pixel value corresponding to the position of the registration pixel point is calculated based on the pixel value of each sampling pixel point in the neighborhood and the noise reduction weight, and the pixel value is used as the pixel value of the corresponding position on the super-resolution image. Therefore, the noise reduction weight of the pixel point of each frame of low-resolution image is comprehensively considered, so that the pixel value obtained after noise reduction is more accurate, and the problems of image blurring and low image quality caused by excessive noise reduction are avoided.
In one embodiment, as shown in fig. 7, the performing upsampling based on the determined noise reduction weight of each sampling pixel point in the neighborhood and the pixel value of the sampling pixel point to generate the second resolution image corresponding to at least two frames of the first resolution image includes:
step 268a, calculating the noise reduction weight of each sampling pixel point in the determined neighborhood and the weighted average value of the pixel values of the sampling pixel points;
step 268b, using the weighted average as the pixel value of the up-sampled image obtained by up-sampling to generate an up-sampled image;
and 268c, post-processing the up-sampled image to generate a second resolution image corresponding to the at least two frames of the first resolution image.
Specifically, in the process of performing upsampling based on the determined noise reduction weight of each sampling pixel point in the neighborhood and the pixel value of the sampling pixel point, first, the noise reduction weight of each sampling pixel point in the neighborhood and the weighted average value of the pixel values of the sampling pixel points are calculated. Specifically, the following formula is adopted for calculation:
I_{up}(u, v) = \frac{\sum_{i} \sum_{(x, y) \in R} DN_i(x, y)\, I_i(x, y)}{\sum_{i} \sum_{(x, y) \in R} DN_i(x, y)}
wherein I_i(x, y) is the pixel value of the pixel point with coordinates (x, y) in the i-th frame of low-resolution image, DN_i(x, y) is the noise reduction weight of the pixel point with coordinates (x, y) in that frame, i indexes the low-resolution frames, (x, y) ∈ R denotes the sampling pixel points in the neighborhood R, and the double sum therefore runs over the n sampling pixel points in total, n being the total number of sampling pixel points.
With reference to fig. 6, assuming that three frames of low-resolution images are collected in total, for a group of registration pixel points a 3 × 3 neighborhood of the pixel point at the (x, y) coordinate is obtained on the first frame low-resolution image 602, on the second frame low-resolution image 604 and on the third frame low-resolution image 606, respectively. In this case the total number of sampling pixel points is n = 3 × 3 × 3 = 27. Of course, the neighborhood may also be a 5 × 5 neighborhood or a neighborhood of another size, which is not limited in this application.
For each group of registration pixel points, the corresponding weighted average I_up(u, v) is calculated and used as the pixel value at coordinates (u, v) on the up-sampled image obtained by up-sampling.
Finally, the up-sampled image is post-processed to generate a second resolution image (super-resolution image) corresponding to at least two frames of the first resolution image (low resolution image).
In the embodiment of the application, in the process of upsampling based on the determined noise reduction weight of each sampling pixel point in the neighborhood and the pixel value of the sampling pixel point, the determined noise reduction weight of each sampling pixel point in the neighborhood and the weighted average value of the pixel values of the sampling pixel points are calculated. And generating the up-sampled image by taking the weighted average value as the pixel value of the up-sampled image obtained by up-sampling. Post-processing the up-sampled image to generate a second resolution image corresponding to the at least two frames of the first resolution image. For each group of alignment pixel points in the low-resolution image, the super-resolution image is calculated and generated through the noise reduction weight of the sampling pixel points in the neighborhood of the pixel points, so that the targeted noise reduction processing of each group of alignment pixel points is realized, and the problems of image blurring and low image quality caused by excessive noise reduction of partial image areas are avoided.
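A minimal sketch of the weighted fusion for one group of registration pixel points, assuming the frames have already been warped into the reference coordinate system and that per-pixel weight maps DN_i are available; how the fused value is placed onto the (u, v) grid of the higher-resolution image is left out here.

```python
import numpy as np

def fuse_registered_pixel(frames, weights, x, y, radius=1):
    """Weighted average over the (2*radius+1)^2 neighborhood of (x, y)
    in every aligned frame, using the per-pixel noise reduction weights.

    `frames` and `weights` are lists of aligned single-channel 2-D arrays of
    equal shape; placement of the result onto the higher-resolution (u, v)
    grid is left to the caller.
    """
    num, den = 0.0, 0.0
    for img, dn in zip(frames, weights):
        patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
        w = dn[y - radius:y + radius + 1, x - radius:x + radius + 1]
        num += float(np.sum(w * patch))
        den += float(np.sum(w))
    return num / den if den > 0 else float(frames[0][y, x])
```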
In one embodiment, post-processing the up-sampled image to generate a second resolution image corresponding to at least two frames of the first resolution image comprises:
and performing at least one of image sharpening, image contrast enhancement and color correction on the up-sampled image to generate a second resolution image corresponding to the at least two frames of the first resolution image.
In the embodiment of the present application, after the upsampled image is generated by using the weighted average as the pixel value of the upsampled image obtained by upsampling, the upsampled image is post-processed to generate the second resolution image corresponding to the at least two frames of the first resolution images. Specifically, the second resolution image corresponding to the at least two frames of the first resolution image may be generated by at least one of image sharpening, image contrast enhancement and color correction of the up-sampled image. Thereby, the image quality of the second resolution image is improved.
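One possible post-processing chain for the up-sampled image, shown for illustration only: unsharp-mask sharpening followed by a simple percentile-based contrast stretch. The patent names only the operation categories (sharpening, contrast enhancement, color correction), so these concrete filters and parameters are assumptions.

```python
import cv2
import numpy as np

def post_process(img_up):
    """Illustrative post-processing: unsharp-mask sharpening followed by a
    simple linear contrast stretch; not the patent's specific pipeline."""
    img = img_up.astype(np.float32)
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=2.0)
    sharpened = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)   # unsharp mask
    lo, hi = np.percentile(sharpened, (1, 99))                # contrast stretch
    stretched = np.clip((sharpened - lo) / max(hi - lo, 1e-6) * 255.0, 0, 255)
    return stretched.astype(np.uint8)
```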
In one embodiment, the at least two frames of the first resolution image are RAW images and the second resolution image is an RGB image.
In the embodiment of the application, the RAW image is the most original image output by the image sensor and retains more original and complete information, so when the low-resolution images are acquired, low-resolution images in RAW format can be acquired, which makes it more convenient to reconstruct the super-resolution image. However, because the RAW image is the most original output of the image sensor and cannot be displayed directly by the electronic device, image signal processing needs to be performed on the single-frame super-resolution image in RAW format obtained by up-sampling the RAW-format low-resolution images, so as to generate a single-frame super-resolution image in RGB format that can be displayed on the electronic device.
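As a minimal illustration of the RAW-to-RGB step, assuming the single-frame RAW-domain result is an 8-bit Bayer mosaic, OpenCV's demosaicing can produce a 3-channel image; the conversion code below is an assumption and must match the sensor's actual Bayer pattern, and a real ISP pipeline would additionally apply black-level subtraction, white balance, gamma and color correction.

```python
import cv2

def raw_to_rgb(raw_bayer_u8):
    """Demosaic an 8-bit Bayer mosaic into a 3-channel image.
    cv2.COLOR_BayerBG2BGR is one of several Bayer conversion codes; choose the
    one that matches the sensor's pattern."""
    return cv2.cvtColor(raw_bayer_u8, cv2.COLOR_BayerBG2BGR)
```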
In a specific embodiment, as shown in fig. 8, there is provided an image reconstruction method, comprising the steps of:
step 802, judging whether the electronic equipment supports digital zooming; if yes, go to step 804; if not, go to step 806;
step 804, acquiring at least two frames of continuously acquired low-resolution images to form an image sequence; step 808 is entered;
step 806, optical zoom processing; step 816 is entered;
808, calculating the characteristic value of the structure tensor of each pixel point in each frame of low-resolution image; entering step 810;
step 810, calculating the noise reduction weight of the pixel points in each image area in the low-resolution image according to the relational expression (equations (1-2) and (1-3)), based on the eigenvalues of the structure tensor of each pixel point; entering step 812;
step 812, performing image registration on at least two frames of low-resolution images to obtain a registration pixel point set, wherein the registration pixel point set comprises a plurality of groups of mutually aligned pixel points; step 814 is entered;
step 814, determining the neighborhood of each group of registration pixel points in the registration pixel point set based on the registration pixel point set; acquiring the noise reduction weight of each sampling pixel point in the determined neighborhood from the noise reduction weight of each pixel point in each image area according to the determined coordinates of the sampling pixel points in the neighborhood; calculating the noise reduction weight of each sampling pixel point in the determined neighborhood and the weighted average value of the pixel values of the sampling pixel points; taking the weighted average value as a pixel value of an up-sampled image obtained by up-sampling to generate an up-sampled image; step 816 is entered;
step 816, at least one of image sharpening, image contrast enhancement and color correction is carried out on the up-sampled image; step 818 is entered;
step 818, outputting super-resolution images corresponding to at least two frames of low-resolution images.
In the embodiment of the application, because the user visually has different image quality requirements for the flat area, the edge area and the corner area of an image, the noise reduction weight of each pixel point in each image area is determined according to the type of the image area on each frame of low-resolution image and the preset mapping relation, so as to meet those requirements. Therefore, different noise reduction weights are adopted for different image areas during upsampling, and a super-resolution image corresponding to the at least two frames of low-resolution images is generated. This solves the problems of image blurring and low image quality caused by excessive noise reduction that arise when a uniform noise reduction weight is used for upsampling during image reconstruction.
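Tying the steps of FIG. 8 together, the rough driver below calls the illustrative helpers sketched earlier in this text (their names and the `sr_sketch` module are hypothetical, and 8-bit single-channel input frames are assumed); the nearest-neighbour mapping from the high-resolution grid back to low-resolution coordinates is a simplification, not the patent's exact up-sampling scheme.

```python
import numpy as np
import cv2

# The helpers refer to the illustrative sketches earlier in this text; they
# are not part of the patent, and the module name is hypothetical.
from sr_sketch import (structure_tensor_eigenvalues, noise_reduction_weight,
                       register_to_reference, fuse_registered_pixel, post_process)

def reconstruct(frames, scale=2):
    """Rough end-to-end flow following FIG. 8 (single-channel frames assumed)."""
    ref = frames[0]
    aligned, weights = [], []
    for f in frames:
        H = np.eye(3) if f is ref else register_to_reference(ref, f)
        a = cv2.warpPerspective(f, H, (ref.shape[1], ref.shape[0]))
        lam1, lam2 = structure_tensor_eigenvalues(a)
        aligned.append(a.astype(np.float32))
        weights.append(noise_reduction_weight(lam1, lam2))
    h, w = ref.shape[:2]
    up = np.zeros((h * scale, w * scale), dtype=np.float32)
    for v in range(up.shape[0]):
        for u in range(up.shape[1]):
            x = min(max(u // scale, 1), w - 2)   # simplified grid mapping
            y = min(max(v // scale, 1), h - 2)
            up[v, u] = fuse_registered_pixel(aligned, weights, x, y)
    return post_process(up)
```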
In one embodiment, as shown in fig. 9, there is provided an image reconstruction apparatus 900, the apparatus comprising:
a first resolution image obtaining module 920, configured to obtain at least two frames of first resolution images that are continuously collected;
a noise reduction weight determining module 940, configured to determine a noise reduction weight of each pixel point in each image region according to the type of the image region in each frame of the first-resolution image and a preset mapping relationship; the type of the image area comprises at least one of a flat area, an edge area and a corner area; the mapping relation comprises corresponding relations between different image areas and different noise reduction weights;
a second resolution image generation module 960, configured to perform upsampling on at least two frames of first resolution images according to the noise reduction weight of each pixel point in each image region, and generate a second resolution image corresponding to the at least two frames of first resolution images; wherein the second resolution image has a higher resolution than the first resolution image.
In one embodiment, the noise reduction weight determination module 940 includes:
the structure tensor eigenvalue calculation unit is used for calculating the structure tensor eigenvalue of each pixel point in each frame of first resolution image; the eigenvalue size of the structure tensor represents the image area to which the pixel point belongs;
and the noise reduction weight calculation unit is used for calculating the noise reduction weight of the pixel points in each image area of the first resolution image based on the characteristic value of the structure tensor of each pixel point and the preset mathematical correspondence between the characteristic value of the structure tensor and the noise reduction weight.
In an embodiment, the eigenvalue calculation unit of the structure tensor is further configured to perform a semi-positive definite processing on the structure tensor of the pixel point to obtain a semi-positive definite structure tensor; and carrying out eigenvalue decomposition on the semi-positive structure tensor to obtain the eigenvalue of the structure tensor of the pixel point.
In one embodiment, the preset mathematical correspondence between the characteristic values of the structure tensor and the noise reduction weight comprises relational expression (1) (reproduced only as an image in the original publication); the noise reduction weight calculation unit is configured to calculate the noise reduction weight of the pixel points in each image area of the first resolution image according to the relational expression, based on the characteristic values of the structure tensor of each pixel point; wherein A is an eigen enhancement coefficient calculated based on the characteristic values of the structure tensor of the pixel point, λ1 and λ2 are the two characteristic values of the structure tensor of the pixel point, and λ1 is greater than λ2.
In an embodiment, the noise reduction weight calculation unit is further configured to calculate the noise reduction weight of the pixel point based on the eigenvalue of the structure tensor of the pixel point and a mathematical correspondence between the eigenvalue of the preset structure tensor and the noise reduction weight if the pixel point is located in the flat region in the first resolution image, and obtain the noise reduction weight of the pixel point as the first noise reduction weight;
if the pixel point is located in the edge area of the first resolution image, calculating the noise reduction weight of the pixel point based on the characteristic value of the structure tensor of the pixel point and the mathematical correspondence between the characteristic value of the preset structure tensor and the noise reduction weight, and obtaining the noise reduction weight of the pixel point as a second noise reduction weight;
if the pixel point is located in the corner region of the first resolution image, calculating the noise reduction weight of the pixel point based on the characteristic value of the structure tensor of the pixel point and the mathematical correspondence between the characteristic value of the preset structure tensor and the noise reduction weight, and obtaining the noise reduction weight of the pixel point as a third noise reduction weight; the first noise reduction weight is greater than the second noise reduction weight, and the first noise reduction weight is greater than the third noise reduction weight.
In one embodiment, the second resolution image generation module 960 includes:
an image registration unit, configured to perform image registration on the at least two frames of first resolution images to obtain a registration pixel point set, where the registration pixel point set comprises a plurality of groups of mutually aligned pixel points;
a neighborhood determination unit, configured to determine, based on the registration pixel point set, the neighborhood of each group of registration pixel points in the set;
a noise reduction weight acquisition unit, configured to acquire, according to the coordinates of the sampling pixel points in the determined neighborhood, the noise reduction weight of each sampling pixel point in the neighborhood from the noise reduction weights of the pixel points in each image region;
and an upsampling unit, configured to perform upsampling based on the noise reduction weight and the pixel value of each sampling pixel point in the determined neighborhood, and generate a second resolution image corresponding to the at least two frames of first resolution images.
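The patent does not fix a particular registration algorithm. As one possible sketch, a purely global shift between consecutive frames could be estimated by phase correlation (here via OpenCV, an assumed choice) and later used to place each frame's pixels onto a common grid:

```python
import cv2
import numpy as np

def estimate_shifts(frames):
    """Estimate the (dx, dy) offset of every frame relative to the first one
    using phase correlation. Modeling the motion as a global translation is an
    assumption; the patent only requires that registered pixel points be
    obtained, by whatever registration method is suitable."""
    ref = frames[0].astype(np.float32)
    shifts = [(0.0, 0.0)]
    for frame in frames[1:]:
        (dx, dy), _response = cv2.phaseCorrelate(ref, frame.astype(np.float32))
        shifts.append((dx, dy))
    return shifts
```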
In one embodiment, the upsampling unit is further configured to calculate the weighted average of the pixel values of the sampling pixel points in the determined neighborhood, weighted by their noise reduction weights; take the weighted average as a pixel value of the up-sampled image obtained by the upsampling, thereby generating the up-sampled image; and post-process the up-sampled image to generate a second resolution image corresponding to the at least two frames of first resolution images.
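One way to realize the weighted average described above, offered as a sketch under assumptions rather than as the patent's exact procedure: map each registered low-resolution sample onto the high-resolution grid, accumulate its pixel value into the nearest output pixel using its noise reduction weight, and normalize.

```python
import numpy as np

def weighted_upsample(samples, out_shape):
    """samples: iterable of (x_hr, y_hr, value, weight) tuples, where (x_hr, y_hr)
    are sample coordinates already mapped onto the high-resolution grid.
    Accumulating into the nearest output pixel is an illustrative simplification
    of the neighborhood-based weighting described in the patent."""
    num = np.zeros(out_shape, dtype=np.float64)   # sum of weight * value
    den = np.zeros(out_shape, dtype=np.float64)   # sum of weights
    h, w = out_shape
    for x, y, value, weight in samples:
        col, row = int(round(x)), int(round(y))
        if 0 <= row < h and 0 <= col < w:
            num[row, col] += weight * value
            den[row, col] += weight
    # Weighted average; output pixels that received no samples stay zero and
    # would be filled by interpolation in a fuller implementation.
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```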
In one embodiment, the upsampling unit is further configured to perform at least one of image sharpening, image contrast enhancement and color correction on the upsampled image to generate a second resolution image corresponding to the at least two frames of the first resolution image.
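As a hedged example of such post-processing, the sketch below applies unsharp-mask sharpening followed by a simple percentile-based contrast stretch; the patent only requires that at least one of sharpening, contrast enhancement, or color correction be performed and does not prescribe these particular operators or parameter values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def post_process(upsampled, sharpen_amount=0.5, blur_sigma=1.0):
    """Unsharp masking plus a percentile-based contrast stretch (illustrative)."""
    img = upsampled.astype(np.float64)
    # Unsharp mask: add back a fraction of the high-frequency residual.
    blurred = gaussian_filter(img, blur_sigma)
    sharpened = img + sharpen_amount * (img - blurred)
    # Linear contrast stretch between the 1st and 99th percentiles.
    lo, hi = np.percentile(sharpened, (1, 99))
    stretched = (sharpened - lo) / max(hi - lo, 1e-12)
    return np.clip(stretched, 0.0, 1.0)
```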
In one embodiment, the at least two frames of the first resolution image are RAW images and the second resolution image is an RGB image.
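Since the input frames are RAW (Bayer) data and the output is RGB, a demosaicing step is implied at some point in the pipeline. A minimal sketch, assuming an 8-bit BGGR Bayer pattern and OpenCV's built-in conversion (neither of which the patent specifies):

```python
import cv2
import numpy as np

def demosaic_raw(raw_frame):
    """Convert one single-channel Bayer RAW frame to RGB.

    The BGGR pattern and 8-bit depth are assumptions for illustration; real
    sensors vary, and RAW data is often 10- to 14-bit."""
    raw8 = np.clip(raw_frame, 0, 255).astype(np.uint8)
    bgr = cv2.cvtColor(raw8, cv2.COLOR_BayerBG2BGR)
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
```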
It should be understood that, although the steps in the flowcharts of the above figures are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the above figures may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The division of the modules in the image reconstruction apparatus is only for illustration, and in other embodiments, the image reconstruction apparatus may be divided into different modules as needed to complete all or part of the functions of the image reconstruction apparatus.
For specific limitations of the image reconstruction apparatus, reference may be made to the limitations of the image reconstruction method above, which are not repeated here. Each module in the image reconstruction apparatus may be implemented wholly or partially in software, hardware, or a combination of the two. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In an embodiment, an electronic device is further provided, which includes a memory and a processor, where the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to perform the steps of an image reconstruction method provided in the above embodiments.
Fig. 10 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 10, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capabilities to support the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image reconstruction method provided in the above embodiments. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be any terminal device such as a mobile phone, a tablet computer, a PDA (personal digital assistant), a POS (point of sale) terminal, a vehicle-mounted computer, or a wearable device.
Each module in the image reconstruction apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on an electronic device, and the program modules constituting it may be stored in the memory of the electronic device. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are performed.
Embodiments of the present application also provide a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the image reconstruction method.
A computer program product comprising instructions is also provided which, when run on a computer, causes the computer to perform the image reconstruction method.
Any reference to memory, storage, a database, or another medium used in the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above image reconstruction examples express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (12)

1. A method of image reconstruction, the method comprising:
acquiring at least two continuously acquired first resolution images;
determining the noise reduction weight of each pixel point in each image area according to the type of the image area on each frame of first-resolution image and a preset mapping relation; wherein the type of the image area comprises at least one of a flat area, an edge area and a corner area; the mapping relation comprises corresponding relations between different image areas and different noise reduction weights;
according to the noise reduction weight of each pixel point in each image area, performing up-sampling on the at least two frames of first resolution images to generate second resolution images corresponding to the at least two frames of first resolution images; wherein the second resolution image has a higher resolution than the first resolution image.
2. The method according to claim 1, wherein determining the noise reduction weight of each pixel point in the image region according to the type of the image region on each frame of the first resolution image and a preset mapping relationship comprises:
for each frame of first resolution image, calculating the characteristic value of the structure tensor of each pixel point in the first resolution image; the eigenvalue size of the structure tensor represents the image area to which the pixel point belongs;
and calculating the noise reduction weight of the pixel points in each image area in the first resolution image based on the characteristic value of the structure tensor of each pixel point and the mathematical correspondence between the characteristic value of the preset structure tensor and the noise reduction weight.
3. The method of claim 2, wherein the calculating the eigenvalues of the structure tensor of each pixel point in the first resolution image comprises:
carrying out semi-positive definite processing on the structure tensor of the pixel point to obtain a semi-positive definite structure tensor;
and carrying out eigenvalue decomposition on the semi-positive structure tensor to obtain the eigenvalue of the structure tensor of the pixel point.
4. The method of claim 2, wherein the mathematical correspondence between the eigenvalues of the preset structure tensor and the noise reduction weight comprises a relational expression (formula image FDA0002700416880000011);
calculating the noise reduction weight of the pixel points in each image area in the first resolution image based on the eigenvalue of the structure tensor of each pixel point and the mathematical correspondence between the eigenvalue of the preset structure tensor and the noise reduction weight comprises:
calculating the noise reduction weight of the pixel points in each image area in the first resolution image based on the eigenvalues of the structure tensor of each pixel point and the relational expression (formula image FDA0002700416880000012); wherein A is an eigen enhancement coefficient calculated from the eigenvalues of the structure tensor of the pixel point, λ1 and λ2 are the two eigenvalues of the structure tensor of the pixel point, and λ1 is greater than λ2.
5. The method according to claim 2, wherein the calculating the noise reduction weight of the pixel point in each image region in the first resolution image based on the eigenvalue of the structure tensor of each pixel point and the mathematical correspondence between the eigenvalue of the preset structure tensor and the noise reduction weight comprises:
if the pixel point is located in a flat area in the first resolution image, calculating the noise reduction weight of the pixel point based on the characteristic value of the structure tensor of the pixel point and the mathematical correspondence between the characteristic value of a preset structure tensor and the noise reduction weight, and obtaining the noise reduction weight of the pixel point as a first noise reduction weight;
if the pixel point is located in the edge area of the first resolution image, calculating noise reduction weight of the pixel point based on the characteristic value of the structure tensor of the pixel point and the mathematical correspondence between the characteristic value of the preset structure tensor and the noise reduction weight, and taking the noise reduction weight of the pixel point as second noise reduction weight;
if the pixel point is located in the corner region of the first resolution image, calculating noise reduction weight of the pixel point based on the characteristic value of the structure tensor of the pixel point and the mathematical correspondence between the characteristic value of a preset structure tensor and the noise reduction weight, and obtaining the noise reduction weight of the pixel point as third noise reduction weight; the first noise reduction weight is greater than the second noise reduction weight, and the first noise reduction weight is greater than the third noise reduction weight.
6. The method according to any one of claims 1 to 5, wherein the upsampling the at least two frames of the first resolution images according to the noise reduction weight of each pixel point in each image region to generate the second resolution images corresponding to the at least two frames of the first resolution images comprises:
carrying out image registration on the at least two frames of first resolution images to obtain a registration pixel point set, wherein the registration pixel point set comprises a plurality of groups of mutually aligned pixel points;
determining the neighborhood of each group of registration pixel points in the registration pixel point set based on the registration pixel point set;
acquiring the noise reduction weight of each sampling pixel point in the determined neighborhood from the noise reduction weight of each pixel point in each image area according to the determined coordinates of the sampling pixel points in the neighborhood;
and performing upsampling based on the determined noise reduction weight of each sampling pixel point in the neighborhood and the pixel value of the sampling pixel point to generate a second resolution image corresponding to the at least two frames of first resolution images.
7. The method of claim 6, wherein upsampling based on the determined noise reduction weight of each sampling pixel in the neighborhood and the pixel value of the sampling pixel to generate a second resolution image corresponding to the at least two frames of the first resolution image comprises:
calculating the noise reduction weight of each sampling pixel point in the determined neighborhood and the weighted average value of the pixel values of the sampling pixel points;
taking the weighted average value as a pixel value of an up-sampled image obtained by up-sampling to generate an up-sampled image;
and performing post-processing on the up-sampled image to generate a second resolution image corresponding to the at least two frames of the first resolution image.
8. The method of claim 7, wherein post-processing the up-sampled image to generate a second resolution image corresponding to the at least two frames of the first resolution image comprises:
and performing at least one of image sharpening, image contrast enhancement and color correction on the up-sampling image to generate a second resolution image corresponding to the at least two frames of the first resolution image.
9. The method of claim 1, wherein the at least two frames of the first resolution image are RAW images and the second resolution image is an RGB image.
10. An image reconstruction apparatus, characterized in that the apparatus comprises:
the first resolution image acquisition module is used for acquiring at least two frames of first resolution images which are continuously acquired;
the noise reduction weight determining module is used for determining the noise reduction weight of each pixel point in each image area according to the type of the image area on each frame of first resolution image and a preset mapping relation; wherein the type of the image area comprises at least one of a flat area, an edge area and a corner area; the mapping relation comprises corresponding relations between different image areas and different noise reduction weights;
the second resolution image generation module is used for performing up-sampling on the at least two frames of first resolution images according to the noise reduction weight of each pixel point in each image area to generate second resolution images corresponding to the at least two frames of first resolution images; wherein the second resolution image has a higher resolution than the first resolution image.
11. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to carry out the steps of the image reconstruction method according to any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image reconstruction method according to one of claims 1 to 9.
CN202011020356.3A 2020-09-25 2020-09-25 Image reconstruction method and device, electronic equipment and readable storage medium Active CN112163999B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011020356.3A CN112163999B (en) 2020-09-25 2020-09-25 Image reconstruction method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011020356.3A CN112163999B (en) 2020-09-25 2020-09-25 Image reconstruction method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN112163999A true CN112163999A (en) 2021-01-01
CN112163999B CN112163999B (en) 2023-03-31

Family

ID=73863767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011020356.3A Active CN112163999B (en) 2020-09-25 2020-09-25 Image reconstruction method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112163999B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110274370A1 (en) * 2010-05-10 2011-11-10 Yuhi Kondo Image processing apparatus, image processing method and image processing program
CN103675902A (en) * 2012-09-07 2014-03-26 中国石油化工股份有限公司 Optimal direction edge monitoring method
US20170345132A1 (en) * 2014-11-24 2017-11-30 Koninklijke Philips N.V. Simulating dose increase by noise model based multi scale noise reduction
CN106612386A (en) * 2015-10-27 2017-05-03 北京航空航天大学 Noise reduction method combined with spatio-temporal correlation
CN105933714A (en) * 2016-04-20 2016-09-07 济南大学 Three-dimensional video frame rate enhancing method based on depth guide extension block matching
CN108280804A (en) * 2018-01-25 2018-07-13 湖北大学 A kind of multi-frame image super-resolution reconstruction method
CN110766610A (en) * 2019-10-28 2020-02-07 维沃移动通信有限公司 Super-resolution image reconstruction method and electronic equipment
CN111652800A (en) * 2020-04-30 2020-09-11 清华大学深圳国际研究生院 Single image super-resolution method and computer readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YAN Honghai et al.: "Video Super-Resolution Algorithm Based on Structure Tensor", Journal of Computer Applications *
FANG Ming et al.: "Survey of Low-Illumination Video Image Enhancement Algorithms", Journal of Changchun University of Science and Technology (Natural Science Edition) *
JIANG Hui et al.: "Image Denoising with Gradient Bilateral Filtering", Computer Engineering and Applications *

Also Published As

Publication number Publication date
CN112163999B (en) 2023-03-31

Similar Documents

Publication Publication Date Title
CN108229497B (en) Image processing method, image processing apparatus, storage medium, computer program, and electronic device
WO2020038205A1 (en) Target detection method and apparatus, computer-readable storage medium, and computer device
CN113838138B (en) System calibration method, system, device and medium for optimizing feature extraction
CN108876716B (en) Super-resolution reconstruction method and device
CN113674191A (en) Weak light image enhancement method and device based on conditional countermeasure network
CN115272250B (en) Method, apparatus, computer device and storage medium for determining focus position
CN113643333A (en) Image registration method and device, electronic equipment and computer-readable storage medium
CN114298900A (en) Image super-resolution method and electronic equipment
CN107220934B (en) Image reconstruction method and device
CN111861888A (en) Image processing method, image processing device, electronic equipment and storage medium
Yang et al. Image super-resolution reconstruction based on improved Dirac residual network
CN115082322A (en) Image processing method and device, and training method and device of image reconstruction model
CN113689371A (en) Image fusion method and device, computer equipment and storage medium
CN112163999B (en) Image reconstruction method and device, electronic equipment and readable storage medium
Mathur et al. A real-time super-resolution for surveillance thermal cameras using optimized pipeline on embedded edge device
US20210012464A1 (en) Image anti-aliasing method and image anti-aliasing device
CN116630152A (en) Image resolution reconstruction method and device, storage medium and electronic equipment
CN116862765A (en) Medical image super-resolution reconstruction method and system
CN116167918A (en) Training method, processing method and device for super-resolution model of remote sensing image
CN113205451B (en) Image processing method, device, electronic equipment and storage medium
CN115375780A (en) Color difference calculation method and device, electronic equipment, storage medium and product
Wang Interpolation and sharpening for image upsampling
CN115311145A (en) Image processing method and device, electronic device and storage medium
Wu et al. Wavelet Domain Multidictionary Learning for Single Image Super‐Resolution
CN110910436B (en) Distance measuring method, device, equipment and medium based on image information enhancement technology

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant