CN110796600B - Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment - Google Patents

Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment

Info

Publication number
CN110796600B
Authority
CN
China
Prior art keywords
image
processed
scene
preview image
target
Prior art date
Legal status
Active
Application number
CN201911037652.1A
Other languages
Chinese (zh)
Other versions
CN110796600A (en)
Inventor
何慕威
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911037652.1A
Publication of CN110796600A
Priority to PCT/CN2020/123345 (WO2021083059A1)
Application granted
Publication of CN110796600B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses an image super-resolution reconstruction method, an image super-resolution reconstruction device, an electronic device and a computer readable storage medium, wherein the method comprises the following steps: acquiring a preview image; if a target to be processed exists in the preview image, segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing it, the target to be processed being a target that satisfies a preset condition; performing super-resolution reconstruction on the to-be-processed image to obtain a processed image; and fusing the processed image with the scene image to obtain a new preview image. With this scheme, the clarity of the shooting target can be improved in a targeted manner, and the amount of data the electronic device must process can be reduced.

Description

Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image super-resolution reconstruction method, an image super-resolution reconstruction device, an electronic device, and a computer readable storage medium.
Background
When people shoot with electronic devices in poor shooting conditions, the shooting target often turns out insufficiently clear. To ensure the clarity of the captured image, a user can have the electronic device perform super-resolution reconstruction on the entire captured image. However, this whole-image reconstruction approach makes it difficult to process the shooting target in a targeted manner.
Disclosure of Invention
The application provides an image super-resolution reconstruction method, an image super-resolution reconstruction device, an electronic device and a computer readable storage medium, which can improve the clarity of a shooting target in a targeted manner.
In a first aspect, an embodiment of the present application provides an image super-resolution reconstruction method, including:
acquiring a preview image;
if a target to be processed exists in the preview image, segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing it, wherein the target to be processed is a target that satisfies a preset condition;
performing super-resolution reconstruction on the image to be processed to obtain a processed image;
and fusing the processed image with the scene image to obtain a new preview image.
In a second aspect, an embodiment of the present application provides an image super-resolution reconstruction apparatus, including:
an acquisition unit configured to acquire a preview image;
a segmentation unit, used for, if a target to be processed exists in the preview image, segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing it, wherein the target to be processed is a target that satisfies a preset condition;
the processing unit is used for performing super-resolution reconstruction on the image to be processed to obtain a processed image;
and the fusion unit is used for fusing the processed image with the scene image to obtain a new preview image.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method according to the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program which, when executed by a processor, implements a method according to the first aspect.
In a fifth aspect, embodiments of the present application further provide a computer program product which, when run on an electronic device, causes the electronic device to implement the method described in the first aspect.
In the scheme of the application, after the electronic device acquires the preview image, if the preview image contains a target to be processed, the preview image is segmented into a to-be-processed image containing the target and a scene image not containing it. Super-resolution reconstruction is performed only on the to-be-processed image, which reduces the amount of data processed during reconstruction. Finally, the processed image is fused with the scene image to obtain a new preview image, thereby improving the clarity of the shooting target in a targeted manner.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic implementation flow chart of an image super-resolution reconstruction method provided by an embodiment of the present application;
FIG. 2-1 is a schematic diagram of a target detection frame and an image frame to be processed in the image super-resolution reconstruction method according to the embodiment of the present application;
fig. 2-2 is another schematic diagram of a target detection frame and an image frame to be processed in the image super-resolution reconstruction method according to the embodiment of the present application;
FIG. 3-1 is a schematic diagram of an image to be processed in the image super-resolution reconstruction method according to the embodiment of the present application;
fig. 3-2 are schematic diagrams of a scene image in the image super-resolution reconstruction method provided by the embodiment of the application;
FIG. 4 is a schematic diagram of an overlapping region in the image super-resolution reconstruction method according to the embodiment of the present application;
FIG. 5 is a schematic diagram of an image super-resolution reconstruction device according to an embodiment of the present application;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the above technical solution of the present application, the following description will be made by specific examples.
The image super-resolution reconstruction method in the embodiment of the present application can be applied to electronic devices such as smartphones, tablet computers and digital cameras, which is not limited herein. The following describes the image super-resolution reconstruction method provided by the embodiment of the present application by taking its application to a smartphone as an example; referring to fig. 1, the method includes:
Step 101, acquiring a preview image;
in the embodiment of the application, the image capturing operation can be performed by a camera carried by the electronic device to obtain the preview image, wherein the camera can be a front camera or a rear camera, and the method is not limited herein.
Step 102, if a target to be processed exists in the preview image, segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing it;
in the embodiment of the present application, the target to be processed is a target that satisfies a preset condition. Alternatively, after the preview image is acquired, the preview image may be displayed on a screen of the electronic device, and if a click command of the preview image by a user is received, a target at an input coordinate position of the click command may be determined as a target to be processed; alternatively, the electronic device may intelligently detect whether the preview image has an object to be processed, which is not limited herein. That is, the target to be processed may be determined by a user or may be determined intelligently by the electronic device. When the existence of the target to be processed in the preview image is determined, the preview image can be segmented to obtain the image to be processed, wherein the image to be processed contains the target to be processed; and meanwhile, a scene image can be obtained, and the scene image does not contain the target to be processed.
Step 103, performing super-resolution reconstruction on the to-be-processed image to obtain a processed image;
In the embodiment of the present application, in order to process the target to be processed in a targeted manner, super-resolution reconstruction is applied only to the to-be-processed image containing it. Specifically, the to-be-processed image can be processed by a preset super-resolution algorithm to obtain a super-resolved image whose width and height are both N times those of the original to-be-processed image, where N is 2 or 4; bilinear interpolation is then applied to the super-resolved image to obtain an image of the same size as the to-be-processed image. This image is the processed image resulting from the super-resolution reconstruction of the to-be-processed image.
Step 104, fusing the processed image with the scene image to obtain a new preview image.
In the embodiment of the present application, after the processed image is obtained, it can be fused with the scene image; the fused result is the new preview image, which is displayed on the screen of the electronic device for the user to review. Because the processed image and the to-be-processed image are exactly the same size, and the to-be-processed image was segmented out of the original preview image, the fusion can be performed based on the positions of the to-be-processed image and the scene image within the original preview image. Optionally, once the new preview image is obtained, the screen of the electronic device no longer displays the original preview image but the new one.
Optionally, considering that dim lighting at night makes it harder for the user to capture a clear image, the image super-resolution reconstruction method may be optimized for the night-scene application scenario. In that case, after step 101, the method further includes:
A1, detecting whether the shooting scene of the preview image is a night scene;
after the preview image is acquired, the electronic device may analyze the gray information of the preview image to determine whether the shooting scene of the preview image is a night scene. Specifically, the step A1 includes:
B1, calculating a gray average value of the preview image;
After the gray values of the pixels of the preview image are obtained, their average may be calculated to obtain the gray average value of the preview image.
B2, comparing the gray average value of the preview image with a preset first gray average value threshold value;
the electronic device may preset a first gray average value threshold, which may be changed by the user according to the actual requirement, and is not limited herein.
B3, if the gray average value of the preview image is smaller than the first gray average value threshold value, determining that the shooting scene of the preview image is a night scene;
and B4, if the gray average value of the preview image is not smaller than the first gray average value threshold value, determining that the shooting scene of the preview image is not a night scene.
Since a gray value of 0 corresponds to pure black and 255 to pure white, the smaller the gray average value of the preview image, the darker its shooting scene is considered to be. When the gray average value of the preview image is smaller than the first gray average value threshold, the shooting scene is determined to be a night scene; otherwise, it is determined not to be a night scene.
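A minimal Python sketch of steps B1 to B4 follows; the threshold value 60 is an illustrative assumption, since the patent only requires a preset, user-adjustable first gray average value threshold.

import cv2
import numpy as np

def is_night_scene(preview_bgr: np.ndarray, first_threshold: float = 60.0) -> bool:
    """B1-B4: the preview is a night scene if its mean gray value is below the threshold."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    mean_gray = float(gray.mean())      # B1: gray average value of the preview image
    return mean_gray < first_threshold  # B2-B4: darker than the threshold => night scene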
A2, if the shooting scene of the preview image is a night scene, detecting whether a target to be processed exists in the preview image.
When the shooting scene of the preview image is determined to be a night scene, it can be detected whether a target to be processed, i.e., a target satisfying the preset condition, exists in the preview image. Specifically, the step of detecting whether a target to be processed exists in the preview image includes:
C1, performing target detection on the preview image to obtain one or more targets contained in the preview image;
when the shooting scene of the preview image is determined to be a night scene, the target detection can be further performed on the preview image, so as to obtain one or more targets contained in the preview image. Considering that the above-mentioned targets are of various types, and the user may only be concerned with some of the types of targets, the obtained targets may be screened after the target detection is performed on the above-mentioned preview image, and only the types of targets that are of interest to the user remain. For example, in the daily photographing process, the person is the most common photographing object, and thus the object type of interest of the user may be set to be a face, and in this application scenario, the step C1 may be specifically performed to perform face detection on the preview image, so as to obtain one or more faces included in the preview image. Of course, the user may also modify the type of the object of interest according to the specific shooting requirement, which is not limited herein.
C2, calculating the gray average value of all targets;
After the one or more targets contained in the preview image are obtained, the gray values of their pixels can be collected and averaged to obtain the gray average value of all targets. It should be noted that this average is computed over all targets taken together as a whole, not per individual target.
C3, comparing the gray average value of all targets with a preset second gray average value threshold value;
the electronic device may preset a second gray level average threshold, which may be changed by the user according to the actual requirement, but is not limited herein.
And C4, if the gray average value of all the targets is smaller than the second gray average value threshold value, determining all the targets as the targets to be processed.
If the gray average value computed over all targets is smaller than the second gray average value threshold, the targets are considered too dark in the preview image to give the user a good shooting experience, and on that basis all of them can be determined as targets to be processed. For example, suppose that in a night-scene shooting scenario the target type of interest is faces: the electronic device detects whether faces exist in the preview image; if so, it computes the gray average value of those faces and compares it with the second gray average value threshold; if the average is smaller than the threshold, the faces are determined to be the targets to be processed.
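Steps C2 to C4 can be sketched in Python as below; the (x, y, w, h) box format and the threshold value 50 are illustrative assumptions, and the union mask makes explicit that the average is taken over all target pixels as a whole.

import numpy as np

def targets_to_process(gray, boxes, second_threshold=50.0):
    """C2-C4: True if all detected targets together are dark enough to need processing.

    gray is the preview image as a single-channel array; boxes holds
    (x, y, w, h) detection boxes, e.g. from a face detector.
    """
    if not boxes:
        return False
    mask = np.zeros(gray.shape, dtype=bool)
    for (x, y, w, h) in boxes:         # union of all target regions,
        mask[y:y + h, x:x + w] = True  # not one average per target
    return float(gray[mask].mean()) < second_threshold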
Optionally, the step of segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing it specifically includes:
D1, acquiring a target detection frame of the target to be processed;
wherein, in the process of detecting the target, a plurality of target detection frames are generated. Thus, in this step, the target detection frame of the target to be processed described above can be acquired, and this is taken as the basis for the subsequent segmentation. Of course, if the target to be processed is determined by the user, after determining the target to be processed based on the click command input by the user, a target detection frame may be generated to frame the target to be processed therein. In general, the target detection frame is rectangular, and the target detection frame may be polygonal according to the algorithm used for target detection, which is not limited herein. Specifically, when the target is a face, the target detection frame is a face detection frame.
D2, setting an image frame to be processed in the preview image based on the target detection frame;
the image frame to be processed is the same as the target detection frame in shape, the size of the image frame to be processed is larger than that of the target detection frame, each boundary of the image frame to be processed is parallel to the corresponding boundary of the target detection frame, and each boundary of the image frame to be processed is spaced from the corresponding boundary of the target detection frame by a preset distance. As shown in fig. 2-1 and fig. 2-2, fig. 2-1 is a schematic diagram of a corresponding set image frame to be processed when the target detection frame is rectangular; fig. 2-2 is a schematic diagram of a corresponding set image frame to be processed when the target detection frame is hexagonal. It can be seen that the distance between the target detection frame and the corresponding set image frame to be processed is kept at a fixed value.
D3, determining the image in the image frame to be processed as the image to be processed;
the method comprises the steps of setting a frame of an image to be processed in a preview image, and determining the image in the frame of the image to be processed as the image to be processed, namely, the frame of the image to be processed is an edge of the image to be processed, based on the preview image. Taking the target detection frame as a rectangle, as shown in fig. 3-1, the outside of the image frame to be processed is a shadow part, and after the shadow part is removed, the image to be processed is reserved.
And D4, determining the images outside the target detection frame as scene images.
Based on the preview image, the image outside the target detection frame is determined as the scene image; that is, the target detection frame forms the inner edge of the scene image, and the original edge of the preview image forms its outer edge. Taking a rectangular target detection frame as an example, as shown in fig. 3-2, the area inside the target detection frame is shaded; removing the shaded part leaves the scene image.
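For the rectangular case, steps D1 to D4 might be sketched as follows; the 20-pixel margin stands in for the preset distance, and clamping the expanded frame to the preview borders is an added assumption (the detection frame itself is assumed to lie inside the preview).

import numpy as np

def split_preview(preview, det_box, margin=20):
    """D1-D4: cut out the to-be-processed image and the scene image.

    det_box is the (x, y, w, h) target detection frame; the image frame
    to be processed is the detection frame expanded by `margin` on every side.
    """
    img_h, img_w = preview.shape[:2]
    x, y, w, h = det_box
    # D2: each boundary of the image frame to be processed is parallel to and
    # `margin` pixels away from the corresponding detection-frame boundary.
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    x1, y1 = min(x + w + margin, img_w), min(y + h + margin, img_h)
    to_be_processed = preview[y0:y1, x0:x1].copy()  # D3: inside the image frame
    scene = preview.copy()                          # D4: everything outside the
    scene[y:y + h, x:x + w] = 0                     # target detection frame
    return to_be_processed, scene, (x0, y0, x1, y1)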
Optionally, in order to better fuse the scene image and the processed image, the above image super-resolution reconstruction method further includes:
E1, acquiring coordinates of each vertex of the image frame to be processed in the preview image;
the processed image is actually an image obtained after the super-resolution reconstruction operation is performed on the image to be processed, so that the shape and the size of the processed image are completely consistent with those of the image to be processed. In order to realize the fusion of the processed image and the scene image, the coordinates of each vertex of the image frame to be processed in the preview image can be obtained first, and the coordinates of each vertex of the image frame to be processed in the preview image are the coordinates of each vertex of the processed image in the preview image. It should be noted that the coordinates are obtained based on an image coordinate system, that is, a coordinate system constructed by taking the top left vertex of the image as the origin of the coordinate system and taking the pixels as units, and the abscissa u and the ordinate v of the pixels are the number of columns and the number of rows in the image array, respectively.
Accordingly, the step 104 includes:
E2, overlapping the processed image with the scene image based on the coordinates of each vertex of the image frame to be processed in the preview image to obtain an overlapping area;
the outer edge of the scene image is the edge of the original preview image, so that the image coordinate systems of the preview image and the scene image are completely overlapped. Based on this, the coordinates of each vertex of the processed image in the scene image can be determined from the coordinates of each vertex of the image frame to be processed in the preview image. As shown in fig. 4, the solid line portion is composed of a scene image, the broken line portion is composed of a processed image, and an overlapping area, that is, a hatched portion, exists between the scene image and the processed image. In practice, it can be seen that the object detection frame constitutes the inner edge of the overlap region and the image frame to be processed constitutes the outer edge of the overlap region when image segmentation is performed.
And E3, fusing the edges of the scene image and the processed image based on the overlapping area to obtain a new preview image.
Pixels outside the overlapping area need no further processing: they remain unchanged both in the scene image and in the processed image. Only the pixels of the overlapping area are fused, so that the inner edge region of the scene image blends with the edge region of the processed image, yielding the new preview image. Specifically, the step E3 includes:
F1, for any pixel point in the overlapping area, acquiring its gray value in the scene image and recording it as a first gray value, and acquiring its gray value in the processed image and recording it as a second gray value;
since the overlapping area exists in the scene image and also exists in the processed image, the gray value of any pixel point in the overlapping area is recorded as a first gray value, and the gray value of the pixel point in the processed image is simultaneously recorded as a second gray value, so as to be used as the basis of subsequent fusion.
F2, calculating a gray average value of the first gray value and the second gray value;
and F3, determining the gray average value of the first gray value and the second gray value as the gray value of the fused pixel point.
Steps F1 to F3 are illustrated with a specific example. Suppose a pixel P1 in the overlapping area has gray value X1 in the scene image and gray value X2 in the processed image; the average of X1 and X2 is computed and rounded to obtain a gray value X3, which becomes the gray value of the fused pixel. Through this process, every pixel of the overlapping area is obtained by fusing the corresponding pixels of the scene image and the processed image. The final new preview image is thus actually composed of three parts: first, the part of the scene image outside the image frame to be processed, left unprocessed; second, the part of the processed image inside the target detection frame, which has undergone super-resolution reconstruction; and third, the overlapping area of the scene image and the processed image, lying between the image frame to be processed and the target detection frame, where the two are fused.
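Putting E2, E3 and F1 to F3 together for the rectangular case, the fusion might be sketched as below; the frame and box coordinate conventions follow the segmentation sketch above and are assumptions rather than the patent's notation.

import numpy as np

def fuse(scene, processed, frame, det_box):
    """E2-E3 / F1-F3: paste the processed crop back and average the overlap ring.

    frame is (x0, y0, x1, y1) of the image frame to be processed in preview
    coordinates; det_box is the (x, y, w, h) target detection frame, assumed
    to lie strictly inside the frame. Works for grayscale or color arrays.
    """
    x0, y0, x1, y1 = frame
    x, y, w, h = det_box
    out = scene.copy()
    # Inside the detection frame: take the processed (super-resolved) pixels as-is.
    out[y:y + h, x:x + w] = processed[y - y0:y - y0 + h, x - x0:x - x0 + w]
    # Overlap ring: outer edge = image frame to be processed, inner edge = det_box.
    ring = np.zeros(scene.shape[:2], dtype=bool)
    ring[y0:y1, x0:x1] = True
    ring[y:y + h, x:x + w] = False
    # F1-F3: per pixel, the fused value is the rounded mean of the first gray
    # value (scene image) and the second gray value (processed image).
    proc_full = scene.copy()
    proc_full[y0:y1, x0:x1] = processed
    fused = (scene.astype(np.uint16) + proc_full.astype(np.uint16) + 1) // 2
    out[ring] = fused.astype(scene.dtype)[ring]
    return out

With the earlier sketches, new_preview = fuse(scene, reconstruct(to_be_processed), frame, det_box) assembles exactly the three parts described above: untouched scene pixels outside the frame, super-resolved pixels inside the detection frame, and averaged pixels in the ring between them.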
As can be seen from the above, in the embodiment of the present application, after the preview image is obtained, if a target to be processed exists in it, the preview image is segmented into a to-be-processed image containing the target to be processed and a scene image not containing it, and super-resolution reconstruction is performed only on the to-be-processed image, which reduces the amount of data processed during reconstruction; finally, the processed image is fused with the scene image to obtain a new preview image, improving the clarity of the shooting target in a targeted manner.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Corresponding to the image super-resolution reconstruction method proposed above, an image super-resolution reconstruction device provided in the embodiment of the present application is described below, referring to fig. 5, where the image super-resolution reconstruction device 5 includes:
an acquisition unit 501 for acquiring a preview image;
a segmentation unit 502, configured to, if a target to be processed exists in the preview image, segment the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing it, where the target to be processed is a target that satisfies a preset condition;
a processing unit 503, configured to perform super-resolution reconstruction on the image to be processed, so as to obtain a processed image;
and a fusing unit 504, configured to fuse the processed image with the scene image, so as to obtain a new preview image.
Optionally, the above image super-resolution reconstruction apparatus 5 further includes:
a night scene detection unit configured to detect whether a shooting scene of the preview image is a night scene after the preview image is acquired;
a to-be-processed target detection unit, configured to detect whether a target to be processed exists in the preview image if the shooting scene of the preview image is a night scene.
Optionally, the night scene detection unit includes:
a first calculating subunit, configured to calculate a gray average value of the preview image;
the first comparison subunit is used for comparing the gray average value of the preview image with a preset first gray average value threshold value;
a night scene judging subunit, configured to determine that a shooting scene of the preview image is a night scene if the gray average value of the preview image is smaller than the first gray average value threshold value;
the night scene judging subunit is further configured to determine that the captured scene of the preview image is not a night scene if the average gray level value of the preview image is not less than the first average gray level threshold value.
Optionally, the to-be-processed target detection unit includes:
a target detection subunit, configured to perform target detection on the preview image to obtain one or more targets contained in the preview image;
a second calculation subunit for calculating a gray average value of all the targets;
the second comparison subunit is used for comparing the gray average value of all the targets with a preset second gray average value threshold value;
a to-be-processed target determining subunit, configured to determine all the targets as targets to be processed if the gray average value of all the targets is smaller than the second gray average value threshold.
Optionally, the dividing unit 502 includes:
the target detection frame acquisition subunit is used for acquiring the target detection frame of the target to be processed;
a to-be-processed image frame setting subunit, configured to set, based on the target detection frame, an to-be-processed image frame in the preview image, where the to-be-processed image frame has the same shape as the target detection frame, each boundary of the to-be-processed image frame is parallel to a corresponding boundary of the target detection frame, and each boundary of the to-be-processed image frame is spaced from a corresponding boundary of the target detection frame by a preset distance;
a to-be-processed image determining subunit, configured to determine an image within the to-be-processed image frame as a to-be-processed image;
and the scene image determination subunit is used for determining the images outside the target detection frame as scene images.
Optionally, the above image super-resolution reconstruction device further includes:
a coordinate acquiring unit, configured to acquire coordinates of each vertex of the image frame to be processed in the preview image;
Accordingly, the above-mentioned fusing unit 504 includes:
an overlapping region obtaining subunit, configured to overlap the processed image with the scene image based on coordinates of each vertex of the image frame to be processed in the preview image, to obtain an overlapping region;
and the overlapping region fusion subunit is used for fusing the edges of the scene image and the processed image based on the overlapping region to obtain a new preview image.
Optionally, the above overlapping region fusion subunit includes:
a gray level obtaining subunit, configured to obtain, for any pixel point in the overlapping area, a gray level value of the pixel point in the scene image, and record the gray level value as a first gray level value, and obtain a gray level value of the pixel point in the processed image, and record the gray level value as a second gray level value;
a gray level calculating subunit, configured to calculate a gray level average value of the first gray level value and the second gray level value;
and the gray level determining subunit is used for determining the gray level average value of the first gray level value and the second gray level value as the gray level value of the fused pixel point.
As can be seen from the above, in the embodiment of the present application, after the image super-resolution reconstruction device obtains the preview image, if a target to be processed exists in it, the device segments the preview image into a to-be-processed image containing the target to be processed and a scene image not containing it, and performs super-resolution reconstruction only on the to-be-processed image, which reduces the amount of data processed during reconstruction; finally, the processed image is fused with the scene image to obtain a new preview image, improving the clarity of the shooting target in a targeted manner.
Referring to fig. 6, the electronic device 6 in the embodiment of the present application includes: a memory 601, one or more processors 602 (only one is shown in fig. 6), and a computer program stored in the memory 601 and executable on the processors. The memory 601 is used for storing software programs and modules, and the processor 602 executes various functional applications and data processing by running the software programs and units stored in the memory 601. Specifically, the processor 602 implements the following steps by running the computer program stored in the memory 601:
acquiring a preview image;
if a target to be processed exists in the preview image, segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing it, wherein the target to be processed is a target that satisfies a preset condition;
performing super-resolution reconstruction on the image to be processed to obtain a processed image;
and fusing the processed image with the scene image to obtain a new preview image.
Assuming the above is a first possible embodiment, in a second possible embodiment provided on the basis of the first possible embodiment, after the preview image is obtained, the processor 602 further performs the following steps by executing the computer program stored in the memory 601:
detecting whether the shooting scene of the preview image is a night scene or not;
and if the shooting scene of the preview image is a night scene, detecting whether a target to be processed exists in the preview image.
In a third possible embodiment provided by the second possible embodiment, the detecting whether the shooting scene of the preview image is a night scene includes:
calculating a gray average value of the preview image;
comparing the gray average value of the preview image with a preset first gray average value threshold value;
if the gray average value of the preview image is smaller than the first gray average value threshold value, determining that the shooting scene of the preview image is a night scene;
and if the gray average value of the preview image is not smaller than the first gray average value threshold value, determining that the shooting scene of the preview image is not a night scene.
In a fourth possible implementation manner provided by the second possible implementation manner, the detecting whether the target to be processed exists in the preview image includes:
performing target detection on the preview image to obtain one or more targets contained in the preview image;
calculating the gray average value of all targets;
comparing the gray average value of all targets with a preset second gray average value threshold value;
and if the gray average value of all the targets is smaller than the second gray average value threshold value, determining all the targets as the targets to be processed.
In a fifth possible embodiment provided on the basis of any one of the first to fourth possible embodiments, the segmenting of the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed includes:
acquiring a target detection frame of the target to be processed;
setting an image frame to be processed in the preview image based on the target detection frame, wherein the shape of the image frame to be processed is the same as that of the target detection frame, each boundary of the image frame to be processed is parallel to the corresponding boundary of the target detection frame, and each boundary of the image frame to be processed is spaced from the corresponding boundary of the target detection frame by a preset distance;
determining the image in the image frame to be processed as the image to be processed;
and determining the image outside the target detection frame as a scene image.
In a sixth possible implementation provided by the fifth possible implementation, the processor 602 further implements the following steps by running the computer program stored in the memory 601:
acquiring coordinates of each vertex of the image frame to be processed in the preview image;
correspondingly, the fusing the processed image and the scene image to obtain a new preview image comprises the following steps:
overlapping the processed image with the scene image based on the coordinates of each vertex of the image frame to be processed in the preview image to obtain an overlapping region;
and fusing the edges of the scene image and the processed image based on the overlapping area to obtain a new preview image.
In a seventh possible implementation manner provided by the sixth possible implementation manner, the fusing the edges of the scene image and the processed image based on the overlapping area to obtain a new preview image includes:
For any pixel point in the overlapping area, acquiring a gray value of the pixel point in the scene image, and marking the gray value as a first gray value, and acquiring a gray value of the pixel point in the processed image, and marking the gray value as a second gray value;
calculating a gray average value of the first gray value and the second gray value;
and determining the gray average value of the first gray value and the second gray value as the gray value of the fused pixel point.
Further, the electronic device may further include: one or more input devices and one or more output devices. The memory 601, the processor 602, the input devices and the output devices are connected by buses.
It should be appreciated that in embodiments of the present application, the processor 602 may be a central processing unit (Central Processing Unit, CPU), which may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSPs), application specific integrated circuits (Application Specific Integrated Circuit, ASICs), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The input devices may include a keyboard, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of a fingerprint), a microphone, etc., and the output devices may include a display, a speaker, etc.
Memory 601 may include read only memory and random access memory and provides instructions and data to processor 602. Some or all of the memory 601 may also include non-volatile random access memory. For example, the memory 601 may also store information of a device type.
From the above, according to the embodiment of the present application, after obtaining the preview image, if the preview image has the target to be processed, the electronic device segments the preview image to obtain the image to be processed including the target to be processed and the scene image not including the target to be processed, and performs the super-resolution reconstruction only on the image to be processed, thereby reducing the processing data amount during the super-resolution reconstruction, and finally fusing the processed image with the scene image to obtain a new preview image, so as to achieve the targeted improvement of the definition of the shooting target.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not detailed or described in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or as combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the system embodiments described above are merely illustrative, e.g., the division of modules or units described above is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, it implements the steps of each method embodiment. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer readable memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable storage medium may be appropriately increased or decreased according to the legislation and patent practice of the jurisdiction; for example, in some jurisdictions, the computer readable storage medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (9)

1. An image super-resolution reconstruction method is characterized by comprising the following steps:
acquiring a preview image;
if a target to be processed exists in the preview image, segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed, wherein the target to be processed is a target satisfying a preset condition;
performing super-resolution reconstruction on the image to be processed to obtain a processed image;
fusing the processed image with the scene image to obtain a new preview image;
the step of dividing the preview image to obtain a to-be-processed image containing the to-be-processed object and a scene image not containing the to-be-processed object, includes:
acquiring a target detection frame of the target to be processed;
setting an image frame to be processed in the preview image based on the target detection frame, wherein the shape of the image frame to be processed is the same as that of the target detection frame, each boundary of the image frame to be processed is respectively parallel to the corresponding boundary of the target detection frame, each boundary of the image frame to be processed is respectively spaced from the corresponding boundary of the target detection frame by a preset distance, and the image frame to be processed surrounds the target detection frame;
determining an image in the image frame to be processed as an image to be processed;
and determining the image outside the target detection frame as a scene image.
2. The image super-resolution reconstruction method as claimed in claim 1, wherein after said acquiring the preview image, the image super-resolution reconstruction method further comprises:
detecting whether a shooting scene of the preview image is a night scene or not;
and if the shooting scene of the preview image is a night scene, detecting whether a target to be processed exists in the preview image.
3. The image super-resolution reconstruction method as claimed in claim 2, wherein said detecting whether the photographed scene of the preview image is a night scene comprises:
calculating the gray average value of the preview image;
comparing the gray average value of the preview image with a preset first gray average value threshold value;
if the gray average value of the preview image is smaller than the first gray average value threshold value, determining that the shooting scene of the preview image is a night scene;
and if the gray average value of the preview image is not smaller than the first gray average value threshold value, determining that the shooting scene of the preview image is not a night scene.
4. The method for image super-resolution reconstruction as claimed in claim 2, wherein said detecting whether an object to be processed exists in the preview image comprises:
performing target detection on the preview image to obtain one or more targets contained in the preview image;
calculating the gray average value of all targets;
comparing the gray average value of all targets with a preset second gray average value threshold value;
and if the gray average value of all the targets is smaller than the second gray average value threshold value, determining all the targets as the targets to be processed.
5. The image super-resolution reconstruction method according to any one of claims 1 to 4, further comprising:
acquiring coordinates of each vertex of the image frame to be processed in the preview image;
the fusing of the processed image and the scene image to obtain a new preview image comprises the following steps:
overlapping the processed image with the scene image based on the coordinates of each vertex of the image frame to be processed in the preview image to obtain an overlapping region;
and fusing the edges of the scene image and the processed image based on the overlapping area to obtain a new preview image.
6. The method of image super-resolution reconstruction as claimed in claim 5, wherein said fusing the scene image and the edges of the processed image based on the overlapping region to obtain a new preview image comprises:
for any pixel point in the overlapping area, acquiring a gray value of the pixel point in the scene image, and marking the gray value as a first gray value, and acquiring a gray value of the pixel point in the processed image, and marking the gray value as a second gray value;
calculating a gray average value of the first gray value and the second gray value;
and determining the gray average value of the first gray value and the second gray value as the gray value of the fused pixel point.
7. An image super-resolution reconstruction device, comprising:
an acquisition unit configured to acquire a preview image;
a segmentation unit, configured to, if a target to be processed exists in the preview image, segment the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing it, wherein the target to be processed is a target satisfying a preset condition;
the processing unit is used for performing super-resolution reconstruction on the image to be processed to obtain a processed image;
the fusion unit is used for fusing the processed image with the scene image to obtain a new preview image;
the step of dividing the preview image to obtain a to-be-processed image containing the to-be-processed object and a scene image not containing the to-be-processed object, includes:
acquiring a target detection frame of the target to be processed;
setting an image frame to be processed in the preview image based on the target detection frame, wherein the shape of the image frame to be processed is the same as that of the target detection frame, each boundary of the image frame to be processed is respectively parallel to the corresponding boundary of the target detection frame, each boundary of the image frame to be processed is respectively spaced from the corresponding boundary of the target detection frame by a preset distance, and the image frame to be processed surrounds the target detection frame;
Determining an image in the image frame to be processed as an image to be processed;
and determining the image outside the target detection frame as a scene image.
8. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 6.
CN201911037652.1A 2019-10-29 2019-10-29 Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment Active CN110796600B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911037652.1A CN110796600B (en) 2019-10-29 2019-10-29 Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment
PCT/CN2020/123345 WO2021083059A1 (en) 2019-10-29 2020-10-23 Image super-resolution reconstruction method, image super-resolution reconstruction apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911037652.1A CN110796600B (en) 2019-10-29 2019-10-29 Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110796600A CN110796600A (en) 2020-02-14
CN110796600B true CN110796600B (en) 2023-08-11

Family

ID=69441809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911037652.1A Active CN110796600B (en) 2019-10-29 2019-10-29 Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment

Country Status (2)

Country Link
CN (1) CN110796600B (en)
WO (1) WO2021083059A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110796600B (en) * 2019-10-29 2023-08-11 Oppo广东移动通信有限公司 Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment
CN111968037A (en) * 2020-08-28 2020-11-20 维沃移动通信有限公司 Digital zooming method and device and electronic equipment
CN114697543B (en) * 2020-12-31 2023-05-19 华为技术有限公司 Image reconstruction method, related device and system
CN113240687A (en) * 2021-05-17 2021-08-10 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and readable storage medium
CN113313630A (en) * 2021-05-27 2021-08-27 艾酷软件技术(上海)有限公司 Image processing method and device and electronic equipment
CN113572955A (en) * 2021-06-25 2021-10-29 维沃移动通信(杭州)有限公司 Image processing method and device and electronic equipment
CN116630220B (en) * 2023-07-25 2023-11-21 江苏美克医学技术有限公司 Fluorescent image depth-of-field fusion imaging method, device and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4424518B2 (en) * 2007-03-27 2010-03-03 セイコーエプソン株式会社 Image processing apparatus, image processing method, and image processing program
KR101179497B1 (en) * 2008-12-22 2012-09-07 한국전자통신연구원 Apparatus and method for detecting face image
CN104820966B (en) * 2015-04-30 2016-01-06 河海大学 Spatio-temporal asynchronous multi-video super-resolution method based on registration and deconvolution
CN109064399B (en) * 2018-07-20 2023-01-24 广州视源电子科技股份有限公司 Image super-resolution reconstruction method and system, computer device and storage medium thereof
CN110310229B (en) * 2019-06-28 2023-04-18 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, terminal device, and readable storage medium
CN110796600B (en) * 2019-10-29 2023-08-11 Oppo广东移动通信有限公司 Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009177764A (en) * 2007-12-27 2009-08-06 Eastman Kodak Co Imaging apparatus
CN105517677A (en) * 2015-05-06 2016-04-20 北京大学深圳研究生院 Depth/disparity map post-processing method and apparatus
CN107835661A (en) * 2015-08-05 2018-03-23 深圳迈瑞生物医疗电子股份有限公司 Ultrasound image processing system and method, device thereof, and ultrasonic diagnostic apparatus
CN110288530A (en) * 2019-06-28 2019-09-27 北京金山云网络技术有限公司 Processing method and device for performing super-resolution reconstruction on an image
CN110298790A (en) * 2019-06-28 2019-10-01 北京金山云网络技术有限公司 Processing method and device for performing super-resolution reconstruction on an image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on multi-threshold segmentation of digital images based on gray-histogram fitting curves; 梁忠伟 (Liang Zhongwei) et al.; Modern Manufacturing Engineering; 2007-09-18 (No. 09); pp. 103-106 *

Also Published As

Publication number Publication date
WO2021083059A1 (en) 2021-05-06
CN110796600A (en) 2020-02-14

Similar Documents

Publication Publication Date Title
CN110796600B (en) Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment
CN108898567B (en) Image noise reduction method, device and system
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
CN110473185B (en) Image processing method and device, electronic equipment and computer readable storage medium
KR101662846B1 (en) Apparatus and method for generating bokeh in out-of-focus shooting
US20220166930A1 (en) Method and device for focusing on target subject, and electronic device
US10382712B1 (en) Automatic removal of lens flares from images
CN108769634B (en) Image processing method, image processing device and terminal equipment
EP3644599B1 (en) Video processing method and apparatus, electronic device, and storage medium
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
US11538175B2 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
CN109005368B (en) High dynamic range image generation method, mobile terminal and storage medium
CN107909569B (en) Screen-patterned detection method, screen-patterned detection device and electronic equipment
CN109040596B (en) Method for adjusting camera, mobile terminal and storage medium
CN110796041B (en) Principal identification method and apparatus, electronic device, and computer-readable storage medium
CN109286758B (en) High dynamic range image generation method, mobile terminal and storage medium
CN110866486B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN109005367B (en) High dynamic range image generation method, mobile terminal and storage medium
CN108776800B (en) Image processing method, mobile terminal and computer readable storage medium
CN111028276A (en) Image alignment method and device, storage medium and electronic equipment
CN111444555B (en) Temperature measurement information display method and device and terminal equipment
CN110392211B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108805838B (en) Image processing method, mobile terminal and computer readable storage medium
CN111340722B (en) Image processing method, processing device, terminal equipment and readable storage medium
CN110650288B (en) Focusing control method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant