CN110796600A - Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment

Info

Publication number: CN110796600A (application CN201911037652.1A; granted as CN110796600B)
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 何慕威
Assignee / Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Filing / priority date: 2019-10-29
Related application: PCT/CN2020/123345 (published as WO2021083059A1)
Legal status: Granted; Active
Prior art keywords: image, processed, target, preview image, scene

Classifications

    • G06T7/11: Region-based segmentation (under G06T7/00 Image analysis; G06T7/10 Segmentation; edge detection)
    • G06T3/4053: Super resolution, i.e. output image resolution higher than sensor resolution (under G06T3/40 Scaling the whole image or part thereof)
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/20221: Image fusion; image merging (under G06T2207/20212 Image combination)
    • G06T2207/30201: Face (under G06T2207/30196 Human being; person)
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses an image hyper-resolution reconstruction method, an image hyper-resolution reconstruction device, an electronic device and a computer-readable storage medium, wherein the method comprises the following steps: acquiring a preview image; if a target to be processed exists in the preview image, segmenting the preview image to obtain an image to be processed containing the target to be processed and a scene image not containing the target to be processed, wherein the target to be processed is a target meeting a preset condition; performing hyper-resolution reconstruction on the image to be processed to obtain a processed image; and fusing the processed image with the scene image to obtain a new preview image. Through this scheme, the clarity of the photographed target can be improved in a targeted manner, while the amount of data the electronic device must process is reduced.

Description

Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image hyper-resolution reconstruction method, an image hyper-resolution reconstruction apparatus, an electronic device, and a computer-readable storage medium.
Background
When people shoot with electronic equipment in poor shooting conditions, the photographed target is often not clear enough. To guarantee the clarity of the captured image, a user can have the electronic equipment perform hyper-resolution reconstruction on the entire shot image. However, reconstructing the whole image in this way makes it difficult to process the photographed target in a targeted manner.
Disclosure of Invention
The application provides an image super-resolution reconstruction method, an image super-resolution reconstruction device, an electronic device and a computer-readable storage medium, which can improve the clarity of a photographed target in a targeted manner.
In a first aspect, an embodiment of the present application provides an image hyper-resolution reconstruction method, including:
acquiring a preview image;
if the preview image has the target to be processed, segmenting the preview image to obtain the image to be processed containing the target to be processed and a scene image not containing the target to be processed, wherein the target to be processed is a target meeting a preset condition;
performing hyper-resolution reconstruction on the image to be processed to obtain a processed image;
and fusing the processed image and the scene image to obtain a new preview image.
In a second aspect, an embodiment of the present application provides an image hyper-resolution reconstruction apparatus, including:
an acquisition unit configured to acquire a preview image;
the segmentation unit is used for segmenting the preview image to obtain a to-be-processed image containing the to-be-processed target and a scene image not containing the to-be-processed target if the to-be-processed target exists in the preview image, wherein the to-be-processed target is a target meeting a preset condition;
the processing unit is used for carrying out hyper-resolution reconstruction on the image to be processed to obtain a processed image;
and the fusion unit is used for fusing the processed image and the scene image to obtain a new preview image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the method according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method according to the first aspect.
In a fifth aspect, the present application further provides a computer program product, which when run on an electronic device, implements the method according to the first aspect.
It can be seen that, in the scheme of the application, after the electronic device acquires the preview image, if a target to be processed exists in the preview image, the preview image is segmented to obtain an image to be processed that contains the target and a scene image that does not, and only the image to be processed undergoes super-resolution reconstruction, which reduces the amount of data processed during reconstruction; finally, the processed image and the scene image are fused into a new preview image, so that the clarity of the photographed target is improved in a targeted manner.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flow chart of an implementation of an image hyper-resolution reconstruction method provided in an embodiment of the present application;
fig. 2-1 is a schematic diagram of a target detection frame and an image frame to be processed in the image hyper-resolution reconstruction method provided in the embodiment of the present application;
fig. 2-2 is another schematic diagram of a target detection frame and an image frame to be processed in the image hyper-resolution reconstruction method according to the embodiment of the present application;
fig. 3-1 is a schematic diagram of an image to be processed in an image hyper-resolution reconstruction method provided in an embodiment of the present application;
fig. 3-2 is a schematic diagram of a scene image in an image hyper-resolution reconstruction method provided in an embodiment of the present application;
fig. 4 is a schematic diagram of an overlapping region in an image hyper-resolution reconstruction method provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of an image hyper-resolution reconstruction apparatus provided in an embodiment of the present application;
fig. 6 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution of the present application, the following description will be given by way of specific examples.
The image super-resolution reconstruction method in the embodiment of the application can be applied to electronic devices such as smart phones, tablet computers and digital cameras, and is not limited here. Taking the application of the image hyper-resolution reconstruction method to a smart phone as an example, a description is given below of an image hyper-resolution reconstruction method provided in the embodiment of the present application, with reference to fig. 1, including:
Step 101, acquiring a preview image;
in the embodiment of the present application, an image capturing operation may be performed by a camera mounted on an electronic device to obtain a preview image, where the camera may be a front camera or a rear camera, and is not limited herein.
Step 102, if the preview image has a target to be processed, segmenting the preview image to obtain an image to be processed including the target to be processed and a scene image not including the target to be processed;
in the embodiment of the present application, the target to be processed is a target satisfying a preset condition. Optionally, after a preview image is acquired, displaying the preview image on a screen of the electronic device, and if a click instruction of a user on the preview image is received, determining a target at an input coordinate position of the click instruction as a target to be processed; alternatively, the electronic device may intelligently detect whether the preview image has the target to be processed, which is not limited herein. That is, the object to be processed may be determined by a user or may be intelligently determined by the electronic device. When the preview image is determined to have the target to be processed, the preview image can be segmented to obtain the image to be processed, and the image to be processed comprises the target to be processed; and simultaneously, a scene image can be obtained, wherein the scene image does not contain the target to be processed.
Step 103, performing hyper-resolution reconstruction on the image to be processed to obtain a processed image;
in the embodiment of the present application, in order to implement targeted processing on the target to be processed, only the above-mentioned image to be processed containing the target to be processed is subjected to the hyper-resolution reconstruction processing. Specifically, the image to be processed can be processed through a preset super-resolution algorithm to obtain a super-resolution processed image, the width and height of the super-resolution processed image are both N times of the original image to be processed, and the value of N is 2 or 4; and then, using a bilinear interpolation method to the super-resolution processed image to obtain an image with the same size as the image to be processed, wherein the image is the processed image after the super-resolution reconstruction of the image to be processed.
Step 104, fusing the processed image and the scene image to obtain a new preview image.
In this embodiment of the application, after the processed image is obtained, it may be fused with the scene image; the fused image is the new preview image, which is displayed on the screen of the electronic device for the user to view. Because the processed image has exactly the same size as the image to be processed, and the image to be processed was segmented from the original preview image, the fusion of the processed image and the scene image can be performed based on the position of the image to be processed in the original preview image and the position of the scene image in the original preview image. Optionally, once the new preview image is obtained, the screen of the electronic device displays it in place of the original preview image.
Optionally, considering that light in a night scene is dim, making it harder for a user to capture a clear image, the image hyper-resolution reconstruction method may be optimized for the night-scene application. In that case, after step 101, the image hyper-resolution reconstruction method includes:
A1, detecting whether the shooting scene of the preview image is a night scene;
after the preview image is acquired, the electronic device may analyze the grayscale information of the preview image to determine whether a shooting scene of the preview image is a night scene. Specifically, the step a1 includes:
B1, calculating the average gray level of the preview image;
After the gray values of the pixel points of the preview image are obtained, the mean of these gray values can be calculated to obtain the average gray level of the preview image.
B2, comparing the gray average value of the preview image with a preset first gray average value threshold;
the electronic device may preset a first gray-scale average threshold, and of course, the first gray-scale average threshold may also be modified by the user according to actual needs, which is not limited herein.
B3, if the average gray level of the preview image is smaller than the first gray average value threshold, determining that the shooting scene of the preview image is a night scene;
B4, if the average gray level of the preview image is not smaller than the first gray average value threshold, determining that the shooting scene of the preview image is not a night scene.
Since an image is completely black at a gray value of 0 and completely white at a gray value of 255, the smaller the average gray level of the preview image, the darker its shooting scene is considered to be. When the average gray level of the preview image is smaller than the first gray average value threshold, the shooting scene of the preview image is determined to be a night scene; when it is not smaller than the first gray average value threshold, the shooting scene is determined not to be a night scene.
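A minimal Python sketch of steps B1 to B4 follows (illustrative only; the threshold value 60 is an assumed placeholder, since the patent leaves the preset first gray average value threshold to the device or the user):

```python
import cv2
import numpy as np

FIRST_GRAY_AVERAGE_THRESHOLD = 60  # assumed placeholder value

def is_night_scene(preview_bgr: np.ndarray) -> bool:
    # B1: mean gray level of the preview image.
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    mean_gray = float(gray.mean())
    # B2-B4: below the first threshold, the scene is treated as a night scene.
    return mean_gray < FIRST_GRAY_AVERAGE_THRESHOLD
```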
A2, if the shooting scene of the preview image is a night scene, detecting whether a target to be processed exists in the preview image.
When the shooting scene of the preview image is determined to be a night scene, it can be detected whether a target to be processed, that is, a target meeting the preset condition, exists in the preview image. Specifically, the step of detecting whether the target to be processed exists in the preview image includes:
C1, performing target detection on the preview image to obtain one or more targets contained in the preview image;
when the shooting scene of the preview image is determined to be a night scene, the preview image can be further subjected to target detection to obtain one or more targets contained in the preview image. Considering that there are many types of the targets, and the user may only care about some types of the targets, the obtained targets may be filtered after the target detection is performed on the preview image, and only the types of the targets that the user is interested in are reserved. For example, in a daily shooting process, a person is the most common shooting object, and thus, the type of the object in which the user is interested may be set as a human face, in this application scenario, the step C1 may be embodied as performing face detection on the preview image to obtain one or more human faces included in the preview image. Of course, the user may also modify the above object types of interest according to the specific shooting requirements, which is not limited herein.
C2, calculating the gray level average value of all the targets;
After the one or more targets contained in the preview image are obtained, the gray values of their pixel points can be obtained, and the mean of these gray values calculated to obtain the gray average value of all the targets. Note that the gray average value here is computed not per individual target but over all the targets taken as a whole.
C3, comparing the gray level average value of all the targets with a preset second gray level average value threshold value;
the electronic device may preset a second gray level average threshold, and of course, the second gray level average threshold may also be changed by the user according to actual needs, which is not limited herein.
C4, if the gray average value of all the targets is smaller than the second gray average value threshold, determining all the targets as targets to be processed.
If the gray average value computed over all the targets is smaller than the second gray average value threshold, the targets in the preview image are considered too dark to give the user a good shooting experience, and on that basis all of them can be determined as targets to be processed. For example, suppose the type of interest is the human face in a night shooting scene: the electronic device detects whether faces exist in the preview image; if several faces exist, it computes their combined gray average value and compares it with the second gray average value threshold; and if that average is smaller than the threshold, all of the faces are determined as targets to be processed.
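For the face example above, steps C1 to C4 could look like the following sketch (illustrative only): face detection stands in for generic target detection, OpenCV's bundled Haar cascade is used purely for convenience, and the threshold value 80 is an assumed placeholder:

```python
import cv2
import numpy as np

SECOND_GRAY_AVERAGE_THRESHOLD = 80  # assumed placeholder value

def detect_targets_to_process(preview_bgr: np.ndarray):
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    # C1: target detection, here specialized to faces.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return []
    # C2: one gray average over the pixels of all targets taken as a whole,
    # not one average per target.
    all_pixels = np.concatenate(
        [gray[y:y + h, x:x + w].ravel() for (x, y, w, h) in faces])
    # C3/C4: below the second threshold, every detected target becomes
    # a target to be processed.
    if all_pixels.mean() < SECOND_GRAY_AVERAGE_THRESHOLD:
        return [tuple(f) for f in faces]
    return []
```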
Optionally, the step of segmenting the preview image to obtain the to-be-processed image including the to-be-processed target and the scene image not including the to-be-processed target specifically includes:
D1, acquiring a target detection frame of the target to be processed;
in the process of target detection, a plurality of target detection frames are generated. Therefore, in this step, the target detection frame of the target to be processed can be obtained and used as the basis for the subsequent segmentation. Of course, if the object to be processed is determined by the user, the object to be processed may be framed in by generating an object detection frame after determining the object to be processed based on the click command input by the user. The object detection frame is generally rectangular, but may be polygonal according to the algorithm used for object detection, and is not limited herein. Specifically, when the target is a human face, the target detection frame is a human face detection frame.
D2, setting an image frame to be processed in the preview image based on the target detection frame;
the image frame to be processed and the target detection frame have the same shape, the size of the image frame to be processed is larger than that of the target detection frame, each boundary of the image frame to be processed is parallel to the corresponding boundary of the target detection frame, and each boundary of the image frame to be processed is separated from the corresponding boundary of the target detection frame by a preset distance. As shown in fig. 2-1 and 2-2, fig. 2-1 is a schematic view of a to-be-processed image frame correspondingly set when the target detection frame is rectangular; fig. 2-2 is a schematic diagram of a corresponding set to-be-processed image frame when the target detection frame is a hexagon. Therefore, the distance between the target detection frame and the correspondingly set image frame to be processed is kept as a fixed value.
D3, determining the image in the image frame to be processed as the image to be processed;
based on the preview image, after the image frame to be processed is set in the preview image, the image in the image frame to be processed is determined as the image to be processed, that is, the image frame to be processed is the edge of the image to be processed. Taking the target detection frame as a rectangle as an example, as shown in fig. 3-1, the outside of the frame of the image to be processed is a shadow part, and after the shadow part is removed, the remaining image is the image to be processed.
D4, determining the image outside the target detection frame as the scene image.
In the preview image, the image outside the target detection frame is determined as the scene image; that is, the target detection frame forms the inner edge of the scene image, and the original edge of the preview image forms its outer edge. Still taking a rectangular target detection frame, as shown in fig. 3-2, the interior of the target detection frame is shaded, and removing the shaded area leaves the scene image.
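Steps D1 to D4 for a rectangular detection frame might be sketched as follows (illustrative only; the 20-pixel margin is an assumed placeholder for the preset distance, and blanking the frame interior is just one way to represent the scene image):

```python
import numpy as np

PRESET_DISTANCE = 20  # assumed placeholder for the fixed frame-to-frame margin, in pixels

def segment_preview(preview: np.ndarray, detection_box: tuple):
    """Split the preview into a to-be-processed crop and a scene image.

    detection_box is (x, y, w, h) of the target detection frame. Returns the
    crop, the crop's frame (x0, y0, x1, y1) in preview coordinates, and the
    scene image.
    """
    x, y, w, h = detection_box
    height, width = preview.shape[:2]
    # D2: each boundary of the image frame to be processed sits a preset
    # distance outside the corresponding boundary of the detection frame.
    x0, y0 = max(x - PRESET_DISTANCE, 0), max(y - PRESET_DISTANCE, 0)
    x1 = min(x + w + PRESET_DISTANCE, width)
    y1 = min(y + h + PRESET_DISTANCE, height)
    # D3: the image inside the image frame to be processed.
    to_be_processed = preview[y0:y1, x0:x1].copy()
    # D4: the scene image is everything outside the detection frame;
    # the interior is zeroed here to mark it as excluded.
    scene = preview.copy()
    scene[y:y + h, x:x + w] = 0
    return to_be_processed, (x0, y0, x1, y1), scene
```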
Optionally, in order to better fuse the scene image and the processed image, the image hyper-resolution reconstruction method further includes:
E1, obtaining the coordinates of each vertex of the image frame to be processed in the preview image;
the processed image is actually an image obtained by performing a hyper-resolution reconstruction operation on the image to be processed, and therefore, the shape and the size of the processed image are completely consistent with those of the image to be processed. In order to realize the fusion of the processed image and the scene image, the coordinates of each vertex of the to-be-processed image frame in the preview image may be obtained first, and the coordinates of each vertex of the to-be-processed image frame in the preview image are the coordinates of each vertex of the processed image in the preview image. It should be noted that the coordinates are obtained based on an image coordinate system, that is, a coordinate system constructed by taking the top left vertex of the image as the origin of the coordinate system and taking pixels as units, and the abscissa u and the ordinate v of the pixel are the number of columns and the number of rows in the image array, respectively.
Accordingly, the step 104 includes:
E2, overlapping the processed image and the scene image based on the coordinates of each vertex of the image frame to be processed in the preview image to obtain an overlapping area;
since the outer edge of the scene image is the edge of the original preview image, the image coordinate systems of the preview image and the scene image are completely overlapped. Based on the above, the coordinates of each vertex of the processed image in the scene image can be determined according to the coordinates of each vertex of the to-be-processed image frame in the preview image. As shown in fig. 4, the solid line portion constitutes an image of a scene, the dashed line portion constitutes a processed image, and an overlapping region, i.e., a shadow portion, exists between the image of the scene and the processed image. In practice, it can be seen that when image segmentation is performed, the target detection frame forms the inner edge of the overlap region, and the image frame to be processed forms the outer edge of the overlap region.
E3, based on the overlap region, fusing the edges of the scene image and the processed image to obtain a new preview image.
The parts outside the overlap region need no processing: pixel points outside the overlap region in the scene image are kept unchanged, as are pixel points outside the overlap region in the processed image. Only the pixel points in the overlap region are fused, so that the inner edge area of the scene image blends with the edge area of the processed image, yielding a new preview image. Specifically, step E3 includes:
F1, for any pixel point in the overlap region, acquiring the gray value of the pixel point in the scene image and recording it as a first gray value, and acquiring the gray value of the pixel point in the processed image and recording it as a second gray value;
the overlap region exists in both the scene image and the processed image, so that for any pixel point in the overlap region, the gray value of the pixel point in the scene image is obtained and recorded as a first gray value, and the gray value of the pixel point in the processed image is obtained and recorded as a second gray value to serve as the basis of subsequent fusion.
F2, calculating a gray average of the first gray value and the second gray value;
F3, determining the gray average value of the first gray value and the second gray value as the gray value of the fused pixel point.
The steps F1 to F3 are illustrated by a specific example: suppose a pixel point P1 in the overlap region has gray value X1 in the scene image and gray value X2 in the processed image; the average of X1 and X2 is computed and rounded to obtain a gray value X3, which becomes the gray value of the fused pixel point. Through this process, every pixel point in the overlap region is obtained by fusing its corresponding pixel points in the scene image and the processed image. The resulting new preview image is thus actually composed of three parts: first, the unprocessed part of the scene image outside the image frame to be processed; second, the hyper-resolution-reconstructed part of the processed image inside the target detection frame; and third, the fused overlap region of the scene image and the processed image between the image frame to be processed and the target detection frame.
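Putting steps E2 through F3 together, the fusion might be sketched like this (illustrative only; it reuses the hypothetical frame and detection-box conventions from the segmentation sketch above, and assumes the scene image keeps its original pixel values outside the detection frame):

```python
import numpy as np

def fuse(scene: np.ndarray, processed: np.ndarray,
         frame: tuple, detection_box: tuple) -> np.ndarray:
    """Paste the processed crop back and average the overlap ring.

    frame is (x0, y0, x1, y1) of the image frame to be processed in preview
    coordinates; detection_box is (x, y, w, h). processed must have the same
    size as the crop taken from frame.
    """
    x0, y0, x1, y1 = frame
    x, y, w, h = detection_box
    out = scene.copy()
    # Inside the detection frame: keep the processed pixels unchanged.
    out[y:y + h, x:x + w] = processed[y - y0:y - y0 + h, x - x0:x - x0 + w]
    # The overlap ring lies between the detection frame (inner edge)
    # and the image frame to be processed (outer edge).
    ring = np.ones((y1 - y0, x1 - x0), dtype=bool)
    ring[y - y0:y - y0 + h, x - x0:x - x0 + w] = False
    # F1-F3: fused value = rounded mean of the scene and processed values.
    blended = ((scene[y0:y1, x0:x1].astype(np.uint16)
                + processed.astype(np.uint16) + 1) // 2).astype(scene.dtype)
    region = out[y0:y1, x0:x1]
    region[ring] = blended[ring]
    return out
```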
As can be seen from the above, according to the embodiment of the application, after the preview image is obtained, if a target to be processed exists in the preview image, the preview image is segmented to obtain an image to be processed that contains the target and a scene image that does not, and only the image to be processed undergoes super-resolution reconstruction, which reduces the amount of data processed during reconstruction; finally, the processed image and the scene image are fused into a new preview image, so that the clarity of the photographed target is improved in a targeted manner.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
In correspondence to the above-mentioned image hyper-resolution reconstruction method, an image hyper-resolution reconstruction apparatus provided in an embodiment of the present application is described below, with reference to fig. 5, where the image hyper-resolution reconstruction apparatus 5 includes:
an acquiring unit 501, configured to acquire a preview image;
a dividing unit 502, configured to, if a to-be-processed target exists in the preview image, divide the preview image to obtain a to-be-processed image including the to-be-processed target and a scene image not including the to-be-processed target, where the to-be-processed target is a target that meets a preset condition;
the processing unit 503 is configured to perform hyper-resolution reconstruction on the image to be processed to obtain a processed image;
a fusion unit 504, configured to fuse the processed image with the scene image to obtain a new preview image.
Optionally, the image hyper-resolution reconstruction apparatus 5 further includes:
a night scene detection unit, configured to detect whether a shooting scene of the preview image is a night scene after the preview image is acquired;
and the target to be processed detection unit is used for detecting whether the target to be processed exists in the preview image or not if the shooting scene of the preview image is a night scene.
Optionally, the night scene detection unit includes:
a first calculating subunit, configured to calculate a gray-scale average value of the preview image;
the first comparison subunit is used for comparing the gray average value of the preview image with a preset first gray average value threshold;
a night scene judging subunit, configured to determine that a shooting scene of the preview image is a night scene if the gray average of the preview image is smaller than the first gray average threshold;
the night scene determining subunit is further configured to determine that the shooting scene of the preview image is not a night scene if the average grayscale value of the preview image is not smaller than the first average grayscale value threshold.
Optionally, the target to be processed detecting unit includes:
a target detection subunit, configured to perform target detection on the preview image to obtain one or more targets included in the preview image;
the second calculating subunit is used for calculating the gray level average value of all the targets;
the second comparison subunit is used for comparing the gray level average value of all the targets with a preset second gray level average value threshold;
and the target to be processed determining subunit is used for determining all the targets as the target to be processed if the gray average value of all the targets is smaller than the second gray average value threshold.
Optionally, the dividing unit 502 includes:
a target detection frame acquiring subunit, configured to acquire a target detection frame of the target to be processed;
a to-be-processed image frame setting subunit, configured to set a to-be-processed image frame in the preview image based on the target detection frame, where the to-be-processed image frame and the target detection frame have the same shape, each boundary of the to-be-processed image frame is parallel to a corresponding boundary of the target detection frame, and each boundary of the to-be-processed image frame is separated from the corresponding boundary of the target detection frame by a preset distance;
a to-be-processed image determining subunit, configured to determine an image in the to-be-processed image frame as a to-be-processed image;
and a scene image determination subunit, configured to determine an image outside the target detection frame as a scene image.
Optionally, the image hyper-resolution reconstruction apparatus further includes:
a coordinate obtaining unit, configured to obtain coordinates of each vertex of the to-be-processed image frame in the preview image;
accordingly, the fusion unit 504 includes:
an overlap region acquiring subunit, configured to overlap the processed image with the scene image based on coordinates of vertices of the to-be-processed image frame in the preview image, so as to obtain an overlap region;
and an overlap region fusion subunit, configured to fuse, based on the overlap region, the edges of the scene image and the processed image to obtain a new preview image.
Optionally, the overlap region fusion subunit includes:
a gray level obtaining subunit, configured to, for any pixel point in the overlap area, obtain a gray level value of the pixel point in the scene image, and record the gray level value as a first gray level value, and obtain a gray level value of the pixel point in the processed image, and record the gray level value as a second gray level value;
a gray scale operator unit for calculating a gray scale average value of the first gray scale value and the second gray scale value;
and the gray determining subunit is used for determining the gray average value of the first gray value and the second gray value as the gray value of the pixel point after fusion.
As can be seen from the above, according to the embodiment of the application, after the image super-resolution reconstruction device acquires the preview image, if a target to be processed exists in the preview image, the preview image is segmented to obtain an image to be processed that contains the target and a scene image that does not, and only the image to be processed undergoes super-resolution reconstruction, which reduces the amount of data processed during reconstruction; finally, the processed image and the scene image are fused into a new preview image, so that the clarity of the photographed target is improved in a targeted manner.
An embodiment of the present application further provides an electronic device, please refer to fig. 6, where the electronic device 6 in the embodiment of the present application includes: a memory 601, one or more processors 602 (only one shown in fig. 6), and computer programs stored on the memory 601 and executable on the processors. Wherein: the memory 601 is used for storing software programs and modules, and the processor 602 executes various functional applications and data processing by running the software programs and units stored in the memory 601, so as to acquire resources corresponding to the preset events. Specifically, the processor 602 implements the following steps by running the computer program stored in the memory 601:
acquiring a preview image;
if the preview image has a target to be processed, segmenting the preview image to obtain an image to be processed including the target to be processed and a scene image not including the target to be processed, wherein the target to be processed is a target meeting a preset condition;
performing hyper-resolution reconstruction on the image to be processed to obtain a processed image;
and fusing the processed image and the scene image to obtain a new preview image.
Assuming that the above is the first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, after the above acquiring the preview image, the processor 602 further implements the following steps when executing the computer program stored in the memory 601:
detecting whether the shooting scene of the preview image is a night scene or not;
and if the shooting scene of the preview image is a night scene, detecting whether the preview image has a target to be processed.
In a third possible embodiment based on the second possible embodiment, the detecting whether the shooting scene of the preview image is a night scene includes:
calculating the gray average value of the preview image;
comparing the gray level average value of the preview image with a preset first gray level average value threshold value;
if the average gray level of the preview image is smaller than the first average gray level threshold, determining that the shooting scene of the preview image is a night scene;
and if the average gray level of the preview image is not less than the first gray average value threshold, determining that the shooting scene of the preview image is not a night scene.
In a fourth possible implementation manner provided on the basis of the second possible implementation manner, the detecting whether the target to be processed exists in the preview image includes:
performing target detection on the preview image to obtain one or more targets contained in the preview image;
calculating the gray level average value of all targets;
comparing the gray level average value of all the targets with a preset second gray level average value threshold value;
and if the gray level average value of all the targets is smaller than the second gray level average value threshold, all the targets are determined as the targets to be processed.
In a fifth possible embodiment based on the first possible embodiment, the second possible embodiment, the third possible embodiment, or the fourth possible embodiment, the dividing the preview image to obtain the to-be-processed image including the to-be-processed object and the scene image not including the to-be-processed object includes:
acquiring a target detection frame of the target to be processed;
setting a to-be-processed image frame in the preview image based on the target detection frame, wherein the to-be-processed image frame and the target detection frame have the same shape, each boundary of the to-be-processed image frame is parallel to the corresponding boundary of the target detection frame, and each boundary of the to-be-processed image frame is separated from the corresponding boundary of the target detection frame by a preset distance;
determining the image in the image frame to be processed as an image to be processed;
and determining the images outside the target detection frame as scene images.
In a sixth possible implementation manner provided on the basis of the fifth possible implementation manner, the processor 602 further implements the following steps when executing the computer program stored in the memory 601:
obtaining the coordinates of each vertex of the image frame to be processed in the preview image;
correspondingly, the fusing the processed image with the scene image to obtain a new preview image includes:
overlapping the processed image and the scene image based on the coordinates of each vertex of the image frame to be processed in the preview image to obtain an overlapping area;
and fusing the edges of the scene image and the processed image based on the overlapped area to obtain a new preview image.
In a seventh possible embodiment based on the sixth possible embodiment, the obtaining a new preview image by fusing edges of the scene image and the processed image based on the overlap area includes:
aiming at any pixel point in the overlapping area, acquiring a gray value of the pixel point in the scene image, and recording the gray value as a first gray value, and acquiring a gray value of the pixel point in the processed image, and recording the gray value as a second gray value;
calculating the average gray value of the first gray value and the second gray value;
and determining the gray average value of the first gray value and the second gray value as the gray value of the fused pixel point.
Further, the electronic device may further include: one or more input devices and one or more output devices. The memory 601, processor 602, input devices, and output devices are connected by a bus.
It should be understood that, in the embodiments of the present application, the processor 602 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The input devices may include a keyboard, a touch pad, a fingerprint sensor (for collecting fingerprint information of a user and direction information of the fingerprint), a microphone, etc., and the output devices may include a display, a speaker, etc.
Memory 601 may include both read-only memory and random-access memory, and provides instructions and data to processor 602. Some or all of memory 601 may also include non-volatile random access memory. For example, the memory 601 may also store device type information.
As can be seen from the above, according to the embodiment of the application, after the electronic device acquires the preview image, if a target to be processed exists in the preview image, the preview image is segmented to obtain an image to be processed that contains the target and a scene image that does not, and only the image to be processed undergoes super-resolution reconstruction, which reduces the amount of data processed during reconstruction; finally, the processed image and the scene image are fused into a new preview image, so that the clarity of the shooting target is improved in a targeted manner.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the above-described modules or units is only one logical functional division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the flow in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer-readable memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the contents of the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer-readable storage medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An image hyper-resolution reconstruction method, comprising:
acquiring a preview image;
if the preview image has the target to be processed, segmenting the preview image to obtain the image to be processed containing the target to be processed and a scene image not containing the target to be processed, wherein the target to be processed is a target meeting a preset condition;
performing hyper-resolution reconstruction on the image to be processed to obtain a processed image;
and fusing the processed image and the scene image to obtain a new preview image.
2. The image hyper-resolution reconstruction method of claim 1, wherein after said obtaining a preview image, said image hyper-resolution reconstruction method further comprises:
detecting whether a shooting scene of the preview image is a night scene or not;
and if the shooting scene of the preview image is a night scene, detecting whether the preview image has a target to be processed.
3. The image hyper-resolution reconstruction method according to claim 2, wherein the detecting whether the shot scene of the preview image is a night scene comprises:
calculating the gray level average value of the preview image;
comparing the gray level average value of the preview image with a preset first gray level average value threshold value;
if the average gray level of the preview image is smaller than the first average gray level threshold, determining that the shooting scene of the preview image is a night scene;
and if the average gray level of the preview image is not less than the first average gray level threshold, determining that the shooting scene of the preview image is not a night scene.
4. The image hyper-resolution reconstruction method according to claim 2, wherein the detecting whether the object to be processed exists in the preview image comprises:
performing target detection on the preview image to obtain one or more targets contained in the preview image;
calculating the gray level average value of all targets;
comparing the gray level average value of all the targets with a preset second gray level average value threshold value;
and if the gray level average value of all the targets is smaller than the second gray level average value threshold value, all the targets are determined as the targets to be processed.
5. The image hyper-resolution reconstruction method according to any one of claims 1 to 4, wherein the segmenting the preview image to obtain the to-be-processed image including the to-be-processed object and the scene image not including the to-be-processed object comprises:
acquiring a target detection frame of the target to be processed;
setting an image frame to be processed in the preview image based on the target detection frame, wherein the image frame to be processed and the target detection frame have the same shape, each boundary of the image frame to be processed is parallel to the corresponding boundary of the target detection frame, and each boundary of the image frame to be processed is separated from the corresponding boundary of the target detection frame by a preset distance;
determining the image in the image frame to be processed as an image to be processed;
and determining images outside the target detection frame as scene images.
6. The image hyper-resolution reconstruction method of claim 5, further comprising:
obtaining the coordinates of each vertex of the image frame to be processed in the preview image;
the fusing the processed image and the scene image to obtain a new preview image includes:
overlapping the processed image and the scene image based on the coordinates of each vertex of the image frame to be processed in the preview image to obtain an overlapping area;
and based on the overlapping area, fusing the edges of the scene image and the processed image to obtain a new preview image.
7. The image hyper-resolution reconstruction method of claim 6, wherein the fusing the edges of the scene image and the processed image based on the overlap region to obtain a new preview image comprises:
aiming at any pixel point in the overlapping area, acquiring a gray value of the pixel point in the scene image, and recording the gray value as a first gray value, and acquiring a gray value of the pixel point in the processed image, and recording the gray value as a second gray value;
calculating the gray average value of the first gray value and the second gray value;
and determining the gray average value of the first gray value and the second gray value as the gray value of the pixel point after fusion.
8. An image hyper-resolution reconstruction apparatus, comprising:
an acquisition unit configured to acquire a preview image;
the segmentation unit is used for segmenting the preview image to obtain a to-be-processed image containing the to-be-processed target and a scene image not containing the to-be-processed target if the to-be-processed target exists in the preview image, wherein the to-be-processed target is a target meeting a preset condition;
the processing unit is used for carrying out hyper-resolution reconstruction on the image to be processed to obtain a processed image;
and the fusion unit is used for fusing the processed image and the scene image to obtain a new preview image.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201911037652.1A 2019-10-29 2019-10-29 Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment Active CN110796600B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911037652.1A CN110796600B (en) 2019-10-29 2019-10-29 Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment
PCT/CN2020/123345 WO2021083059A1 (en) 2019-10-29 2020-10-23 Image super-resolution reconstruction method, image super-resolution reconstruction apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911037652.1A CN110796600B (en) 2019-10-29 2019-10-29 Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110796600A (en) 2020-02-14
CN110796600B (en) 2023-08-11

Family

ID=69441809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911037652.1A Active CN110796600B (en) 2019-10-29 2019-10-29 Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment

Country Status (2)

Country Link
CN (1) CN110796600B (en)
WO (1) WO2021083059A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313630A * 2021-05-27 2021-08-27 Aiku Software Technology (Shanghai) Co Ltd Image processing method and device and electronic equipment
CN116630220B * 2023-07-25 2023-11-21 Jiangsu Meike Medical Technology Co Ltd Fluorescent image depth-of-field fusion imaging method, device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101179497B1 * 2008-12-22 2012-09-07 Electronics and Telecommunications Research Institute Apparatus and method for detecting face image
CN104820966B * 2015-04-30 2016-01-06 Hohai University Asynchronous multi-video super-resolution method based on space-time registration and deconvolution
CN109064399B * 2018-07-20 2023-01-24 Guangzhou Shiyuan Electronic Technology Co Ltd Image super-resolution reconstruction method and system, computer device and storage medium thereof
CN110310229B * 2019-06-28 2023-04-18 Guangdong Oppo Mobile Telecommunications Corp Ltd Image processing method, image processing apparatus, terminal device, and readable storage medium
CN110796600B * 2019-10-29 2023-08-11 Guangdong Oppo Mobile Telecommunications Corp Ltd Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080240605A1 (en) * 2007-03-27 2008-10-02 Seiko Epson Corporation Image Processing Apparatus, Image Processing Method, and Image Processing Program
JP2009177764A (en) * 2007-12-27 2009-08-06 Eastman Kodak Co Imaging apparatus
CN105517677A * 2015-05-06 2016-04-20 Peking University Shenzhen Graduate School Depth/disparity map post-processing method and apparatus
CN107835661A * 2015-08-05 2018-03-23 Shenzhen Mindray Bio-Medical Electronics Co Ltd Ultrasonic image processing system and method, device thereof, and ultrasonic diagnostic apparatus
CN110288530A * 2019-06-28 2019-09-27 Beijing Kingsoft Cloud Network Technology Co Ltd Processing method and device for performing super-resolution reconstruction on an image
CN110298790A * 2019-06-28 2019-10-01 Beijing Kingsoft Cloud Network Technology Co Ltd Processing method and device for performing super-resolution reconstruction on an image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIANG ZHONGWEI ET AL.: "Research on multi-threshold segmentation of digital images based on gray-histogram fitting curves", MODERN MANUFACTURING ENGINEERING *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021083059A1 * 2019-10-29 2021-05-06 Guangdong Oppo Mobile Telecommunications Corp Ltd Image super-resolution reconstruction method, image super-resolution reconstruction apparatus, and electronic device
CN111968037A * 2020-08-28 2020-11-20 Vivo Mobile Communication Co Ltd Digital zooming method and device and electronic equipment
CN114697543A * 2020-12-31 2022-07-01 Huawei Technologies Co Ltd Image reconstruction method, related device and system
WO2022143921A1 * 2020-12-31 2022-07-07 Huawei Technologies Co Ltd Image reconstruction method, and related apparatus and system
CN114697543B * 2020-12-31 2023-05-19 Huawei Technologies Co Ltd Image reconstruction method, related device and system
CN113240687A * 2021-05-17 2021-08-10 Guangdong Oppo Mobile Telecommunications Corp Ltd Image processing method, image processing device, electronic equipment and readable storage medium
CN113572955A * 2021-06-25 2021-10-29 Vivo Mobile Communication (Hangzhou) Co Ltd Image processing method and device and electronic equipment

Also Published As

Publication number Publication date
CN110796600B (en) 2023-08-11
WO2021083059A1 (en) 2021-05-06

Similar Documents

Publication Publication Date Title
CN110796600B (en) Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment
US10410327B2 (en) Shallow depth of field rendering
US10805543B2 (en) Display method, system and computer-readable recording medium thereof
CN109005368B (en) High dynamic range image generation method, mobile terminal and storage medium
CN108428214B (en) Image processing method and device
US11138695B2 (en) Method and device for video processing, electronic device, and storage medium
CN108769634B (en) Image processing method, image processing device and terminal equipment
CN109951635B (en) Photographing processing method and device, mobile terminal and storage medium
CN104182721A (en) Image processing system and image processing method capable of improving face identification rate
CN108431751B (en) Background removal
CN109286758B (en) High dynamic range image generation method, mobile terminal and storage medium
CN110059666B (en) Attention detection method and device
CN108776800B (en) Image processing method, mobile terminal and computer readable storage medium
CN111028276A (en) Image alignment method and device, storage medium and electronic equipment
CN113452901A (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN110837781B (en) Face recognition method, face recognition device and electronic equipment
CN108805838B (en) Image processing method, mobile terminal and computer readable storage medium
CN113052923A (en) Tone mapping method, tone mapping apparatus, electronic device, and storage medium
CN108769521B (en) Photographing method, mobile terminal and computer readable storage medium
CN108805883B (en) Image segmentation method, image segmentation device and electronic equipment
CN108810407B (en) Image processing method, mobile terminal and computer readable storage medium
CN108776959B (en) Image processing method and device and terminal equipment
CN108270973B (en) Photographing processing method, mobile terminal and computer readable storage medium
CN111970451B (en) Image processing method, image processing device and terminal equipment
CN116263942A (en) Method for adjusting image contrast, storage medium and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant