WO2021083059A1 - 一种图像超分重建方法、图像超分重建装置及电子设备 (Image super-resolution reconstruction method, image super-resolution reconstruction device, and electronic device) - Google Patents

Info

Publication number: WO2021083059A1
Authority: WO (WIPO PCT)
Application number: PCT/CN2020/123345
Prior art keywords: image, processed, target, scene, preview image
Other languages: English (en); French (fr)
Inventor: 何慕威
Original assignee: Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application belongs to the field of image processing technology, and in particular relates to an image super-resolution reconstruction method, an image super-resolution reconstruction device, an electronic device, and a computer-readable storage medium.
  • This application provides an image super-resolution reconstruction method, an image super-resolution reconstruction device, an electronic device, and a computer-readable storage medium, which can improve the clarity of a shooting target in a targeted manner.
  • In a first aspect, an embodiment of the present application provides an image super-resolution reconstruction method, including:
  • obtaining a preview image;
  • if the preview image has a target to be processed, segmenting the preview image to obtain a to-be-processed image that includes the target to be processed and a scene image that does not include it, where the target to be processed is a target that meets a preset condition;
  • performing super-resolution reconstruction on the to-be-processed image to obtain a processed image;
  • fusing the processed image with the scene image to obtain a new preview image.
  • In a second aspect, an embodiment of the present application provides an image super-resolution reconstruction device, including:
  • an obtaining unit, used to obtain a preview image;
  • a segmentation unit, configured to, if the preview image has a target to be processed, segment the preview image to obtain a to-be-processed image that includes the target to be processed and a scene image that does not include it, where the target to be processed is a target that meets a preset condition;
  • a processing unit, configured to perform super-resolution reconstruction on the to-be-processed image to obtain a processed image;
  • a fusion unit, used to fuse the processed image with the scene image to obtain a new preview image.
  • In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the method described in the first aspect is implemented.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium that stores a computer program which, when executed by a processor, implements the method described in the first aspect.
  • An embodiment of the present application further provides a computer program product which, when run on an electronic device, implements the method described in the first aspect.
  • FIG. 1 is a schematic diagram of the implementation flow of an image super-resolution reconstruction method provided by an embodiment of the present application;
  • FIG. 2-1 is a schematic diagram of the target detection frame and the image frame to be processed in the image super-resolution reconstruction method provided by an embodiment of the present application;
  • FIG. 2-2 is another schematic diagram of the target detection frame and the image frame to be processed in the image super-resolution reconstruction method provided by an embodiment of the present application;
  • FIG. 3-1 is a schematic diagram of an image to be processed in the image super-resolution reconstruction method provided by an embodiment of the present application;
  • FIG. 3-2 is a schematic diagram of a scene image in the image super-resolution reconstruction method provided by an embodiment of the present application;
  • FIG. 4 is an example diagram of an overlapping area in the image super-resolution reconstruction method provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of an image super-resolution reconstruction device provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of an electronic device provided by an embodiment of the present application.
  • In view of this, an embodiment of the present application proposes an image super-resolution reconstruction method. The method can be applied to electronic devices such as smart phones, tablet computers, and digital cameras, which is not limited here.
  • The image super-resolution reconstruction method provided in an embodiment of the present application is described below. Please refer to FIG. 1; the method includes:
  • Step 101: Obtain a preview image.
  • An image capture operation can be performed by a camera mounted on the electronic device to obtain a preview image. The camera may be a front camera or a rear camera, which is not limited here.
  • Step 102: If there is a target to be processed in the preview image, segment the preview image to obtain a to-be-processed image that includes the target to be processed and a scene image that does not include it, where the target to be processed is a target that meets a preset condition.
  • In an embodiment of the present application, the preview image is displayed on the screen of the electronic device. If a user's tap instruction on the preview image is received, the target at the coordinate position input by the tap instruction may be determined as the target to be processed; alternatively, the electronic device may intelligently detect whether the preview image has a target to be processed, which is not limited here. That is, the target to be processed may be designated by the user or determined intelligently by the electronic device.
  • After the target to be processed is determined, the preview image can be segmented to obtain the image to be processed, which contains the target to be processed, and at the same time a scene image, which does not contain the target to be processed.
  • Step 103: Perform super-resolution reconstruction on the image to be processed to obtain a processed image.
  • In an embodiment of the present application, the to-be-processed image containing the target is subjected to super-resolution reconstruction. Specifically, the image to be processed can be processed by a preset super-resolution algorithm to obtain a super-resolution processed image whose width and height are both N times those of the original image to be processed, where the value of N is typically 2 or 4. Bilinear interpolation is then applied to the super-resolution processed image to obtain an image of the same size as the image to be processed; this image is the processed image after super-resolution reconstruction of the image to be processed.
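A minimal sketch of this step, assuming a grayscale image held as a NumPy array. Here `fake_super_resolve` is a hypothetical stand-in for the preset super-resolution algorithm (a real implementation would use a trained model); the bilinear resize back to the original size is written out explicitly:

```python
import numpy as np

def fake_super_resolve(img: np.ndarray, n: int = 2) -> np.ndarray:
    """Stand-in for the preset super-resolution algorithm: a simple pixel
    replication that yields an (n*H, n*W) image. A real model would
    produce the same shape but with recovered high-frequency detail."""
    return np.repeat(np.repeat(img, n, axis=0), n, axis=1)

def bilinear_resize(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Minimal bilinear interpolation down to (out_h, out_w)."""
    in_h, in_w = img.shape[:2]
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]                 # vertical interpolation weights
    wx = (xs - x0)[None, :]                 # horizontal interpolation weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def super_resolution_step(to_process: np.ndarray, n: int = 2) -> np.ndarray:
    """Step 103: upscale by N, then bilinearly resize back to the
    original size to obtain the processed image."""
    h, w = to_process.shape[:2]
    upscaled = fake_super_resolve(to_process, n)  # N-times width and height
    return bilinear_resize(upscaled, h, w)        # same size as the input
```

The processed image therefore always has exactly the same width and height as the image to be processed, which is what makes the later fusion by position possible.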
  • Step 104: Fuse the processed image with the scene image to obtain a new preview image.
  • In an embodiment of the present application, the processed image can be merged with the scene image; the merged image is the new preview image, which is displayed on the screen of the electronic device for the user to view. Since the processed image and the image to be processed have exactly the same size, and the image to be processed was segmented from the original preview image, the fusion of the processed image and the scene image can be realized based on the position of the image to be processed in the original preview image and the position of the scene image in the original preview image.
  • At this time, the screen of the electronic device no longer displays the original preview image, but the new preview image.
  • Optionally, the above image super-resolution reconstruction method can be optimized for the night-scene application scenario. In that case, after step 101, the method further includes:
  • Step A1: Detect whether the shooting scene of the preview image is a night scene. In an embodiment of the present application, the electronic device may analyze the gray information of the preview image to determine whether the shooting scene of the preview image is a night scene.
  • Specifically, the above step A1 includes the following. First, the gray value of each pixel of the preview image is obtained, and the average of these gray values is calculated to obtain the average gray value of the preview image. Then, this average is compared with a first gray-average threshold preset in the electronic device; the first gray-average threshold can also be changed by the user according to actual needs, which is not limited here.
  • Since a gray value of 0 is completely black and a gray value of 255 is completely white, the smaller the average gray value of the preview image, the darker the shooting scene. Therefore, if the average gray value of the preview image is less than the first gray-average threshold, it is determined that the shooting scene of the preview image is a night scene; if it is not less than the threshold, it is determined that the shooting scene is not a night scene.
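The night-scene check reduces to a mean-gray comparison; a minimal sketch follows. The threshold value is hypothetical, since the patent only requires that a first gray-average threshold be preset on the device (and be user-adjustable):

```python
import numpy as np

# Hypothetical value: the patent leaves the first gray-average threshold
# to be preset on the device and adjustable by the user.
FIRST_GRAY_MEAN_THRESHOLD = 60.0

def is_night_scene(gray_preview: np.ndarray,
                   threshold: float = FIRST_GRAY_MEAN_THRESHOLD) -> bool:
    """Step A1 sketch: the preview frame is treated as a night scene when
    its average gray value (0 = completely black, 255 = completely white)
    falls below the preset first gray-average threshold."""
    return float(gray_preview.mean()) < threshold
```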
  • If the shooting scene of the preview image is a night scene, it is further detected whether there is a target to be processed in the preview image. Optionally, the step of detecting whether there is a target to be processed in the preview image includes:
  • Step C1: Target detection may be performed on the preview image to obtain one or more targets contained in the preview image.
  • The detected targets can then be filtered so that only the target types the user is interested in are kept. For example, since people are the most common subject in daily shooting, the target type of interest can be set to the human face; step C1 then amounts to performing face detection on the preview image to obtain one or more faces contained in it.
  • The user can also modify the target type of interest according to specific shooting requirements, which is not limited here.
  • The gray value of each pixel of each detected target can be obtained, and the average of these gray values calculated to obtain the gray average of all targets. It should be noted that the gray average is not calculated per individual target, but over all targets taken as a whole.
  • The electronic device can preset a second gray-average threshold, which the user may also change according to actual needs; this is not limited here.
  • If the gray average calculated over all the targets is less than the second gray-average threshold, the brightness of these targets in the preview image is considered too dark to present a good shooting experience, and all of them are determined as targets to be processed. For example, in a night-scene shooting scenario where the target type of interest is the human face, the electronic device detects whether there are faces in the preview image; if there are multiple faces, it calculates the gray average of the multiple faces and compares it with the second gray-average threshold. If that average is less than the threshold, all of the faces are determined as targets to be processed.
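A sketch of this selection rule, with a hypothetical second threshold and face boxes given as `(x, y, w, h)` tuples; in practice the boxes would come from a face detector:

```python
import numpy as np

# Hypothetical value: the second gray-average threshold is preset on the
# device and user-adjustable in the patent.
SECOND_GRAY_MEAN_THRESHOLD = 80.0

def targets_to_process(gray_preview: np.ndarray, face_boxes,
                       threshold: float = SECOND_GRAY_MEAN_THRESHOLD):
    """Pool the pixels of ALL detected targets (face boxes as (x, y, w, h))
    and compare their joint gray average against the second threshold.
    Per the patent, the average is taken over all targets as a whole, so
    either every target is selected for processing or none is."""
    pixels = [gray_preview[y:y + h, x:x + w].ravel()
              for (x, y, w, h) in face_boxes]
    if not pixels:
        return []
    joint_mean = float(np.concatenate(pixels).mean())
    return list(face_boxes) if joint_mean < threshold else []
```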
  • Optionally, the step of segmenting the preview image to obtain the to-be-processed image containing the target to be processed and the scene image not containing it specifically includes:
  • First, the target detection frame of the target to be processed is obtained as the basis for subsequent segmentation. When target detection is performed on the preview image, a target detection frame is generated to frame each detected target. In general, the target detection frame is a rectangle; in some application scenarios it may also be a polygon, which is not limited here. Specifically, when the target is a human face, the target detection frame is a face detection frame.
  • Based on the target detection frame, an image frame to be processed is set in the preview image. The image frame to be processed has the same shape as the target detection frame but a larger size; each boundary of the image frame to be processed is parallel to the corresponding boundary of the target detection frame and separated from it by a preset distance.
  • Figure 2-1 is a schematic diagram of the image frame to be processed set when the target detection frame is a rectangle, and Figure 2-2 is the corresponding schematic diagram when the target detection frame is a hexagon. It can be seen that the distance between the target detection frame and the correspondingly set image frame to be processed is maintained at a fixed value.
  • The image within the image frame to be processed is determined as the image to be processed; that is, the image frame to be processed forms the edge of the image to be processed. As shown in the figure, the shaded part is removed, and the remaining image is the image to be processed.
  • The image outside the target detection frame is determined as the scene image; that is, the target detection frame forms the inner edge of the scene image, and the original edge of the preview image forms the outer edge of the scene image.
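The segmentation just described can be sketched as follows for a rectangular detection frame. The box format `(x, y, w, h)` and the fixed `margin` (standing in for the preset distance between the two frames) are illustrative choices; representing the excluded interior of the scene image by zeroing it out is likewise just one implementation option:

```python
import numpy as np

def split_preview(preview: np.ndarray, det_box, margin: int):
    """Segmentation sketch. det_box is a rectangular target detection
    frame (x, y, w, h); the image frame to be processed is the same
    rectangle expanded by `margin` on every side (clamped to the preview
    bounds). Returns the image to be processed and a scene image in
    which the area inside the detection frame is masked out."""
    h, w = preview.shape[:2]
    x, y, bw, bh = det_box
    # Image frame to be processed: detection frame plus the preset margin.
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1, y1 = min(w, x + bw + margin), min(h, y + bh + margin)
    to_process = preview[y0:y1, x0:x1].copy()
    # Scene image: everything outside the target detection frame
    # (the interior is zeroed here purely as a placeholder).
    scene = preview.copy()
    scene[y:y + bh, x:x + bw] = 0
    return to_process, scene
```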
  • In some embodiments, the above image super-resolution reconstruction method further includes: acquiring the coordinates of each vertex of the image frame to be processed in the preview image.
  • The processed image is the image obtained after the super-resolution reconstruction operation is performed on the image to be processed, so the shape and size of the processed image and the image to be processed are exactly the same. The coordinates of each vertex of the image frame to be processed in the preview image are therefore also the coordinates of each vertex of the processed image in the preview image.
  • These coordinates are based on the image coordinate system, i.e., the coordinate system whose origin is the top-left vertex of the image and whose unit is the pixel: the abscissa u and the ordinate v of a pixel are its column number and row number in the image array, respectively.
  • Accordingly, the above step 104 includes:
  • Since the outer edge of the scene image is the edge of the original preview image, the image coordinate systems of the preview image and the scene image coincide completely. Based on the coordinates of each vertex of the image frame to be processed in the preview image, the processed image can be overlapped with the scene image to obtain an overlapping area.
  • As illustrated in FIG. 4, the solid line part belongs to the scene image and the dotted line part belongs to the processed image; the target detection frame forms the inner edge of the overlapping area, and the image frame to be processed forms its outer edge.
  • The parts outside the overlapping area do not need to be processed: in the scene image, the pixels outside the overlapping area remain unchanged, and in the processed image, the pixels outside the overlapping area also remain unchanged. Only the pixels in the overlapping area are fused, so that the inner edge area of the scene image merges with the edge area of the processed image to obtain a new preview image.
  • Specifically, the above step E3 includes: for any pixel in the overlapping area, obtaining the gray value of the pixel in the scene image and recording it as the first gray value, and obtaining the gray value of the pixel in the processed image and recording it as the second gray value; the average of the first gray value and the second gray value is then determined as the gray value of the pixel after fusion.
  • In this way, each pixel in the overlapping area is obtained by fusing the corresponding pixels of the scene image and the processed image.
  • The new preview image finally obtained is thus composed of three parts: first, the part of the unprocessed scene image outside the image frame to be processed; second, the part of the super-resolution reconstructed processed image within the target detection frame; and third, the overlapping area between the image frame to be processed and the target detection frame, in which the scene image and the processed image are fused.
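The per-pixel fusion in the overlapping ring reduces to averaging the two gray values; a minimal sketch, where the arrays are the co-located patches of the scene image and the processed image:

```python
import numpy as np

def fuse_overlap(scene_patch: np.ndarray,
                 processed_patch: np.ndarray) -> np.ndarray:
    """For each pixel of the overlapping area, the first gray value comes
    from the scene image and the second from the processed image; the
    fused gray value is their average."""
    first = scene_patch.astype(np.float64)
    second = processed_patch.astype(np.float64)
    return (first + second) / 2.0
```

Averaging the two sources in the ring between the detection frame and the to-be-processed frame smooths the seam, so the reconstructed target blends into the untouched scene without a visible border.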
  • As can be seen from the above, in the solution of the present application, after the preview image is obtained, if there is a target to be processed in the preview image, the preview image is segmented to obtain a to-be-processed image containing the target to be processed and a scene image not containing it. Only the image to be processed undergoes super-resolution reconstruction, which reduces the amount of data processed during reconstruction; finally, the processed image is fused with the scene image to obtain a new preview image, achieving a targeted improvement in the clarity of the shooting target.
  • The above image super-resolution reconstruction device 5 includes:
  • an obtaining unit 501, configured to obtain a preview image;
  • a segmentation unit 502, configured to, if there is a target to be processed in the preview image, segment the preview image to obtain a to-be-processed image that includes the target to be processed and a scene image that does not include it, where the target to be processed is a target that meets a preset condition;
  • a processing unit 503, configured to perform super-resolution reconstruction on the to-be-processed image to obtain a processed image;
  • a fusion unit 504, configured to fuse the processed image with the scene image to obtain a new preview image.
  • Optionally, the above image super-resolution reconstruction device 5 further includes:
  • the night scene detection unit is configured to detect whether the shooting scene of the preview image is a night scene after the preview image is acquired;
  • the to-be-processed target detection unit is configured to detect whether there is a to-be-processed target in the preview image if the shooting scene of the preview image is a night scene.
  • the aforementioned night scene detection unit includes:
  • the first calculation subunit is used to calculate the average gray value of the preview image
  • the first comparison subunit is used to compare the gray average value of the preview image with the preset first gray average value threshold
  • a night scene judging subunit configured to determine that the shooting scene of the preview image is a night scene if the average gray value of the preview image is less than the first gray average threshold value
  • the night scene determination subunit is further configured to determine that the shooting scene of the preview image is not a night scene if the average gray value of the preview image is not less than the first gray average threshold value.
  • the aforementioned target detection unit to be processed includes:
  • a target detection subunit, configured to perform target detection on the preview image to obtain one or more targets contained in the preview image;
  • the second calculation subunit is used to calculate the average gray level of all targets
  • the second comparison subunit is used to compare the gray average values of all targets with the preset second gray average threshold value
  • the target determination subunit to be processed is configured to determine all the targets as the target to be processed if the average gray value of all the targets is less than the second gray average threshold value.
  • Optionally, the foregoing segmentation unit 502 includes:
  • the target detection frame obtaining subunit is used to obtain the target detection frame of the target to be processed
  • the to-be-processed image frame setting subunit is configured to set the to-be-processed image frame in the preview image based on the target detection frame, wherein the to-be-processed image frame and the target detection frame have the same shape, and the to-be-processed image frame Each boundary of is parallel to the corresponding boundary of the target detection frame, and each boundary of the image frame to be processed is separated from the corresponding boundary of the target detection frame by a preset distance;
  • the to-be-processed image determining subunit is used to determine the image in the to-be-processed image frame as the to-be-processed image;
  • the scene image determination subunit is used to determine the image outside the target detection frame as the scene image.
  • Optionally, the above image super-resolution reconstruction device further includes:
  • a coordinate acquiring unit configured to acquire the coordinates of each vertex of the image frame to be processed in the preview image
  • the aforementioned fusion unit 504 includes:
  • the overlapping area obtaining subunit is configured to overlap the processed image with the scene image based on the coordinates of each vertex of the to-be-processed image frame in the preview image to obtain an overlapping area;
  • the overlapping area fusion subunit is used for fusing the edges of the scene image and the processed image based on the overlapping area to obtain a new preview image.
  • the foregoing overlapping region fusion subunit includes:
  • a gray acquisition subunit, used to, for any pixel in the overlapping area, acquire the gray value of the pixel in the scene image and record it as the first gray value, and acquire the gray value of the pixel in the processed image and record it as the second gray value;
  • a gray calculation subunit, used to calculate the average of the first gray value and the second gray value;
  • a gray determination subunit, used to determine the average of the first gray value and the second gray value as the gray value of the pixel after fusion.
  • As can be seen from the above, in the solution of the present application, if there is a target to be processed in the preview image, the preview image is segmented to obtain a to-be-processed image containing the target to be processed and a scene image not containing it; only the image to be processed undergoes super-resolution reconstruction, which reduces the amount of data processed during reconstruction, and the processed image is finally fused with the scene image to obtain a new preview image, achieving a targeted improvement in the clarity of the shooting target.
  • the embodiment of the present application also provides an electronic device. Please refer to FIG. 6.
  • The electronic device 6 in the embodiment of the present application includes a memory 601, one or more processors 602 (only one is shown in FIG. 6), and a computer program stored in the memory 601 and runnable on the processor.
  • The memory 601 is used to store software programs and modules; the processor 602 executes various functional applications and data processing by running the software programs and units stored in the memory 601.
  • When running the computer program stored in the memory 601, the processor 602 implements the following steps:
  • obtaining a preview image; if there is a target to be processed in the preview image, segmenting the preview image to obtain a to-be-processed image including the target to be processed and a scene image not including it, where the target to be processed is a target that satisfies a preset condition;
  • performing super-resolution reconstruction on the to-be-processed image to obtain a processed image, and fusing the processed image with the scene image to obtain a new preview image.
  • When running the above computer program stored in the memory 601, the processor 602 also implements the following steps:
  • after the preview image is obtained, detecting whether the shooting scene of the preview image is a night scene; if the shooting scene of the preview image is a night scene, detecting whether there is a target to be processed in the preview image.
  • Optionally, the detection of whether the shooting scene of the preview image is a night scene includes: calculating the average gray value of the preview image and comparing it with the preset first gray-average threshold; if the average gray value of the preview image is less than the first gray-average threshold, determining that the shooting scene of the preview image is a night scene; if it is not less than the threshold, determining that the shooting scene is not a night scene.
  • Optionally, the detection of whether there is a target to be processed in the preview image includes: performing target detection on the preview image, calculating the gray average of all detected targets, and comparing it with the preset second gray-average threshold; if the gray average of all the targets is less than the second gray-average threshold, determining all the targets as targets to be processed.
  • Optionally, the segmentation of the preview image to obtain the to-be-processed image containing the target to be processed and the scene image not containing it includes: obtaining the target detection frame of the target to be processed; setting the image frame to be processed in the preview image, wherein the shape of the image frame to be processed is the same as that of the target detection frame, each boundary of the image frame to be processed is parallel to the corresponding boundary of the target detection frame, and each boundary of the image frame to be processed is separated from the corresponding boundary of the target detection frame by a preset distance; determining the image within the image frame to be processed as the image to be processed; and determining the image outside the target detection frame as the scene image.
  • Optionally, the processor 602 further implements the following steps when running the computer program stored in the memory 601: acquiring the coordinates of each vertex of the image frame to be processed in the preview image.
  • Accordingly, the fusion of the processed image with the scene image to obtain a new preview image includes: overlapping the processed image with the scene image based on the coordinates of each vertex of the image frame to be processed in the preview image to obtain an overlapping area; and fusing the edges of the scene image and the processed image based on the overlapping area to obtain a new preview image.
  • Optionally, fusing the edges of the scene image and the processed image based on the overlapping area to obtain a new preview image includes: for any pixel in the overlapping area, acquiring its gray value in the scene image as the first gray value and its gray value in the processed image as the second gray value; calculating the average of the first gray value and the second gray value; and determining that average as the gray value of the pixel after fusion.
  • the above electronic device may further include: one or more input devices and one or more output devices.
  • the memory 601, the processor 602, the input device and the output device are connected by a bus.
  • The processor 602 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the input device may include a keyboard, a touch panel, a fingerprint sensor (used to collect user fingerprint information and fingerprint orientation information), a microphone, etc.
  • the output device may include a display, a speaker, and the like.
  • the memory 601 may include a read-only memory and a random access memory, and provides instructions and data to the processor 602. A part or all of the memory 601 may also include a non-volatile random access memory. For example, the memory 601 may also store device type information.
  • As can be seen from the above, in the solution of the present application, after the preview image is obtained, if there is a target to be processed in the preview image, the preview image is segmented to obtain a to-be-processed image containing the target to be processed and a scene image not containing it; only the image to be processed undergoes super-resolution reconstruction, reducing the amount of data processed during reconstruction, and the processed image is finally fused with the scene image to obtain a new preview image, achieving a targeted improvement in the clarity of the shooting target.
  • the disclosed device and method may be implemented in other ways.
  • the system embodiment described above is only illustrative.
  • The division of the above modules or units is only a logical function division; in actual implementation there may be other division methods. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • if the aforementioned integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • this application may implement all or part of the processes in the above-mentioned method embodiments by instructing the relevant hardware through a computer program.
  • the above-mentioned computer program can be stored in a computer-readable storage medium; when executed by the processor, the computer program can implement the steps of the foregoing method embodiments.
  • the above-mentioned computer program includes computer program code, and the above-mentioned computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • the above-mentioned computer-readable storage medium may include: any entity or device capable of carrying the above-mentioned computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer-readable memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc.
  • the content contained in the above-mentioned computer-readable storage medium can be appropriately added or deleted according to the requirements of legislation and patent practice in the jurisdiction.
  • in some jurisdictions, according to legislation and patent practice, the computer-readable storage medium does not include electrical carrier signals and telecommunications signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present application discloses an image super-resolution reconstruction method, an image super-resolution reconstruction apparatus, an electronic device, and a computer-readable storage medium. The method includes: acquiring a preview image; if a target to be processed exists in the preview image, segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed, where the target to be processed is a target meeting a preset condition; performing super-resolution reconstruction on the to-be-processed image to obtain a processed image; and fusing the processed image with the scene image to obtain a new preview image. With this solution, the sharpness of the shot target can be improved in a targeted manner while the amount of data the electronic device must process is reduced.

Description

Image super-resolution reconstruction method, image super-resolution reconstruction apparatus, and electronic device

Technical Field

The present application belongs to the technical field of image processing, and in particular relates to an image super-resolution reconstruction method, an image super-resolution reconstruction apparatus, an electronic device, and a computer-readable storage medium.

Background

When people shoot with an electronic device in a relatively poor shooting environment, the shot target often turns out insufficiently sharp. In the prior art, to guarantee the sharpness of the captured image, the user can have the electronic device perform super-resolution reconstruction on the captured image as a whole.

Summary of the Invention

The present application provides an image super-resolution reconstruction method, an image super-resolution reconstruction apparatus, an electronic device, and a computer-readable storage medium, which can improve the sharpness of the shot target in a targeted manner.

In a first aspect, an embodiment of the present application provides an image super-resolution reconstruction method, including:

acquiring a preview image;

if a target to be processed exists in the preview image, segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed, where the target to be processed is a target meeting a preset condition;

performing super-resolution reconstruction on the to-be-processed image to obtain a processed image;

fusing the processed image with the scene image to obtain a new preview image.

In a second aspect, an embodiment of the present application provides an image super-resolution reconstruction apparatus, including:

an acquisition unit configured to acquire a preview image;

a segmentation unit configured to, if a target to be processed exists in the preview image, segment the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed, where the target to be processed is a target meeting a preset condition;

a processing unit configured to perform super-resolution reconstruction on the to-be-processed image to obtain a processed image;

a fusion unit configured to fuse the processed image with the scene image to obtain a new preview image.

In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method of the first aspect when executing the computer program.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the method of the first aspect.

In a fifth aspect, an embodiment of the present application further provides a computer program product that, when run on an electronic device, implements the method of the first aspect.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.

Fig. 1 is a schematic flowchart of the image super-resolution reconstruction method provided by an embodiment of the present application;

Fig. 2-1 is a schematic diagram of the target detection box and the to-be-processed image frame in the image super-resolution reconstruction method provided by an embodiment of the present application;

Fig. 2-2 is another schematic diagram of the target detection box and the to-be-processed image frame in the image super-resolution reconstruction method provided by an embodiment of the present application;

Fig. 3-1 is a schematic diagram of the to-be-processed image in the image super-resolution reconstruction method provided by an embodiment of the present application;

Fig. 3-2 is a schematic diagram of the scene image in the image super-resolution reconstruction method provided by an embodiment of the present application;

Fig. 4 is an example diagram of the overlap region in the image super-resolution reconstruction method provided by an embodiment of the present application;

Fig. 5 is a schematic diagram of the image super-resolution reconstruction apparatus provided by an embodiment of the present application;

Fig. 6 is a schematic diagram of the electronic device provided by an embodiment of the present application.
Embodiments of the Invention

In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present application.

The technical solutions proposed in the present application are illustrated below through specific embodiments.

Embodiment 1

Considering that current super-resolution reconstruction approaches can hardly process the shot target in a targeted manner, an embodiment of the present application proposes an image super-resolution reconstruction method. The method can be applied to electronic devices such as smartphones, tablet computers, and digital cameras, which is not limited here. Taking application to a smartphone as an example, the image super-resolution reconstruction method provided by this embodiment is described below with reference to Fig. 1, and includes:

Step 101: acquire a preview image.

In this embodiment, an image capture operation can be performed through a camera carried by the electronic device to obtain the preview image, where the camera may be a front camera or a rear camera, which is not limited here.

Step 102: if a target to be processed exists in the preview image, segment the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed.

In this embodiment, the target to be processed is a target meeting a preset condition. Optionally, after the preview image is acquired, it may be displayed on the screen of the electronic device; if a tap instruction from the user on the preview image is received, the target at the input coordinate of the tap instruction may be determined as the target to be processed. Alternatively, the electronic device may intelligently detect whether a target to be processed exists in the preview image, which is not limited here. That is, the target to be processed may be determined either by the user or intelligently by the electronic device. When it is determined that a target to be processed exists in the preview image, the preview image can be segmented to obtain the to-be-processed image, which contains the target to be processed, and the scene image, which does not contain the target to be processed.

Step 103: perform super-resolution reconstruction on the to-be-processed image to obtain a processed image.

In this embodiment, to process the target to be processed in a targeted manner, super-resolution reconstruction is performed only on the to-be-processed image that contains the target. Specifically, the to-be-processed image can be processed by a preset super-resolution algorithm to obtain a super-resolved image whose width and height are both N times those of the original to-be-processed image, where N is 2 or 4; then, bilinear interpolation is applied to the super-resolved image to obtain an image of the same size as the to-be-processed image, which is the processed image obtained by super-resolution reconstruction of the to-be-processed image.
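Step 103 (N-fold upscale followed by bilinear interpolation back to the original size) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the "preset super-resolution algorithm" is assumed to be some learned SR model, so a nearest-neighbour upscale is used here purely as a stand-in to keep the sketch self-contained, while the bilinear resize is written out explicitly.

```python
import numpy as np

def super_resolve(patch: np.ndarray, n: int = 2) -> np.ndarray:
    """Stand-in for the preset super-resolution algorithm: produce an
    image whose width and height are n times those of the patch
    (n = 2 or 4). A real implementation would run a learned SR model;
    nearest-neighbour repetition is used here only for illustration."""
    return patch.repeat(n, axis=0).repeat(n, axis=1)

def bilinear_resize(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Bilinear interpolation of a grayscale image to (out_h, out_w)."""
    h, w = img.shape[:2]
    ys = (np.arange(out_h) + 0.5) * h / out_h - 0.5
    xs = (np.arange(out_w) + 0.5) * w / out_w - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def reconstruct(patch: np.ndarray, n: int = 2) -> np.ndarray:
    """Upscale by n, then resize back to the original patch size."""
    up = super_resolve(patch.astype(float), n)
    h, w = patch.shape[:2]
    return bilinear_resize(up, h, w)
```

The key property used later in the fusion step is that the output of `reconstruct` has exactly the same size as the input patch.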
Step 104: fuse the processed image with the scene image to obtain a new preview image.

In this embodiment, after the processed image is obtained, it can be fused with the scene image; the fused image is the new preview image, which is displayed on the screen of the electronic device for the user to view. Since the processed image has exactly the same size as the to-be-processed image, and the to-be-processed image was segmented from the original preview image, the fusion of the processed image and the scene image can be performed based on the position of the to-be-processed image in the original preview image and the position of the scene image in the original preview image. Optionally, after the new preview image is obtained, the screen of the electronic device no longer displays the original preview image but displays the new preview image instead.
Optionally, considering that when the electronic device shoots at night, the dim light makes it harder for the user to capture a sharp image, the image super-resolution reconstruction method can be optimized for the night-scene application scenario. After step 101, the method then includes:

A1: detect whether the shooting scene of the preview image is a night scene.

After the preview image is acquired, the electronic device can analyze the grayscale information of the preview image to determine whether its shooting scene is a night scene. Specifically, step A1 includes:

B1: calculate the gray average value of the preview image.

After the gray values of the pixels of the preview image are obtained, the mean of these gray values can be computed to obtain the gray average value of the preview image.

B2: compare the gray average value of the preview image with a preset first gray average threshold.

The electronic device can preset a first gray average threshold; of course, this threshold may also be changed by the user according to actual needs, which is not limited here.

B3: if the gray average value of the preview image is smaller than the first gray average threshold, determine that the shooting scene of the preview image is a night scene.

B4: if the gray average value of the preview image is not smaller than the first gray average threshold, determine that the shooting scene of the preview image is not a night scene.

A gray value of 0 is pure black and a gray value of 255 is pure white; hence, the smaller the gray average value of the preview image, the darker its shooting scene is considered to be. When the gray average value of the preview image is smaller than the first gray average threshold, the shooting scene of the preview image is determined to be a night scene; when the gray average value is not smaller than the first gray average threshold, the shooting scene is determined not to be a night scene.
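Steps B1 to B4 above amount to one mean computation and one comparison. A minimal sketch follows; the concrete threshold value is an assumption, since the patent only says it is preset and user-adjustable.

```python
import numpy as np

# Hypothetical preset value; the patent leaves the first gray average
# threshold configurable by the device or the user.
FIRST_GRAY_MEAN_THRESHOLD = 60

def is_night_scene(gray: np.ndarray,
                   threshold: int = FIRST_GRAY_MEAN_THRESHOLD) -> bool:
    """Gray value 0 is pure black and 255 pure white, so a preview image
    whose mean gray value falls below the first threshold is treated as
    shot in a night scene."""
    return float(gray.mean()) < threshold
```

Usage: compute the grayscale version of the preview frame once, then gate the rest of the pipeline on `is_night_scene(gray)`.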
A2: if the shooting scene of the preview image is a night scene, detect whether a target to be processed exists in the preview image.

When the shooting scene of the preview image is determined to be a night scene, the electronic device can detect whether the preview image contains a target to be processed, i.e. a target meeting the preset condition. Specifically, this detection includes:

C1: perform target detection on the preview image to obtain one or more targets contained in the preview image.

When the shooting scene of the preview image is determined to be a night scene, target detection can be further performed on the preview image to obtain the one or more targets it contains. Considering that targets come in many types while the user may only care about a few of them, the detected targets can be filtered after target detection so that only the target types of interest to the user are kept. For example, since people are the most common subjects in everyday shooting, the target type of interest can be set to faces; in that application scenario, step C1 amounts to performing face detection on the preview image to obtain the one or more faces it contains. Of course, the user may also modify the target types of interest according to specific shooting needs, which is not limited here.

C2: calculate the gray average value of all targets.

After the one or more targets contained in the preview image are obtained, the gray values of the pixels of each target can be collected and averaged to obtain the gray average value of all targets. Note that the gray average is not computed per individual target here, but over all targets taken as a whole.

C3: compare the gray average value of all targets with a preset second gray average threshold.

The electronic device can preset a second gray average threshold; of course, this threshold may also be changed by the user according to actual needs, which is not limited here.

C4: if the gray average value of all targets is smaller than the second gray average threshold, determine all targets as targets to be processed.

If the gray average computed over all the targets is smaller than the second gray average threshold, these targets are considered too dark in the preview image to give the user a good shooting experience; on this basis, all of them can be determined as targets to be processed. For example, assuming the target type of interest in a night scene is faces, the electronic device detects whether faces exist in the preview image; if there are multiple faces, it computes the gray average value of these faces and compares it with the second gray average threshold; if that average is smaller than the threshold, all of these faces are determined as targets to be processed.
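Steps C2 to C4 pool the pixels of every detected target into one set before averaging, rather than averaging per target. A sketch, with assumed inputs: `boxes` are detection boxes in `(x0, y0, x1, y1)` form with exclusive upper bounds, and the second threshold value is hypothetical.

```python
import numpy as np

def targets_need_processing(gray: np.ndarray, boxes,
                            threshold: int = 40) -> bool:
    """Collect the pixels of all detected targets (e.g. all face boxes)
    into one pool, compare their joint mean gray value against the
    second gray average threshold, and if it is lower, treat every
    target as a target to be processed."""
    pixels = np.concatenate(
        [gray[y0:y1, x0:x1].ravel() for (x0, y0, x1, y1) in boxes]
    )
    return float(pixels.mean()) < threshold
```

Because the mean is taken over the whole pool, one bright face cannot by itself prevent several dark faces from being processed, and vice versa.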
Optionally, the step of segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed specifically includes:

D1: obtain the target detection box of the target to be processed.

Multiple target detection boxes are generated in the course of target detection; therefore, in this step, the target detection box of the target to be processed can be obtained and used as the basis for the subsequent segmentation. Of course, if the target to be processed was determined by the user, a target detection box enclosing it can be generated after the target is determined from the user's tap instruction. In general the target detection box is a rectangle; depending on the algorithm used for target detection, it may also be a polygon, which is not limited here. Specifically, when the target is a face, the target detection box is a face detection box.

D2: based on the target detection box, set a to-be-processed image frame in the preview image.

The to-be-processed image frame has the same shape as the target detection box, its size is larger than that of the target detection box, each of its edges is parallel to the corresponding edge of the target detection box, and each of its edges is spaced from the corresponding edge of the target detection box by a preset distance. As shown in Figs. 2-1 and 2-2, Fig. 2-1 illustrates the to-be-processed image frame set for a rectangular target detection box, and Fig. 2-2 the one set for a hexagonal target detection box. As can be seen, the distance between the target detection box and the corresponding to-be-processed image frame is kept at a fixed value.
D3: determine the image within the to-be-processed image frame as the to-be-processed image.

Taking the preview image as the basis, once the to-be-processed image frame has been set in the preview image, the image within the frame is determined as the to-be-processed image; that is, the to-be-processed image frame is the edge of the to-be-processed image. Taking a rectangular target detection box as an example, in Fig. 3-1 the area outside the to-be-processed image frame is shaded; after this shaded part is removed, what remains is the to-be-processed image.

D4: determine the image outside the target detection box as the scene image.

Taking the preview image as the basis, the image outside the target detection box is determined as the scene image; that is, the target detection box forms the inner edge of the scene image, and the original edge of the preview image forms its outer edge. Again taking a rectangular target detection box as an example, in Fig. 3-2 the area inside the target detection box is shaded; after this shaded part is removed, what remains is the scene image.
Optionally, to fuse the scene image and the processed image better, the image super-resolution reconstruction method further includes:

E1: obtain the coordinates of each vertex of the to-be-processed image frame in the preview image.

The processed image is in fact the image obtained by performing the super-resolution reconstruction operation on the to-be-processed image, so the two have exactly the same shape and size. To fuse the processed image with the scene image, the coordinates of each vertex of the to-be-processed image frame in the preview image can first be obtained; these are the coordinates of each vertex of the processed image in the preview image. Note that these coordinates are based on the image coordinate system, i.e. a coordinate system in pixel units whose origin is the top-left vertex of the image, in which the abscissa u and ordinate v of a pixel are, respectively, the column and row numbers of the pixel in the image array.

Accordingly, step 104 includes:

E2: based on the coordinates of each vertex of the to-be-processed image frame in the preview image, overlap the processed image with the scene image to obtain an overlap region.

Since the outer edge of the scene image is the edge of the original preview image, the image coordinate systems of the preview image and the scene image coincide exactly. On this basis, the coordinates of each vertex of the processed image in the scene image can be determined from the coordinates of each vertex of the to-be-processed image frame in the preview image. As shown in Fig. 4, the solid lines form the scene image and the dashed lines form the processed image; an overlap region, the shaded part, exists between the scene image and the processed image. In fact, as can already be seen at segmentation time, the target detection box forms the inner edge of this overlap region and the to-be-processed image frame forms its outer edge.

E3: based on the overlap region, fuse the edges of the scene image and the processed image to obtain a new preview image.

No further processing is needed outside the overlap region; that is, in the scene image, the pixels outside the overlap region remain unchanged, and in the processed image, the pixels outside the overlap region also remain unchanged. Only the pixels of the overlap region are fused, so that the inner edge region of the scene image blends with the edge region of the processed image, yielding the new preview image. Specifically, step E3 includes:

F1: for any pixel of the overlap region, obtain the gray value of the pixel in the scene image, denoted as the first gray value, and obtain the gray value of the pixel in the processed image, denoted as the second gray value.

Since the overlap region exists both in the scene image and in the processed image, for any pixel of the overlap region, its gray value in the scene image is obtained and denoted as the first gray value, and at the same time its gray value in the processed image is obtained and denoted as the second gray value, as the basis for the subsequent fusion.

F2: calculate the gray average of the first gray value and the second gray value.

F3: determine the gray average of the first gray value and the second gray value as the gray value of the fused pixel.

Steps F1 to F3 are illustrated by a concrete example: suppose a pixel P1 of the overlap region has gray value X1 in the scene image and gray value X2 in the processed image; the average of X1 and X2 is computed and truncated to an integer to obtain gray value X3, which is the gray value of the fused pixel. Through this process, every pixel of the overlap region is obtained by fusing the corresponding pixels of the scene image and the processed image. The new preview image finally obtained thus consists of three parts: first, outside the to-be-processed image frame, the part of the scene image left untouched; second, inside the target detection box, the part of the processed image that has undergone super-resolution reconstruction; and third, between the to-be-processed image frame and the target detection box, the overlap region in which the scene image and the processed image are fused.
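The per-pixel fusion of steps F1 to F3 (the X1/X2 to X3 example) can be written vectorized over all overlap-region pixels at once. A minimal sketch; the integer truncation mirrors the rounding-down described in the example.

```python
import numpy as np

def fuse_overlap(scene_vals: np.ndarray,
                 processed_vals: np.ndarray) -> np.ndarray:
    """For each overlap-region pixel, average its gray value in the
    scene image (first gray value) and in the processed image (second
    gray value), truncating the mean to an integer."""
    mean = (scene_vals.astype(np.int32) + processed_vals.astype(np.int32)) / 2
    return mean.astype(np.uint8)
```

Applied to the ring between the target detection box and the to-be-processed image frame, this leaves pixels outside the ring untouched in both source images, exactly as step E3 requires.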
As can be seen from the above, with this embodiment of the present application, after the preview image is acquired, if a target to be processed exists in it, the preview image is segmented to obtain a to-be-processed image containing the target to be processed and a scene image not containing it, and super-resolution reconstruction is performed only on the to-be-processed image, which reduces the amount of data processed during reconstruction; finally, the processed image is fused with the scene image to obtain a new preview image, achieving a targeted improvement in the sharpness of the shot target.

It should be understood that the numbering of the steps in the above embodiments does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not limit the implementation of the embodiments of the present application in any way.
Embodiment 2

Corresponding to the image super-resolution reconstruction method proposed above, an image super-resolution reconstruction apparatus provided by an embodiment of the present application is described below. Referring to Fig. 5, the image super-resolution reconstruction apparatus 5 includes:

an acquisition unit 501 configured to acquire a preview image;

a segmentation unit 502 configured to, if a target to be processed exists in the preview image, segment the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed, where the target to be processed is a target meeting a preset condition;

a processing unit 503 configured to perform super-resolution reconstruction on the to-be-processed image to obtain a processed image;

a fusion unit 504 configured to fuse the processed image with the scene image to obtain a new preview image.

Optionally, the image super-resolution reconstruction apparatus 5 further includes:

a night-scene detection unit configured to detect, after the preview image is acquired, whether the shooting scene of the preview image is a night scene;

a to-be-processed-target detection unit configured to detect, if the shooting scene of the preview image is a night scene, whether a target to be processed exists in the preview image.

Optionally, the night-scene detection unit includes:

a first calculation subunit configured to calculate the gray average value of the preview image;

a first comparison subunit configured to compare the gray average value of the preview image with a preset first gray average threshold;

a night-scene judgment subunit configured to determine that the shooting scene of the preview image is a night scene if the gray average value of the preview image is smaller than the first gray average threshold;

the night-scene judgment subunit being further configured to determine that the shooting scene of the preview image is not a night scene if the gray average value of the preview image is not smaller than the first gray average threshold.

Optionally, the to-be-processed-target detection unit includes:

a target detection subunit configured to perform target detection on the preview image to obtain one or more targets contained in the preview image;

a second calculation subunit configured to calculate the gray average value of all targets;

a second comparison subunit configured to compare the gray average value of all targets with a preset second gray average threshold;

a to-be-processed-target determination subunit configured to determine all targets as targets to be processed if the gray average value of all targets is smaller than the second gray average threshold.

Optionally, the segmentation unit 502 includes:

a target-detection-box acquisition subunit configured to obtain the target detection box of the target to be processed;

a to-be-processed-image-frame setting subunit configured to set a to-be-processed image frame in the preview image based on the target detection box, where the to-be-processed image frame has the same shape as the target detection box, each edge of the to-be-processed image frame is parallel to the corresponding edge of the target detection box, and each edge of the to-be-processed image frame is spaced from the corresponding edge of the target detection box by a preset distance;

a to-be-processed-image determination subunit configured to determine the image within the to-be-processed image frame as the to-be-processed image;

a scene-image determination subunit configured to determine the image outside the target detection box as the scene image.

Optionally, the image super-resolution reconstruction apparatus further includes:

a coordinate acquisition unit configured to obtain the coordinates of each vertex of the to-be-processed image frame in the preview image.

Accordingly, the fusion unit 504 includes:

an overlap-region acquisition subunit configured to overlap the processed image with the scene image based on the coordinates of each vertex of the to-be-processed image frame in the preview image, to obtain an overlap region;

an overlap-region fusion subunit configured to fuse the edges of the scene image and the processed image based on the overlap region, to obtain a new preview image.

Optionally, the overlap-region fusion subunit includes:

a gray acquisition subunit configured to, for any pixel of the overlap region, obtain the gray value of the pixel in the scene image, denoted as the first gray value, and obtain the gray value of the pixel in the processed image, denoted as the second gray value;

a gray calculation subunit configured to calculate the gray average of the first gray value and the second gray value;

a gray determination subunit configured to determine the gray average of the first gray value and the second gray value as the gray value of the fused pixel.

As can be seen from the above, with this embodiment of the present application, after acquiring the preview image, if a target to be processed exists in it, the image super-resolution reconstruction apparatus segments the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing it, performs super-resolution reconstruction only on the to-be-processed image, which reduces the amount of data processed during reconstruction, and finally fuses the processed image with the scene image to obtain a new preview image, achieving a targeted improvement in the sharpness of the shot target.
Embodiment 3

An embodiment of the present application further provides an electronic device. Referring to Fig. 6, the electronic device 6 of this embodiment includes: a memory 601, one or more processors 602 (only one is shown in Fig. 6), and a computer program stored in the memory 601 and executable on the processor. The memory 601 is used to store software programs and modules; the processor 602 executes various functional applications and data processing by running the software programs and units stored in the memory 601, to obtain the resources corresponding to the preset events. Specifically, the processor 602 implements the following steps when running the computer program stored in the memory 601:

acquiring a preview image;

if a target to be processed exists in the preview image, segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed, where the target to be processed is a target meeting a preset condition;

performing super-resolution reconstruction on the to-be-processed image to obtain a processed image;

fusing the processed image with the scene image to obtain a new preview image.

Assuming the above is the first possible implementation, in a second possible implementation provided on the basis of the first, after the preview image is acquired, the processor 602 further implements the following steps when running the computer program stored in the memory 601:

detecting whether the shooting scene of the preview image is a night scene;

if the shooting scene of the preview image is a night scene, detecting whether a target to be processed exists in the preview image.

In a third possible implementation provided on the basis of the second, detecting whether the shooting scene of the preview image is a night scene includes:

calculating the gray average value of the preview image;

comparing the gray average value of the preview image with a preset first gray average threshold;

if the gray average value of the preview image is smaller than the first gray average threshold, determining that the shooting scene of the preview image is a night scene;

if the gray average value of the preview image is not smaller than the first gray average threshold, determining that the shooting scene of the preview image is not a night scene.

In a fourth possible implementation provided on the basis of the second, detecting whether a target to be processed exists in the preview image includes:

performing target detection on the preview image to obtain one or more targets contained in the preview image;

calculating the gray average value of all targets;

comparing the gray average value of all targets with a preset second gray average threshold;

if the gray average value of all targets is smaller than the second gray average threshold, determining all targets as targets to be processed.

In a fifth possible implementation provided on the basis of any one of the first through fourth possible implementations, segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed includes:

obtaining the target detection box of the target to be processed;

setting a to-be-processed image frame in the preview image based on the target detection box, where the to-be-processed image frame has the same shape as the target detection box, each edge of the to-be-processed image frame is parallel to the corresponding edge of the target detection box, and each edge of the to-be-processed image frame is spaced from the corresponding edge of the target detection box by a preset distance;

determining the image within the to-be-processed image frame as the to-be-processed image;

determining the image outside the target detection box as the scene image.

In a sixth possible implementation provided on the basis of the fifth, the processor 602 further implements the following step when running the computer program stored in the memory 601:

obtaining the coordinates of each vertex of the to-be-processed image frame in the preview image.

Accordingly, fusing the processed image with the scene image to obtain a new preview image includes:

overlapping the processed image with the scene image based on the coordinates of each vertex of the to-be-processed image frame in the preview image, to obtain an overlap region;

based on the overlap region, fusing the edges of the scene image and the processed image to obtain a new preview image.

In a seventh possible implementation provided on the basis of the sixth, fusing the edges of the scene image and the processed image based on the overlap region to obtain a new preview image includes:

for any pixel of the overlap region, obtaining the gray value of the pixel in the scene image, denoted as the first gray value, and obtaining the gray value of the pixel in the processed image, denoted as the second gray value;

calculating the gray average of the first gray value and the second gray value;

determining the gray average of the first gray value and the second gray value as the gray value of the fused pixel.

Further, the electronic device may also include one or more input devices and one or more output devices. The memory 601, the processor 602, the input devices, and the output devices are connected by a bus.

It should be understood that, in the embodiments of the present application, the processor 602 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.

The input devices may include a keyboard, a touchpad, a fingerprint sensor (used to collect the user's fingerprint information and fingerprint orientation information), a microphone, and the like; the output devices may include a display, a speaker, and the like.

The memory 601 may include a read-only memory and a random access memory, and provides instructions and data to the processor 602. Part or all of the memory 601 may also include a non-volatile random access memory. For example, the memory 601 may also store information on the device type.

As can be seen from the above, with this embodiment of the present application, after acquiring the preview image, if a target to be processed exists in it, the electronic device segments the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing it, performs super-resolution reconstruction only on the to-be-processed image, which reduces the amount of data processed during reconstruction, and finally fuses the processed image with the scene image to obtain a new preview image, achieving a targeted improvement in the sharpness of the shot target.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional units and modules is used as an example for illustration. In practical applications, the above functions may be assigned to different functional units and modules as needed; that is, the internal structure of the above apparatus may be divided into different functional units or modules to complete all or some of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.

In the above embodiments, each embodiment is described with its own emphasis. For parts not detailed or recorded in one embodiment, reference may be made to the relevant descriptions of the other embodiments.

Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of external device software and electronic hardware. Whether these functions are executed in hardware or in software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present application.

In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the system embodiments described above are merely illustrative; for example, the division of the modules or units is only a logical function division, and there may be other division methods in actual implementation, e.g. multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solution of this embodiment.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may also be completed by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable storage medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer-readable memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable storage medium can be appropriately added or deleted according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable storage medium does not include electrical carrier signals and telecommunications signals.

The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or substitute equivalents for some of the technical features therein; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.

Claims (20)

  1. An image super-resolution reconstruction method, characterized by comprising:
    acquiring a preview image;
    if a target to be processed exists in the preview image, segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed, wherein the target to be processed is a target meeting a preset condition;
    performing super-resolution reconstruction on the to-be-processed image to obtain a processed image;
    fusing the processed image with the scene image to obtain a new preview image.
  2. The image super-resolution reconstruction method according to claim 1, characterized in that, after the acquiring a preview image, the image super-resolution reconstruction method further comprises:
    detecting whether the shooting scene of the preview image is a night scene;
    if the shooting scene of the preview image is a night scene, detecting whether a target to be processed exists in the preview image.
  3. The image super-resolution reconstruction method according to claim 2, characterized in that the detecting whether the shooting scene of the preview image is a night scene comprises:
    calculating the gray average value of the preview image;
    comparing the gray average value of the preview image with a preset first gray average threshold;
    if the gray average value of the preview image is smaller than the first gray average threshold, determining that the shooting scene of the preview image is a night scene;
    if the gray average value of the preview image is not smaller than the first gray average threshold, determining that the shooting scene of the preview image is not a night scene.
  4. The image super-resolution reconstruction method according to claim 2, characterized in that the detecting whether a target to be processed exists in the preview image comprises:
    performing target detection on the preview image to obtain one or more targets contained in the preview image;
    calculating the gray average value of all targets;
    comparing the gray average value of all targets with a preset second gray average threshold;
    if the gray average value of all targets is smaller than the second gray average threshold, determining all targets as the targets to be processed.
  5. The image super-resolution reconstruction method according to any one of claims 1 to 4, characterized in that the segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed comprises:
    obtaining the target detection box of the target to be processed;
    setting a to-be-processed image frame in the preview image based on the target detection box, wherein the to-be-processed image frame has the same shape as the target detection box, each edge of the to-be-processed image frame is parallel to the corresponding edge of the target detection box, and each edge of the to-be-processed image frame is spaced from the corresponding edge of the target detection box by a preset distance;
    determining the image within the to-be-processed image frame as the to-be-processed image;
    determining the image outside the target detection box as the scene image.
  6. The image super-resolution reconstruction method according to claim 5, characterized in that the image super-resolution reconstruction method further comprises:
    obtaining the coordinates of each vertex of the to-be-processed image frame in the preview image;
    the fusing the processed image with the scene image to obtain a new preview image comprises:
    overlapping the processed image with the scene image based on the coordinates of each vertex of the to-be-processed image frame in the preview image, to obtain an overlap region;
    based on the overlap region, fusing the edges of the scene image and the processed image to obtain a new preview image.
  7. The image super-resolution reconstruction method according to claim 6, characterized in that the fusing the edges of the scene image and the processed image based on the overlap region to obtain a new preview image comprises:
    for any pixel of the overlap region, obtaining the gray value of the pixel in the scene image, denoted as the first gray value, and obtaining the gray value of the pixel in the processed image, denoted as the second gray value;
    calculating the gray average of the first gray value and the second gray value;
    determining the gray average of the first gray value and the second gray value as the gray value of the fused pixel.
  8. The image super-resolution reconstruction method according to any one of claims 1 to 4, characterized in that the performing super-resolution reconstruction on the to-be-processed image to obtain a processed image comprises:
    processing the to-be-processed image by a preset super-resolution algorithm to obtain a super-resolved image, wherein the width and the height of the super-resolved image are both N times those of the to-be-processed image, and N is 2 or 4;
    applying bilinear interpolation to the super-resolved image to obtain an image of the same size as the to-be-processed image, as the processed image.
  9. The image super-resolution reconstruction method according to any one of claims 1 to 4, characterized in that, after the fusing the processed image with the scene image to obtain a new preview image, the image super-resolution reconstruction method further comprises:
    displaying the new preview image.
  10. An image super-resolution reconstruction apparatus, characterized by comprising:
    an acquisition unit configured to acquire a preview image;
    a segmentation unit configured to, if a target to be processed exists in the preview image, segment the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed, wherein the target to be processed is a target meeting a preset condition;
    a processing unit configured to perform super-resolution reconstruction on the to-be-processed image to obtain a processed image;
    a fusion unit configured to fuse the processed image with the scene image to obtain a new preview image.
  11. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the following steps when executing the computer program:
    acquiring a preview image;
    if a target to be processed exists in the preview image, segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed, wherein the target to be processed is a target meeting a preset condition;
    performing super-resolution reconstruction on the to-be-processed image to obtain a processed image;
    fusing the processed image with the scene image to obtain a new preview image.
  12. The electronic device according to claim 11, characterized in that, after the acquiring a preview image, the processor further implements the following steps when executing the computer program:
    detecting whether the shooting scene of the preview image is a night scene;
    if the shooting scene of the preview image is a night scene, detecting whether a target to be processed exists in the preview image.
  13. The electronic device according to claim 12, characterized in that, when the processor executes the computer program, the detecting whether the shooting scene of the preview image is a night scene comprises:
    calculating the gray average value of the preview image;
    comparing the gray average value of the preview image with a preset first gray average threshold;
    if the gray average value of the preview image is smaller than the first gray average threshold, determining that the shooting scene of the preview image is a night scene;
    if the gray average value of the preview image is not smaller than the first gray average threshold, determining that the shooting scene of the preview image is not a night scene.
  14. The electronic device according to claim 12, characterized in that, when the processor executes the computer program, the detecting whether a target to be processed exists in the preview image comprises:
    performing target detection on the preview image to obtain one or more targets contained in the preview image;
    calculating the gray average value of all targets;
    comparing the gray average value of all targets with a preset second gray average threshold;
    if the gray average value of all targets is smaller than the second gray average threshold, determining all targets as the targets to be processed.
  15. The electronic device according to any one of claims 11 to 14, characterized in that, when the processor executes the computer program, the segmenting the preview image to obtain a to-be-processed image containing the target to be processed and a scene image not containing the target to be processed comprises:
    obtaining the target detection box of the target to be processed;
    setting a to-be-processed image frame in the preview image based on the target detection box, wherein the to-be-processed image frame has the same shape as the target detection box, each edge of the to-be-processed image frame is parallel to the corresponding edge of the target detection box, and each edge of the to-be-processed image frame is spaced from the corresponding edge of the target detection box by a preset distance;
    determining the image within the to-be-processed image frame as the to-be-processed image;
    determining the image outside the target detection box as the scene image.
  16. The electronic device according to claim 15, characterized in that the processor further implements the following step when executing the computer program:
    obtaining the coordinates of each vertex of the to-be-processed image frame in the preview image;
    the fusing the processed image with the scene image to obtain a new preview image comprises:
    overlapping the processed image with the scene image based on the coordinates of each vertex of the to-be-processed image frame in the preview image, to obtain an overlap region;
    based on the overlap region, fusing the edges of the scene image and the processed image to obtain a new preview image.
  17. The electronic device according to claim 16, characterized in that, when the processor executes the computer program, the fusing the edges of the scene image and the processed image based on the overlap region to obtain a new preview image comprises:
    for any pixel of the overlap region, obtaining the gray value of the pixel in the scene image, denoted as the first gray value, and obtaining the gray value of the pixel in the processed image, denoted as the second gray value;
    calculating the gray average of the first gray value and the second gray value;
    determining the gray average of the first gray value and the second gray value as the gray value of the fused pixel.
  18. The electronic device according to any one of claims 11 to 14, characterized in that, when the processor executes the computer program, the performing super-resolution reconstruction on the to-be-processed image to obtain a processed image comprises:
    processing the to-be-processed image by a preset super-resolution algorithm to obtain a super-resolved image, wherein the width and the height of the super-resolved image are both N times those of the to-be-processed image, and N is 2 or 4;
    applying bilinear interpolation to the super-resolved image to obtain an image of the same size as the to-be-processed image, as the processed image.
  19. The electronic device according to any one of claims 11 to 14, characterized in that, after the fusing the processed image with the scene image to obtain a new preview image, the processor further implements the following step when executing the computer program:
    displaying the new preview image.
  20. A computer-readable storage medium storing a computer program, characterized in that, when executed by a processor, the computer program implements the steps of the method according to any one of claims 1 to 7.
PCT/CN2020/123345 2019-10-29 2020-10-23 Image super-resolution reconstruction method, image super-resolution reconstruction apparatus, and electronic device WO2021083059A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911037652.1 2019-10-29
CN201911037652.1A CN110796600B (zh) 2019-10-29 2019-10-29 Image super-resolution reconstruction method, image super-resolution reconstruction apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2021083059A1 true WO2021083059A1 (zh) 2021-05-06

Family

ID=69441809

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/123345 WO2021083059A1 (zh) 2019-10-29 2020-10-23 Image super-resolution reconstruction method, image super-resolution reconstruction apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN110796600B (zh)
WO (1) WO2021083059A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313630A (zh) * 2021-05-27 2021-08-27 Aiku Software Technology (Shanghai) Co., Ltd. Image processing method and apparatus, and electronic device
CN116630220A (zh) * 2023-07-25 2023-08-22 Jiangsu Meike Medical Technology Co., Ltd. Fluorescence image depth-of-field fusion imaging method and apparatus, and storage medium
CN118134765A (zh) * 2024-04-30 2024-06-04 National Supercomputer Center in Tianjin Image processing method, device, and storage medium

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN110796600B (zh) * 2019-10-29 2023-08-11 Oppo广东移动通信有限公司 Image super-resolution reconstruction method, image super-resolution reconstruction apparatus, and electronic device
CN111968037A (zh) * 2020-08-28 2020-11-20 Vivo Mobile Communication Co., Ltd. Digital zoom method and apparatus, and electronic device
CN114697543B (zh) * 2020-12-31 2023-05-19 Huawei Technologies Co., Ltd. Image reconstruction method, and related apparatus and system
CN113240687A (zh) * 2021-05-17 2021-08-10 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and readable storage medium
CN113572955A (zh) * 2021-06-25 2021-10-29 Vivo Mobile Communication (Hangzhou) Co., Ltd. Image processing method and apparatus, and electronic device

Citations (7)

Publication number Priority date Publication date Assignee Title
US20100158371A1 (en) * 2008-12-22 2010-06-24 Electronics And Telecommunications Research Institute Apparatus and method for detecting facial image
CN104820966A (zh) * 2015-04-30 2015-08-05 Hohai University Asynchronous multi-video super-resolution method based on space-time registration and deconvolution
CN109064399A (zh) * 2018-07-20 2018-12-21 Guangzhou Shiyuan Electronic Technology Co., Ltd. Image super-resolution reconstruction method and system, computer device, and storage medium thereof
CN110288530A (zh) * 2019-06-28 2019-09-27 Beijing Kingsoft Cloud Network Technology Co., Ltd. Processing method and apparatus for performing super-resolution reconstruction on an image
CN110298790A (zh) * 2019-06-28 2019-10-01 Beijing Kingsoft Cloud Network Technology Co., Ltd. Processing method and apparatus for performing super-resolution reconstruction on an image
CN110310229A (zh) * 2019-06-28 2019-10-08 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, terminal device, and readable storage medium
CN110796600A (zh) * 2019-10-29 2020-02-14 Oppo广东移动通信有限公司 Image super-resolution reconstruction method, image super-resolution reconstruction apparatus, and electronic device

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JP4424518B2 (ja) * 2007-03-27 2010-03-03 Seiko Epson Corporation Image processing apparatus, image processing method, and image processing program
JP5149055B2 (ja) * 2007-12-27 2013-02-20 Eastman Kodak Company Imaging apparatus
CN105517677B (zh) * 2015-05-06 2018-10-12 Peking University Shenzhen Graduate School Post-processing method and apparatus for depth map/disparity map
CN107835661B (zh) * 2015-08-05 2021-03-23 Shenzhen Mindray Bio-Medical Electronics Co., Ltd. Ultrasound image processing system, method and apparatus, and ultrasound diagnostic apparatus


Cited By (4)

Publication number Priority date Publication date Assignee Title
CN113313630A (zh) * 2021-05-27 2021-08-27 Aiku Software Technology (Shanghai) Co., Ltd. Image processing method and apparatus, and electronic device
CN116630220A (zh) * 2023-07-25 2023-08-22 Jiangsu Meike Medical Technology Co., Ltd. Fluorescence image depth-of-field fusion imaging method and apparatus, and storage medium
CN116630220B (zh) * 2023-07-25 2023-11-21 Jiangsu Meike Medical Technology Co., Ltd. Fluorescence image depth-of-field fusion imaging method and apparatus, and storage medium
CN118134765A (zh) * 2024-04-30 2024-06-04 National Supercomputer Center in Tianjin Image processing method, device, and storage medium

Also Published As

Publication number Publication date
CN110796600A (zh) 2020-02-14
CN110796600B (zh) 2023-08-11

Similar Documents

Publication Publication Date Title
WO2021083059A1 (zh) 一种图像超分重建方法、图像超分重建装置及电子设备
CN111028189B (zh) 图像处理方法、装置、存储介质及电子设备
CN111654594B (zh) 图像拍摄方法、图像拍摄装置、移动终端及存储介质
US9451173B2 (en) Electronic device and control method of the same
US11138695B2 (en) Method and device for video processing, electronic device, and storage medium
CN108833784B (zh) 一种自适应构图方法、移动终端及计算机可读存储介质
CN107909569B (zh) 一种花屏检测方法、花屏检测装置及电子设备
CN109040596B (zh) 一种调整摄像头的方法、移动终端及存储介质
CN110335216B (zh) 图像处理方法、图像处理装置、终端设备及可读存储介质
CN113126937B (zh) 一种显示终端调整方法及显示终端
CN107690804B (zh) 一种图像处理方法及用户终端
CN110855957B (zh) 图像处理方法及装置、存储介质和电子设备
CN112258404A (zh) 图像处理方法、装置、电子设备和存储介质
CN108805838B (zh) 一种图像处理方法、移动终端及计算机可读存储介质
CN107357422B (zh) 摄像机-投影交互触控方法、装置及计算机可读存储介质
CN111429371A (zh) 图像处理方法、装置及终端设备
CN108769521B (zh) 一种拍照方法、移动终端及计算机可读存储介质
CN111340722B (zh) 图像处理方法、处理装置、终端设备及可读存储介质
JP2016197377A (ja) 画像補正用コンピュータプログラム、画像補正装置及び画像補正方法
CN108776959B (zh) 图像处理方法、装置及终端设备
CN108810407B (zh) 一种图像处理方法、移动终端及计算机可读存储介质
CN111861965A (zh) 图像逆光检测方法、图像逆光检测装置及终端设备
CN113592753B (zh) 基于工业相机拍摄的图像的处理方法、装置和计算机设备
CN111754411B (zh) 图像降噪方法、图像降噪装置及终端设备
CN111383171B (zh) 一种图片处理方法、系统及终端设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20883502

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20883502

Country of ref document: EP

Kind code of ref document: A1