CN113570531A - Image processing method, image processing device, electronic equipment and computer readable storage medium


Info

Publication number
CN113570531A
Authority
CN
China
Prior art keywords
image
magnification
target
zooming
frame
Prior art date
Legal status
Granted
Application number
CN202110851688.4A
Other languages
Chinese (zh)
Other versions
CN113570531B (en)
Inventor
成凯华
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110851688.4A priority Critical patent/CN113570531B/en
Publication of CN113570531A publication Critical patent/CN113570531A/en
Application granted granted Critical
Publication of CN113570531B publication Critical patent/CN113570531B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 3/4007: Scaling of whole images or parts thereof, e.g. expanding or contracting, based on interpolation, e.g. bilinear interpolation
    • G06T 3/4053: Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 3/4076: Super-resolution scaling using the original low-resolution images to iteratively correct the high-resolution images
    • H04N 23/67: Focus control based on electronic image sensor signals
    • H04N 23/80: Camera processing pipelines; components thereof
    • G06T 2207/20221: Image fusion; image merging (indexing scheme for image analysis or image enhancement; special algorithmic details; image combination)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to an image processing method, comprising: when the target zoom magnification is within a first preset zoom magnification range, performing zoom processing on each frame of image to be processed through multi-frame fusion to obtain a first target image at the target zoom magnification; when the target zoom magnification belongs to a second preset zoom magnification, performing zoom processing on each frame of image to be processed through multi-frame fusion and single-frame super-resolution to obtain a second target image at the target zoom magnification, the second preset zoom magnification being an integer multiple of the upper limit value of the first preset zoom magnification range; and when the target zoom magnification belongs to a third preset zoom magnification, performing zoom processing on each frame of image to be processed through multi-frame fusion, single-frame super-resolution and interpolation to obtain a third target image at the target zoom magnification, the third preset zoom magnification lying between two different second preset zoom magnifications. The method can flexibly realize zoom processing at different zoom magnifications.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
The zoom function of mobile phone photographing generally includes three processing modes: optical zoom, hybrid zoom and digital zoom. Depending on the focal length, a different zoom mode can be selected for processing. However, conventional zoom methods support only a limited set of zoom factors and are inflexible in processing.
Disclosure of Invention
The embodiments of the application provide an image processing method and apparatus, an electronic device and a computer-readable storage medium, which can flexibly realize zoom processing over different focal segments (zoom ranges).
An image processing method comprising:
acquiring a target zoom magnification, and in the case that the target zoom magnification is within a first preset zoom magnification range, performing zoom processing on each frame of image to be processed in a multi-frame fusion mode to obtain a first target image at the target zoom magnification;
in the case that the target zoom magnification belongs to a second preset zoom magnification, performing zoom processing on each frame of the image to be processed in a multi-frame fusion mode and a single-frame super-resolution mode to obtain a second target image at the target zoom magnification; the second preset zoom magnification being an integer multiple of the upper limit value of the first preset zoom magnification range;
in the case that the target zoom magnification belongs to a third preset zoom magnification, performing zoom processing on each frame of the image to be processed in the multi-frame fusion mode, the single-frame super-resolution mode and the interpolation processing mode to obtain a third target image at the target zoom magnification; the third preset zoom magnification lying between two different second preset zoom magnifications.
An image processing apparatus, the apparatus comprising:
a first zoom module, configured to acquire a target zoom magnification and, in the case that the target zoom magnification is within a first preset zoom magnification range, perform zoom processing on each frame of image to be processed in a multi-frame fusion mode to obtain a first target image at the target zoom magnification;
a second zoom module, configured to perform zoom processing on each frame of the image to be processed in a multi-frame fusion mode and a single-frame super-resolution mode in the case that the target zoom magnification belongs to a second preset zoom magnification, to obtain a second target image at the target zoom magnification; the second preset zoom magnification being an integer multiple of the upper limit value of the first preset zoom magnification range;
a third zoom module, configured to perform zoom processing on each frame of the image to be processed in the multi-frame fusion mode, the single-frame super-resolution mode and the interpolation processing mode in the case that the target zoom magnification belongs to a third preset zoom magnification, to obtain a third target image at the target zoom magnification; the third preset zoom magnification lying between two different second preset zoom magnifications.
An electronic device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a target zoom magnification, and in the case that the target zoom magnification is within a first preset zoom magnification range, performing zoom processing on each frame of image to be processed in a multi-frame fusion mode to obtain a first target image at the target zoom magnification;
in the case that the target zoom magnification belongs to a second preset zoom magnification, performing zoom processing on each frame of the image to be processed in a multi-frame fusion mode and a single-frame super-resolution mode to obtain a second target image at the target zoom magnification; the second preset zoom magnification being an integer multiple of the upper limit value of the first preset zoom magnification range;
in the case that the target zoom magnification belongs to a third preset zoom magnification, performing zoom processing on each frame of the image to be processed in the multi-frame fusion mode, the single-frame super-resolution mode and the interpolation processing mode to obtain a third target image at the target zoom magnification; the third preset zoom magnification lying between two different second preset zoom magnifications.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a target zoom magnification, and in the case that the target zoom magnification is within a first preset zoom magnification range, performing zoom processing on each frame of image to be processed in a multi-frame fusion mode to obtain a first target image at the target zoom magnification;
in the case that the target zoom magnification belongs to a second preset zoom magnification, performing zoom processing on each frame of the image to be processed in a multi-frame fusion mode and a single-frame super-resolution mode to obtain a second target image at the target zoom magnification; the second preset zoom magnification being an integer multiple of the upper limit value of the first preset zoom magnification range;
in the case that the target zoom magnification belongs to a third preset zoom magnification, performing zoom processing on each frame of the image to be processed in the multi-frame fusion mode, the single-frame super-resolution mode and the interpolation processing mode to obtain a third target image at the target zoom magnification; the third preset zoom magnification lying between two different second preset zoom magnifications.
According to the image processing method, the image processing apparatus, the electronic device and the computer-readable storage medium, in the case that the target zoom magnification is within the first preset zoom magnification range, zoom processing is performed on each frame of image to be processed in a multi-frame fusion mode, so that the first target image at the target zoom magnification can be generated quickly. In the case that the target zoom magnification belongs to a second preset zoom magnification, zoom processing is performed on each frame of image to be processed by combining the multi-frame fusion mode with a single-frame super-resolution mode, which can add image detail and reduce the loss of image quality, thereby obtaining a second target image at the target zoom magnification; the second preset zoom magnification is an integer multiple of the upper limit value of the first preset zoom magnification range. In the case that the target zoom magnification belongs to a third preset zoom magnification, zoom processing is performed on each frame of image to be processed through the multi-frame fusion mode, the single-frame super-resolution mode and an interpolation processing mode to obtain a third target image at the target zoom magnification, where the third preset zoom magnification lies between two different second preset zoom magnifications. According to the preset zoom magnification to which the target zoom magnification belongs, the multi-frame fusion mode, the single-frame super-resolution mode, the interpolation processing mode and their combinations can be flexibly selected to perform zoom processing over different focal segments, effectively reducing the loss of image quality in each focal segment.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a diagram of an application environment of an image processing method in one embodiment;
FIG. 2 is a flow diagram of a method of image processing in one embodiment;
FIG. 3 is a multi-segment zoom-in-zoom-out diagram of an embodiment;
FIG. 4 is a schematic diagram of an AI super-resolution neural network in one embodiment;
FIG. 5 is a flowchart of the steps for obtaining a third target image at a target zoom magnification in one embodiment;
FIG. 6 is a schematic diagram of generating an image at a zoom magnification of 4x in one embodiment;
FIG. 7 is a schematic diagram of bicubic interpolation in one embodiment;
FIG. 8 is a flowchart illustrating a process of zooming each frame of to-be-processed image in a multi-frame fusion manner according to an embodiment;
FIG. 9 is a schematic diagram of a process for determining a homography matrix in one embodiment;
FIG. 10 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
fig. 11 is a block diagram showing an internal configuration of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first target image may be referred to as a second target image, and similarly, a second target image may be referred to as a first target image, without departing from the scope of the present application. Both the first target image and the second target image are target images, but they are not the same target image.
Fig. 1 is a schematic diagram of an application environment of an image processing method in an embodiment. As shown in fig. 1, the application environment includes an electronic device 110 and a server 120. The electronic device 110 obtains the target zoom magnification and each frame of the image to be processed, and sends the target zoom magnification and each frame of the image to be processed to the server 120. In the case that the target zoom magnification is within the first preset zoom magnification range, the server 120 performs zoom processing on each frame of image to be processed in a multi-frame fusion mode to obtain a first target image at the target zoom magnification, and returns the first target image to the terminal 110. In the case that the target zoom magnification belongs to a second preset zoom magnification, the server 120 performs zoom processing on each frame of image to be processed in a multi-frame fusion mode and a single-frame super-resolution mode to obtain a second target image at the target zoom magnification, the second preset zoom magnification being an integer multiple of the upper limit value of the first preset zoom magnification range, and returns the second target image to the terminal 110. In the case that the target zoom magnification belongs to a third preset zoom magnification, the server 120 performs zoom processing on each frame of image to be processed in a multi-frame fusion mode, a single-frame super-resolution mode and an interpolation processing mode to obtain a third target image at the target zoom magnification, the third preset zoom magnification lying between two different second preset zoom magnifications, and returns the third target image to the terminal 110.
Wherein the terminal 110 communicates with the server 120 through a network. The terminal 110 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the server 120 may be implemented by a stand-alone server or a server cluster composed of a plurality of servers.
FIG. 2 is a flow diagram of a method of image processing in one embodiment. The image processing method in this embodiment is described by taking the electronic device in fig. 1 as an example. As shown in fig. 2, the image processing method includes:
step 202, acquiring a target zoom magnification.
The target zoom magnification is a zoom magnification corresponding to a target image to be obtained. The target zoom magnification may be a zoom magnification set by a user before and during shooting through the camera, or may be a zoom magnification set after acquiring an image, so as to subject the image to zoom processing to obtain a target image at the target zoom magnification.
Step 204, in the case that the target zoom magnification is within the first preset zoom magnification range, zooming each frame of image to be processed in a multi-frame fusion mode to obtain a first target image at the target zoom magnification.
The image to be processed may be any one of an RGB (Red, Green, Blue) image, a RAW image, a grayscale image, a depth image, an image corresponding to a Y component in a YUV image, and the like. The RAW image is RAW data obtained by converting a captured light source signal into a digital signal by an image sensor. "Y" in YUV images represents brightness (Luma) and gray scale value, and "U" and "V" represent Chrominance (Chroma) and saturation, which are used to describe the color and saturation of the image and to specify the color of the pixel.
The first preset zoom magnification range is a preset zoom magnification range, for example, the first preset zoom magnification range is 1x to 1.6x, that is, the focal length executed by the camera is 1x to 1.6 x. The multi-frame fusion method (motion-stacking) is a processing method for fusing at least two frames of images.
Specifically, the electronic device may obtain the image to be processed from a local device or other devices or a network, or the electronic device may obtain the image to be processed by shooting a scene with a camera. The image to be processed may also be a video frame in a video acquired from a local or other device or network, or a video frame in a video shot by a camera.
The electronic device may obtain a target zoom magnification, compare the target zoom magnification with a first preset zoom magnification range, to determine whether the target zoom magnification is within the first preset zoom magnification range. And under the condition that the target zooming magnification is within a first preset zooming magnification range, zooming each frame of image to be processed in a multi-frame fusion mode to obtain a first target image under the target zooming magnification. Further, the electronic device enlarges each frame of image to be processed to a target zoom ratio, and performs fusion processing on each frame of image enlarged to the target zoom ratio to obtain a first target image.
Step 206, in the case that the target zoom magnification belongs to a second preset zoom magnification, zooming each frame of image to be processed in a multi-frame fusion mode and a single-frame super-resolution mode to obtain a second target image at the target zoom magnification; the second preset zoom magnification is an integer multiple of the upper limit value of the first preset zoom magnification range.
And the second preset zooming magnification is the zooming magnification of integral multiple of the upper limit value of the first preset zooming magnification range. For example, the first preset zoom magnification range is 1x-1.6x, and the upper limit value is 1.6x, the second preset zoom magnification includes, but is not limited to, 1.6x, 3.2x, 4.8x, 6.4x, and the like. The single-frame super-resolution mode is a processing mode for performing super-resolution reconstruction on a single-frame image.
The electronic device may obtain a target zoom magnification, and compare the target zoom magnification with a second preset zoom magnification to determine whether the target zoom magnification belongs to the second preset zoom magnification. And under the condition that the target zooming magnification belongs to a second preset zooming magnification, zooming each frame of image to be processed in a multi-frame fusion mode. And performing super-resolution reconstruction on the processing result of the multi-frame fusion mode through a single-frame super-resolution mode to obtain a second target image under the target zoom magnification.
Step 208, in the case that the target zoom magnification belongs to a third preset zoom magnification, zooming each frame of image to be processed in a multi-frame fusion mode, a single-frame super-resolution mode and an interpolation processing mode to obtain a third target image at the target zoom magnification; the third preset zoom magnification is between two different second preset zoom magnifications.
The third preset zoom magnification is a zoom magnification between two different second preset zoom magnifications, for example, the two second preset zoom magnifications are 1.6x and 3.2x, and each zoom magnification between 1.6x and 3.2x is taken as the third preset zoom magnification, such as 1.7x, 1.8x, and the like.
The interpolation processing method refers to performing interpolation processing on a single frame image to obtain an image with higher resolution, and for example, the interpolation processing method may be Bicubic (Bicubic), Bilinear interpolation, or Lanczos, but is not limited thereto.
Specifically, the electronic device may acquire the target zoom magnification and compare the target zoom magnification with a third preset zoom magnification to determine whether the target zoom magnification belongs to the third preset zoom magnification. And under the condition that the target zooming magnification belongs to a third preset zooming magnification, zooming each frame of image to be processed in a multi-frame fusion mode. And performing super-resolution reconstruction on the processing result of the multi-frame fusion mode through a single-frame super-resolution mode to obtain a reconstruction result. And zooming the reconstruction result in an interpolation processing mode to obtain a third target image under the target zooming magnification.
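As a concrete illustration of the three branches described in steps 204-208, the following Python sketch routes a target zoom magnification to the corresponding combination of processing modes. It is only a minimal sketch under stated assumptions: the 1x-1.6x range, the tolerance and the helper name are illustrative and are not prescribed by the patent.

```python
# Minimal sketch (not from the patent): route a target zoom magnification to
# one of the three processing paths described in steps 204-208.
UPPER = 1.6  # assumed upper limit of the first preset zoom magnification range

def select_zoom_path(target_mag: float, eps: float = 1e-6) -> str:
    """Return which combination of processing modes handles target_mag."""
    if 1.0 <= target_mag <= UPPER + eps:
        # first preset zoom magnification range: multi-frame fusion only
        return "multi_frame_fusion"
    ratio = target_mag / UPPER
    if abs(ratio - round(ratio)) < eps:
        # integer multiple of the upper limit: fusion + single-frame super-resolution
        return "multi_frame_fusion + single_frame_super_resolution"
    # between two integer multiples: fusion + super-resolution + interpolation
    return "multi_frame_fusion + single_frame_super_resolution + interpolation"

# Example: 1.3x -> fusion only; 3.2x -> fusion + SR; 4.0x -> fusion + SR + interpolation
for mag in (1.3, 3.2, 4.0):
    print(mag, select_zoom_path(mag))
```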
Fig. 3 is a schematic diagram of multi-segment zooming for continuous zoom magnification in one embodiment. Taking zoom 2x-5x as an example, module 1 is configured to perform multi-frame fusion zoom processing: 8-10 frames of images to be processed are selected as input, and details of a high-resolution image are physically added through sub-pixel interpolation. Module 1 performs zoom processing for the focal segment 1x-1.6x; here the focal length refers to the zoom magnification.
Module 2 is configured to perform single-frame AI (Artificial Intelligence) super-resolution, and takes the 1.6x image output by the multi-frame fusion zoom processing as its input image. The AI super-resolution prevents the image quality from degrading significantly at different zoom magnifications. Module 2 performs zoom processing at focal lengths that are integer multiples of 1.6x, for example 1, 2 or 3 times 1.6x, that is, zoom magnifications of 1.6x/3.2x/4.8x.
Module 3 is used to execute Bicubic interpolation. Bicubic is used for continuous zooming between two of the zoom magnifications handled by module 2. Taking a zoom magnification of 2x as an example: the enlarged results at 1.6x and 3.2x are obtained through multi-frame fusion and single-frame AI super-resolution, the 1.6x result is up-sampled by Bicubic and the 3.2x result is down-sampled by Bicubic, and the 2x zoom result is obtained by weighting and fusing the up-sampled result and the down-sampled result, as illustrated in the sketch below.
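The following Python sketch illustrates this blending step for an intermediate magnification such as 2x. It assumes OpenCV-style images and bicubic resizing; the function and variable names are illustrative only, not part of the patent.

```python
# Minimal sketch (assumptions: OpenCV images, bicubic resize): blend the two
# nearest module-2 outputs (e.g. 1.6x and 3.2x) to reach an intermediate zoom
# magnification such as 2x, as described above.
import cv2

def blend_intermediate_zoom(img_low, mag_low, img_high, mag_high, target_mag, out_size):
    """img_low / img_high: results at the two adjacent magnifications; out_size: (width, height)."""
    up = cv2.resize(img_low, out_size, interpolation=cv2.INTER_CUBIC)    # up-sample the 1.6x result
    down = cv2.resize(img_high, out_size, interpolation=cv2.INTER_CUBIC) # down-sample the 3.2x result
    # the result whose magnification is closer to the target gets the larger weight
    w_low = (mag_high - target_mag) / (mag_high - mag_low)
    w_high = (target_mag - mag_low) / (mag_high - mag_low)
    return cv2.addWeighted(up, w_low, down, w_high, 0)
```

For 2x between 1.6x and 3.2x this gives w_low = (3.2 - 2)/(3.2 - 1.6) = 0.75 and w_high = 0.25, so the up-sampled 1.6x result dominates the blend.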
In this embodiment, the first preset zoom magnification range, the second preset zoom magnification and the third preset zoom magnification cover all zoom magnifications. When the target zoom magnification is within the first preset zoom magnification range, each frame of image to be processed is zoomed in a multi-frame fusion manner, so that the first target image at the target zoom magnification can be generated quickly. When the target zoom magnification belongs to a second preset zoom magnification, zoom processing is performed on each frame of image to be processed by combining the multi-frame fusion mode with a single-frame super-resolution mode, which can add image detail and reduce the loss of image quality, thereby obtaining a second target image at the target zoom magnification; the second preset zoom magnification is an integer multiple of the upper limit value of the first preset zoom magnification range. When the target zoom magnification belongs to a third preset zoom magnification, zoom processing is performed on each frame of image to be processed through the multi-frame fusion mode, the single-frame super-resolution mode and an interpolation processing mode to obtain a third target image at the target zoom magnification, where the third preset zoom magnification lies between two different second preset zoom magnifications. According to the preset zoom magnification to which the target zoom magnification belongs, the multi-frame fusion mode, the single-frame super-resolution mode, the interpolation processing mode and their combinations can be flexibly selected to perform zoom processing over different focal segments, effectively reducing the loss of image quality in each focal segment.
In one embodiment, in the case that the target zoom magnification belongs to a second preset zoom magnification, performing zoom processing on each frame of the image to be processed in a multi-frame fusion mode and a single-frame super-resolution mode to obtain a second target image at the target zoom magnification includes:
in the case that the target zoom magnification belongs to the second preset zoom magnification, performing zoom processing on each frame of image to be processed in a multi-frame fusion mode to obtain a first-magnification image, where the first-magnification image is an image corresponding to the upper limit magnification of the first preset zoom magnification range; and performing zoom processing on the first-magnification image in a single-frame super-resolution mode to obtain a second target image at the target zoom magnification.
Specifically, the electronic device may acquire a target zoom magnification and compare the target zoom magnification with a second preset zoom magnification to determine whether the target zoom magnification belongs to the second preset zoom magnification. And determining the upper limit magnification in the first preset zooming magnification range under the condition that the target zooming magnification belongs to the second preset zooming magnification.
A reference frame image is determined from the frames of the image to be processed according to the image sharpness of each frame, the reference frame image is enlarged to the upper limit magnification of the first preset zoom magnification range, and the feature points of the reference frame image at the upper limit magnification and the feature points of each frame of image to be processed are acquired. Matching point pairs between the reference frame image at the upper limit magnification and each frame of image to be processed are determined from these feature points. Homography matrices between the reference frame image at the upper limit magnification and each frame of image to be processed are then determined from the matching point pairs. Each frame of image to be processed is registered based on its homography matrix to obtain a registered image at the upper limit magnification. The registered images and the reference frame image at the upper limit magnification are fused to obtain the first-magnification image at the upper limit magnification, as sketched below.
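The following Python sketch shows one possible implementation of this registration-and-fusion step. The specific choices here (ORB feature points, RANSAC homography estimation, bicubic resizing and simple averaging as the fusion step) are assumptions for illustration; the patent does not name a particular feature detector or fusion operator.

```python
# Minimal sketch of the multi-frame registration and fusion step described above.
# ORB features, RANSAC homography and averaging are illustrative assumptions.
import cv2
import numpy as np

def _gray(img):
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img

def fuse_frames(frames, ref_idx, upper_mag):
    ref = frames[ref_idx]
    h, w = ref.shape[:2]
    size = (int(w * upper_mag), int(h * upper_mag))
    ref_up = cv2.resize(ref, size, interpolation=cv2.INTER_CUBIC)  # reference at the upper limit magnification

    orb = cv2.ORB_create(1000)
    kp_ref, des_ref = orb.detectAndCompute(_gray(ref_up), None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    acc = ref_up.astype(np.float32)
    count = 1
    for i, frame in enumerate(frames):
        if i == ref_idx:
            continue
        kp, des = orb.detectAndCompute(_gray(frame), None)
        if des is None:
            continue
        matches = matcher.match(des_ref, des)                      # matching point pairs
        if len(matches) < 4:
            continue
        src = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)       # homography to the reference
        if H is None:
            continue
        acc += cv2.warpPerspective(frame, H, size).astype(np.float32)  # registered image
        count += 1
    return (acc / count).astype(np.uint8)                          # first-magnification image
```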
The electronic device performs zoom processing on the first-magnification image in a single-frame super-resolution mode to obtain a second target image at the target zoom magnification. Further, the electronic device may input the first-magnification image into an AI super-resolution neural network and output the second target image at the target zoom magnification through the AI super-resolution neural network.
In this embodiment, when the target zoom magnification belongs to the second preset zoom magnification, zoom processing is performed on each frame of image to be processed in a multi-frame fusion manner, so that the first-magnification image can be obtained quickly. Zoom processing is then performed on the first-magnification image in a single-frame super-resolution mode, so that when the multi-frame fusion mode alone cannot reach the target zoom magnification, super-resolution reconstruction can be performed on the result of the multi-frame fusion, and more zoom magnifications can be supported. In addition, since the second preset zoom magnification is an integer multiple of the upper limit magnification of the first preset zoom magnification range, the continuity of zooming and the image quality of high-magnification zooming can be ensured.
In one embodiment, zooming the first-magnification image in a single-frame super-resolution manner to obtain a second target image at the target zoom magnification includes:
performing feature extraction on the first-magnification image to obtain each layer of feature map; and enlarging each layer of feature map to the target zoom magnification, and fusing the enlarged feature maps to obtain a second target image at the target zoom magnification.
Specifically, the electronic device may input the first-magnification image into an AI super-resolution neural network and perform feature extraction on the first-magnification image through the AI super-resolution neural network to obtain each layer of feature map. Each layer of feature map is enlarged to the target zoom magnification to obtain the feature maps at the target zoom magnification, and these feature maps are fused to obtain the second target image at the target zoom magnification.
In one embodiment, the AI super-resolution neural network may include a feature extraction layer, a feature conversion layer, a magnification layer and a fusion layer. The feature extraction layer extracts low-level features from the first-magnification image to obtain each layer of shallow feature map. The feature conversion layer nonlinearly combines the shallow feature maps to obtain the deep feature maps. The magnification layer enlarges each deep feature map to the target size, i.e. the size at the target zoom magnification; in other words, the magnification layer enlarges each deep feature map to the target zoom magnification to obtain the deep feature maps at the target zoom magnification. The fusion layer fuses the deep feature maps into a super-resolution image and outputs it, so that the second target image at the target zoom magnification can be obtained.
In this embodiment, the features of the first-magnification image are extracted to obtain key features of the first-magnification image, each layer of feature map is enlarged to a target zoom magnification, and the enlarged feature maps of each layer are subjected to fusion processing, so that the key features of different layers can be fused, and a second target image at the target zoom magnification can be reconstructed. In addition, the second target image is fused with more detail features, and the image quality is better.
Fig. 4 is a schematic structural diagram of an AI super-resolution neural network according to an embodiment.
The whole network structure comprises four parts: a Feature extraction layer, a Feature conversion layer, a magnification layer and a fusion layer.
The feature extraction layer extracts low-level image features, such as gradients, brightness and magnitude relations, using convolution kernels with a large receptive field, and generally includes two layers, which may be referred to as Feature Extract layers A and B. In the feature extraction layer, a relatively large convolution kernel is used; the depth of the output feature map (Feature Map) is unchanged, the size is unchanged (×1), and the number of channels is increased. In the present embodiment, the parameters of the convolution kernels used can be found in Table 1.
TABLE 1 Feature extraction layer network architecture parameters
(Table 1 is provided as an image in the original publication; its parameter values are not reproduced here.)
The feature conversion layer (Feature Transform Layer) nonlinearly combines the low-level features extracted by the extraction layer to obtain high-level features (such as structure and shape). The more layers there are, the higher the degree of nonlinearity of the features and the more complex the image structures that can be expressed, which benefits the realism of the image reconstruction. This part usually uses convolutions with a small convolution kernel, unchanged depth and size (×1) and an unchanged number of channels, repeated M times, where M is 16. The parameters of the convolution kernels used by the feature conversion layer can be found in Table 2.
TABLE 2 Feature translation layer network architecture parameters
(Table 2 is provided as an image in the original publication; its parameter values are not reproduced here.)
The magnification layer in this embodiment is not a convolution layer but Bicubic upscaling: the feature maps obtained from the feature conversion layer are enlarged to the desired scale, for example up to ×2, with c = 64, d = 8 and w = h. The fusion layer (Fusion Layer) immediately follows; its purpose is to fuse the features with depth 8 and 64 channels into one SR (Super Resolution) image output. The parameters of these two layers can be found in Table 3.
TABLE 3 network Structure parameters for the amplification layer and the fusion layer
(Table 3 is provided as an image in the original publication; its parameter values are not reproduced here.)
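As a rough illustration of the four-part structure described above (feature extraction, feature transformation repeated M = 16 times, Bicubic magnification, fusion), the following PyTorch sketch can be considered. Since the parameter tables are only available as images, the kernel sizes and channel counts below are assumptions rather than the patent's actual values, and the depth dimension of the feature maps is simplified away.

```python
# Minimal PyTorch sketch of a feature-extract / feature-transform / bicubic-magnify /
# fuse network; kernel sizes and channel counts are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleFrameSR(nn.Module):
    def __init__(self, in_ch=1, feat_ch=64, num_transform=16):
        super().__init__()
        # feature extraction: two larger-kernel convolutions, spatial size unchanged, channels increased
        self.extract = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, kernel_size=5, padding=2), nn.ReLU(inplace=True),
        )
        # feature transform: M small-kernel convolutions, size and channel count unchanged
        transform = []
        for _ in range(num_transform):
            transform += [nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        self.transform = nn.Sequential(*transform)
        # fusion: merge the magnified feature maps into one SR image
        self.fusion = nn.Conv2d(feat_ch, in_ch, kernel_size=3, padding=1)

    def forward(self, x, scale):
        feat = self.transform(self.extract(x))
        # magnification layer: Bicubic upscaling of the feature maps to the target scale
        feat = F.interpolate(feat, scale_factor=scale, mode="bicubic", align_corners=False)
        return self.fusion(feat)

# Example: enlarge a 1.6x first-magnification image by a further factor of 2 (i.e. towards 3.2x)
model = SingleFrameSR()
sr_image = model(torch.randn(1, 1, 64, 64), scale=2)
print(sr_image.shape)  # torch.Size([1, 1, 128, 128])
```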
In one embodiment, as shown in fig. 5, in the case that the target zoom magnification belongs to a third preset zoom magnification, performing zoom processing on each frame of image to be processed in a multi-frame fusion manner, a single-frame super-resolution manner and an interpolation processing manner to obtain a third target image at the target zoom magnification includes:
step 502, under the condition that the target zooming magnification belongs to a third preset zooming magnification, zooming each frame of image to be processed in a multi-frame fusion mode to obtain a first magnification image; the first magnification image is an image corresponding to the upper limit magnification of a first preset zooming magnification range.
Specifically, the electronic device may acquire the target zoom magnification and compare the target zoom magnification with a third preset zoom magnification to determine whether the target zoom magnification belongs to the third preset zoom magnification. And determining the upper limit magnification in the first preset zooming magnification range under the condition that the target zooming magnification belongs to the third preset zooming magnification.
A reference frame image is determined from the frames of the image to be processed according to the image sharpness of each frame, the reference frame image is enlarged to the upper limit magnification of the first preset zoom magnification range, and the feature points of the reference frame image at the upper limit magnification and the feature points of each frame of image to be processed are acquired. Matching point pairs between the reference frame image at the upper limit magnification and each frame of image to be processed are determined from these feature points, and homography matrices between the reference frame image at the upper limit magnification and each frame of image to be processed are determined from the matching point pairs. Each frame of image to be processed is registered based on its homography matrix to obtain a registered image at the upper limit magnification, and the registered images are fused with the reference frame image at the upper limit magnification to obtain the first-magnification image at the upper limit magnification.
Step 504, zooming the first-magnification image in a single-frame super-resolution mode to obtain at least two second-magnification images; the at least two second-magnification images include images respectively corresponding to two second preset zoom magnifications adjacent to the target zoom magnification.
Specifically, the electronic device determines at least two second preset zoom magnifications adjacent to the target zoom magnification. For example, the first preset zoom magnification range is 1x-1.6x, and the second preset zoom magnifications are 1.6x, 3.2x, 4.8x and 6.4x. If the target zoom magnification is 3x, it may be determined that the two second preset zoom magnifications adjacent to 3x are 3.2x and 4.8x, respectively.
Zoom processing is performed on the first-magnification image in a single-frame super-resolution mode to obtain a second-magnification image at each of the at least two second preset zoom magnifications. For example, the first-magnification image at a zoom magnification of 1.6x is zoomed by the single-frame super-resolution method to obtain a second-magnification image at a zoom magnification of 3.2x and a second-magnification image at a zoom magnification of 4.8x.
In one embodiment, the first-magnification image is input into an AI super-resolution neural network. Low-level features of the first-magnification image are extracted by the feature extraction layer of the AI super-resolution neural network to obtain each layer of shallow feature map. The feature conversion layer performs nonlinear combination on the shallow feature maps to obtain the deep feature maps. The magnification layer enlarges each deep feature map to the two second preset zoom magnifications adjacent to the target zoom magnification, obtaining deep feature maps at the two second preset zoom magnifications. The fusion layer fuses the deep feature maps at the same second preset zoom magnification into a super-resolution image and outputs it, so that second-magnification images respectively corresponding to the two second preset zoom magnifications can be obtained.
Step 506, respectively performing interpolation processing on the second-magnification images, and fusing the interpolated images to obtain a third target image at the target zoom magnification.
Specifically, after super-resolution reconstruction is performed in the single-frame super-resolution mode, each second-magnification image is obtained. Interpolation processing is performed on each second-magnification image to obtain the interpolated images at the target zoom magnification, and the interpolated images are fused to obtain the third target image at the target zoom magnification.
Further, through interpolation processing, a second-magnification image whose second preset zoom magnification is larger than the target zoom magnification is down-sampled to the target zoom magnification to obtain an interpolated image at the target zoom magnification, and a second-magnification image whose second preset zoom magnification is smaller than the target zoom magnification is up-sampled to the target zoom magnification to obtain an interpolated image at the target zoom magnification. The interpolated images are then fused to obtain the third target image at the target zoom magnification.
In this embodiment, zoom processing is performed on each frame of image to be processed in a multi-frame fusion manner, so that the first-magnification image can be obtained quickly. Zoom processing is performed on the first-magnification image in a single-frame super-resolution mode, so that when the multi-frame fusion mode alone cannot reach the target zoom magnification, super-resolution reconstruction can be performed on the result of the multi-frame fusion to obtain the second-magnification images at integer multiples of the upper limit magnification of the first preset zoom magnification range, ensuring the continuity of zooming and the image quality of high-magnification zooming. By interpolating each second-magnification image and fusing the interpolated images, the third target image at the target zoom magnification is obtained, which reduces the loss of image quality and yields a clearer, higher-resolution image.
In one embodiment, the interpolating the second magnification images, and fusing the interpolated images to obtain a third target image at the target zoom magnification, respectively, includes:
respectively carrying out interpolation processing on each second magnification image to obtain each interpolation image under the target zoom magnification; determining weights respectively corresponding to the interpolation images according to the target zoom multiplying power and second preset zoom multiplying powers respectively corresponding to the second multiplying power images; and performing fusion processing on the interpolation images based on the weights corresponding to the interpolation images to obtain a third target image under the target zoom magnification.
Specifically, a second magnification image with a second preset zoom magnification larger than the target zoom magnification in each second magnification image is determined, and downsampling interpolation processing is performed on the second magnification image to obtain an interpolation image at the target zoom magnification. And determining a second magnification image with a second preset zoom magnification smaller than the target zoom magnification in each second magnification image, and performing up-sampling interpolation processing on the second magnification image to obtain an interpolation image under the target zoom magnification.
And calculating the weight corresponding to each interpolation image according to the target zoom multiplying power and the second preset zoom multiplying power corresponding to each second multiplying power image. Further, a first difference between a maximum second preset zoom magnification and a minimum second preset zoom magnification in each second magnification image is calculated, second differences between the second preset zoom magnification and the target zoom magnification corresponding to each second magnification image are calculated respectively, and a ratio of each second difference to the first difference is used as a weight corresponding to the corresponding interpolation image.
And performing fusion processing on the interpolation images based on the weights corresponding to the interpolation images to obtain a third target image under the target zoom magnification. Further, multiplying each weight by each pixel value in the corresponding interpolation image, averaging the pixel values of the matched feature points in each multiplied image, and taking the average value as the pixel value of the corresponding feature point in the third target image. And calculating the pixel value of each feature point in the third target image to obtain the third target image under the target zoom magnification.
As shown in fig. 6, the first preset zoom magnification range is 1x-1.6x, and the second preset zoom magnifications are 1.6x, 3.2x, 4.8x and 6.4x. The third preset zoom magnification is each zoom magnification between two second preset zoom magnifications. The target zoom magnification is 4x, which lies between the second preset zoom magnification 3.2x and the second preset zoom magnification 4.8x, so the target zoom magnification belongs to the third preset zoom magnification. The electronic device can fuse 8-10 frames of images to be processed in a multi-frame fusion mode to obtain the first-magnification image at 1.6x.
Zoom processing is performed on the 1.6x first-magnification image by single-frame super-resolution to obtain a second-magnification image at 3.2x and a second-magnification image at 4.8x. The 3.2x second-magnification image is interpolated and enlarged to 4x to obtain a 4x interpolated image A, and the 4.8x second-magnification image is interpolated and reduced to 4x to obtain a 4x interpolated image B.
(4.8-4)/(4.8-3.2) is taken as the interpolation weight of interpolated image A, which is obtained from the 3.2x second-magnification image, and (4-3.2)/(4.8-3.2) is taken as the interpolation weight of interpolated image B, which is obtained from the 4.8x second-magnification image.
According to interpolated image A, interpolated image B and their corresponding interpolation weights, the two interpolated images are fused to obtain the final image, i.e. the target image at 4x.
In this embodiment, interpolation processing is performed on each second magnification image to obtain each interpolated image at the target zoom magnification, weights corresponding to each interpolated image are determined according to the target zoom magnification and a second preset zoom magnification corresponding to each second magnification image, and fusion processing is performed on each interpolated image based on the weight corresponding to each interpolated image, so that the image can obtain more detailed features, and the obtained third target image at the target zoom magnification is higher in image quality, thereby increasing the image resolution and reducing the loss of image quality.
In one embodiment, the interpolation processing is performed on each second magnification image to obtain each interpolated image at the target zoom magnification, and the interpolation processing includes:
for each second-magnification image in the second-magnification images, determining each to-be-interpolated point of the second-magnification image under the target zoom magnification; and for each point to be interpolated, determining the position of each sampling point within a preset range of the point to be interpolated, determining the corresponding pixel value of the point to be interpolated under the target zoom magnification according to the position of each sampling point and the corresponding weight, and obtaining an interpolated image corresponding to the second magnification image after obtaining the corresponding pixel value of each point to be interpolated in the second magnification image.
Specifically, for each second magnification image in the second magnification images, each to-be-interpolated point of the second magnification image at the target zoom magnification is determined according to the second preset zoom magnification and the target zoom magnification corresponding to the second magnification image. And corresponding to each point to be interpolated, determining the position of each sampling point in the preset range of the point to be interpolated in the second magnification image, and determining the weight corresponding to each sampling point. The sampling points may be feature points within a preset range, or may be sampled from feature points within a preset range.
The closer the sampling point is to the interpolation point, the higher the weight, and the farther the distance is, the lower the weight.
And determining the middle interpolation point corresponding to each row or the middle interpolation point corresponding to each column in the preset range according to the position and the corresponding weight of each sampling point. And determining the corresponding pixel value of the point to be interpolated under the target zoom magnification according to the coordinate of the middle interpolation point and the corresponding interpolation weight. And for each point to be interpolated in the second magnification image, according to the same processing mode, obtaining a corresponding pixel value of each point to be interpolated under the target zoom magnification, so as to obtain an interpolated image corresponding to the second magnification image. According to the same processing mode, interpolation images corresponding to the second magnification images can be obtained.
In one embodiment, determining the corresponding pixel value of the point to be interpolated at the target zoom magnification according to the position and the corresponding weight of each sampling point includes: determining a middle interpolation point corresponding to each column in a preset range according to the position and the corresponding weight of each sampling point; determining Euclidean distance between the intermediate interpolation point and the point to be interpolated aiming at each intermediate interpolation point, and taking the Euclidean distance as the interpolation weight corresponding to the intermediate interpolation point; and determining the corresponding pixel value of the point to be interpolated under the target zoom magnification based on each intermediate interpolation point and the corresponding interpolation weight.
In this embodiment, for each second magnification image in each second magnification image, each point to be interpolated of the second magnification image at the target zoom magnification is determined, for each point to be interpolated, the position of each sampling point within the preset range of the point to be interpolated is determined, and according to the position of each sampling point and the corresponding weight, the corresponding pixel value of the point to be interpolated at the target zoom magnification can be accurately determined until the corresponding pixel value of each point to be interpolated in the second magnification image is obtained, and then the corresponding interpolated image of the second magnification image is accurately obtained.
In one embodiment, determining the corresponding pixel value of the point to be interpolated at the target zoom magnification according to the position and the corresponding weight of each sampling point includes:
determining a middle interpolation point corresponding to each row in a preset range according to the position and the corresponding weight of each sampling point; determining Euclidean distance between the intermediate interpolation point and the point to be interpolated aiming at each intermediate interpolation point, and taking the Euclidean distance as the interpolation weight corresponding to the intermediate interpolation point; and determining the corresponding pixel value of the point to be interpolated under the target zoom magnification based on each intermediate interpolation point and the corresponding interpolation weight.
Specifically, the electronic device determines the position of the middle interpolation point corresponding to each row according to the position of the sampling point of each row in the corresponding second-magnification image within the preset range and the weight corresponding to each sampling point. And calculating the Euclidean distance between the intermediate interpolation point and the point to be interpolated for each intermediate interpolation point, and taking the Euclidean distance as the interpolation weight corresponding to the intermediate interpolation point. And calculating the product of each intermediate interpolation point and the corresponding interpolation weight, calculating the sum of each product, calculating the sum of each interpolation weight, and taking the ratio of the sum of each product to the sum of the interpolation weights as the corresponding pixel value of the point to be interpolated under the target zoom magnification.
Fig. 7 is a schematic diagram of interpolation of bicubic interpolation in one embodiment. Bicubic interpolation is an interpolation mode for determining the weight of a sampling point to be interpolated according to the distance between the point to be interpolated and the sampling point.
As shown in FIG. 7, the point to be interpolated is P, and the 16 nearest sampling points Z11~Z44 around P are selected. First, [Z11, Z12, Z13, Z14] are used to interpolate an intermediate interpolation point R1; similarly, the intermediate interpolation points R2, R3 and R4 are interpolated. Then [R1, R2, R3, R4] are used to interpolate the pixel value of P. As shown in the figure, the positions corresponding to Z11, Z12, Z13 and Z14 are -1, 0, 1 and 2, respectively. The pixel value of P may also be interpolated by first processing the columns and then the rows; the processing is similar and is not described again here.
Taking the pixel value of point P as an example, the interpolation process calculates it by the following formula.
P = [ Σ_{t=1}^{4} W(d(P, R_t)) · R_t ] / [ Σ_{t=1}^{4} W(d(P, R_t)) ]
where d(P, R_t) denotes the Euclidean distance from P to R_t, t ∈ {1, 2, 3, 4}, i.e., d(P, R_t) = x. W(x) denotes the interpolation weight calculated from the Euclidean distance; it is computed as shown in the following formula, in which a takes the value -0.5.
W(x) = (a + 2)|x|^3 − (a + 3)|x|^2 + 1,  for |x| ≤ 1
W(x) = a|x|^3 − 5a|x|^2 + 8a|x| − 4a,  for 1 < |x| < 2
W(x) = 0,  otherwise
where x is the Euclidean distance between the point to be interpolated and the intermediate interpolation point, for example the Euclidean distance between the point to be interpolated P and the intermediate interpolation point R1.
In this embodiment, the intermediate interpolation point corresponding to each row in the preset range is determined according to the position and corresponding weight of each sampling point. For each intermediate interpolation point, the Euclidean distance between the intermediate interpolation point and the point to be interpolated is determined and taken as the interpolation weight corresponding to that intermediate interpolation point. The interpolation weight represents the importance of the intermediate interpolation point, so the pixel value corresponding to the point to be interpolated at the target zoom magnification can be determined more accurately based on each intermediate interpolation point and its interpolation weight.
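As a concrete illustration of the row-then-column procedure above, the following Python sketch interpolates a single point with the bicubic kernel using a = -0.5. It is a minimal example written for this description rather than code from the patent; the function and variable names are illustrative, and the weights are normalized by their sum as in the ratio described above.

```python
import numpy as np

def bicubic_weight(x, a=-0.5):
    """Bicubic kernel W(x); x is a Euclidean distance along one axis."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def bicubic_interpolate(img, py, px):
    """Pixel value of the point to be interpolated at the non-integer position (py, px)."""
    y0, x0 = int(np.floor(py)), int(np.floor(px))
    # 4 x 4 neighbourhood of sampling points: rows y0-1..y0+2, columns x0-1..x0+2
    intermediate = []
    for dy in (-1, 0, 1, 2):
        w = np.array([bicubic_weight(px - (x0 + dx)) for dx in (-1, 0, 1, 2)])
        z = np.array([img[y0 + dy, x0 + dx] for dx in (-1, 0, 1, 2)], dtype=float)
        intermediate.append(np.dot(w, z) / w.sum())   # intermediate interpolation point R_t of this row
    # combine R_1..R_4 along the column direction, weighted by their distance to P
    wy = np.array([bicubic_weight(py - (y0 + dy)) for dy in (-1, 0, 1, 2)])
    return float(np.dot(wy, intermediate) / wy.sum())

img = np.arange(64, dtype=float).reshape(8, 8)
print(bicubic_interpolate(img, 3.4, 2.7))
```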
In one embodiment, in a case that the target zoom magnification is within a first preset zoom magnification range, performing zoom processing on each frame of image to be processed in a multi-frame fusion manner to obtain a first target image at the target zoom magnification includes:
determining a reference frame image from each frame of image to be processed according to the image sharpness corresponding to each frame of image to be processed respectively under the condition that the target zoom magnification is within a first preset zoom magnification range; amplifying the reference frame image to a target zooming magnification, and performing registration processing on each frame of image to be processed based on the reference frame image under the target zooming magnification to obtain each registration image; and carrying out fusion processing on each registration image and the reference frame image at the target zoom magnification to obtain a first target image at the target zoom magnification.
The sharpness of an image, also called definition or acutance, is an index reflecting the clarity of the image plane and the crispness of image edges.
Specifically, the electronic device acquires a target zoom magnification and compares the target zoom magnification with a first preset zoom magnification range. In a case that the target zoom magnification is within the first preset zoom magnification range, the electronic device determines the image sharpness corresponding to each frame of image to be processed, and determines a reference frame image from the frames of images to be processed according to the image sharpness.
In one embodiment, before determining the reference frame image from the frames of images to be processed according to the image sharpness corresponding to each frame, the method further includes: determining the local brightness of each frame of image to be processed, and determining the image sharpness of the image to be processed according to the local brightness. Further, the electronic device may divide the image to be processed into a plurality of windows, determine a maximum luminance value and a minimum luminance value in each window, and determine the local brightness of the image to be processed based on the maximum luminance value and the minimum luminance value in each window.
For example, the image Sharpness of the image to be processed may be calculated by the following formula:
Sharpness = (1 / (k1 · k2)) · Σ_{k=1}^{k1} Σ_{l=1}^{k2} (I_max,k,l − I_min,k,l) / (I_max,k,l + I_min,k,l)
where the whole image is divided into k1 × k2 windows, and I_max,k,l and I_min,k,l respectively denote the maximum and minimum luminance values in the (k, l)-th window.
After determining the image sharpness of each frame of image to be processed, the image to be processed with the largest image sharpness is selected as the reference frame image. The electronic device then enlarges the reference frame image to the target zoom magnification to obtain the reference frame image at the target zoom magnification. For example, a 1.2x reference frame image is enlarged to 1.6x, resulting in a 1.6x reference frame image.
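The following Python sketch shows one way to implement this window-based selection. It assumes the contrast-style combination of per-window values written above, which is an interpretation of the formula rather than a verbatim reproduction of it; the function name and window counts are illustrative.

```python
import numpy as np

def image_sharpness(gray, k1=8, k2=8, eps=1e-6):
    """Sharpness from per-window max/min luminance (assumed contrast-based form)."""
    h, w = gray.shape
    win_h, win_w = h // k1, w // k2
    contrasts = []
    for k in range(k1):
        for l in range(k2):
            win = gray[k * win_h:(k + 1) * win_h, l * win_w:(l + 1) * win_w]
            i_max, i_min = float(win.max()), float(win.min())
            contrasts.append((i_max - i_min) / (i_max + i_min + eps))
    return float(np.mean(contrasts))

# The frame with the highest sharpness is chosen as the reference frame.
frames = [np.random.rand(240, 320) for _ in range(8)]
reference = frames[int(np.argmax([image_sharpness(f) for f in frames]))]
```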
And for each frame of image to be processed, performing registration processing on the image to be processed based on the reference frame image under the target zoom magnification to obtain registration images corresponding to each frame of image to be processed respectively. And carrying out fusion processing on each registration image and the reference frame image at the target zoom magnification to obtain a first target image at the target zoom magnification.
Further, the electronic device determines the feature points matched between each registered image and the reference frame image at the target zoom magnification, calculates the pixel mean value of each set of matched feature points, and uses the pixel mean value as the pixel value of the corresponding feature point in the first target image at the target zoom magnification; once the pixel values of all such feature points have been obtained, the first target image is obtained.
In this embodiment, each frame of image to be processed is registered to the reference frame image at the target zoom magnification, so that each frame of image to be processed and the reference frame image at the target zoom magnification lie in the same image space and all parameters other than resolution remain consistent between them. High-resolution information at the sub-pixel level can therefore be obtained after the registered images and the reference frame image are fused, and the generated first target image is clearer.
In one embodiment, the registration processing of the images to be processed may include correspondence of image content, alignment of the color space, and the like. The alignment of image content may use detection and matching of image feature points followed by calculation of a homography matrix, or may use matching of image blocks. To address the mismatch of brightness and color space between an image to be processed and the reference frame image at the target zoom magnification, several photos of a standard color card may be introduced during shooting; the difference between the image to be processed and the reference frame image at the target zoom magnification is determined through the standard color card, an alignment model of the color space and brightness is modeled and analyzed, and operations such as histogram matching are performed to align the brightness and the color space.
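As a generic illustration of the histogram-matching step mentioned above (not the patent's color-card calibration), the sketch below matches the gray-level distribution of one frame to the reference using cumulative histograms; the names are illustrative.

```python
import numpy as np

def match_histogram(src, ref):
    """Map the gray levels of `src` so that its histogram roughly matches `ref`
    (both single-channel uint8 images)."""
    src_hist = np.bincount(src.ravel(), minlength=256).astype(float)
    ref_hist = np.bincount(ref.ravel(), minlength=256).astype(float)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    # for each source gray level, pick the reference level with the closest CDF value
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[src]

src = (np.random.rand(120, 160) * 200).astype(np.uint8)
ref = (np.random.rand(120, 160) * 255).astype(np.uint8)
aligned = match_histogram(src, ref)
```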
Fig. 8 is a schematic flow chart illustrating a process of performing zoom processing on each frame of to-be-processed image in a multi-frame fusion manner in one embodiment.
A plurality of frames of YUV images are acquired, for example 8 to 10 frames. A reference frame image is determined by performing sharpness estimation on each frame of YUV image. The reference frame is enlarged to the target zoom magnification, and the YUV images of the other frames are registered to the enlarged reference frame image to obtain the registered images. Multi-frame fusion averaging is performed on the registered images and the enlarged reference frame image to obtain the multi-frame fusion result, that is, the first target image at the target zoom magnification.
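An end-to-end sketch of this flow is given below. It uses OpenCV building blocks as stand-ins for the steps described above: Laplacian variance in place of the window-based sharpness metric, SIFT feature matching with a RANSAC homography for registration (SIFT requires opencv-python 4.4 or later), and a simple mean for the fusion. The zoom factor and match count are arbitrary illustration values.

```python
import cv2
import numpy as np

def multi_frame_zoom(frames_y, zoom=1.6):
    """frames_y: list of single-channel uint8 Y images of the same size."""
    # 1. pick the sharpest frame as the reference (Laplacian variance as a stand-in metric)
    ref = frames_y[int(np.argmax([cv2.Laplacian(f, cv2.CV_64F).var() for f in frames_y]))]
    h, w = ref.shape
    ref_up = cv2.resize(ref, (int(w * zoom), int(h * zoom)), interpolation=cv2.INTER_CUBIC)

    # 2. register every other frame to the enlarged reference
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(ref_up, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    fused = [ref_up.astype(np.float64)]
    for f in frames_y:
        if f is ref:
            continue
        kp_f, des_f = sift.detectAndCompute(f, None)
        matches = sorted(matcher.match(des_f, des_r), key=lambda m: m.distance)[:200]
        src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        fused.append(cv2.warpPerspective(f, H, (ref_up.shape[1], ref_up.shape[0])).astype(np.float64))

    # 3. multi-frame fusion averaging
    return np.mean(fused, axis=0).astype(np.uint8)
```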
In one embodiment, performing registration processing on each frame of image to be processed based on a reference frame image at a target zoom magnification to obtain each registration image, including:
acquiring characteristic points of a reference frame image and characteristic points of each frame of image to be processed under the target zoom magnification; respectively determining matching point pairs between the reference frame image and each frame of image to be processed under the target zoom magnification according to the characteristic points of the reference frame image and the characteristic points of each frame of image to be processed under the target zoom magnification; determining homography matrixes between the reference frame images and the frames of images to be processed under the target zooming magnification according to the matching point pairs; and performing registration processing on the to-be-processed images of the frames based on the homography matrix to respectively obtain the registration images of the target zoom magnification.
A feature point refers to a point where the image gray value changes drastically, or a point with large curvature on an image edge (i.e., the intersection of two edges). Examples of feature points include eyes, the nose tip, mouth corners, moles and the center of an object, but the feature points are not limited thereto.
The electronic device detects the gray value of each pixel in the reference frame image at the target zoom magnification; when the difference between the gray values of adjacent pixels is greater than a threshold, the region where those adjacent pixels are located may be taken as a feature point. Likewise, the gray value of each pixel in each frame of image to be processed is detected, and when the difference between the gray values of adjacent pixels is greater than the threshold, the region where those adjacent pixels are located is taken as a feature point.
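Read literally, this gray-difference criterion can be prototyped as below; the threshold value and the function name are arbitrary choices for illustration, not values from the patent.

```python
import numpy as np

def candidate_feature_mask(gray, threshold=30):
    """Mark pixels whose gray value differs from a horizontal or vertical neighbour
    by more than `threshold`."""
    g = gray.astype(np.int16)
    dx = np.abs(np.diff(g, axis=1)) > threshold   # horizontal neighbour differences
    dy = np.abs(np.diff(g, axis=0)) > threshold   # vertical neighbour differences
    mask = np.zeros(gray.shape, dtype=bool)
    mask[:, :-1] |= dx
    mask[:, 1:] |= dx
    mask[:-1, :] |= dy
    mask[1:, :] |= dy
    return mask

gray = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(candidate_feature_mask(gray).sum(), "candidate feature pixels")
```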
In one embodiment, the electronic device may extract corresponding feature points from each frame of the image to be processed according to the feature points in the reference frame image at the target zoom magnification. In another embodiment, the gray values of the pixels in the image to be processed may also be detected, and when the difference between the gray values of the adjacent pixels is greater than the threshold, the region where the adjacent pixels are located may be used as the feature point, and the corresponding feature point may be extracted from the remaining image to be processed and the reference frame image at the target zoom magnification.
And the electronic equipment combines the feature points extracted from the reference frame image under the target zoom magnification and the corresponding feature points of the image to be processed into matching point pairs.
In one embodiment, the reference frame image at the target zoom magnification may be divided into a plurality of regions, for example, a nine-square grid is added to the reference frame image at the target zoom magnification to divide the reference frame image into nine regions, and each point of the nine-square grid is taken as a feature point. And determining the characteristic points of the image to be processed according to the characteristic points in the reference frame image under the target zoom magnification to form matching point pairs.
Homography is a concept in projective geometry, also known as projective transformation. It maps points (three-dimensional homogeneous vectors) on one projective plane onto another projective plane, and maps straight lines to straight lines, so it has the line-preserving property. A homography matrix is thus a point-to-point mapping: the exact location of the point on another image that corresponds to a given image point can be found using the homography matrix. The homography matrix can therefore be used to represent the mapping relationship between the reference frame image at the target zoom magnification and the image to be processed.
The electronic device may calculate a homography matrix between the reference frame image and the image to be processed at the target zoom magnification according to the matching point pairs determined between the reference frame image and the image to be processed at the target zoom magnification. The image to be processed can be mapped into the same image space as the reference frame image at the target zoom magnification by the homography matrix.
The electronic equipment performs registration processing on the image to be processed according to the homography matrix between the reference frame image and the image to be processed under the target zoom magnification, so as to obtain a corresponding registration image. Further, the image to be processed is mapped to the same image space as the reference frame image at the target zoom magnification according to the homography matrix, and the offset between the image to be processed and the reference frame image at the target zoom magnification is determined according to the matching point pairs. And then, the electronic equipment moves each characteristic point in the image to be processed by the offset to obtain a moved image, wherein the moved image is the registration image.
And according to the same processing mode, respectively corresponding homography matrixes between each frame of image to be processed and the reference frame image under the target zoom magnification can be calculated, and the corresponding image to be processed is subjected to registration processing according to the homography matrixes to obtain a corresponding registration image.
In this embodiment, before image registration, the reference frame image is enlarged to the target zoom magnification, and the images to be processed of the other frames are registered to the enlarged reference frame image, so that each frame of image to be processed can provide sub-pixel-level high-resolution information for the reference frame image at the target zoom magnification. By fusing the reference frame image at the target zoom magnification with each registered image, pixels with the same content in different frames can be fused together, the target image at the target zoom magnification can be obtained accurately, ghosting can be avoided, and the image quality of the target image is improved.
In one embodiment, the purpose of image registration is to try to fuse pixels with the same content in different frames when fusing images. Before image registration, the reference frame needs to be interpolated and enlarged to a target size (for example, 1.6 ×), and other frame images need to be registered to the enlarged reference frame, so that the other frame images can provide high-resolution information of a sub-pixel level for the reference frame images.
Image registration is carried out using homography matrix estimation based on SIFT (Scale-Invariant Feature Transform) feature point detection. The homography matrix H_k from the K-th image to be processed Y_k to the reference frame image Y_0 is estimated as shown in the following formula:
[x', y', w]^T = H_k · [x, y, 1]^T,  where H_k = [[h11, h12, h13], [h21, h22, h23], [h31, h32, h33]]
where (x, y) are the coordinates of a feature point in Y_k, the 3x3 matrix is the homography matrix H_k, (x', y') are the coordinates of the feature point after registration to the reference frame image Y_0, and w is the homogeneous coordinate, normalized to 1.
After the homography matrix H_k is calculated, H_k can be used to calculate the offset [MV_xk, MV_yk] of each feature point of Y_k with respect to Y_0, forming a two-channel offset vector map of the same size as Y_0. The main flow is shown in Fig. 9.
Feature point detection is performed on the reference frame image Y_0 of size h x w and on the image to be processed Y_k of size h x w, respectively, to obtain the feature point descriptions corresponding to Y_0 and to Y_k. According to the feature point descriptions, the feature points of Y_k and Y_0 are matched to obtain matching point pairs. Based on the matching point pairs, the homography matrix H_k between Y_k and Y_0 is calculated.
SIFT is a classic algorithm for detecting key points. It essentially searches for key points (feature points) in different scale spaces, calculates their position, orientation and scale information, and uses this information to describe the key points. The key points found by SIFT are prominent, stable feature points that are robust to factors such as illumination, affine transformation and noise. After the feature points are obtained, the gradient histograms of the points around each feature point are used to form a feature vector.
The feature vector is the description of the current feature point. Solving the feature points of the two images Y_0 and Y_k yields two groups of feature points, {f_i^0} for Y_0 and {f_j^k} for Y_k.
The process of feature point matching is to match the two groups of feature points found above by the Euclidean distance between their feature vectors, as shown in the following formula:
match(f_i^0) = argmin_j || f_i^0 − f_j^k ||_2
After finding 4 or more feature point pairs with the nearest distance between Y_0 and Y_k, the homography matrix H_k can be solved by DLT (Direct Linear Transformation), thereby obtaining the offset [MV_x, MV_y] of each point in Y_k relative to Y_0. For example, suppose feature point matching gives feature points in Y_k with coordinates (x_1, y_1), (x_2, y_2), ..., (x_t, y_t), whose corresponding feature points in Y_0 have coordinates (x_1', y_1'), (x_2', y_2'), ..., (x_t', y_t'). Applying the homography matrix to the corresponding feature point pairs yields the following equations:
[x_i  y_i  1  0  0  0  −x_i'·x_i  −x_i'·y_i  −x_i'] · h = 0
[0  0  0  x_i  y_i  1  −y_i'·x_i  −y_i'·y_i  −y_i'] · h = 0,  for i = 1, ..., t
stacked for all matching point pairs, i.e., A·H_k = 0 (with h the 9-dimensional vector of the entries of H_k).
where A is a matrix whose number of rows is twice the number of matching point pairs: the coefficients of the matching point pair equations are stacked into A, and the least-squares solution of H_k can be found using the SVD (Singular Value Decomposition) algorithm, so as to calculate the offset [MV_xk, MV_yk] of each frame Y_k relative to Y_0.
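A from-scratch sketch of this DLT-plus-SVD estimation is shown below, assuming the matched point arrays are already available; it is written for illustration and is not the patent's implementation.

```python
import numpy as np

def homography_dlt(pts_k, pts_0):
    """Estimate H_k mapping points of Y_k (pts_k) onto Y_0 (pts_0) by the DLT.
    pts_k, pts_0: arrays of shape (N, 2) with N >= 4 matched feature points."""
    rows = []
    for (x, y), (xp, yp) in zip(pts_k, pts_0):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    A = np.asarray(rows, dtype=float)           # 2N x 9 coefficient matrix
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)                    # right singular vector of the smallest singular value
    return H / H[2, 2]

def offsets(H, pts_k):
    """Offset [MV_x, MV_y] of each point of Y_k relative to Y_0 under H."""
    p = np.hstack([pts_k, np.ones((len(pts_k), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3] - pts_k

pts_k = np.array([[10, 10], [200, 15], [190, 180], [12, 170]], dtype=float)
pts_0 = pts_k + np.array([3.0, -2.0])           # a pure translation, for illustration
print(offsets(homography_dlt(pts_k, pts_0), pts_k))   # approximately [[3, -2], ...]
```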
In one embodiment, image registration may also employ SURF, corner points or other features for feature point detection and description. Alternatively, the optical flow vector of each pixel from each frame of image to be processed to the reference frame can be solved from the brightness information around each point in adjacent frames, and the motion vector of the pixel is then calculated from the optical flow vector to perform image registration.
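For the optical-flow route, a dense Farnebäck flow can stand in for the per-pixel motion vectors; the sketch below uses common default parameter values, not values taken from the patent.

```python
import cv2
import numpy as np

def register_by_optical_flow(ref_y, frame_y):
    """Warp frame_y onto ref_y using per-pixel motion vectors from dense optical flow
    (both single-channel uint8 images of the same size)."""
    flow = cv2.calcOpticalFlowFarneback(ref_y, frame_y, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = ref_y.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_y, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```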
In one embodiment, there is provided an image processing method including:
the electronic equipment acquires a target zoom magnification, and determines a reference frame image from each frame of image to be processed according to the image sharpness corresponding to each frame of image to be processed under the condition that the target zoom magnification is within a first preset zoom magnification range.
And then, the electronic equipment enlarges the reference frame image to a target zoom magnification, and obtains the characteristic points of the reference frame image and the characteristic points of the to-be-processed images of the frames under the target zoom magnification.
Then, the electronic device determines matching point pairs between the reference frame image and each frame of image to be processed at the target zoom magnification according to the feature points of the reference frame image and the feature points of each frame of image to be processed at the target zoom magnification.
Further, the electronic equipment determines homography matrixes between the reference frame images and the to-be-processed images respectively under the target zoom magnification according to the matching point pairs; and performing registration processing on the to-be-processed images of the frames based on the homography matrix to respectively obtain the registration images of the target zoom magnification.
Further, the electronic device performs fusion processing on each registration image and the reference frame image at the target zoom magnification to obtain a first target image at the target zoom magnification.
Optionally, in a case that the target zoom magnification belongs to a second preset zoom magnification, zoom processing is performed on each frame of image to be processed in the multi-frame fusion manner to obtain a first magnification image; the second preset zoom magnification is an integral multiple of the upper limit value of the first preset zoom magnification range, and the first magnification image is an image corresponding to the upper limit magnification of the first preset zoom magnification range.
Further, the electronic device performs feature extraction on the first magnification image to obtain feature maps of each layer, enlarges each layer of feature map to the target zoom magnification, and performs fusion processing on the enlarged feature maps to obtain a second target image at the target zoom magnification.
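The patent does not spell out the network used for this single-frame super-resolution step at this point, so the following PyTorch skeleton is only a generic illustration of extracting feature maps layer by layer, enlarging each to the target zoom magnification, and fusing them; all layer sizes and names are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleFrameSR(nn.Module):
    """Generic sketch: per-layer feature maps -> upsample to target zoom -> fuse."""
    def __init__(self, channels=32, layers=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Conv2d(1 if i == 0 else channels, channels, 3, padding=1) for i in range(layers)]
        )
        self.fuse = nn.Conv2d(channels * layers, 1, 3, padding=1)

    def forward(self, x, zoom):
        feats, h = [], x
        for block in self.blocks:
            h = F.relu(block(h))
            feats.append(h)                         # feature map of each layer
        size = (int(x.shape[-2] * zoom), int(x.shape[-1] * zoom))
        up = [F.interpolate(f, size=size, mode="bilinear", align_corners=False) for f in feats]
        return self.fuse(torch.cat(up, dim=1))      # fuse the enlarged feature maps

out = SingleFrameSR()(torch.rand(1, 1, 64, 64), zoom=2.0)   # -> shape (1, 1, 128, 128)
```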
Optionally, under the condition that the target zoom magnification belongs to a third preset zoom magnification, performing zoom processing on each frame of image to be processed in a multi-frame fusion mode to obtain a first magnification image; the third preset zoom magnification is between two different second preset zoom magnifications; the first magnification image is an image corresponding to the upper limit magnification of the first preset zoom magnification range.
Then, the electronic device performs zoom processing on the first magnification image in a single-frame super-resolution manner to obtain two second magnification images corresponding to the two second preset zoom magnifications adjacent to the target zoom magnification.
Further, for each second-magnification image, determining each to-be-interpolated point of the second-magnification image under the target zoom magnification; and for each point to be interpolated, determining the position of each sampling point in the preset range of the point to be interpolated, and determining the corresponding middle interpolation point of each row in the preset range according to the position of each sampling point and the corresponding weight.
Then, determining the Euclidean distance between the intermediate interpolation point and the point to be interpolated aiming at each intermediate interpolation point, and taking the Euclidean distance as the interpolation weight corresponding to the intermediate interpolation point; and determining a pixel value corresponding to the point to be interpolated under the target zoom magnification based on each intermediate interpolation point and the corresponding interpolation weight until obtaining a pixel value corresponding to each point to be interpolated in the second magnification image, and obtaining an interpolation image corresponding to the second magnification image.
Further, the electronic device determines the weights respectively corresponding to the interpolated images according to the target zoom magnification and the second preset zoom magnifications respectively corresponding to the second magnification images, and performs fusion processing on the interpolated images based on their respective weights to obtain a third target image at the target zoom magnification.
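The exact weighting rule is not given in this passage; the sketch below assumes weights that are linear in the distance of the target magnification from the two adjacent preset magnifications, and the magnification values in the example are hypothetical.

```python
import numpy as np

def fuse_adjacent_interpolations(img_lo, img_hi, mag_lo, mag_hi, target_mag):
    """Blend the two interpolated images (already at the target size) with weights
    assumed proportional to how close their source magnifications are to the target."""
    w_lo = (mag_hi - target_mag) / (mag_hi - mag_lo)
    w_hi = (target_mag - mag_lo) / (mag_hi - mag_lo)
    blended = w_lo * img_lo.astype(np.float64) + w_hi * img_hi.astype(np.float64)
    return blended.astype(np.uint8)

# e.g. a 6.8x target between hypothetical 6.4x and 9.6x second-magnification results
third_target = fuse_adjacent_interpolations(np.zeros((10, 10), np.uint8),
                                            np.full((10, 10), 200, np.uint8),
                                            mag_lo=6.4, mag_hi=9.6, target_mag=6.8)
```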
In this embodiment, three processing modes under different zoom magnifications are provided, where the different zoom magnifications include a first preset zoom magnification range, a second preset zoom magnification and a third preset zoom magnification, the second preset zoom magnification is an integral multiple of an upper limit value of the first preset zoom magnification range, and the third preset zoom magnification is between the two different second preset zoom magnifications, so that the first preset zoom magnification range, the second preset zoom magnification and the third preset zoom magnification can cover all zoom magnifications, and thus zoom processing can be performed in a corresponding mode under each zoom magnification.
Under the condition that the target zoom magnification is within a first preset zoom magnification range, each frame of image to be processed is subjected to registration processing on a reference frame image under the target zoom magnification, so that each frame of image to be processed and the reference frame image under the target zoom magnification are in the same image space, other parameters except the resolution of each frame of image to be processed and the reference frame image under the target zoom magnification are kept consistent, high-resolution information of a sub-pixel level can be obtained after the registration image and the reference frame image are fused, and the image quality of the target image is improved.
Under the condition that the target zooming magnification belongs to the second preset zooming magnification, zooming processing is firstly carried out on each frame of image to be processed in a multi-frame fusion mode, and the first magnification image can be quickly obtained. The zoom processing is carried out on the first magnification image in a single-frame super-resolution mode, so that under the condition that a multi-frame fusion mode cannot support the target zoom magnification, super-resolution reconstruction can be carried out on the processing result of the multi-frame fusion mode in the single-frame super-resolution mode, each second magnification image under integral multiple of the upper limit magnification of the first preset zoom magnification range is obtained, and the zoom continuity and the high-magnification zoom image quality can be guaranteed. By performing interpolation processing on each second magnification image and performing fusion processing on the images obtained after the interpolation processing, a third target image at the target zoom magnification is obtained, and thus an image with higher resolution can be generated and the loss of image quality can be reduced.
In a case that the target zoom magnification belongs to a third preset zoom magnification, two second magnification images are obtained through the multi-frame fusion manner and the single-frame super-resolution manner, and interpolation processing is performed on the two second magnification images respectively to obtain the interpolated images at the target zoom magnification. The weights respectively corresponding to the interpolated images are determined according to the target zoom magnification and the second preset zoom magnifications respectively corresponding to the second magnification images, and fusion processing is performed on the interpolated images based on those weights. In this way the image obtains more detailed features, the resulting third target image at the target zoom magnification is clearer, and the loss of image quality can be reduced.
According to the embodiment, the multi-frame fusion mode, the single-frame super-division mode, the interpolation processing mode and the combination mode thereof can be flexibly selected to execute the zoom processing in different focal zones according to the preset zoom magnification to which the target zoom magnification belongs, so that the loss of the image quality in each focal zone can be effectively reduced, the image resolution can be increased, and the loss of the image quality can be reduced.
It should be understood that although the various steps in the flowcharts of FIGS. 2-9 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-9 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 10 is a block diagram showing the configuration of an image processing apparatus according to an embodiment. As shown in fig. 10, the apparatus includes:
an obtaining module 1002, configured to obtain a target zoom magnification.
The first zooming module 1004 is configured to, when the target zooming magnification is within a first preset zooming magnification range, perform zooming processing on each frame of to-be-processed image in a multi-frame fusion manner to obtain a first target image at the target zooming magnification.
The second zooming module 1006 is configured to, when the target zooming magnification belongs to a second preset zooming magnification, perform zooming processing on each frame of to-be-processed image in a multi-frame fusion manner and a single-frame super-resolution manner to obtain a second target image at the target zooming magnification; the second preset zoom magnification is an integral multiple of the upper limit value of the first preset zoom magnification range.
The third zooming module 1008 is configured to, when the target zooming magnification belongs to a third preset zooming magnification, perform zooming processing on each frame of to-be-processed image in a multi-frame fusion mode, a single-frame super-resolution mode and an interpolation processing mode to obtain a third target image at the target zooming magnification; the third preset zoom magnification is between two different second preset zoom magnifications.
In this embodiment, the first preset zoom magnification range, the second preset zoom magnification and the third preset zoom magnification cover all zoom magnifications. In a case that the target zoom magnification is within the first preset zoom magnification range, each frame of image to be processed is zoomed in a multi-frame fusion manner, so that the first target image at the target zoom magnification can be generated quickly. In a case that the target zoom magnification belongs to a second preset zoom magnification, which is an integral multiple of the upper limit value of the first preset zoom magnification range, zoom processing is performed on each frame of image to be processed by combining the multi-frame fusion manner and the single-frame super-resolution manner, which can increase image detail and reduce the loss of image quality, thereby obtaining a second target image at the target zoom magnification. In a case that the target zoom magnification belongs to a third preset zoom magnification, which is between two different second preset zoom magnifications, zoom processing is performed on each frame of image to be processed through the multi-frame fusion manner, the single-frame super-resolution manner and the interpolation processing manner to obtain a third target image at the target zoom magnification. Zoom processing in different focal zones can thus be performed by flexibly selecting the multi-frame fusion manner, the single-frame super-resolution manner, the interpolation processing manner and combinations thereof according to the preset zoom magnification to which the target zoom magnification belongs, which can effectively reduce the loss of image quality in each focal zone.
In an embodiment, the second zooming module 1006 is further configured to perform zooming processing on each frame of to-be-processed image by using a multi-frame fusion method under a condition that the target zooming magnification belongs to a second preset zooming magnification, so as to obtain a first-magnification image; the first magnification image is an image corresponding to the upper limit magnification of a first preset zooming magnification range; zooming the first-magnification image in a single-frame hyper-resolution mode to obtain a second target image under the target zooming magnification.
In this embodiment, when the target zoom magnification belongs to the second preset zoom magnification, zoom processing is performed on each frame of to-be-processed image in a multi-frame fusion manner, so that the first-magnification image can be obtained quickly. Zooming processing is carried out on the first magnification image in a single-frame super-resolution mode, so that super-resolution reconstruction can be carried out on a processing result of a multi-frame fusion mode in the single-frame super-resolution mode under the condition that the multi-frame fusion mode cannot support the target zooming magnification, and more zooming magnifications can be supported. In addition, the second preset zooming magnification is integral multiple of the upper limit magnification of the first preset zooming magnification range, so that the continuity of zooming and the maintenance of high-magnification zooming image quality can be ensured.
In an embodiment, the second zooming module 1006 is further configured to perform feature extraction on the first-magnification image to obtain feature maps of each layer; and amplifying each layer of feature map to a target zooming magnification, and carrying out fusion processing on each layer of amplified feature map to obtain a second target image under the target zooming magnification.
In this embodiment, the features of the first-magnification image are extracted to obtain key features of the first-magnification image, each layer of feature map is enlarged to a target zoom magnification, and the enlarged feature maps of each layer are subjected to fusion processing, so that the key features of different layers can be fused, and a second target image at the target zoom magnification can be reconstructed. In addition, the second target image is fused with more detail features, and the image quality is better.
In an embodiment, the third zooming module 1008 is further configured to, when the target zooming magnification belongs to a third preset zooming magnification, perform zooming processing on each frame of to-be-processed image in a multi-frame fusion manner to obtain a first-magnification image; the first magnification image is an image corresponding to the upper limit magnification of a first preset zooming magnification range; zooming the first-magnification image by adopting a single-frame hyper-resolution mode to obtain at least two second-magnification images; at least two second magnification images comprising images respectively corresponding to two second preset zoom magnifications adjacent to the target zoom magnification; and respectively carrying out interpolation processing on the second magnification images, and carrying out fusion processing on the images obtained after the interpolation processing to obtain a third target image under the target zoom magnification.
In this embodiment, zoom processing is performed on each frame of to-be-processed image in a multi-frame fusion manner, so that a first-magnification image can be obtained quickly. The zoom processing is carried out on the first magnification image in a single-frame super-resolution mode, so that under the condition that a multi-frame fusion mode cannot support the target zoom magnification, super-resolution reconstruction can be carried out on the processing result of the multi-frame fusion mode in the single-frame super-resolution mode, each second magnification image under integral multiple of the upper limit magnification of the first preset zoom magnification range is obtained, and the zoom continuity and the high-magnification zoom image quality can be guaranteed. By performing interpolation processing on each second-magnification image and performing fusion processing on the images obtained after the interpolation processing, a third target image under the target zoom magnification is obtained, so that the loss of image quality of the image can be reduced, the resolution of the generated image is better, and the image is clearer.
In an embodiment, the third zooming module 1008 is further configured to perform interpolation processing on each second magnification image respectively to obtain each interpolated image at the target zooming magnification; determining weights respectively corresponding to the interpolation images according to the target zoom multiplying power and second preset zoom multiplying powers respectively corresponding to the second multiplying power images; and performing fusion processing on the interpolation images based on the weights corresponding to the interpolation images to obtain a third target image under the target zoom magnification.
In this embodiment, interpolation processing is performed on each second magnification image to obtain each interpolated image at the target zoom magnification, weights corresponding to each interpolated image are determined according to the target zoom magnification and a second preset zoom magnification corresponding to each second magnification image, and fusion processing is performed on each interpolated image based on the weight corresponding to each interpolated image, so that the image can obtain more detailed features, and the obtained third target image at the target zoom magnification is higher in image quality, thereby increasing the image resolution and reducing the loss of image quality.
In one embodiment, the third zoom module 1008 is further configured to determine, for each of the second magnification images, points to be interpolated of the second magnification image at the target zoom magnification; and for each point to be interpolated, determining the position of each sampling point within a preset range of the point to be interpolated, determining the corresponding pixel value of the point to be interpolated under the target zoom magnification according to the position of each sampling point and the corresponding weight, and obtaining an interpolated image corresponding to the second magnification image after obtaining the corresponding pixel value of each point to be interpolated in the second magnification image.
In this embodiment, for each of the second magnification images, each point to be interpolated of that second magnification image at the target zoom magnification is determined. For each point to be interpolated, the position of each sampling point within the preset range of the point to be interpolated is determined, and the pixel value corresponding to the point to be interpolated at the target zoom magnification can be accurately determined according to the position and corresponding weight of each sampling point. Once the pixel value corresponding to every point to be interpolated in the second magnification image has been obtained, the interpolated image corresponding to that second magnification image is accurately obtained.
In an embodiment, the third zooming module 1008 is further configured to determine a middle interpolation point corresponding to each row in the preset range according to the position and the corresponding weight of each sampling point; determining Euclidean distance between the intermediate interpolation point and the point to be interpolated aiming at each intermediate interpolation point, and taking the Euclidean distance as the interpolation weight corresponding to the intermediate interpolation point; and determining the corresponding pixel value of the point to be interpolated under the target zoom magnification based on each intermediate interpolation point and the corresponding interpolation weight.
In this embodiment, the intermediate interpolation point corresponding to each row in the preset range is determined according to the position and corresponding weight of each sampling point. For each intermediate interpolation point, the Euclidean distance between the intermediate interpolation point and the point to be interpolated is determined and taken as the interpolation weight corresponding to that intermediate interpolation point. The interpolation weight represents the importance of the intermediate interpolation point, so the pixel value corresponding to the point to be interpolated at the target zoom magnification can be determined more accurately based on each intermediate interpolation point and its interpolation weight.
In one embodiment, the first zoom module 1004 is further configured to determine, when the target zoom magnification is within a first preset zoom magnification range, a reference frame image from each frame of to-be-processed image according to the image sharpness corresponding to each frame of to-be-processed image; amplifying the reference frame image to a target zooming magnification, and performing registration processing on each frame of image to be processed based on the reference frame image under the target zooming magnification to obtain each registration image; and carrying out fusion processing on each registration image and the reference frame image of the target zoom magnification to obtain a first target image under the target zoom magnification.
In this embodiment, each frame of image to be processed is registered to the reference frame image at the target zoom magnification, so that each frame of image to be processed and the reference frame image at the target zoom magnification lie in the same image space and all parameters other than resolution remain consistent between them. High-resolution information at the sub-pixel level can therefore be obtained after the registered images and the reference frame image are fused, and the generated first target image is clearer.
In one embodiment, the first zooming module 1004 is further configured to acquire feature points of a reference frame image and feature points of each frame of an image to be processed at the target zooming magnification; respectively determining matching point pairs between the reference frame image and each frame of image to be processed under the target zoom magnification according to the characteristic points of the reference frame image and the characteristic points of each frame of image to be processed under the target zoom magnification; determining homography matrixes between the reference frame images and the frames of images to be processed under the target zooming magnification according to the matching point pairs; and performing registration processing on the to-be-processed images of the frames based on the homography matrix to respectively obtain the registration images of the target zoom magnification.
In this embodiment, before image registration, the reference frame image is enlarged to the target zoom magnification, and the images to be processed of the other frames are registered to the enlarged reference frame image, so that each frame of image to be processed can provide sub-pixel-level high-resolution information for the reference frame image at the target zoom magnification. By fusing the reference frame image at the target zoom magnification with each registered image, pixels with the same content in different frames can be fused together, the target image at the target zoom magnification can be obtained accurately, ghosting can be avoided, and the image quality of the target image is improved.
The division of the modules in the image processing apparatus is merely for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
For specific limitations of the image processing apparatus, reference may be made to the above limitations of the image processing method, which are not described herein again. The respective modules in the image processing apparatus described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
Fig. 11 is a schematic diagram of the internal structure of an electronic device in one embodiment. The electronic device may be any terminal device such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, a vehicle-mounted computer or a wearable device. The electronic device includes a processor and a memory connected by a system bus. The processor may include one or more processing units and may be a CPU (Central Processing Unit), a DSP (Digital Signal Processor) or the like. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image processing method provided in the embodiments of the present application. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium.
The implementation of each module in the image processing apparatus provided in the embodiment of the present application may be in the form of a computer program. The computer program may be run on a terminal or a server. Program modules constituted by such computer programs may be stored on the memory of the electronic device. Which when executed by a processor, performs the steps of the method described in the embodiments of the present application.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform an image processing method.
Any reference to memory, storage, a database or another medium used herein may include non-volatile and/or volatile memory. The non-volatile memory may include ROM (Read-Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory) or flash memory. The volatile memory may include RAM (Random Access Memory), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), SDRAM (Synchronous Dynamic Random Access Memory), DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory), ESDRAM (Enhanced Synchronous Dynamic Random Access Memory), SLDRAM (Synchronous Link Dynamic Random Access Memory), RDRAM (Rambus Dynamic Random Access Memory) and DRDRAM (Direct Rambus Dynamic Random Access Memory).
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (12)

1. An image processing method, comprising:
acquiring a target zooming magnification, and carrying out zooming processing on each frame of image to be processed in a multi-frame fusion mode under the condition that the target zooming magnification is within a first preset zooming magnification range to obtain a first target image under the target zooming magnification;
under the condition that the target zooming magnification belongs to a second preset zooming magnification, zooming processing is carried out on each frame of the image to be processed in a multi-frame fusion mode and a single-frame super-division mode to obtain a second target image under the target zooming magnification; the second preset zooming magnification is integral multiple of the upper limit value of the first preset zooming magnification range;
under the condition that the target zooming magnification belongs to a third preset zooming magnification, zooming each frame of the image to be processed in the multi-frame fusion mode, the single-frame super-division mode and the interpolation processing mode to obtain a third target image under the target zooming magnification; the third preset zoom magnification is between two different second preset zoom magnifications.
2. The method according to claim 1, wherein, in a case that the target zoom magnification belongs to a second preset zoom magnification, zooming each frame of the image to be processed in the multi-frame fusion mode and the single-frame super-resolution mode to obtain a second target image at the target zoom magnification comprises:
under the condition that the target zooming magnification belongs to a second preset zooming magnification, zooming each frame of the image to be processed in the multi-frame fusion mode to obtain a first magnification image; the first magnification image is an image corresponding to the upper limit magnification of the first preset zooming magnification range;
zooming the first magnification image in a single-frame hyper-resolution mode to obtain a second target image under the target zooming magnification.
3. The method according to claim 2, wherein the zooming the first-magnification image in a single-frame hyper-resolution manner to obtain a second target image at the target zoom magnification comprises:
performing feature extraction on the first magnification image to obtain feature maps of all layers;
and amplifying the characteristic diagrams of all layers to the target zooming magnification, and carrying out fusion processing on the amplified characteristic diagrams of all layers to obtain a second target image under the target zooming magnification.
4. The method according to claim 1, wherein in a case that the target zoom magnification belongs to a third preset zoom magnification, performing zoom processing on each frame of the image to be processed through the multi-frame fusion mode, the single-frame super-division mode, and the interpolation processing mode to obtain a third target image at the target zoom magnification includes:
under the condition that the target zooming magnification belongs to a third preset zooming magnification, zooming processing is carried out on each frame of the image to be processed in the multi-frame fusion mode to obtain a first magnification image; the first magnification image is an image corresponding to the upper limit magnification of the first preset zooming magnification range;
zooming the first-magnification image by adopting the single-frame hyper-resolution mode to obtain at least two second-magnification images; the at least two second magnification images comprise images respectively corresponding to two second preset zoom magnifications adjacent to the target zoom magnification;
and respectively carrying out interpolation processing on the second magnification images, and carrying out fusion processing on the images obtained after the interpolation processing to obtain a third target image under the target zoom magnification.
5. The method according to claim 4, wherein the interpolating each of the second magnification images, and fusing the interpolated images to obtain a third target image at the target zoom magnification, respectively comprises:
performing interpolation processing on each second magnification image respectively to obtain each interpolation image under the target zoom magnification;
determining weights respectively corresponding to the interpolation images according to the target zoom multiplying power and second preset zoom multiplying powers respectively corresponding to the second multiplying power images;
and performing fusion processing on each interpolation image based on the weight corresponding to each interpolation image to obtain a third target image under the target zoom magnification.
6. The method according to claim 5, wherein the separately interpolating each of the second magnification images to obtain each interpolated image at the target zoom magnification comprises:
for each second magnification image in the second magnification images, determining each point to be interpolated of the second magnification image at the target zoom magnification;
and for each point to be interpolated, determining the position of each sampling point within a preset range of the point to be interpolated, and determining the corresponding pixel value of the point to be interpolated under the target zoom magnification according to the position and the corresponding weight of each sampling point until obtaining the corresponding pixel value of each point to be interpolated in the second magnification image, and then obtaining the interpolated image corresponding to the second magnification image.
7. The method according to claim 6, wherein the determining the pixel value of the point to be interpolated at the target zoom magnification according to the position and the corresponding weight of each sampling point comprises:
determining an intermediate interpolation point corresponding to each row in the preset range according to the position and the corresponding weight of each sampling point;
for each intermediate interpolation point, determining the Euclidean distance between the intermediate interpolation point and the point to be interpolated, and taking the Euclidean distance as the interpolation weight corresponding to the intermediate interpolation point;
and determining the corresponding pixel value of the point to be interpolated under the target zoom magnification based on each intermediate interpolation point and the corresponding interpolation weight.
8. The method according to claim 1, wherein in a case that the target zoom magnification is within a first preset zoom magnification range, zooming each frame of image to be processed in a multi-frame fusion manner to obtain a first target image at the target zoom magnification comprises:
determining a reference frame image from the to-be-processed images of each frame according to the image acutance corresponding to the to-be-processed images of each frame under the condition that the target zoom magnification is within a first preset zoom magnification range;
amplifying the reference frame image to the target zoom magnification, and performing registration processing on each frame of the image to be processed based on the reference frame image under the target zoom magnification to obtain each registration image;
and carrying out fusion processing on each registration image and the reference frame image at the target zoom magnification to obtain a first target image at the target zoom magnification.
9. The method according to claim 8, wherein registering each frame of the image to be processed against the reference frame image at the target zoom magnification to obtain the registered images comprises:
acquiring feature points of the reference frame image at the target zoom magnification and feature points of each frame of the image to be processed;
determining matched point pairs between the reference frame image at the target zoom magnification and each frame of the image to be processed according to their respective feature points;
determining, according to the matched point pairs, a homography matrix between the reference frame image at the target zoom magnification and each frame of the image to be processed;
and registering each frame of the image to be processed based on the corresponding homography matrix to obtain the registered images at the target zoom magnification.
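For illustration only, a sketch of claim 9's registration using ORB feature points and RANSAC homography estimation; the detector, matcher, and thresholds are assumptions, not values taken from the patent.

```python
import cv2
import numpy as np

def register_to_reference(frame, ref_up):
    """Match feature points between the up-scaled reference and a frame to be
    processed, estimate a homography, and warp the frame onto the reference
    grid at the target zoom magnification."""
    gray_ref = cv2.cvtColor(ref_up, cv2.COLOR_BGR2GRAY)
    gray_frm = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Feature points of the reference frame at the target zoom magnification
    # and of the frame to be processed.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_ref, des_ref = orb.detectAndCompute(gray_ref, None)
    kp_frm, des_frm = orb.detectAndCompute(gray_frm, None)

    # Matched point pairs between the two images.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_frm, des_ref), key=lambda m: m.distance)[:200]

    src_pts = np.float32([kp_frm[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography between the frame and the reference, robust to outlier matches.
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

    # Registration: warp the frame onto the reference grid.
    h, w = ref_up.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```

This routine fits the `register_fn` slot of the multi-frame sketch after claim 8.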
10. An image processing apparatus, characterized by comprising:
a first zooming module configured to acquire a target zoom magnification and, when the target zoom magnification is within a first preset zoom magnification range, zoom each frame of an image to be processed in a multi-frame fusion manner to obtain a first target image at the target zoom magnification;
a second zooming module configured to, when the target zoom magnification is equal to a second preset zoom magnification, zoom each frame of the image to be processed in the multi-frame fusion manner and a single-frame super-resolution manner to obtain a second target image at the target zoom magnification, the second preset zoom magnification being an integral multiple of the upper limit of the first preset zoom magnification range;
and a third zooming module configured to, when the target zoom magnification is equal to a third preset zoom magnification, zoom each frame of the image to be processed in the multi-frame fusion manner, the single-frame super-resolution manner and an interpolation manner to obtain a third target image at the target zoom magnification, the third preset zoom magnification lying between two different second preset zoom magnifications.
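For illustration only, a sketch of how the three zooming modules of claim 10 could be dispatched by target zoom magnification; the range boundaries, preset magnifications, and stubbed helpers are placeholders, not values or algorithms taken from the patent.

```python
class ZoomDispatcher:
    """Route frames to one of three zooming paths by target zoom magnification."""

    def __init__(self, first_range=(1.0, 2.0), second_presets=(4.0, 8.0, 16.0)):
        self.first_range = first_range        # first preset zoom magnification range
        self.second_presets = second_presets  # integral multiples of the upper limit

    def zoom(self, frames, target_mag):
        lo, hi = self.first_range
        if lo <= target_mag <= hi:
            # First zooming module: multi-frame fusion only.
            return self._multi_frame_fusion(frames, target_mag)
        if target_mag in self.second_presets:
            # Second zooming module: multi-frame fusion followed by single-frame
            # super-resolution up to the second preset zoom magnification.
            return self._super_resolution(self._multi_frame_fusion(frames, hi),
                                          target_mag / hi)
        # Third zooming module: produce results at the two neighbouring second
        # preset magnifications, then interpolate and fuse them (claim 5's path).
        # A third preset magnification lies between two second presets by definition.
        lower = max(m for m in self.second_presets if m < target_mag)
        upper = min(m for m in self.second_presets if m > target_mag)
        results = [self._super_resolution(self._multi_frame_fusion(frames, hi), m / hi)
                   for m in (lower, upper)]
        return self._interpolate_and_fuse(results, (lower, upper), target_mag)

    # Stubs standing in for the real processing paths.
    def _multi_frame_fusion(self, frames, mag):
        return frames[0]

    def _super_resolution(self, image, factor):
        return image

    def _interpolate_and_fuse(self, images, preset_mags, target_mag):
        return images[0]
```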
11. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the method according to any of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 9.
CN202110851688.4A 2021-07-27 2021-07-27 Image processing method, apparatus, electronic device, and computer-readable storage medium Active CN113570531B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110851688.4A CN113570531B (en) 2021-07-27 2021-07-27 Image processing method, apparatus, electronic device, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN113570531A (en) 2021-10-29
CN113570531B (en) 2024-09-06

Family

ID=78168005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110851688.4A Active CN113570531B (en) 2021-07-27 2021-07-27 Image processing method, apparatus, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN113570531B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110536057A (en) * 2019-08-30 2019-12-03 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN111447359A (en) * 2020-03-19 2020-07-24 展讯通信(上海)有限公司 Digital zoom method, system, electronic device, medium, and digital imaging device
CN111784578A (en) * 2020-06-28 2020-10-16 Oppo广东移动通信有限公司 Image processing method, image processing device, model training method, model training device, image processing equipment and storage medium
CN111860363A (en) * 2020-07-24 2020-10-30 Oppo广东移动通信有限公司 Video image processing method and device, electronic equipment and storage medium
CN111932587A (en) * 2020-08-03 2020-11-13 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN112634160A (en) * 2020-12-25 2021-04-09 北京小米松果电子有限公司 Photographing method and device, terminal and storage medium
CN112887630A (en) * 2021-04-06 2021-06-01 南昌欧菲光电技术有限公司 Automatic exposure method, electronic device, and computer-readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114520874A (en) * 2022-01-28 2022-05-20 西安维沃软件技术有限公司 Video processing method and device and electronic equipment
CN114520874B (en) * 2022-01-28 2023-11-24 西安维沃软件技术有限公司 Video processing method and device and electronic equipment

Also Published As

Publication number Publication date
CN113570531B (en) 2024-09-06

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant