CN116188341A - Image processing method and device - Google Patents


Info

Publication number
CN116188341A
Authority
CN
China
Prior art keywords
image
processing
target
frequency data
fusion
Prior art date
Legal status
Pending
Application number
CN202310003053.8A
Other languages
Chinese (zh)
Inventor
倪攀
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202310003053.8A
Publication of CN116188341A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The application discloses an image processing method and device, belonging to the technical field of communication. The image processing method comprises the following steps: determining a first image, the first image being a reference image in image fusion; performing target processing on the first image to obtain a second image, and recording a first image position, the second image being a base image in image fusion, the first image position being the position of a target sub-image in the second image, and the target sub-image being image content of the second image that differs from the first image; and fusing the first image and the second image according to the first image position to obtain a third image.

Description

Image processing method and device
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method and an image processing device.
Background
At present, when photographing with a terminal device (such as a mobile phone or a tablet computer), many shooting scenes improve image quality through multi-shot fusion: the overlapping portion of the visible areas of multiple cameras is aligned, and a neural network model is then used for image fusion to obtain an image with improved quality.
A neural network model is trained before being put into use. For a multi-shot fusion neural network model (hereinafter referred to as a multi-shot fusion model), it is necessary to prepare aligned image data acquired by multiple cameras as model input data, and a manually produced multi-shot fusion result as ground-truth data.
In the prior art, to obtain a multi-shot fusion result as ground-truth data, a professional is generally asked to label the data: the professional subjectively judges which areas should and should not be fused, and manually produces the fused result from multiple images. However, this method is time-consuming and laborious; moreover, the difference in image content between the images to be fused is generally large, which increases the difficulty of manual labeling and easily produces large errors.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method and device, which can solve the problem in the prior art that training data for a multi-shot fusion model is difficult to acquire.
In a first aspect, an embodiment of the present application provides an image processing method, including:
determining a first image; the first image is a reference image in image fusion;
performing target processing on the first image to obtain a second image, and recording a first image position; the second image is a base image in image fusion; the first image position is the position of a target sub-image in the second image, and the target sub-image is image content of the second image that differs from the first image;
and according to the first image position, carrying out fusion processing on the first image and the second image to obtain a third image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the acquisition module is used for determining a first image; the first image is a reference image in image fusion;
the processing module is used for performing target processing on the first image to obtain a second image and recording a first image position; the second image is a base image in image fusion; the first image position is the position of a target sub-image in the second image, and the target sub-image is image content of the second image that differs from the first image;
and the fusion processing module is used for carrying out fusion processing on the first image and the second image according to the first image position to obtain a third image.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps in the image processing method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement steps in an image processing method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the image processing method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the image processing method according to the first aspect.
In the embodiment of the application, the first image can simulate the reference image in image fusion, and the second image obtained by performing target processing on the first image can simulate the base image in image fusion. During the target processing, the positions of image content that differs between the second image and the first image can be recorded, and image alignment can be performed according to this position information to obtain an image alignment result. According to the image alignment result, image fusion can then be performed to obtain a third image that simulates a multi-shot fusion result. Thus, the embodiment of the application can obtain training data for a neural network model for image fusion from a single frame of image, namely input data subjected to image alignment processing and a multi-shot fusion result serving as the ground truth. Because the difference in image content between the first image and the second image is caused by the target processing, the difference is controllable and image alignment based on it is easy; compared with the manual labeling approach in the prior art, the scheme provided by the embodiment of the application is simpler and more convenient, with high processing speed and high processing efficiency.
Drawings
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
Fig. 2 is a schematic diagram before and after image sharpness adjustment according to an embodiment of the present application;
Fig. 3 is a schematic diagram before and after adding image noise according to an embodiment of the present application;
Fig. 4 is a schematic diagram before and after the noise reduction step according to an embodiment of the present application;
Fig. 5 is a schematic diagram before and after brightness adjustment according to an embodiment of the present application;
Fig. 6 is a schematic diagram after adding occlusion images according to an embodiment of the present application;
Fig. 7 is a schematic diagram before and after image warping according to an embodiment of the present application;
fig. 8 is a schematic block diagram of an image processing apparatus provided in an embodiment of the present application;
FIG. 9 is a schematic block diagram of an electronic device provided by an embodiment of the present application;
fig. 10 is a schematic hardware structure of an electronic device provided in an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application; it is apparent that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application are within the scope of protection of the present application.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the application are capable of operation in sequences other than those illustrated or otherwise described herein, and that the objects identified by "first," "second," etc. are generally of a type and do not limit the number of objects, for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
The image processing method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application, where the image processing method is applied to an electronic device, that is, steps in the image processing method are performed by the electronic device.
The image processing method may include:
Step 101: a first image is determined.
The first image serves as the reference image in image fusion, that is, the auxiliary image used to improve image quality in multi-shot fusion. For example, in a scheme where a color lens is paired with a black-and-white lens to achieve high-quality night-scene shooting, the first image corresponds to the image shot by the black-and-white lens.
Since the first image serves as an auxiliary image for improving image quality, its own quality requirement is generally high. Therefore, in the embodiment of the application, the first image may be a high-quality image with low noise, high definition, accurate focusing and normal exposure, captured in advance in an environment with sufficient light.
Step 102: and performing target processing on the first image to obtain a second image, and recording the position of the first image.
The second image serves as the base image in image fusion, that is, the image whose quality is to be improved in multi-shot fusion. For example, in the scheme where a color lens is paired with a black-and-white lens to achieve high-quality night-scene shooting, the second image corresponds to the image shot by the color lens.
In the embodiment of the present application, the second image obtained by performing target processing on the first image may be used to simulate the image to be improved in multi-shot image fusion. The target processing may be set according to actual requirements. For example, for a shooting scene in which the definition of the base image needs to be improved, the target processing may be an operation that reduces definition, so that the definition of the second image is lower than that of the first image and the two images match the shooting scene. In general, for the image parameters of the second image to be improved, such as definition and brightness, the first image is better than the second image.
The target process described herein includes at least one of: adjusting image definition, adjusting image noise, adjusting image brightness, adding an occlusion image, and performing image warping. Specifically, the selection can be performed according to actual requirements.
The first image position is the position of a target sub-image in the second image, and the target sub-image is the image content different from the first image in the second image.
After the target processing is performed on the first image, if the second image has image content different from the first image, the position (i.e., the first image position) of the different image content (i.e., the target sub-image) in the second image may be recorded. For example, in the case where the target processing is adding an occlusion image, the occlusion image is image content of the second image that differs from the first image, so the position of the occlusion image in the second image is recorded.
Since the resolution of the first image is the same as that of the second image, the position of the differing image content in the first image is the same as its position in the second image.
It will be appreciated that after the target processing is performed on the first image, if the image content of the second image is the same as that of the first image, the operation of recording the image position need not be performed.
Step 103: and according to the position of the first image, carrying out fusion processing on the first image and the second image to obtain a third image.
Image fusion requires image alignment, that is, determining the fusible image areas and unfusible image areas between two frames of images. Whether areas can be fused is generally determined by image content: for example, parts with the same image content are fusible areas, and parts with different image content are unfusible areas.
Fusion processing generally refers to extracting the beneficial information from multi-frame image data of the same target and synthesizing it into one high-quality image. Here it refers to extracting the beneficial information from the identical areas of the first image and the second image and synthesizing a third image of higher quality.
After image alignment is completed, the first image and the second image can be fused according to the image alignment result (i.e., the fusible and unfusible image areas) to obtain an image fusion result, which serves as the multi-shot fusion result.
In the embodiment of the application, the first image can simulate the reference image in image fusion, and the second image obtained by performing target processing on the first image can simulate the base image in image fusion. During the target processing, the positions of image content that differs between the second image and the first image can be recorded, and image alignment can be performed according to this position information to obtain an image alignment result. According to the image alignment result, image fusion can then be performed to obtain a third image that simulates a multi-shot fusion result. Thus, the embodiment of the application can obtain training data for a neural network model for image fusion from a single frame of image, namely input data subjected to image alignment processing and a multi-shot fusion result serving as the ground truth. Because the difference in image content between the first image and the second image is caused by the target processing, the difference is controllable and image alignment based on it is easy; compared with the manual labeling approach in the prior art, the scheme provided by the embodiment of the application is simpler and more convenient, with high processing speed and high processing efficiency. In addition, when a new fusion scene is required, the embodiment of the application can quickly provide a large amount of model-training data for the new fusion scene, shortening the preparation time of the training data.
As an alternative embodiment, in the case where the target process is to adjust the image sharpness, step 102: performing target processing on the first image to obtain a second image, which may include:
downsampling the first image at a first magnification to obtain a fourth image; and upsampling the fourth image at a second magnification to obtain the second image.
The product of the first magnification and the second magnification is 1.
In this embodiment, the first image may be downsampled at the first magnification, and the downsampled result (i.e., the fourth image) may then be upsampled at the second magnification to obtain the second image. Because the product of the first magnification and the second magnification is 1, the size of the second image is the same as that of the first image, which facilitates subsequent image alignment processing.
This sampling approach loses some image information, so that the sharpness of the second image is lower than that of the first image. As shown in fig. 2, after downsampling image A shown at 201 by a factor of 0.5, image A shown at 202 is obtained. After upsampling image A shown at 202 by a factor of 2, image A shown at 203 is obtained. As can be seen from fig. 2, the sharpness of the outline of image A shown at 203 is lower than that of the outline of image A shown at 201.
Adjusting image sharpness in this way can simulate the different resolutions of camera sensors in a multi-shot scene: in the same shooting scene, the sharpness of the images shot by different cameras differs because the resolutions of their sensors differ, and this embodiment can simulate that situation.
In this embodiment, the first image may be downsampled at a random magnification, that is, the value of the first magnification is random; the random value range may be [0.4, 0.75], and may also be set according to actual requirements. The downsampling and upsampling may use bilinear interpolation or nearest-neighbor interpolation.
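The down/up-sampling degradation above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patent's implementation: the function name, the fixed nearest-neighbor resampling, and the default magnification of 0.5 (within the suggested [0.4, 0.75] range) are all assumptions.

```python
import numpy as np

def degrade_sharpness(img, scale=0.5):
    """Simulate a lower-resolution sensor: downsample by `scale` (the
    first magnification), then upsample back so the output size matches
    the input. Uses nearest-neighbor index maps; a real pipeline might
    use bilinear interpolation instead."""
    h, w = img.shape[:2]
    dh, dw = max(1, round(h * scale)), max(1, round(w * scale))
    # nearest-neighbor index maps for the downsampling step
    ys = (np.arange(dh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(dw) / scale).astype(int).clip(0, w - 1)
    small = img[ys][:, xs]
    # index maps for the upsampling step back to the original size
    ys2 = (np.arange(h) * dh / h).astype(int).clip(0, dh - 1)
    xs2 = (np.arange(w) * dw / w).astype(int).clip(0, dw - 1)
    return small[ys2][:, xs2]
```

Because the round trip discards every pixel skipped during downsampling, the result has the same size as the input but reduced detail, matching the role of the second image described above.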
It will be appreciated that a simple sharpness adjustment does not cause the second image to produce different image content than the first image, and therefore, in this case, there is no first image position information.
As an alternative embodiment, in the case where the target process is to adjust the image noise, step 102: performing target processing on the first image to obtain a second image, which may include:
adding target type noise to the first image to obtain a fifth image; and carrying out noise reduction processing on the fifth image to obtain a second image.
The target type noise described herein may include at least one of: gaussian noise, poisson noise.
In this embodiment, noise may be added to the first image. As shown in fig. 3, the image shown at 301 corresponds to the first image, and the image shown at 302 corresponds to the fifth image with added noise; the fifth image has more noise than the first image. By adding noise to the image, the noise models of different camera sensors in a multi-shot scene can be simulated.
Alternatively, target-type noise of random intensity may be added to the image. For example, in the case where the target-type noise is Gaussian noise, Gaussian noise with a mean of 0 and a standard deviation randomly valued in [0.01, 0.1] may be added. For another example, in the case where the target-type noise is Poisson noise, Poisson noise whose mean and variance are equal to the image pixel value may be added.
In this embodiment, noise reduction processing may then be performed on the noise-added image. As shown in fig. 4, the image shown at 401 corresponds to the fifth image, and the image shown at 402 corresponds to the second image; the second image has less noise than the fifth image. Performing noise reduction on the image can simulate the noise reduction processing of different camera sensors in a multi-shot scene. Alternatively, the noise reduction may be performed by Gaussian filtering, median filtering, or the like.
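The two noise-adjustment steps can be sketched as follows. This is an assumption-laden sketch: a 3×3 mean filter stands in for the Gaussian/median filtering mentioned above, and the function name and default standard deviation (from the suggested [0.01, 0.1] range) are illustrative.

```python
import numpy as np

def add_then_denoise(img, sigma=0.05, seed=0):
    """Add zero-mean Gaussian noise to a float image in [0, 1]
    (simulating a noisier sensor), then apply a crude 3x3 mean
    filter as a stand-in for the denoising step. Returns both
    stages: the noisy 'fifth image' and the denoised 'second image'."""
    rng = np.random.default_rng(seed)
    noisy = np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)
    # 3x3 mean filter via edge-replicated padding and shifted sums
    p = np.pad(noisy, 1, mode="edge")
    h, w = img.shape
    denoised = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return noisy, denoised
```

Averaging over a 3×3 window cuts the independent noise variance by roughly a factor of nine, which is why the second image ends up less noisy than the fifth image, as in fig. 4.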
It will be appreciated that noise adjustment alone does not cause the second image to contain image content different from the first image; therefore, in this case, there is no first image position information.
As an alternative embodiment, in the case where the target process is to adjust the brightness of the image, step 102: performing target processing on the first image to obtain a second image, which may include:
and adjusting the gamma parameter of the first image to obtain a second image.
In the same shooting scene, different cameras may have different sensitivity to light, resulting in inconsistent exposure, that is, inconsistent image brightness. Therefore, the exposure of different camera sensors in a multi-shot scene can be simulated by adjusting the gamma parameter (hereinafter referred to as gamma) of the image.
The brightness of the second image may be greater or smaller than that of the first image, and may be adjusted according to actual requirements. Fig. 5 is a schematic diagram before and after brightness reduction: the image shown at 501 corresponds to the first image, and the image shown at 502 corresponds to the second image, whose brightness is lower than that of the first image.
Optionally, the first image may be adjusted with a random gamma value, that is, the gamma value is random; the specific random value range may be set according to whether the image brightness needs to be increased or decreased.
Optionally, when adjusting the gamma value of the first image, the pixel values of the first image may first be normalized, that is, converted into values between 0 and 1; then a power operation with a preset exponent is performed on the normalized pixel values; and then the pixel values after the power operation are de-normalized, that is, restored to the original numerical range, thereby obtaining the second image. The preset exponent may be 1.0/gamma; for example, when the gamma value is 1.8, the preset exponent is 1.0/1.8.
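The normalize → power → de-normalize sequence above maps directly to code. A minimal sketch for 8-bit images (the function name is illustrative; the exponent 1.0/gamma follows the description above, so gamma > 1 brightens and gamma < 1 darkens):

```python
import numpy as np

def adjust_gamma(img_u8, gamma=1.8):
    """Gamma-adjust an 8-bit image: normalize to [0, 1], raise to the
    preset exponent 1.0/gamma, then de-normalize back to [0, 255]."""
    x = img_u8.astype(np.float64) / 255.0      # normalize
    y = np.power(x, 1.0 / gamma)               # power of the preset exponent
    return np.clip(np.round(y * 255.0), 0, 255).astype(np.uint8)  # de-normalize
```

With gamma = 1.8 the exponent is about 0.56, so mid-tones are lifted while 0 and 255 stay fixed; passing gamma = 1.0 leaves the image unchanged.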
It will be appreciated that brightness adjustment alone does not cause the second image to contain image content different from the first image; therefore, in this case, there is no first image position information.
As an alternative embodiment, in the case where the target process is to add an occlusion image, step 102: performing target processing on the first image to obtain a second image, and recording the position of the first image, which may include:
adding an occlusion image to the first image to obtain a second image, and recording the position of the occlusion image as the first image position.
Because different cameras are located at different positions on the terminal device, their image acquisition viewing angles may differ, so that differences exist among the acquired images, much like the differences between what a person's left eye and right eye see. Therefore, the embodiment of the application can add occlusion images to the first image to simulate viewing-angle occlusion among different cameras in a multi-shot scene, that is, to simulate the differences among images acquired by different cameras.
In this embodiment, the number of occlusion images may be preset, i.e., a preset number of occlusion images is added to the first image. Of course, the number of occlusion images can also be chosen randomly, i.e., a random number of occlusion images is added to the first image; the random value range may be an integer from 1 to 5, and may also be set according to actual requirements. Optionally, the number of pixels occupied by the occlusion images may be smaller than or equal to a preset proportion of the number of pixels in the first image, where the preset proportion may be set according to actual requirements, for example, 1/10; this avoids a difference between the first image and the second image that is too large to match the real situation.
Alternatively, the shape of an occlusion image may be polygonal, circular, elliptical, etc. As shown in fig. 6, the occlusion images include: rectangular image 601, elliptical image 602, pentagonal graphic 603, and heptagonal graphic 604. The color of an occlusion image can be set according to actual requirements. In addition, occlusion images of various shapes may have holes, as shown by pentagonal graphic 603. Further, an occlusion image may also carry a texture, as shown by the diagonal texture of rectangular image 601. Holes and textures can enrich the differences between the images.
In the embodiment of the present application, in the process of adding the occlusion image to the first image, the position of the occlusion image in the second image may be recorded, where the position is the first image position, and the occlusion image is the target sub-image.
Optionally, the position of the occlusion image in the first image may be random or preset, and may specifically be set according to actual requirements.
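The occlusion step can be sketched as below. This is a simplified stand-in for the description above: only flat-colored rectangles are pasted (no ellipses, polygons, holes, or textures), and the function name and parameters are assumptions; the key point is that each occluder's position is recorded as the first image position.

```python
import numpy as np

def add_occlusions(img, n_max=5, max_frac=0.1, seed=0):
    """Paste 1..n_max randomly placed filled rectangles onto a copy of
    a float image, keeping their total area within max_frac of the
    image (the 1/10 proportion suggested above). Returns the occluded
    image and the recorded (y, x, h, w) positions of the occluders."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = img.shape[:2]
    budget = int(h * w * max_frac)       # pixel budget for all occluders
    positions = []
    for _ in range(rng.integers(1, n_max + 1)):
        bh = int(rng.integers(1, h // 4 + 1))
        bw = int(rng.integers(1, w // 4 + 1))
        if bh * bw > budget:
            continue                      # would exceed the area budget
        y = int(rng.integers(0, h - bh + 1))
        x = int(rng.integers(0, w - bw + 1))
        out[y:y + bh, x:x + bw] = rng.random()   # flat random gray level
        positions.append((y, x, bh, bw))
        budget -= bh * bw
    return out, positions
```

The returned `positions` list plays the role of the first image position: it marks exactly the regions where the second image's content differs from the first image.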
As an alternative embodiment, in the case where the target process is image warping, step 102: performing target processing on the first image to obtain a second image, and recording the position of the first image, which may include:
adding a target grid identifier to the first image; performing perturbation processing on first grid cells in the target grid identifier; and performing warping processing on the target sub-image in the first image according to the perturbation result of the first grid cells to obtain a second image, and recording the position of the target sub-image as the first image position.
The first grid cells are randomly selected, the number of first grid cells is at least one, and the target sub-image is the image at the positions of the first grid cells.
In the embodiment of the present application, as shown at 701 in fig. 7, a grid 7011 (i.e., the target grid identifier) may be created in the first image and random grid perturbation added; image warping is then performed according to the grid perturbation. As shown at 702 in fig. 7, the 8 grid cells framed by the rectangular dashed box 7021 are the perturbed grid cells, and the image at the positions of these 8 grid cells is warped.
Through this processing, the second image is warped relative to the first image, and alignment errors easily occur during image fusion, so the situation of image misalignment in a multi-shot scene can be simulated.
In this embodiment of the present application, during the warping of the first image, the position of the warped image region may be recorded; that position is the first image position, and the warped image region is the target sub-image. Of course, the positions of the perturbed grid cells may also be recorded; the two are identical.
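A crude sketch of the grid-perturbation warp follows. Real mesh warping interpolates smoothly between perturbed grid vertices; here, as a stand-in, the content of each randomly chosen cell is simply shifted by a random offset. The function name and parameters are illustrative, not the patent's implementation.

```python
import numpy as np

def warp_grid_cells(img, grid=4, n_cells=3, max_shift=2, seed=0):
    """Divide the image into grid x grid cells, randomly pick n_cells
    of them, and circularly shift the content inside each picked cell
    by a random offset (a crude stand-in for mesh-based warping).
    Returns the warped image and the perturbed cells' (y, x, h, w)
    positions, recorded as the first image position."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = img.shape[:2]
    ch, cw = h // grid, w // grid
    picked = rng.choice(grid * grid, size=n_cells, replace=False)
    recorded = []
    for k in picked:
        r, c = divmod(int(k), grid)
        y, x = r * ch, c * cw
        dy = int(rng.integers(-max_shift, max_shift + 1))
        dx = int(rng.integers(-max_shift, max_shift + 1))
        cell = img[y:y + ch, x:x + cw]
        out[y:y + ch, x:x + cw] = np.roll(cell, (dy, dx), axis=(0, 1))
        recorded.append((y, x, ch, cw))
    return out, recorded
```

As in the occlusion case, `recorded` marks the regions where the second image no longer aligns with the first image, which is exactly the information needed for the fusion step.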
As an alternative embodiment, step 103: according to the first image position, fusing the first image and the second image to obtain a third image, which may include:
acquiring first high-frequency data of the first image, and second high-frequency data and first low-frequency data of the second image; and fusing the first high-frequency data, the second high-frequency data and the first low-frequency data to obtain a third image.
The first high-frequency data is the high-frequency data at a second image position in the first image, where the second image position is the image position other than a third image position, and the third image position corresponds to the first image position; that is, the image content at the third image position in the first image differs from the image content at the first image position in the second image. The second high-frequency data is the high-frequency data at the first image position in the second image.
In the embodiment of the application, the low-frequency data and the high-frequency data of the image can be separated, and then the required low-frequency data and the high-frequency data are selected to be fused according to the fusion requirement.
Suppose the low-frequency and high-frequency data separated from the first image are denoted A_low and A_high respectively, where first image = A_low + A_high, and the low-frequency and high-frequency data separated from the second image are denoted B_low and B_high respectively, where second image = B_low + B_high. Then third image = B_low + w × B_high + (1 − w) × A_high, where w takes the value 1 at the first image position (and the corresponding third image position) and 0 at the remaining image positions.
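The formula above translates directly into code. One assumption in this sketch: the patent does not specify how low and high frequencies are separated, so a simple mean-filter blur serves as the low-pass here (low = blurred image, high = image − low); the weight mask `w` is 1 at the recorded target positions and 0 elsewhere.

```python
import numpy as np

def fuse(a, b, w, ksize=3):
    """Fuse reference image `a` and base image `b` (float arrays of the
    same shape) as: third = B_low + w*B_high + (1-w)*A_high, where the
    low/high split uses a ksize x ksize mean filter as the low-pass."""
    def low(img):
        # mean-filter blur via edge-replicated padding and shifted sums
        p = np.pad(img, ksize // 2, mode="edge")
        h, wd = img.shape
        return sum(p[i:i + h, j:j + wd]
                   for i in range(ksize) for j in range(ksize)) / ksize ** 2
    a_low, b_low = low(a), low(b)
    a_high, b_high = a - a_low, b - b_low
    return b_low + w * b_high + (1 - w) * a_high
```

Where w = 1 the result reduces to B_low + B_high = the base image itself (the differing regions are kept intact), and where w = 0 the base image's low frequencies are combined with the reference image's sharper high frequencies.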
Finally, it should be noted that different target processes may be used in combination, for example, the second image may be obtained by both adjusting the image sharpness and adjusting the image noise. When target processes are used in combination, the execution order of the different target processes can be set according to actual requirements: for example, the image sharpness may be adjusted first and then the image noise, or the image noise may be adjusted first and then the image sharpness. It will be appreciated that, since there is typically a difference in the image acquisition perspectives of different cameras, adding an occlusion image is generally used in combination with other image adjustment operations.
The above is a description of the image processing method provided in the embodiment of the present application.
In summary, in the embodiments of the present application, by exploiting the differences among multi-camera images, the base image used to simulate multi-camera shooting can be obtained from a high-quality image shot by a single camera through target processing, such as data degradation (reducing image sharpness, increasing image noise, etc.), without acquiring real images shot by multiple cameras, so that the method can quickly adapt to a newly added shooting scene. Moreover, because the difference in image content between the two frames of images is introduced by the target processing, the difference is controllable, and image alignment and image fusion can easily be carried out according to the difference.
The execution subject of the image processing method provided by the embodiments of the present application may be an image processing apparatus. In the embodiments of the present application, the image processing apparatus is described by taking, as an example, the case in which the image processing apparatus executes the image processing method.
Fig. 8 is a schematic block diagram of an image processing apparatus applied to an electronic device provided in an embodiment of the present application.
As shown in fig. 8, the image processing apparatus may include:
a determining module 801 is configured to determine a first image.
The first image is a reference image in image fusion.
And a processing module 802, configured to perform target processing on the first image to obtain a second image, and record a first image position.
Wherein the second image is a base image in image fusion, and the target processing includes at least one of: adjusting image sharpness, adjusting image noise, adjusting image brightness, adding an occlusion image, and performing image warping; the first image position is the position of a target sub-image in the second image, and the target sub-image is the image content in the second image that is different from the first image.
And a fusion module 803, configured to perform fusion processing on the first image and the second image according to the first image position, so as to obtain a third image.
Optionally, in a case where the target process is to adjust image sharpness, the processing module 802 includes:
and the downsampling unit is used for downsampling the first image at a first multiplying power to obtain a fourth image.
And the up-sampling unit is used for up-sampling the fourth image at a second multiplying power to obtain the second image.
Wherein the product of the first multiplying power and the second multiplying power is 1.
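The sharpness-degradation path of the down-sampling and up-sampling units can be sketched as follows; nearest-neighbour resampling stands in for whatever interpolation an implementation actually uses, and `factor` plays the role of the reciprocal magnifications (their product is 1, so the image size is unchanged):

```python
import numpy as np

def degrade_sharpness(img, factor=2):
    # Fourth image: downsample at magnification 1/factor.
    small = img[::factor, ::factor]
    # Second image: upsample the fourth image back at magnification factor.
    degraded = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    # Crop in case the original size was not a multiple of factor.
    return degraded[:img.shape[0], :img.shape[1]]
```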
Optionally, in a case where the target process is to adjust image noise, the processing module 802 includes:
and the noise adding unit is used for adding the target type noise to the first image to obtain a fifth image.
And the noise reduction processing unit is used for carrying out noise reduction processing on the fifth image to obtain the second image.
Wherein the target type noise comprises at least one of: Gaussian noise, Poisson noise.
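The noise-adjustment path can be sketched as follows. Gaussian noise is one of the named target types; the noise-reduction step is unspecified in the embodiment, so a simple mean filter stands in for it, and `sigma` and `k` are illustrative parameters:

```python
import numpy as np

def adjust_noise(img, sigma=10.0, k=3, seed=0):
    rng = np.random.default_rng(seed)
    # Fifth image: the first image with target-type (Gaussian) noise added.
    noisy = img + rng.normal(0.0, sigma, img.shape)
    # Second image: noise-reduce the fifth image with a k x k mean filter.
    pad = k // 2
    padded = np.pad(noisy, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)
```

Adding then removing noise leaves a controlled residue: for independent noise, the residual standard deviation on a flat image is roughly sigma/k, i.e. the second image differs from the first in a known, bounded way.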
Optionally, in a case where the target process is to adjust brightness of an image, the processing module 802 includes:
and the adjusting unit is used for adjusting the gamma parameter of the first image to obtain the second image.
Optionally, in a case where the target process is to add an occlusion image, the processing module 802 includes:
and the processing and recording unit is used for adding an occlusion image into the first image to obtain the second image, and recording the position of the occlusion image in the first image as the position of the first image.
Optionally, in the case that the target process is image warping, the processing module 802 includes:
And the grid adding unit is used for adding the target grid identification in the first image.
And the disturbance processing unit is used for carrying out disturbance processing on the first grid unit in the target grid identification.
And the distortion processing unit is used for performing distortion processing on the target sub-image in the first image according to the disturbance processing result of the first grid unit to obtain the second image.
Wherein the target sub-image is an image at the first grid cell location.
And the recording unit is used for recording the position of the target sub-image in the first image as the first image position.
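The grid-adding, disturbance and warping units can be sketched together as follows. This is a simplified sketch: a random translation of one grid cell's content stands in for a full mesh warp (a real implementation would displace the cell's grid corners and remap pixels), and the returned tuple is an assumed encoding of the first image position:

```python
import numpy as np

def warp_grid_cell(img, cell_size, cell_row, cell_col, max_shift=2, seed=0):
    # Overlay a target grid of cell_size x cell_size cells and perturb the
    # chosen first grid cell by a random offset (dy, dx).
    rng = np.random.default_rng(seed)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    top, left = cell_row * cell_size, cell_col * cell_size
    second = img.copy()
    cell = img[top:top + cell_size, left:left + cell_size]
    # Warp the target sub-image (the image at the first grid cell) according
    # to the disturbance result; np.roll is a crude stand-in for remapping.
    second[top:top + cell_size, left:left + cell_size] = np.roll(
        cell, (int(dy), int(dx)), axis=(0, 1))
    # Record the target sub-image's position as the first image position.
    first_image_position = (top, left, cell_size, cell_size)
    return second, first_image_position
```

Only the perturbed cell changes; the rest of the image is untouched, which is what makes the difference between the two frames controllable.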
Optionally, the fusion module 803 includes:
a first acquisition unit configured to acquire first high-frequency data of the first image.
The first high-frequency data is high-frequency data of a second image position in the first image, the second image position is an image position except for the third image position, and the third image position corresponds to the first image position.
And a second acquisition unit configured to acquire second high-frequency data and first low-frequency data of the second image.
The second high-frequency data is the high-frequency data of the first image position in the second image.
And the fusion processing unit is used for carrying out fusion processing on the first high-frequency data, the second high-frequency data and the first low-frequency data to obtain the third image.
In summary, in the embodiments of the present application, by exploiting the differences among multi-camera images, the base image used to simulate multi-camera shooting can be obtained from a high-quality image shot by a single camera through target processing, such as data degradation (reducing image sharpness, increasing image noise, etc.), without acquiring real images shot by multiple cameras, so that the method can quickly adapt to a newly added shooting scene. Moreover, because the difference in image content between the two frames of images is introduced by the target processing, the difference is controllable, and image alignment and image fusion can easily be carried out according to the difference.
The image processing apparatus in the embodiments of the present application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), etc., and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, etc., which is not specifically limited in the embodiments of the present application.
The image processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in this embodiment of the present application can implement each process implemented by the embodiment of the image processing method shown in fig. 1, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 9, an embodiment of the present application further provides an electronic device 900, including: the processor 901 and the memory 902, the memory 902 stores a program or an instruction that can be executed by the processor 901, where the program or the instruction implements each step of the above-mentioned image processing method embodiment when executed by the processor 901, and the same technical effects can be achieved, and for avoiding repetition, a description is omitted herein.
It should be noted that, the electronic device 900 in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device.
Fig. 10 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1010 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
Wherein the processor 1010 may be configured to: determine a first image, the first image being a reference image in image fusion; perform target processing on the first image to obtain a second image, and record a first image position, wherein the second image is a base image in image fusion, and the target processing includes at least one of: adjusting image sharpness, adjusting image noise, adjusting image brightness, adding an occlusion image, and performing image warping; the first image position is the position of a target sub-image in the second image, and the target sub-image is the image content in the second image that is different from the first image; and perform fusion processing on the first image and the second image according to the first image position to obtain a third image.
Optionally, the processor 1010 may be further configured to: downsampling the first image with a first multiplying power to obtain a fourth image; up-sampling the fourth image at a second multiplying power to obtain a second image; wherein the product of the first multiplying power and the second multiplying power is 1.
Optionally, the processor 1010 may be further configured to: adding target type noise to the first image to obtain a fifth image under the condition that the target processing is to adjust image noise; carrying out noise reduction processing on the fifth image to obtain the second image; wherein the target type noise comprises at least one of: Gaussian noise, Poisson noise.
Optionally, the processor 1010 may be further configured to: and under the condition that the target processing is to adjust the brightness of the image, adjusting the gamma parameter of the first image to obtain the second image.
Optionally, the processor 1010 may be further configured to: when the target processing is to add an occlusion image, adding an occlusion image into the first image to obtain the second image, and recording the position of the occlusion image in the first image as the first image position.
Optionally, the processor 1010 may be further configured to: adding a target grid identification in the first image under the condition that the target processing is image warping; performing disturbance processing on a first grid unit in the target grid identification; performing warping processing on the target sub-image in the first image according to the disturbance processing result of the first grid unit to obtain the second image; wherein the target sub-image is the image at the first grid unit location; and recording the position of the target sub-image in the first image as the first image position.
Optionally, the processor 1010 may be further configured to: acquiring first high-frequency data of the first image; acquiring second high-frequency data and first low-frequency data of the second image; and carrying out fusion processing on the first high-frequency data, the second high-frequency data and the first low-frequency data to obtain the third image. The first high-frequency data is high-frequency data of a second image position in the first image, the second image position is an image position except a third image position, and the third image position corresponds to the first image position; the second high frequency data is the high frequency data of the first image position in the second image.
In the embodiments of the present application, by exploiting the differences among multi-camera images, the base image used to simulate multi-camera shooting can be obtained from a high-quality image shot by a single camera through target processing, such as data degradation (reducing image sharpness, increasing image noise, etc.), without acquiring real images shot by multiple cameras, so that the method can quickly adapt to a newly added shooting scene. Moreover, because the difference in image content between the two frames of images is introduced by the target processing, the difference is controllable, and image alignment and image fusion can easily be carried out according to the difference.
It should be understood that in the embodiment of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, and the graphics processor 10041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 1009 may include a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so as to implement each process of the embodiment of the image processing method, and achieve the same technical effect, so that repetition is avoided, and no redundant description is provided here.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the image processing method described above, and achieve the same technical effects, and are not repeated herein.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (such as ROM, RAM, magnetic disk, optical disk) and including several instructions for causing a terminal (which may be a mobile phone, a computer, a server or a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (10)

1. An image processing method, the method comprising:
determining a first image; the first image is a reference image in image fusion;
performing target processing on the first image to obtain a second image, and recording a first image position; the second image is a base image in image fusion, the first image position is the position of a target sub-image in the second image, and the target sub-image is the image content in the second image that is different from the first image;
and according to the first image position, carrying out fusion processing on the first image and the second image to obtain a third image.
2. The image processing method according to claim 1, wherein, in the case where the target processing is to adjust the sharpness of the image, the target processing is performed on the first image to obtain a second image, including:
downsampling the first image with a first multiplying power to obtain a fourth image;
up-sampling the fourth image at a second multiplying power to obtain a second image;
and under the condition that the target processing is to adjust image noise, performing target processing on the first image to obtain a second image, wherein the method comprises the following steps:
adding target type noise to the first image to obtain a fifth image;
and carrying out noise reduction processing on the fifth image to obtain the second image.
3. The image processing method according to claim 1, wherein, in the case where the target processing is to adjust brightness of an image, the target processing is performed on the first image to obtain a second image, including:
adjusting gamma parameters of the first image to obtain the second image;
and under the condition that the target processing is adding an occlusion image, performing target processing on the first image to obtain a second image, and recording the position of the first image, wherein the method comprises the following steps:
and adding an occlusion image into the first image to obtain the second image, and recording the position of the occlusion image in the first image as the first image position.
4. The image processing method according to claim 1, wherein, in the case where the target processing is image warping, the target processing of the first image to obtain a second image and recording the first image position includes:
adding a target grid identification in the first image;
performing disturbance processing on a first grid unit in the target grid identification;
performing distortion processing on the target sub-image in the first image according to the disturbance processing result of the first grid unit to obtain the second image; wherein the target sub-image is an image at the first grid cell location;
and recording the position of the target sub-image in the first image as the first image position.
5. The image processing method according to claim 1, 3 or 4, wherein the fusing the first image and the second image according to the first image position to obtain a third image includes:
acquiring first high-frequency data of the first image; the first high-frequency data is high-frequency data of a second image position in the first image, the second image position is an image position except a third image position, and the third image position corresponds to the first image position;
acquiring second high-frequency data and first low-frequency data of the second image; the second high-frequency data is the high-frequency data of the first image position in the second image;
and carrying out fusion processing on the first high-frequency data, the second high-frequency data and the first low-frequency data to obtain the third image.
6. An image processing apparatus, characterized in that the apparatus comprises:
a determining module for determining a first image; the first image is a reference image in image fusion;
the processing module is used for carrying out target processing on the first image to obtain a second image and recording a first image position; the second image is a base image in image fusion, the first image position is the position of a target sub-image in the second image, and the target sub-image is the image content in the second image that is different from the first image;
and the fusion module is used for carrying out fusion processing on the first image and the second image according to the first image position to obtain a third image.
7. The image processing apparatus according to claim 6, wherein in the case where the target process is to adjust the image sharpness, the processing module includes:
the downsampling unit is used for downsampling the first image at a first multiplying power to obtain a fourth image;
The up-sampling unit is used for up-sampling the fourth image at a second multiplying power to obtain the second image;
in the case where the target process is to adjust image noise, the processing module includes:
the noise adding unit is used for adding target type noise to the first image to obtain a fifth image;
and the noise reduction processing unit is used for carrying out noise reduction processing on the fifth image to obtain the second image.
8. The image processing apparatus according to claim 6, wherein in the case where the target process is to adjust the brightness of the image, the processing module includes:
the adjusting unit is used for adjusting the gamma parameter of the first image to obtain the second image;
in the case where the target processing is adding an occlusion image, the processing module includes:
and the processing and recording unit is used for adding an occlusion image into the first image to obtain the second image, and recording the position of the occlusion image in the first image as the position of the first image.
9. The image processing apparatus according to claim 6, wherein in the case where the target processing is image warping, the processing module includes:
A grid adding unit, configured to add a target grid identifier in the first image;
the disturbance processing unit is used for carrying out disturbance processing on the first grid unit in the target grid mark;
the distortion processing unit is used for performing distortion processing on the target sub-image in the first image according to the disturbance processing result of the first grid unit to obtain the second image; wherein the target sub-image is an image at the first grid cell location;
and the recording unit is used for recording the position of the target sub-image in the first image as the first image position.
10. The image processing apparatus according to claim 6, 8 or 9, wherein the fusion module includes:
a first acquisition unit configured to acquire first high-frequency data of the first image; the first high-frequency data is high-frequency data of a second image position in the first image, the second image position is an image position except a third image position, and the third image position corresponds to the first image position;
a second acquisition unit configured to acquire second high-frequency data and first low-frequency data of the second image;
The second high-frequency data is the high-frequency data of the first image position in the second image;
and the fusion processing unit is used for carrying out fusion processing on the first high-frequency data, the second high-frequency data and the first low-frequency data to obtain the third image.
Publications (1)

Publication Number Publication Date
CN116188341A true CN116188341A (en) 2023-05-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination