CN115633262B - Image processing method and electronic device

Publication number: CN115633262B
Application number: CN202211646996.4A
Other versions: CN115633262A (en)
Authority: CN (China)
Other languages: Chinese (zh)
Legal status: Active (granted)
Prior art keywords: image, images, shutter function, shutter, image processing
Inventors: 孙佳男 (Sun Jianan), 姚通 (Yao Tong)
Assignee (original and current): Honor Device Co Ltd


Abstract

The application relates to the field of image processing and provides an image processing method and an electronic device. The image processing method includes: displaying a first interface; detecting a first operation on a first control; acquiring a first image in response to the first operation; selecting pixels in the first image based on a shutter function and generating N images, where the N images include a second image and N-1 third images, the second image is obtained from pixels acquired with the global shutter function, and the N-1 third images are obtained from pixels acquired with the first shutter function; and obtaining a processed image based on the second image, the N-1 third images, and an image processing model, where the processed image is an image in which motion blur has been removed from the image area where the moving object is located. With the scheme of the application, motion blur in an image can be removed based on a single frame acquired by the electronic device, improving the efficiency with which the electronic device removes motion blur from an image.

Description

Image processing method and electronic device
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method and an electronic device.
Background
With the development of imaging technology in electronic devices, users place ever higher demands on shooting functions. For example, when a user photographs a moving object, the electronic device is generally expected to capture an image of the object at the instant of motion; because the photographed subject is moving, the electronic device is often required to reduce motion blur in the image. Currently, motion blur in an image is usually removed using multiple frames; however, acquiring those multiple frames takes the electronic device a longer time, resulting in lower efficiency of removing motion blur from the image.
Therefore, how to improve the efficiency with which an electronic device removes motion blur from an image is a problem to be solved.
Disclosure of Invention
The application provides an image processing method and an electronic device, which can remove motion blur from an image based on a single frame acquired by the electronic device, improving the efficiency with which the electronic device removes motion blur from an image.
In a first aspect, an image processing method is provided, applied to an electronic device, and includes:
displaying a first interface, wherein the first interface comprises a preview image and a first control, and the first control is a control for indicating photographing;
detecting a first operation on the first control, wherein the first operation is an operation for indicating photographing;
acquiring a first image in response to the first operation, wherein the first image comprises a first image area, the first image area is the image area where a moving object among the photographed objects is located, the first image is an image acquired based on a shutter function, the shutter function is used for indicating whether the shutter is open or closed when the pixels in the first image are acquired, the shutter function comprises a global shutter function and a first shutter function, and the duration for which the global shutter function indicates the shutter is open is longer than the duration for which the first shutter function indicates the shutter is open;
selecting pixels in the first image based on the shutter function and generating N images, wherein the N images comprise a second image and N-1 third images, the second image is an image obtained from the pixels acquired with the global shutter function, the N-1 third images are images obtained from the pixels acquired with the first shutter function, the size of any one of the N images is smaller than the size of the first image, and N is an integer greater than 2;
and obtaining a processed image based on the second image, the N-1 third images and an image processing model, wherein the image processing model is used for brightness-lifting processing, and the processed image is an image in which motion blur has been removed from the image area where the moving object is located.
It should be understood that when the photographed objects include a moving object, the longer the shooting duration, the greater the probability of motion blur appearing in the acquired image, or the larger the motion-blurred image area in the image; in general, the longer the shutter is open when an image is acquired, the longer the exposure time of the image; the shutter function may be used to indicate to the electronic device whether the shutter is open or closed while the image is being captured.
It should also be understood that the global shutter function may refer to the shutter being open for the whole frame exposure duration T; the first shutter function may refer to a local shutter function, i.e., one for which the shutter is open during part of the frame exposure duration T and closed during the rest. Because the exposure time of pixels collected with the local shutter function is shorter than that of pixels collected with the global shutter function, an image generated from pixels acquired with the local shutter function contains no, or less, motion blur; for the same reason, the brightness of such an image is lower.
With the scheme of the application, when the electronic device captures an image of a moving object, it can acquire a first image whose pixels are collected with different shutter functions; the first image can be divided into N small-size images according to the shutter function; the N small-size images include a second image (e.g., an image that contains motion blur and has higher brightness) and N-1 third images (e.g., images without motion blur but with lower brightness); a processed image can then be obtained based on the second image, the N-1 third images and the image processing model. With this technical scheme, the electronic device can remove motion blur from the image area where the moving object is located based on a single acquired frame; compared with removing motion blur using multiple frames, removing it from a single frame improves, to a certain extent, the efficiency with which the electronic device removes motion blur from an image. A minimal sketch of the image-division step is given below.
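For illustration, the image-division step can be sketched as follows. This is a minimal sketch under the assumption that the shutter functions are assigned to pixels in a repeating 2×2 tile (the patent does not fix a specific pixel layout); with a 2×2 tile, N = 4 and each small-size image is a quarter of the first image.

```python
import numpy as np

def split_by_shutter_function(first_image: np.ndarray, tile: int = 2) -> list[np.ndarray]:
    """Divide a single mosaic frame into N = tile*tile small-size images,
    one per shutter function. The repeating tile layout is an assumption
    made for illustration only."""
    h, w = first_image.shape[:2]
    h, w = h - h % tile, w - w % tile  # crop to a multiple of the tile size
    sub_images = []
    for dy in range(tile):
        for dx in range(tile):
            # Every tile-th pixel (with offset dy, dx) shares one shutter function.
            sub_images.append(first_image[dy:h:tile, dx:w:tile])
    return sub_images

# With tile=2, sub_images[0] could play the role of the second image
# (global shutter) and sub_images[1:] the N-1 = 3 third images.
```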
With reference to the first aspect, in some implementations of the first aspect, the first shutter function includes N-1 shutter sub-functions, the N-1 shutter sub-functions correspond one-to-one with the N-1 third images, and different sub-functions among the N-1 shutter sub-functions differ in part of the times at which the shutter is open; the obtaining the processed image based on the second image, the N-1 third images and the image processing model includes:
determining a target image among the N-1 third images, wherein the target image is sharper than the images other than the target image among the N-1 third images;
and inputting the second image and the target image into the image processing model to obtain the processed image.
With the scheme of the application, the electronic device can first determine the target image among the N-1 third images, i.e., the sharpest of the images obtained from the pixels acquired with the first shutter function (e.g., different local shutter functions). Since the N-1 third images are generated according to different local shutter functions, and a local shutter function indicates a shorter shutter-open time than the global shutter function, the N-1 third images can be regarded as short-exposure images compared with the second image. In the scheme of the application, the electronic device first selects a sharp image from the N-1 shorter-exposure images and inputs it, together with the longer-exposure image, into the image processing model; compared with directly inputting the N-1 shorter-exposure images and the longer-exposure image into the image processing model, this reduces the computation of the image processing model to a certain extent and improves its processing efficiency, i.e., the efficiency of removing motion blur from the image.
With reference to the first aspect, in certain implementation manners of the first aspect, the method further includes:
calculating the similarity between the second image and the target image;
the step of inputting the second image and the target image into the image processing model to obtain the processed image includes:
and inputting the second image, the target image and the similarity into the image processing model to obtain the processed image.
It should be noted that the second image may refer to an image composed of pixels whose exposure duration is T, and the target image may refer to the sharp image among the images composed of pixels whose exposure duration is less than T; since the exposure duration of the target image is relatively short compared with that of the second image, the brightness of the target image is lower than that of the second image.
With the scheme of the application, the second image (e.g., the longer-exposure image), the target image (e.g., a sharp shorter-exposure image) and the similarity can be input into the image processing model; when the model processes the target image, the similarity can serve as prior information, from which the image area where the moving object is located in the target image, and the image area where the photographed objects other than the moving object are located, can be determined. Local brightness lifting can then be applied to the image area where the moving object is located in the target image, which reduces the computation of the image processing model. In other words, based on the similarity, the image processing model can quickly identify the image area where the moving object is located in the target image and apply local brightness lifting to it, which improves the processing efficiency of the model, i.e., the efficiency of removing motion blur from the image. One plausible block-wise similarity computation is sketched below.
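As an illustration of the similarity computation, the sketch below compares the long-exposure second image with the short-exposure target image block by block. Normalized cross-correlation per block is an assumed metric; the patent does not specify how the similarity is calculated.

```python
import numpy as np

def similarity_map(second: np.ndarray, target: np.ndarray, block: int = 16) -> np.ndarray:
    """Block-wise similarity between the second (long-exposure) image and
    the target (short-exposure) image. Mean removal per block discounts
    the brightness gap caused by the different exposure durations; low
    similarity marks blocks containing the moving object."""
    assert second.shape == target.shape
    h, w = second.shape[:2]
    sim = np.zeros((h // block, w // block), dtype=np.float32)
    for by in range(h // block):
        for bx in range(w // block):
            a = second[by*block:(by+1)*block, bx*block:(bx+1)*block].astype(np.float32).ravel()
            b = target[by*block:(by+1)*block, bx*block:(bx+1)*block].astype(np.float32).ravel()
            a -= a.mean()
            b -= b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-6  # avoid division by zero
            sim[by, bx] = float(a @ b) / denom  # normalized cross-correlation
    return sim
```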
With reference to the first aspect, in certain implementation manners of the first aspect, the obtaining a processed image based on the second image, the N-1 third images, and an image processing model includes:
determining a target image among the N-1 third images, wherein the target image is sharper than the images other than the target image among the N-1 third images;
obtaining a fourth image based on the second image and the target image, wherein the fourth image comprises a second image area and a third image area, the second image area is the image area where the moving object is located in the target image, and the third image area is the image area of the second image other than the image area where the moving object is located;
and inputting the fourth image into the image processing model to obtain the processed image.
It should be understood that the target image may refer to the sharpest of the N-1 third images.
With the scheme of the application, a fourth image can be generated based on the second image and the target image; the fourth image comprises the image area where the moving object is located in the target image and the image area of the second image other than the one where the moving object is located. In other words, the fourth image combines the motion area of the target image with the non-motion area of the second image: because the target image is generated from pixels acquired with the first shutter function, the motion area in the target image is sharper; because the second image is generated from pixels acquired with the global shutter function, the area where the non-moving objects are located in the second image is brighter. The fourth image is then input into the image processing model, which applies brightness lifting to the image area where the moving object is located, yielding the processed image.
With reference to the first aspect, in certain implementation manners of the first aspect, the method further includes:
calculating the similarity between the second image and the target image;
the step of inputting the fourth image into an image processing model to obtain the processed image comprises the following steps:
and inputting the fourth image and the similarity into the image processing model to obtain the processed image.
With the scheme of the application, the similarity can serve as prior information for the image processing model; based on the similarity, the image processing model can quickly identify the image area where the moving object is located in the fourth image and apply brightness lifting to that image area to obtain the processed image. Using the similarity as prior information in the image processing model reduces the computation of the model and improves its processing efficiency.
With reference to the first aspect, in certain implementation manners of the first aspect, the obtaining a fourth image based on the second image and the target image includes:
when the similarity between an image area of the second image and the corresponding image area in the target image is greater than a preset similarity, acquiring the pixels of that image area of the second image;
and replacing the pixels of the corresponding image area in the target image with the pixels of that image area of the second image to obtain the fourth image.
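A minimal sketch of the pixel replacement described in this implementation follows; the block size and the preset similarity threshold are illustrative assumptions, and `similarity_map` refers to the earlier sketch.

```python
import numpy as np

def fuse_fourth_image(second: np.ndarray, target: np.ndarray,
                      sim: np.ndarray, block: int = 16, thr: float = 0.9) -> np.ndarray:
    """Build the fourth image: start from the short-exposure target and,
    wherever a block of the second image is sufficiently similar to the
    corresponding target block (i.e., no moving object there), replace
    the target pixels with the brighter second-image pixels."""
    fourth = target.copy()
    for by in range(sim.shape[0]):
        for bx in range(sim.shape[1]):
            if sim[by, bx] > thr:  # static region: take long-exposure pixels
                ys, xs = by * block, bx * block
                fourth[ys:ys+block, xs:xs+block] = second[ys:ys+block, xs:xs+block]
    return fourth
```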
With reference to the first aspect, in certain implementation manners of the first aspect, the determining a target image in the N-1 third images includes:
calculating the gradient value of each third image in the N-1 third images to obtain N-1 gradient values;
and obtaining the target image based on the N-1 gradient values.
It should be noted that the gradient refers to the derivative across adjacent pixels in the small-size image; a larger gradient means a larger change between adjacent pixels, and it can be understood that the larger the change between adjacent pixels, the sharper the image area corresponding to the gradient. A sketch of this gradient-based selection is given below.
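The gradient-based selection of the target image can be sketched as follows, taking first differences between adjacent pixels as the gradient; the exact gradient operator is an assumption, since the patent does not name one.

```python
import numpy as np

def pick_target_image(third_images: list[np.ndarray]) -> np.ndarray:
    """Select the sharpest of the N-1 third images: the image whose mean
    absolute difference between adjacent pixels (the gradient) is largest."""
    def mean_gradient(img: np.ndarray) -> float:
        f = img.astype(np.float32)
        gx = np.abs(np.diff(f, axis=1)).mean()  # horizontal neighbors
        gy = np.abs(np.diff(f, axis=0)).mean()  # vertical neighbors
        return float(gx + gy)
    return max(third_images, key=mean_gradient)
```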
With reference to the first aspect, in certain implementation manners of the first aspect, the image processing model is further used for denoising.
With reference to the first aspect, in certain implementation manners of the first aspect, the image processing model is obtained through the following training method:
acquiring a sample video, wherein the resolution of the sample video is greater than a preset resolution, the sample video comprises M consecutive frames, the M consecutive frames include the image area where a sample moving object is located, and M is an integer greater than or equal to 3;
carrying out fusion processing on M-1 consecutive frames among the M consecutive frames to obtain a first sample image;
performing brightness reduction processing on the M-th frame among the M consecutive frames to obtain a second sample image;
calculating a sample similarity between the first sample image and the second sample image;
inputting the sample similarity, the first sample image and the second sample image into an image processing model to be trained, and outputting a predicted image;
and updating the parameters of the image processing model to be trained based on the image difference between the predicted image and the M-th frame to obtain the image processing model.
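A schematic PyTorch training step for this method might look as follows. The fusion operator (frame averaging), the brightness reduction (uniform scaling), the per-pixel similarity prior, and the L1 loss are all illustrative assumptions; the patent names the processing steps but not the exact operators.

```python
import torch
import torch.nn.functional as F

def training_step(model: torch.nn.Module, optimizer: torch.optim.Optimizer,
                  frames: torch.Tensor, dim_factor: float = 0.25) -> float:
    """One training step. `frames` is an (M, C, H, W) tensor of consecutive
    frames from a high-resolution sample video; `model` is assumed to take
    3*C input channels (first sample, second sample, similarity)."""
    first_sample = frames[:-1].mean(dim=0, keepdim=True)   # fuse M-1 frames: blurry but bright
    ground_truth = frames[-1:]                             # sharp M-th frame
    second_sample = ground_truth * dim_factor              # brightness-reduced: sharp but dark
    similarity = -(first_sample - second_sample).abs()     # simple per-pixel prior
    predicted = model(torch.cat([first_sample, second_sample, similarity], dim=1))
    loss = F.l1_loss(predicted, ground_truth)              # image difference to the M-th frame
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```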
In a second aspect, an electronic device is provided, the electronic device comprising one or more processors and memory; the memory is coupled to the one or more processors, the memory for storing computer program code, the computer program code comprising computer instructions that the one or more processors call to cause the electronic device to perform:
displaying a first interface, wherein the first interface comprises a preview image and a first control, and the first control is a control for indicating photographing;
detecting a first operation on the first control, wherein the first operation is an operation for indicating photographing;
acquiring a first image in response to the first operation, wherein the first image comprises a first image area, the first image area is the image area where a moving object among the photographed objects is located, the first image is an image acquired based on a shutter function, the shutter function is used for indicating whether the shutter is open or closed when the pixels in the first image are acquired, the shutter function comprises a global shutter function and a first shutter function, and the duration for which the global shutter function indicates the shutter is open is longer than the duration for which the first shutter function indicates the shutter is open;
selecting pixels in the first image based on the shutter function and generating N images, wherein the N images comprise a second image and N-1 third images, the second image is an image obtained from the pixels acquired with the global shutter function, the N-1 third images are images obtained from the pixels acquired with the first shutter function, the size of any one of the N images is smaller than the size of the first image, and N is an integer greater than 2;
and obtaining a processed image based on the second image, the N-1 third images and an image processing model, wherein the image processing model is used for brightness-lifting processing, and the processed image is an image in which motion blur has been removed from the image area where the moving object is located.
With reference to the second aspect, in certain implementations of the second aspect, the first shutter function includes N-1 shutter sub-functions, the N-1 shutter sub-functions correspond one-to-one with the N-1 third images, and different sub-functions among the N-1 shutter sub-functions differ in part of the times at which the shutter is open; the one or more processors invoke the computer instructions to cause the electronic device to perform:
determining a target image among the N-1 third images, wherein the target image is sharper than the images other than the target image among the N-1 third images;
and inputting the second image and the target image into the image processing model to obtain the processed image.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
calculating the similarity between the second image and the target image;
the step of inputting the second image and the target image into the image processing model to obtain the processed image includes:
and inputting the second image, the target image and the similarity into the image processing model to obtain the processed image.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
determining a target image among the N-1 third images, wherein the target image is sharper than the images other than the target image among the N-1 third images;
obtaining a fourth image based on the second image and the target image, wherein the fourth image comprises a second image area and a third image area, the second image area is the image area where the moving object is located in the target image, and the third image area is the image area of the second image other than the image area where the moving object is located;
and inputting the fourth image into the image processing model to obtain the processed image.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
calculating the similarity between the second image and the target image;
the step of inputting the fourth image into an image processing model to obtain the processed image comprises the following steps:
and inputting the fourth image and the similarity into the image processing model to obtain the processed image.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
when the similarity between an image area of the second image and the corresponding image area in the target image is greater than a preset similarity, acquiring the pixels of that image area of the second image;
and replacing the pixels of the corresponding image area in the target image with the pixels of that image area of the second image to obtain the fourth image.
With reference to the second aspect, in certain implementations of the second aspect, the one or more processors invoke the computer instructions to cause the electronic device to perform:
calculating the gradient value of each third image in the N-1 third images to obtain N-1 gradient values;
and obtaining the target image based on the N-1 gradient values.
With reference to the second aspect, in some implementations of the second aspect, the image processing model is further configured to perform denoising processing.
With reference to the second aspect, in certain implementations of the second aspect, the image processing model is obtained by the following training method:
acquiring a sample video, wherein the resolution of the sample video is greater than a preset resolution, the sample video comprises M consecutive frames, the M consecutive frames include the image area where a sample moving object is located, and M is an integer greater than or equal to 3;
carrying out fusion processing on M-1 consecutive frames among the M consecutive frames to obtain a first sample image;
performing brightness reduction processing on the M-th frame among the M consecutive frames to obtain a second sample image;
calculating a sample similarity between the first sample image and the second sample image;
inputting the sample similarity, the first sample image and the second sample image into an image processing model to be trained, and outputting a predicted image;
and updating the parameters of the image processing model to be trained based on the image difference between the predicted image and the M-th frame to obtain the image processing model.
In a third aspect, an electronic device is provided, comprising means for performing the image processing method of the first aspect or any implementation of the first aspect.
In a fourth aspect, an electronic device is provided that includes one or more processors and memory; the memory is coupled with the one or more processors, the memory for storing computer program code, the computer program code comprising computer instructions that the one or more processors call to cause the electronic device to perform the image processing method of the first aspect or any implementation of the first aspect.
In a fifth aspect, a chip system is provided, applied to an electronic device; the chip system comprises one or more processors, and the processors are configured to invoke computer instructions to cause the electronic device to perform the image processing method of the first aspect or any implementation of the first aspect.
In a sixth aspect, there is provided a computer readable storage medium storing computer program code which, when executed by an electronic device, causes the electronic device to perform the image processing method of the first aspect or any implementation manner of the first aspect.
In a seventh aspect, there is provided a computer program product comprising: computer program code which, when run by an electronic device, causes the electronic device to perform the image processing method of the first aspect or any implementation of the first aspect.
In the embodiments of the application, when the electronic device captures an image of a moving object, it can acquire a first image whose pixels are collected with different shutter functions; the first image can be divided into N small-size images according to the shutter function; the N small-size images include a second image (e.g., an image that contains motion blur and has higher brightness) and N-1 third images (e.g., images without motion blur but with lower brightness); a processed image can then be obtained based on the second image, the N-1 third images and the image processing model. With this technical scheme, the electronic device can remove motion blur from the image area where the moving object is located based on a single acquired frame; compared with removing motion blur using multiple frames, removing it from a single frame improves, to a certain extent, the efficiency with which the electronic device removes motion blur from an image.
Drawings
FIG. 1 is a schematic diagram of different shutter functions;
FIG. 2 is a schematic diagram of a global shutter function provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a local shutter function provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a hardware system suitable for use with the electronic device of the present application;
FIG. 5 is a schematic diagram of a software system suitable for use with the electronic device of the present application;
FIG. 6 is a schematic diagram of an application scenario applicable to an embodiment of the present application;
FIG. 7 is a schematic flow chart of an image processing method provided in an embodiment of the present application;
FIG. 8 is a schematic flow chart of another image processing method provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a different shutter function and a small-size image provided by an embodiment of the present application;
FIG. 10 is a schematic flow chart of a training method of an image processing model provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of a graphical user interface suitable for use with embodiments of the present application;
FIG. 12 is a schematic diagram of a graphical user interface suitable for use with embodiments of the present application;
FIG. 13 is a schematic diagram of a graphical user interface suitable for use with embodiments of the present application;
FIG. 14 is a schematic diagram of a graphical user interface suitable for use with embodiments of the present application;
FIG. 15 is a schematic diagram of a graphical user interface suitable for use with embodiments of the present application;
FIG. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the embodiments of the present application, the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present embodiments, unless otherwise specified, "a plurality of" means two or more.
In order to facilitate understanding of embodiments of the present application, related concepts related to the embodiments of the present application will be briefly described first.
1. Shutter function
The shutter function is used to indicate the open or closed state of the shutter at each time unit (e.g., one slot).
It should be noted that the shutter function may include a global shutter function and a local shutter function. The global shutter function may mean that the shutter is in an open state in all time slots of a frame exposure duration T; the local shutter function may mean that the shutter is in an open state in some time slots of the frame exposure duration T and in a closed state in the others. Furthermore, different shutter functions may be employed for different pixels in the image; it will be appreciated that different pixels in the same image may thus have different exposure durations. An image acquired with different shutter functions may be referred to as a multiple-exposure image; multiple exposure is a technique of taking two or more independent exposures and then overlapping them to form a single image.
It should be appreciated that camera imaging can be abstracted as an integral over time t of the product of the shutter function and the scene illumination; thus, for exposure frames with a given exposure time, different shutter functions determine different capture capabilities of the camera.
Illustratively, FIG. 1 shows a global shutter function, an asynchronous-stop shutter function, a fixed-duration offset shutter function, a variable-duration offset shutter function, and an arbitrary-duration offset shutter function. With the global shutter function, the shutter is open throughout the frame exposure duration. With the asynchronous-stop shutter function, the shutter opens at the same time for every pixel of the image within the frame exposure duration but closes at different times. With the fixed-duration offset shutter function, the shutter-open duration is the same for every pixel, and the shutter is closed during part of the frame exposure duration. With the variable-duration offset shutter function, the shutter-open duration differs between pixels, and the shutter is closed during part of the frame exposure duration. With the arbitrary-duration offset shutter function, the shutter-open duration for each pixel may be arbitrary. The longer the exposure duration, the slower the image acquisition; moreover, if the exposure duration differs within one frame, a high-dynamic-range (HDR) image can be generated. Accordingly, for the shutter functions above: with the global shutter function, the electronic device cannot generate an HDR image, and image acquisition is slow; with the asynchronous-stop shutter function, it can generate an HDR image, but acquisition is slow; with the fixed-duration offset shutter function, it cannot generate an HDR image, but acquisition is fast; with the variable-duration offset shutter function, it can generate an HDR image, and acquisition is fast; with the arbitrary-duration offset shutter function, it can generate an HDR image, and acquisition is fast.
It should also be appreciated that a frame exposure duration can be divided into smaller time units by the shutter function; for example, as shown in FIG. 2, the exposure duration of one frame may be divided into a plurality of time slots; if different shutter functions are employed for a pixel across the time slots, the pixel data acquired differs.
2. Global shutter function
The global shutter function means that, for a pixel, the shutter function is in an open state in all time slots of one frame exposure.
Illustratively, as shown in FIG. 2, assume that a frame exposure duration includes 12 time slots (slots); the pixel shown in FIG. 2 employs the global shutter function, which can be understood as the shutter being in an open state in each time slot of the frame exposure duration (e.g., time slot 1 to time slot 12).
It will be appreciated that, in a time slot, if the shutter function is open, the pixel can collect image data in that time slot; if the shutter function is closed, the pixel does not collect image data in that time slot.
Illustratively, global-type shutter functions include the global shutter function (Global Shutter) and the rolling shutter function (Rolling Shutter): the global shutter images by exposing all pixels of the photosensitive element simultaneously for a certain period; the rolling shutter images by exposing the pixels of the photosensitive element line by line over a certain period.
3. Local shutter function
The local shutter function means that, for a pixel, the shutter function is in an open state in some of the time slots of one frame exposure and in a closed state in the remaining time slots.
Illustratively, as shown in FIG. 1, the asynchronous-stop shutter function, the fixed-duration offset shutter function, the variable-duration offset shutter function, and the arbitrary-duration offset shutter function are local shutter functions.
Illustratively, as shown in FIG. 3, assume that a frame exposure duration includes 12 time slots (slots); the pixel shown in FIG. 3 employs a local shutter function: the shutter function is in an open state in time slots 1 to 4, i.e., image data is acquired in time slots 1 to 4; the shutter function is in a closed state in time slots 5 to 8, i.e., image data is not acquired in time slots 5 to 8; and the shutter function is in an open state in time slots 9 to 12, i.e., image data is acquired in time slots 9 to 12.
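The shutter functions of FIG. 2 and FIG. 3 can be modeled compactly as binary vectors over the 12 time slots, as in the sketch below; the representation is chosen for illustration and is not taken from the patent.

```python
# 1 = shutter open in that time slot, 0 = shutter closed.
GLOBAL_SHUTTER = [1] * 12                # FIG. 2: open in time slots 1-12
LOCAL_SHUTTER = [1]*4 + [0]*4 + [1]*4    # FIG. 3: open in 1-4 and 9-12, closed in 5-8

def exposure_slots(shutter_fn: list) -> int:
    """Number of time slots in which the pixel actually collects image data."""
    return sum(shutter_fn)

assert exposure_slots(GLOBAL_SHUTTER) == 12
assert exposure_slots(LOCAL_SHUTTER) == 8  # shorter exposure: less motion blur, lower brightness
```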
It should be understood that when the shooting scene includes a moving object, the global shutter function collects data in every time unit of the exposure duration, while the local shutter function collects data only during part of the exposure duration; the shutter is therefore open for longer under the global shutter function than under the local shutter function. When a moving object is photographed, the longer the shutter is open, the more severe the motion blur in the image; it can be understood that, by reducing the shutter-open time, the local shutter function can reduce motion blur to a certain extent and improve the sharpness of the image.
4. Exposure time
The exposure time refers to the length of time for which the camera shutter is open and light is irradiated onto the film or photoreceptor.
5. Neural network
A neural network is a network formed by joining a plurality of individual neural units together; that is, the output of one neural unit may be the input of another neural unit. The input of each neural unit may be connected to the local receptive field of the previous layer to extract the features of that local receptive field; the local receptive field may be an area composed of several neural units.
6. Convolutional neural network (CNN)
A convolutional neural network is a deep neural network with a convolutional structure. It contains a feature extractor consisting of a convolutional layer and a sub-sampling layer, which can be regarded as a filter. The convolutional layer is a layer of neurons in the convolutional neural network that convolves the input signal. In a convolutional layer, one neuron may be connected to only some of the neurons of the adjacent layer. A convolutional layer usually contains several feature planes, and each feature plane may be composed of a number of neurons arranged in a rectangle. Neurons in the same feature plane share weights, and the shared weights are the convolution kernel. Sharing weights can be understood as extracting image information in a location-independent way. The convolution kernel can be initialized as a matrix of random size, and reasonable weights are learned during training of the convolutional neural network. A direct benefit of sharing weights is reducing the connections between the layers of the convolutional neural network while also reducing the risk of overfitting.
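To make the convolution-plus-sub-sampling structure concrete, here is a minimal sketch of such a network in PyTorch; the layer sizes are illustrative and do not represent the patent's image processing model.

```python
import torch.nn as nn

class TinyCNN(nn.Module):
    """A convolutional layer (shared-weight convolution kernels) followed by
    a sub-sampling (pooling) layer, as described above; the output is at
    half the input resolution because of the pooling step."""
    def __init__(self, in_ch: int = 3, out_ch: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                                 # sub-sampling layer
        )
        self.head = nn.Conv2d(16, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        return self.head(self.features(x))
```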
7. Back propagation algorithm
In the training process, a neural network can use the back propagation (BP) algorithm to correct the parameters of the initial neural network model, so that the reconstruction error loss of the model becomes progressively smaller.
Illustratively, the input signal is passed forward until the output produces an error loss, and the parameters of the initial neural network model are updated by back-propagating the error-loss information, so that the error loss converges. The back propagation algorithm is a backward pass dominated by the error loss, and aims to obtain the parameters of the optimal neural network model, such as the weight matrices.
The image processing method and the electronic device provided in the embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 4 shows a hardware system suitable for the electronic device of the present application.
The electronic device 100 may be a cell phone, a smart screen, a tablet computer, a wearable electronic device, an in-vehicle electronic device, an augmented reality (augmented reality, AR) device, a Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), a projector, etc., and the specific type of the electronic device 100 is not limited in the embodiments of the present application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The configuration shown in fig. 4 does not constitute a specific limitation on the electronic apparatus 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than those shown in FIG. 4, or electronic device 100 may include a combination of some of the components shown in FIG. 4, or electronic device 100 may include sub-components of some of the components shown in FIG. 4. The components shown in fig. 4 may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units. For example, the processor 110 may include at least one of the following processing units: application processors (application processor, AP), modem processors, graphics processors (graphics processing unit, GPU), image signal processors (image signal processor, ISP), controllers, video codecs, digital signal processors (digital signal processor, DSP), baseband processors, neural-Network Processors (NPU). The different processing units may be separate devices or integrated devices. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. For example, the processor 110 may include at least one of the following interfaces: inter-integrated circuit (I2C) interfaces, inter-integrated circuit sound (I2S) interfaces, pulse code modulation (PCM) interfaces, universal asynchronous receiver/transmitter (UART) interfaces, mobile industry processor interface (MIPI), general-purpose input/output (GPIO) interfaces, SIM interfaces, and USB interfaces.
Illustratively, in embodiments of the present application, the processor 110 may be configured to perform the image processing method provided by the embodiments of the application; for example: displaying a first interface, wherein the first interface comprises a preview image and a first control, and the first control is a control for indicating photographing; detecting a first operation on the first control, wherein the first operation is an operation for indicating photographing; acquiring a first image in response to the first operation, wherein the first image comprises a first image area, the first image area is the image area where a moving object among the photographed objects is located, the first image is an image acquired based on a shutter function, the shutter function is used for indicating whether the shutter is open or closed when the pixels in the first image are acquired, the shutter function comprises a global shutter function and a first shutter function, and the duration for which the global shutter function indicates the shutter is open is longer than the duration for which the first shutter function indicates the shutter is open; selecting pixels in the first image based on the shutter function and generating N images, wherein the N images comprise a second image and N-1 third images, the second image is an image obtained from the pixels acquired with the global shutter function, the N-1 third images are images obtained from the pixels acquired with the first shutter function, the size of any one of the N images is smaller than the size of the first image, and N is an integer greater than 2; and obtaining a processed image based on the second image, the N-1 third images and an image processing model, wherein the image processing model is used for brightness-lifting processing, and the processed image is an image in which motion blur has been removed from the image area where the moving object is located.
The connection relationship between the modules shown in fig. 4 is merely illustrative, and does not limit the connection relationship between the modules of the electronic device 100. Alternatively, the modules of the electronic device 100 may also use a combination of the various connection manners in the foregoing embodiments.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The electronic device 100 may implement display functions through a GPU, a display screen 194, and an application processor. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 may be used to display images or video. The display screen 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro OLED, or quantum dot light-emitting diodes (QLED). In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
Illustratively, the electronic device 100 may implement a photographing function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
Illustratively, the ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the camera, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. The ISP can carry out algorithm optimization on noise, brightness and color of the image, and can optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
Illustratively, a camera 193 (which may also be referred to as a lens) is used to capture still images or video. The shooting function can be triggered by an application instruction to capture an image of any scene. A camera may include an imaging lens, an optical filter, an image sensor, and the like. Light emitted or reflected by an object enters the imaging lens, passes through the optical filter, and finally converges on the image sensor. The imaging lens is mainly used for converging and imaging the light emitted or reflected by all objects within the shooting angle of view (also called the scene to be shot or the target scene, i.e., the scene image the user expects to capture); the optical filter is mainly used for filtering out redundant light waves (for example, light waves other than visible light, such as infrared light); the image sensor may be a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) phototransistor. The image sensor mainly photoelectrically converts the received optical signal into an electrical signal and then transmits the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
Illustratively, the digital signal processor is configured to process digital signals, and may process other digital signals in addition to digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Illustratively, video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that it can play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
Illustratively, the gyroscopic sensor 180B may be used to determine a motion pose of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x-axis, y-axis, and z-axis) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B can also be used for scenes such as navigation and motion sensing games.
For example, the acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically, x-axis, y-axis, and z-axis). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The acceleration sensor 180E may also be used to recognize the gesture of the electronic device 100 as an input parameter for applications such as landscape switching and pedometer.
Illustratively, a distance sensor 180F is used to measure distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, for example, in a shooting scene, the electronic device 100 may range using the distance sensor 180F to achieve fast focus.
Illustratively, ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
Illustratively, the fingerprint sensor 180H is used to capture a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to perform functions such as unlocking, accessing an application lock, taking a photograph, and receiving an incoming call.
Illustratively, the touch sensor 180K is also referred to as a touch device. The touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 form a touch screen, also called a touch panel. The touch sensor 180K is used to detect a touch operation acting on or near it, and may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display screen 194.
The hardware system of the electronic device 100 is described in detail above, and the software system of the electronic device 100 is described below.
Fig. 5 is a schematic diagram of a software system of an electronic device according to an embodiment of the present application.
As shown in fig. 5, an application layer 210, an application framework layer 220, a hardware abstraction layer 230, a driver layer 240, and a hardware layer 250 may be included in the system architecture.
Illustratively, the application layer 210 may include a camera application.
Optionally, the application layer 210 may also include gallery, calendar, call, map, navigation, WLAN, bluetooth, music, video, short message, etc. applications.
Illustratively, the application framework layer 220 provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer; the application framework layer may include some predefined functions.
For example, the application framework layer 220 may include a camera access interface; camera management and camera devices may be included in the camera access interface. Wherein camera management may be used to provide an access interface to manage the camera; the camera device may be used to provide an interface to access the camera.
Illustratively, the hardware abstraction layer 230 is used to abstract the hardware. For example, the hardware abstraction layer may include a camera hardware abstraction layer and abstraction layers for other hardware devices; the camera hardware abstraction layer may include camera device 1, camera device 2, and the like; the camera hardware abstraction layer may be coupled to a camera algorithm library and may invoke algorithms in the camera algorithm library.
Illustratively, the camera algorithm library includes an image processing algorithm, and the image processing algorithm is used for executing the image processing method provided by the embodiment of the application.
The driver layer 240 is used to provide drivers for different hardware devices. For example, the drive layer may include a camera device drive.
The hardware layer 250 may include a camera module and other hardware devices.
Currently, when a user shoots a moving object, the user generally expects the electronic device to capture an image of the moving object at the instant of motion; since the photographic subject is moving, the electronic device is often required to reduce motion blur in the image. At present, multi-frame images are generally used to remove motion blur from an image; however, removing motion blur with multi-frame images requires acquiring the multiple frames first, which takes the electronic device a longer time and results in lower efficiency in removing motion blur from the image.
In view of this, embodiments of the present application provide an image processing method and an electronic device. In the scheme of the present application, when the electronic device captures an image of a moving object, the electronic device may acquire pixels according to different shutter functions to obtain a first image; the first image may be divided according to the shutter functions to obtain N small-size images; the N small-size images include a second image (e.g., an image including motion blur and having higher brightness) and N-1 third images (e.g., images without motion blur and having lower brightness); based on the second image, the N-1 third images, and an image processing model, a processed image can be obtained. Through the above technical scheme, the electronic device can remove motion blur in the image area where the moving object is located based on a single acquired image frame; compared with removing motion blur by using multi-frame images, the scheme of the present application removes motion blur based on a single image frame, which improves, to a certain extent, the efficiency with which the electronic device removes motion blur from an image.
Fig. 6 is a schematic diagram of an application scenario of an image processing method provided in an embodiment of the present application.
The image processing method in the embodiment of the application can be applied to the field of photographing; for example, the image processing method of the present application may be applied to capturing an image of a moving object; the moving object may refer to a moving user, a moving object, or an image (e.g., a movie) played in a video, among others.
For example, as shown in fig. 6, a preview image, which is an image of a moving object, is included in the preview interface 260; since the moving object is in a moving state when the moving object is photographed, motion blur due to movement of the moving object occurs in the image.
Optionally, in the case that the electronic device has sufficient computing capability, the image processing method in the embodiment of the present application may also be applied to the preview state, the video recording field, the video call field, or other image processing fields.
Illustratively, the preview state includes:
large aperture preview, night scene preview, portrait preview, photo preview, video preview, multi-mirror video preview, etc.
Illustratively, the video call scenario may include, but is not limited to, the following:
Video calls, video conferencing applications, long-video and short-video applications, live video streaming applications, online video class applications, intelligent portrait camera-tracking application scenarios, video recording using the system camera's recording function, video surveillance, or portrait shooting scenarios such as a smart peephole (door viewer), etc.
It should be understood that the foregoing is illustrative of an application scenario, and is not intended to limit the application scenario of the present application in any way.
The image processing method provided in the embodiment of the present application is described in detail below with reference to fig. 7 to 10.
Fig. 7 is a schematic flowchart of an image processing method provided in an embodiment of the present application. The method 300 may be performed by the electronic device shown in fig. 1; the method 300 includes S310 to S360, which are described in detail below.
S310, displaying a first interface.
The first interface comprises a preview image and a first control, wherein the first control is a control for indicating photographing.
Illustratively, the first interface may refer to a preview interface of the camera application, as shown in (a) of fig. 12; the preview interface includes a preview image and a photographing control 604.
Optionally, the electronic device may run the camera application before the first interface is displayed.
For example, a user may instruct the electronic device to run the camera application by clicking an icon of the "camera" application. Alternatively, when the electronic device is in the lock-screen state, the user may instruct the electronic device to run the camera application through a right-swipe gesture on the display screen of the electronic device. Alternatively, when the electronic device is in the lock-screen state and the lock-screen interface includes an icon of the camera application, the user may instruct the electronic device to run the camera application by clicking that icon. Alternatively, when the electronic device runs another application that has permission to invoke the camera application, the user may instruct the electronic device to run the camera application by clicking the corresponding control.
For example, while the electronic device is running an instant messaging type application, the user may instruct the electronic device to run the camera application, etc., by selecting a control for the camera function.
It should be appreciated that the above is an illustration of operations for running the camera application; the user may also instruct the electronic device to run the camera application through a voice instruction or another operation; the present application is not limited in this respect.
S320, detecting a first operation of the first control.
The first operation is an operation for indicating photographing.
Illustratively, the first operation may be a click operation on the photographing control 604, as shown in (d) of fig. 12.
It should be appreciated that the first operation described above is illustrated as a click operation; in the embodiment of the present application, the first operation may also refer to an operation of instructing the electronic device to take a picture through a voice instruction operation, or other operations, which is not limited in this application.
S330, responding to the first operation, and acquiring a first image.
The first image comprises a first image area, the first image area is an image area where a moving object in a shooting object is located, the first image is an image collected based on a shutter function, the shutter function is used for indicating that a shutter is opened or closed when pixels in the first image are collected, the shutter function comprises a global shutter function and a first shutter function, and the time length of the global shutter function for indicating that the shutter is opened is longer than that of the first shutter function for indicating that the shutter is opened.
It should be appreciated that the global shutter function is used to indicate that the shutter is open in all slots of a frame exposure time period T, as shown in fig. 2; the first shutter function may be a local shutter function; the local shutter function is used to indicate that in a time slot of one frame exposure time period T, the shutter is open in a partial time slot and closed in a partial time slot, as shown in fig. 3.
Illustratively, the first image may refer to raw image data collected by an image sensor in the electronic device; for example, a Raw image.
Alternatively, the first image may refer to the full-size image shown in fig. 8; for a specific description, see the following related description of fig. 8, which is not repeated here.
It should be noted that, in the embodiment of the present application, the electronic device obtains the first image based on the shutter functions of different shooting capabilities; for example, the first image is a bayer-format Raw image, and when four pixels (e.g., RGGB pixels) adjacent to each other in the first image are acquired, each pixel point may correspond to a global shutter function or a local shutter function; the shutter function employed by any two of the four adjacent pixels may be different.
S340, selecting pixels in the first image based on the shutter function, and generating N images.
The N images comprise second images and N-1 third images, the second images are images obtained based on pixels collected by a global shutter function, the N-1 third images are images obtained based on pixels collected by a first shutter function, the size of any one of the N images is smaller than that of the first image, and N is an integer larger than 2.
It should be understood that when a moving object is included in a photographed object, the longer the photographing time period, the greater the probability of motion blur being present in an acquired image, or the greater the image area of motion blur in the image; in general, the longer the shutter is opened when an image is acquired, the longer the exposure time is when an image is acquired; the shutter function may be used to indicate to the electronic device whether the shutter is open or closed when capturing an image.
It should also be understood that the global shutter function may refer to the shutter being open for the entire frame exposure duration T; the first shutter function may refer to a local shutter function, which means that within one frame exposure duration T the shutter is open for part of the duration and closed for part of the duration. Since the exposure time of pixels collected based on a local shutter function is shorter than that of pixels collected based on the global shutter function, there is no or less motion blur in an image generated from pixels acquired by a local shutter function; further, for the same reason, the brightness of an image generated from pixels collected based on a local shutter function is lower.
Illustratively, the N shutter functions may include a global shutter function and different local shutter functions; a plurality of pixels acquired based on one shutter function are selected from the first image according to that shutter function to obtain one image. For example, the first image is a Raw image in RGGB format; the 4 adjacent pixels adopt the global shutter function and local shutter functions 1, 2, and 3, respectively; the size of the first image may be W×H; selecting the pixels acquired through the global shutter function in the first image generates image 1 of size (W×H)/4; selecting the pixels acquired by local shutter function 1 in the first image generates image 2 of size (W×H)/4; selecting the pixels acquired by local shutter function 2 in the first image generates image 3 of size (W×H)/4; selecting the pixels acquired by local shutter function 3 in the first image generates image 4 of size (W×H)/4.
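As a minimal illustration of this pixel selection, the de-interleaving can be expressed as strided slicing over the Raw mosaic. The sketch below assumes the four shutter functions are assigned to the four positions of each 2×2 pixel block in a fixed order; this layout and all names are illustrative assumptions, not details fixed by the present application.

```python
import numpy as np

def split_by_shutter_function(raw: np.ndarray) -> list[np.ndarray]:
    """Split a H x W Raw mosaic into four (H/2) x (W/2) sub-images.

    Assumes each 2x2 pixel block uses, in a fixed order, the global
    shutter function and local shutter functions 1, 2, and 3.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0
    image_global = raw[0::2, 0::2]  # pixels exposed for the full duration T
    image_local1 = raw[0::2, 1::2]  # pixels exposed during a sub-interval of T
    image_local2 = raw[1::2, 0::2]
    image_local3 = raw[1::2, 1::2]
    return [image_global, image_local1, image_local2, image_local3]
```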
S350, obtaining a processed image based on the second image, the N-1 third images and the image processing model.
The image processing model is used for performing brightness lifting processing, and the processed image is an image for removing motion blur of an image area where a moving object is located.
Optionally, the image processing model is obtained by the following training method:
acquiring a sample video, wherein the resolution of the sample video is larger than a preset resolution, the sample video comprises M continuous images, the M continuous images comprise image areas where sample moving objects are located, and M is an integer larger than or equal to 3;
carrying out fusion processing on M-1 continuous images in the M continuous images to obtain a first sample image;
performing brightness reduction treatment on an M-th frame image in M-frame continuous images to obtain a second sample image;
calculating a sample similarity between the first sample image and the second sample image;
inputting the sample similarity, the first sample image and the second sample image into an image processing model to be trained, and outputting a predicted image;
and updating parameters of the image processing model to be trained based on the image difference between the predicted image and the Mth frame image to obtain the image processing model.
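A minimal sketch of the sample construction just listed follows, assuming the M consecutive frames are already loaded as an array; the darkening factor and the function name are illustrative assumptions.

```python
import numpy as np

def build_training_sample(frames: np.ndarray, darken: float = 0.3):
    """frames: (M, H, W) consecutive frames from a high-resolution video.

    The first M-1 frames are fused (averaged) to simulate a long-exposure
    input; the M-th frame is darkened to simulate a short-exposure input;
    the unmodified M-th frame is the ground-truth target. The darkening
    factor 0.3 is an illustrative assumption, not a value from the patent.
    """
    first_sample = frames[:-1].mean(axis=0)  # fuse the first M-1 frames
    second_sample = frames[-1] * darken      # brightness-reduced M-th frame
    target = frames[-1]                      # unprocessed M-th frame as truth
    return first_sample, second_sample, target
```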
Optionally, the image processing model is further used for denoising.
For example, the image processing model may be used for the luminance boosting process and the denoising process.
Alternatively, the training method of the image processing model may be referred to the following description of fig. 10; and will not be described in detail herein.
S360, displaying or saving the processed image.
Alternatively, the electronic device may display the processed image while the electronic device is in the preview state.
Illustratively, the preview state includes:
a photographing preview state, a night scene preview state, an aperture preview state, and the like.
Optionally, in the case where the electronic device is in a photographing mode, the electronic device may save the processed image; upon detecting an operation of clicking on an image in the gallery, the processed image is displayed.
Optionally, the first shutter function includes N-1 shutter sub-functions, where the N-1 shutter sub-functions correspond one-to-one to the N-1 third images, and the shutter-open periods indicated by different sub-functions among the N-1 shutter sub-functions partially differ. Obtaining a processed image based on the second image, the N-1 third images, and the image processing model includes:
determining target images in the N-1 third images, wherein the definition of the target images is higher than images except the target images in the N-1 third images;
and inputting the second image and the target image into an image processing model to obtain a processed image.
Alternatively, the second image may be the image using the global shutter function shown in fig. 8, see the following description related to fig. 8, which is not repeated here.
It should be appreciated that the first shutter function may refer to a local shutter function; the partial shutter function means that the partial-duration shutter is opened in one frame exposure duration T and the partial-duration shutter is closed as shown in fig. 3; n-1 shutter sub-functions refer to functions of different local shutters; any one of the N-1 subfunctions may refer to one implementation of a local shutter function.
For example, the electronic device may first determine the target image among the N-1 third images, i.e., the sharpest of the N-1 images generated by selecting pixels in the first image according to the different shutter sub-functions (e.g., different local shutter functions); since the N-1 third images are images generated according to different local shutter functions, a sharp image can be selected from the N-1 third images, i.e., the target image among the N-1 third images is determined.
Alternatively, the target image may be a clear image shown in S430 in fig. 8, see the related description in fig. 8, which will not be repeated here.
In the embodiment of the present application, the electronic device can first select a clear image from the N-1 images with shorter exposure time, and input the clear image and the image with longer exposure time into the image processing model for processing; compared with directly inputting the N-1 images with shorter exposure time and the image with longer exposure time into the image processing model, the scheme of the present application can reduce, to a certain extent, the computation of the image processing model and improve its processing efficiency, i.e., improve the processing efficiency of removing motion blur from the image.
Optionally, determining the target image in the N-1 third images includes:
calculating the gradient value of each third image in the N-1 third images to obtain N-1 gradient values;
and obtaining a target image based on the N-1 gradient values.
It should be noted that the gradient refers to the derivative between adjacent pixels in the image; a larger gradient means a larger change between adjacent pixels; it can be understood that the larger the change between adjacent pixels, the sharper the image area corresponding to that gradient.
In an embodiment of the present application, the gradient of each of the N-1 third images may be calculated first; and determining the image with the largest gradient as a clear image according to the N-1 gradients, namely determining the image with the largest gradient as a target image.
Optionally, the method further comprises:
calculating the similarity between the second image and the target image;
inputting the second image and the target image into an image processing model to obtain a processed image, wherein the method comprises the following steps of:
and inputting the second image, the target image and the similarity into an image processing model to obtain a processed image.
By way of example, the similarity between the second image and the target image may be calculated using any existing algorithm, which is not limited in this application.
It should be noted that, the second image may refer to an image composed of pixels with an exposure time period of T; the target image may be a clear image among images composed of pixels having an exposure time length less than T; since the target image exposure period is relatively short compared to the second image exposure period, the image brightness of the target image is lower than that of the second image.
For example, a second image (e.g., an image with a longer exposure time), a target image (e.g., a clear image with a shorter exposure time), and a similarity may be input into the image processing model; when the image processing model processes the target image, the similarity can be used as prior information; the image area where the moving object is located in the target image and the image area where the shooting object except the moving object is located in the target image can be determined based on the similarity; thereby realizing the local brightness improvement processing of the image area where the moving object is located in the target image.
In the scheme of the application, the image processing model can quickly identify the image area where the moving object is located in the target image based on the similarity, and the image area where the moving object is located is subjected to local brightness improvement processing, so that the processing efficiency of the image processing model is improved, namely the processing efficiency of removing motion blur in the image is improved.
Optionally, obtaining the processed image based on the second image, the N-1 third images and the image processing model includes:
determining target images in the N-1 third images, wherein the definition of the target images is higher than images except the target images in the N-1 third images;
obtaining a fourth image based on the second image and the target image, wherein the fourth image comprises a second image area and a third image area, the second image area is an image area where a moving object in the target image is located, and the third image area is an image area except the image area where the moving object is located in the second image;
and inputting the fourth image into an image processing model to obtain a processed image.
Alternatively, the fourth image may be the first processed image shown in fig. 8, which is described in detail in connection with fig. 8.
In embodiments of the present application, a fourth image may be generated based on the second image and the target image; the fourth image includes the image area where the moving object is located in the target image and the image areas other than the image area where the moving object is located in the second image. Because the target image is generated from pixels acquired by the first shutter function, the image area where the moving object is located is sharper in the target image; because the second image is generated from pixels acquired by the global shutter function, the image areas where non-moving objects are located are brighter in the second image. The fourth image is input into the image processing model so that brightness-lifting processing is performed on the image area where the moving object is located in the fourth image, and the processed image is obtained.
Optionally, the method further comprises:
calculating the similarity between the second image and the target image;
inputting the fourth image into an image processing model to obtain a processed image, comprising:
and inputting the fourth image and the similarity into an image processing model to obtain a processed image.
In the embodiment of the present application, the similarity can be used as prior information for the image processing model; based on the similarity, the image processing model can rapidly identify the image area where the moving object is located in the fourth image, and perform brightness-lifting processing on that image area to obtain the processed image. Using the similarity as prior information in the image processing model can reduce the computation of the image processing model and improve its processing efficiency.
Optionally, obtaining a fourth image based on the second image and the target image includes:
when the similarity between the image area of the second image and the corresponding image area in the target image is greater than the preset similarity, acquiring pixels of the image area of the second image;
and carrying out replacement processing on the pixels of the corresponding image area in the target image based on the pixels of the image area of the second image to obtain a fourth image.
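A minimal sketch of this replacement processing follows, assuming a window-level similarity map as produced by the sliding-window comparison described below in connection with fig. 8; the threshold value, window size, and function name are illustrative assumptions.

```python
import numpy as np

def compose_fourth_image(second_img: np.ndarray, target_img: np.ndarray,
                         window_sim: np.ndarray, win: int = 4,
                         threshold: float = 0.9) -> np.ndarray:
    """Replace, in the sharp target image, the regions judged similar
    (non-moving) with the brighter pixels of the second image.

    window_sim: (H/win, W/win) similarity map; it is expanded back to
    pixel resolution here. Moving regions (low similarity) keep the
    sharp target-image pixels.
    """
    block = np.ones((win, win), dtype=np.uint8)
    mask = np.kron((window_sim > threshold).astype(np.uint8), block).astype(bool)
    mask = mask[: target_img.shape[0], : target_img.shape[1]]
    fourth = target_img.copy()
    fourth[mask] = second_img[mask]  # brighter non-moving pixels replace dark ones
    return fourth
```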
In the embodiment of the present application, when the electronic device captures an image of a moving object, the electronic device may acquire pixels according to different shutter functions to obtain the first image; the first image may be divided according to the shutter functions to obtain N small-size images; the N small-size images include the second image (e.g., an image including motion blur and having higher brightness) and the N-1 third images (e.g., images without motion blur and having lower brightness); based on the second image, the N-1 third images, and the image processing model, the processed image can be obtained. Through the above technical scheme, the electronic device can remove motion blur in the image area where the moving object is located based on a single acquired image frame; compared with removing motion blur by using multi-frame images, the scheme of the present application removes motion blur based on a single image frame, which improves, to a certain extent, the efficiency with which the electronic device removes motion blur from an image.
Fig. 8 is a schematic flowchart of another image processing method provided in an embodiment of the present application. The method 400 may be performed by the electronic device shown in fig. 1; the method 400 includes steps S410 to S460, which are described in detail below.
S410, acquiring a full-size image.
Alternatively, before acquiring the full-size image, the electronic device may determine, based on the preview image, whether a moving object exists among the photographic objects; if a moving object exists among the photographic objects, the image processing method shown in fig. 8 is executed to remove the motion blur in the image; if there is no moving object among the photographic objects, the electronic device may employ any image processing method.
For example, the electronic device detects an operation indicating photographing, and acquires the full-size image in response to the operation indicating photographing.
Optionally, the electronic device includes an image sensor, and the image frames acquired by the image sensor may be full size; it is understood that the image captured by the image sensor is a full-size image.
For example, assuming that the maximum resolution supported by the camera in the electronic device is 4096×2160, the acquired full-size image may be a Raw image with a resolution of 4096×2160.
It should be understood that the full-size image of 4096×2160 above is merely an example; the present application does not limit the full-size image.
S420, carrying out image division processing on the full-size image based on a shutter function to obtain N small-size images.
It should be appreciated that the shutter function is used to indicate the open state or the closed state of the shutter at each time unit (e.g., one slot). For example, the shutter function may include a global shutter function and a local shutter function; wherein, the global shutter function means that the shutters are in an open state in all time slots of one frame exposure for one pixel; the local shutter function may refer to that the shutter is in an open state in a part of all slots of one frame exposure for one pixel.
It should be noted that the imaging of the camera module may be abstracted as the integral, over the exposure time t, of the shutter function and the scene illuminance; thus, for exposure image frames with a determined exposure duration, different shutter functions determine different shooting capabilities of the camera.
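Written as a formula (a reconstruction of the abstraction just described, with S the shutter function, L the scene illuminance at pixel position x, and T the frame exposure duration):

$$I(x)=\int_{0}^{T} S(t)\,L(x,t)\,\mathrm{d}t,\qquad S(t)\in\{0,1\},$$

where the global shutter function has S(t)=1 for all t in [0, T], while a local shutter function has S(t)=1 only on a sub-interval of [0, T].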
It should also be appreciated that the local shutter function may be different if the shutter open corresponds to a different time slot in all time slots of a frame exposure.
For example, as shown in (c) of fig. 9, the shutter is in an open state at time slots 1 to 4 and in a closed state at time slots 5 to 12, then (c) of fig. 9 may correspond to the first shutter function; as shown in (e) of fig. 9, the shutter is in an open state at time slots 5 to 8, and the shutter is in a closed state at time slots 1 to 4, and time slots 9 to 12, then (e) of fig. 9 may correspond to the second shutter function; as in (g) of fig. 9, where the shutters are in an open state at time slots 9 to 12 and in a closed state at time slots 1 to 8, then (g) of fig. 9 may correspond to a third shutter function; wherein the first shutter function may refer to opening the shutter for the first third of the duration of all slots of a frame exposure; the second shutter function may refer to opening the shutter for the middle third of all slots of a frame exposure; the third shutter function may refer to opening the shutter for the last third of the duration of all slots of a frame exposure; any two shutter functions of the first shutter function, the second shutter function and the third shutter function are different.
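In code form, each shutter function can be represented as a Boolean open/closed state per time slot; the sketch below assumes the 12-slot division of fig. 9, and all names are illustrative.

```python
import numpy as np

SLOTS = 12  # one frame exposure duration divided into 12 time slots, as in fig. 9

def shutter_function(open_slots) -> np.ndarray:
    """Boolean shutter state per time slot: True = open, False = closed."""
    state = np.zeros(SLOTS, dtype=bool)
    state[list(open_slots)] = True
    return state

global_shutter = shutter_function(range(12))    # open in slots 1-12, (a) of fig. 9
first_shutter  = shutter_function(range(0, 4))  # open in slots 1-4,  (c) of fig. 9
second_shutter = shutter_function(range(4, 8))  # open in slots 5-8,  (e) of fig. 9
third_shutter  = shutter_function(range(8, 12)) # open in slots 9-12, (g) of fig. 9
```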
Alternatively, pixels in the full size image that use the same shutter function may be selected to form a small size image.
Illustratively, assume that the total exposure duration of one frame image is T; the shutter functions include a global shutter function, a first shutter function, a second shutter function, and a third shutter function. The global shutter function means that the shutter is in an open state during the entire exposure duration T, as shown in (a) of fig. 9; the first shutter function means that the shutter is in an open state during the first T/3 of the exposure duration and in a closed state for the rest of the time, as shown in (c) of fig. 9; the second shutter function means that the shutter is in an open state during the middle T/3 of the exposure duration and in a closed state for the rest of the time, as shown in (e) of fig. 9; the third shutter function means that the shutter is in an open state during the last T/3 of the exposure duration and in a closed state for the rest of the time, as shown in (g) of fig. 9. According to the global shutter function, a pixel point set 1 with an exposure duration of T can be selected from the full-size image to obtain image 1, such as the image 481 shown in (b) of fig. 9; according to the first shutter function, a pixel point set 2 exposed during the first T/3 can be selected from the full-size image to obtain image 2, such as the image 482 shown in (d) of fig. 9; according to the second shutter function, a pixel point set 3 exposed during the middle T/3 can be selected from the full-size image to obtain image 3, such as the image 483 shown in (f) of fig. 9; according to the third shutter function, a pixel point set 4 exposed during the last T/3 can be selected from the full-size image to obtain image 4, such as the image 484 shown in (h) of fig. 9.
It should be appreciated that the exposure duration of image 481 is T, while the exposure durations of image 482, image 483, and image 484 are all T/3; thus, image 481 has the longest exposure time. The longer exposure time means more light enters the electronic device when acquiring image 481, so image 481 is brighter; in addition, since there is a moving photographic subject in the shooting scene, the longer the exposure time, the greater the possibility of motion blur in the acquired image 481. The exposure duration of images 482 to 484 is T/3, so images 482 to 484 are darker than image 481; however, because of the short exposure time, the sharpness of images 482 to 484 is better than that of image 481.
Illustratively, take an exposure frame including 3 time slots (e.g., the 1st, 2nd, and 3rd slots) as an example; one exposure frame including 3 time slots can correspond to 4 shutter functions, which are, respectively: a global shutter function in which the shutter is open in all 3 time slots; a first local shutter function in which the shutter is open in the 1st slot and closed in the 2nd and 3rd slots; a second local shutter function in which the shutter is open in the 2nd slot and closed in the 1st and 3rd slots; and a third local shutter function in which the shutter is open in the 3rd slot and closed in the 1st and 2nd slots. Corresponding pixel points can be selected from the full-size image according to the 4 shutter functions, respectively, to obtain 4 small images, each a quarter of the original size of the full-size image. Assume that 4 adjacent pixels (e.g., RGGB pixels) in the full-size Raw image use the global shutter function, the first local shutter function, the second local shutter function, and the third local shutter function, respectively; then, in the full-size Raw image, a small-size Raw image 1 is obtained from the pixels using the global shutter function; a small-size Raw image 2 is obtained from the pixels using the first local shutter function; a small-size Raw image 3 is obtained from the pixels using the second local shutter function; and a small-size Raw image 4 is obtained from the pixels using the third local shutter function.
S430, determining clear images in the N small-size images.
It should be appreciated that the N small-sized images include an image using the global shutter function and images using local shutter functions; since the exposure time of an image using a local shutter function is shorter than that of the image using the global shutter function, and the shooting scene includes a moving object, the sharpness of an image using a local shutter function is higher than the sharpness of the image acquired with the global shutter function.
Alternatively, gradients of N small-sized images may be calculated; a sharp image of the N small-sized images is determined based on the gradient.
It should be noted that the gradient refers to the derivative between adjacent pixels in the small-sized image; a larger gradient means a larger change between adjacent pixels; it can be understood that the larger the change between adjacent pixels, the sharper the image area corresponding to that gradient.
For example, for any one of N small-sized images, calculating the absolute value of the gradient between two adjacent pixels in one image; averaging absolute values of gradients according to the number of pixel points in the image to obtain gradient values of the image; and determining the image with the largest gradient in the N gradient values as a clear image according to the gradient values of the N images.
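A minimal sketch of this gradient-based selection (the function names are illustrative):

```python
import numpy as np

def gradient_value(img: np.ndarray) -> float:
    """Mean absolute gradient between horizontally and vertically adjacent
    pixels; a larger value indicates a sharper image."""
    f = img.astype(np.float64)
    gx = np.abs(np.diff(f, axis=1))  # gradients between horizontal neighbors
    gy = np.abs(np.diff(f, axis=0))  # gradients between vertical neighbors
    return (gx.sum() + gy.sum()) / (gx.size + gy.size)

def select_sharpest(images: list[np.ndarray]) -> np.ndarray:
    # The image with the largest gradient value is taken as the clear image.
    return max(images, key=gradient_value)
```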
Optionally, the N small-sized images include 1 image using the global shutter function and 3 images using local shutter functions; the sharpest of the 3 images using local shutter functions may be determined from the gradient values of those 3 images.
Optionally, denoising the N small-size images to obtain denoised small-size images; acquiring the gradient of the denoised small-size image, wherein the larger the gradient of the image is, the clearer the image is; the sharpest image of the N small-sized images can be determined from the gradient.
Alternatively, the sharpest image of the N small-sized images may be taken as the reference image.
S440, calculating the similarity between the clear image and the image adopting the global shutter function.
It should be appreciated that, since a moving object is included in the photographed scene and the exposure time of the image using the global shutter function is long, motion blur exists in that image, resulting in poor sharpness; in addition, the brightness of an image with a long exposure time is higher. The clear image is generated with a local shutter function and its exposure time is relatively short, so there is no or little motion blur in the clear image, and its detail information is well preserved; however, due to the relatively short exposure time, the brightness of the clear image is darker.
Alternatively, a sliding window may be selected, and the size of the sliding window may be 4×4; the similarity between a first vector in the clear image and a second vector in the image using the global shutter function is calculated based on the sliding window.
For example, when the similarity is greater than a preset similarity threshold, it indicates that the image content of the first image area in the clear image corresponding to the sliding window is similar to that of the second image area in the image using the global shutter function, i.e., the first image area and the second image area are image areas where non-moving objects are located; in this case, since the brightness of the image using the global shutter function is higher, the second image area can be selected. When the similarity is less than or equal to the preset similarity threshold, it indicates that the image content of the first image area in the clear image corresponding to the sliding window is dissimilar to that of the second image area in the image using the global shutter function, i.e., the first image area and the second image area are image areas where the moving object is located; since the clear image has a shorter exposure time than the image using the global shutter function, and an image of a moving object acquired with a short exposure time has no or less motion blur, the first image area can be selected.
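A minimal sketch of the sliding-window comparison follows, using cosine similarity between the flattened 4×4 windows; the metric is an illustrative choice, since the application does not fix a particular similarity algorithm (cosine similarity is scale-invariant, which is convenient given the brightness gap between the two images).

```python
import numpy as np

def window_similarity(sharp_img: np.ndarray, global_img: np.ndarray,
                      win: int = 4) -> np.ndarray:
    """Per-window cosine similarity between the clear image and the image
    using the global shutter function; returns an (H/win, W/win) map."""
    h, w = sharp_img.shape
    sim = np.zeros((h // win, w // win))
    for i in range(0, h - h % win, win):
        for j in range(0, w - w % win, win):
            a = sharp_img[i:i + win, j:j + win].ravel().astype(np.float64)
            b = global_img[i:i + win, j:j + win].ravel().astype(np.float64)
            denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
            sim[i // win, j // win] = float(a @ b) / denom
    return sim
```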
S450, obtaining a first processed image based on the image using the global shutter function, the clear image, and the similarity.
Optionally, the pixels in the image using the global shutter function whose similarity is greater than the preset similarity threshold are used to replace the corresponding pixels in the clear image, so as to obtain the first processed image.
It should be appreciated that there is no motion blur in the clear image and the image is darker, and there is motion blur in the image area where the moving object is located in the image using the global shutter function and the image is brighter; in embodiments of the present application, a first set of pixels in an image employing a global shutter function may be acquired based on similarity; replacing corresponding pixels in the clear image by the first pixel set to obtain a first processed image; the first processing image comprises pixels of an image area where a moving object in the clear image is located and pixels of an image area where a non-moving object in the image adopting the global shutter function is located; thus, the first processed image is an image from which motion blur is removed compared to a full-size image.
It should be noted that, since the first processed image includes the pixels of the image area where the moving object in the clear image is located and the pixels of the image areas where the non-moving objects in the image using the global shutter function are located, the brightness of the image area where the moving object is located in the first processed image is darker. In the embodiment of the present application, in order to increase the brightness of the image area where the moving object is located in the first processed image so that it is consistent with the brightness of the whole image, the first processed image may be input to a pre-trained neural network model for brightness-lifting processing.
S460, inputting the first processed image and the similarity into an image processing model to obtain a processed image.
It should be understood that the processed image may refer to an image from which motion blur is removed and brightness of an image area where a moving object in the first processed image is located is increased.
In an embodiment of the present application, the first processed image and the similarity may be input to the image processing model; the similarity can be regarded as prior information, so that the image processing model can rapidly and accurately identify an image area needing to be processed in the first processed image when the brightness of the first processed image is improved; thereby improving the processing efficiency of the image processing model.
Optionally, the image processing model may further perform denoising processing on the first processed image; it is understood that the processed image may be a clear and noiseless image.
Alternatively, in an embodiment of the present application, the first processed image may be input to an image processing model, resulting in a processed image.
Alternatively, the image processing model may be a pre-trained neural network model; for example, the image processing model may be a convolutional neural network trained in advance, and the training method of the image processing model may be described with reference to the following description of fig. 10.
In the embodiment of the present application, when the electronic device captures an image of a moving object, the electronic device can acquire a full-size image according to different shutter functions; the full-size image can be divided according to the shutter functions to obtain N small-size images; among the N small-size images, the image using the global shutter function (e.g., an image including motion blur and having higher brightness) and a clear image (e.g., an image without motion blur and having lower brightness) are determined; a first processed image is obtained from the image area where the moving object is located in the clear image and the image areas where the non-moving objects are located in the image using the global shutter function; and the first processed image and the similarity are input into a pre-trained neural network model to obtain the processed image. Through the above technical scheme, the electronic device can remove motion blur in the image area where the moving object is located based on a single acquired image frame; compared with removing motion blur by using multi-frame images, the scheme of the present application removes motion blur based on a single image frame, which improves, to a certain extent, the efficiency with which the electronic device removes motion blur from an image.
The training method of the image processing model is described in detail below with reference to fig. 10.
Fig. 10 is a schematic flowchart of a training method of an image processing model according to an embodiment of the present application. The method 500 may be performed by the electronic device shown in fig. 1; the method includes S510 to S530, and S510 to S530 are described in detail below, respectively.
S510, acquiring training data.
By way of example, the training data may be a high-definition video including a moving object; for example, the training data may be 8K video, i.e., video with a resolution of 7680×4320; the high-definition video may include multiple consecutive frames. Assuming that the high-definition video includes 7 consecutive images, the 1st to 6th frame images are fused to obtain a sample long-exposure image; brightness reduction is applied to the 7th frame image to obtain a sample short-exposure image; the 7th frame image itself serves as the sample image, i.e., the ground truth; it is understood that the unprocessed 7th frame image is taken as the training target.
Optionally, if the image processing model is capable of denoising, the sample short-exposure image may also be subjected to noise-adding processing.
S520, inputting the first sample image and the similarity to an image processing model to be trained to obtain a predicted image.
The image processing model to be trained is used for carrying out brightness improvement processing on a dark light area in the image.
Optionally, the image processing model to be trained can also perform denoising processing on the image.
Illustratively, the first sample image refers to an image derived based on the sample long-exposure image and the sample short-exposure image; for example, the first sample image includes an image area where a moving object is located in the sample short-exposure image and an image area where a non-moving object is located in the sample long-exposure image.
Alternatively, the similarity may refer to a value of similarity at a pixel level between the sample short-exposure image and the sample long-exposure image.
Alternatively, in one example, the sample long-exposure image, the sample short-exposure image, and the similarity may be directly input into the image processing model to be trained.
S530, updating parameters of the image processing model based on the difference between the predicted image and the sample image to obtain a trained image processing model.
Wherein the sample image may be a training target; for example, the sample image may refer to the 7 th frame image in S510.
Illustratively, the difference between each pixel point of the predicted image and of the sample image is calculated, and the parameters of the image processing model to be trained are updated through a back-propagation algorithm, so that the image processing model to be trained is trained and the trained image processing model is obtained.
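A minimal sketch of one such training update, assuming a PyTorch model (the application does not specify a framework) with the three inputs stacked along the channel dimension and a per-pixel L1 difference as the loss; the framework, loss, and stacking choices are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, first_sample, second_sample, similarity, target):
    """One parameter update. All tensors are assumed to have shape
    (N, 1, H, W), with the similarity map upsampled to image resolution."""
    optimizer.zero_grad()
    inputs = torch.cat([first_sample, second_sample, similarity], dim=1)
    predicted = model(inputs)
    loss = F.l1_loss(predicted, target)  # difference between predicted and sample image
    loss.backward()                      # back-propagation algorithm
    optimizer.step()
    return loss.item()
```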
Alternatively, the image processing model may be a pre-trained neural network; for example, the image processing model may be a pre-trained convolutional neural network.
An example of an interface schematic in an electronic device is described below with reference to fig. 11 to 12.
Fig. 11 is an interface schematic diagram of an electronic device according to an embodiment of the present application.
In the embodiment of the present application, after the electronic device runs the camera application, the preview image in the preview interface of the camera application includes motion blur; after the electronic device detects that the user clicks the control, the image processing method provided in the embodiment of the present application can be executed, i.e., the motion blur is removed; and when the user clicks the photographing control, the electronic device acquires an image from which the motion blur has been removed.
In one example, after the electronic device detects the operation of clicking on the control 603 as shown in (b) in fig. 12, the image processing method provided in the embodiment of the present application is executed.
Illustratively, as shown in fig. 11, the graphical user interface (graphical user interface, GUI) shown in (a) of fig. 11 is a desktop 601 of the electronic device; the electronic device detects a click operation on the control 602 of the camera application on the desktop 601, as shown in (b) of fig. 11; after the electronic device detects the click operation on the control 602 of the camera application, the electronic device runs the camera application. For example, as shown in (a) of fig. 12, the electronic device may display a photo preview interface; the photo preview interface includes a preview image, a control 603, and a photographing control 604, where the preview image includes motion blur (e.g., noise); the electronic device detects a click operation on the control 603, as shown in (b) of fig. 12; after the electronic device detects the click operation on the control 603, the electronic device may execute the image processing method provided in the embodiment of the present application, and a preview interface is displayed, as shown in (c) of fig. 12; the electronic device detects a click operation on the photographing control 604, as shown in (d) of fig. 12; after the electronic device detects the operation on the photographing control 604, a display interface as shown in (a) of fig. 13 is displayed, in which a thumbnail image display area 605 is included; the electronic device detects a click operation on the thumbnail image display area 605, as shown in (b) of fig. 13; after the electronic device detects the click operation on the thumbnail image display area 605, the processed image is displayed, as shown in (c) of fig. 13.
It should be understood that the above description is exemplified with the application of the image processing method of the present application to capturing images; the image processing method can be applied to preview images under the condition that the electronic equipment has enough operation capability; it is understood that the preview image displayed in the preview interface may be an image from which motion blur is removed.
In one example, the image processing method provided by the embodiment of the present application is performed after the electronic device detects the operation of clicking the control 704 as shown in (b) in fig. 15.
Illustratively, as shown in fig. 14, the graphical user interface (graphical user interface, GUI) shown in fig. 14 (a) is a desktop 701 of the electronic device; the electronic device detects a click operation of the control 702 of the camera application on the desktop 701, as shown in (b) in fig. 14; after the electronic device detects a click operation on control 702 of the camera application, the electronic device runs the camera application; for example, as shown in fig. 14 (c), the electronic device may display a photo preview interface including a preview image and a setting control 703, where the preview image includes motion blur (e.g., noise); the electronic device detects a click operation on the setting control 703, as shown in (d) in fig. 14; after the electronic device detects the click operation on the setting control 703, the electronic device may display a setting interface including a control 704 to remove motion blur in the setting interface, as shown in (a) in fig. 15; the electronic device detects a click operation on the control 704 for removing motion blur, and as shown in (b) in fig. 15, after the electronic device detects a click operation on the control 704 for removing motion blur, the electronic device is triggered to execute the image processing method provided in the embodiment of the present application.
It should be noted that the foregoing is illustrative of a display interface in an electronic device, and the present application is not limited thereto.
It should be appreciated that the above illustration is to aid one skilled in the art in understanding the embodiments of the application and is not intended to limit the embodiments of the application to the specific numerical values or the specific scenarios illustrated. It will be apparent to those skilled in the art from the foregoing description that various equivalent modifications or variations can be made, and such modifications or variations are intended to be within the scope of the embodiments of the present application.
The image processing method provided in the embodiment of the present application is described in detail above with reference to fig. 1 to 15; an embodiment of the device of the present application will be described in detail below with reference to fig. 16 to 17. It should be understood that the apparatus in the embodiments of the present application may perform the methods in the embodiments of the present application, that is, specific working procedures of the following various products may refer to corresponding procedures in the embodiments of the methods.
Fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 800 includes a display module 810 and a processing module 820.
The display module 810 is configured to display a first interface, where the first interface includes a preview image and a first control, and the first control is a control indicating photographing; the processing module 820 is configured to detect a first operation on the first control, where the first operation is an operation for indicating photographing; in response to the first operation, a first image is acquired, wherein the first image comprises a first image area, the first image area is an image area where a moving object in a shooting object is located, the first image is an image acquired based on a shutter function, the shutter function is used for indicating that a shutter is opened or closed when pixels in the first image are acquired, the shutter function comprises a global shutter function and a first shutter function, and the duration of the global shutter function for indicating that the shutter is opened is longer than that of the first shutter function for indicating that the shutter is opened; selecting pixels in the first image based on the shutter function, and generating N images, wherein the N images comprise a second image and N-1 third images, the second image is an image obtained based on the pixels acquired by the global shutter function, the N-1 third images are images obtained based on the pixels acquired by the first shutter function, the size of any one of the N images is smaller than the size of the first image, and N is an integer larger than 2; obtaining a processed image based on the second image, the N-1 third images and an image processing model; the image processing model is used for carrying out brightness lifting processing, and the processed image is an image for removing motion blur of an image area where the moving object is located; and displaying or storing the processed image.
Optionally, as an embodiment, the first shutter function includes N-1 shutter sub-functions, where the N-1 shutter sub-functions correspond one-to-one to the N-1 third images, and the shutter-open periods indicated by different sub-functions among the N-1 shutter sub-functions partially differ; the processing module 820 is specifically configured to:
determining a target image in the N-1 third images, wherein the definition of the target image is higher than images except the target image in the N-1 third images;
and inputting the second image and the target image into the image processing model to obtain the processed image.
Optionally, as an embodiment, the processing module 820 is further configured to:
calculating the similarity between the second image and the target image;
the step of inputting the second image and the target image into the image processing model to obtain the processed image includes:
and inputting the second image, the target image and the similarity into the image processing model to obtain the processed image.
Optionally, as an embodiment, the processing module 820 is specifically configured to:
Determining a target image in the N-1 third images, wherein the definition of the target image is higher than images except the target image in the N-1 third images;
obtaining a fourth image based on the second image and the target image, wherein the fourth image comprises a second image area and a third image area, the second image area is an image area where the moving object is located in the target image, and the third image area is an image area except the image area where the moving object is located in the second image;
and inputting the fourth image into the image processing model to obtain the processed image.
Optionally, as an embodiment, the processing module 820 is further configured to:
calculating the similarity between the second image and the target image;
the step of inputting the fourth image into an image processing model to obtain the processed image comprises the following steps:
and inputting the fourth image and the similarity into the image processing model to obtain the processed image.
Optionally, as an embodiment, the processing module 820 is specifically configured to:
when the similarity between the image area of the second image and the corresponding image area in the target image is larger than a preset similarity, acquiring pixels of the image area of the second image;
And replacing the pixels of the corresponding image area in the target image based on the pixels of the image area of the second image to obtain the fourth image.
Optionally, as an embodiment, the processing module 820 is specifically configured to:
calculating the gradient value of each third image in the N-1 third images to obtain N-1 gradient values;
and obtaining the target image based on the N-1 gradient values.
Optionally, as an embodiment, the image processing model is further used for denoising.
Optionally, as an embodiment, the image processing model is obtained by the following training method:
acquiring a sample video, wherein the resolution of the sample video is greater than a preset resolution, the sample video comprises M consecutive frames, the M consecutive frames include an image area where a sample moving object is located, and M is an integer greater than or equal to 3;
performing fusion processing on M-1 consecutive frames of the M consecutive frames to obtain a first sample image;
performing brightness reduction processing on the M-th frame of the M consecutive frames to obtain a second sample image;
calculating a sample similarity between the first sample image and the second sample image;
inputting the sample similarity, the first sample image and the second sample image into an image processing model to be trained, and outputting a predicted image;
and updating the parameters of the image processing model to be trained based on the image difference between the predicted image and the M-th frame to obtain the image processing model.
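A schematic training step under these definitions might look as follows; the PyTorch framing, the use of an L1 loss as the "image difference", and all names are illustrative assumptions rather than details fixed by the patent.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, first_sample, second_sample, similarity, gt_frame):
    # first_sample : fusion of M-1 frames   -> simulates the blurred long exposure
    # second_sample: darkened M-th frame    -> simulates the dim short exposure
    # gt_frame     : original M-th frame    -> sharp, bright ground truth
    pred = model(first_sample, second_sample, similarity)
    loss = F.l1_loss(pred, gt_frame)       # "image difference" as an L1 loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```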
The electronic device 800 described above is embodied in the form of functional modules. The term "module" herein may be implemented in the form of software and/or hardware, which is not specifically limited.
For example, a "module" may be a software program, a hardware circuit, or a combination of the two that implements the functions described above. The hardware circuit may include an application specific integrated circuit (application specific integrated circuit, ASIC), an electronic circuit, a processor (e.g., a shared processor, a dedicated processor, or a group processor) and memory for executing one or more software or firmware programs, combinational logic circuits, and/or other suitable components that support the described functions.
Thus, the elements of the examples described in the embodiments of the present application can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 17 shows a schematic structural diagram of an electronic device provided in the present application. The dashed lines in Fig. 17 indicate that the unit or module is optional. The electronic device 900 may be used to implement the image processing method described in the above method embodiments.
The electronic device 900 includes one or more processors 901, and the one or more processors 901 may support the electronic device 900 in implementing the image processing method in the method embodiments. The processor 901 may be a general purpose processor or a special purpose processor. For example, the processor 901 may be a central processing unit (central processing unit, CPU), a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (application specific integrated circuit, ASIC), a field programmable gate array (field programmable gate array, FPGA), or another programmable logic device such as a discrete gate or transistor logic device, or a discrete hardware component.
Alternatively, the processor 901 may be configured to control the electronic device 900, execute a software program, and process data of the software program. The electronic device 900 may also include a communication unit 905 to enable input (reception) and output (transmission) of signals.
For example, the electronic device 900 may be a chip, the communication unit 905 may be an input and/or output circuit of the chip, or the communication unit 905 may be a communication interface of the chip, which may be an integral part of a terminal device or other electronic device.
As another example, the electronic device 900 may be a terminal device, and the communication unit 905 may be a transceiver of the terminal device. The electronic device 900 may include one or more memories 902, on which a program 904 is stored; the program 904 may be run by the processor 901 to generate instructions 903, such that the processor 901 performs the image processing method described in the above method embodiments according to the instructions 903.
Optionally, the memory 902 may also have data stored therein.
Alternatively, the processor 901 may also read data stored in the memory 902; the data may be stored at the same memory address as the program 904, or at a different memory address from the program 904.
Alternatively, the processor 901 and the memory 902 may be provided separately or may be integrated together, for example, integrated on a system on chip (SOC) of the terminal device.
Illustratively, the memory 902 may be used to store the related program 904 of the image processing method provided in the embodiments of the present application, and the processor 901 may be used to call the related program 904 of the image processing method stored in the memory 902 when executing the image processing method, so as to execute the image processing method of the embodiments of the present application; for example: displaying a first interface, wherein the first interface comprises a preview image and a first control, and the first control is a control for indicating photographing; detecting a first operation on the first control, wherein the first operation is an operation for indicating photographing; in response to the first operation, acquiring a first image, wherein the first image comprises a first image area, the first image area is the image area where a moving object among the shooting objects is located, the first image is an image acquired based on a shutter function, the shutter function is used for indicating that the shutter is open or closed when the pixels in the first image are acquired, the shutter function comprises a global shutter function and a first shutter function, and the duration for which the global shutter function indicates the shutter to be open is longer than the duration for which the first shutter function indicates the shutter to be open; selecting pixels in the first image based on the shutter function and generating N images, wherein the N images comprise a second image and N-1 third images, the second image is an image obtained based on the pixels acquired by the global shutter function, the N-1 third images are images obtained based on the pixels acquired by the first shutter function, the size of any one of the N images is smaller than the size of the first image, and N is an integer greater than 2; and obtaining a processed image based on the second image, the N-1 third images and an image processing model, wherein the image processing model is used for performing brightness enhancement processing, and the processed image is an image in which the motion blur of the image area where the moving object is located has been removed.
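To make the pixel-selection step concrete, the sketch below assumes, purely for illustration, a repeating 2x2 exposure pattern in which one pixel position follows the global shutter function and the other three positions follow three shutter sub-functions, so that N = 4 and each generated image is a quarter the size of the first image; the actual sensor readout pattern is not fixed by the patent.

```python
import numpy as np

def split_by_shutter_function(first_image: np.ndarray):
    # Position (0, 0) of each 2x2 block holds long-exposure (global shutter
    # function) pixels; the other three positions hold short-exposure pixels
    # from the three assumed shutter sub-functions.
    second = first_image[0::2, 0::2]          # second image (global shutter)
    thirds = [first_image[0::2, 1::2],        # N-1 = 3 third images
              first_image[1::2, 0::2],
              first_image[1::2, 1::2]]
    return second, thirds
```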
Optionally, the present application also provides a computer program product which, when executed by the processor 901, implements the image processing method in any of the method embodiments of the present application.
For example, the computer program product may be stored in the memory 902, such as the program 904, and the program 904 is finally converted into an executable object file capable of being executed by the processor 901 through preprocessing, compiling, assembling, and linking.
Optionally, the present application further provides a computer readable storage medium, on which a computer program is stored, which when executed by a computer, implements the image processing method according to any of the method embodiments of the present application. The computer program may be a high-level language program or an executable object program.
For example, the computer-readable storage medium is, for example, the memory 902. The memory 902 may be volatile memory or nonvolatile memory, or the memory 902 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (random access memory, RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the above-described embodiments of the electronic device are merely illustrative; e.g., the division of the modules is merely a logical functional division, and there may be other manners of division in actual implementation; e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
It should be understood that, in the various embodiments of the present application, the size of the sequence number of each process does not imply an order of execution; the execution order of each process should be determined by its functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation process of the embodiments of the present application.
In addition, the term "and/or" herein is merely an association relation describing an association object, and means that three kinds of relations may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art could readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes and substitutions shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be defined by the claims. The above description is only a preferred embodiment of the technical solution of the present application and is not intended to limit the protection scope of the present application; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall be included in the protection scope of the present application.

Claims (12)

1. An image processing method, applied to an electronic device, comprising:
displaying a first interface, wherein the first interface comprises a preview image and a first control, and the first control is a control for indicating photographing;
detecting a first operation of the first control, wherein the first operation is an operation for indicating photographing;
in response to the first operation, acquiring a first image, wherein the first image comprises a first image area, the first image area is the image area where a moving object among the shooting objects is located, the first image is an image acquired based on a shutter function, the shutter function is used for indicating that the shutter is open or closed when the pixels in the first image are acquired, the shutter function comprises a global shutter function and a first shutter function, and the duration for which the global shutter function indicates the shutter to be open is longer than the duration for which the first shutter function indicates the shutter to be open;
selecting pixels in the first image based on the shutter function, and generating N images, wherein the N images comprise a second image and N-1 third images, the second image is an image obtained based on the pixels acquired by the global shutter function, the N-1 third images are images obtained based on the pixels acquired by the first shutter function, the size of any one of the N images is smaller than the size of the first image, and N is an integer larger than 2;
obtaining a processed image based on the second image, the N-1 third images and an image processing model, wherein the image processing model is used for performing brightness enhancement processing, and the processed image is an image in which the motion blur of the image area where the moving object is located has been removed;
and displaying or storing the processed image.
2. The image processing method according to claim 1, wherein the first shutter function includes N-1 shutter sub-functions, the N-1 shutter sub-functions are in one-to-one correspondence with the N-1 third images, the shutter-open periods indicated by different sub-functions of the N-1 shutter sub-functions partially differ from one another, and the obtaining the processed image based on the second image, the N-1 third images and the image processing model includes:
determining a target image in the N-1 third images, wherein the sharpness of the target image is higher than that of the images other than the target image in the N-1 third images;
and inputting the second image and the target image into the image processing model to obtain the processed image.
3. The image processing method according to claim 2, further comprising:
calculating the similarity between the second image and the target image;
the step of inputting the second image and the target image into the image processing model to obtain the processed image includes:
and inputting the second image, the target image and the similarity into the image processing model to obtain the processed image.
4. The image processing method according to claim 1, wherein the obtaining the processed image based on the second image, the N-1 third images, and an image processing model includes:
determining a target image in the N-1 third images, wherein the sharpness of the target image is higher than that of the images other than the target image in the N-1 third images;
obtaining a fourth image based on the second image and the target image, wherein the fourth image comprises a second image area and a third image area, the second image area is the image area where the moving object is located in the target image, and the third image area is the image area of the second image other than the image area where the moving object is located;
and inputting the fourth image into the image processing model to obtain the processed image.
5. The image processing method as claimed in claim 4, further comprising:
calculating the similarity between the second image and the target image;
the step of inputting the fourth image into an image processing model to obtain the processed image comprises the following steps:
and inputting the fourth image and the similarity into the image processing model to obtain the processed image.
6. The image processing method according to claim 5, wherein the obtaining a fourth image based on the second image and the target image includes:
when the similarity between an image area of the second image and the corresponding image area in the target image is greater than a preset similarity, acquiring the pixels of that image area of the second image;
and replacing the pixels of the corresponding image area in the target image with the pixels of that image area of the second image to obtain the fourth image.
7. The image processing method according to any one of claims 2 to 6, wherein the determining a target image of the N-1 third images includes:
calculating the gradient value of each third image in the N-1 third images to obtain N-1 gradient values;
and obtaining the target image based on the N-1 gradient values.
8. The image processing method according to any one of claims 1 to 6, wherein the image processing model is further used for performing denoising processing.
9. The image processing method according to any one of claims 1 to 6, wherein the image processing model is obtained by a training method of:
acquiring a sample video, wherein the resolution of the sample video is greater than a preset resolution, the sample video comprises M consecutive frames, the M consecutive frames include an image area where a sample moving object is located, and M is an integer greater than or equal to 3;
performing fusion processing on M-1 consecutive frames of the M consecutive frames to obtain a first sample image;
performing brightness reduction processing on the M-th frame of the M consecutive frames to obtain a second sample image;
calculating a sample similarity between the first sample image and the second sample image;
inputting the sample similarity, the first sample image and the second sample image into an image processing model to be trained, and outputting a predicted image;
and updating the parameters of the image processing model to be trained based on the image difference between the predicted image and the M-th frame to obtain the image processing model.
10. An electronic device, comprising:
one or more processors and memory;
the memory is coupled with the one or more processors, the memory for storing computer program code comprising computer instructions that the one or more processors invoke to cause the electronic device to perform the image processing method of any of claims 1-9.
11. A chip system for application to an electronic device, the chip system comprising one or more processors for invoking computer instructions to cause the electronic device to perform the image processing method of any of claims 1 to 9.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, which when executed by a processor, causes the processor to perform the image processing method of any one of claims 1 to 9.