CN116233625A - Image processing method, electronic equipment and chip - Google Patents



Publication number
CN116233625A
Authority
CN
China
Prior art keywords
frame
reference frame
image
electronic device
registration result
Prior art date
Legal status
Pending
Application number
CN202111456653.7A
Other languages
Chinese (zh)
Inventor
张旭东
邢一博
郭鑫涛
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202111456653.7A priority Critical patent/CN116233625A/en
Publication of CN116233625A publication Critical patent/CN116233625A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An image processing method, an electronic device, and a chip relate to the field of image processing technologies. The method includes: acquiring an image frame set obtained by shooting a target object, where the image frame set includes a first reference frame, a second reference frame, and multiple image frames, the first reference frame is a fused frame obtained through multi-frame noise reduction, and the exposure time of the first reference frame is the same as that of the second reference frame; determining a target reference frame according to the sharpness of the first reference frame and of the second reference frame; registering the target reference frame with the multiple image frames to obtain a registration result; and outputting a result image according to the registration result. The method can improve the image quality of the result image and meet service requirements.

Description

Image processing method, electronic equipment and chip
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an electronic device, and a chip.
Background
With the continuous development of image processing technology, the image processing technology is widely applied to electronic devices such as mobile phones, tablet computers, cameras and the like. Based on the above, the electronic device may take a picture of an object such as a person, a landscape, or a building, obtain an image of the object, and record the object by means of the image.
In actual shooting, for a scene with a high dynamic range (HDR), a continuous shooting mode (also simply referred to as burst mode) is generally used in order to obtain better image quality. In the continuous shooting mode, the same object is continuously exposed multiple times to obtain multiple image frames, the multiple image frames are processed, for example, image frames with different exposure times are fused, and the processed image is output.
However, in some cases, processing such as fusing multiple image frames may not improve the image quality but may instead reduce it, making it difficult to satisfy service requirements.
Disclosure of Invention
The purpose of the present application is to provide an image processing method, an electronic device, and a chip, so as to improve image quality and meet service requirements.
In order to achieve the above purpose, the present application adopts the following technical solutions:
In a first aspect, the present application provides an image processing method, which may be performed by an electronic device. The method includes: the electronic device acquires an image frame set obtained by shooting a target object, where the image frame set includes a first reference frame, a second reference frame, and multiple image frames, the first reference frame is a fused frame obtained through multi-frame noise reduction, and the exposure time of the first reference frame is identical to that of the second reference frame. The electronic device may then compare the sharpness of the first reference frame with the sharpness of the second reference frame to determine a target reference frame, perform registration using the target reference frame and the multiple image frames in the image frame set to obtain a registration result, and determine a result image based on the registration result.
In the method, the electronic device uses the sharper target reference frame for subsequent processing, so the image quality of the result image can be improved. Moreover, the first reference frame is a fused frame obtained after multi-frame noise reduction and fusion; compared with the second reference frame it is cleaner, so the first reference frame can be used for subsequent processing when the second reference frame is blurred, for example because the user's hand shook while photographing. Performing subsequent processing based on the sharper reference frame therefore ensures the image quality of the output result image and improves the user experience.
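The patent does not name a particular sharpness metric; a common choice in practice is the variance of the Laplacian response, sketched below in Python with NumPy (the function names are illustrative, not from the patent):

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a 3x3 Laplacian response; higher means sharper.
    (One common focus measure -- the patent does not specify a metric.)"""
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=np.float64)
    h, w = gray.shape
    # 'valid' 2-D convolution, written out to avoid external dependencies
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

def pick_target_reference(frame1: np.ndarray, frame2: np.ndarray) -> str:
    """Return which reference frame to use for subsequent registration."""
    return "first" if sharpness(frame1) >= sharpness(frame2) else "second"
```

Any monotone focus measure would serve equally well here; the only property the method relies on is that the sharper of the two reference frames wins the comparison.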
The following is a detailed description of several different situations:
When the target reference frame is the second reference frame, that is, the second reference frame is sharper than the first reference frame, the electronic device may register the second reference frame with the multiple image frames to obtain a registration result. When the registration result indicates that the second reference frame and the multiple image frames can be registered, the electronic device fuses the second reference frame with the multiple image frames and outputs the resulting fused frame; when the registration result indicates that the second reference frame cannot be registered with the multiple image frames, the second reference frame is output directly, so that the image quality of the output result image is ensured.
When the target reference frame is the first reference frame, that is, the first reference frame is sharper than the second reference frame, the electronic device may register the first reference frame with the multiple image frames to obtain a registration result. When the registration result indicates that the first reference frame and the multiple image frames can be registered, the electronic device fuses the first reference frame with the multiple image frames and outputs the resulting fused frame; when the registration result indicates that the first reference frame cannot be registered with the multiple image frames, the first reference frame is output directly, so that the image quality of the output result image is ensured.
When the target reference frame includes both the first reference frame and the second reference frame, that is, the sharpness of the first reference frame is close to the sharpness of the second reference frame, for example, the difference between the two is lower than a preset difference, the electronic device may register the first reference frame and the second reference frame each with the multiple image frames to obtain a registration result.
When the registration result indicates that the first reference frame is registered with the multiple image frames and the second reference frame is not, the first reference frame is fused with the multiple image frames and the resulting fused frame is output. When the registration result indicates that the first reference frame is not registered with the multiple image frames and the second reference frame is, the second reference frame is fused with the multiple image frames and the resulting fused frame is output. That is, when the sharpness of the first reference frame is close to that of the second reference frame, the electronic device uses whichever reference frame can be registered for the subsequent fusion with the multiple image frames, thereby improving the image quality of the result image and meeting service requirements.
When the registration result indicates that both the first reference frame and the second reference frame are registered with the multiple image frames, the electronic device may fuse the first reference frame with the multiple image frames and output the resulting fused frame, because the first reference frame is a fused frame already obtained through multi-frame noise reduction and fusion. Of course, the electronic device may instead fuse the second reference frame with the multiple image frames and output that fused frame.
When the registration result indicates that neither the first reference frame nor the second reference frame is registered with the multiple image frames, the electronic device may directly output the first reference frame, again because the first reference frame is a fused frame obtained through multi-frame noise reduction and fusion. Of course, the electronic device may also directly output the second reference frame.
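The branching described in the cases above can be summarized in a small decision sketch (the labels and helper signature are hypothetical, chosen only to mirror the text):

```python
def choose_output(reg_first: bool, reg_second: bool, target: str) -> str:
    """Decide the result-image path per the cases in the description.

    target: 'first', 'second', or 'both' (sharpness within the preset
            difference); reg_first / reg_second: whether each reference
            frame registered successfully against the image frames.
    Returns a label naming which path produces the result image."""
    if target == "second":
        return "fuse_second" if reg_second else "output_second"
    if target == "first":
        return "fuse_first" if reg_first else "output_first"
    # target == "both": prefer the first reference frame, since it is
    # already a denoised, fused frame
    if reg_first:
        return "fuse_first"
    if reg_second:
        return "fuse_second"
    return "output_first"
```

The asymmetry in the "both" branch reflects the text's stated preference: the first reference frame has already been through multi-frame noise reduction, so it wins ties.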
In some possible implementations, the target reference frame is determined based on the sharpness of the first reference frame and of the second reference frame, and may be whichever of the two is sharper. The electronic device may then take the target reference frame as the baseline for the subsequently output image frames. Because the sharpness of the target reference frame is higher, its exposure time can be reduced while it still retains a high signal-to-noise ratio. Once the exposure time of the target (reference) frame is reduced, the exposure times of the image frames the electronic device subsequently outputs are also reduced, so more image detail can be captured.
Further, since the exposure time of the reference frame is shorter, that is, the short-exposure side becomes shorter, and the method also introduces a first image frame whose exposure time is longer than that of the first reference frame, that is, the long-exposure side becomes longer, the dynamic range of the result image obtained after the electronic device fuses the target reference frame with the multiple image frames is wider, further improving the image quality of the result image.
In some possible implementations, the multiple image frames include a first image frame whose exposure time is greater than that of the first reference frame and a second image frame whose exposure time is less than that of the first reference frame.
In a second aspect, the present application provides an electronic device, comprising: a memory and a processor;
one or more computer programs are stored in the memory, the one or more computer programs comprising instructions; the instructions, when executed by the processor, cause the electronic device to perform the method of any of the first aspects.
In a third aspect, the present application provides a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of the first aspects.
In a fourth aspect, the present application provides a computer program product comprising instructions; the instructions, when executed by an electronic device, cause the electronic device to perform the method of any of the first aspects.
In a fifth aspect, the present application provides a chip for processing image signals acquired by a camera to perform the steps of the method according to any one of the first aspects.
It should be appreciated that the description of technical features, aspects, benefits or similar language in this application does not imply that all of the features and advantages may be realized with any single embodiment. Conversely, it should be understood that the description of features or advantages is intended to include, in at least one embodiment, the particular features, aspects, or advantages. Therefore, the description of technical features, technical solutions or advantageous effects in this specification does not necessarily refer to the same embodiment. Furthermore, the technical features, technical solutions and advantageous effects described in the present embodiment may also be combined in any appropriate manner. Those of skill in the art will appreciate that an embodiment may be implemented without one or more particular features, aspects, or benefits of a particular embodiment. In other embodiments, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments.
Drawings
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 2A is a schematic diagram of a desktop interface according to an embodiment of the present disclosure;
fig. 2B is a schematic diagram of a camera interface according to an embodiment of the present application;
fig. 3 is a schematic diagram of an image signal processor according to an embodiment of the present application;
fig. 4 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 5A is a schematic diagram of a resulting image according to an embodiment of the present application;
FIG. 5B is a schematic illustration of yet another resulting image provided in an embodiment of the present application;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The terms "first", "second", "third", and the like in the description, the claims, and the drawings are used to distinguish between different objects, not to describe a specific order.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
For clarity and conciseness in the description of the following embodiments, a brief description of the related art will be given first:
the multi-frame fusion technology is to fuse multiple dynamic range images with different exposure time, so as to obtain a high dynamic range image. The high dynamic range image has a wider brightness range, and further the high dynamic range image can show a better visual effect.
However, in some cases, after processing such as fusion is performed on multiple image frames, the image quality of the fused image deteriorates. For example, when some of the differently exposed images are of poor quality, fusing them reduces the image quality of the resulting high-dynamic-range image, making it difficult to meet service requirements.
In view of this, embodiments of the present application provide an image processing method that may be performed by an electronic device. Specifically, the method includes the following steps: the electronic device acquires an image frame set obtained by shooting a target object, where the image frame set includes a first reference frame, a second reference frame, and multiple image frames, the first reference frame is an image frame that has undergone noise reduction, and the exposure time of the first reference frame is the same as that of the second reference frame. The electronic device may then compare the sharpness of the first reference frame with that of the second reference frame, determine a target reference frame, and use the target reference frame for subsequent processing: the electronic device registers the target reference frame with the multiple image frames to obtain a registration result, and then outputs a result image according to the registration result.
In the method, the first reference frame is an image frame that has undergone multi-frame noise reduction, and a second reference frame is also introduced, so before fusing the multiple image frames the electronic device compares the sharpness of the first reference frame with that of the second reference frame. The electronic device can thus select the sharper of the first reference frame and the second reference frame as the target reference frame. The electronic device then registers the target reference frame with the multiple image frames to obtain a registration result, and determines a result image based on it. If the registration result indicates that the target reference frame and the multiple image frames can be registered, the electronic device fuses the target reference frame with the multiple image frames; otherwise, the electronic device outputs the target reference frame directly. Because the sharpness of the target reference frame is higher, the sharpness of the obtained result image is also higher, so the quality of the result image is improved and service requirements are met.
In some examples, the electronic device may be a camera, a cell phone, a tablet, a desktop, a laptop, a notebook, an Ultra mobile personal computer (Ultra-mobile Personal Computer, UMPC), a handheld computer, a netbook, a personal digital assistant (Personal Digital Assistant, PDA), a wearable electronic device, a smart watch, etc., and the specific form of the electronic device is not particularly limited in this application. In this embodiment, the structure of the electronic device may be shown in fig. 1, and fig. 1 is a schematic structural diagram of the electronic device according to the embodiment of the present application.
As shown in fig. 1, the electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a user identification module (subscriber identification module, SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the configuration illustrated in this embodiment does not constitute a specific limitation on the electronic apparatus. In other embodiments, the electronic device may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors. For example, in the present application, the processor 110 may be an image signal processor.
The electronic device may implement shooting functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
In general, an image signal processor includes an image front end (IFE), a Bayer processing segment (BPS), an image processing engine (IPE), and an encoder. Optionally, the encoder may encode based on the JPEG (Joint Photographic Experts Group) algorithm.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then transmits the electrical signal to the ISP to be converted into a digital image signal. ISP processes the digital image signal to obtain YUV image signal. In some embodiments, the electronic device may include 1 or N cameras 193, N being a positive integer greater than 1.
As shown in fig. 2A, the electronic device may present a desktop interface to a user. The desktop interface includes a plurality of applications, which the user may launch by clicking their icons. Based on this, the user can click the icon 201 of the camera application, and the electronic device starts the camera application in response to that click operation.
Fig. 2B is a schematic diagram of a camera interface according to an embodiment of the present application. The electronic device may provide a plurality of photographing modes, such as a front HDR photographing mode and a rear HDR photographing mode. The camera interface includes a shooting control 210, a mode selection control 220, and an image preview area 230. The shooting control 210 is used to trigger a photographing event, the mode selection control 220 is used to switch the photographing mode, and the image preview area 230 is used to preview the shot picture. Fig. 2B shows the camera interface for the front HDR photographing mode. The image processing method provided by the embodiments of the present application can be applied in the front HDR photographing mode; of course, in other embodiments, it may also be applied in the rear HDR photographing mode, as well as in scenes such as video shooting or video calls.
Taking the front HDR photographing mode as an example, after the electronic device starts the camera application, the camera 193 starts to collect a preview frame sequence of the target object and stores it in a buffer for previewing and for use in subsequent steps. The target object may be a person, a landscape, a building, and so on. In some examples, the preview frame sequence may be as shown in table 1 below:
TABLE 1
Image frame   Preview frame 1   Preview frame 2   ...   Preview frame 7   Preview frame 8
Type          N                 N                 ...   N                 N
EV value      0                 0                 ...   0                 0
Here, the type N (normal) indicates that the image frame is a normally exposed frame, i.e., its exposure time is the reference time; the EV (exposure value) is used to characterize the exposure time of an image frame. EV = 0 indicates that the exposure time of the image frame is the reference time, which may be, for example, 100 ms or 30 ms.
Note that the preview buffer stores many preview frames; table 1 above only shows the 8 preview frames taken out by the electronic device when the user triggers a photographing event (5 preview frames may be taken out instead).
While the electronic device presents the preview image of the target object to the user, the user can click the shooting control 210 to trigger a photographing event; after the electronic device detects the user's click on the shooting control 210, it acquires an image frame set so as to subsequently output a result image.
In some examples, the set of image frames may be as shown in table 2 below:
TABLE 2
Image frame   Reference frame 1   Reference frame 2   Image frame 1   Image frame 2   Image frame 3   Image frame 4
Type          N                   N                   L               S               ES              ES
EV value      0                   0                   0.5             -2              -4              -5
Here, L (long) indicates that the image frame is a long-exposure frame, i.e., its exposure time is longer than the reference time; S (short) indicates a short-exposure frame, i.e., its exposure time is shorter than the reference time; and ES (extra short) indicates an ultra-short-exposure frame, i.e., its exposure time is shorter than that of the short-exposure frame. EV = 0.5 means that the exposure time of the image frame is longer than the reference time: when the reference time is 100 ms, the exposure time of image frame 1 is 141.42 ms; similarly, the exposure time of image frame 2 is 25 ms, that of image frame 3 is 6.25 ms, and that of image frame 4 is 3.125 ms. Reference frame 1 is the fused frame obtained by performing multi-frame noise reduction on the preview frames in table 1, reference frame 2 and image frame 1 are newly added image frames, and image frames 2, 3, and 4 are original image frames, as described below.
Referring to fig. 3, when a photographing event occurs, the data fed back by the camera (sensor) first reaches the IFE; this data may be, for example, the 5 or 8 frames of raw images fed back by the camera, such as the 8 preview frames shown in table 1 above. The IFE may first perform color correction. Then, the 8 preview frames output by the IFE may be noise-reduced based on the BPS and IPE. In the BPS, noise reduction processing such as dead-pixel removal, phase focusing, and demosaicing can be performed on the image frames output by the IFE; in the IPE, based on the multi-frame noise reduction (MFNR) technique, multi-frame noise reduction and fusion can be performed on the 8 preview frames output by the IFE to obtain the first reference frame (i.e., reference frame 1 shown in table 2).
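The EV values in table 2 are consistent with the usual relationship t = t_ref · 2^EV, which reproduces the exposure times quoted above (141.42 ms, 25 ms, 6.25 ms, and 3.125 ms for a 100 ms reference time):

```python
def exposure_time_ms(ev: float, reference_ms: float = 100.0) -> float:
    """Exposure time implied by an EV offset from the reference frame:
    each +1 EV doubles the exposure time, each -1 EV halves it."""
    return reference_ms * (2.0 ** ev)
```

For example, `exposure_time_ms(0.5)` gives roughly 141.42 ms for image frame 1, matching the value in the description.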
Reference frame 2 has the same exposure time as reference frame 1, and reference frame 2 can be used as the baseline in the present application to adjust the subsequently output frames, namely image frame 1, image frame 2, image frame 3, and image frame 4. When the user triggers a photographing event, the electronic device sequentially outputs reference frame 2, image frame 1, image frame 2, image frame 3, and image frame 4.
With continued reference to fig. 3, the ISP may compare the sharpness of the first reference frame and the second reference frame to determine a target reference frame, where the target reference frame is the reference frame whose sharpness is not lower than that of the other, i.e., the sharper of the first reference frame and the second reference frame is selected for subsequent processing. The target reference frame is then registered with the subsequent multiple image frames (for example, image frames 1-4). When the target reference frame and the multiple image frames meet a preset registration condition, they are fused and the resulting fused frame is output; when the preset registration condition cannot be reached, the target reference frame is output directly. The preset registration condition may be that a portion (e.g. 80%) of the images is logically identical, i.e., that that portion of the images reflects the same target area. In this way, the quality of the output result image is better, improving the user experience.
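The preset registration condition can be sketched as a simple overlap-ratio check, assuming registration yields a per-pixel mask marking where the images reflect the same target area (the mask representation is an assumption, not something the patent specifies):

```python
import numpy as np

def meets_registration_condition(overlap_mask: np.ndarray,
                                 threshold: float = 0.8) -> bool:
    """True when the fraction of pixels that registration maps onto the
    same target area reaches the preset condition (e.g. 80%).

    overlap_mask: array of 0/1 values, one per pixel."""
    return float(overlap_mask.mean()) >= threshold
```

If the check fails for the target reference frame, the pipeline above skips fusion and outputs the target reference frame on its own.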
The electronic device implements display functions via a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information. The display screen 194 is used to display images, for example, the display screen 194 may display the resulting images described above.
To make the technical solution of the present application clearer and easier to understand, the image processing method provided by the embodiments of the present application is described below with reference to the accompanying drawings from the perspective of the electronic device. Fig. 4 is a flowchart of an image processing method according to an embodiment of the present application; the method includes:
S401: The electronic device acquires an image frame set obtained by shooting a target object.
The image frame set includes a first reference frame (reference frame 1 shown in table 2 above), a second reference frame (reference frame 2 shown in table 2 above), and a multi-frame image frame (image frames 1-4 shown in table 2 above). The first reference frame is a fusion frame after noise reduction and fusion treatment, and the exposure time of the first reference frame is the same as that of the second reference frame.
As shown in fig. 3, when the user triggers a photographing event, the camera outputs 8 frames of original (raw) images (alternatively 5 or 6 frames of raw images), which are passed to the IFE. After IFE processing, noise reduction may be performed on the IFE output based on the BPS and IPE, through processing such as pre-filtering, blending and post-filtering, so as to obtain a noise-reduced and blended YUV image, and this YUV image is used as the first reference frame.
In some examples, when a user triggers a photographing event, the electronic device may sequentially obtain the first reference frame, the second reference frame, and the multi-frame image frames. To achieve this, the electronic device may obtain multiple preview frames based on a zero-shutter-lag (ZSL) technique; when a photographing event occurs (for example, the user triggers the shutter), the electronic device takes the preview frames out of a preview buffer and performs multi-frame noise reduction and fusion processing on them to obtain the first reference frame, while the second reference frame and the multi-frame image frames are obtained by sequentially outputting frames. Thus, the delay between the first reference frame and the second reference frame acquired by the electronic device is relatively large, for example 300 ms, while the delay between the second reference frame and the multi-frame image frames is relatively small, for example 20 ms.
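The ZSL behavior described above can be sketched as a fixed-size ring buffer of preview frames. The class name, capacity, and frame representation below are hypothetical illustrations, not the actual camera-pipeline API:

```python
from collections import deque


class PreviewRingBuffer:
    """Holds the most recent preview frames so that, at shutter time,
    frames captured *before* the shutter event can be fused with
    zero shutter lag."""

    def __init__(self, capacity: int = 8):
        # Oldest frames are dropped automatically once capacity is reached.
        self._frames = deque(maxlen=capacity)

    def push(self, frame) -> None:
        self._frames.append(frame)

    def snapshot(self) -> list:
        # Frames already in the buffer at shutter time, ready for
        # multi-frame noise reduction and fusion into the first reference frame.
        return list(self._frames)
```

For example, after pushing 10 preview frames into a buffer of capacity 8, a snapshot holds only the 8 most recent ones.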
It should be noted that, the electronic device performing the multi-frame denoising and fusion processing to obtain the first reference frame based on the multi-frame preview frame taken out from the preview buffer (buffer) is merely illustrative, and in other embodiments, the electronic device may also perform the multi-frame denoising and fusion processing to obtain the first reference frame based on the 8-frame image frame (for example, the image frame before the reference frame 2) before the shutter event triggered by the user.
The exposure time of the first reference frame differs from that of each of the multi-frame image frames. Taking the exposure time of the first reference frame as 100 ms as an example, the exposure time of image frame 2 in the multi-frame image frames may be 25 ms, the exposure time of image frame 3 may be 6.25 ms, and the exposure time of image frame 4 may be 3.125 ms. Thus, the dynamic range of the fused frame obtained by fusing the first reference frame with image frames 2, 3 and 4 is wider. In addition, in the embodiments of the present application, an image frame 1 is additionally added, where the exposure time of image frame 1 is greater than that of the first reference frame and may be 141.42 ms. Because image frame 1 with a longer exposure time is introduced, the dynamic range of the fused frame obtained by fusing the first reference frame with image frames 1-4 is further widened, improving the image quality of the result image. Similarly, fusing the second reference frame with image frames 1 to 4 can further improve the dynamic range of the resulting fused frame.
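The exposure times quoted above form an exposure-value (EV) ladder around the 100 ms reference frame; the check below uses the general photographic relation EV = log2(t / t_ref) and is an illustrative aside, not part of the embodiment:

```python
import math

reference_ms = 100.0  # exposure time of the first/second reference frame
exposures_ms = {
    "image frame 1": 141.42,  # long-exposure frame, roughly +0.5 EV
    "image frame 2": 25.0,    # -2 EV
    "image frame 3": 6.25,    # -4 EV
    "image frame 4": 3.125,   # -5 EV
}

# EV offset of each frame relative to the reference exposure.
ev_steps = {name: math.log2(t / reference_ms) for name, t in exposures_ms.items()}
```

This makes explicit why image frame 1 widens the highlight side of the dynamic range (+0.5 EV) while image frames 2-4 successively protect shadows and highlights at -2, -4 and -5 EV.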
S402: The electronic device determines a target reference frame according to the sharpness of the first reference frame and the second reference frame.
The sharpness (definition) of an image refers to the fineness of detail in the picture. In some examples, the electronic device may calculate the sharpness of the first reference frame and the sharpness of the second reference frame respectively, then compare the two, and determine the reference frame with higher sharpness from the first reference frame and the second reference frame as the target reference frame.
Taking the example that the electronic device calculates the definition of the first reference frame, the electronic device may calculate the square of the gray difference of each group of two adjacent pixels in the first reference frame based on a gradient function (for example, a brenner function), then sum the squares of the gray differences of each group of two adjacent pixels to calculate a mean value, and take the mean value as the definition of the first reference frame, wherein the larger the mean value is, the clearer the image is represented. Of course, in other examples, the electronic device may also calculate the sharpness of the first reference frame based on other gradient functions (e.g., tenengrad function, laplacian function). Similarly, the electronic device may also calculate the sharpness of the second reference frame based on the same manner.
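A minimal sketch of the Brenner-style sharpness measure described above. It assumes the classical Brenner formulation, which differences gray levels two pixels apart along one axis; the function name and the horizontal-only gradient are illustrative choices, not the embodiment's exact metric:

```python
import numpy as np


def brenner_sharpness(gray: np.ndarray) -> float:
    """Mean squared gray-level difference between pixels two columns
    apart (classical Brenner focus measure). A larger value indicates
    a sharper frame."""
    g = gray.astype(np.float64)
    diff = g[:, 2:] - g[:, :-2]  # horizontal differences at offset 2
    return float(np.mean(diff ** 2))
```

As a sanity check, a hard step edge scores higher than the same brightness range spread over a smooth ramp, and a uniform image scores zero.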
The method for calculating the definition of the first reference frame or the second reference frame by the electronic device is not particularly limited, and one skilled in the art may select other methods to calculate the definition of the first reference frame or the second reference frame according to actual needs.
The electronic device may compare the sharpness of the first reference frame with that of the second reference frame: when the sharpness of the first reference frame is higher, the target reference frame is the first reference frame; when the sharpness of the second reference frame is higher, the target reference frame is the second reference frame; and when the sharpness of the first reference frame is consistent with that of the second reference frame, the target reference frame may be either the first reference frame or the second reference frame.
It should be noted that the fact that the definition of the first reference frame is consistent with the definition of the second reference frame may mean that the difference between the definition of the first reference frame and the definition of the second reference frame is smaller than the preset difference, that is, it indicates that the definition of the first reference frame is close to the definition of the second reference frame.
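The comparison described in the two paragraphs above can be sketched as follows. The tolerance value and the tie-breaking preference for the denoised first reference frame are illustrative assumptions:

```python
def select_target_reference(sharp1: float, sharp2: float, tol: float = 1e-3) -> str:
    """sharp1 / sharp2: sharpness of the first / second reference frame.
    Values closer than `tol` are treated as "consistent" sharpness."""
    if abs(sharp1 - sharp2) < tol:
        # Either frame may serve as the target; here we prefer the
        # noise-reduced, fused first reference frame (an assumption).
        return "first"
    return "first" if sharp1 > sharp2 else "second"
```

The preset difference threshold (`tol`) is whatever value the implementation chooses for "close enough to be consistent".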
S403: The electronic device performs registration processing on the target reference frame and the multi-frame image frames to obtain a registration result.
In some examples, the electronic device may also perform registration processing on a target reference frame with higher definition and a multi-frame image frame in the first reference frame and the second reference frame, to obtain a registration result of the target reference frame and the multi-frame image frame. In this embodiment, the electronic device performs subsequent image processing based on the target reference frame with better definition, so that the definition of the output result image is better, the fineness of the picture is improved, and the image quality of the result image is improved.
Taking the registration processing of the first reference frame and the multi-frame image frames by the electronic device as an example: the electronic device may output a pixel-displacement matrix for the three red-green-blue (RGB) channels based on the pixel displacement between the first reference frame and the multi-frame image frames to be fused, perform Gaussian blur processing on the pixel-displacement matrix to obtain a Gaussian-blurred matrix, and input the Gaussian-blurred matrix into a meanStdDev function to obtain a standard deviation. When the standard deviation is greater than a preset threshold, the first reference frame and the multi-frame image frames are not registered; when the standard deviation is less than or equal to the preset threshold, the first reference frame and the multi-frame image frames are registered. Similarly, the electronic device may perform registration processing on the second reference frame and the multi-frame image frames in a similar manner, so as to obtain a registration result of the second reference frame and the multi-frame image frames.
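A sketch of the registration test just described. The text names OpenCV's Gaussian blur and meanStdDev; the NumPy-only version below mirrors that pipeline, and the kernel size, the assumption that the per-channel displacement matrix is precomputed, and the threshold are all illustrative:

```python
import numpy as np


def gaussian_blur(x: np.ndarray, sigma: float = 1.0, radius: int = 2) -> np.ndarray:
    # Separable Gaussian blur over the two spatial axes of an H x W x C array.
    t = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, x)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, out)


def frames_registered(displacement: np.ndarray, threshold: float) -> bool:
    """displacement: H x W x 3 per-channel pixel-displacement matrix between
    the reference frame and the frames to fuse (assumed precomputed).
    Registration succeeds when the blurred displacement varies little."""
    blurred = gaussian_blur(displacement.astype(np.float64))
    std = blurred.std(axis=(0, 1))  # per-channel standard deviation
    return float(std.max()) <= threshold
```

A uniform (including all-zero) displacement field registers; a field where half the image moved very differently from the other half does not.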
The method for registering the target reference frame and the multi-frame image frame by the electronic device is not particularly limited, and a person skilled in the art can select other methods for registering the target reference frame and the multi-frame image frame according to actual needs, for example, considering the efficiency of an algorithm and the like, so as to obtain a registration result.
S404: The electronic device outputs a result image according to the registration result.
When the registration results are different, the resulting images are different. Specifically, the result image may be a first reference frame, a second reference frame, a first fusion frame or a second fusion frame, where the first fusion frame is a fusion frame obtained by fusing the first reference frame and the multi-frame image frame, and the second fusion frame is a fusion frame obtained by fusing the second reference frame and the multi-frame image frame.
In some examples, the electronic device may determine the reference frame of the first reference frame and the second reference frame whose sharpness is not lower than that of the other, i.e., determine the target reference frame. The electronic device then combines this with the registration result to output either the target reference frame or a fused frame containing the target reference frame. Specifically, when the registration result indicates that the target reference frame is aligned with the multi-frame image frames, the target reference frame and the multi-frame image frames are fused and the fused frame is output; for example, the electronic device performs weighted calculation based on the target reference frame and the multi-frame image frames to obtain the fused frame. When the registration result indicates that the target reference frame is not aligned with the multi-frame image frames, the target reference frame is output.
For example, the sharpness of the first reference frame is not lower than the sharpness of the second reference frame may include two cases: the sharpness of the first reference frame is higher than the sharpness of the second reference frame and the sharpness of the first reference frame is consistent with the sharpness of the second reference frame.
When the definition of the first reference frame is higher than that of the second reference frame, the electronic equipment determines the first reference frame as a target reference frame, and the electronic equipment outputs the first reference frame or the first fusion frame by combining the registration result. Specifically, when the registration result represents that the first reference frame is aligned with the multi-frame image frame, the electronic equipment performs fusion processing on the first reference frame and the multi-frame image frame and outputs a first fusion frame; when the registration result indicates that the first reference frame and the multi-frame image frame are not aligned, the electronic device directly outputs the first reference frame.
In some scenes, the handheld electronic device moves after the shutter is triggered, so that the camera is displaced; as a result, the second reference frame and the multi-frame image frames obtained by sequentially outputting frames have poor sharpness, for example, they are blurred. The first reference frame, however, is obtained by the electronic device performing multi-frame noise reduction and fusion processing on preview frames taken out of a preview buffer based on the moment of the photographing event. Therefore, even if the user's hand shakes after triggering the shutter, the electronic device can take the first reference frame as the result image, ensuring the image quality of the result image and improving the user's photographing experience.
When the sharpness of the first reference frame is consistent with that of the second reference frame, the electronic device determines either the first reference frame or the second reference frame as the target reference frame, and outputs a first fusion frame or a second fusion frame in combination with the registration result. Specifically, when the registration result indicates that the first reference frame is registered with the multi-frame image frames, the electronic device fuses the first reference frame with the multi-frame image frames and outputs the first fusion frame; when the registration result indicates that the second reference frame is registered with the multi-frame image frames, the electronic device fuses the second reference frame with the multi-frame image frames and outputs the second fusion frame. When the registration result indicates that both the first reference frame and the second reference frame are registered with the multi-frame image frames, the electronic device may fuse either reference frame with the multi-frame image frames and use the resulting fused frame as the result image. Because the first reference frame is a fusion frame obtained after multi-frame noise reduction and fusion processing, the electronic device may preferentially fuse the first reference frame with the multi-frame image frames and use the obtained first fusion frame as the result image, further improving the image quality of the result image.
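The per-case output rules above, for the case where the two reference frames have consistent sharpness, can be sketched as a small decision function. The preference for the denoised first reference frame when both register, and the fall-back to the first reference frame when neither registers, follow the text and the apparatus description; the string labels are illustrative:

```python
def choose_result(reg1: bool, reg2: bool) -> str:
    """reg1 / reg2: whether the first / second reference frame registers
    with the multi-frame image frames (sharpness already consistent)."""
    if reg1:
        # Prefer the noise-reduced first reference frame when it registers.
        return "first fusion frame"
    if reg2:
        return "second fusion frame"
    # Neither registers: output the denoised first reference frame directly.
    return "first reference frame"
```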
Similarly, the case where the sharpness of the second reference frame is not lower than that of the first reference frame may also include two sub-cases: the sharpness of the second reference frame is higher than that of the first reference frame, or the sharpness of the second reference frame is consistent with that of the first reference frame.
When the definition of the second reference frame is higher than that of the first reference frame, the electronic equipment determines the second reference frame as a target reference frame, and the electronic equipment outputs a second reference frame or a second fusion frame by combining the registration result; specifically, when the registration result represents that the second reference frame is aligned with the multi-frame image frame, the electronic equipment performs fusion processing on the second reference frame and the multi-frame image frame and outputs a second fusion frame; when the registration result indicates that the second reference frame is not aligned with the multi-frame image frame, the electronic device directly outputs the second reference frame.
When the definition of the first reference frame is consistent with the definition of the second reference frame, the process of outputting the result image by the electronic device may refer to the above example, which is not described herein.
In some embodiments, the multi-frame image frames include a plurality of image frames with different exposure times. For example, the multi-frame image frames include image frame 2, image frame 3, and image frame 4, where the exposure time of image frame 2 is greater than that of image frame 3, the exposure time of image frame 3 is greater than that of image frame 4, and the exposure time of the target reference frame is greater than that of image frame 2. The electronic device may fuse image frames 2, 3 and 4 with the target reference frame, thereby improving the dynamic range of the fused frame obtained after the fusion processing.
In other examples, the multi-frame image frames further include image frame 1, whose exposure time is greater than that of the target reference frame, which in turn is greater than the exposure times of image frames 2, 3 and 4. That is, the exposure time of the long-exposure frame becomes longer. The target reference frame is the sharper of the first reference frame and the second reference frame. Taking the target reference frame as the first reference frame as an example: because the first reference frame is a fusion frame after multi-frame noise reduction and fusion processing, the electronic device does not need a longer exposure time to obtain a higher signal-to-noise ratio, and can therefore reduce the exposure time of the first reference frame. Because the reference frame's exposure time is reduced, the exposure times of the multi-frame image frames obtained by sequentially outputting frames are also reduced, i.e., the exposure times of the short-exposure frames (image frames 2 to 4 shown in table 2) are shorter, so that more image details can be obtained. After the electronic device fuses the first reference frame with the multi-frame image frames, the dynamic range of the obtained fused frame becomes wider, further improving the visual effect of the fused frame.
Based on this, the image processing method provided in the embodiments of the present application may be applied to scenes with poor lighting, for example, capturing a street at dusk. In some examples, the electronic device may set the sensitivity (ISO) of the camera to 1000 or a value greater than 1000, such as 1250 or 1600. In the embodiments of the present application, a long-exposure frame (such as image frame 1 shown in table 2) is further introduced, so that after the electronic device fuses the multi-frame image frames with the target reference frame, the dynamic range of the obtained fused frame is wider. Even in scenes with poor lighting, the electronic device can capture images with good visual effect, meeting the service requirements.
As shown in fig. 5A and fig. 5B, fig. 5A is a schematic diagram of an image obtained in a conventional manner, and fig. 5B is a schematic diagram of an image obtained after being processed by the image processing method provided in the embodiment of the present application. As can be seen from fig. 5A and 5B, fig. 5B is not only clearer than fig. 5A, but also the dynamic range of the image is wider.
In other examples, the electronic device may select a set number of image frames whose sharpness satisfies a preset condition from image frames 2 to 4, and then fuse the target reference frame, the selected image frames, and image frame 1 to obtain a fused frame. The set number may be an even number, and the image frames whose sharpness satisfies the preset condition may be the top-ranked image frames by sharpness.
For example, the definition of the image frame 2 is greater than the definition of the image frame 3, the definition of the image frame 4 is greater than the definition of the image frame 2, and the electronic device may perform fusion processing on the image frame 2, the image frame 4, the image frame 1, and the target reference frame.
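The frame-selection step above — keeping a set number of the sharpest short-exposure frames before fusion — might be sketched as follows; the `(name, sharpness)` tuple representation is an illustrative assumption:

```python
def pick_sharpest(frames, count):
    """frames: iterable of (name, sharpness) pairs.
    Returns the `count` sharpest frames, highest sharpness first."""
    return sorted(frames, key=lambda f: f[1], reverse=True)[:count]
```

With the ranking from the example above (image frame 4 sharper than image frame 2, which is sharper than image frame 3) and a set number of 2, the selection keeps image frames 4 and 2, which are then fused with image frame 1 and the target reference frame.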
The embodiment of the application does not specifically limit the manner of fusing the target reference frame and the multi-frame image frame by the electronic device, and a person skilled in the art can select a proper manner according to actual needs to perform fusion processing.
Based on the above description, the embodiments of the present application provide an image processing method. In the method, a second reference frame is introduced, and a target reference frame for subsequent processing is determined based on the sharpness of the first reference frame and the second reference frame. The target reference frame is the reference frame of the two whose sharpness is not lower than that of the other, and using it for subsequent processing improves the sharpness of the result image. The first reference frame is a fusion frame obtained after multi-frame noise reduction and fusion processing and is generally clearer than the second reference frame, so the first reference frame can be used for subsequent processing when the second reference frame is unclear due to the user's hand shaking during photographing. In this way, the electronic device determines the sharper reference frame of the first and second reference frames, i.e., the target reference frame. The electronic device then performs registration processing on the target reference frame and the multi-frame image frames: if they can be registered, they are fused; otherwise, the electronic device directly outputs the target reference frame. Because the sharpness of the target reference frame is higher, the sharpness of the obtained result image is higher, further improving the quality of the result image and meeting the service requirements.
Furthermore, before performing the multi-frame fusion processing, the electronic device compares the sharpness of the first reference frame with that of the second reference frame and fuses the sharper target reference frame with the multi-frame image frames. Because the sharpness of the target reference frame is higher, a high signal-to-noise ratio can be achieved with a shorter exposure time; for the multi-frame image frames, the shortened exposure time allows more image details to be captured, further improving the dynamic range of the result image obtained after fusion. The shorter exposure time also reduces the probability of ghosting, further improving the quality of the result image.
Furthermore, the electronic device introduces a long exposure frame (such as image frame 1 shown in table 2) in the multi-frame image frame during the fusion process, and the exposure time of the long exposure frame is longer than that of the target reference frame. Therefore, the exposure time of the long exposure frame is longer, the exposure time of the short exposure frame is shorter, and the electronic equipment further improves the dynamic range of the obtained result image and further improves the image quality of the result image after carrying out fusion processing on the multi-frame image frame and the target reference frame.
Some embodiments of the present application also provide an electronic device. As shown in fig. 6, the electronic device may include one or more cameras 601, one or more processors 602, a memory 603, and one or more computer programs 604, where these components may be connected via one or more communication buses 605. The one or more computer programs 604 are stored in the memory 603 and configured to be executed by the one or more processors 602, and the one or more computer programs 604 include instructions that may be used to perform the steps performed by the electronic device in the embodiment corresponding to fig. 4. Of course, the electronic device shown in fig. 6 may further include other components, such as a sensor module, an audio module, and a SIM card interface, which is not limited in the embodiments of the present application.
Optionally, the camera 601 is configured to obtain an image frame set obtained by photographing a target object, where the image frame set includes a first reference frame, a second reference frame, and a plurality of frame image frames, the first reference frame is a fusion frame that is subjected to multi-frame noise reduction processing, and an exposure time of the first reference frame is the same as an exposure time of the second reference frame; the processor 602 is configured to determine a target reference frame according to the sharpness of the first reference frame and the second reference frame; registering the target reference frame with the multi-frame image frame to obtain a registration result; and outputting a result image according to the registration result.
Optionally, when the target reference frame is the second reference frame, the processor 602 is specifically configured to fuse the second reference frame with the multi-frame image frame when the registration result indicates that the second reference frame is aligned with the multi-frame image frame, and output a fused frame fused with the second reference frame.
Optionally, the processor 602 is specifically configured to output the second reference frame when the registration result characterizes that the second reference frame is misaligned with the multi-frame image frame.
Optionally, when the target reference frame is the first reference frame, the processor 602 is specifically configured to fuse the first reference frame with the multi-frame image frame when the registration result indicates that the first reference frame is aligned with the multi-frame image frame, and output a fused frame fused with the first reference frame.
Optionally, the processor 602 is specifically configured to output the first reference frame when the registration result characterizes that the first reference frame is misaligned with the multi-frame image frame.
Optionally, when the target reference frame includes the first reference frame and the second reference frame, the processor 602 is specifically configured to:
when the registration result represents that the first reference frame is registered with the multi-frame image frame and the second reference frame is not registered with the multi-frame image frame, fusing the first reference frame with the multi-frame image frame, and outputting a fused frame fused with the first reference frame; or,
when the registration result represents that the first reference frame is not registered with the multi-frame image frame and the second reference frame is registered with the multi-frame image frame, fusing the second reference frame with the multi-frame image frame, and outputting a fused frame fused with the second reference frame; or,
when the registration result represents that the first reference frame is registered with the multi-frame image frame and the second reference frame is registered with the multi-frame image frame, fusing the first reference frame with the multi-frame image frame, and outputting a fused frame fused with the first reference frame; or,
outputting the first reference frame when the registration result characterizes that the first reference frame is not registered with the multi-frame image frame and the second reference frame is not registered with the multi-frame image frame.
The embodiment of the application also provides a chip for processing the image signals acquired by the camera so as to execute the method in the embodiment.
The present embodiment also provides a computer-readable storage medium including instructions that, when executed on an electronic device, cause the electronic device to perform the relevant method steps of fig. 4 to implement the method of the above embodiment.
The present embodiment also provides a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the relevant method steps as in fig. 4 to implement the method of the above embodiments.
The present embodiment also provides a control device comprising a processor and a memory for storing computer program code comprising computer instructions which, when executed by the processor, perform the relevant method steps as in fig. 4 to implement the method in the above embodiments. The control device may be an integrated circuit IC or a system on chip SOC. The integrated circuit can be a general-purpose integrated circuit, a field programmable gate array FPGA, or an application specific integrated circuit ASIC.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above. The specific working processes of the above-described systems, devices and units may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.
In the several embodiments provided in this embodiment, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present embodiment may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present embodiment may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform all or part of the steps of the method described in the respective embodiments. And the aforementioned storage medium includes: flash memory, removable hard disk, read-only memory, random access memory, magnetic or optical disk, and the like.
The foregoing descriptions are merely specific embodiments of this application, but the protection scope of this application is not limited thereto; any variation or replacement within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (11)

1. An image processing method, comprising:
acquiring an image frame set obtained by shooting a target object, wherein the image frame set comprises a first reference frame, a second reference frame, and a plurality of image frames, the first reference frame is a fused frame obtained through multi-frame noise reduction processing, and an exposure duration of the first reference frame is the same as an exposure duration of the second reference frame;
determining a target reference frame according to the sharpness of the first reference frame and the second reference frame;
registering the target reference frame with the plurality of image frames to obtain a registration result; and
outputting a result image according to the registration result.
2. The method according to claim 1, wherein when the target reference frame is the second reference frame, the outputting a result image according to the registration result comprises:
when the registration result indicates that the second reference frame is aligned with the plurality of image frames, fusing the second reference frame with the plurality of image frames, and outputting a fused frame obtained by fusing the second reference frame.
3. The method according to claim 1 or 2, wherein the outputting a result image according to the registration result comprises:
outputting the second reference frame when the registration result indicates that the second reference frame is not aligned with the plurality of image frames.
4. The method according to any one of claims 1 to 3, wherein when the target reference frame is the first reference frame, the outputting a result image according to the registration result comprises:
when the registration result indicates that the first reference frame is aligned with the plurality of image frames, fusing the first reference frame with the plurality of image frames, and outputting a fused frame obtained by fusing the first reference frame.
5. The method according to any one of claims 1 to 4, wherein the outputting a result image according to the registration result comprises:
outputting the first reference frame when the registration result indicates that the first reference frame is not aligned with the plurality of image frames.
6. The method according to any one of claims 1 to 5, wherein when the target reference frame comprises the first reference frame and the second reference frame, the outputting a result image according to the registration result comprises:
when the registration result indicates that the first reference frame is aligned with the plurality of image frames and the second reference frame is not aligned with the plurality of image frames, fusing the first reference frame with the plurality of image frames, and outputting a fused frame obtained by fusing the first reference frame; or
when the registration result indicates that the first reference frame is not aligned with the plurality of image frames and the second reference frame is aligned with the plurality of image frames, fusing the second reference frame with the plurality of image frames, and outputting a fused frame obtained by fusing the second reference frame; or
when the registration result indicates that the first reference frame is aligned with the plurality of image frames and the second reference frame is aligned with the plurality of image frames, fusing the first reference frame with the plurality of image frames, and outputting a fused frame obtained by fusing the first reference frame; or
outputting the first reference frame when the registration result indicates that neither the first reference frame nor the second reference frame is aligned with the plurality of image frames.
7. The method according to any one of claims 1 to 6, wherein the plurality of image frames comprises a first image frame and a second image frame, an exposure duration of the first image frame is greater than the exposure duration of the first reference frame, and an exposure duration of the second image frame is less than the exposure duration of the first reference frame.
8. An electronic device, comprising: a memory and a processor;
wherein the memory stores one or more computer programs comprising instructions; and when the instructions are executed by the processor, the electronic device is caused to perform the method according to any one of claims 1 to 7.
9. A computer storage medium, comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method according to any one of claims 1 to 7.
10. A computer program product comprising instructions which, when executed by an electronic device, cause the electronic device to perform the method according to any one of claims 1 to 7.
11. A chip, configured to process an image signal acquired by a camera, to perform the method according to any one of claims 1 to 7.
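As an illustration only, and not the patent's actual implementation, the decision flow recited in claims 1 to 6 — choose the sharper of the two reference frames as the target, register it against the remaining frames, fuse when registration succeeds, and fall back to outputting the target reference frame when it fails — can be sketched as follows. The helpers `sharpness`, `is_aligned`, and `fuse` are simplified stand-ins; the claims do not specify the sharpness metric, registration algorithm, or fusion method.

```python
def sharpness(frame):
    """Toy sharpness metric: mean absolute difference between adjacent
    pixels (a crude stand-in for, e.g., a Laplacian-variance measure)."""
    return sum(abs(a - b) for a, b in zip(frame, frame[1:])) / max(len(frame) - 1, 1)

def is_aligned(ref, frames, tol=10.0):
    """Stand-in registration check: treat the frames as 'aligned' with the
    reference if every frame's mean is within tol of the reference's mean."""
    mean = lambda f: sum(f) / len(f)
    return all(abs(mean(ref) - mean(f)) <= tol for f in frames)

def fuse(ref, frames):
    """Stand-in fusion: per-pixel average of the reference and all frames."""
    stack = [ref, *frames]
    return [sum(px) / len(stack) for px in zip(*stack)]

def process(first_ref, second_ref, frames):
    """Claims 1-6 in miniature: pick the sharper reference frame as the
    target, fuse it with the other frames when registration succeeds,
    otherwise output the target reference frame as-is."""
    target = first_ref if sharpness(first_ref) >= sharpness(second_ref) else second_ref
    if is_aligned(target, frames):
        return fuse(target, frames)
    return target
```

In this sketch the frames are 1-D pixel lists for brevity; a real pipeline would operate on 2-D images and would register per-region rather than comparing global statistics.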

Priority Applications (1)

Application Number: CN202111456653.7A — Priority/Filing Date: 2021-12-01 — Title: Image processing method, electronic equipment and chip

Publications (1)

Publication Number: CN116233625A — Publication Date: 2023-06-06

Family

ID=86579141

Family Applications (1)

Application Number: CN202111456653.7A (Pending) — Filed: 2021-12-01 — Title: Image processing method, electronic equipment and chip



Legal Events

Code — Description
PB01 — Publication
SE01 — Entry into force of request for substantive examination