CN116309130A - Image processing method and device - Google Patents

Image processing method and device Download PDF

Info

Publication number
CN116309130A
Authority
CN
China
Prior art keywords
image
image frame
noise reduction
frequency information
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310129714.1A
Other languages
Chinese (zh)
Inventor
徐荣鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202310129714.1A priority Critical patent/CN116309130A/en
Publication of CN116309130A publication Critical patent/CN116309130A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method and device, belonging to the technical field of image processing. The image processing method includes: acquiring a first image frame and a second image frame of a target video, where the first image frame and the second image frame are adjacent image frames and the first image frame is the previous image frame of the second image frame; acquiring target information, where the target information includes first high-frequency information at a first moment and second high-frequency information at a second moment in an original image of the target video; and performing noise reduction processing on the second image frame according to the first image frame and the target information.

Description

Image processing method and device
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method and an image processing device.
Background
With the continuous development of terminal technology, video shooting has become an indispensable function of terminal devices, and image quality is a key index for evaluating a terminal's shooting capability.
To improve image quality in shooting scenes with significant image noise, the terminal device can automatically perform noise reduction during shooting. For example, in a night-scene video shooting scene, the low ambient brightness makes the output of the terminal device's image sensor noisy, and the image signal processor (Image Signal Processor, ISP) can perform online noise reduction on the captured images through a noise reduction algorithm.
However, noise reduction during shooting is limited by factors such as device performance, power consumption, and heat generation, so a computationally heavy artificial-intelligence model cannot be used, and the noise reduction effect leaves room for improvement. In the offline noise reduction mode, although a computationally heavy artificial-intelligence model can be used, the model input is generally video that has already been processed by the image signal processor, in which the high-frequency information of the images has been damaged and sharpness is insufficient; the offline noise reduction effect is therefore limited by the quality of the input images.
Disclosure of Invention
The embodiments of the present application aim to provide an image processing method and device that can address the limited noise reduction quality of video images in the prior art.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a first image frame and a second image frame of a target video; wherein the first image frame and the second image frame are adjacent image frames, and the first image frame is a previous image frame of the second image frame;
acquiring target information; wherein the target information includes: first high-frequency information at a first moment and second high-frequency information at a second moment in an original image of the target video, the first moment being the moment corresponding to the first image frame and the second moment being the moment corresponding to the second image frame;
and performing noise reduction processing on the second image frame according to the first image frame and the target information.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the first acquisition module is used for acquiring a first image frame and a second image frame of the target video; wherein the first image frame and the second image frame are adjacent image frames, and the first image frame is a previous image frame of the second image frame;
the second acquisition module is used for acquiring target information; wherein the target information includes: first high-frequency information at a first moment and second high-frequency information at a second moment in an original image of the target video, the first moment being the moment corresponding to the first image frame and the second moment being the moment corresponding to the second image frame;
and the first noise processing module is used for carrying out noise reduction processing on the second image frame according to the first image frame and the target information.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps in the image processing method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement steps in an image processing method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the image processing method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the image processing method according to the first aspect.
In the embodiments of the present application, lossless high-frequency information can be obtained and stored during video shooting based on the original images that have not undergone noise reduction. When offline noise reduction is performed on the video images, applying this high-frequency information to the noise reduction process can compensate for the loss of high-frequency information in the image to be denoised, improve image sharpness, and thus improve the noise reduction effect.
Drawings
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a camera main interface provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a video editing main interface provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a custom noise reduction editing interface provided in an embodiment of the present application;
FIG. 5 is an exemplary schematic diagram of a noise reduction process provided by an embodiment of the present application;
fig. 6 is a schematic diagram of an example of obtaining high frequency information and motion vectors provided by an embodiment of the present application;
fig. 7 is a schematic block diagram of an image processing apparatus provided in an embodiment of the present application;
FIG. 8 is a schematic block diagram of an electronic device provided by an embodiment of the present application;
fig. 9 is a schematic hardware structure of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar objects and do not necessarily describe a particular order or sequence. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. The objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application, where the image processing method is applied to an electronic device, that is, steps in the image processing method are performed by the electronic device.
The image processing method may include:
step 101: a first image frame and a second image frame of a target video are acquired.
The first image frame and the second image frame are adjacent image frames in the target video, and the first image frame is the previous image frame of the second image frame.
In the embodiments of the present application, offline noise reduction can be performed on a recorded video frame by frame. For frame-by-frame noise reduction, in addition to decoding the image frame to be denoised from the video file, the previous image frame must also be decoded in order to denoise the image frame to be denoised. In the embodiments of the present application, the second image frame is the image frame to be denoised, and the first image frame is the previous image frame of the image frame to be denoised. Offline noise reduction refers to noise reduction performed after video shooting is completed.
Optionally, during video shooting, the video images may be encoded using the H.264 standard in MPEG-4 (ISO/IEC 14496) or the H.265 standard, and the shot video, i.e., the target video, may be saved in the MP4 format. Of course, other encoding modes and storage formats can also be used, which can be set according to actual requirements.
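As an illustration of the frame-pair decoding described above (a sketch, not part of the patent text), the following uses OpenCV to decode an adjacent frame pair from an MP4 file; the function name and the use of 0-based frame indices are assumptions.

```python
# Hypothetical sketch: decode frame t-1 (first image frame) and frame t
# (second image frame) from the recorded target video with OpenCV.
import cv2

def get_adjacent_frames(video_path: str, t: int):
    """Return (first_frame, second_frame) = frames t-1 and t of the video."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, t - 1)  # seek to frame t-1 (0-based)
    ok1, first_frame = cap.read()            # previous image frame
    ok2, second_frame = cap.read()           # image frame to be denoised
    cap.release()
    if not (ok1 and ok2):
        raise ValueError("could not decode frames t-1 and t")
    return first_frame, second_frame
```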
Step 102: and acquiring target information.
Wherein, the target information may include: the method comprises the steps of obtaining first high-frequency information at a first moment and second high-frequency information at a second moment in an original image of a target video, wherein the first moment is a moment corresponding to a first image frame, and the second moment is a moment corresponding to a second image frame. The original image of the target video refers to an image which is not subjected to noise reduction processing in the video shooting process, such as a Raw image which is not subjected to noise reduction processing.
The first moment is the moment corresponding to the first image frame, and the second moment is the moment corresponding to the second image frame. Assuming that the first image frame is the image frame at time t-1 in the target video and the second image frame is the image frame at time t, the first moment is time t-1 and the second moment is time t.
In the embodiments of the present application, the high-frequency information of the images that have not undergone noise reduction is retained during video shooting. The high-frequency information of an image is the information at positions where image intensity (e.g., brightness or gray scale) changes sharply, i.e., at the so-called edge and contour positions, to which the human eye is more sensitive. When offline noise reduction is performed on the target video, the original high-frequency information retained during shooting can provide richer image contour information and improve the sharpness of the denoised image.
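As a minimal illustration of what "high-frequency information" means here (a sketch, not the patent's procedure; the patent obtains it by subtracting the online-denoised frame from the raw frame, see step 607 below), the residual between an image and a smoothed version of it concentrates edge and contour detail:

```python
# Illustrative only: a Gaussian blur stands in for a denoiser; the
# residual keeps the sharp-transition (edge/contour) content.
import cv2
import numpy as np

def high_frequency_residual(image: np.ndarray) -> np.ndarray:
    smoothed = cv2.GaussianBlur(image, (5, 5), 1.5)
    return image.astype(np.float32) - smoothed.astype(np.float32)
```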
Step 103: and carrying out noise reduction processing on the second image frame according to the first image frame and the target information.
According to the first image frame and the first high-frequency information, a first image frame with richer high-frequency information can be obtained; similarly, according to the second image frame and the second high-frequency information, a second image frame with richer high-frequency information can be obtained. Performing noise reduction based on these high-frequency-enriched image frames can compensate for the high-frequency information loss of the image to be denoised, improve image sharpness, and improve the noise reduction effect.
As an alternative embodiment, before step 101 (acquiring the first image frame and the second image frame of the target video), the image processing method may further include:
step A1: and receiving a noise reduction intensity value input by a user.
In the embodiments of the present application, a custom noise reduction function is provided for offline noise reduction, through which a user can customize the noise reduction intensity, including reducing or increasing the image noise, to meet different user requirements. The noise reduction intensity can range over [-α, α], with α > 0. A noise reduction intensity greater than 0 means reducing the image noise; a noise reduction intensity less than 0 means increasing the image noise.
As shown in fig. 2, the main interface 201 of the album application displays a plurality of video files. After the user selects a video to be edited, for example the "Night Scene A" video, and taps the edit control at the bottom of the interface, the editing main interface 301 for the "Night Scene A" video is entered, as shown in fig. 3. After tapping the custom noise reduction control, the user enters the custom noise reduction editing interface 401 shown in fig. 4, where the noise reduction intensity can be adjusted by sliding the noise reduction intensity adjustment control 4011 left and right. Sliding left from the 0 value increases the image noise, which can give the video a grainier look; sliding right from the 0 value reduces the image noise, which can make the video smoother. The image display area 4012 displays a preview of the processing result for a single video frame. After the user confirms the adjustment, the terminal device can process the "Night Scene A" video based on the user-defined noise reduction intensity and save the processed video to the local album.
In the embodiment of the application, by adding the user-defined noise reduction function, different image processing requirements of users can be met.
Step A2: in the case where the noise reduction intensity value is greater than 0, a step of acquiring a first image frame and a second image frame of the target image frame is performed.
When the noise reduction intensity value set by the user is greater than 0, which indicates that the noise reduction process is to be performed on the image, steps 101 to 103 may be performed.
Step A3: and under the condition that the noise reduction intensity value is smaller than 0, acquiring a second image frame in the target video and second high-frequency information at a second moment, and adjusting the noise intensity of the second image frame according to the noise reduction intensity value and the second high-frequency information.
When the noise reduction intensity value set by the user is less than 0, the image noise is to be increased, and noise-increasing processing can be performed on the second image frame based on the user-set noise reduction intensity value and the second high-frequency information.
Optionally, "adjusting the noise intensity of the second image frame according to the noise reduction intensity value and the second high frequency information" in step A3 may include:
acquiring the ratio of the absolute value of the noise reduction intensity value to the maximum noise reduction intensity value; obtaining the product of the second high-frequency information and this ratio; and adding the product to the second image frame to obtain a third image frame with adjusted noise intensity.
In the embodiments of the present application, since the second high-frequency information is separated from the original image, its noise form has not been damaged by the ISP module, its noise granularity is finer, and part of the object edge and detail information is retained. Image noise can therefore be added by directly superimposing high frequencies onto the second image frame: the product of the second high-frequency information and the ratio of the absolute value of the user-set noise reduction intensity value to the maximum noise reduction intensity value is superimposed onto the second image frame, obtaining the third image frame, i.e., the second image frame with adjusted noise intensity.
The above scheme will be illustrated by taking fig. 5 as an example.
As shown in fig. 5, this example may include:
step 501: and entering a custom noise reduction editing mode.
Step 502: and acquiring a noise reduction intensity value beta set by a user.
Wherein, the value range of the noise reduction intensity value is [ -alpha, alpha ], -alpha is not less than beta is not less than alpha.
Step 503: and judging the relation between the noise reduction intensity value beta and 0. If the noise reduction intensity value β is less than 0, the process proceeds to step 504.
Step 504: acquiring a first image I at t moment from recorded video 1
Wherein the first image I 1 Is a YUV image corresponding to the second image frame described in step 101.
Step 505: from the first image I 1 The Y channel image is separated and recorded as a second image I 2 And buffer the first image I 1 U, V channel images of (c).
Step 506: acquiring high-frequency information Res at time t t
The time t corresponds to the second time, the high-frequency information Res t Corresponding to the second high frequency information in the target information described in step 102.
Step 507: according to the first image I 1 A second image I2, a noise reduction intensity value beta and high frequency information Res t Performing granularity processing to obtain a third image I 3
Here, theThe third image I 3 Corresponding to the aforementioned third image frame.
The granularity processing process can be expressed as follows:
Figure BDA0004083542980000071
in formula (1), concat refers to combining and arranging image data of Y, U, V channels according to the original YUV sequence, and U refers to a first image I 1 V represents the first image I 1 And alpha is the maximum noise reduction intensity value.
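A minimal sketch of formula (1), assuming the YUV image is held as separate numpy planes and Res_t has already been resized to match the Y plane; the names are illustrative, not the patent's code:

```python
import numpy as np

def add_grain(y, u, v, res_t, beta, alpha):
    """Granularity processing of formula (1) for beta < 0: returns I3 planes."""
    y3 = y.astype(np.float32) + (abs(beta) / alpha) * res_t  # I2 + (|beta|/alpha)*Res_t
    y3 = np.clip(y3, 0, 255).astype(np.uint8)
    return y3, u, v  # Concat: recombine with the cached U, V channels
```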
Step 508: will third image I 3 And (5) performing recompression encoding and writing the recompression encoding into a new video file.
As an alternative embodiment, the target information may further include: a target motion vector of the image content of the second image relative to the image content of the first image, the target motion vector being used for pixel alignment between the first image frame and the second image frame during noise reduction. The first image is the image corresponding to the first image frame that has undergone noise reduction, such as a denoised Raw image. The second image is the original image corresponding to the second image frame that has not undergone noise reduction, such as a Raw image without noise reduction.
Step 103: the noise reduction processing of the second image frame according to the first image frame and the target information may include:
step B1: the first high frequency information is superimposed on the first image frame to obtain a fourth image frame.
Step B2: and overlapping the second high-frequency information to the second image frame to obtain a fifth image frame.
Step B3: and carrying out pixel alignment on the fourth image frame and the fifth image frame based on the target motion vector to obtain a sixth image frame.
Step B4: and inputting the fifth image frame, the sixth image frame and the noise reduction intensity value into a target noise reduction model to obtain a seventh image frame after noise reduction processing.
The target noise reduction model is a pre-trained model with high computational capacity; performing noise reduction with this model can improve the noise reduction effect.
According to the embodiments of the present application, a fourth image frame with richer high-frequency information can be obtained from the first image frame and the first high-frequency information; similarly, a fifth image frame with richer high-frequency information can be obtained from the second image frame and the second high-frequency information. Performing noise reduction based on the high-frequency-enriched fourth and fifth image frames can compensate for the high-frequency information loss of the second image frame, improve the sharpness of the denoised seventh image frame, and further improve the noise reduction effect.
The following will take fig. 5 as an example to illustrate the schemes described in steps B1 to B4.
As shown in fig. 5, this example may include:
step 501: and entering a custom noise reduction editing mode.
Step 502: and acquiring a noise reduction intensity value beta set by a user.
Wherein, the value range of the noise reduction intensity value is [ -alpha, alpha ], -alpha is not less than beta is not less than alpha.
Step 503: and judging the relation between the noise reduction intensity value beta and 0. If the noise reduction intensity value β is greater than 0, the process proceeds to step 509.
Step 509: acquiring a first image I at t moment from recorded video 1 And a fourth image I at time t-1 4
Fourth image I as described herein 4 Is a YUV image corresponding to the second image frame in step 101. The time t-1 described herein corresponds to the aforementioned first time.
Step 510: from the first image I 1 The Y channel image is separated and recorded as a second image I 2 The method comprises the steps of carrying out a first treatment on the surface of the From the fourth image I 4 The Y channel image is separated out and recorded as a fifth image I 5
Step 511: acquiring high-frequency information Res at time t t High-frequency information Res at time t-1 t-1 And motion vectors MV from time t to time t-1 t And buffer the first image I 1 U, V channel images of (2)。
The high frequency information Res described herein t-1 Corresponds to the first high frequency information in the target information described in step 102. Motion vector MV as described herein t Corresponding to the target motion vector in the target information.
Step 512: according to the first image I 1 Second image I 2 Fifth image I 5 High frequency information Res t High frequency information Res t-1 Motion vector MV t Noise reduction processing is carried out by combining the first AI model, and a sixth image I is obtained 6
The first AI model described herein corresponds to the target noise reduction model in step B4. The first AI model is an offline noise reduction model, so that the power consumption and the calculation force are large, and a better noise reduction effect can be obtained.
Sixth image I as described herein 6 Corresponding to the seventh image frame in step B4.
The offline noise reduction process may be expressed as:
I′ 2 =I 2 +resize(Res t ) (2)
I′ 5 =I 5 +resize(Res t-1 ) (3)
I″ 5 =warp(I′ 5 ,MV t ) (4)
I 6 =Concat(f 2 (I′ 2 ,I″ 5 ,β/α;θ),U,V) (5)
in equation (2), the size (Res t ) Representing the high frequency information Res t The image size is adjusted to the second image I 2 As such, since the size of the secondary original image may be the same as the final generated image (e.g., the first image frame I 1 ) Since the sizes of the high frequency information are different, the image size adjustment of the high frequency information can be performed. Wherein the high frequency information Res can be interpolated using a nearest neighbor interpolation algorithm t Is adjusted to the second image I 2 The same image size. Equation (2) represents the second image I 2 And high frequency information Res after image size adjustment t Overlapping to obtain an image I 'for restoring high-frequency information' 2
In equation (3), the size (Res t-1 ) Representing the high frequency information Res t-1 Is resized to the fifth image I 5 As such, since the size of the secondary original image may be the same as the final generated image (e.g., fourth image frame I 4 ) Since the sizes of the high frequency information are different, the image size adjustment of the high frequency information can be performed. Wherein the high frequency information Res can be interpolated using a nearest neighbor interpolation algorithm t-1 Adjusted to be in line with fifth image I 5 The same image size. Equation (3) shows that the fifth image I 5 And high frequency information Res after image size adjustment t-1 Overlapping to obtain an image I 'for restoring high-frequency information' 5
Equation (4) shows the motion vector MV t Offset vector of each pixel point, image I' 5 And image I' 2 Performing pixel alignment to obtain an image I 5 . Prior to this, the MV may be interpolated bi-linearly t Upsampling to the fifth image I 5 The same image size.
Formula (5) shows that I' 2 、I″ 5 Beta is input into the first AI model, and then the output result of the first AI model is combined with the first image I 1 U, V channels of images are combined and arranged to obtain a sixth image I after noise reduction treatment 6 . Wherein f 2 The Concat refers to the training parameters of the model, which are the function of the first AI model, and the output result of the first AI model, i.e. the Y-channel image after the noise reduction process, is combined with the first image frame I 1 Is merged and arranged according to the original YUV sequence.
The first AI model may be a convolutional neural network (CNN) model, and the network structure may be a UNet structure.
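A hedged sketch of the flow of formulas (2) through (5); `model` stands in for the first AI model f_2 and `warp` for the pixel alignment of formula (4) (a concrete alignment sketch is given with step 605 below). Both callables, and the assumption that all planes are float32 arrays of matching layout, are illustrative rather than the patent's implementation:

```python
import cv2

def offline_denoise(i2, i5, res_t, res_t1, mv_t, beta, alpha, model, warp):
    h, w = i2.shape[:2]
    # (2)/(3): superimpose nearest-neighbor-resized high-frequency residuals
    i2p = i2 + cv2.resize(res_t,  (w, h), interpolation=cv2.INTER_NEAREST)
    i5p = i5 + cv2.resize(res_t1, (w, h), interpolation=cv2.INTER_NEAREST)
    i5pp = warp(i5p, mv_t)                 # (4): align the previous frame
    return model(i2p, i5pp, beta / alpha)  # (5): denoised Y channel
```

The caller then recombines the returned Y channel with the cached U and V channels, as the Concat of formula (5) does.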
Step 513: will sixth image I 6 And (5) performing recompression encoding and writing the recompression encoding into a new video file.
As an alternative embodiment, how to obtain the target motion vector and the second high frequency information is described below.
In step 101: the image processing method may further include, before acquiring the first image frame and the second image frame of the target video:
step C1: a first image and a second image are acquired.
Step C2: and performing motion estimation according to the first image and the second image to obtain a target motion vector of the image content of the second image relative to the image content of the first image.
It should be noted that, in motion estimation, each frame in an image sequence is divided into a number of non-overlapping image blocks. For each image block of the current image frame, the most similar image block, i.e., the matching block, is searched for within a given search range in the previous (or next) image frame according to a matching criterion. The displacement between the matching block and the current block is then computed; this displacement is the motion vector of the current block, and the motion vectors of all blocks of the current image frame together form the motion vector of the image content of the current frame relative to the previous (or next) frame. For the embodiments of the present application, the second image is the current image frame and the first image is the previous image frame.
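A sketch of the block-matching search just described, for a single block under a sum-of-absolute-differences criterion; the block size and search radius are illustrative, and a real implementation would loop over all blocks of the current frame:

```python
import numpy as np

def match_block(cur, prev, y0, x0, bh, bw, search=8):
    """Return the (dx, dy) motion vector of one bh-by-bw block of `cur`."""
    block = cur[y0:y0 + bh, x0:x0 + bw].astype(np.int32)
    best_cost, best_uv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + bh > prev.shape[0] or x + bw > prev.shape[1]:
                continue
            cand = prev[y:y + bh, x:x + bw].astype(np.int32)
            cost = np.abs(block - cand).sum()  # matching criterion (SAD)
            if best_cost is None or cost < best_cost:
                best_cost, best_uv = cost, (dx, dy)
    return best_uv
```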
Step C3: a third image is acquired.
The third image is the image obtained by denoising the second image; at time t it plays the same role that the first image plays at time t-1.
Step C4: and carrying out noise separation on the second image and the third image to obtain second high-frequency information at a second moment.
The schemes described in steps C1 to C4 are exemplified below by taking fig. 6 as an example.
As shown in fig. 6, this example may include:
step 601: recording of the video is started.
Step 602: acquiring a seventh image I at time t from an image sensor 7
Seventh image I as described herein 7 Corresponding to the second image described in step C1. In this example, a seventh image I 7 Is a Raw image which is not subjected to noise reduction processing.
Step 603: obtaining eighth image I at t-1 time from buffer memory 8
Eighth image I described herein 8 Corresponding to the first image described in step C1, in this example, the eighth image I 8 And (5) a cached Raw diagram after the Raw domain noise reduction processing is carried out at the time t-1.
Step 604: for the seventh image I 7 And eighth image I 8 Motion estimation is performed to obtain a seventh image I 7 Image content of (2) relative to eighth image I 8 Motion vector MV of image content of (a) t And calculate the MV t Writing into the intermediate file.
Motion vector MV as described herein t Corresponding to the target motion vector described in step C2. Motion vector MV t Specifically, the method can be obtained through the following processes:
first, I is as follows 7 And I 8 The two images are respectively divided into m multiplied by n blocks with the same size, wherein n is more than or equal to 1 and less than or equal to H, m is more than or equal to 1 and less than or equal to W, and H, W are respectively the height and the width of the two images. The image blocks at the same position of the two images are respectively represented as
Figure BDA0004083542980000111
Wherein i is E [1, m],j∈[1,n]。
Then, using gray projection method, calculate
Figure BDA0004083542980000112
Spatial position offset vector MV in both x and y directions t . In particular, when m, n=1, this corresponds to calculating an offset vector for each pixel position.
Specifically, the gray projection method calculation process may be:
first, the calculation is performed by the equation (6) and the equation (7)
Figure BDA0004083542980000113
A projection in a vertical direction and a projection in a horizontal direction.
Figure BDA0004083542980000114
Figure BDA0004083542980000115
Wherein in formula (6)
Figure BDA0004083542980000116
Representation->
Figure BDA0004083542980000117
Projection in vertical direction, H B Representation->
Figure BDA0004083542980000118
Is high. +.>
Figure BDA0004083542980000119
Representation->
Figure BDA00040835429800001110
Projection in horizontal direction, W B Representation->
Figure BDA00040835429800001111
Is not limited to a wide range.
Then, the absolute values of the projection deviations in the x-direction and the y-direction are respectively calculated by the formula (8) to be the minimum values u, v, and are taken as MV t
Figure BDA0004083542980000121
Wherein, lambda and gamma in the formula (8) are algorithm parameters, and represent the search spaces of u and v.
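A sketch of the gray projection search of formulas (6) through (8) for one pair of co-located blocks; `lam` and `gam` play the role of λ and γ, and the circular shift via np.roll is a simplification of the windowed deviation search:

```python
import numpy as np

def gray_projection_offset(b_t, b_t1, lam=4, gam=4):
    col_t, col_t1 = b_t.sum(axis=0), b_t1.sum(axis=0)  # formula (6): vertical projection
    row_t, row_t1 = b_t.sum(axis=1), b_t1.sum(axis=1)  # formula (7): horizontal projection

    def best_shift(p, q, rng):
        # formula (8): shift minimizing the absolute projection deviation
        costs = {s: np.abs(np.roll(p, s) - q).sum() for s in range(-rng, rng + 1)}
        return min(costs, key=costs.get)

    u = best_shift(col_t, col_t1, lam)  # offset in x
    v = best_shift(row_t, row_t1, gam)  # offset in y
    return u, v
```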
Alternatively, a block matching algorithm may be used for vector estimation. For example, following the non-local means (NL-Means) approach, traverse the sum of absolute differences (SAD) values of all image blocks in the search window and take the (u, v) with the minimum SAD as the actual offset vector MV.
Alternatively, the actual offset vector MV may also be calculated using the KLT sparse optical flow method.
Alternatively, the result provided by other modules that can detect motion vectors, such as a mobile phone gyroscope, can also be used as the actual offset vector MV.
Step 605: according to MV at time t t Eighth image I 8 And a seventh image I 7 Performing pixel alignment to obtain a ninth image I 9
Ninth image I described herein 9 Corresponding to the third image in step C4.
Alternatively, the MV may be first interpolated by bilinear interpolation t Upsampling to eighth image I 8 The same width and height dimensions, then according to MV t The (u, v) offset vector of each pixel point is used for generating an eighth image I 8 And a seventh image I 7 Pixel alignment is performed to make the eighth image I 8 Transform to seventh image I 7 The same viewing angle position, a ninth image I is obtained 9
Wherein, in pixel alignment, the moving image may use bicubic interpolation algorithm.
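A sketch of step 605 under the stated assumptions, with MV_t taken as an (h′, w′, 2) float32 array of per-pixel (u, v) offsets: bilinear upsampling of MV_t, then bicubic remapping of the eighth image. The sign convention of the offsets is an assumption:

```python
import cv2
import numpy as np

def align_previous_frame(i8, mv_t):
    """Warp I8 to the viewing angle of I7, producing the ninth image I9."""
    h, w = i8.shape[:2]
    mv_full = cv2.resize(mv_t, (w, h), interpolation=cv2.INTER_LINEAR)
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    return cv2.remap(i8, xs + mv_full[..., 0], ys + mv_full[..., 1],
                     interpolation=cv2.INTER_CUBIC)
```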
Step 606: seventh image I 7 And a ninth image I 9 Simultaneously input into a second AI model to obtain a tenth image I 10
Wherein the tenth image I 10 And (3) the noise reduction image at the time t corresponds to the third image in the step C3. For the tenth image I 10 It can be cached for the time t+1 to call.
Wherein, the tenth image I can be obtained by the formula (9) 10
I 10 =f 1 (Concat(I 7 ,I 9 );θ) (9)
Concat in equation (9) means that the seventh image I is to be 7 And a ninth image I 9 Stacking f 1 And representing a first AI model function, wherein θ is a training parameter of the model.
Optionally, the second AI model may be a CNN network model, where the parameters are pre-trained parameters, and the network structure may be a UNet structure, where the computing power and the power consumption of the UNet structure may satisfy real-time operation at the device side.
Step 607: seventh image I 7 And tenth image I 10 Noise separation is carried out to obtain high-frequency information Res at time t t And the Res obtained by calculation t Writing into the intermediate file.
Wherein Res is t =I 1 -I 4
Due to I 4 Is limited by the power consumption and computational effort of the second AI model, there may be a detail loss problem, res t The problem of irreversible loss of high-frequency information can be partially solved, and the high-frequency information is used in an offline noise reduction stage, so that the offline noise reduction effect is improved.
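Step 607 amounts to a one-line residual; a minimal sketch using the example's names (the float cast avoids unsigned wrap-around):

```python
import numpy as np

def separate_noise(i7: np.ndarray, i10: np.ndarray) -> np.ndarray:
    return i7.astype(np.float32) - i10.astype(np.float32)  # Res_t = I7 - I10
```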
Step 608: will tenth image I 10 And sending the video to an ISP pipeline, and outputting a final video after RGB and YUV domain processing.
The intermediate file exists in pairs with the final video file. The intermediate file may be a binary file with any suffix, such as .bin, .meta, or .data, which is not limited in the embodiments of the present application.
The above is a description of the image processing method provided in the embodiment of the present application.
In summary, in the embodiments of the present application, lossless high-frequency information of the images is stored during video shooting. When offline noise reduction is performed on the video images, applying this lossless high-frequency information to the noise reduction process can compensate for the high-frequency information loss of the image to be denoised and improve image sharpness, thereby improving the noise reduction effect. Combining online and offline noise reduction in this way achieves a better overall noise reduction effect. In addition, the embodiments of the present application provide a custom noise reduction adjustment function, allowing the user to adjust the noise reduction effect more freely and conveniently.
According to the image processing method provided by the embodiment of the application, the execution subject can be an image processing device. In the embodiment of the present application, an image processing apparatus provided in the embodiment of the present application will be described by taking an example in which the image processing apparatus executes an image processing method.
Fig. 7 is a schematic block diagram of an image processing apparatus applied to an electronic device provided in an embodiment of the present application.
As shown in fig. 7, the image processing apparatus may include:
a first acquiring module 701, configured to acquire a first image frame and a second image frame of a target video.
Wherein the first image frame and the second image frame are adjacent image frames, and the first image frame is a previous image frame of the second image frame.
A second obtaining module 702, configured to obtain target information.
Wherein the target information includes: first high-frequency information at a first moment and second high-frequency information at a second moment in an original image of the target video, the first moment being the moment corresponding to the first image frame and the second moment being the moment corresponding to the second image frame.
A first noise processing module 703, configured to perform noise reduction processing on the second image frame according to the first image frame and the target information.
Optionally, the apparatus may further include:
and the receiving module is used for receiving the noise reduction intensity value input by the user.
A control module for controlling the step of executing the first image frame and the second image frame of the acquisition target image frame in the case that the noise reduction intensity value is greater than 0;
The second noise processing module is used for acquiring the second image frame in the target video and second high-frequency information at the second moment under the condition that the noise reduction intensity value is smaller than 0; and adjusting the noise intensity of the second image frame according to the noise reduction intensity value and the second high-frequency information.
Optionally, the second noise processing module may include:
the first acquisition unit is used for acquiring the ratio of the absolute value of the noise reduction intensity value to the maximum noise reduction intensity value.
And a second acquisition unit configured to acquire a product of the second high-frequency information and the ratio.
And the first superposition processing unit is used for superposing the product to the second image frame to obtain a third image frame with the noise intensity adjusted.
Optionally, the target information may further include: a target motion vector from the second image at the second time to the first image at the first time; the first image is an image which corresponds to the first image frame and is subjected to noise reduction processing, and the second image is an original image which corresponds to the second image frame and is not subjected to noise reduction processing.
The first noise processing module 703 may include:
a second superimposition processing unit, configured to superimpose the first high-frequency information onto the first image frame to obtain a fourth image frame;
a third superimposition processing unit, configured to superimpose the second high-frequency information onto the second image frame to obtain a fifth image frame;
a pixel alignment unit, configured to perform pixel alignment between the fourth image frame and the fifth image frame based on the target motion vector to obtain a sixth image frame;
and a noise reduction processing unit, configured to input the fifth image frame, the sixth image frame, and the noise reduction intensity value into a target noise reduction model to obtain a seventh image frame after noise reduction.
Optionally, the apparatus may further include:
and a third acquisition module for acquiring the first image and the second image.
And the motion estimation module is used for carrying out motion estimation according to the first image and the second image to obtain the target motion vector of the image content of the second image relative to the image content of the first image.
A fourth acquisition module for acquiring a third image; the third image is an image obtained after the noise reduction processing of the second image.
And the noise separation module is used for carrying out noise separation on the second image and the third image to obtain the second high-frequency information at the second moment.
In summary, in the embodiments of the present application, lossless high-frequency information is obtained and stored during video shooting based on the original images that have not undergone noise reduction. When offline noise reduction is performed on the video images, applying this high-frequency information to the noise reduction process can compensate for the loss of high-frequency information in the image to be denoised, improve image sharpness, and thus improve the noise reduction effect.
The image processing apparatus in the embodiments of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not specifically limited in this respect.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an ios operating system, or other possible operating systems, which are not specifically limited in the embodiments of the present application.
The image processing apparatus provided in this embodiment of the present application can implement each process implemented by the embodiment of the image processing method shown in fig. 1, and in order to avoid repetition, a description is omitted here.
Optionally, as shown in fig. 8, an embodiment of the present application further provides an electronic device 800, including: the processor 801 and the memory 802, the memory 802 stores a program or an instruction that can be executed by the processor 801, where the program or the instruction implements each step of the above-mentioned image processing method embodiment when executed by the processor 801, and the same technical effects can be achieved, and for avoiding repetition, a description is omitted herein.
It should be noted that the electronic device 800 in the embodiments of the present application includes both mobile and non-mobile electronic devices.
Fig. 9 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 900 includes, but is not limited to: radio frequency unit 901, network module 902, audio output unit 903, input unit 904, sensor 905, display unit 906, user input unit 907, interface unit 908, memory 909, and processor 910.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 910 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system. The structure shown in fig. 9 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which will not be described in detail here.
The processor 910 may be configured to: acquire a first image frame and a second image frame of a target video, where the first image frame and the second image frame are adjacent image frames and the first image frame is the previous image frame of the second image frame; acquire target information, where the target information includes first high-frequency information at a first moment and second high-frequency information at a second moment in an original image of the target video, the first moment being the moment corresponding to the first image frame and the second moment being the moment corresponding to the second image frame; and perform noise reduction processing on the second image frame according to the first image frame and the target information.
Optionally, the processor 910 may also be configured to: receiving a noise reduction intensity value input by a user; executing the step of acquiring the first image frame and the second image frame of the target image frame in the case that the noise reduction intensity value is greater than 0; and under the condition that the noise reduction intensity value is smaller than 0, acquiring the second image frame in the target video and second high-frequency information at the second moment, and adjusting the noise intensity of the second image frame according to the noise reduction intensity value and the second high-frequency information.
Optionally, the processor 910 may also be configured to: acquiring the ratio of the absolute value of the noise reduction intensity value to the maximum noise reduction intensity value; obtaining the product of the second high-frequency information and the ratio; and adding the product to the second image frame to obtain a third image frame with the noise intensity adjusted.
Optionally, the target information may further include: the target motion vector from the second image at the second moment to the first image at the first moment, wherein the first image is an image which corresponds to the first image frame and is subjected to noise reduction treatment, and the second image is an original image which corresponds to the second image frame and is not subjected to noise reduction treatment.
The processor 910 may also be configured to: the first high-frequency information is overlapped to the first image frame, and a fourth image frame is obtained; the second high-frequency information is overlapped to the second image frame, and a fifth image frame is obtained; performing pixel alignment on the fourth image frame and the fifth image frame based on the target motion vector to obtain a sixth image frame; and inputting the fifth image frame, the sixth image frame and the noise reduction intensity value into a target noise reduction model to obtain a seventh image frame after noise reduction processing.
Optionally, the processor 910 may also be configured to: acquiring the first image and the second image; performing motion estimation according to the first image and the second image to obtain the target motion vector of the image content of the second image relative to the image content of the first image; acquiring a third image; the third image is an image obtained after the noise reduction processing of the second image; and carrying out noise separation on the second image and the third image to obtain the second high-frequency information at the second moment.
In the embodiments of the present application, lossless high-frequency information is obtained and stored during video shooting based on the original images that have not undergone noise reduction. When offline noise reduction is performed on the video images, applying this high-frequency information to the noise reduction process can compensate for the loss of high-frequency information in the image to be denoised, improve image sharpness, and thus improve the noise reduction effect.
It should be understood that in the embodiment of the present application, the input unit 904 may include a graphics processor (Graphics Processing Unit, GPU) 9041 and a microphone 9042, and the graphics processor 9041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 907 includes at least one of a touch panel 9071 and other input devices 9072. Touch panel 9071, also referred to as a touch screen. The touch panel 9071 may include two parts, a touch detection device and a touch controller. Other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 909 may be used to store software programs as well as various data. The memory 909 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function and an image playing function). Furthermore, the memory 909 may include a volatile memory or a nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchronous link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 909 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 910 may include one or more processing units; optionally, the processor 910 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 910.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and the same technical effects can be achieved, so that repetition is avoided, and no further description is given here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so as to implement each process of the embodiment of the image processing method, and achieve the same technical effect, so that repetition is avoided, and no redundant description is provided here.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the image processing method described above, and achieve the same technical effects, and are not repeated herein.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is the preferred implementation. Based on this understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM, a RAM, a magnetic disk, or an optical disc), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Inspired by the present application, those of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the scope protected by the claims, all of which fall within the protection of the present application.

Claims (10)

1. An image processing method, the method comprising:
acquiring a first image frame and a second image frame of a target video; wherein the first image frame and the second image frame are adjacent image frames, and the first image frame is a previous image frame of the second image frame;
acquiring target information; wherein the target information includes: first high-frequency information at a first moment and second high-frequency information at a second moment in an original image of the target video, wherein the first moment is a moment corresponding to the first image frame, and the second moment is a moment corresponding to the second image frame;
and carrying out noise reduction processing on the second image frame according to the first image frame and the target information.
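Illustratively, and not as part of the claims, the per-frame loop implied by claim 1 can be sketched as follows. All names here are hypothetical, and the sketch assumes (one possible reading) that the previous frame fed into each step is the already noise-reduced output; the concrete steps are sketched after claims 2 to 5 below.

def denoise_video(frames, high_freq, denoise_step):
    # frames[t]: image frames of the target video (t = 0, 1, 2, ...)
    # high_freq[t]: high-frequency information extracted from the original
    #               image of the video at moment t
    # denoise_step: any implementation of the claim 1 step
    out = [frames[0]]  # the first frame has no predecessor; pass it through
    for t in range(1, len(frames)):
        out.append(denoise_step(out[t - 1], frames[t],
                                high_freq[t - 1], high_freq[t]))
    return out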
2. The image processing method according to claim 1, wherein before the acquiring a first image frame and a second image frame of a target video, the method further comprises:
receiving a noise reduction intensity value input by a user;
in a case that the noise reduction intensity value is greater than 0, executing the step of acquiring the first image frame and the second image frame of the target video;
and in a case that the noise reduction intensity value is less than 0, acquiring the second image frame in the target video and the second high-frequency information at the second moment, and adjusting the noise intensity of the second image frame according to the noise reduction intensity value and the second high-frequency information.
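Read together, claims 1 and 2 amount to a dispatch on the sign of the user-supplied noise reduction intensity value. A minimal sketch of that branching follows, with temporal_denoise and reinject_noise standing in for the claim 4 and claim 3 paths sketched below; every name is an illustrative assumption, not the patent's.

def process_frame(frame_prev, frame_curr, hf_prev, hf_curr, strength,
                  temporal_denoise, reinject_noise):
    # strength is the noise reduction intensity value input by the user
    if strength > 0:
        # positive value: temporal noise reduction against the previous frame
        return temporal_denoise(frame_prev, frame_curr, hf_prev, hf_curr, strength)
    if strength < 0:
        # negative value: scale stored noise back into the current frame
        return reinject_noise(frame_curr, hf_curr, strength)
    return frame_curr  # zero: the frame passes through unchanged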
3. The image processing method according to claim 2, wherein the adjusting the noise intensity of the second image frame according to the noise reduction intensity value and the second high frequency information includes:
acquiring the ratio of the absolute value of the noise reduction intensity value to the maximum noise reduction intensity value;
obtaining the product of the second high-frequency information and the ratio;
and adding the product to the second image frame to obtain a third image frame with adjusted noise intensity.
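The adjustment of claim 3 is plain arithmetic: scale the second high-frequency information by the ratio of the absolute noise reduction intensity value to the maximum value, then add the result back to the frame. A sketch, assuming frames and high-frequency maps are same-shaped float arrays and MAX_STRENGTH is a hypothetical configuration constant:

MAX_STRENGTH = 100.0  # assumed maximum noise reduction intensity value

def reinject_noise(frame_curr, hf_curr, strength):
    # Negative-strength path of claim 3: add scaled noise back.
    ratio = abs(strength) / MAX_STRENGTH  # |value| over the maximum value
    product = hf_curr * ratio             # scaled second high-frequency info
    return frame_curr + product           # third image frame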
4. The image processing method according to claim 1 or 2, wherein the target information further includes: a target motion vector of image content of a second image relative to image content of a first image, the first image being an original image corresponding to the first image frame and subjected to noise reduction processing, the second image being an original image corresponding to the second image frame and not subjected to noise reduction processing;
the noise reduction processing for the second image frame according to the first image frame and the target information includes:
superimposing the first high-frequency information onto the first image frame to obtain a fourth image frame;
superimposing the second high-frequency information onto the second image frame to obtain a fifth image frame;
performing pixel alignment on the fourth image frame and the fifth image frame based on the target motion vector to obtain a sixth image frame;
and inputting the fifth image frame, the sixth image frame and the noise reduction intensity value into a target noise reduction model to obtain a seventh image frame after noise reduction processing.
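The positive-strength path of claim 4 superimposes the stored high-frequency information onto both frames, aligns the previous frame to the current one using the target motion vector, and feeds the pair, together with the intensity value, to a noise reduction model. A sketch, assuming the motion vector is a dense per-pixel flow field and model is an arbitrary pretrained temporal denoiser; the patent fixes neither representation.

import numpy as np

def warp(image, flow):
    # Nearest-neighbour warp by a dense flow field. The patent does not fix
    # the alignment scheme; this is one illustrative choice.
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def temporal_denoise(frame_prev, frame_curr, hf_prev, hf_curr, strength,
                     motion_vectors, model):
    fourth = frame_prev + hf_prev         # superimpose first high-frequency info
    fifth = frame_curr + hf_curr          # superimpose second high-frequency info
    sixth = warp(fourth, motion_vectors)  # pixel-align the fourth frame to the fifth
    return model(fifth, sixth, strength)  # seventh image frame, noise reduced

Bound to a concrete flow field and model (for example via functools.partial), this fills the temporal_denoise slot in the dispatch sketch after claim 2.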
5. The image processing method according to claim 4, characterized in that the method further comprises:
acquiring the first image and the second image;
performing motion estimation according to the first image and the second image to obtain the target motion vector of the image content of the second image relative to the image content of the first image;
acquiring a third image; the third image is an image obtained after the noise reduction processing of the second image;
and carrying out noise separation on the second image and the third image to obtain the second high-frequency information at the second moment.
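Claim 5 specifies where the target information comes from: the motion vector is estimated between the two original images, and the second high-frequency information is the residual between the noisy second image and its denoised counterpart, the third image. A sketch using OpenCV's Farneback optical flow as a stand-in motion estimator; the patent does not name a concrete algorithm for either step.

import cv2
import numpy as np

def estimate_motion(first_image, second_image):
    # Dense motion of the second image's content relative to the first.
    g1 = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(g1, g2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def separate_noise(second_image, third_image):
    # Second high-frequency information: noisy original minus denoised result.
    return second_image.astype(np.float32) - third_image.astype(np.float32)

# third_image may come from any single-frame denoiser, for example:
# third_image = cv2.fastNlMeansDenoisingColored(second_image, None, 10, 10, 7, 21)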
6. An image processing apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring a first image frame and a second image frame of the target video; wherein the first image frame and the second image frame are adjacent image frames, and the first image frame is a previous image frame of the second image frame;
the second acquisition module is used for acquiring target information; wherein the target information includes: first high-frequency information at a first moment and second high-frequency information at a second moment in an original image of the target video, wherein the first moment is a moment corresponding to the first image frame, and the second moment is a moment corresponding to the second image frame;
and the first noise processing module is used for carrying out noise reduction processing on the second image frame according to the first image frame and the target information.
7. The image processing apparatus according to claim 6, wherein the apparatus further comprises:
the receiving module is used for receiving the noise reduction intensity value input by the user;
a control module, used for executing the step of acquiring the first image frame and the second image frame of the target video in a case that the noise reduction intensity value is greater than 0;
and the second noise processing module is used for acquiring the second image frame in the target video and the second high-frequency information at the second moment in a case that the noise reduction intensity value is less than 0, and adjusting the noise intensity of the second image frame according to the noise reduction intensity value and the second high-frequency information.
8. The image processing apparatus of claim 7, wherein the second noise processing module comprises:
the first acquisition unit is used for acquiring the ratio of the absolute value of the noise reduction intensity value to the maximum noise reduction intensity value;
a second acquisition unit configured to acquire a product of the second high-frequency information and the ratio;
and the first superposition processing unit is used for superimposing the product onto the second image frame to obtain a third image frame with adjusted noise intensity.
9. The image processing apparatus according to claim 6 or 7, wherein the target information further includes: a target motion vector of image content of a second image relative to image content of a first image, the first image being an original image corresponding to the first image frame and subjected to noise reduction processing, the second image being an original image corresponding to the second image frame and not subjected to noise reduction processing;
the first noise processing module includes:
a second superimposing unit, configured to superimpose the first high-frequency information on the first image frame to obtain a fourth image frame;
a third superposition processing unit, configured to superimpose the second high-frequency information on the second image frame to obtain a fifth image frame;
a pixel alignment unit, configured to perform pixel alignment on the fourth image frame and the fifth image frame based on the target motion vector, to obtain a sixth image frame;
and the noise reduction processing unit is used for inputting the fifth image frame, the sixth image frame and the noise reduction intensity value into a target noise reduction model to obtain a seventh image frame after noise reduction processing.
10. The image processing apparatus according to claim 9, wherein the apparatus further comprises:
a third acquisition module for acquiring the first image and the second image;
the motion estimation module is used for carrying out motion estimation according to the first image and the second image to obtain the target motion vector of the image content of the second image relative to the image content of the first image;
a fourth acquisition module for acquiring a third image; the third image is an image obtained after the noise reduction processing of the second image;
and the noise separation module is used for carrying out noise separation on the second image and the third image to obtain the second high-frequency information at the second moment.
CN202310129714.1A 2023-02-16 2023-02-16 Image processing method and device Pending CN116309130A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310129714.1A CN116309130A (en) 2023-02-16 2023-02-16 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310129714.1A CN116309130A (en) 2023-02-16 2023-02-16 Image processing method and device

Publications (1)

Publication Number Publication Date
CN116309130A (en) 2023-06-23

Family

ID=86833351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310129714.1A Pending CN116309130A (en) 2023-02-16 2023-02-16 Image processing method and device

Country Status (1)

Country Link
CN (1) CN116309130A (en)

Similar Documents

Publication Publication Date Title
Rao et al. A Survey of Video Enhancement Techniques.
CN108520223B (en) Video image segmentation method, segmentation device, storage medium and terminal equipment
US20230030020A1 (en) Defining a search range for motion estimation for each scenario frame set
CN113286194A (en) Video processing method and device, electronic equipment and readable storage medium
WO2019135916A1 (en) Motion blur simulation
CN113850833A (en) Video frame segmentation using reduced resolution neural networks and masks of previous frames
WO2023151511A1 (en) Model training method and apparatus, image moire removal method and apparatus, and electronic device
CN114339030B (en) Network live video image stabilizing method based on self-adaptive separable convolution
Raj et al. Feature based video stabilization based on boosted HAAR Cascade and representative point matching algorithm
Lee et al. Fast 3D video stabilization using ROI-based warping
CN115294055A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN115035456A (en) Video denoising method and device, electronic equipment and readable storage medium
Lee et al. Smartgrid: Video retargeting with spatiotemporal grid optimization
WO2024067512A1 (en) Video dense prediction method and apparatus therefor
CN113014817A (en) Method and device for acquiring high-definition high-frame video and electronic equipment
Liu et al. Tanet: Target attention network for video bit-depth enhancement
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN116309130A (en) Image processing method and device
KR102585573B1 (en) Content-based image processing
CN115471413A (en) Image processing method and device, computer readable storage medium and electronic device
CN114782280A (en) Image processing method and device
CN115914834A (en) Video processing method and device
CN113344807A (en) Image restoration method and device, electronic equipment and storage medium
Lai et al. Correcting face distortion in wide-angle videos
Wang et al. Near-infrared fusion for deep lightness enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination