WO2023115859A1 - Compressed image restoration method and apparatus, electronic device, storage medium, and program product

Info

Publication number
WO2023115859A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2022/100470
Other languages
English (en)
French (fr)
Inventor
许通达
袁涛
邵一璠
王岩
秦红伟
Original Assignee
上海商汤智能科技有限公司
Application filed by 上海商汤智能科技有限公司
Publication of WO2023115859A1

Classifications

    • G06T5/70; G06T5/77
    • G06N3/045 Neural networks; combinations of networks
    • G06N3/08 Neural networks; learning methods
    • G06T2207/10016 Video; image sequence
    • G06T2207/20081 Training; learning
    • G06T2207/20084 Artificial neural networks [ANN]

Definitions

  • The present disclosure relates to, but is not limited to, the field of computer technology, and in particular to a compressed image restoration method and apparatus, an electronic device, a storage medium, and a program product.
  • each image frame in the video is compressed to reduce the video size.
  • In the related art, restoration methods for compressed video frames perform blind denoising without calibrated frame loss, which is overly complicated and yields poor denoising results.
  • the disclosure proposes a compressed image restoration method and device, electronic equipment, storage media and program products.
  • a compressed image restoration method including:
  • The loss detection model is trained with compressed video frames corresponding to original video frames as input samples and the labeled distribution images corresponding to the compressed video frames as labeled samples, where each labeled distribution image is determined from the residual video frame between the compressed video frame and its corresponding original video frame.
  • a compressed image restoration device including:
  • The image restoration part is configured to restore the compressed image through a preset non-blind restoration algorithm to obtain a pre-restored image
  • The loss determination part is configured to input the compressed image into the trained loss detection model to obtain a corresponding loss distribution image
  • The original image determination part is configured to determine an original image based on the compressed image, the pre-restored image, and the loss distribution image
  • The loss detection model is trained with compressed video frames corresponding to original video frames as input samples and the labeled distribution images corresponding to the compressed video frames as labeled samples, where each labeled distribution image is determined from the residual video frame between the compressed video frame and its corresponding original video frame.
  • An electronic device including: a processor; and a memory configured to store processor-executable instructions; wherein the processor is configured to call the instructions stored in the memory to execute the above compressed image restoration method.
  • a computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the above compressed image restoration method is implemented.
  • A computer program product including computer-readable code; when the computer-readable code is run in an electronic device, a processor in the electronic device is configured to implement the above compressed image restoration method.
  • the compressed image is repaired by a preset non-blind repair algorithm to obtain a pre-restored image, and then the compressed image is input into a trained loss detection model to obtain a corresponding loss distribution image.
  • The original image is determined from the compressed image, the pre-restored image, and the loss distribution image.
  • the input samples are compressed video frames, and the labeled samples are determined according to the residual video frames between the corresponding compressed video frames and the original video frames.
  • The loss detection model obtained through training performs loss calibration directly on the compressed image, and the preliminarily repaired image is then corrected using the loss output by the model, improving the restoration quality of the compressed image.
  • The present disclosure can implement loss calibration for different compressed images through a single loss detection model, which reduces storage and transmission costs.
  • Fig. 1 shows a flow chart of a compressed image restoration method according to an embodiment of the present disclosure
  • Fig. 2 shows a schematic diagram of a process of determining a loss distribution image according to an embodiment of the present disclosure
  • Fig. 3 shows a flowchart of a process of training a loss detection model according to an embodiment of the present disclosure
  • Fig. 4 shows a schematic diagram of a video frame preprocessing process according to an embodiment of the present disclosure
  • Fig. 5 shows a schematic diagram of a pixel area corresponding to a pixel position according to an embodiment of the present disclosure
  • Fig. 6 shows a schematic diagram of a process of determining a pixel area according to an embodiment of the present disclosure
  • Fig. 7 shows a schematic diagram of determining an original image according to an embodiment of the present disclosure
  • Fig. 8 shows a schematic diagram of a compressed image restoration device according to an embodiment of the present disclosure
  • Fig. 9 shows a schematic diagram of an electronic device according to an embodiment of the present disclosure.
  • Fig. 10 shows a schematic diagram of another electronic device according to an embodiment of the present disclosure.
  • Fig. 1 shows a flow chart of a compressed image restoration method according to an embodiment of the present disclosure.
  • the method for restoring a compressed image in the embodiment of the present disclosure may be executed by an electronic device such as a terminal device or a server.
  • The terminal device may be a mobile or fixed terminal such as user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device.
  • the server can be a single server or a server cluster composed of multiple servers. Any electronic device can implement the compressed image restoration method by calling the computer-readable instructions stored in the memory by the processor.
  • the embodiments of the present disclosure can be applied to the scene of repairing any compressed image, for example, the scene of image repair after compressing a single image, or the scene of sequentially performing image repair on each video frame after video compression.
  • The embodiments of the present disclosure can also be used to restore other lossy images whose loss distributions are not calibrated.
  • the compressed image restoration method of the embodiment of the present disclosure may include the following steps S10 to S30.
  • Step S10: restore the compressed image through a preset non-blind restoration algorithm to obtain a pre-restored image.
  • The compressed image is a lossy image without loss-distribution calibration; it may be a compressed image obtained by compressing a single image, or any video frame of a compressed video obtained through video encoding and compression.
  • Without such calibration, the loss distribution of the compressed image is not known in advance.
  • The preset non-blind restoration algorithm can be any non-blind restoration algorithm; for example, the compressed image may be denoised by inputting it into DnCNN (Denoising Convolutional Neural Network) or CBDNet (Convolutional Blind Denoising Network).
  • A non-blind restoration algorithm performs image restoration when the loss distribution of the compressed image is known. However, the loss distribution of the compressed image in the embodiments of the present disclosure is unknown, so to avoid omitting some areas of the compressed image during restoration, all areas of the compressed image are repaired, that is, both the areas that need repair and those that do not. The pre-restored image is therefore an erroneous image obtained by repairing every pixel position of the compressed image.
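To illustrate the uniform treatment described above, here is a minimal sketch in Python/NumPy. A simple mean filter stands in for a real non-blind denoiser such as DnCNN (the disclosure only names that as one option); the point is that every pixel receives the same restoration strength, whether it needs repair or not.

```python
import numpy as np

def pre_restore(compressed: np.ndarray, kernel: int = 3) -> np.ndarray:
    """Apply the same denoising strength to every pixel position.

    A mean filter is used here only as a stand-in for a non-blind
    denoiser; the uniform per-pixel treatment is what matters.
    """
    pad = kernel // 2
    padded = np.pad(compressed, pad, mode="edge")  # replicate edges
    out = np.empty_like(compressed, dtype=np.float64)
    h, w = compressed.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return out

# Toy noisy frame: every pixel is smoothed, repaired or not.
noisy = np.array([[0., 9., 0.], [9., 0., 9.], [0., 9., 0.]])
pre_restored = pre_restore(noisy)
```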
  • Step S20: input the compressed image into the trained loss detection model to obtain a corresponding loss distribution image.
  • the loss distribution image of the compressed image may be obtained by predicting the loss distribution of the compressed image by inputting the compressed image into the loss detection model.
  • the size of the loss distribution image is the same as that of the compressed image, and the pixel at each pixel position in the loss distribution image represents the pixel loss intensity of the corresponding pixel position in the compressed image.
  • The loss detection model is obtained by training with compressed video frames corresponding to original video frames as input samples and the labeled distribution images corresponding to the compressed video frames as labeled samples.
  • Each labeled distribution image may be determined from the corresponding compressed video frame and the residual video frame between that compressed video frame and its corresponding original video frame.
  • The loss detection model obtained through training performs loss calibration directly on the compressed image, and the preliminarily repaired image is then corrected using the loss output by the model, improving the restoration quality of the compressed image.
  • The present disclosure can implement loss calibration for different compressed images through a single loss detection model, which reduces storage and transmission costs.
  • Fig. 2 shows a schematic diagram of a process of determining a loss distribution image according to an embodiment of the present disclosure.
  • the compressed image 20 can be input into the loss detection model 21, and the loss detection model 21 automatically performs loss calibration on the compressed image 20, and directly outputs the loss distribution image 22 .
  • the above loss detection model can accurately and quickly realize the loss distribution calibration of the compressed image 20 .
  • Fig. 3 shows a flowchart of a process of training a loss detection model according to an embodiment of the present disclosure.
  • the training process of the loss detection model may include the following steps S31 to S34.
  • Step S31: determine at least one original video frame, and the compressed video frame corresponding to each original video frame.
  • the original video frame is an image not compressed by means of video coding and the like, and may be randomly extracted from the uncompressed original video.
  • Each original video frame has a corresponding compressed video frame, which can be extracted from the compressed video obtained by compressing the original video.
  • the position of the original video frame in the original video is the same as the position of the corresponding compressed video frame in the compressed video.
  • For example, if the original video frame is the i-th frame of the original video, the corresponding compressed video frame is the i-th frame of the compressed video obtained by compressing the original video.
  • The process of determining at least one original video frame and the compressed video frame corresponding to each original video frame may include: determining at least one original video and the compressed video corresponding to each original video; randomly extracting at least one video frame from each original video as an original video frame; and extracting the compressed video frame corresponding to each original video frame from the corresponding compressed video, where the corresponding compressed video frame is the video frame at the same position in the compressed video as the original video frame in the original video.
  • the original video and the corresponding compressed video may be determined by first determining at least one uncompressed original video, and for each original video, randomly selecting a corresponding encoder and encoding strength.
  • video encoding is performed on each original video to obtain a corresponding compressed video. That is to say, the original video is encoded by a corresponding encoder with a corresponding encoding strength to obtain a compressed video corresponding to the original video.
  • Step S32: determine a residual video frame of each compressed video frame according to the original video frame and the corresponding compressed video frame.
  • The residual video frame of each compressed video frame may be determined according to the original video frame and the corresponding compressed video frame.
  • the residual video frame may be determined by directly calculating the difference between each original video frame and the compressed video frame. For example, when the original video frame is expressed as matrix X and the compressed video frame is expressed as matrix Y, the residual video frame can be expressed by the difference between matrix X and matrix Y.
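The direct-difference formulation above can be written out in a few lines of NumPy; the 2×2 matrices here are toy values standing in for full-resolution frames:

```python
import numpy as np

# Original frame X and compressed frame Y as grayscale matrices.
X = np.array([[120., 130.], [140., 150.]])  # original video frame
Y = np.array([[118., 133.], [139., 151.]])  # compressed video frame

# Residual video frame: per-pixel compression loss, X - Y.
D = X - Y
```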
  • each original video frame and corresponding compressed video frame may be preprocessed first. Then calculate the difference between each original video frame after preprocessing and the corresponding compressed video frame to obtain the residual video frame of each compressed video frame.
  • The preprocessing may consist of high-pass filtering each original video frame and the corresponding compressed video frame. This filtering removes the low-frequency signals in the original and compressed video frames and retains the high-frequency signals that matter more to human visual perception, which simplifies computation while ensuring the accuracy of the extracted residual video frame.
  • the way of high-pass filtering the original video frame and the compressed video frame may be the same or different.
  • the filtering method of any video frame may be to directly input the video frame into a high-pass filter to remove the low-frequency signal contained therein to obtain the processed video frame.
  • the video frame may be input into a low-pass filter to obtain a low-pass video frame from which high-frequency signals are removed, and then the low-pass video frame may be subtracted from the input video frame to complete the preprocessing process.
  • Fig. 4 shows a schematic diagram of a video frame preprocessing process according to an embodiment of the present disclosure.
  • The original video frame X can be input directly into the low-pass filter 40 to obtain the low-pass video frame Z, and the preprocessed original video frame X' is then obtained by subtracting the low-pass video frame Z from the original video frame X, completing the preprocessing.
  • The low-pass filter 40 may be a mean low-pass filter of a preset size, such as 4×4.
  • the preprocessing method of the compressed video frame is the same as that of the original video frame, and the processed compressed video frame can also be obtained through the method shown in FIG. 4 .
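The preprocessing of Fig. 4, low-pass filter then subtract, can be sketched as follows. The 4×4 mean filter matches the preset size mentioned above; edge replication at the borders is an assumption of this sketch, since the disclosure does not specify border handling:

```python
import numpy as np

def box_lowpass(frame: np.ndarray, k: int = 4) -> np.ndarray:
    """k x k mean (box) low-pass filter with edge replication."""
    pad_before, pad_after = (k - 1) // 2, k // 2  # even k needs asymmetric pad
    padded = np.pad(frame, (pad_before, pad_after), mode="edge")
    out = np.empty_like(frame, dtype=np.float64)
    h, w = frame.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def highpass(frame: np.ndarray, k: int = 4) -> np.ndarray:
    """High-pass frame = frame minus its low-pass version (Fig. 4)."""
    return frame - box_lowpass(frame, k)

frame = np.array([[10., 10., 10., 10.],
                  [10., 50., 50., 10.],
                  [10., 50., 50., 10.],
                  [10., 10., 10., 10.]])
hp = highpass(frame)                        # edges kept, flat regions removed
hp_flat = highpass(np.full((4, 4), 7.0))    # a constant frame high-passes to zero
```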
  • Step S33: determine an annotation distribution image according to each residual video frame.
  • The annotation distribution image of each compressed video frame may be determined according to the corresponding residual video frame.
  • The manner of determining the annotation distribution image may include: determining the pixel area corresponding to each pixel position in the residual video frame; determining the feature value of the pixel area corresponding to each pixel position; and determining the annotation distribution image according to the feature values at the pixel positions.
  • The pixel areas corresponding to the pixel positions in the residual video frame all have the same size, which may be preset, for example, 3×3.
  • Each pixel position sits at a specific location within its pixel area; for example, the pixel area corresponding to a pixel position may be determined as an area of the preset size with the pixel position at its upper-left corner.
  • Alternatively, the pixel area corresponding to each pixel position may be determined as an area of the preset size with the pixel position at its center.
  • In a possible implementation, an image frame of a predetermined size may be used to determine the pixel area corresponding to each pixel position: when the pixel position is at the center of the image frame, the area of the residual video frame inside the image frame is the corresponding pixel area.
  • Fig. 5 shows a schematic diagram of a pixel area corresponding to a pixel position according to an embodiment of the disclosure.
  • the pixel area corresponding to each pixel position is determined in the residual video frame 50 through an image frame 51 of a preset size.
  • In one example, the length and width of the preset size are both odd numbers; for example, the size of the image frame 51 may be 3×3.
  • In another example, the length and width of the preset size can be odd or even; for example, the size of the image frame 51 can be 3×3 or 4×4.
  • The following takes the case where each pixel position is at the center of its corresponding pixel area as an example.
  • The image frame 51 of the preset size can be moved so that the pixel with value 111 is at its center, and the area inside the image frame 51 at that moment is taken as the pixel area corresponding to the pixel position with value 111.
  • When a pixel position lies at the edge of the residual video frame, so that blank positions appear in the image frame 51 when that pixel position is centered, the blanks inside the image frame 51 can be filled by copying the edge of the residual video frame 50.
  • the pixel area corresponding to each pixel position can be obtained by sliding the image frame. That is to say, the image frame with a fixed size can also be slid on the residual video frame with a preset step size of 1 to obtain the pixel area corresponding to each pixel position.
  • FIG. 6 shows a schematic diagram of a process of determining a pixel area according to an embodiment of the present disclosure.
  • The pixel area corresponding to each pixel position is determined by sliding an image frame 51 of a preset size over the residual video frame 50.
  • The edges of the residual video frame 50 can be copied according to the size of the image frame 51 to obtain an expanded image 52, ensuring that every pixel position in the residual video frame 50 yields a complete corresponding pixel area.
  • For example, when the size of the image frame 51 is 3×3, the edges of the residual video frame 50 are copied once to obtain the expanded image 52; when the size of the image frame 51 is 5×5, the edges are copied twice.
  • The image frame 51 can then be slid with a preset step size of 1 from a preset position of the expanded image 52, such as the upper-left or upper-right corner, determining after each slide the pixel area corresponding to the pixel position at the center of the image frame 51.
  • The feature value at each pixel position may be obtained by calculating the mean of the squared pixel values within the corresponding pixel area.
  • Here, the case where the noise in a compressed video frame is zero-mean Gaussian noise and the noise distribution of adjacent pixels is smooth is taken as an example. Since the mean of squares over each pixel area is the maximum-likelihood estimate of the noise-distribution variance, the mean of the squared pixel values in the pixel area can be used as the feature value. For example, when the pixel area corresponding to pixel position i is 3×3, the squares of the pixel values at position i and its eight neighboring positions are summed and divided by 9 to obtain the feature value of position i. Alternatively, other calculations over the pixel values in each pixel area may be used to obtain the feature value for the area's center position.
  • each feature value may be stored in a corresponding pixel position to obtain a label distribution image. For example, first create a blank image with the same size as the residual video frame, write the feature value corresponding to each pixel position in the residual video frame into the corresponding pixel position of the blank image, and obtain the label distribution image.
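The construction of the label distribution image can be sketched as below: pad the residual frame by replicating its edges (the expanded image of Fig. 6), slide a 3×3 window with step 1, and store the mean of squared residuals at each center position. Window size and edge replication follow the examples above; the exact padding scheme is otherwise an assumption.

```python
import numpy as np

def label_distribution(residual: np.ndarray, k: int = 3) -> np.ndarray:
    """Per-pixel noise-variance estimate: mean of squared residuals over
    a k x k neighbourhood centred on each pixel, edges replicated."""
    pad = k // 2
    padded = np.pad(residual, pad, mode="edge")  # the "expanded image"
    out = np.empty_like(residual, dtype=np.float64)
    h, w = residual.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + k, j:j + k]
            # Squared mean = ML estimate of variance for zero-mean Gaussian noise.
            out[i, j] = np.mean(window ** 2)
    return out

residual = np.array([[1., -1., 0.], [2., 0., -2.], [0., 1., -1.]])
labels = label_distribution(residual)
```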
  • A training set for training the loss detection model can be created from each compressed video frame and the corresponding label distribution image.
  • The compressed video frames in the training set are used as the input samples of the loss detection model, and the labeled distribution images corresponding to the compressed video frames are used as the labeled samples; the label of each sample is compared with the output of the loss detection model to obtain the model loss, the parameters of the loss detection model are adjusted using the model loss, and the trained loss detection model is finally obtained.
  • Through the residual video frame between the original video frame and the compressed video frame, the above training method can accurately capture the loss distribution introduced when the original video frame is compressed, and a loss detection model that accurately predicts the loss distribution of compressed video frames can be trained. Therefore, after a compressed image is input into the loss detection model, the model can accurately detect the loss distribution of the compressed image and output a loss distribution image representing it.
  • Step S30: determine an original image according to the compressed image, the pre-restored image, and the loss distribution image.
  • the original image is determined according to the above three images.
  • The loss intensity differs across pixel positions of the compressed image during compression; that is, pixels at different positions require different restoration strengths: pixels with small loss need only light restoration, while pixels with large loss need more intensive restoration.
  • The pre-restored image is obtained by repairing the compressed image through any non-blind restoration algorithm, in which every pixel is repaired with the same strength. Therefore, the loss distribution image, which characterizes the loss at the different pixel positions of the compressed image, is needed to adjust and merge the compressed image and the pre-restored image into the original image.
  • the original image may be determined by transparently mixing the compressed image and the pre-restored image based on the loss distribution image to obtain the original image.
  • the method of transparency blending can be to calculate the weighted sum between the compressed image and the pre-restored image to obtain the original image.
  • The loss distribution image can be used as the weight of the pre-restored image; subtracting the loss distribution image from a matrix of the same size whose pixel values are all 1 yields the inverse loss distribution image, which serves as the weight of the compressed image.
  • The weighted sum of the pre-restored image and the compressed image is then calculated to obtain the original image.
  • The value range of each pixel in the loss distribution image is [0, 1].
  • Accordingly, the original image can be represented as Q×N + P×(1−N), where Q is the pre-restored image, P is the compressed image, and N is the loss distribution image.
  • Fig. 7 shows a schematic diagram of determining an original image according to an embodiment of the present disclosure.
  • an inverse loss distribution image 73 is determined through the loss distribution image 72 .
  • the loss distribution image 72 is used as the weight of the pre-restored image 70
  • the inverse loss distribution image 73 is used as the weight of the compressed image 71
  • The pre-restored image 70 and the compressed image 71 are transparently blended to obtain the original image 74. That is, the product of the loss distribution image 72 and the pre-restored image 70 and the product of the inverse loss distribution image 73 and the compressed image 71 are calculated, and the two products are added to obtain the original image 74.
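The blend of Fig. 7 can be written directly, reading Q as the pre-restored image, P as the compressed image, and N as the loss distribution image (this mapping of symbols is an interpretation of the Q×N + P×(1−N) expression above):

```python
import numpy as np

def restore(compressed: np.ndarray,
            pre_restored: np.ndarray,
            loss_dist: np.ndarray) -> np.ndarray:
    """Transparency blend Q*N + P*(1-N): the loss distribution N weights
    the pre-restored image Q, and its inverse (1-N) weights the
    compressed image P. N values are assumed to lie in [0, 1]."""
    return pre_restored * loss_dist + compressed * (1.0 - loss_dist)

P = np.array([[100., 200.]])   # compressed image
Q = np.array([[110., 190.]])   # pre-restored image
N = np.array([[0.0, 1.0]])     # no loss at pixel 0, full loss at pixel 1
out = restore(P, Q, N)         # keeps P where N=0, keeps Q where N=1
```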
  • The compressed image is first preliminarily restored through the preset non-blind restoration algorithm, the loss distribution of the compressed image is then directly calibrated by the trained loss detection model, and the compressed image is corrected accordingly, improving its restoration quality.
  • The present disclosure can realize loss-distribution calibration for different compressed images through a single loss detection model, which reduces storage and transmission costs.
  • Existing video restoration methods are mostly based on a single quantization parameter (Quantization Parameter, QP) of a single encoder, and are difficult to generalize to different encoders, different bit-rate control algorithms, and different quantization parameters.
  • The related art adopts a fixed-quantization-parameter restoration model for compressed video restoration, which is difficult to generalize in actual use: users must manually assess video quality and select a model, the dependence on manual labor cannot be removed, large-scale automated processing is therefore difficult, and because multiple models are needed for multiple scenarios, storage space and transmission bandwidth are wasted.
  • The embodiment of the present disclosure samples the noise of the eight neighboring pixels of a pixel to estimate the mean noise of the pixel itself. Assuming that the noise distributions in adjacent pixel areas are similar, it proposes calibrating the noise variance using each pixel's eight-neighborhood.
  • the compressed image restoration method provided by the embodiment of the present disclosure includes at least the following steps 110 to 150:
  • Step 110: perform high-pass filtering on the acquired original video frame x (equivalent to the original image) and compressed video frame x' (equivalent to the compressed image).
  • the original video frame x can be processed with a 4x4 mean filter to obtain the low-pass filtered frame z of the original video frame x; then the difference between the original video frame x and the low-pass filtered frame z is obtained to obtain the filtered original video frame y. In the same way, the corresponding compressed video frame y' can be obtained.
  • Step 120: calculate the residual video frame d between the filtered original video frame y and the filtered compressed video frame y', perform loss calibration in an eight-neighborhood manner, and obtain the label distribution image n.
  • the pixels in the residual video frame d are traversed, the pixels in the 4x4 sliding window are taken, and the mean value of the square of each pixel value is calculated.
  • Assuming the noise source is zero-mean Gaussian noise and the noise distribution of adjacent pixels is smooth, the sample variance of the 16 pixels is the maximum likelihood estimate (Maximum Likelihood Estimator) of the noise-distribution variance; this mean value is stored at the corresponding position of the label distribution image n, which is then output.
  • The embodiment of the present disclosure assumes that the noise distributions of adjacent pixels are the same, and samples the noise of a pixel's eight neighbors to estimate the mean noise of the pixel itself, thereby overcoming, through parameter estimation, the difficulty that compression noise has no ground-truth probability-model labels.
  • Step 130: using a dataset covering variable encoders and variable quantization parameters, input the compressed video frame y' and train the loss detection model under the supervision of the noise distribution frame n.
  • Step 140: restore the compressed video frame y' with any non-blind restoration algorithm to obtain the erroneous pre-restored image y".
  • Step 150: transparently blend the compressed video frame y' and the pre-restored image y" with reference to the label distribution image n; the blended image is the restored original video frame.
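Steps 140 and 150 can be strung together as a small sketch. A hypothetical stub stands in for the trained loss detection model of step 130, and the caller supplies the pre-restored frame y'' from any non-blind algorithm; both stand-ins are assumptions of this sketch.

```python
import numpy as np

def blend_pipeline(compressed, pre_restored, loss_model):
    """Steps 140-150: given the pre-restored frame y'' from any non-blind
    algorithm, predict the loss distribution n with the (trained) model,
    then transparency-blend y' and y'' weighted by n."""
    n = np.clip(loss_model(compressed), 0.0, 1.0)  # loss distribution in [0, 1]
    return pre_restored * n + compressed * (1.0 - n)

def stub_model(frame):
    """Hypothetical stand-in for the trained loss detection model:
    flags the right half of the frame as fully lost, the left half as lossless."""
    n = np.zeros_like(frame)
    n[:, frame.shape[1] // 2:] = 1.0
    return n

y_prime = np.full((2, 4), 100.0)    # compressed frame y'
y_dprime = np.full((2, 4), 120.0)   # pre-restored frame y''
restored = blend_pipeline(y_prime, y_dprime, stub_model)
```

Where the model predicts no loss the compressed pixels pass through untouched; where it predicts full loss the pre-restored pixels are used.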
  • The embodiment of the present disclosure proposes a video restoration method guided by no-reference subjective quality estimation, so that video restoration methods that were previously difficult to generalize can be used with different encoders, bit-rate control methods, and bit rates, improving the subjective quality of transcoded video across multiple encoders and multiple bit rates.
  • The embodiment of the present disclosure combines a quality estimation model with a restoration model, with the restoration result guided by the quality estimation model, thereby automating the repair of video compression loss. Using a single model also reduces storage and transmission costs.
  • the compressed image restoration method provided by the embodiments of the present disclosure may be applied in at least the following fields: the field of video transcoding, the field of image and video editing, and the field of image and video restoration.
  • the method includes but is not limited to the following scenarios: for video service providers, low-quality videos uploaded by users can be repaired, old over-compressed videos can be enhanced, and better video quality can be provided. For video production and secondary creation personnel, the quality of video materials can be repaired.
  • the present disclosure also provides a compressed image restoration apparatus, an electronic device, a computer-readable storage medium and a program, all of which can be used to implement any compressed image restoration method provided in the present disclosure; for details, refer to the corresponding technical solutions and descriptions in the method section.
  • Fig. 8 shows a schematic diagram of an apparatus for restoring a compressed image according to an embodiment of the present disclosure.
  • the compressed image restoration device of the embodiment of the present disclosure may include an image restoration part 80 , a loss determination part 81 and an original image determination part 82 .
  • the image restoration part 80 is configured to repair the compressed image through a preset non-blind restoration algorithm to obtain a pre-repaired image;
  • the loss determination part 81 is configured to input the compressed image into the trained loss detection model to obtain a corresponding loss distribution image;
  • the original image determination part 82 is configured to determine an original image based on the compressed image, the pre-repaired image and the loss distribution image.
  • the loss detection model is obtained by training with the compressed video frames corresponding to original video frames as input samples and the label distribution images corresponding to the compressed video frames as annotation samples, each label distribution image being determined from the residual video frame between the corresponding compressed video frame and the original video frame corresponding to that compressed video frame.
  • the original image determining part 82 includes: an image fusion sub-part configured to transparency-blend the compressed image and the pre-repaired image based on the loss distribution image to obtain the original image.
  • the training process of the loss detection model includes: determining at least one original video frame and a compressed video frame corresponding to each original video frame; determining the residual video frame of each compressed video frame according to each original video frame and the corresponding compressed video frame; determining the label distribution image according to each residual video frame; and training with each compressed video frame as an input sample and the label distribution image corresponding to each compressed video frame as an annotation sample to obtain the loss detection model.
  • the determining at least one original video frame and the compressed video frame corresponding to each original video frame includes: determining at least one original video and the compressed video corresponding to each original video; and randomly extracting at least one video frame from each original video as an original video frame, and extracting the compressed video frame corresponding to the original video frame from the corresponding compressed video.
  • the determining at least one original video and the compressed video corresponding to each original video includes: determining at least one original video; for each original video, randomly selecting a corresponding encoder and encoding strength; and video-encoding each original video according to the corresponding encoder and encoding strength to obtain the compressed video.
  • the determining the residual video frame of each compressed video frame according to each original video frame and the corresponding compressed video frame includes: preprocessing each original video frame and the corresponding compressed video frame; and calculating the difference between each preprocessed original video frame and the corresponding compressed video frame to obtain the residual video frame of each compressed video frame.
  • the preprocessing each original video frame and the corresponding compressed video frame includes: performing high-pass filtering on each original video frame and the corresponding compressed video frame.
  • the determining the label distribution image according to each residual video frame includes: for each residual video frame, respectively performing the following steps: determining the pixel area corresponding to each pixel position in the residual video frame; determining the feature value of the pixel position corresponding to each pixel area; and determining the label distribution image according to the feature value of each pixel position.
  • the determining the pixel area corresponding to each pixel position in the residual video frame includes: determining an image frame of preset size; and determining the pixel area corresponding to each pixel position as the region of the residual video frame enclosed by the image frame when the pixel position is at the centre of the image frame.
  • the pixel area corresponding to each pixel position may be obtained by sliding the image frame.
  • the determining the feature value of the pixel position corresponding to each pixel area includes: for each pixel area, calculating the mean of the squared values of the pixels it contains to obtain the feature value.
  • the determining the label distribution image according to the feature value of each pixel position includes: storing each feature value at the corresponding pixel position to obtain the label distribution image.
  • the functions or parts included in the apparatus provided in the embodiments of the present disclosure may be configured to execute the methods described in the above method embodiments, and for specific implementation, refer to the descriptions of the above method embodiments.
  • Embodiments of the present disclosure also provide a computer-readable storage medium, on which computer program instructions are stored, and the above-mentioned method is implemented when the computer program instructions are executed by a processor.
  • the computer readable storage medium may be a non-transitory computer readable storage medium.
  • An embodiment of the present disclosure also proposes an electronic device, including: a processor; a memory configured to store instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the above method.
  • An embodiment of the present disclosure also provides a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes the above method.
  • Electronic devices may be provided as terminals, servers, or other forms of devices.
  • FIG. 9 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • electronic device 800 may include one or more of the following components: processing component 802, memory 804, power supply component 806, multimedia component 808, audio component 810, input/output (Input/Output, I/O) interface 812 , sensor component 814 and communication component 816 .
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as those associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above method. Additionally, the processing component 802 may include one or more parts that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia part to facilitate interaction between the multimedia component 808 and the processing component 802 .
  • the memory 804 is configured to store various types of data to support operations at the electronic device 800 . Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like.
  • Memory 804 can be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disk.
  • the power supply component 806 provides power to various components of the electronic device 800 .
  • Power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 800 .
  • the multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD, Liquid Crystal Display) and a touch panel (TP, Touch Panel). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense a boundary of a touch or swipe action, but also detect duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC, Microphone).
  • When the electronic device 800 is in an operation mode, such as a call mode, a recording mode or a voice recognition mode, the microphone is configured to receive external audio signals. Received audio signals may be further stored in the memory 804 or sent via the communication component 816 .
  • the audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of electronic device 800 .
  • the sensor component 814 can detect the open/closed state of the electronic device 800 and the relative positioning of components, for example, the display and keypad of the electronic device 800; the sensor component 814 can also detect a change in position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • the sensor component 814 may also include an optical sensor, such as a Complementary Metal-Oxide Semiconductor (CMOS, Complementary Metal-Oxide-Semiconductor) or a Charge Coupled Device (CCD, Charge Coupled Device) image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access wireless networks based on communication standards, such as wireless fidelity (Wi-Fi), second-generation mobile communication technology (2G) or third-generation mobile communication technology (3G), or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC, Near Field Communication) part to facilitate short-range communication.
  • the NFC part can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, configured to perform the above method.
  • a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to implement the above method.
  • FIG. 10 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922 , which may include one or more processors, and a memory resource represented by memory 1932 configured to store instructions executable by processing component 1922 , such as application programs.
  • An application program stored in memory 1932 may include one or more portions each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above method.
  • Electronic device 1900 may also include a power supply component 1926 configured to perform power management of electronic device 1900, a wired or wireless network interface 1950 configured to connect electronic device 1900 to a network, and an input-output (I/O) interface 1958 .
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), Apple's graphical-user-interface-based operating system (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or similar systems.
  • a non-transitory computer-readable storage medium such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to implement the above method.
  • Embodiments of the present disclosure may be systems, methods and/or computer program products.
  • a computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of embodiments of the present disclosure.
  • a computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device.
  • a computer readable storage medium may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • the computer-readable storage medium may include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punched card or an in-groove raised structure on which instructions are stored, and any suitable combination of the foregoing.
  • computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., pulses of light through fiber optic cables), or transmitted electrical signals.
  • Computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or a network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device.
  • Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
  • Computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer via any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • electronic circuits, such as programmable logic circuits, field programmable gate arrays, or programmable logic arrays, can be personalized with state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions, thereby realizing various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus for realizing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing devices and/or other devices to work in a specific way, so that the computer-readable medium storing the instructions comprises an article of manufacture that includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in a flowchart or block diagram may represent a part, a program segment, or a portion of instructions that comprises one or more executable instructions for implementing the specified logical function.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified function or action, or by a combination of dedicated hardware and computer instructions.
  • the computer program product can be realized by hardware, software or a combination thereof.
  • the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK) and the like.
  • the compressed image is repaired by a preset non-blind repair algorithm to obtain a pre-restored image, and then the compressed image is input into a trained loss detection model to obtain a corresponding loss distribution image.
  • the original image is determined from the compressed image, the pre-inpainted image, and the loss distribution image.
  • the input samples are compressed video frames, and the labeled samples are determined according to the residual video frames corresponding to the compressed video frames and the original video frames.
  • the loss detection model obtained through training performs loss calibration directly on the compressed image, and the preliminarily repaired image is then corrected by the loss output by the model, thereby improving the restoration quality of the compressed image.
  • the present disclosure can implement loss calibration for different compressed images through a single loss detection model, which reduces storage and transmission costs.


Abstract

The present disclosure relates to a compressed image restoration method and apparatus, an electronic device, a storage medium and a program product. A compressed image is repaired by a preset non-blind restoration algorithm to obtain a pre-repaired image, and the compressed image is then input into a trained loss detection model to obtain a corresponding loss distribution image. An original image is determined according to the compressed image, the pre-repaired image and the loss distribution image. During training of the loss detection model, the input samples are compressed video frames, and the annotation samples are determined from the residual video frames between the corresponding compressed video frames and original video frames.

Description

Compressed image restoration method and apparatus, electronic device, storage medium and program product
Cross-reference to related applications
The present disclosure is based on the Chinese patent application with application No. 202111565590.9, filed on December 20, 2021 and entitled "Compressed image restoration method and apparatus, electronic device and storage medium", and claims priority to that Chinese patent application, the entire contents of which are hereby incorporated into the present disclosure by reference.
Technical field
The present disclosure relates to, but is not limited to, the field of computer technology, and in particular to a compressed image restoration method and apparatus, an electronic device, a storage medium and a program product.
Background
During video encoding, every image frame in a video is compressed to reduce the video's size. When restoring a compressed video, since the loss of each video frame is difficult to calibrate, related-art approaches to repairing compressed video frames perform blind denoising without calibrating the frame loss, which is excessively complex and denoises poorly.
Summary
The present disclosure provides a compressed image restoration method and apparatus, an electronic device, a storage medium and a program product.
According to a first aspect of the present disclosure, a compressed image restoration method is provided, including:
repairing a compressed image by a preset non-blind restoration algorithm to obtain a pre-repaired image;
inputting the compressed image into a trained loss detection model to obtain a corresponding loss distribution image; and
determining an original image according to the compressed image, the pre-repaired image and the loss distribution image,
wherein the loss detection model is trained with compressed video frames corresponding to original video frames as input samples and label distribution images corresponding to the compressed video frames as annotation samples, each label distribution image being determined from the residual video frame between the corresponding compressed video frame and the original video frame corresponding to that compressed video frame.
According to a second aspect of the present disclosure, a compressed image restoration apparatus is provided, including:
an image restoration part configured to repair a compressed image by a preset non-blind restoration algorithm to obtain a pre-repaired image;
a loss determination part configured to input the compressed image into a trained loss detection model to obtain a corresponding loss distribution image; and
an original image determination part configured to determine an original image according to the compressed image, the pre-repaired image and the loss distribution image,
wherein the loss detection model is trained with compressed video frames corresponding to original video frames as input samples and label distribution images corresponding to the compressed video frames as annotation samples, each label distribution image being determined from the residual video frame between the corresponding compressed video frame and the original video frame corresponding to that compressed video frame.
According to a third aspect of the present disclosure, an electronic device is provided, including: a processor; and a memory configured to store processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to execute the above compressed image restoration method.
According to a fourth aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored, and the computer program instructions, when executed by a processor, implement the above compressed image restoration method.
According to a fifth aspect of the present disclosure, a computer program product is provided, including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes operations configured to implement the above compressed image restoration method.
In the embodiments of the present disclosure, a compressed image is repaired by a preset non-blind restoration algorithm to obtain a pre-repaired image, and the compressed image is then input into a trained loss detection model to obtain a corresponding loss distribution image. An original image is determined according to the compressed image, the pre-repaired image and the loss distribution image. During training of the loss detection model, the input samples are compressed video frames, and the annotation samples are determined from the residual video frames between the corresponding compressed video frames and original video frames. The present disclosure performs loss calibration directly on the compressed image through the trained loss detection model, and then corrects the preliminarily repaired image with the loss output by the model, improving the restoration quality of the compressed image. Meanwhile, a single loss detection model suffices to calibrate the loss of different compressed images, reducing storage and transmission costs.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure. Other features and aspects of the present disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure.
Fig. 1 shows a flowchart of a compressed image restoration method according to an embodiment of the present disclosure;
Fig. 2 shows a schematic diagram of a process of determining a loss distribution image according to an embodiment of the present disclosure;
Fig. 3 shows a flowchart of a process of training a loss detection model according to an embodiment of the present disclosure;
Fig. 4 shows a schematic diagram of a video frame preprocessing process according to an embodiment of the present disclosure;
Fig. 5 shows a schematic diagram of the pixel area corresponding to a pixel position according to an embodiment of the present disclosure;
Fig. 6 shows a schematic diagram of a process of determining a pixel area according to an embodiment of the present disclosure;
Fig. 7 shows a schematic diagram of determining an original image according to an embodiment of the present disclosure;
Fig. 8 shows a schematic diagram of a compressed image restoration apparatus according to an embodiment of the present disclosure;
Fig. 9 shows a schematic diagram of an electronic device according to an embodiment of the present disclosure;
Fig. 10 shows a schematic diagram of another electronic device according to an embodiment of the present disclosure.
Detailed description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The word "exemplary" as used herein means "serving as an example, embodiment or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as superior to or better than other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. In addition, the term "at least one" herein indicates any one of multiple items or any combination of at least two of multiple items; for example, including at least one of A, B and C may indicate including any one or more elements selected from the set consisting of A, B and C.
In addition, numerous implementation details are given in the following detailed description in order to better illustrate the present disclosure. Those skilled in the art should understand that the present disclosure can also be practised without certain implementation details. In some embodiments, methods, means, elements and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure.
Fig. 1 shows a flowchart of a compressed image restoration method according to an embodiment of the present disclosure. In a possible implementation, the compressed image restoration method of the embodiment of the present disclosure may be executed by an electronic device such as a terminal device or a server. The terminal device may be a mobile or fixed terminal such as user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device or a wearable device. The server may be a single server or a server cluster composed of multiple servers. Any electronic device may implement the compressed image restoration method by its processor invoking computer-readable instructions stored in a memory.
The embodiments of the present disclosure can be applied to any scenario of repairing a compressed image, for example, repairing an image obtained by compressing a single image, or repairing each video frame in turn after video compression. In some implementations, the embodiments of the present disclosure can also be used to restore other lossy images without loss distribution calibration.
As shown in Fig. 1, the compressed image restoration method of the embodiment of the present disclosure may include the following steps S10 to S30.
Step S10: repair a compressed image by a preset non-blind restoration algorithm to obtain a pre-repaired image.
In a possible implementation, the compressed image is a lossy image without loss distribution calibration; it may be a compressed image obtained by compressing a single image, or any video frame in a compressed video obtained by encoding and compressing a video. Here, "without loss distribution calibration" means the loss distribution of the compressed image is not known in advance. After the compressed image is acquired, it is first repaired by a preset non-blind restoration algorithm to obtain the pre-repaired image. Optionally, the preset non-blind restoration algorithm may be any non-blind restoration algorithm, for example, denoising the compressed image by feeding it into DnCNN (Denoising Convolutional Neural Network) or CBDNet (Convolutional Blind Denoising Network).
In one implementation, a non-blind restoration algorithm performs image repair under the condition that the loss distribution of the compressed image is known, whereas in the embodiments of the present disclosure the loss distribution of the compressed image is unknown. To avoid missing some regions of the compressed image during repair, all regions of the compressed image are repaired, i.e. both the regions that need repair and those that do not. The pre-repaired image is therefore an erroneous image obtained by repairing every pixel position of the compressed image.
Step S20: input the compressed image into a trained loss detection model to obtain a corresponding loss distribution image.
In a possible implementation, the loss distribution of the compressed image can be predicted by inputting the compressed image into the loss detection model to obtain the loss distribution image. The loss distribution image has the same size as the compressed image, and the pixel at each position of the loss distribution image characterizes the loss strength of the pixel at the corresponding position of the compressed image. The loss detection model is trained with compressed video frames corresponding to original video frames as input samples and the label distribution images corresponding to the compressed video frames as annotation samples. Optionally, each label distribution image may be determined from the residual video frame between the corresponding compressed video frame and the original video frame corresponding to that compressed video frame.
The present disclosure performs loss calibration directly on the compressed image through the trained loss detection model, and then corrects the preliminarily repaired image with the loss output by the model, improving the restoration quality of the compressed image. Meanwhile, a single loss detection model suffices to calibrate the loss of different compressed images, reducing storage and transmission costs.
Fig. 2 shows a schematic diagram of a process of determining a loss distribution image according to an embodiment of the present disclosure. As shown in Fig. 2, after the compressed image 20 with uncalibrated loss distribution is determined, the compressed image 20 can be input into the loss detection model 21, which automatically calibrates the loss of the compressed image 20 and directly outputs the loss distribution image 22. The loss detection model can calibrate the loss distribution of the compressed image 20 accurately and quickly.
Fig. 3 shows a flowchart of a process of training a loss detection model according to an embodiment of the present disclosure. As shown in Fig. 3, in the embodiment of the present disclosure, the training process of the loss detection model may include the following steps S31 to S34.
Step S31: determine at least one original video frame and a compressed video frame corresponding to each original video frame.
In a possible implementation, an original video frame is an image that has not been compressed by video encoding or the like, and can be randomly extracted from an uncompressed original video. Each original video frame has one corresponding compressed video frame, which can be extracted from the compressed video obtained by compressing the original video. The position of the original video frame in the original video is the same as the position of the corresponding compressed video frame in the compressed video. For example, when the original video frame is the i-th frame of the original video, the compressed video frame is the i-th frame of the compressed video obtained by compressing the original video.
Optionally, in the embodiment of the present disclosure, the process of determining at least one original video frame and the compressed video frame corresponding to each original video frame may include determining at least one original video and the compressed video corresponding to each original video, randomly extracting at least one video frame from each original video as an original video frame, and extracting the compressed video frame corresponding to the original video frame from the corresponding compressed video. The corresponding compressed video frame is the video frame at the position, in the compressed video, where the original video frame is located. The original video and the corresponding compressed video may be determined by first determining at least one uncompressed original video and, for each original video, randomly selecting a corresponding encoder and encoding strength. In some implementations, each original video is video-encoded based on its corresponding encoder and encoding strength to obtain the corresponding compressed video; that is, the original video is encoded by the corresponding encoder at the corresponding encoding strength to obtain the compressed video corresponding to the original video.
Step S32: determine the residual video frame of each compressed video frame according to each original video frame and the corresponding compressed video frame.
In a possible implementation, after the multiple original video frames and the compressed video frame corresponding to each original video frame are determined, the residual video frame of each compressed video frame can be determined from each original video frame and the corresponding compressed video frame. Optionally, the residual video frame may be determined by directly computing the difference between each original video frame and its compressed video frame. For example, when the original video frame is represented by a matrix X and the compressed video frame by a matrix Y, the residual video frame can be represented by the difference between X and Y.
Optionally, for some special application scenarios, each original video frame and the corresponding compressed video frame may first be preprocessed, and the difference between each preprocessed original video frame and the corresponding compressed video frame is then computed to obtain the residual video frame of each compressed video frame. For example, since low-frequency signal noise has little effect on human vision and video encoding acts on high-frequency signals, when the compressed video frame is a frame of a compressed video obtained by video-encoding the original video, the preprocessing may be high-pass filtering of each original video frame and the corresponding compressed video frame. This filtering removes the low-frequency signals in the original and compressed video frames and retains the high-frequency signals that matter more to human vision, simplifying computation while ensuring the accuracy of the extracted residual video frames.
In some implementations, the original video frame and the compressed video frame may be high-pass filtered in the same way or in different ways. Any video frame may be filtered by feeding it directly into a high-pass filter to remove the low-frequency signals it contains and obtain the processed video frame. Alternatively, the video frame may be fed into a low-pass filter to obtain a low-pass video frame with the high-frequency signals removed, and the low-pass video frame is then subtracted from the input video frame to complete the preprocessing.
Fig. 4 shows a schematic diagram of a video frame preprocessing process according to an embodiment of the present disclosure. As shown in Fig. 4, for an original video frame X, the original video frame X can be fed directly into the low-pass filter 40 to obtain the low-pass video frame Z, and the low-pass video frame Z is then subtracted from the original video frame X to obtain the preprocessed original video frame X', completing the preprocessing. Optionally, the low-pass filter 40 may be a mean low-pass filter of preset size, for example 4×4. The compressed video frame is preprocessed in the same way as the original video frame, and the processed compressed video frame can likewise be obtained by the method shown in Fig. 4.
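The preprocessing above (Fig. 4) keeps only high-frequency content by subtracting a mean low-pass frame from the input. A minimal numpy sketch of this idea follows, assuming a 4×4 mean filter with edge-replication padding and a simple window alignment for the even-sized kernel (the disclosure fixes neither detail); the function names are illustrative only:

```python
import numpy as np

def box_lowpass(frame, k=4):
    """Mean (box) low-pass filter; edges handled by replication padding."""
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    h, w = frame.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def highpass(frame, k=4):
    """High-pass preprocessing as in Fig. 4: X' = X - Z, with Z the low-pass frame."""
    return frame - box_lowpass(frame, k)
```

On a constant frame the low-pass output equals the input, so the high-pass result is zero everywhere, matching the intent of discarding low-frequency content.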
步骤S33、根据每个所述残差视频帧确定标注分布图像。
在一种可能的实现方式中,可以在确定每个压缩视频帧对应的残差视频帧后,根据对应的残差视频帧确定每个压缩视频的标注分布图像。
其中,对于每个残差视频帧,确定标注分布图像的方式可以包括:确定残差视频帧中每个像素位置对应的像素区域;确定每个所述像素区域对应的像素位置的特征值;根据每个像素位置的特征值确定标注分布图像。
可选地,残差视频帧中每个像素位置对应的像素区域尺寸相同,可以预先设定,例如可以为3×3。在一种实现方式中,每个像素位置在所述像素区域中的特定位置,可以 确定每个像素位置对应的像素区域为以所述像素位置为左上角,且尺寸预设的区域。在另一种实现方式中,每个像素位置在所述像素区域中的特定位置还可以确定每个像素位置对应的像素区域为以所述像素位置为中心位置,且尺寸预设的区域。
可选地,在预先设定像素位置为对应像素区域的中间位置时,确定残差视频帧中每个像素位置对应的像素区域可以为确定预设尺寸的图像框,确定每个像素位置对应的像素区域为像素位置在图像框中心位置时,图像框中包括残差视频帧区域。
图5示出根据本公开实施例的一种像素位置对应像素区域的示意图。如图5所示,在确定残差视频帧50后,通过预设尺寸的图像框51在残差视频帧50中确定每个像素位置对应的像素区域。其中,在确定每个像素位置在对应像素区域的中间位置的情况下,预设尺寸的长和宽都设定为奇数,例如图像框51的尺寸可以为3×3。在确定每个像素位置在对应像素区域的左上角、右下角等特定边缘位置的情况下,预设尺寸的长和宽可以为奇数或偶数,例如图像框51的尺寸可以为3×3或4×4。
以确定每个像素位置在对应像素区域的中间位置为例进行说明。在需要确定像素值为111的像素位置对应的像素区域时,可以将预设尺寸的图像框51移动至正中间位置为像素值为111的位置,并将此时图像框51中区域作为该像素值为111的像素位置对应的像素区域。可选地,当需要确定对应像素区域的像素位置位于残差视频帧边缘,使像素位置在图像框51中心时图像框51中存在空白位置时,可以通过复制残差视频帧50边缘的方式填充图像框51内的空白位置。
在一些实施方式中,每个像素位置对应的像素区域可以通过滑动图像框的方式获取。也就是说,还可以通过固定尺寸的图像框以预设步长1在残差视频帧上滑动,得到每个像素位置对应的像素区域。
图6示出根据本公开实施例的一种确定像素区域过程的示意图,如图6所示,在确定残差视频帧50后,通过预设尺寸的图像框51在残差视频帧50中滑动确定每个像素位置对应的像素区域。其中,由于在需要确定对应像素区域的像素位置位于残差视频帧50边缘时,会使像素位置在图像框51中心时图像框51中存在空白位置,可以先根据图像框51的尺寸复制残差视频帧50边缘,得到扩张图像52,以确保残差视频帧50中每个像素位置都能完整地获取到对应的像素区域。例如,当图像框51的尺寸为3×3时,可以将残差视频帧50的边缘分别复制一次得到扩张图像52,当图像框51的尺寸为5×5时,可以将残差视频帧50的边缘分别复制两次得到扩张图像52。
在得到扩张图像52后,可以从扩张图像52的预设位置,例如左上角、右上角开始以预设步长1滑动图像框51,以确定每一次滑动后图像框51正中心的像素位置对应的像素区域。
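上述"先复制边缘得到扩张图像、再取每个像素位置对应像素区域"的过程可示意如下。numpy 的边缘复制填充与本文复制残差视频帧边缘的做法一致,区域尺寸取 3×3 仅为示例假设:

```python
import numpy as np

def pixel_region(frame: np.ndarray, i: int, j: int, size: int = 3) -> np.ndarray:
    """取以像素位置 (i, j) 为中心、尺寸为 size×size 的像素区域;
    越界部分通过复制帧边缘填充(相当于先构造扩张图像52)。"""
    r = size // 2
    expanded = np.pad(frame, r, mode="edge")   # 扩张图像:边缘各复制 r 次
    return expanded[i:i + size, j:j + size]    # 扩张图像坐标系下的对应窗口
```

当 size 为 5 时,`np.pad(frame, 2, mode="edge")` 即对应"边缘分别复制两次"的情况。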
在一种可能的实现方式中,对于每个压缩视频帧对应的残差视频帧,在通过上述方式确定其中每个像素位置的像素区域后,可以根据相应像素区域中的全部像素值计算得到该像素位置的特征值。可选地,对于每个像素区域,可以通过计算相应像素区域中包括的各像素的平方均值得到特征值。
以压缩视频帧中噪声来源为0均值的高斯噪声,且临近像素噪声分布平滑的情况为例进行说明。由于每个像素区域中的平方均值是噪声分布方差的最大似然估计,可以将像素区域中各像素的平方均值作为特征值。例如,当像素位置i对应的像素区域尺寸为3×3时,分别计算像素位置i以及周围八个相邻像素位置中像素值的平方,求和后再除以9得到像素位置i的特征值。或者,还可以采用其他计算方式处理每个像素区域中的像素值,得到像素区域的中间像素位置对应的特征值。
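平方均值特征值的计算可以用如下代码示意。在 0 均值高斯噪声且邻域噪声分布平滑的假设下,该值是噪声方差的最大似然估计;区域尺寸与测试数值均为演示假设:

```python
import numpy as np

def feature_value(region: np.ndarray) -> float:
    """像素区域内各像素值平方的均值:
    对 3×3 区域即九个像素值的平方求和后除以 9。"""
    return float(np.mean(region.astype(np.float64) ** 2))
```

对于服从 N(0, σ²) 的残差区域,该特征值应接近 σ²。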
可选地,在确定当前残差视频帧中每个像素位置的特征值后,可以将每个特征值存入对应的像素位置,得到标注分布图像。例如先创建一个尺寸与残差视频帧相同的空白图像,将残差视频帧中每个像素位置对应的特征值写入该空白图像的对应像素位置,得到标注分布图像。
步骤S34、将每个所述压缩视频帧作为输入样本,每个所述压缩视频帧对应的标注分布图像作为标注样本进行训练,得到损失检测模型。
在一种可能的实现方式中,在通过上述步骤确定每个压缩视频帧对应的标注分布图像后,可以根据每个压缩视频帧和对应的标注分布图像,创建用于训练损失检测模型的训练集。在一些实施方式中,获取训练集中的压缩视频帧作为损失检测模型的输入样本,并将压缩视频帧对应的标注分布图像作为输入样本的标注样本,将标注样本与损失检测模型的输出进行对比得到模型损失,并利用模型损失调节损失检测模型的参数,最终得到训练好的损失检测模型。
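上述"将模型输出与标注分布图像对比得到模型损失并调节参数"的训练过程,可用一个单参数玩具模型示意。实际损失检测模型为神经网络,此处的线性模型 y=w·x、MSE 损失与学习率均为演示假设:

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = [rng.random((4, 4)) for _ in range(3)]   # 模拟压缩视频帧(输入样本)
labels = [0.5 * x for x in inputs]                # 模拟对应的标注分布图像(标注样本)

def train(inputs, labels, lr=0.1, epochs=200):
    w = 0.0                                       # 待学习参数
    for _ in range(epochs):
        for x, n in zip(inputs, labels):
            pred = w * x                          # 模型输出
            grad = 2.0 * np.mean((pred - n) * x)  # MSE 模型损失对 w 的梯度
            w -= lr * grad                        # 利用模型损失调节参数
    return w

w = train(inputs, labels)                         # 收敛到标注所隐含的 0.5
```

真实训练中梯度由深度学习框架自动求导得到,流程与此示意一致。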
在本公开实施例中,上述训练损失检测模型的方法能够通过原始视频帧和压缩视频帧之间的残差视频帧,准确地得到原始视频帧压缩过程中的损失分布,并训练得到能够准确预测压缩视频帧损失分布的损失检测模型。因此,压缩图像在输入损失检测模型后,损失检测模型能够准确地检测到压缩图像的损失分布,并输出表征压缩图像损失分布的损失分布图像。
步骤S30、根据所述压缩图像、所述预修复图像和所述损失分布图像确定原始图像。
在一种可能的实现方式中,在确定压缩图像,对压缩图像盲修复得到的预修复图像,以及表征压缩图像损失分布的损失分布图像后,根据上述三种图像确定原始图像。其中,压缩图像在压缩过程中不同像素位置的损失强度不同,即不同位置像素需要不同的修复力度进行修复,例如对于不存在损失的像素不需要进行修复,较小损失的像素需要较轻力度的修复,损失较大的像素需要较大力度的修复。而预修复图像为通过任意非盲修复算法修复压缩图像后得到的图像,其中每个像素均被相同的修复力度进行修复。因此,需要通过表征压缩图像损失分布即不同像素位置损失情况的损失分布图像,调整压缩图像和预修复图像后合并,得到原始图像。
可选地,原始图像的确定方式可以为基于损失分布图像,对所述压缩图像和所述预修复图像进行透明度混合得到原始图像。透明度混合的方式可以为计算压缩图像和预修复图像之间的加权和,得到原始图像。其中,可以将损失分布图像作为预修复图像的权重,再通过与损失分布图像尺寸相同且其中每个像素值均为1的矩阵减去损失分布图像,得到逆损失分布图像,将逆损失分布图像作为压缩图像的权重,计算预修复图像和压缩图像之间的加权和得到原始图像。可选地,损失分布图像中每个像素值的取值范围为[0,1]。例如,在损失分布图像表征为矩阵N,压缩图像表征为矩阵P,预修复图像表征为矩阵Q的情况下,原始图像可以表征为Q×N+P×(1-N)。
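透明度混合(加权和)对应原始图像 ≈ Q×N+P×(1-N),可用如下代码示意;矩阵数值仅为演示假设:

```python
import numpy as np

def alpha_blend(compressed: np.ndarray, prerepaired: np.ndarray,
                loss_map: np.ndarray) -> np.ndarray:
    """以损失分布图像N为预修复图像Q的权重、(1-N)为压缩图像P的权重求加权和。"""
    return prerepaired * loss_map + compressed * (1.0 - loss_map)
```

N 处为 0 的像素保留压缩图像原值(无需修复),N 处为 1 的像素完全采用预修复结果。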
图7示出根据本公开实施例的一种确定原始图像的示意图。如图7所示,在确定压缩图像71、对应的预修复图像70以及对应的损失分布图像72后,通过损失分布图像72确定逆损失分布图像73。在一些实施方式中,将损失分布图像72作为预修复图像70的权重,逆损失分布图像73作为压缩图像71的权重,对预修复图像70和压缩图像71进行透明度混合,得到原始图像74。也就是说,计算损失分布图像72和预修复图像70的乘积,以及逆损失分布图像73和压缩图像71的乘积,最后将两个乘积相加得到原始图像74。
本公开实施例先通过预设的非盲修复算法对压缩图像进行初步修正,再通过训练得到的损失检测模型直接对压缩图像进行损失分布标定,最终基于损失检测模型标定的损失分布对初步修复后的压缩图像进行修正,提升了压缩图像的修复质量。同时,本公开通过一个损失检测模型即可实现对不同的压缩图像进行损失分布标定,减少了存储和传输成本。
下面结合一个具体实施例对上述压缩图像修复方法进行说明,然而值得注意的是,该具体实施例仅是为了更好地说明本公开,并不构成对本公开的不当限定。
由于视频压缩噪声存在标定与估计的困难,当前已有的视频修复方法,多基于单个编码器的单个量化参数(Quantization Parameter,QP)进行修复,难以泛化至不同的编码器、不同的码率控制算法与不同的量化参数。
相关技术采用固定量化参数修复模型进行压缩视频修复,在实际使用过程中难以泛化,需要用户手动调节视频质量并选择模型,无法摆脱对人工的依赖,因而难以大规模自动化处理;而且因需针对多个场景使用多个模型,浪费储存空间和传输带宽。
针对压缩噪声无真实概率模型标定的困难,本公开实施例考虑采样原始像素的八邻域像素的噪声,用于估计原始像素点本身的噪声均值。假定临近像素区域噪声分布近似,提出利用像素八邻域对噪声方差进行标定。本公开实施例提供的压缩图像修复方法至少包括以下步骤110至步骤150:
步骤110,对获取的原始视频帧x(相当于原始图像)以及压缩视频帧x’(相当于压缩图像)进行高通滤波。
因低频损失难以判断,且人眼难以察觉,故首先对原始视频帧x和压缩视频帧x’进行高通滤波,得到滤波后的原始视频帧y以及压缩视频帧y’。在实施中,可以使用4×4的均值滤波器处理原始视频帧x,得到原始视频帧x的低通滤波帧z;再对原始视频帧x和低通滤波帧z求差,得到滤波后的原始视频帧y。同样的方式可以得到对应的压缩视频帧y’。
步骤120,计算经过滤波得到的原始视频帧y与压缩视频帧y’之间的残差视频帧d,用八邻域方式进行损失标定,得到标注分布图像n。
在实施中,遍历残差视频帧d的像素,取4×4的滑动窗口内的像素,计算窗口内各像素值平方的均值。在假定噪声来源为0均值的高斯噪声,且临近像素噪声分布平滑的前提下,此16个像素的平方均值即为该噪声分布方差的最大似然估计(Maximum Likelihood Estimate),将此均值存入标注分布图像n的对应位置,输出标注分布图像n。
本公开实施例提出临近像素噪声分布相同的假设,并采样原始像素八邻域像素的噪声,用于估计像素点本身的噪声均值,从而通过估计模型参数的方式克服了压缩噪声无真实概率模型标注的困难。
步骤130,使用可变编码器可变量化参数的数据集,在标注分布图像n的监督下输入压缩视频帧y’以训练损失检测模型。
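可变编码器、可变量化参数数据集的配置采样可示意如下。其中编码器名称与量化参数取值范围均为假设示例,实际编码可交由 ffmpeg 等外部工具完成,此处不展开:

```python
import random

ENCODERS = ["libx264", "libx265", "libvpx-vp9"]   # 假设的候选编码器列表
QP_RANGE = list(range(18, 42))                    # 假设的量化参数(编码力度)范围

def sample_encoding_config(rng: random.Random):
    """为一段原始视频随机选取编码器与量化参数,构造可变参数训练数据。"""
    return rng.choice(ENCODERS), rng.choice(QP_RANGE)
```

对每段原始视频独立采样一组配置,即可覆盖多种编码器与多种码率的组合。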
步骤140,使用任意非盲修复算法对压缩视频帧y’进行修复,得到预修复图像y”。
步骤150,参照标注分布图像n对压缩视频帧y’和预修复图像y”进行透明度混合,得到混合图像作为修复的原始视频帧。
本公开实施例提出通过无参考主观质量估计引导的视频修复方法,使得原本泛化困难的视频修复方法得以在不同编码器、不同码率控制方法以及不同比特率下使用,在多种编码器多种码率下提升转码视频主观质量。本公开实施例将质量估计模型与修复模型相结合,由质量估计模型引导修复结果,进而使得视频压缩损失修复自动化。同时使用单一模型,减少了储存和传输成本。
本公开实施例提供的压缩图像修复方法可以在至少以下领域被应用:视频转码领域、图像视频编辑领域、图像视频修复领域。该方法包括但不限于以下场景:对视频服务供应商来说,可以修复用户上传的低质量视频,增强过度压缩的老视频,提供更好的视频质量。对视频生产与二次创作人员来说,可以修复视频素材质量。
可以理解,本公开提及的上述各个方法实施例,在不违背原理逻辑的情况下,均可以彼此相互结合形成结合后的实施例,限于篇幅,本公开不再赘述。本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
此外,本公开还提供了压缩图像修复装置、电子设备、计算机可读存储介质、程序,上述均可用来实现本公开提供的任一种压缩图像修复方法,相应技术方案和描述参见方法部分的相应记载。
图8示出根据本公开实施例的一种压缩图像修复装置的示意图。如图8所示,本公开实施例的压缩图像修复装置可以包括图像修复部分80、损失确定部分81和原始图像确定部分82。
图像修复部分80,配置为通过预设的非盲修复算法对压缩图像进行修复,得到预修复图像;
损失确定部分81,配置为将所述压缩图像输入训练得到的损失检测模型中,得到对应的损失分布图像;
原始图像确定部分82,配置为根据所述压缩图像、所述预修复图像和所述损失分布图像确定原始图像,
其中,所述损失检测模型通过将原始视频帧对应的压缩视频帧作为输入样本,所述压缩视频帧对应的标注分布图像作为标注样本训练得到,每个所述标注分布图像通过对应的所述压缩视频帧,和所述压缩视频帧对应的原始视频帧的残差视频帧确定。
在一种可能的实现方式中,所述原始图像确定部分82包括:图像融合子部分,配置为基于所述损失分布图像,对所述压缩图像和所述预修复图像进行透明度混合得到原始图像。
在一种可能的实现方式中,所述损失检测模型的训练过程包括:确定至少一个原始视频帧,以及每个所述原始视频帧对应的压缩视频帧;根据每个所述原始视频帧和对应的所述压缩视频帧,确定每个所述压缩视频帧的残差视频帧;根据每个所述残差视频帧确定标注分布图像;将每个所述压缩视频帧作为输入样本,每个所述压缩视频帧对应的标注分布图像作为标注样本进行训练,得到损失检测模型。
在一种可能的实现方式中,所述确定至少一个原始视频帧,以及每个所述原始视频帧对应的压缩视频帧包括:确定至少一个原始视频,以及每个所述原始视频对应的压缩视频;从各所述原始视频中随机抽取至少一个视频帧作为原始视频帧,并在对应的压缩视频中抽取所述原始视频帧对应的压缩视频帧。
在一种可能的实现方式中,所述确定至少一个原始视频,以及每个所述原始视频对应的压缩视频包括:确定至少一个原始视频;对于每个所述原始视频,随机选取对应的编码器和编码力度;根据对应的编码器和编码力度,对每个所述原始视频进行视频编码得到压缩视频。
在一种可能的实现方式中,所述根据每个所述原始视频帧和对应的所述压缩视频帧,确定每个所述压缩视频帧的残差视频帧包括:对每个所述原始视频帧和对应的所述压缩视频帧进行预处理;计算预处理后每个所述原始视频帧和对应的所述压缩视频帧之间的差,得到每个所述压缩视频帧的残差视频帧。
在一种可能的实现方式中,所述对每个所述原始视频帧和对应的所述压缩视频帧进行预处理包括:对每个所述原始视频帧和对应的所述压缩视频帧进行高通滤波。
在一种可能的实现方式中,所述根据每个所述残差视频帧确定标注分布图像包括:对于每个所述残差视频帧,分别执行以下步骤:确定所述残差视频帧中每个像素位置对应的像素区域;确定每个所述像素区域对应的像素位置的特征值;根据每个所述像素位置的特征值确定标注分布图像。
在一种可能的实现方式中,所述确定所述残差视频帧中每个像素位置对应的像素区域包括:确定预设尺寸的图像框;确定每个所述像素位置对应的像素区域为所述像素位置在所述图像框的中心位置时,所述图像框中包括的残差视频帧区域。
在一种可能的实现方式中,每个所述像素位置对应的像素区域可以通过滑动所述图像框的方式获取。
在一种可能的实现方式中,所述确定每个所述像素区域对应的像素位置的特征值包括:对于每个所述像素区域,计算其中包括的各像素的平方均值得到特征值。
在一种可能的实现方式中,所述根据每个所述像素位置的特征值确定标注分布图像包括:将每个所述特征值存入对应的所述像素位置,得到标注分布图像。
在一些实施例中,本公开实施例提供的装置具有的功能或包含的部分可以配置为执行上文方法实施例描述的方法,其具体实现可以参照上文方法实施例的描述。
本公开实施例还提出一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述方法。计算机可读存储介质可以是非易失性计算机可读存储介质。
本公开实施例还提出一种电子设备,包括:处理器;配置为存储处理器可执行指令的存储器;其中,所述处理器被配置为调用所述存储器存储的指令,以执行上述方法。
本公开实施例还提供了一种计算机程序产品,包括计算机可读代码,或者承载有计算机可读代码的非易失性计算机可读存储介质,当所述计算机可读代码在电子设备的处理器中运行时,所述电子设备中的处理器执行上述方法。
电子设备可以被提供为终端、服务器或其它形态的设备。
图9示出根据本公开实施例的一种电子设备800的框图。例如,电子设备800可以是移动电话、计算机、数字广播终端、消息收发设备、游戏控制台、平板设备、医疗设备、健身设备、个人数字助理等终端。
参照图9,电子设备800可以包括以下一个或多个组件:处理组件802、存储器804、电源组件806、多媒体组件808、音频组件810、输入/输出(Input/Output,I/O)的接口812,传感器组件814以及通信组件816。
处理组件802通常控制电子设备800的整体操作,诸如与显示、电话呼叫、数据通信、相机操作和记录操作相关联的操作。处理组件802可以包括一个或多个处理器820来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件802可以包括一个或多个部分,便于处理组件802和其他组件之间的交互。例如,处理组件802可以包括多媒体部分,以方便多媒体组件808和处理组件802之间的交互。
存储器804被配置为存储各种类型的数据以支持在电子设备800的操作。这些数据的示例包括用于在电子设备800上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器804可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM,Static Random-Access Memory),电可擦除可编程只读存储器(EEPROM,Electrically Erasable Programmable Read-Only Memory),可擦除可编程只读存储器(EPROM,Erasable Programmable Read-Only Memory),可编程只读存储器(PROM,Programmable Read-Only Memory),只读存储器(ROM,Read-Only Memory),磁存储器,快闪存储器,磁盘或光盘。
电源组件806为电子设备800的各种组件提供电力。电源组件806可以包括电源管理系统,一个或多个电源,及其他与为电子设备800生成、管理和分配电力相关联的组件。
多媒体组件808包括在所述电子设备800和用户之间提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD,Liquid Crystal Display)和触摸面板(TP,Touch Panel)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件808包括一个前置摄像头和/或后置摄像头。当电子设备800处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统,或具有焦距和光学变焦能力。
音频组件810被配置为输出和/或输入音频信号。例如,音频组件810包括一个麦克风(MIC,Microphone),当电子设备800处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器804或经由通信组件816发送。在一些实施例中,音频组件810还包括一个扬声器,用于输出音频信号。
I/O接口812为处理组件802和外围接口部分之间提供接口,上述外围接口部分可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件814包括一个或多个传感器,用于为电子设备800提供各个方面的状态评估。例如,传感器组件814可以检测到电子设备800的打开/关闭状态、组件的相对定位,例如所述组件为电子设备800的显示器和小键盘,传感器组件814还可以检测电子设备800或电子设备800一个组件的位置改变,用户与电子设备800接触的存在或不存在,电子设备800方位或加速/减速和电子设备800的温度变化。传感器组件814可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件814还可以包括光传感器,如互补金属氧化物半导体(CMOS,Complementary Metal-Oxide-Semiconductor)或电荷耦合装置(CCD,Charge Coupled Device)图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件814还可以包括加速度传感器、陀螺仪传感器、磁传感器、压力传感器或温度传感器。
通信组件816被配置为便于电子设备800和其他设备之间有线或无线方式的通信。电子设备800可以接入基于通信标准的无线网络,如无线网络(Wi-Fi,Wireless Fidelity),第二代移动通信技术(2G,The 2nd Generation)或第三代移动通信技术(3G,The 3rd Generation),或它们的组合。在一个示例性实施例中,通信组件816经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件816还包括近场通信(NFC,Near Field Communication)部分,以促进短程通信。例如,在NFC部分可基于射频识别(RFID,Radio Frequency Identification)技术,红外数据协会(IrDA,Infrared Data Association)技术,超宽带(UWB,Ultra Wide Band)技术,蓝牙(BT,Bluetooth)技术和其他技术来实现。
在示例性实施例中,电子设备800可以被一个或多个应用专用集成电路(ASIC,Application Specific Integrated Circuit)、数字信号处理器(DSP,Digital Signal Processor)、数字信号处理设备(DSPD,Digital Signal Processing Device)、可编程逻辑器件(PLD,Programmable Logic Device)、现场可编程门阵列(FPGA,Field Programmable Gate Array)、控制器、微控制器、微处理器或其他电子元件实现,配置为执行上述方法。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器804,上述计算机程序指令可由电子设备800的处理器820执行以完成上述方法。
图10示出根据本公开实施例的一种电子设备1900的框图。例如,电子设备1900可以被提供为一服务器。参照图10,电子设备1900包括处理组件1922,可以包括一个或多个处理器,以及由存储器1932所代表的存储器资源,配置为存储可由处理组件1922执行的指令,例如应用程序。存储器1932中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的部分。此外,处理组件1922被配置为执行指令,以执行上述方法。
电子设备1900还可以包括一个电源组件1926被配置为执行电子设备1900的电源管理,一个有线或无线网络接口1950被配置为将电子设备1900连接到网络,和一个输入输出(I/O)接口1958。电子设备1900可以操作基于存储在存储器1932的操作系统,例如微软服务器操作系统(Windows Server TM),苹果公司推出的基于图形用户界面操作系统(Mac OS X TM),多用户多进程的计算机操作系统(Unix TM),自由和开放源代码的类Unix操作系统(Linux TM),开放源代码的类Unix操作系统(FreeBSD TM)或类似系统。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器1932,上述计算机程序指令可由电子设备1900的处理组件1922执行以完成上述方法。
本公开实施例可以是系统、方法和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质,其上载有用于使处理器实现本公开实施例的各个方面的计算机可读程序指令。
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是(但不限于)电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质可以包括:便携式计算机盘、硬盘、随机存取存储器(RAM,Random Access Memory)、只读存储器、可擦式可编程只读存储器(EPROM或闪存)、静态随机存取存储器、便携式压缩盘只读存储器(CD-ROM,Compact Disc Read-Only Memory)、数字多功能盘(DVD,Digital Versatile Disc)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。
用于执行本公开操作的计算机程序指令可以是汇编指令、指令集架构(ISA,Instruction Set Architecture)指令、机器指令、机器相关指令、伪代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言诸如Smalltalk、C++等,以及常规的过程式编程语言如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络,包括局域网(LAN,Local Area Network)或广域网(WAN,Wide Area Network)连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中,通过利用计算机可读程序指令的状态信息来个性化定制电子电路,例如可编程逻辑电路、现场可编程门阵列或可编程逻辑阵列,该电子电路可以执行计算机可读程序指令,从而实现本公开的各个方面。
这里参照根据本公开实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本公开的各个方面。应当理解,流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合,都可以由计算机可读程序指令实现。
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理器,从而生产出一种机器,使得这些指令在通过计算机或其它可编程数据处理装置的处理器执行时,产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中,这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作,从而,存储有指令的计算机可读介质则包括一个制造品,其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。
也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上,使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤,以产生计算机实现的过程,从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。
附图中的流程图和框图显示了根据本公开的多个实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个部分、程序段或指令的一部分,所述部分、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
该计算机程序产品可以通过硬件、软件或其结合的方式实现。在一个可选实施例中,所述计算机程序产品体现为计算机存储介质,在另一个可选实施例中,计算机程序产品体现为软件产品,例如软件开发包(Software Development Kit,SDK)等等。
以上已经描述了本公开的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实施例的原理、实际应用或对市场中的技术的改进,或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。
工业实用性
本公开实施例中,通过预设的非盲修复算法对压缩图像进行修复得到预修复图像,再将压缩图像输入训练得到的损失检测模型中得到对应的损失分布图像,根据压缩图像、预修复图像和损失分布图像确定原始图像。其中,损失检测模型训练过程中,输入样本为压缩视频帧,标注样本根据对应压缩视频帧与原始视频帧的残差视频帧确定。本公开通过训练得到的损失检测模型直接对压缩图像进行损失标定,再通过模型输出的损失对初步修复后的图像进行修正,提升了压缩图像的修复质量。同时,本公开通过一个损失检测模型即可实现对不同的压缩图像进行损失标定,减少了存储和传输成本。

Claims (27)

  1. 一种压缩图像修复方法,所述方法包括:
    通过预设的非盲修复算法对压缩图像进行修复,得到预修复图像;
    将所述压缩图像输入训练得到的损失检测模型中,得到对应的损失分布图像;
    根据所述压缩图像、所述预修复图像和所述损失分布图像确定原始图像,
    其中,所述损失检测模型通过将原始视频帧对应的压缩视频帧作为输入样本,所述压缩视频帧对应的标注分布图像作为标注样本训练得到,每个所述标注分布图像通过对应的所述压缩视频帧,和所述压缩视频帧对应的原始视频帧的残差视频帧确定。
  2. 根据权利要求1所述的方法,其中,所述根据所述压缩图像、所述预修复图像和所述损失分布图像确定原始图像,包括:
    基于所述损失分布图像,对所述压缩图像和所述预修复图像进行透明度混合得到所述原始图像。
  3. 根据权利要求1或2所述的方法,其中,所述损失检测模型的训练过程,包括:
    确定至少一个原始视频帧,以及每个所述原始视频帧对应的压缩视频帧;
    根据每个所述原始视频帧和对应的所述压缩视频帧,确定每个所述压缩视频帧的残差视频帧;
    根据每个所述残差视频帧确定所述标注分布图像;
    将每个所述压缩视频帧作为输入样本,每个所述压缩视频帧对应的标注分布图像作为标注样本进行训练,得到损失检测模型。
  4. 根据权利要求3所述的方法,其中,所述确定至少一个原始视频帧,以及每个所述原始视频帧对应的压缩视频帧,包括:
    确定至少一个原始视频,以及每个所述原始视频对应的压缩视频;
    从各所述原始视频中随机抽取至少一个视频帧作为所述至少一个原始视频帧,并在对应的压缩视频中抽取每个所述原始视频帧对应的压缩视频帧。
  5. 根据权利要求4所述的方法,其中,所述确定至少一个原始视频,以及每个所述原始视频对应的压缩视频,包括:
    确定至少一个原始视频;
    对于每个所述原始视频,随机选取对应的编码器和编码力度;
    根据所述对应的编码器和编码力度,对每个所述原始视频进行视频编码得到所述压缩视频。
  6. 根据权利要求3至5中任意一项所述的方法,其中,所述根据每个所述原始视频帧和对应的所述压缩视频帧,确定每个所述压缩视频帧的残差视频帧,包括:
    对每个所述原始视频帧和对应的所述压缩视频帧进行预处理;
    计算预处理后每个所述原始视频帧和对应的所述压缩视频帧之间的差,得到每个所述压缩视频帧的残差视频帧。
  7. 根据权利要求6所述的方法,其中,所述对每个所述原始视频帧和对应的所述压缩视频帧进行预处理包括:
    对每个所述原始视频帧和对应的所述压缩视频帧进行高通滤波。
  8. 根据权利要求3至7中任意一项所述的方法,其中,所述根据每个所述残差视频帧确定所述标注分布图像,包括:
    对于每个所述残差视频帧,分别执行以下步骤:
    确定所述残差视频帧中每个像素位置对应的像素区域;
    确定每个所述像素区域对应的像素位置的特征值;
    根据每个所述像素位置的特征值确定所述标注分布图像。
  9. 根据权利要求8所述的方法,其中,所述确定所述残差视频帧中每个像素位置对应的像素区域,包括:
    确定预设尺寸的图像框;
    确定每个所述像素位置对应的像素区域为所述像素位置在所述图像框的中心位置时,所述图像框中包括的残差视频帧区域。
  10. 根据权利要求9所述的方法,其中,每个所述像素位置对应的像素区域通过滑动所述图像框的方式获取。
  11. 根据权利要求8至10中任意一项所述的方法,其中,所述确定每个所述像素区域对应的像素位置的特征值,包括:
    确定每个所述像素区域所包括的各像素的平方均值得到所述特征值。
  12. 根据权利要求8至11中任意一项所述的方法,其中,所述根据每个所述像素位置的特征值确定所述标注分布图像,包括:
    将每个所述特征值存入对应的所述像素位置,得到所述标注分布图像。
  13. 一种压缩图像修复装置,所述装置包括:
    图像修复部分,配置为通过预设的非盲修复算法对压缩图像进行修复,得到预修复图像;
    损失确定部分,配置为将所述压缩图像输入训练得到的损失检测模型中,得到对应的损失分布图像;
    原始图像确定部分,配置为根据所述压缩图像、所述预修复图像和所述损失分布图像确定原始图像,
    其中,所述损失检测模型通过将原始视频帧对应的压缩视频帧作为输入样本,所述压缩视频帧对应的标注分布图像作为标注样本训练得到,每个所述标注分布图像通过对应的所述压缩视频帧,和所述压缩视频帧对应的原始视频帧的残差视频帧确定。
  14. 根据权利要求13所述的装置,其中,所述原始图像确定部分包括图像融合子部分,配置为基于所述损失分布图像,对所述压缩图像和所述预修复图像进行透明度混合得到所述原始图像。
  15. 根据权利要求13或14所述的装置,其中,所述损失检测模型的训练过程包括:
    确定至少一个原始视频帧,以及每个所述原始视频帧对应的压缩视频帧;根据每个所述原始视频帧和对应的所述压缩视频帧,确定每个所述压缩视频帧的残差视频帧;根据每个所述残差视频帧确定所述标注分布图像;将每个所述压缩视频帧作为输入样本,每个所述压缩视频帧对应的标注分布图像作为标注样本进行训练,得到损失检测模型。
  16. 根据权利要求15所述的装置,其中,所述确定至少一个原始视频帧,以及每个所述原始视频帧对应的压缩视频帧,包括:确定至少一个原始视频,以及每个所述原始视频对应的压缩视频;从各所述原始视频中随机抽取至少一个视频帧作为所述至少一个原始视频帧,并在对应的压缩视频中抽取每个所述原始视频帧对应的压缩视频帧。
  17. 根据权利要求16所述的装置,其中,所述确定至少一个原始视频,以及每个所述原始视频对应的压缩视频,包括:确定至少一个原始视频;对于每个所述原始视频,随机选取对应的编码器和编码力度;根据所述对应的编码器和编码力度,对每个所述原始视频进行视频编码得到所述压缩视频。
  18. 根据权利要求15至17任一项所述的装置,其中,所述根据每个所述原始视频帧和对应的所述压缩视频帧,确定每个所述压缩视频帧的残差视频帧,包括:
    对每个所述原始视频帧和对应的所述压缩视频帧进行预处理;计算预处理后每个所述原始视频帧和对应的所述压缩视频帧之间的差,得到每个所述压缩视频帧的残差视频帧。
  19. 根据权利要求18所述的装置,其中,所述对每个所述原始视频帧和对应的所述压缩视频帧进行预处理包括:对每个所述原始视频帧和对应的所述压缩视频帧进行高通滤波。
  20. 根据权利要求15至19任一项所述的装置,其中,所述根据每个所述残差视频帧确定所述标注分布图像,包括:对于每个所述残差视频帧,分别执行以下步骤:确定所述残差视频帧中每个像素位置对应的像素区域;确定每个所述像素区域对应的像素位置的特征值;根据每个所述像素位置的特征值确定所述标注分布图像。
  21. 根据权利要求20所述的装置,其中,所述确定所述残差视频帧中每个像素位置对应的像素区域,包括:确定预设尺寸的图像框;确定每个所述像素位置对应的像素区域为所述像素位置在所述图像框的中心位置时,所述图像框中包括的残差视频帧区域。
  22. 根据权利要求21所述的装置,其中,每个所述像素位置对应的像素区域通过滑动所述图像框的方式获取。
  23. 根据权利要求20至22中任意一项所述的装置,其中,所述确定每个所述像素区域对应的像素位置的特征值,包括:确定每个所述像素区域所包括的各像素的平方均值得到所述特征值。
  24. 根据权利要求20至23中任意一项所述的装置,其中,所述根据每个所述像素位置的特征值确定所述标注分布图像,包括:将每个所述特征值存入对应的所述像素位置,得到所述标注分布图像。
  25. 一种电子设备,包括:
    处理器;
    配置为存储处理器可执行指令的存储器;
    其中,所述处理器被配置为调用所述存储器存储的指令,以执行权利要求1至12中任意一项所述的方法。
  26. 一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现权利要求1至12中任意一项所述的方法。
  27. 一种计算机程序产品,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行权利要求1至12中任意一项所述的方法。
PCT/CN2022/100470 2021-12-20 2022-06-22 压缩图像修复方法及装置、电子设备、存储介质和程序产品 WO2023115859A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111565590.9A CN114240787A (zh) 2021-12-20 2021-12-20 压缩图像修复方法及装置、电子设备和存储介质
CN202111565590.9 2021-12-20

Publications (1)

Publication Number Publication Date
WO2023115859A1 true WO2023115859A1 (zh) 2023-06-29

Family

ID=80759652

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/100470 WO2023115859A1 (zh) 2021-12-20 2022-06-22 压缩图像修复方法及装置、电子设备、存储介质和程序产品

Country Status (2)

Country Link
CN (1) CN114240787A (zh)
WO (1) WO2023115859A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114240787A (zh) * 2021-12-20 2022-03-25 北京市商汤科技开发有限公司 压缩图像修复方法及装置、电子设备和存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103593834A (zh) * 2013-12-03 2014-02-19 厦门美图网科技有限公司 一种智能添加景深的图像增强方法
CN106056552A (zh) * 2016-05-31 2016-10-26 努比亚技术有限公司 一种图像处理方法及移动终端
CN110677649A (zh) * 2019-10-16 2020-01-10 腾讯科技(深圳)有限公司 基于机器学习的去伪影方法、去伪影模型训练方法及装置
CN111988621A (zh) * 2020-06-19 2020-11-24 新加坡依图有限责任公司(私有) 视频处理器训练方法、装置、视频处理装置及视频处理方法
US20210118113A1 (en) * 2019-10-16 2021-04-22 International Business Machines Corporation Image recovery
CN114240787A (zh) * 2021-12-20 2022-03-25 北京市商汤科技开发有限公司 压缩图像修复方法及装置、电子设备和存储介质

Also Published As

Publication number Publication date
CN114240787A (zh) 2022-03-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22909191

Country of ref document: EP

Kind code of ref document: A1