WO2021208580A1 - Video repair method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Video repair method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2021208580A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
block
motion vector
repaired
video
Prior art date
Application number
PCT/CN2021/076044
Other languages
English (en)
French (fr)
Inventor
张弓 (ZHANG, Gong)
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2021208580A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip

Definitions

  • This application relates to the field of computer technology, and in particular to a video repair method and apparatus, an electronic device, and a computer-readable storage medium.
  • The embodiments of the present application provide a video repair method and apparatus, an electronic device, and a computer-readable storage medium.
  • A video repair method includes:
  • obtaining a video to be repaired, determining a start frame and an end frame in the video to be repaired, and determining the number of frames to be repaired between the start frame and the end frame;
  • generating interpolation frames between the start frame and the end frame based on motion compensation, the number of interpolation frames being the same as the number of frames to be repaired;
  • fusing each frame to be repaired with the corresponding interpolation frame to obtain a target frame; and
  • generating a target video based on the start frame, the end frame, and the target frame.
  • a video repair device including:
  • An acquiring module configured to acquire a video to be repaired, determine a start frame and an end frame in the video to be repaired, and determine the number of frames to be repaired between the start frame and the end frame;
  • An insertion module configured to generate interpolation frames between the start frame and the end frame based on motion compensation, and the number of the interpolation frames is the same as the number of the frames to be repaired;
  • the fusion module is used to perform fusion processing on the frame to be repaired and the corresponding interpolation frame to obtain a target frame;
  • a generating module configured to generate a target video based on the start frame, the end frame and the target frame.
  • An electronic device includes a memory and a processor. A computer program is stored in the memory, and when the computer program is executed by the processor, the processor is caused to perform the operations of the above video repair method: obtaining the video to be repaired and determining the start frame, the end frame, and the number of frames to be repaired; generating the interpolation frames based on motion compensation; fusing each frame to be repaired with the corresponding interpolation frame to obtain a target frame; and generating a target video based on the start frame, the end frame, and the target frame.
  • A computer-readable storage medium stores a computer program that, when executed by a processor, implements the same operations.
  • With the above video repair method and apparatus, electronic device, and computer-readable storage medium, the video to be repaired is acquired, its start frame and end frame are determined, and the number of frames to be repaired between them is determined.
  • Interpolation frames, equal in number to the frames to be repaired, are generated between the start frame and the end frame based on motion compensation.
  • Each frame to be repaired is fused with its corresponding interpolation frame to obtain a target frame.
  • The start frame, the end frame, and the target frames generate the target video without manual intervention, so video repair can be completed quickly and accurately and its efficiency is improved.
  • Fig. 1 is an application environment diagram of a video repair method in an embodiment.
  • Fig. 2 is a flowchart of a video repair method in an embodiment.
  • Fig. 3 is a flowchart of generating an interpolation frame between a start frame and an end frame based on motion compensation in an embodiment.
  • Fig. 4 is a schematic diagram of generating an interpolation frame between a start frame and an end frame based on motion compensation in an embodiment.
  • Fig. 5 is a flow chart of determining the interpolation frame according to the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame in an embodiment.
  • Fig. 6 is a schematic diagram of a motion vector passing through a block in a frame to be interpolated in an embodiment.
  • Fig. 7 is a schematic diagram of replacing a frame to be repaired in a video with a target frame in an embodiment.
  • Fig. 8 is a flowchart of obtaining a video to be repaired in an embodiment.
  • Fig. 9 is a structural block diagram of a video repair device in an embodiment.
  • Fig. 10 is a schematic diagram of the internal structure of an electronic device in an embodiment.
  • Fig. 1 is a schematic diagram of an application environment of a video repair method in an embodiment.
  • the application environment includes a terminal 102 and a server 104.
  • The terminal 102 communicates with the server 104 through a network.
  • the terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
  • the server 104 may be implemented by an independent server or a server cluster composed of multiple servers.
  • the terminal 102 can obtain the video to be repaired from the server 104.
  • the terminal 102 determines the start frame and the end frame in the video to be repaired, and determines the number of frames to be repaired between the start frame and the end frame.
  • the terminal 102 generates an interpolation frame between the start frame and the end frame based on the motion compensation, and the number of the interpolation frames is the same as the number of frames to be repaired.
  • the terminal 102 performs fusion processing on the frame to be repaired and the corresponding interpolation frame to obtain the target frame.
  • the terminal 102 generates a target video based on the start frame, the end frame, and the target frame. Then, the terminal 102 can send the target video to the server 104 for storage, so that the video repair can be completed quickly and accurately, and the efficiency of the video repair can be improved.
  • FIG. 2 is a flowchart of a video repair method in an embodiment.
  • The video repair method in this embodiment is described by taking the terminal shown in Fig. 1 as an example.
  • The video repair method includes operations 202 to 208.
  • Operation 202: obtain a video to be repaired, determine a start frame and an end frame in the video to be repaired, and determine the number of frames to be repaired between the start frame and the end frame.
  • the video to be repaired refers to a video with image frames that need to be repaired.
  • the video to be repaired may be a complete video, or a video segment including a start frame, a frame to be repaired, and an end frame.
  • the start frame refers to the first image frame in the time dimension in the video to be repaired
  • the end frame refers to the last image frame in the time dimension. Both the start frame and the end frame are complete image frames that do not need to be repaired.
  • The start frame and the frame to be repaired may be image frames that are adjacent in the time dimension, or image frames that are not adjacent.
  • Likewise, the frame to be repaired and the end frame may be adjacent in the time dimension, or not adjacent.
  • Specifically, the terminal obtains the video to be repaired, determines the frames to be repaired in it, and determines their number.
  • When only two image frames in the video to be repaired need no repair, and those two frames are the first and last frames in the time dimension, the video to be repaired is a video segment.
  • In that case, the terminal uses the first frame in the time dimension of the video to be repaired as the start frame, and the last frame as the end frame.
  • When the video to be repaired is a complete video, or when more than two of its image frames need no repair, the terminal may take the adjacent frame preceding the frames to be repaired as the start frame, and the adjacent frame following them as the end frame.
  • Operation 204: interpolation frames are generated between the start frame and the end frame based on motion compensation, and the number of the interpolation frames is the same as the number of the frames to be repaired.
  • Motion compensation is a method of describing the difference between adjacent frames (adjacent here means adjacent in the encoding relationship; the two frames are not necessarily adjacent in playback order). Specifically, it describes the process of moving each small block of the previous frame to a certain position in the current frame.
  • the terminal may determine the forward motion vector of the start frame relative to the end frame, and determine the backward motion vector of the end frame relative to the start frame. Then, the terminal generates an interpolation frame between the start frame and the end frame based on the forward motion vector and the backward motion vector.
  • the number of interpolated frames is the same as the number of frames to be repaired.
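  • For illustration, a minimal sketch of this block-wise motion compensation in Python (the array layout and the clipping at frame borders are assumptions of this sketch, not details from the patent):

```python
def compensate_block(prev_frame, pred_frame, top, left, size, mv):
    """Copy one size x size block of prev_frame into pred_frame,
    displaced by its motion vector mv = (dy, dx).
    Frames are numpy arrays indexed [row, column, ...]."""
    h, w = prev_frame.shape[:2]
    dy, dx = mv
    # Clip the destination so the block stays inside the frame.
    ty = min(max(top + dy, 0), h - size)
    tx = min(max(left + dx, 0), w - size)
    pred_frame[ty:ty + size, tx:tx + size] = \
        prev_frame[top:top + size, left:left + size]
    return pred_frame
```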
  • Operation 206: the frame to be repaired and the corresponding interpolation frame are fused to obtain a target frame.
  • the target frame refers to the image frame obtained after restoration.
  • the terminal may determine the interpolation frame corresponding to the frame to be repaired after generating each interpolation frame. Fusion processing is performed on the pixels of the frame to be repaired and the pixels of the corresponding interpolation frame to obtain a fused image frame, that is, the target frame corresponding to the frame to be repaired.
  • the interpolation frame corresponding to each frame to be repaired is determined. Fusion processing is performed on each frame to be repaired and the corresponding interpolation frame to obtain a target frame corresponding to each frame to be repaired.
  • Operation 208: a target video is generated based on the start frame, the end frame, and the target frame.
  • the target video is a video that completes image frame restoration.
  • the terminal replaces the corresponding frame to be repaired in the video to be repaired with the target frame, thereby obtaining the target video.
  • the terminal can generate the target video according to the start frame, the end frame, and the target frame, and the corresponding time.
  • the target video and the video to be repaired have the same frame rate and resolution.
  • The frame rate is the frequency (rate) at which consecutive bitmap images, called frames, appear on a display.
  • In this embodiment, the video to be repaired is acquired, its start frame and end frame are determined, and the number of frames to be repaired between them is determined; interpolation frames, equal in number to the frames to be repaired, are generated between the start frame and the end frame based on motion compensation.
  • Each frame to be repaired is fused with its corresponding interpolation frame to obtain a target frame, and the target video is generated from the start frame, the end frame, and the target frames. Without manual intervention, video repair is completed quickly and accurately, improving both its efficiency and its accuracy.
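  • As a hedged overview, operations 202 to 208 can be sketched as the following pipeline; interpolate_frame and fuse_frames stand for the steps detailed in the rest of this description and are hypothetical names, not functions of any library:

```python
def repair_video(frames, damaged, interpolate_frame, fuse_frames):
    """Sketch of operations 202 to 208.
    frames:  list of image arrays in playback order.
    damaged: sorted, consecutive indices of the frames to be repaired.
    interpolate_frame(start, end, phase) and fuse_frames(repair, interp)
    are hypothetical stand-ins for the steps described below."""
    start = frames[damaged[0] - 1]   # adjacent intact frame before
    end = frames[damaged[-1] + 1]    # adjacent intact frame after
    n = len(damaged)
    repaired = list(frames)
    for i, idx in enumerate(damaged):
        phase = (i + 1) / (n + 1)    # equally spaced time phases
        interp = interpolate_frame(start, end, phase)
        repaired[idx] = fuse_frames(frames[idx], interp)
    return repaired
```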
  • generating an interpolation frame between the start frame and the end frame based on motion compensation includes:
  • the start frame and the end frame are respectively divided into blocks.
  • The terminal may divide the start frame and the end frame into blocks separately, and the partitioning of the start frame may be the same as that of the end frame.
  • For example, both frames are divided into a three-by-three grid of nine blocks, so the numbers of blocks are equal.
  • Different partitioning schemes may also be used for the two frames, in which case the numbers of blocks may differ.
  • a matching block corresponding to each block in the starting frame is determined in the ending frame, and a forward motion vector of each block in the starting frame relative to the matching block in the ending frame is determined.
  • A motion vector refers to the relative displacement of an object between the current frame image and the previous frame image, or between the current frame image and the next frame image.
  • the forward motion vector refers to the displacement of an object in the starting frame relative to the same object in the ending frame.
  • the backward motion vector refers to the displacement of an object in the ending frame relative to the same object in the starting frame.
  • Specifically, the terminal traverses the blocks of the start frame.
  • For each block in the start frame, the terminal searches the end frame for a matching block, thereby obtaining the forward motion vector of each block in the start frame relative to its matching block in the end frame.
  • a matching block corresponding to each block in the end frame is determined in the start frame, and a backward motion vector of each block in the end frame relative to the matching block in the start frame is determined.
  • Similarly, the terminal traverses the blocks of the end frame.
  • For each block in the end frame, the terminal searches the start frame for a matching block, thereby obtaining the backward motion vector of each block in the end frame relative to its matching block in the start frame.
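  • The patent does not fix a particular matching criterion; one common choice, assumed in the sketch below, is an exhaustive search that minimizes the sum of absolute differences (SAD) within a search window:

```python
import numpy as np

def match_block(block, ref, top, left, search=16):
    """Find the displacement (dy, dx) whose block in frame `ref`
    best matches `block`, located at (top, left) in its own frame,
    by exhaustive SAD search within +/- `search` pixels."""
    size = block.shape[0]
    h, w = ref.shape[:2]
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > h or x + size > w:
                continue  # candidate block would leave the frame
            cand = ref[y:y + size, x:x + size]
            sad = np.abs(block.astype(np.int32) -
                         cand.astype(np.int32)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    # Forward MV when ref is the end frame; backward MV when ref
    # is the start frame.
    return best_mv
```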
  • an interpolated frame is generated by the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
  • The terminal may determine the time interval between the start frame and the end frame and divide it into a preset number of equal parts, each part serving as a time phase. The terminal may then generate an interpolation frame at each phase from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
  • In one embodiment, the terminal divides the time interval between the start frame and the end frame into as many equal parts as there are frames to be repaired, each part serving as a time phase and each time phase corresponding to one frame to be repaired. The terminal then generates an interpolation frame at each phase in the same way.
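  • For example, with the start frame at phase 0 and the end frame at phase 1, the equally spaced phases described above can be computed as follows:

```python
def time_phases(n_repair):
    """Time phases of the n_repair interpolation frames, with the
    start frame at phase 0.0 and the end frame at phase 1.0."""
    return [(i + 1) / (n_repair + 1) for i in range(n_repair)]

print(time_phases(3))  # [0.25, 0.5, 0.75]
```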
  • In this embodiment, the matching block of each block of the start frame is determined in the end frame, and the forward motion vector of each block relative to its matching block is determined.
  • This forward motion vector gives the displacement of each object in the start frame relative to the same object in the end frame.
  • The matching block of each block of the end frame is determined in the start frame, so the displacement of each object in the end frame relative to the same object in the start frame can be determined as well.
  • The position of the same object in the interpolation frame can therefore be determined accurately, so that an accurate interpolation frame is obtained between the start frame and the end frame.
  • the terminal traverses block by block in the initial frame. For each block in the starting frame, the terminal searches for the corresponding matching block in the ending frame, thereby obtaining the forward motion vector of each block in the starting frame relative to the matching block in the ending frame. Similarly, the terminal traverses block by block in the end frame. For each block in the end frame, the terminal searches for a corresponding matching block in the start frame, so as to obtain the backward motion vector of each block in the end frame relative to the matching block in the start frame.
  • The terminal generates interpolation frames between the start frame and the end frame from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame, as shown in Fig. 4(b).
  • the interpolation frame is generated by the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame, including:
  • the frame to be interpolated is determined by the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
  • the terminal generates the frame to be interpolated between the start frame and the end frame according to the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
  • the pixels in the frame to be interpolated are unknown.
  • For each block in the frame to be interpolated, the terminal determines the forward motion vectors passing through the block and their number.
  • The terminal likewise determines the backward motion vectors passing through the block and their number.
  • the corresponding areas of the forward motion vector and the backward motion vector in the block are determined.
  • the terminal may determine the forward motion vector passing through the block, and respectively determine the area of each forward motion vector passing through the block.
  • the area passing through the block is the corresponding area of each forward motion vector in the block.
  • the terminal may determine the backward motion vector passing through the block, and respectively determine the area of each backward motion vector passing through the block.
  • the area passing through the block is the area corresponding to each backward motion vector in the block.
  • For example, if block A in the frame to be interpolated is crossed by the backward motion vectors corresponding to blocks b, c, and d in the end frame, the terminal calculates, within block A, the area covered by the backward motion vector of block b, the area covered by the backward motion vector of block c, and the area covered by the backward motion vector of block d.
  • Fig. 6 is a schematic diagram of motion vectors passing through a block in the frame to be interpolated.
  • the terminal can calculate the area of each motion vector passing through the block.
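  • The per-vector areas can be computed as rectangle overlaps between the block to be interpolated and each source block displaced along its motion vector; scaling the displacement by the time phase of the interpolated frame is an assumption of this sketch:

```python
def overlap_area(rect_a, rect_b):
    """Overlap area of two axis-aligned rectangles given as
    (top, left, height, width)."""
    ta, la, ha, wa = rect_a
    tb, lb, hb, wb = rect_b
    dy = min(ta + ha, tb + hb) - max(ta, tb)
    dx = min(la + wa, lb + wb) - max(la, lb)
    return max(dy, 0) * max(dx, 0)

def projected_block(src_block, mv, phase):
    """Source block displaced along its motion vector into the frame
    to be interpolated (assumption: displacement scaled by phase)."""
    top, left, h, w = src_block
    dy, dx = mv
    return (round(top + dy * phase), round(left + dx * phase), h, w)
```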
  • Operation 508: determine the mapped motion vector of the block according to the numbers of forward and backward motion vectors and their corresponding areas.
  • the mapped motion vector refers to the motion vector of the block relative to the start frame and the end frame.
  • Specifically, for each forward motion vector, the terminal multiplies the vector by its corresponding area in the block to obtain a product, and does the same for each backward motion vector.
  • The terminal also sums the areas of all the forward and backward motion vectors in the block.
  • The terminal then sums the products and computes the ratio of the summed products to the summed areas; this ratio is the mapped motion vector of the block.
  • the terminal can obtain the mapped motion vector corresponding to each block in the frame to be interpolated.
  • In this embodiment, the terminal may use the following formula (1) to calculate the mapped motion vector of a block:

    $$MV = \frac{\sum_{n=1}^{k} W_n \, MV_n}{\sum_{n=1}^{k} W_n} \tag{1}$$

    where $MV$ is the mapped motion vector of the block, $k$ is the total number of forward and backward motion vectors passing through the block, $MV_n$ is the $n$-th motion vector, and $W_n$ is the area over which the $n$-th motion vector passes through the block.
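  • A sketch of formula (1), assuming motion vectors represented as 2-vectors and the areas W_n computed as above:

```python
import numpy as np

def mapped_motion_vector(mvs, areas):
    """Formula (1): area-weighted average of all the forward and
    backward motion vectors crossing a block.
    mvs:   list of (dy, dx) motion vectors MV_n
    areas: list of overlap areas W_n, one per vector"""
    mvs = np.asarray(mvs, dtype=np.float64)
    areas = np.asarray(areas, dtype=np.float64)
    return (areas[:, None] * mvs).sum(axis=0) / areas.sum()
```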
  • Operation 510: determine each pixel value in each block based on the mapped motion vector, and generate the interpolation frame from the pixel values of the blocks in the frame to be interpolated.
  • After determining the mapped motion vector of each block in the frame to be interpolated, the terminal determines, for each block, the matching blocks in the start frame and the end frame based on the block's mapped motion vector, obtaining two matching blocks. The terminal then obtains the weights of the two matching blocks and the pixel value of each pixel in each matching block, and calculates the pixel value of each pixel in the block from those weights and pixel values. Processing every block in the same way yields every pixel value in the frame to be interpolated, and the terminal takes the frame to be interpolated, with all pixel values determined, as the interpolation frame.
  • In this embodiment, for each block in the frame to be interpolated, the numbers of forward and backward motion vectors passing through the block and their corresponding areas in the block are determined.
  • From these numbers and areas, the mapped motion vector of the block is determined, making explicit the motion of the block relative to the start frame and the end frame.
  • The pixel values in each block are then determined from the mapped motion vector; taken together, the pixel values of all the blocks are the pixels of the interpolation frame, so the interpolation frame is generated accurately.
  • determining each pixel value in the block based on the mapped motion vector includes:
  • the matching blocks corresponding to the sub-blocks in the start frame and the end frame are determined; the matching blocks corresponding to the sub-blocks in the start frame and the end frame are weighted and interpolated to obtain the pixel values in the sub-blocks.
  • After the terminal determines the mapped motion vector of each block in the frame to be interpolated, it uses that mapped motion vector to determine the block's matching block in the start frame and its matching block in the end frame, thereby obtaining two matching blocks.
  • the terminal can obtain the respective weights corresponding to the two matching blocks, and the pixel value of each pixel in each matching block.
  • the terminal calculates the pixel value corresponding to each pixel in the sub-block according to the weights corresponding to the two matching blocks and the pixel value of each pixel in each matching block. According to the same processing method, each pixel value in each block in the frame to be interpolated is obtained.
  • the terminal may calculate each pixel value in each block in the frame to be interpolated through inverse distance weight interpolation.
  • Specifically, the terminal can calculate the weight of the matching block in the start frame and the weight of the matching block in the end frame by formula (2):

    $$w_{ij} = \frac{d_{ij}^{-h}}{\sum_{k} d_{ik}^{-h}} \tag{2}$$

    where $w_{ij}$ is the weight between the $i$-th unknown point and the $j$-th known point, $d_{ij}$ is the distance between the $i$-th unknown point and the $j$-th known point, $d_{ik}$ is the distance between the $i$-th unknown point and the $k$-th known point, and $h$ is a power exponent.
  • An unknown point is a pixel of a block in the frame to be interpolated, and a known point is a pixel of the matching block in the start frame or of the matching block in the end frame.
  • In matrix form, $W = [w_{ij}]_{m \times n}$ is the weight matrix, $T = [s_{ij}]_{m \times n}$ is the matrix of known points, and $P = [v_{ij}]_{m \times n}$ is the matrix of unknown points.
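  • A sketch of formula (2), assuming Euclidean pixel distances; reading the matrices above as the product P = W · T is also an assumption consistent with those definitions:

```python
import numpy as np

def idw_weights(unknown_xy, known_xy, h=2.0, eps=1e-9):
    """Formula (2): normalized inverse-distance weights from each
    unknown point to every known point.
    unknown_xy: (m, 2) coordinates of unknown pixels
    known_xy:   (n, 2) coordinates of known pixels
    h:          power exponent"""
    unknown_xy = np.asarray(unknown_xy, dtype=np.float64)
    known_xy = np.asarray(known_xy, dtype=np.float64)
    d = np.linalg.norm(unknown_xy[:, None, :] - known_xy[None, :, :], axis=2)
    inv = 1.0 / np.maximum(d, eps) ** h
    return inv / inv.sum(axis=1, keepdims=True)   # W, shape (m, n)

def idw_interpolate(weights, known_values):
    """Unknown pixel values as weighted sums of known values
    (assumed matrix reading: P = W @ T)."""
    return weights @ np.asarray(known_values, dtype=np.float64)
```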
  • In one embodiment, the terminal calculates each pixel value in a block once from the matching block in the start frame and once from the matching block in the end frame, so that each pixel in the block of the frame to be interpolated has two candidate pixel values.
  • The terminal then takes the average of the two candidate values of each pixel as its target pixel value, thereby obtaining every target pixel value in the block of the frame to be interpolated.
  • In this embodiment, the matching blocks of a block in the start frame and the end frame are determined from the mapped motion vector, and weighted interpolation over those matching blocks yields each pixel value in the block quickly and accurately.
  • In one embodiment, before the numbers of forward and backward motion vectors passing through the block are determined, the method further includes: when the block is crossed by no forward or backward motion vector, determining the distances between the block and the motion vectors passing through other blocks, and using the motion vector with the smallest distance as the motion vector of the block.
  • The motion vector is a forward motion vector or a backward motion vector.
  • Specifically, when a block is not crossed by any motion vector, the terminal determines the Euclidean distance between the block and each forward motion vector passing through other blocks.
  • The terminal likewise determines the Euclidean distance between the block and each backward motion vector passing through other blocks.
  • The Euclidean distance (Euclidean metric) is the true distance between two points in m-dimensional space, or the natural length of a vector (that is, the distance from the point to the origin).
  • the Euclidean distance in two-dimensional and three-dimensional space is the actual distance between two points.
  • The terminal determines the motion vector with the shortest Euclidean distance and uses it as the motion vector of the block: if it is a forward motion vector, it serves as the block's forward motion vector; if it is a backward motion vector, it serves as the block's backward motion vector.
  • The features of adjacent blocks are the closest, and the continuity of those features is the strongest.
  • In this embodiment, for a block crossed by no forward or backward motion vector, the distances between the block and the motion vectors passing through other blocks are determined, and the motion vector with the smallest distance is used as the block's motion vector, so that even a block not crossed by any motion vector obtains the motion vector with the shortest Euclidean distance as its own.
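  • A sketch of this fallback, assuming the distance is measured between block centers:

```python
import numpy as np

def nearest_motion_vector(block_center, crossed):
    """Fallback for a block crossed by no motion vector: borrow the
    motion vector whose host block center is nearest in Euclidean
    distance.
    crossed: list of (center_xy, mv) pairs taken from blocks that are
             crossed by a forward or backward motion vector."""
    centers = np.array([c for c, _ in crossed], dtype=np.float64)
    dists = np.linalg.norm(centers - np.asarray(block_center), axis=1)
    return crossed[int(dists.argmin())][1]
```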
  • In another embodiment, before the numbers of forward and backward motion vectors passing through the block are determined, the method further includes: when the block is crossed by no forward or backward motion vector, determining the distances between the other blocks in the frame to be interpolated and this block, and using the motion vector of the block with the smallest distance as this block's motion vector; the motion vector is a forward motion vector or a backward motion vector.
  • Specifically, the terminal may calculate the Euclidean distance between each other block in the frame to be interpolated and this block, compare the distances, determine the motion vector of the block with the smallest Euclidean distance, and use it as the motion vector of the block that is crossed by no forward or backward motion vector.
  • fusing the frame to be repaired and the corresponding interpolation frame to obtain the target frame includes:
  • the time phase of the frame to be repaired is obtained, and the interpolation frame that is the same as the time phase of the frame to be repaired is determined; the frame to be repaired and the interpolation frame with the same time phase are fused to obtain the target frame corresponding to the frame to be repaired.
  • Specifically, the terminal may divide the time interval between the start frame and the end frame into as many equal parts as there are frames to be repaired, each part serving as a time phase and each time phase corresponding to one frame to be repaired. The terminal may then generate an interpolation frame at each phase from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
  • the terminal may obtain the time phase corresponding to the frame to be repaired, and obtain the time phase corresponding to the interpolation frame.
  • the terminal compares the time phase corresponding to each frame to be repaired with the time phase corresponding to each interpolation frame.
  • the terminal determines the frame to be repaired and the interpolation frame with the same time phase, and performs pixel fusion processing on the frame to be repaired and the interpolation frame with the same time phase to obtain a target frame corresponding to the frame to be repaired.
  • an interpolation frame that is the same as the time phase of the frame to be repaired is determined, so that the interpolation frame corresponding to the frame to be repaired can be accurately determined.
  • the to-be-repaired frame and the interpolation frame with the same time phase are fused to recover the damaged area in the to-be-repaired frame, so as to obtain the target frame corresponding to the to-be-repaired frame.
  • the fusion processing of the frame to be repaired and the interpolation frame with the same time phase to obtain the target frame corresponding to the frame to be repaired includes:
  • Specifically, the terminal determines the pixels of the frame to be repaired and the pixels of the corresponding interpolation frame, finds the matching pixels between the two frames, and determines their pixel values. Each pair of matched pixel values is weighted and fused to obtain a target pixel value. Processing all the pixels of the frame to be repaired and of the interpolation frame with the same time phase in this way yields the target pixel values, and the pixels with their target pixel values form the target frame image.
  • In this embodiment, the pixels of the frame to be repaired and the pixels of the interpolation frame are weighted and fused, so that the damaged area in the frame to be repaired is restored accurately and the repaired target frame is obtained.
  • Specifically, the terminal may perform weighted fusion of the pixels of the frame to be repaired and the pixels of the interpolation frame through the following formula (4):

    $$F(i,j) = w \, \hat{F}(i,j) + (1 - w) \, \tilde{F}(i,j), \quad 1 \le i \le M, \ 1 \le j \le N \tag{4}$$

    where $F$ is the target frame, $\hat{F}$ is the frame to be repaired, $\tilde{F}$ is the interpolation frame with the same time phase, $w$ is the weight corresponding to each pixel with $w < 1$, and $M$ and $N$ are the resolution of the image frame.
  • the weight w can be set in advance, or can be set adaptively according to the reliability of the motion vector of each interpolation block of the interpolation frame.
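  • A sketch of formula (4); treating w as the weight of the frame to be repaired, so that w = 0 keeps only the interpolation frame, is the reading assumed here:

```python
import numpy as np

def fuse_frames(repair_frame, interp_frame, w):
    """Formula (4): weighted fusion of the frame to be repaired and
    the interpolation frame with the same time phase.
    w: scalar weight in [0, 1), or an array broadcastable to the
       frame shape; w = 0 keeps only the interpolation frame."""
    w = np.asarray(w, dtype=np.float64)
    fused = w * repair_frame + (1.0 - w) * interp_frame
    return fused.astype(repair_frame.dtype)
```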
  • In one embodiment, obtaining the video to be repaired includes: obtaining the original video and determining the frames to be repaired in it.
  • When the frame to be repaired is a single frame, the adjacent frame before the single frame to be repaired is used as the start frame, and the adjacent frame after the single frame to be repaired is used as the end frame.
  • The start frame, the end frame, and the single frame to be repaired are taken as the video to be repaired.
  • the original video refers to a video in which image frames are continuous in the time dimension, and includes the image frame video that needs to be repaired.
  • The terminal may obtain the original video locally, from a network, or from a third-party device.
  • the terminal can determine the image frames that need to be repaired in the original video, that is, the to-be-repaired frames, and determine the number of the to-be-repaired frames.
  • the terminal determines the adjacent previous frame of the to-be-repaired frame in the original video, that is, determines the adjacent previous frame of the to-be-repaired frame in the time dimension.
  • the adjacent previous frame is used as the starting frame of the video to be repaired.
  • the terminal determines the next frame next to the frame to be repaired in the original video, that is, determines the next frame next to the frame to be repaired in the time dimension.
  • the next frame is regarded as the end frame of the video to be repaired.
  • the terminal composes the start frame, the frame to be repaired, and the end frame into a video segment to obtain the video to be repaired.
  • the first frame in the original video is a damaged frame to be repaired
  • the terminal determines the adjacent previous frame of the first frame as the starting frame, that is, the 0th frame.
  • the 0th frame and the second frame are undamaged image frames.
  • the terminal forms the 0th frame, the 1st frame and the 2nd frame into a video to be repaired.
  • A repaired target frame, referred to as the third frame, is obtained; the third frame is the restored first frame.
  • the terminal replaces the first frame in the video to be repaired with the third frame, and forms the target video according to the zeroth frame, the third frame, and the second frame. Further, the terminal may use the target video to replace the 0th frame, the 1st frame and the 2nd frame in the original video. Alternatively, the terminal can replace the first frame in the original video with the third frame to obtain a repaired video.
  • In this embodiment, the frames to be repaired in the original video are determined.
  • When the frame to be repaired is a single frame, the adjacent frame before it is used as the start frame and the adjacent frame after it as the end frame, so the damaged single image frame can be extracted and combined with the intact start and end frames into the video to be repaired. The damaged single frame is then repaired quickly, without any processing of the other undamaged image frames, so the video repair is completed quickly.
  • In one embodiment, when the frames to be repaired are at least two consecutive frames, the method further includes:
  • the terminal may further determine the first frame and the last frame in the time dimension of the at least two consecutive frames to be repaired.
  • the first frame refers to the first image frame in the time dimension of the at least two consecutive frames to be repaired
  • the last frame refers to the last image frame in the time dimension of the at least two consecutive frames to be repaired.
  • Operation 804: use the frame adjacent before the first frame in the original video as the start frame, and the frame adjacent after the last frame in the original video as the end frame; neither the start frame nor the end frame is a frame to be repaired.
  • the terminal may determine the adjacent previous frame to the first frame of the at least two consecutive frames to be repaired, that is, determine the adjacent previous frame of the first frame in the time dimension.
  • the adjacent previous frame is used as the starting frame of the video to be repaired.
  • The terminal determines the frame adjacent after the last frame of the at least two consecutive frames to be repaired, that is, the adjacent next frame of the last frame in the time dimension.
  • This next frame is used as the end frame of the video to be repaired.
  • the start frame, the end frame, and at least two consecutive frames to be repaired are used as the video to be repaired.
  • the terminal composes a video segment from the start frame, at least two consecutive frames to be repaired, and the end frame according to the time dimension, to obtain the video to be repaired.
  • the weight of the frame to be repaired may be set to 0.
  • In this embodiment, the start frame and the end frame are frames of the original video that need no repair, so the intact, lossless image frames whose features are closest to those of the damaged image frames are obtained.
  • The start frame and the end frame are intact image frames and are the nearest frames to the consecutively damaged frames.
  • Taking the start frame, the end frame, and the at least two consecutive frames to be repaired as the video to be repaired, and repairing the frames based on the start frame and the end frame, makes the repaired image frames more accurate.
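  • Both the single-frame case and the consecutive-frames case reduce to the same clip extraction; a sketch with illustrative names:

```python
def extract_repair_clip(frames, damaged_run):
    """Build the video to be repaired from the original video.
    frames:      list of images in playback order.
    damaged_run: consecutive indices of damaged frames (length 1
                 in the single-frame case).
    Returns (start_frame, frames_to_repair, end_frame)."""
    first, last = damaged_run[0], damaged_run[-1]
    start = frames[first - 1]   # adjacent intact frame before the run
    end = frames[last + 1]      # adjacent intact frame after the run
    return start, [frames[i] for i in damaged_run], end
```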
  • a video repair method including:
  • the terminal obtains the original video and determines the frame to be repaired in the original video.
  • When the frame to be repaired is a single frame, the terminal uses the adjacent frame before the single frame to be repaired as the start frame, and the adjacent frame after it as the end frame.
  • The start frame, the end frame, and the single frame to be repaired are taken as the video to be repaired.
  • When the frames to be repaired are at least two consecutive frames, the terminal determines the first and last of them, uses the frame adjacent before the first frame in the original video as the start frame and the frame adjacent after the last frame as the end frame (the start frame and the end frame being non-repair frames of the original video), and takes the start frame, the end frame, and the at least two consecutive frames to be repaired as the video to be repaired.
  • the terminal determines the start frame and the end frame in the video to be repaired, and determines the number of frames to be repaired between the start frame and the end frame.
  • The terminal divides the start frame and the end frame into blocks, determines in the end frame the matching block of each block of the start frame, and determines the forward motion vector of each block of the start frame relative to its matching block in the end frame.
  • the terminal determines the matching block corresponding to each block in the end frame in the start frame, and determines the backward motion vector of each block in the end frame with respect to the matching block in the start frame.
  • the terminal determines the frame to be interpolated through the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
  • For each block in the frame to be interpolated, the terminal determines the numbers of forward and backward motion vectors that pass through the block.
  • the terminal determines the corresponding areas of the forward motion vector and the backward motion vector in the block.
  • the terminal determines the mapped motion vector of the block according to the number of the forward motion vector and the backward motion vector and the corresponding area.
  • When a block is crossed by no forward or backward motion vector, the terminal determines the distances between the block and the forward and backward motion vectors passing through other blocks, and uses the motion vector with the smallest distance as the block's motion vector; the motion vector is a forward motion vector or a backward motion vector.
  • The terminal then determines the numbers of forward and backward motion vectors passing through the block and their corresponding areas in the block, and determines the block's mapped motion vector from those numbers and areas.
  • the terminal determines matching blocks corresponding to the sub-blocks in the start frame and the end frame based on the mapped motion vector.
  • The terminal performs weighted interpolation over the matching blocks of each block in the start frame and the end frame to obtain the pixel values in the block.
  • the terminal generates an interpolation frame according to the pixel value in each block in the frame to be interpolated, and the number of the interpolation frame is the same as the number of the frame to be repaired.
  • the terminal obtains the time phase of the frame to be repaired, and determines the interpolation frame that is the same as the time phase of the frame to be repaired; determines the pixels of the frame to be repaired and the pixels of the interpolation frame that have the same time phase.
  • the terminal performs weighted fusion processing on the pixels of the frame to be repaired and the pixels of the interpolation frame to obtain the target frame corresponding to the frame to be repaired.
  • the terminal generates a target video based on the start frame, the end frame, and the target frame.
  • With the video repair method of this embodiment, the start frame and the end frame can be determined, so that the start frame, the frames to be repaired, and the end frame are extracted to form the video to be repaired.
  • By determining in the end frame the matching block of each block of the start frame and determining each block's forward motion vector relative to its matching block, the displacement of each object in the start frame relative to the same object in the end frame can be determined.
  • For each block in the frame to be interpolated, the numbers of forward and backward motion vectors passing through the block and their corresponding areas are determined; from these, the mapped motion vector of the block is determined, making explicit the block's motion relative to the start frame and the end frame, so that weighted interpolation over the matching blocks in the start frame and the end frame quickly and accurately yields each pixel value in the block, and thus the interpolation frame.
  • the to-be-repaired frame and the interpolation frame with the same time phase are fused to recover the damaged area in the to-be-repaired frame, and the target frame corresponding to the to-be-repaired frame can be obtained, so that the repaired target video can be accurately and quickly obtained.
  • Using the video restoration method of this embodiment does not require manual intervention, which can save time for video restoration and improve the efficiency and accuracy of video restoration.
  • Fig. 9 is a structural block diagram of a video repair device according to an embodiment. As shown in FIG. 9, the video repair device includes: an acquisition module 902, an insertion module 904, a fusion module 906, and a generation module 908. in,
  • the obtaining module 902 is configured to obtain the video to be repaired, determine the start frame and the end frame in the video to be repaired, and determine the number of frames to be repaired between the start frame and the end frame.
  • the inserting module 904 is configured to generate interpolation frames between the start frame and the end frame based on motion compensation, and the number of the interpolation frames is the same as the number of frames to be repaired.
  • the fusion module 906 is configured to perform fusion processing on the frame to be repaired and the corresponding interpolation frame to obtain the target frame.
  • the generating module 908 is configured to generate a target video based on the start frame, the end frame, and the target frame.
  • the video to be repaired is acquired, the start frame and the end frame of the video to be repaired are determined, and the number of frames to be repaired between the start frame and the end frame is determined, based on motion compensation in the start frame and the end frame Interpolation frames are inserted between, and the number of interpolation frames is the same as the number of frames to be repaired.
  • the frame to be repaired and the corresponding interpolation frame are fused to obtain the target frame, and the target video is generated based on the start frame, the end frame and the target frame, Without manual intervention, the video repair can be completed quickly and accurately, and the efficiency of video repair can be improved.
  • the inserting module 904 is further configured to: block the start frame and the end frame respectively; determine the matching block corresponding to each block in the start frame in the end frame, and determine The forward motion vector of each block in the starting frame relative to the matching block in the ending frame; the matching block corresponding to each block in the ending frame is determined in the starting frame, and each block in the ending frame is determined The backward motion vector of each block relative to the matching block in the starting frame; the interpolation frame is generated by the forward motion vector of each block in the starting frame and the backward motion vector of each block in the ending frame .
  • In this embodiment, the matching block of each block of the start frame is determined in the end frame, and the forward motion vector of each block relative to its matching block is determined, giving the displacement of each object in the start frame relative to the same object in the end frame.
  • The matching block of each block of the end frame is determined in the start frame, so the displacement of each object in the end frame relative to the same object in the start frame can be determined as well.
  • The position of the same object in the interpolation frame can therefore be determined accurately, so that an accurate interpolation frame is obtained between the start frame and the end frame.
  • In one embodiment, the insertion module 904 is further used to: determine the frame to be interpolated from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame; for each block in the frame to be interpolated, determine the numbers of forward and backward motion vectors passing through the block; determine the corresponding areas of the forward and backward motion vectors in the block; determine the mapped motion vector of the block from those numbers and areas; and determine the pixel values in each block based on the mapped motion vector, generating the interpolation frame from the pixel values of all the blocks in the frame to be interpolated.
  • In this embodiment, for each block in the frame to be interpolated, the numbers of forward and backward motion vectors passing through the block and their corresponding areas in the block are determined.
  • From these numbers and areas, the mapped motion vector of the block is determined, making explicit the motion of the block relative to the start frame and the end frame.
  • The pixel values in each block are then determined from the mapped motion vector; taken together, they are the pixels of the interpolation frame, so the interpolation frame is generated accurately.
  • In one embodiment, the insertion module 904 is further configured to: determine the matching blocks of each block in the start frame and the end frame based on the mapped motion vector; and perform weighted interpolation over those matching blocks to obtain each pixel value in the block.
  • In this embodiment, the matching blocks of a block in the start frame and the end frame are determined from the mapped motion vector, and weighted interpolation over those matching blocks yields each pixel value in the block quickly and accurately.
  • In one embodiment, the insertion module 904 is further used to: when a block is crossed by no forward or backward motion vector, determine the distances between the block and the forward and backward motion vectors passing through other blocks; and use the motion vector with the smallest distance as the block's motion vector, the motion vector being a forward motion vector or a backward motion vector.
  • The features of adjacent blocks are the closest, and the continuity of those features is the strongest.
  • In this embodiment, for a block crossed by no forward or backward motion vector, the distances between the block and the motion vectors passing through other blocks are determined, and the motion vector with the smallest distance is used as the block's motion vector, so that even a block not crossed by any motion vector obtains the motion vector with the shortest Euclidean distance as its own.
  • In one embodiment, the fusion module 906 is further configured to: obtain the time phase of the frame to be repaired, determine the interpolation frame with the same time phase as the frame to be repaired, and fuse the frame to be repaired with that interpolation frame to obtain the target frame corresponding to the frame to be repaired.
  • an interpolation frame that is the same as the time phase of the frame to be repaired is determined, so that the interpolation frame corresponding to the frame to be repaired can be accurately determined.
  • the to-be-repaired frame and the interpolation frame with the same time phase are fused to recover the damaged area in the to-be-repaired frame, so as to obtain the target frame corresponding to the to-be-repaired frame.
  • In one embodiment, the fusion module 906 is further configured to: determine the pixels of the frame to be repaired and the pixels of the interpolation frame with the same time phase, and perform weighted fusion on them to obtain the target frame corresponding to the frame to be repaired.
  • In this embodiment, the pixels of the frame to be repaired and the pixels of the interpolation frame are weighted and fused, so that the damaged area in the frame to be repaired is restored accurately and the repaired target frame is obtained.
  • In one embodiment, the acquisition module 902 is further configured to: acquire the original video and determine the frames to be repaired in it; when the frame to be repaired is a single frame, use the adjacent frame before the single frame to be repaired as the start frame and the adjacent frame after it as the end frame; and take the start frame, the end frame, and the single frame to be repaired as the video to be repaired.
  • the frame to be repaired in the original video is determined.
  • When the frame to be repaired is a single frame, the adjacent frame before it is used as the start frame and the adjacent frame after it as the end frame, so the damaged single image frame can be extracted and combined with the intact start and end frames into the video to be repaired. The damaged single frame is then repaired quickly, without any processing of the other undamaged image frames, so the video repair is completed quickly.
  • In one embodiment, the acquisition module 902 is further configured to: when the frames to be repaired are at least two consecutive frames, determine the first and last of them; use the frame adjacent before the first frame in the original video as the start frame and the frame adjacent after the last frame as the end frame, the start frame and the end frame being non-repair frames of the original video; and take the start frame, the end frame, and the at least two consecutive frames to be repaired as the video to be repaired.
  • In this embodiment, the start frame and the end frame are frames of the original video that need no repair, so the intact, lossless image frames whose features are closest to those of the damaged image frames are obtained.
  • The start frame and the end frame are intact image frames and are the nearest frames to the consecutively damaged frames.
  • Taking the start frame, the end frame, and the at least two consecutive frames to be repaired as the video to be repaired, and repairing the frames based on the start frame and the end frame, makes the repaired image frames more accurate.
  • the division of the modules in the above-mentioned video restoration device is only for illustration. In other embodiments, the video restoration device can be divided into different modules as needed to complete all or part of the functions of the above-mentioned video restoration device.
  • Each module in the above video repair device can be implemented in whole or in part by software, by hardware, or by a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • Fig. 10 is a schematic diagram of the internal structure of an electronic device in an embodiment.
  • the electronic device includes a processor and a memory connected through a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the processor is used to obtain the video to be repaired, determine the start frame and the end frame in the video to be repaired, and determine the number of frames to be repaired between the start frame and the end frame; Interpolation frames are generated between the start frame and the end frame, and the number of interpolated frames is the same as the number of frames to be repaired; the frame to be repaired and the corresponding interpolation frame are merged to obtain the target frame; based on the start frame, end frame and target frame Generate the target video.
  • the memory may include a non-volatile storage medium and internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the memory also stores the to-be-repaired video, the data generated during the repairing process, and the target video.
  • the computer program may be executed by the processor to implement a video repair method provided in the following embodiments.
  • the internal memory provides a cached operating environment for the operating system and the computer program in the non-volatile storage medium.
  • the electronic device can be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales), a vehicle-mounted computer, and a wearable device.
  • each module in the video repair device provided in the embodiments of the present application may be in the form of a computer program.
  • the computer program can be run on a terminal or a server.
  • the program module constituted by the computer program can be stored in the memory of the electronic device.
  • the computer program is executed by the processor, the operation of the method described in the embodiment of the present application is realized.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • a computer program product containing instructions which, when run on a computer, cause the computer to perform the video repair method.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM), which acts as external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Abstract

This application relates to a video repair method, including: acquiring a video to be repaired, determining a start frame and an end frame in the video to be repaired, and determining the number of frames to be repaired between the start frame and the end frame; generating interpolated frames between the start frame and the end frame based on motion compensation, the number of interpolated frames being the same as the number of frames to be repaired; fusing the frames to be repaired with the corresponding interpolated frames to obtain target frames; and generating a target video based on the start frame, the end frame, and the target frames.

Description

Video repair method, device, electronic equipment, and computer-readable storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application No. 2020103047289, entitled "Video repair method, device, electronic equipment, and computer-readable storage medium", filed with the China Patent Office on April 17, 2020, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to the field of computer technology, and in particular to a video repair method, device, electronic equipment, and computer-readable storage medium.
BACKGROUND
With the development of computer technology, video repair techniques have emerged, which can repair damaged video segments or restore unclear image frames in a video. In existing video repair techniques, however, the image frames that need repair generally require manual intervention and must be marked in advance, which greatly increases the time needed for video repair.
SUMMARY
The embodiments of the present application provide a video repair method, device, electronic equipment, and computer-readable storage medium.
A video repair method, including:
acquiring a video to be repaired, determining a start frame and an end frame in the video to be repaired, and determining the number of frames to be repaired between the start frame and the end frame;
generating interpolated frames between the start frame and the end frame based on motion compensation, the number of the interpolated frames being the same as the number of the frames to be repaired;
fusing the frames to be repaired with the corresponding interpolated frames to obtain target frames; and
generating a target video based on the start frame, the end frame, and the target frames.
A video repair device, including:
an acquisition module, configured to acquire a video to be repaired, determine a start frame and an end frame in the video to be repaired, and determine the number of frames to be repaired between the start frame and the end frame;
an insertion module, configured to generate interpolated frames between the start frame and the end frame based on motion compensation, the number of the interpolated frames being the same as the number of the frames to be repaired;
a fusion module, configured to fuse the frames to be repaired with the corresponding interpolated frames to obtain target frames; and
a generation module, configured to generate a target video based on the start frame, the end frame, and the target frames.
An electronic device, including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the following operations:
acquiring a video to be repaired, determining a start frame and an end frame in the video to be repaired, and determining the number of frames to be repaired between the start frame and the end frame;
generating interpolated frames between the start frame and the end frame based on motion compensation, the number of the interpolated frames being the same as the number of the frames to be repaired;
fusing the frames to be repaired with the corresponding interpolated frames to obtain target frames; and
generating a target video based on the start frame, the end frame, and the target frames.
A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the following operations:
acquiring a video to be repaired, determining a start frame and an end frame in the video to be repaired, and determining the number of frames to be repaired between the start frame and the end frame;
generating interpolated frames between the start frame and the end frame based on motion compensation, the number of the interpolated frames being the same as the number of the frames to be repaired;
fusing the frames to be repaired with the corresponding interpolated frames to obtain target frames; and
generating a target video based on the start frame, the end frame, and the target frames.
According to the video repair method, device, electronic equipment, and computer-readable storage medium described above, a video to be repaired is acquired; the start frame and end frame in the video to be repaired are determined, together with the number of frames to be repaired between them; interpolated frames, equal in number to the frames to be repaired, are generated between the start frame and the end frame based on motion compensation; the frames to be repaired are fused with the corresponding interpolated frames to obtain target frames; and a target video is generated based on the start frame, the end frame, and the target frames. Video repair can thus be completed quickly and accurately without manual intervention, improving the efficiency of video repair.
The details of one or more embodiments of the present application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the present application will become apparent from the specification, the drawings, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art may derive drawings of other embodiments from these drawings without creative effort.
Fig. 1 is a diagram of an application environment of a video repair method in an embodiment.
Fig. 2 is a flowchart of a video repair method in an embodiment.
Fig. 3 is a flowchart of generating interpolated frames between a start frame and an end frame based on motion compensation in an embodiment.
Fig. 4 is a schematic diagram of generating interpolated frames between a start frame and an end frame based on motion compensation in an embodiment.
Fig. 5 is a flowchart of determining an interpolated frame from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame in an embodiment.
Fig. 6 is a schematic diagram of motion vectors passing through a block of a frame to be interpolated in an embodiment.
Fig. 7 is a schematic diagram of replacing a frame to be repaired in a video with a target frame in an embodiment.
Fig. 8 is a flowchart of acquiring a video to be repaired in an embodiment.
Fig. 9 is a structural block diagram of a video repair device in an embodiment.
Fig. 10 is a schematic diagram of the internal structure of an electronic device in an embodiment.
DETAILED DESCRIPTION
To facilitate understanding of the present application, the present application is described more fully below with reference to the relevant drawings, in which preferred embodiments of the present application are shown. The present application may, however, be implemented in many different forms and is not limited to the embodiments described herein. Rather, these embodiments are provided so that the disclosure of the present application will be thorough and complete.
Fig. 1 is a schematic diagram of an application environment of a video repair method in an embodiment. As shown in Fig. 1, the application environment includes a terminal 102 and a server 104, which communicate with each other over a network. The terminal 102 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device; the server 104 may be implemented as an independent server or as a server cluster composed of multiple servers. In this embodiment, the terminal 102 may acquire the video to be repaired from the server 104. The terminal 102 determines the start frame and the end frame in the video to be repaired, and determines the number of frames to be repaired between the start frame and the end frame. Next, the terminal 102 generates interpolated frames between the start frame and the end frame based on motion compensation, the number of interpolated frames being the same as the number of frames to be repaired. The terminal 102 then fuses the frames to be repaired with the corresponding interpolated frames to obtain target frames, and generates a target video based on the start frame, the end frame, and the target frames. Finally, the terminal 102 may send the target video to the server 104 for storage, thereby completing video repair quickly and accurately and improving the efficiency of video repair.
Fig. 2 is a flowchart of a video repair method in an embodiment. The video repair method in this embodiment is described using the terminal in Fig. 1 as an example. As shown in Fig. 2, the video repair method includes operations 202 to 208.
Operation 202: acquire a video to be repaired, determine a start frame and an end frame in the video to be repaired, and determine the number of frames to be repaired between the start frame and the end frame.
Here, the video to be repaired is a video containing image frames that need repair. It may be a complete video, or a video segment containing the start frame, the frames to be repaired, and the end frame. The start frame is the first image frame of the video to be repaired in the time dimension, and the end frame is the last image frame in the time dimension; both are intact image frames that need no repair. The start frame and a frame to be repaired may be adjacent or non-adjacent in the time dimension; likewise, a frame to be repaired and the end frame may be adjacent or non-adjacent in the time dimension.
Specifically, the terminal acquires the video to be repaired, determines the frames to be repaired in it, and determines their number. When, apart from the frames to be repaired, the video to be repaired contains only two image frames, and these two frames are respectively the first frame and the last frame of the video in the time dimension, the video to be repaired is a video segment. The terminal then takes the first frame of the video in the time dimension as the start frame and the last frame as the end frame.
In this embodiment, when the video to be repaired is a complete video, or when it contains more than two image frames that need no repair in addition to the frames to be repaired, the terminal may take the frame immediately preceding the frames to be repaired in the time dimension as the start frame, and the frame immediately following them as the end frame.
Operation 204: generate interpolated frames between the start frame and the end frame based on motion compensation, the number of interpolated frames being the same as the number of frames to be repaired.
Here, motion compensation is a method of describing the difference between adjacent frames (adjacent in the coding relationship; the two frames are not necessarily adjacent in display order). Specifically, it describes how each small block of the previous frame moves to a certain position in the current frame.
Specifically, the terminal may determine the forward motion vectors of the start frame relative to the end frame, and the backward motion vectors of the end frame relative to the start frame. The terminal then generates interpolated frames between the start frame and the end frame based on the forward and backward motion vectors; the number of interpolated frames is the same as the number of frames to be repaired.
Operation 206: fuse the frames to be repaired with the corresponding interpolated frames to obtain target frames.
Here, a target frame is an image frame obtained after repair.
Specifically, after generating the interpolated frames, the terminal determines the interpolated frame corresponding to each frame to be repaired, and fuses the pixels of the frame to be repaired with the pixels of the corresponding interpolated frame to obtain a fused image frame, i.e., the target frame corresponding to that frame to be repaired.
When there are multiple frames to be repaired, the interpolated frame corresponding to each frame to be repaired is determined, and each frame to be repaired is fused with its corresponding interpolated frame to obtain its target frame.
Operation 208: generate a target video based on the start frame, the end frame, and the target frames.
Specifically, the target video is the video in which the image frames have been repaired. The terminal replaces the corresponding frames to be repaired in the video to be repaired with the target frames, thereby obtaining the target video.
In this embodiment, the terminal may generate the target video from the start frame, the end frame, the target frames, and their corresponding times.
In this embodiment, the target video has the same frame rate and resolution as the video to be repaired, where the frame rate is the frequency (rate) at which consecutive bitmap images, called frames, appear on a display.
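As a minimal illustration of operation 208, the following Python sketch splices the target frames back into the frame sequence; the list-of-arrays frame container and the index mapping are illustrative assumptions, not structures defined by the patent. Because the output keeps the original frame count and frame shapes, the frame rate and resolution are unchanged, as this embodiment requires.

```python
import numpy as np

def assemble_target_video(frames, repaired):
    """Replace each frame to be repaired with its target frame.

    frames:   list of H x W x 3 uint8 arrays (the video to be repaired)
    repaired: dict mapping frame index -> target frame of the same shape
    """
    return [repaired.get(i, frame) for i, frame in enumerate(frames)]

# Hypothetical usage: frame 1 of a 5-frame clip was repaired.
video = [np.zeros((4, 4, 3), np.uint8) for _ in range(5)]
target = assemble_target_video(video, {1: np.ones((4, 4, 3), np.uint8)})
assert len(target) == len(video)            # same frame count => same frame rate
```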
In this embodiment, a video to be repaired is acquired; the start frame and end frame in it are determined, together with the number of frames to be repaired between them; interpolated frames, equal in number to the frames to be repaired, are generated between the start frame and the end frame based on motion compensation; the frames to be repaired are fused with the corresponding interpolated frames to obtain target frames; and a target video is generated based on the start frame, the end frame, and the target frames. Video repair can thus be completed quickly and accurately without manual intervention, improving both the efficiency and the accuracy of video repair.
In an embodiment, as shown in Fig. 3, generating interpolated frames between the start frame and the end frame based on motion compensation includes:
Operation 302: divide the start frame and the end frame into blocks respectively.
Specifically, the terminal may divide the start frame and the end frame into blocks respectively. The start frame may be partitioned in the same way as the end frame, for example both divided into a 3x3 grid with the same number of blocks; alternatively, different partitioning schemes and different numbers of blocks may be used.
Operation 304: determine, in the end frame, the matching block corresponding to each block in the start frame, and determine the forward motion vector of each block in the start frame relative to its matching block in the end frame.
Here, a motion vector is the relative displacement of an object between the current frame and the previous frame, or between the current frame and the next frame. A forward motion vector is the displacement of an object in the start frame relative to the same object in the end frame; a backward motion vector is the displacement of an object in the end frame relative to the same object in the start frame.
Specifically, the terminal traverses the blocks of the start frame. For each block in the start frame, the terminal searches the end frame for the matching block, thereby obtaining the forward motion vector of each block in the start frame relative to its matching block in the end frame.
Operation 306: determine, in the start frame, the matching block corresponding to each block in the end frame, and determine the backward motion vector of each block in the end frame relative to its matching block in the start frame.
Similarly, the terminal traverses the blocks of the end frame. For each block in the end frame, the terminal searches the start frame for the matching block, thereby obtaining the backward motion vector of each block in the end frame relative to its matching block in the start frame.
Operation 308: generate the interpolated frames from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
Specifically, the terminal may determine the time interval between the start frame and the end frame, divide it evenly into a preset number of parts, and treat each part as a time phase. The terminal may then generate one interpolated frame in each phase from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
In this embodiment, the terminal may divide the time interval between the start frame and the end frame into as many parts as there are frames to be repaired, each part serving as a time phase, with each time phase corresponding to one frame to be repaired. The terminal may then generate one interpolated frame in each phase from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
In this embodiment, by dividing the start frame and the end frame into blocks, determining in the end frame the matching block for each block in the start frame, and determining the forward motion vector of each block in the start frame relative to its matching block in the end frame, the displacement of each object in the start frame relative to the same object in the end frame can be determined. Determining in the start frame the matching block for each block in the end frame makes it possible to determine the displacement of each object in the end frame relative to the same object in the start frame. From these two displacements, the position of the same object in each interpolated frame can be determined accurately, so that the interpolated frames between the start frame and the end frame are obtained accurately.
As shown in (a) of Fig. 4, the terminal traverses the start frame block by block. For each block in the start frame, the terminal searches the end frame for the corresponding matching block, thereby obtaining the forward motion vector of each block in the start frame relative to its matching block in the end frame. Similarly, the terminal traverses the end frame block by block; for each block in the end frame, it searches the start frame for the corresponding matching block, thereby obtaining the backward motion vector of each block in the end frame relative to its matching block in the start frame.
Next, the terminal generates interpolated frames between the start frame and the end frame from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame, as shown in (b) of Fig. 4.
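A minimal Python sketch of the bidirectional block matching of operations 302 to 306 follows. It uses an exhaustive search with a sum-of-absolute-differences cost; the block size, search range, and function names are illustrative assumptions, since the patent does not fix a particular matching criterion.

```python
import numpy as np

def block_motion_vectors(src, dst, block=16, search=8):
    """For each block of `src`, find the best SAD match in `dst` and
    return one (dy, dx) motion vector per block: the block at (y, x)
    in `src` matches the block at (y + dy, x + dx) in `dst`."""
    h, w = src.shape[:2]
    mvs = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = src[y:y + block, x:x + block].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = dst[yy:yy + block, xx:xx + block].astype(np.int32)
                        sad = np.abs(ref - cand).sum()
                        if best is None or sad < best:
                            best, best_mv = sad, (dy, dx)
            mvs[(y, x)] = best_mv
    return mvs

# Forward MVs: start-frame blocks matched in the end frame;
# backward MVs: end-frame blocks matched in the start frame.
start = np.random.randint(0, 255, (64, 64), np.uint8)
end = np.roll(start, 4, axis=1)              # simple horizontal shift for testing
forward = block_motion_vectors(start, end)
backward = block_motion_vectors(end, start)
```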
In an embodiment, as shown in Fig. 5, generating the interpolated frames from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame includes:
Operation 502: determine a frame to be interpolated from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
Specifically, the terminal generates a frame to be interpolated between the start frame and the end frame from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame. The pixels of the frame to be interpolated are not yet known.
Operation 504: for each block of the frame to be interpolated, determine the number of forward motion vectors and backward motion vectors passing through the block.
Specifically, after dividing the frame to be interpolated into blocks, for each block the terminal determines the forward motion vectors passing through the block and their number, and the backward motion vectors passing through the block and their number.
Operation 506: determine the areas that the forward motion vectors and the backward motion vectors cover within the block.
Specifically, the terminal may determine the forward motion vectors passing through the block and, for each of them, the area over which it crosses the block; this crossing area is the area that the forward motion vector covers within the block. Likewise, the terminal may determine the backward motion vectors passing through the block and the area each of them covers within the block.
For example, if block A of the frame to be interpolated is crossed by the forward motion vectors of blocks B, C, and D of the start frame, the terminal computes the area covered in block A by the forward motion vector of block B, the area covered by the forward motion vector of block C, and the area covered by the forward motion vector of block D.
At the same time, if block A of the frame to be interpolated is crossed by the backward motion vectors of blocks b, c, and d of the end frame, the terminal computes the area covered in block A by the backward motion vector of block b, the area covered by the backward motion vector of block c, and the area covered by the backward motion vector of block d.
Fig. 6 is a schematic diagram of motion vectors passing through a block of the frame to be interpolated. By determining the motion vectors that pass through a block of the frame to be interpolated, the terminal can compute the area over which each motion vector crosses that block.
Operation 508: determine the mapped motion vector of the block according to the numbers of forward and backward motion vectors and the corresponding areas.
Here, the mapped motion vector is the motion vector of the block relative to the start frame and the end frame.
Specifically, for each forward motion vector and its corresponding area, the terminal multiplies the forward motion vector by the area it covers within the block to obtain a product; likewise for each backward motion vector and its corresponding area. The terminal sums the areas covered within the block by all forward motion vectors and all backward motion vectors. It then sums the products and computes the ratio of the sum of the products to the sum of the areas; this ratio is the mapped motion vector of the block. Proceeding in the same way, the terminal obtains the mapped motion vector of every block of the frame to be interpolated.
For example, the terminal may compute the mapped motion vector of a block using the following formula (1):

$$MV = \frac{\sum_{n=1}^{k} MV_n \, W_n}{\sum_{n=1}^{k} W_n} \tag{1}$$

where $MV$ is the mapped motion vector of the block, $k$ is the total number of forward and backward motion vectors passing through the block, $n$ is the index of a motion vector, $MV_n$ is the $n$-th motion vector, and $W_n$ is the area over which the $n$-th motion vector crosses the block.
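The following sketch implements formula (1) directly: the mapped motion vector of a block is the area-weighted average of all forward and backward motion vectors crossing it. The input format, a list of (motion_vector, crossing_area) pairs, is an assumption made for illustration.

```python
import numpy as np

def mapped_motion_vector(crossings):
    """Formula (1): MV = sum(MV_n * W_n) / sum(W_n), where W_n is the
    area over which the n-th motion vector crosses the block.

    crossings: list of ((dy, dx), area) pairs covering every forward and
               backward motion vector that passes through the block."""
    mvs = np.array([mv for mv, _ in crossings], dtype=float)
    areas = np.array([a for _, a in crossings], dtype=float)
    return tuple((mvs * areas[:, None]).sum(axis=0) / areas.sum())

# Two forward MVs and one backward MV cross the block, with areas 12, 3, 9.
print(mapped_motion_vector([((4, 0), 12.0), ((2, 2), 3.0), ((4, -2), 9.0)]))
```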
Operation 510: determine the pixel values in the block based on the mapped motion vector, and generate the interpolated frame from the pixel values of all blocks of the frame to be interpolated.
Specifically, after determining the mapped motion vector of each block of the frame to be interpolated, for each block the terminal uses the block's mapped motion vector to locate the matching block in the start frame and the matching block in the end frame, obtaining two matching blocks. The terminal then obtains the weights of the two matching blocks and the pixel value of each pixel in each matching block, and from these computes the pixel value of each pixel in the block. Proceeding in the same way yields the pixel values of every block of the frame to be interpolated. The frame to be interpolated, with all its pixel values determined, is then taken as the interpolated frame.
In this embodiment, when there are multiple frames to be repaired, an equal number of frames to be interpolated are generated. Following the processing of operations 502 to 510, the pixel value of each pixel in each frame to be interpolated can be obtained, yielding each interpolated frame.
In this embodiment, for each block of the frame to be interpolated, the numbers of forward and backward motion vectors passing through the block and the areas they cover within it are determined, so as to establish which blocks of the start frame and the end frame cross the block of the frame to be interpolated and over what area. From these numbers and areas the mapped motion vector of the block is determined, making explicit the block's motion vector relative to the start frame and the end frame. The pixel values in the block are determined from the mapped motion vector; the pixel values of all blocks are exactly the pixels of the interpolated frame, so the interpolated frame is generated accurately.
In an embodiment, determining the pixel values in the block based on the mapped motion vector includes:
determining, based on the mapped motion vector, the matching blocks corresponding to the block in the start frame and in the end frame; and performing weighted interpolation on these matching blocks to obtain the pixel values in the block.
Specifically, after determining the mapped motion vector of each block of the frame to be interpolated, for each block the terminal uses the block's mapped motion vector to determine the matching block in the start frame and the matching block in the end frame, obtaining two matching blocks. The terminal then obtains the weights of the two matching blocks and the pixel value of each pixel in each matching block, and from these computes the pixel value of each pixel in the block. Proceeding in the same way yields the pixel values of every block of the frame to be interpolated.
In this embodiment, the terminal may compute the pixel values of each block of the frame to be interpolated by inverse-distance weighted interpolation. For example, the terminal may compute the weight of the matching block in the start frame and the weight of the matching block in the end frame using formula (2):

$$w_{ij} = \frac{d_{ij}^{-h}}{\sum_{k=1}^{n} d_{ik}^{-h}} \tag{2}$$

where $w_{ij}$ is the weight between the $i$-th unknown point and the $j$-th known point, $d_{ij}$ is the distance between the $i$-th unknown point and the $j$-th known point, $d_{ik}$ is the distance between the $i$-th unknown point and the $k$-th known point, and $h$ is the power exponent. An unknown point is a pixel of a block of the frame to be interpolated; a known point is a pixel of a matching block in the start frame or in the end frame.
The terminal may then compute the pixel values of the block of the frame to be interpolated according to the inverse function of inverse-distance weighted interpolation, i.e., $P = W^{-1}T$, as shown in formula (3):

$$P = W^{-1}T \tag{3}$$

where $W = [w_{ij}]_{m \times n}$ is the weight matrix, $T = [s_{ij}]_{m \times n}$ is the matrix of known points, and $P = [v_{ij}]_{m \times n}$ is the matrix of unknown points.
Next, the terminal computes the pixel values of the block from the block of the frame to be interpolated and the matching block in the start frame, and again from the block and the matching block in the end frame, so that each pixel of the block of the frame to be interpolated is assigned two pixel values. The terminal determines the mean of the two pixel values of the same pixel and takes this mean as the target pixel value of that pixel, thereby obtaining the target pixel values of the block of the frame to be interpolated.
In this embodiment, the matching blocks corresponding to the block in the start frame and in the end frame are determined based on the mapped motion vector, and weighted interpolation is performed on these matching blocks, so that the pixel values in the block can be computed quickly and accurately.
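The inverse-distance weighting of formulas (2) and (3) can be sketched as follows: for each unknown pixel of the block, weights are computed from its distances to the known pixels of a matching block by formula (2), the unknown value is taken as the weighted combination of the known values, and the two estimates (from the start-frame match and the end-frame match) are averaged, as described above. The direct per-pixel weighted combination used here is one illustrative reading of the matrix relation in formula (3); the coordinates, the power h, and all function names are assumptions.

```python
import numpy as np

def idw_estimate(unknown_xy, known_xy, known_vals, h=2.0, eps=1e-9):
    """Formula (2): w_ij = d_ij**-h / sum_k d_ik**-h, then combine the
    known pixel values with those weights to estimate one unknown pixel."""
    d = np.linalg.norm(known_xy - unknown_xy, axis=1) + eps
    w = d ** (-h)
    w /= w.sum()
    return float(w @ known_vals)

def block_pixels_from_matches(coords, match_start, match_end, h=2.0):
    """Estimate each pixel of the block once from the start-frame matching
    block and once from the end-frame matching block, then average the
    two values, as the embodiment describes.

    coords:      (m, 2) array of pixel coordinates of the block
    match_start: (coords, values) of known points in the start-frame match
    match_end:   (coords, values) of known points in the end-frame match"""
    est_a = np.array([idw_estimate(p, *match_start, h) for p in coords])
    est_b = np.array([idw_estimate(p, *match_end, h) for p in coords])
    return (est_a + est_b) / 2.0

# Hypothetical 2-pixel block estimated from two 4-pixel matching blocks.
pts = np.array([[0.5, 0.5], [1.5, 0.5]])
grid = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
ka = (grid, np.array([10.0, 20.0, 30.0, 40.0]))
kb = (grid, np.array([12.0, 22.0, 32.0, 42.0]))
print(block_pixels_from_matches(pts, ka, kb))
```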
In an embodiment, before determining the number of forward and backward motion vectors passing through the block, the method further includes:
when the block is crossed by neither forward motion vectors nor backward motion vectors, determining the distances between the block and the forward and backward motion vectors that pass through other blocks; and taking the motion vector with the smallest distance as the motion vector of the block, the motion vector being a forward motion vector or a backward motion vector.
Specifically, when a block of the frame to be interpolated is crossed by neither a forward motion vector nor a backward motion vector, the terminal determines the Euclidean distances between the block and the forward motion vectors passing through other blocks, and between the block and the backward motion vectors passing through other blocks. The Euclidean distance (Euclidean metric) is the true distance between two points in m-dimensional space, or the natural length of a vector (i.e., the distance from the point to the origin); in two- and three-dimensional space it is the actual distance between two points. The terminal then determines the motion vector with the shortest Euclidean distance and takes it as the motion vector of the block.
When the motion vector with the shortest Euclidean distance is a forward motion vector, it is taken as the forward motion vector of the block; when it is a backward motion vector, it is taken as the backward motion vector of the block.
In this embodiment, adjacent blocks have the most similar features and the strongest feature continuity. When a block is crossed by neither forward nor backward motion vectors, the distances between the block and the forward and backward motion vectors passing through other blocks are determined, and the motion vector with the smallest distance is taken as the motion vector of the block; thus a block not crossed by any motion vector can borrow the motion vector with the shortest Euclidean distance as its own.
In an embodiment, before determining the number of forward and backward motion vectors passing through the block, the method further includes:
when the block is crossed by neither forward motion vectors nor backward motion vectors, determining the distances between the block and the other blocks of the frame to be interpolated; and taking the motion vector of the block with the smallest distance as the motion vector of the block, the motion vector being a forward motion vector or a backward motion vector.
Specifically, when the block is crossed by neither forward nor backward motion vectors, the terminal may compute the Euclidean distances between the block and the other blocks of the frame to be interpolated, compare these distances, determine the motion vector of the block with the smallest Euclidean distance, and take that motion vector as the motion vector of the block that is not crossed by any forward or backward motion vector.
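Both fallback embodiments reduce to a nearest-neighbour lookup in Euclidean distance. The sketch below represents each candidate (a motion vector crossing another block, or another block carrying a motion vector) by a centre point; that point representation is an illustrative simplification of "the distance between a motion vector (or block) and the block", which the patent leaves abstract.

```python
import numpy as np

def borrow_nearest_mv(block_center, candidates):
    """candidates: list of (center_xy, (dy, dx)) pairs, one per candidate
    motion vector. Returns the MV whose centre is nearest in Euclidean
    distance to the un-crossed block, per the fallback embodiments."""
    centers = np.array([c for c, _ in candidates], dtype=float)
    dists = np.linalg.norm(centers - np.asarray(block_center, float), axis=1)
    return candidates[int(np.argmin(dists))][1]

# The un-crossed block at (40, 40) borrows the closest neighbouring MV.
print(borrow_nearest_mv((40, 40), [((8, 8), (2, 0)), ((36, 44), (3, -1))]))
```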
In an embodiment, fusing the frame to be repaired with the corresponding interpolated frame to obtain the target frame includes:
obtaining the time phase of the frame to be repaired, and determining the interpolated frame with the same time phase as the frame to be repaired; and fusing the frame to be repaired and the interpolated frame that share the same time phase to obtain the target frame corresponding to the frame to be repaired.
Specifically, the terminal may divide the time interval between the start frame and the end frame evenly into as many parts as there are frames to be repaired, each part serving as a time phase, with each time phase corresponding to one frame to be repaired. The terminal may then generate one interpolated frame in each phase from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
Next, the terminal may obtain the time phase of each frame to be repaired and the time phase of each interpolated frame, and compare them. The terminal then determines the frames to be repaired and the interpolated frames that share the same time phase, and performs pixel fusion on each such pair to obtain the target frame corresponding to each frame to be repaired.
In this embodiment, by obtaining the time phase of the frame to be repaired and determining the interpolated frame with the same time phase, the interpolated frame corresponding to the frame to be repaired can be determined accurately. Fusing the frame to be repaired and the interpolated frame that share the same time phase restores the damaged regions of the frame to be repaired, yielding the corresponding target frame.
In an embodiment, fusing the frame to be repaired and the interpolated frame that share the same time phase to obtain the target frame corresponding to the frame to be repaired includes:
determining the pixels of the frame to be repaired and the pixels of the interpolated frame that share the same time phase; and performing weighted fusion on the pixels of the frame to be repaired and the pixels of the interpolated frame to obtain the target frame corresponding to the frame to be repaired.
Specifically, after determining a frame to be repaired and an interpolated frame with the same time phase, the terminal determines the pixels of the frame to be repaired and of the corresponding interpolated frame. The terminal then determines the matched pixels between the frame to be repaired and the interpolated frame and their pixel values, and performs weighted fusion on the values of each matched pair to obtain a target pixel value. Processing all pairs of same-phase frames in the same way yields the target pixel values; the pixels and their target pixel values constitute the target frame image.
In this embodiment, by determining the pixels of the frame to be repaired and of the interpolated frame that share the same time phase, and performing weighted fusion on them, the damaged regions of the frame to be repaired are restored accurately, yielding the repaired target frame.
For example, the terminal may perform the weighted fusion of the pixels of the frame to be repaired and the pixels of the interpolated frame using the following formula (4):

$$F(i,j) = w \cdot \hat{F}(i,j) + (1-w) \cdot \tilde{F}(i,j), \quad i = 1,\dots,M,\ j = 1,\dots,N \tag{4}$$

where $F$ is the target frame, $\hat{F}$ is the frame to be repaired, $\tilde{F}$ is the interpolated frame with the same time phase as $\hat{F}$, $w$ is the weight of each pixel, with $w < 1$, and $M$ and $N$ give the resolution of the image frames. The weight $w$ may be preset, or may be set adaptively according to the confidence of the motion vector of each interpolation block of the interpolated frame.
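A minimal sketch of the per-pixel weighted fusion of formula (4) is given below. The fixed weight w is an assumption (the patent allows w to be preset or set adaptively from motion-vector confidence); note that w = 0 reproduces the multi-frame case of a later embodiment, in which the interpolated frame replaces the damaged frame entirely.

```python
import numpy as np

def fuse_frames(damaged, interpolated, w=0.3):
    """Formula (4): F = w * damaged + (1 - w) * interpolated, applied to
    every pixel of two same-time-phase frames of resolution M x N."""
    assert damaged.shape == interpolated.shape
    fused = w * damaged.astype(np.float32) + (1.0 - w) * interpolated.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

# With w = 0 the target frame is exactly the interpolated frame.
a = np.full((2, 2), 100, np.uint8)
b = np.full((2, 2), 200, np.uint8)
assert np.array_equal(fuse_frames(a, b, w=0.0), b)
```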
In an embodiment, acquiring the video to be repaired includes:
acquiring an original video and determining the frames to be repaired in the original video; when the frame to be repaired is a single frame, taking the frame adjacent to and preceding the single frame to be repaired as the start frame, and the frame adjacent to and following it as the end frame; and taking the start frame, the end frame, and the single frame to be repaired as the video to be repaired.
Here, the original video is a video whose image frames are continuous in the time dimension and which contains the image frames that need repair.
Specifically, the terminal may acquire the original video locally, over a network, or from a third-party device. The terminal may determine the image frames in the original video that need repair, i.e., the frames to be repaired, and their number. When there is only one frame to be repaired, the terminal determines the frame adjacent to and preceding it in the time dimension and takes it as the start frame of the video to be repaired; it then determines the frame adjacent to and following it in the time dimension and takes it as the end frame. The terminal then composes the start frame, the frame to be repaired, and the end frame into a video segment, obtaining the video to be repaired.
As shown in Fig. 7, frame 1 of the original video is a damaged frame to be repaired. The terminal takes the adjacent preceding frame, frame 0, as the start frame, and the adjacent following frame, frame 2, as the end frame; frames 0 and 2 are undamaged image frames. The terminal forms the video to be repaired from frames 0, 1, and 2. Applying the video repair method of this embodiment yields frame 3, which is the repaired frame 1. The terminal replaces frame 1 of the video to be repaired with frame 3, and forms the target video from frames 0, 3, and 2. Further, the terminal may replace frames 0, 1, and 2 of the original video with the target video, or may simply replace frame 1 of the original video with frame 3 to obtain the repaired video.
In this embodiment, the original video is acquired and the frames to be repaired in it are determined; when the frame to be repaired is a single frame, the adjacent preceding frame is taken as the start frame and the adjacent following frame as the end frame, so that the damaged single image frame can be filtered out of the video and combined with the intact start and end frames to form the video to be repaired, without any processing of the other undamaged image frames. The damaged single frame can thus be repaired quickly, completing the video repair quickly.
In an embodiment, as shown in Fig. 8, the method further includes:
Operation 802: when the frames to be repaired are at least two consecutive frames, determine the first frame and the last frame of the at least two consecutive frames to be repaired.
Specifically, when the terminal detects at least two consecutive frames to be repaired in the original video, it may further determine the first and last of these frames in the time dimension: the first frame is the earliest of the at least two consecutive frames to be repaired in the time dimension, and the last frame is the latest.
Operation 804: take the frame of the original video adjacent to and preceding the first frame as the start frame, and the frame of the original video adjacent to and following the last frame as the end frame; the start frame and the end frame are frames of the original video that do not need repair.
Specifically, the terminal may determine the frame adjacent to and preceding the first of the at least two consecutive frames to be repaired in the time dimension and take it as the start frame of the video to be repaired; it then determines the frame adjacent to and following the last of these frames in the time dimension and takes it as the end frame.
Operation 806: take the start frame, the end frame, and the at least two consecutive frames to be repaired as the video to be repaired.
Specifically, the terminal composes the start frame, the at least two consecutive frames to be repaired, and the end frame into a video segment along the time dimension, obtaining the video to be repaired.
In this embodiment, when an interpolated frame is fused with the corresponding frame to be repaired, the weight of the frame to be repaired may be set to 0.
In this embodiment, when the frames to be repaired are at least two consecutive frames, determining the first and last of them makes it possible to filter out the consecutively damaged image frames of the original video. The frame of the original video adjacent to and preceding the first frame is taken as the start frame, and the frame adjacent to and following the last frame as the end frame; since the start frame and the end frame are frames of the original video that need no repair, intact lossless image frames whose features are closest to those of the damaged frames can be obtained. Taking the start frame, the end frame, and the at least two consecutive frames to be repaired as the video to be repaired, and repairing the frames based on the start frame and the end frame, makes the repaired image frames more accurate.
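The selection of the start frame, the frames to be repaired, and the end frame (operations 802 to 806, and the single-frame case above, which is simply a run of length one) can be sketched as follows; the damage detector itself is left abstract, since the patent does not prescribe one.

```python
def extract_repair_clip(frames, damaged_idx):
    """Given the sorted indices of a run of consecutive damaged frames,
    return the clip [start frame, damaged frames..., end frame] and its
    index range in the original video.

    Assumes the damaged run is not at the very beginning or end of the
    original video, so intact neighbours exist on both sides."""
    first, last = damaged_idx[0], damaged_idx[-1]
    start, end = first - 1, last + 1            # intact neighbours
    return frames[start:end + 1], (start, end)

# Frames 2 and 3 are damaged: the clip to repair is frames 1..4.
video = list(range(6))                          # stand-ins for image frames
clip, (s, e) = extract_repair_clip(video, [2, 3])
assert clip == [1, 2, 3, 4] and (s, e) == (1, 4)
```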
In an embodiment, a video repair method is provided, including:
The terminal acquires the original video and determines the frames to be repaired in the original video.
Optionally, when the frame to be repaired is a single frame, the terminal takes the adjacent preceding frame of the single frame to be repaired as the start frame and the adjacent following frame as the end frame, and takes the start frame, the end frame, and the single frame to be repaired as the video to be repaired.
Optionally, when the frames to be repaired are at least two consecutive frames, the terminal determines the first and last of the at least two consecutive frames to be repaired; takes the frame of the original video adjacent to and preceding the first frame as the start frame, and the frame adjacent to and following the last frame as the end frame, the start frame and the end frame being frames of the original video that need no repair; and takes the start frame, the end frame, and the at least two consecutive frames to be repaired as the video to be repaired.
Next, the terminal determines the start frame and the end frame in the video to be repaired, and the number of frames to be repaired between them.
Further, the terminal divides the start frame and the end frame into blocks respectively; determines, in the end frame, the matching block corresponding to each block in the start frame; and determines the forward motion vector of each block in the start frame relative to its matching block in the end frame.
Next, the terminal determines, in the start frame, the matching block corresponding to each block in the end frame, and determines the backward motion vector of each block in the end frame relative to its matching block in the start frame.
Next, the terminal determines the frame to be interpolated from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
Next, for each block of the frame to be interpolated, the terminal determines the number of forward and backward motion vectors passing through the block.
Next, the terminal determines the areas that the forward and backward motion vectors cover within the block.
Further, the terminal determines the mapped motion vector of the block according to the numbers of forward and backward motion vectors and the corresponding areas.
Optionally, when the block is crossed by neither forward nor backward motion vectors, the terminal determines the distances between the block and the forward and backward motion vectors passing through other blocks; takes the motion vector with the smallest distance as the motion vector of the block, the motion vector being a forward or backward motion vector; determines the number of forward and backward motion vectors passing through the block; determines the areas they cover within the block; and determines the mapped motion vector of the block according to these numbers and areas.
Next, the terminal determines, based on the mapped motion vector, the matching blocks corresponding to the block in the start frame and in the end frame.
Next, the terminal performs weighted interpolation on the matching blocks corresponding to the block in the start frame and the end frame to obtain the pixel values in the block.
Further, the terminal generates the interpolated frames from the pixel values of all blocks of the frame to be interpolated, the number of interpolated frames being the same as the number of frames to be repaired.
Next, the terminal obtains the time phase of the frame to be repaired, determines the interpolated frame with the same time phase, and determines the pixels of the frame to be repaired and of the interpolated frame that share the same time phase.
Further, the terminal performs weighted fusion on the pixels of the frame to be repaired and the pixels of the interpolated frame to obtain the target frame corresponding to the frame to be repaired.
Next, the terminal generates the target video based on the start frame, the end frame, and the target frames.
In this embodiment, by determining the number of frames to be repaired in the original video, the start frame and the end frame can be determined, so that the start frame, the frames to be repaired, and the end frame are filtered out to form the video to be repaired. By determining, in the end frame, the matching block corresponding to each block in the start frame, and the forward motion vector of each block in the start frame relative to its matching block in the end frame, the displacement of each object in the start frame relative to the same object in the end frame can be determined. By determining, in the start frame, the matching block corresponding to each block in the end frame, the displacement of each object in the end frame relative to the same object in the start frame can be determined, so that the interpolated frames between the start frame and the end frame are obtained accurately.
For each block of the frame to be interpolated, the numbers of forward and backward motion vectors passing through the block and the areas they cover within it are determined; from these numbers and areas the mapped motion vector of the block is determined, making explicit the block's motion vector relative to the start frame and the end frame, so that weighted interpolation can be performed on the matching blocks of the start frame and the end frame to compute the pixel values in the block quickly and accurately, yielding the interpolated frame. Fusing the frames to be repaired and the interpolated frames that share the same time phase restores the damaged regions of the frames to be repaired and yields the corresponding target frames, so that the repaired target video is obtained quickly and accurately. With the video repair method of this embodiment, no manual intervention is needed, saving video repair time and improving the efficiency and accuracy of video repair.
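Putting the pieces together, a high-level sketch of this embodiment might look as follows. Every name here is an assumption: `interpolate_at_phase` stands in for the block-wise motion-compensated interpolation of operations 502 to 510, and `fuse` for the weighted fusion of formula (4); neither is a function defined by the patent.

```python
def repair_video(frames, damaged_idx, interpolate_at_phase, fuse, w=0.3):
    """End-to-end flow: pick start/end frames around the damaged run,
    synthesise one motion-compensated frame per time phase, fuse each with
    its same-phase damaged frame, and splice the results back in."""
    first, last = damaged_idx[0], damaged_idx[-1]
    start, end = frames[first - 1], frames[last + 1]
    n = len(damaged_idx)                        # one interpolated frame per phase
    repaired = list(frames)
    for phase, idx in enumerate(damaged_idx, start=1):
        t = phase / (n + 1)                     # time phase in (0, 1)
        interp = interpolate_at_phase(start, end, t)
        repaired[idx] = fuse(frames[idx], interp, w)
    return repaired

# Hypothetical usage with trivial stand-ins for the two helpers.
video = [0.0, 1.0, None, None, 4.0]             # frames 2 and 3 are damaged
out = repair_video(
    video, [2, 3],
    interpolate_at_phase=lambda a, b, t: (1 - t) * a + t * b,
    fuse=lambda damaged, interp, w: interp,     # w = 0 case: keep interpolation
)
assert out == [0.0, 1.0, 2.0, 3.0, 4.0]
```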
It should be understood that although the operations in the flowcharts of Figs. 2 to 8 are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on their execution, and they may be executed in other orders. Moreover, at least some of the operations in Figs. 2 to 8 may include multiple sub-operations or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is also not necessarily sequential, and they may be executed in turn or alternately with other operations, or with at least part of the sub-operations or stages of other operations.
Fig. 9 is a structural block diagram of a video repair device of an embodiment. As shown in Fig. 9, the video repair device includes an acquisition module 902, an insertion module 904, a fusion module 906, and a generation module 908, where:
the acquisition module 902 is configured to acquire a video to be repaired, determine the start frame and the end frame in the video to be repaired, and determine the number of frames to be repaired between the start frame and the end frame;
the insertion module 904 is configured to generate interpolated frames between the start frame and the end frame based on motion compensation, the number of interpolated frames being the same as the number of frames to be repaired;
the fusion module 906 is configured to fuse the frames to be repaired with the corresponding interpolated frames to obtain target frames; and
the generation module 908 is configured to generate a target video based on the start frame, the end frame, and the target frames.
In this embodiment, a video to be repaired is acquired; the start frame and end frame in it and the number of frames to be repaired between them are determined; interpolated frames, equal in number to the frames to be repaired, are inserted between the start frame and the end frame based on motion compensation; the frames to be repaired are fused with the corresponding interpolated frames to obtain target frames; and a target video is generated based on the start frame, the end frame, and the target frames. Video repair can thus be completed quickly and accurately without manual intervention, improving the efficiency of video repair.
In an embodiment, the insertion module 904 is further configured to: divide the start frame and the end frame into blocks respectively; determine, in the end frame, the matching block corresponding to each block in the start frame, and determine the forward motion vector of each block in the start frame relative to its matching block in the end frame; determine, in the start frame, the matching block corresponding to each block in the end frame, and determine the backward motion vector of each block in the end frame relative to its matching block in the start frame; and generate the interpolated frames from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
In this embodiment, by dividing the start frame and the end frame into blocks, determining in the end frame the matching block for each block in the start frame, and determining the forward motion vector of each block in the start frame relative to its matching block in the end frame, the displacement of each object in the start frame relative to the same object in the end frame can be determined. Determining in the start frame the matching block for each block in the end frame makes it possible to determine the displacement of each object in the end frame relative to the same object in the start frame. From these two displacements, the position of the same object in each interpolated frame can be determined accurately, so that the interpolated frames between the start frame and the end frame are obtained accurately.
In an embodiment, the insertion module 904 is further configured to: determine the frame to be interpolated from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame; for each block of the frame to be interpolated, determine the number of forward and backward motion vectors passing through the block; determine the areas that the forward and backward motion vectors cover within the block; determine the mapped motion vector of the block according to these numbers and areas; and determine the pixel values in the block based on the mapped motion vector, generating the interpolated frame from the pixel values of all blocks of the frame to be interpolated.
In this embodiment, for each block of the frame to be interpolated, the numbers of forward and backward motion vectors passing through the block and the areas they cover within it are determined, so as to establish which blocks of the start frame and the end frame cross the block of the frame to be interpolated and over what area. From these numbers and areas the mapped motion vector of the block is determined, making explicit the block's motion vector relative to the start frame and the end frame. The pixel values in the block are determined from the mapped motion vector; the pixel values of all blocks are exactly the pixels of the interpolated frame, so the interpolated frame is generated accurately.
In an embodiment, the insertion module 904 is further configured to: determine, based on the mapped motion vector, the matching blocks corresponding to the block in the start frame and in the end frame; and perform weighted interpolation on the matching blocks corresponding to the block in the start frame and the end frame to obtain the pixel values in the block.
In this embodiment, the matching blocks corresponding to the block in the start frame and in the end frame are determined based on the mapped motion vector, and weighted interpolation is performed on these matching blocks, so that the pixel values in the block can be computed quickly and accurately.
In an embodiment, the insertion module 904 is further configured to: when the block is crossed by neither forward motion vectors nor backward motion vectors, determine the distances between the block and the forward and backward motion vectors passing through other blocks; and take the motion vector with the smallest distance as the motion vector of the block, the motion vector being a forward motion vector or a backward motion vector.
In this embodiment, adjacent blocks have the most similar features and the strongest feature continuity. When a block is crossed by neither forward nor backward motion vectors, the distances between the block and the forward and backward motion vectors passing through other blocks are determined, and the motion vector with the smallest distance is taken as the motion vector of the block; thus a block not crossed by any motion vector can borrow the motion vector with the shortest Euclidean distance as its own.
In an embodiment, the fusion module 906 is further configured to: obtain the time phase of the frame to be repaired, and determine the interpolated frame with the same time phase as the frame to be repaired; and fuse the frame to be repaired and the interpolated frame that share the same time phase to obtain the target frame corresponding to the frame to be repaired.
In this embodiment, by obtaining the time phase of the frame to be repaired and determining the interpolated frame with the same time phase, the interpolated frame corresponding to the frame to be repaired can be determined accurately. Fusing the frame to be repaired and the interpolated frame that share the same time phase restores the damaged regions of the frame to be repaired, yielding the corresponding target frame.
In an embodiment, the fusion module 906 is further configured to: determine the pixels of the frame to be repaired and the pixels of the interpolated frame that share the same time phase; and perform weighted fusion on the pixels of the frame to be repaired and the pixels of the interpolated frame to obtain the target frame corresponding to the frame to be repaired.
In this embodiment, by determining the pixels of the frame to be repaired and of the interpolated frame that share the same time phase, and performing weighted fusion on them, the damaged regions of the frame to be repaired are restored accurately, yielding the repaired target frame.
In an embodiment, the acquisition module 902 is further configured to: acquire the original video and determine the frames to be repaired in the original video; when the frame to be repaired is a single frame, take the adjacent preceding frame of the single frame to be repaired as the start frame, and the adjacent following frame of the single frame to be repaired as the end frame; and take the start frame, the end frame, and the single frame to be repaired as the video to be repaired.
In this embodiment, the original video is acquired and the frames to be repaired in it are determined; when the frame to be repaired is a single frame, the adjacent preceding frame is taken as the start frame and the adjacent following frame as the end frame, so that the damaged single image frame can be filtered out of the video and combined with the intact start and end frames to form the video to be repaired, without any processing of the other undamaged image frames. The damaged single frame can thus be repaired quickly, completing the video repair quickly.
In an embodiment, the acquisition module 902 is further configured to: when the frames to be repaired are at least two consecutive frames, determine the first and last of the at least two consecutive frames to be repaired; take the frame of the original video adjacent to and preceding the first frame as the start frame, and the frame adjacent to and following the last frame as the end frame, the start frame and the end frame being frames of the original video that need no repair; and take the start frame, the end frame, and the at least two consecutive frames to be repaired as the video to be repaired.
In this embodiment, when the frames to be repaired are at least two consecutive frames, determining the first and last of them makes it possible to filter out the consecutively damaged image frames of the original video. The frame of the original video adjacent to and preceding the first frame is taken as the start frame, and the frame adjacent to and following the last frame as the end frame; since the start frame and the end frame are frames of the original video that need no repair, intact lossless image frames whose features are closest to those of the damaged frames can be obtained. Taking the start frame, the end frame, and the at least two consecutive frames to be repaired as the video to be repaired, and repairing the frames based on the start frame and the end frame, makes the repaired image frames more accurate.
The division of the modules in the video repair device described above is only for illustration; in other embodiments, the video repair device may be divided into different modules as needed to complete all or part of the functions of the video repair device described above.
For the specific limitations of the video repair device, reference may be made to the limitations of the video repair method above, which are not repeated here. Each module of the video repair device described above may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, the processor of the computer device, or may be stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
Fig. 10 is a schematic diagram of the internal structure of an electronic device in an embodiment. As shown in Fig. 10, the electronic device includes a processor and a memory connected through a system bus. The processor provides computing and control capabilities to support the operation of the entire electronic device. In this embodiment, the processor is used to acquire a video to be repaired, determine the start frame and the end frame in the video to be repaired, and determine the number of frames to be repaired between them; generate interpolated frames between the start frame and the end frame based on motion compensation, the number of interpolated frames being the same as the number of frames to be repaired; fuse the frames to be repaired with the corresponding interpolated frames to obtain target frames; and generate a target video based on the start frame, the end frame, and the target frames. The memory may include a non-volatile storage medium and internal memory. The non-volatile storage medium stores an operating system and a computer program; the memory also stores the video to be repaired, the data generated during the repair process, the target video, and so on. The computer program may be executed by the processor to implement the video repair method provided in the following embodiments. The internal memory provides a cached operating environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, or a wearable device.
Each module of the video repair device provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server. The program modules constituted by the computer program may be stored in the memory of the electronic device. When the computer program is executed by the processor, the operations of the methods described in the embodiments of the present application are realized.
The embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the operations of the video repair method.
A computer program product containing instructions which, when run on a computer, cause the computer to perform the video repair method.
Any reference to memory, storage, a database, or other media used in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the embodiments above are described; however, as long as such combinations are not contradictory, they shall be considered within the scope of this specification.
The embodiments described above express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they shall not therefore be understood as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

  1. A video repair method, characterized by comprising:
    acquiring a video to be repaired, determining a start frame and an end frame in the video to be repaired, and determining the number of frames to be repaired between the start frame and the end frame;
    generating interpolated frames between the start frame and the end frame based on motion compensation, the number of the interpolated frames being the same as the number of the frames to be repaired;
    fusing the frames to be repaired with the corresponding interpolated frames to obtain target frames; and
    generating a target video based on the start frame, the end frame, and the target frames.
  2. The method according to claim 1, characterized in that generating interpolated frames between the start frame and the end frame based on motion compensation comprises:
    dividing the start frame and the end frame into blocks respectively;
    determining, in the end frame, a matching block corresponding to each block in the start frame, and determining a forward motion vector of each block in the start frame relative to the matching block in the end frame;
    determining, in the start frame, a matching block corresponding to each block in the end frame, and determining a backward motion vector of each block in the end frame relative to the matching block in the start frame; and
    generating interpolated frames from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
  3. The method according to claim 2, characterized in that generating interpolated frames from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame comprises:
    determining a frame to be interpolated from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame;
    for each block of the frame to be interpolated, determining the number of forward motion vectors and backward motion vectors passing through the block;
    determining the areas that the forward motion vectors and the backward motion vectors cover within the block;
    determining a mapped motion vector of the block according to the numbers of the forward motion vectors and the backward motion vectors and the corresponding areas; and
    determining pixel values in the block based on the mapped motion vector, and generating the interpolated frame from the pixel values of the blocks of the frame to be interpolated.
  4. The method according to claim 3, characterized in that determining a mapped motion vector of the block according to the numbers of the forward motion vectors and the backward motion vectors and the corresponding areas comprises:
    determining the product of each forward motion vector and its corresponding area, and the product of each backward motion vector and its corresponding area;
    determining the sum of the areas corresponding to the forward motion vectors and the areas corresponding to the backward motion vectors; and
    taking the ratio of the sum of the products to the sum of the areas as the mapped motion vector corresponding to the block.
  5. The method according to claim 3, characterized in that determining pixel values in the block based on the mapped motion vector comprises:
    determining, based on the mapped motion vector, matching blocks corresponding to the block in the start frame and in the end frame; and
    performing weighted interpolation on the matching blocks corresponding to the block in the start frame and the end frame to obtain the pixel values in the block.
  6. The method according to claim 3, characterized in that, before determining the number of forward motion vectors and backward motion vectors passing through the block, the method further comprises:
    when the block is crossed by neither forward motion vectors nor backward motion vectors, determining the distances between the block and other blocks of the frame to be interpolated; and
    taking the motion vector corresponding to the block with the smallest distance as the motion vector corresponding to the block, the motion vector being a forward motion vector or a backward motion vector.
  7. The method according to claim 3, characterized in that, before determining the number of forward motion vectors and backward motion vectors passing through the block, the method further comprises:
    when the block is crossed by neither forward motion vectors nor backward motion vectors, determining the distances between the block and the forward motion vectors and backward motion vectors passing through other blocks; and
    taking the motion vector with the smallest distance as the motion vector corresponding to the block, the motion vector being a forward motion vector or a backward motion vector.
  8. The method according to claim 1, characterized in that fusing the frames to be repaired with the corresponding interpolated frames to obtain target frames comprises:
    obtaining the time phase of the frame to be repaired, and determining the interpolated frame with the same time phase as the frame to be repaired; and
    fusing the frame to be repaired and the interpolated frame that share the same time phase to obtain the target frame corresponding to the frame to be repaired.
  9. The method according to claim 8, characterized in that fusing the frame to be repaired and the interpolated frame that share the same time phase to obtain the target frame corresponding to the frame to be repaired comprises:
    determining the pixels of the frame to be repaired and the pixels of the interpolated frame that share the same time phase; and
    performing weighted fusion on the pixels of the frame to be repaired and the pixels of the interpolated frame to obtain the target frame corresponding to the frame to be repaired.
  10. The method according to claim 1, characterized in that acquiring a video to be repaired comprises:
    acquiring an original video, and determining the frames to be repaired in the original video;
    when the frame to be repaired is a single frame, taking the frame adjacent to and preceding the single frame to be repaired as the start frame, and the frame adjacent to and following the single frame to be repaired as the end frame; and
    taking the start frame, the end frame, and the single frame to be repaired as the video to be repaired.
  11. The method according to claim 10, characterized in that the method further comprises:
    when the frames to be repaired are at least two consecutive frames, determining the first frame and the last frame of the at least two consecutive frames to be repaired;
    taking the frame of the original video adjacent to and preceding the first frame as the start frame, and the frame of the original video adjacent to and following the last frame as the end frame, the start frame and the end frame being frames of the original video that are not to be repaired; and
    taking the start frame, the end frame, and the at least two consecutive frames to be repaired as the video to be repaired.
  12. A video repair device, characterized by comprising:
    an acquisition module, configured to acquire a video to be repaired, determine a start frame and an end frame in the video to be repaired, and determine the number of frames to be repaired between the start frame and the end frame;
    an insertion module, configured to generate interpolated frames between the start frame and the end frame based on motion compensation, the number of the interpolated frames being the same as the number of the frames to be repaired;
    a fusion module, configured to fuse the frames to be repaired with the corresponding interpolated frames to obtain target frames; and
    a generation module, configured to generate a target video based on the start frame, the end frame, and the target frames.
  13. The device according to claim 12, characterized in that the insertion module is further configured to divide the start frame and the end frame into blocks respectively; determine, in the end frame, a matching block corresponding to each block in the start frame, and determine a forward motion vector of each block in the start frame relative to the matching block in the end frame; determine, in the start frame, a matching block corresponding to each block in the end frame, and determine a backward motion vector of each block in the end frame relative to the matching block in the start frame; and generate interpolated frames from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame.
  14. The device according to claim 13, characterized in that the insertion module is further configured to determine a frame to be interpolated from the forward motion vector of each block in the start frame and the backward motion vector of each block in the end frame; for each block of the frame to be interpolated, determine the number of forward motion vectors and backward motion vectors passing through the block; determine the areas that the forward motion vectors and the backward motion vectors cover within the block; determine a mapped motion vector of the block according to the numbers of the forward motion vectors and the backward motion vectors and the corresponding areas; and determine pixel values in the block based on the mapped motion vector, generating the interpolated frame from the pixel values of the blocks of the frame to be interpolated.
  15. The device according to claim 13, characterized in that the insertion module is further configured to determine, based on the mapped motion vector, matching blocks corresponding to the block in the start frame and in the end frame; and perform weighted interpolation on the matching blocks corresponding to the block in the start frame and the end frame to obtain the pixel values in the block.
  16. The device according to claim 13, characterized in that the insertion module is further configured to, when the block is crossed by neither forward motion vectors nor backward motion vectors, determine the distances between the block and the forward motion vectors and backward motion vectors passing through other blocks; and take the motion vector with the smallest distance as the motion vector corresponding to the block, the motion vector being a forward motion vector or a backward motion vector.
  17. The device according to claim 12, characterized in that the fusion module is further configured to obtain the time phase of the frame to be repaired and determine the interpolated frame with the same time phase as the frame to be repaired; and fuse the frame to be repaired and the interpolated frame that share the same time phase to obtain the target frame corresponding to the frame to be repaired.
  18. The device according to claim 17, characterized in that the fusion module is further configured to determine the pixels of the frame to be repaired and the pixels of the interpolated frame that share the same time phase; and perform weighted fusion on the pixels of the frame to be repaired and the pixels of the interpolated frame to obtain the target frame corresponding to the frame to be repaired.
  19. An electronic device, comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the operations of the method according to any one of claims 1 to 11.
  20. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the operations of the method according to any one of claims 1 to 11.
PCT/CN2021/076044 2020-04-17 2021-02-08 Video repair method, device, electronic equipment, and computer-readable storage medium WO2021208580A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010304728.9 2020-04-17
CN202010304728.9A CN111491204B (zh) 2020-04-17 2020-04-17 Video repair method, device, electronic equipment, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021208580A1 true WO2021208580A1 (zh) 2021-10-21

Family

ID=71812894

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/076044 WO2021208580A1 (zh) 2020-04-17 2021-02-08 Video repair method, device, electronic equipment, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN111491204B (zh)
WO (1) WO2021208580A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274106A (zh) * 2023-10-31 2023-12-22 荣耀终端有限公司 Photo repair method, electronic device, and related medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111491204B (zh) 2020-04-17 2022-07-12 Oppo广东移动通信有限公司 Video repair method, device, electronic equipment, and computer-readable storage medium
CN113613088A (zh) 2021-08-02 2021-11-05 安徽文香科技有限公司 Repair method and device for MP4 files, electronic device, and readable storage medium
CN114268832A (zh) 2021-12-20 2022-04-01 杭州逗酷软件科技有限公司 Repair method, electronic device, and computer storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101277419A (zh) * 2007-03-27 2008-10-01 株式会社东芝 Frame interpolation device and method
US20100104140A1 (en) * 2008-10-23 2010-04-29 Samsung Electronics Co., Ltd. Apparatus and method for improving frame rate using motion trajectory
CN102572358A (zh) * 2010-12-16 2012-07-11 三菱电机株式会社 Frame interpolation device and frame interpolation method
CN103947181A (zh) * 2011-11-18 2014-07-23 三菱电机株式会社 Image processing device and method, and image display device and method
CN106031144A (zh) * 2014-03-28 2016-10-12 华为技术有限公司 Method and device for generating motion-compensated video frames
CN110856048A (zh) * 2019-11-21 2020-02-28 北京达佳互联信息技术有限公司 Video repair method, device, equipment, and storage medium
CN111491204A (zh) * 2020-04-17 2020-08-04 Oppo广东移动通信有限公司 Video repair method, device, electronic equipment, and computer-readable storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6583823B1 (en) * 1997-08-01 2003-06-24 Sony Corporation Methods, apparatuses, and mediums for repairing a pixel associated with motion-picture processes
JP4220284B2 (ja) * 2003-03-28 2009-02-04 株式会社東芝 Frame interpolation method and device, and image display system using the same
CN100337472C (zh) * 2004-09-30 2007-09-12 中国科学院计算技术研究所 Video synthesis method with moving foreground
EP1977395B1 (en) * 2006-01-27 2018-10-24 Imax Corporation Methods and systems for digitally re-mastering of 2d and 3d motion pictures for exhibition with enhanced visual quality
CN101827272A (zh) * 2009-03-06 2010-09-08 株式会社日立制作所 Video error repair device
CN103402098B (zh) * 2013-08-19 2016-07-06 武汉大学 Video frame interpolation method based on image interpolation
CN106791279B (zh) * 2016-12-30 2020-01-03 中国科学院自动化研究所 Motion compensation method and system based on occlusion detection
CN108389217A (zh) * 2018-01-31 2018-08-10 华东理工大学 Video synthesis method based on gradient-domain blending
CN109005342A (zh) * 2018-08-06 2018-12-14 Oppo广东移动通信有限公司 Panoramic shooting method, device, and imaging equipment
CN109922231A (zh) * 2019-02-01 2019-06-21 重庆爱奇艺智能科技有限公司 Method and device for generating interpolated frame images of a video

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274106A (zh) * 2023-10-31 2023-12-22 荣耀终端有限公司 Photo repair method, electronic device, and related medium
CN117274106B (zh) * 2023-10-31 2024-04-09 荣耀终端有限公司 Photo repair method, electronic device, and related medium

Also Published As

Publication number Publication date
CN111491204B (zh) 2022-07-12
CN111491204A (zh) 2020-08-04

Similar Documents

Publication Publication Date Title
WO2021208580A1 (zh) Video repair method, device, electronic equipment, and computer-readable storage medium
CN113034380B (zh) Video spatio-temporal super-resolution method and device based on improved deformable convolution correction
US11756170B2 Method and apparatus for correcting distorted document image
Yang et al. Self-supervised learning of depth inference for multi-view stereo
US9865037B2 Method for upscaling an image and apparatus for upscaling an image
US11443481B1 Reconstructing three-dimensional scenes portrayed in digital images utilizing point cloud machine-learning models
CN111028153A (zh) Image processing and neural network training method, device, and computer equipment
CN112700516B (zh) Video rendering method and device based on deep learning
CN111402139A (zh) Image processing method, device, electronic equipment, and computer-readable storage medium
Sheng et al. Cross-view recurrence-based self-supervised super-resolution of light field
Demkowicz Fully automatic hp-adaptivity for Maxwell's equations
CN110457361B (zh) Feature data acquisition method, device, computer equipment, and storage medium
CN115049769A (zh) Character animation generation method, device, computer equipment, and storage medium
CN111586321A (zh) Video generation method, device, electronic equipment, and computer-readable storage medium
CN114048845A (zh) Point cloud repair method, device, computer equipment, and storage medium
CN113240042A (zh) Image classification preprocessing and image classification method, device, equipment, and storage medium
Dalalyan et al. $L_1$-Penalized Robust Estimation for a Class of Inverse Problems Arising in Multiview Geometry
WO2024001139A1 (zh) Video classification method, device, and electronic equipment
CN115272082A (zh) Model training and video quality improvement method, device, and computer equipment
CN111311731B (zh) Random grayscale image generation method, device, and computer equipment based on digital projection
CN115424038A (zh) Multi-scale image processing method, system, device, and computer equipment
JP2015197818A (ja) Image processing device and method
CN114049255A (zh) Image processing method and device, compute-in-memory chip, and electronic equipment
Aadil et al. Improving super resolution methods via incremental residual learning
Li et al. Channel-Spatial Transformer for Efficient Image Super-Resolution

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21788773

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21788773

Country of ref document: EP

Kind code of ref document: A1