WO2023273342A1 - Method, apparatus, device, medium, and product for repairing video - Google Patents

Method, apparatus, device, medium, and product for repairing video

Info

Publication number
WO2023273342A1
Authority
WO
WIPO (PCT)
Prior art keywords
repaired
sample
category
pixel
video frame
Prior art date
Application number
PCT/CN2022/075035
Other languages
English (en)
French (fr)
Inventor
李鑫
郑贺
刘芳龙
何栋梁
Original Assignee
北京百度网讯科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京百度网讯科技有限公司 filed Critical 北京百度网讯科技有限公司
Priority to KR1020227035706A priority Critical patent/KR20220146663A/ko
Priority to JP2022553168A priority patent/JP2023535662A/ja
Priority to US17/944,745 priority patent/US20230008473A1/en
Publication of WO2023273342A1 publication Critical patent/WO2023273342A1/zh

Classifications

    • G06T 5/77 Image enhancement or restoration: Retouching; Inpainting; Scratch removal
    • G06N 3/044 Neural networks: Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 Neural networks: Combinations of networks
    • G06N 3/08 Neural networks: Learning methods
    • G06T 2207/10016 Image acquisition modality: Video; Image sequence
    • G06T 2207/20081 Special algorithmic details: Training; Learning
    • G06T 2207/20084 Special algorithmic details: Artificial neural networks [ANN]

Definitions

  • The present disclosure relates to the technical field of artificial intelligence, in particular to computer vision and deep learning technology, and can specifically be used in image restoration scenarios.
  • Old movies are usually shot and archived on film, so they place high requirements on the preservation environment.
  • The present disclosure provides a method, apparatus, electronic device, storage medium, and computer program product for repairing video.
  • A method for repairing video, including: acquiring a video frame sequence to be repaired; determining, based on the video frame sequence to be repaired and a preset category detection model, the target category corresponding to each pixel in the video frame sequence to be repaired; determining, from the video frame sequence to be repaired, the pixels whose target category is the category to be repaired as the pixels to be repaired; and performing repair processing on the region to be repaired corresponding to the pixels to be repaired, to obtain a target video frame sequence.
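The four steps of this method can be sketched in a few lines of Python. Everything below is an illustrative stand-in, not a component described in this application: `detect_fn` plays the role of the category detection model and `repair_fn` the role of the repair processing.

```python
import numpy as np

def repair_video(frames, detect_fn, repair_fn, threshold=0.5):
    """Sketch of the four claimed steps (all names are illustrative)."""
    # Step 1: acquire the video frame sequence to be repaired, shape (T, H, W).
    frames = np.asarray(frames, dtype=float)
    # Step 2: a category detection model yields, per pixel, the probability
    # of belonging to the category to be repaired.
    probs = detect_fn(frames)          # (T, H, W), values in [0, 1]
    # Step 3: pixels whose probability exceeds the threshold are the
    # pixels to be repaired.
    to_repair = probs > threshold      # boolean mask
    # Step 4: repair processing on the masked regions yields the
    # target video frame sequence.
    return repair_fn(frames, to_repair)

# Toy stand-ins for the model and the repair software:
frames = np.ones((2, 4, 4))
frames[0, 1, 1] = 0.0                          # a "scratch" pixel
detect = lambda f: (f < 0.5).astype(float)     # pretend detector
repair = lambda f, m: np.where(m, 1.0, f)      # pretend repair software
out = repair_video(frames, detect, repair)
print(out[0, 1, 1])                            # 1.0 after repair
```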
  • An apparatus for repairing video, including: a video acquisition unit configured to acquire a video frame sequence to be repaired; a category determination unit configured to determine, based on the video frame sequence to be repaired and a preset category detection model, the target category corresponding to each pixel in the video frame sequence to be repaired; a pixel determination unit configured to determine, from the video frame sequence to be repaired, the pixels whose target category is the category to be repaired as the pixels to be repaired; and a video repair unit configured to perform repair processing on the region to be repaired corresponding to the pixels to be repaired, to obtain a target video frame sequence.
  • An electronic device, including: one or more processors; and a memory for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement any one of the above methods for repairing video.
  • a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to make a computer execute any one of the above methods for repairing video.
  • A computer program product including a computer program which, when executed by a processor, implements any one of the above methods for repairing video.
  • a method for repairing video is provided, which can improve the efficiency of video repair.
  • FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure can be applied;
  • FIG. 2 is a flowchart of one embodiment of a method for repairing video according to the present disclosure;
  • FIG. 3 is a schematic diagram of an application scenario of a method for repairing video according to the present disclosure
  • FIG. 4 is a flowchart of another embodiment of a method for repairing video according to the present disclosure.
  • FIG. 5 is a schematic structural diagram of an embodiment of a device for repairing video according to the present disclosure
  • FIG. 6 is a block diagram of an electronic device for implementing the method for repairing a video according to an embodiment of the present disclosure.
  • a system architecture 100 may include terminal devices 101 , 102 , 103 , a network 104 and a server 105 .
  • the network 104 is used as a medium for providing communication links between the terminal devices 101 , 102 , 103 and the server 105 .
  • Network 104 may include various connection types, such as wires, wireless communication links, or fiber optic cables, among others.
  • The terminal devices 101, 102, 103 may be electronic devices such as mobile phones, computers, and tablets. Software for repairing video is installed on the terminal devices 101, 102, 103; a user can input a video that needs repair, such as an old movie, into this software, and the software can output the repaired video, such as the repaired old movie.
  • the terminal devices 101, 102, and 103 may be hardware or software.
  • When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices, including but not limited to televisions, smart phones, tablet computers, e-book readers, vehicle-mounted computers, laptop computers, and desktop computers.
  • When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
  • The server 105 may be a server that provides various services. For example, after the terminal devices 101, 102, 103 acquire the video frame sequence to be repaired input by the user, the server 105 may input the video frame sequence to be repaired into a preset category detection model to obtain the target category corresponding to each pixel in the video frame sequence to be repaired. Then, the pixels whose target category is the category to be repaired are determined as the pixels to be repaired. Based on repair processing of the regions to be repaired corresponding to the pixels to be repaired, a target video frame sequence, that is, a repaired video, can be obtained, and the target video frame sequence is sent to the terminal devices 101, 102, 103.
  • the server 105 may be hardware or software.
  • the server 105 can be implemented as a distributed server cluster composed of multiple servers, or as a single server.
  • When the server 105 is software, it can be implemented as multiple pieces of software or software modules (for example, for providing distributed services) or as a single piece of software or software module. No specific limitation is made here.
  • the method for repairing a video provided by the embodiment of the present disclosure may be executed by the terminal devices 101 , 102 , 103 or by the server 105 .
  • the device for repairing video can be set in the terminal devices 101 , 102 , 103 or in the server 105 .
  • The terminal devices, networks and servers in FIG. 1 are only illustrative. According to implementation needs, there can be any number of terminal devices, networks and servers.
  • the method for repairing video of the present embodiment comprises the following steps:
  • Step 201 acquire a sequence of video frames to be repaired.
  • The execution subject (such as the server 105 or the terminal devices 101, 102, 103 in FIG. 1) can obtain the video frame sequence to be repaired from locally stored data, or obtain it from other electronic devices with which a connection has been established.
  • the video frame sequence to be repaired may also be obtained from the network, which is not limited in this embodiment.
  • the sequence of video frames to be repaired refers to a sequence composed of various video frames included in the target video to be repaired.
  • When the execution subject obtains the sequence of video frames to be repaired, it may also perform preliminary screening on each video frame contained in the target video that needs repair, determine the video frames that may need to be repaired, and form the above-mentioned video frame sequence to be repaired from them. For example, image recognition is performed on each video frame included in the target video; in response to determining that there is an object to be repaired in a video frame, that video frame is determined as a candidate video frame, and the video frame sequence to be repaired is generated based on the candidate video frames.
  • The image recognition technology here can be an existing image recognition technology; the purpose is to identify the scratches and noise in the images waiting to be repaired.
  • Step 202 based on the video frame sequence to be repaired and the preset category detection model, determine the target category corresponding to each pixel in the video frame sequence to be repaired.
  • The preset category detection model is used to detect whether each pixel in each video frame to be repaired in the video frame sequence belongs to the pixels to be repaired, where a pixel to be repaired is a pixel belonging to an object to be repaired in the video frame. Objects to be repaired may include, but are not limited to, scratches, noise patches, noise points, and the like, which are not limited in this embodiment.
  • The output data of the preset category detection model can be the probability that a pixel belongs to the pixels to be repaired, the probability that it does not belong to the pixels to be repaired, the probability that it belongs to normal pixels, or the probability that it does not belong to normal pixels.
  • the target category includes a category that needs to be repaired, such as a category to be repaired, and can also include a category that does not need to be repaired, such as a normal category.
  • The target category may also include an undetermined category, that is, a category that is difficult to accurately identify based on the output data. For this undetermined category, the relevant pixels can be marked and output so that relevant personnel can judge them manually, improving the accuracy of determining the area that needs to be repaired.
  • The target category includes a category to be repaired and a normal category; and determining, based on the video frame sequence to be repaired and a preset category detection model, the target category corresponding to each pixel includes: inputting the video frame sequence to be repaired into the preset category detection model, and obtaining the probability value image, output by the preset category detection model, of each video frame to be repaired in the sequence, where the probability value image is used to represent the probability that each pixel in the video frame to be repaired belongs to the category to be repaired; and determining, based on the probability value image and a preset probability threshold, the target category corresponding to each pixel in the video frame sequence to be repaired.
  • the category to be repaired refers to a category that needs to be repaired
  • the normal category refers to a category that does not need to be repaired.
  • the execution subject determines the target category corresponding to each pixel in the video frame sequence to be repaired.
  • The video frame sequence to be repaired can be input into the preset category detection model to obtain the probability value image output by the preset category detection model.
  • Each video frame to be repaired may correspond to a probability value image, and the probability value image is used to represent the probability that each pixel in the corresponding video frame to be repaired belongs to the class to be repaired.
  • The execution subject can set a preset probability threshold in advance, and by comparing the probability that each pixel belongs to the category to be repaired with the preset probability threshold, it can determine whether each pixel belongs to the category to be repaired or to the normal category. For example, for the probability that a pixel belongs to the category to be repaired, in response to determining that the probability is greater than the preset probability threshold, the pixel is determined to belong to the category to be repaired; in response to determining that the probability is less than or equal to the preset probability threshold, the pixel is determined to belong to the normal category.
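The threshold comparison just described amounts to an element-wise test on the probability value image. A minimal NumPy sketch (category codes and threshold value are illustrative choices, not from this application):

```python
import numpy as np

REPAIR, NORMAL = 1, 0  # illustrative category codes

def categorize(prob_image, prob_threshold=0.5):
    # prob >  threshold -> category to be repaired
    # prob <= threshold -> normal category
    return np.where(prob_image > prob_threshold, REPAIR, NORMAL)

prob = np.array([[0.9, 0.2],
                 [0.5, 0.7]])
print(categorize(prob))  # [[1 0]
                         #  [0 1]]   (0.5 is not > 0.5, so it is normal)
```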
  • Step 203 determine the pixels to be repaired whose target category is the category to be repaired from the sequence of video frames to be repaired.
  • the execution subject may determine the pixels whose target category is the category to be repaired among the pixels as the pixels to be repaired.
  • the execution subject may also remove pixels whose target category is a normal category from all pixels, and determine the remaining pixels as pixels to be repaired.
  • Step 204 performing repair processing on the region to be repaired corresponding to the pixel to be repaired, to obtain a sequence of target video frames.
  • the execution subject may determine the area to be repaired based on each pixel to be repaired, and the area to be repaired is composed of pixels to be repaired. Based on the restoration processing of the area to be repaired, the target video frame sequence can be obtained.
  • The restoration processing here can adopt existing restoration technology, for example performing restoration processing on these regions to be repaired with various existing video restoration software, to obtain the target video frame sequence.
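As a rough illustration of what "repair processing on a region to be repaired" involves, the toy routine below fills masked pixels by repeatedly averaging their four neighbours. This is a hypothetical stand-in only; the dedicated restoration software mentioned above uses far more sophisticated techniques.

```python
import numpy as np

def naive_inpaint(frame, mask, iters=50):
    """Fill masked pixels with the mean of their 4-neighbours, iterated
    until the values settle. Illustrative stand-in for real repair tools."""
    out = frame.astype(float).copy()
    for _ in range(iters):
        up    = np.roll(out, -1, axis=0)
        down  = np.roll(out,  1, axis=0)
        left  = np.roll(out, -1, axis=1)
        right = np.roll(out,  1, axis=1)
        avg = (up + down + left + right) / 4.0
        out[mask] = avg[mask]      # only masked pixels are rewritten
    return out

frame = np.ones((5, 5))
frame[2, 2] = 0.0                  # a damaged pixel
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True                  # the region to be repaired
fixed = naive_inpaint(frame, mask)
print(fixed[2, 2])                 # 1.0
```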
  • FIG. 3 shows a schematic diagram of an application scenario of the method for repairing video according to the present disclosure.
  • The execution subject can obtain the old movie 301 to be repaired and input it into the category detection model 302 to obtain, for each pixel of each video frame in the old movie 301, probability information output by the category detection model 302 indicating whether the pixel corresponds to a scratch; the pixel category 303 of each pixel is determined based on this probability information.
  • The pixel category 303 is either the category corresponding to scratches or the category corresponding to non-scratches.
  • The execution subject forms the scratch region 304 from all pixels whose pixel category 303 is the category corresponding to scratches, then inputs the scratch region 304 into designated repair software for repair, obtaining the repaired old movie 305.
  • The method for repairing video provided by the above-mentioned embodiments of the present disclosure can use the category detection model to automatically determine the target category corresponding to each pixel in the video frame sequence to be repaired, determine the pixels that need repair based on the target category, and then repair the regions to be repaired corresponding to those pixels. This realizes automatic video repair and can improve the efficiency of video repair.
  • FIG. 4 shows a flow 400 of another embodiment of the method for repairing video according to the present disclosure.
  • the method for repairing video of the present embodiment may comprise the following steps:
  • Step 401 acquire the video frame sequence to be repaired.
  • For step 401, please refer to the detailed description of step 201, which will not be repeated here.
  • Step 402 based on the video frame sequence to be repaired and the preset category detection model, determine inter-frame feature information and intra-frame feature information of the video frame sequence to be repaired.
  • the execution subject may input the video frame sequence to be repaired into a preset category detection model, so that the category detection model extracts inter-frame feature information and intra-frame feature information of the video frame sequence to be repaired.
  • inter-frame feature information refers to associated image features between adjacent video frames
  • intra-frame feature information refers to image features of each video frame.
  • The category detection model may include a time-series convolutional network module. After the video frame sequence to be repaired is input into the category detection model, it may first pass through the time-series convolutional network module to determine the temporal features between video frames, that is, the inter-frame feature information. Then, intra-frame feature information is obtained based on the image features of each video frame to be repaired in the sequence.
  • The time-series convolutional network module can be composed of three-dimensional convolutional layers, among other possible forms.
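A generic sketch of such a module, assuming PyTorch (channel counts, kernel size, and the class itself are illustrative, not the patented architecture): the 3-D convolution kernel spans the time axis, so each output value mixes information from adjacent frames, i.e. inter-frame feature information.

```python
import torch
import torch.nn as nn

class TemporalConvModule(nn.Module):
    """Illustrative time-series convolutional module built from 3-D convs."""
    def __init__(self, in_ch=1, out_ch=8):
        super().__init__()
        # Kernel (3, 3, 3): a window of 3 frames by 3x3 spatial pixels.
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):          # x: (N, C, T, H, W)
        return self.act(self.conv(x))

frames = torch.randn(1, 1, 8, 32, 32)   # 8 grayscale frames
feats = TemporalConvModule()(frames)
print(feats.shape)                      # torch.Size([1, 8, 8, 32, 32])
```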
  • The preset category detection model is trained based on the following steps: acquiring a sample video frame sequence and sample labeling information, where the sample labeling information is used to label the category of each sample pixel in the sample video frame sequence; determining, based on the sample video frame sequence and the model to be trained, the sample inter-frame features and sample intra-frame features of the sample video frame sequence; determining, based on the sample inter-frame features and sample intra-frame features, the sample initial category information of each sample pixel in the sample video frame sequence; weighting the sample initial category information to obtain the sample target category corresponding to each sample pixel in the sample video frame sequence; and adjusting the parameters of the model to be trained based on the sample target categories and sample labeling information until the model to be trained converges, to obtain the trained preset category detection model.
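The training loop described above can be sketched with a deliberately tiny stand-in model (the two-layer network, loss choice, learning rate, and fixed step count are all illustrative assumptions; the real model and convergence criterion are not specified here): the forward pass yields per-pixel predictions, the loss compares them with the sample labeling information, and the optimizer adjusts the parameters.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy per-pixel classifier standing in for the model to be trained.
model = nn.Sequential(
    nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
    nn.Conv2d(4, 1, 3, padding=1),          # one logit per pixel
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

frames = torch.rand(4, 1, 16, 16)           # sample video frames
labels = (frames > 0.8).float()             # sample labeling: to-repair mask

losses = []
for step in range(100):                     # stand-in for "until convergence"
    opt.zero_grad()
    loss = loss_fn(model(frames), labels)   # compare with sample labels
    loss.backward()
    opt.step()                              # adjust model parameters
    losses.append(loss.item())

print(losses[0] > losses[-1])               # loss decreased during training
```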
  • The execution subject can use the pre-repair video frame sequence of an already repaired video as the above-mentioned sample video frame sequence, and then compare the pre-repair and post-repair video frame sequences to obtain the above sample labeling information.
  • The sample labeling information may label only the sample pixels that need to be repaired, with the remaining unlabeled sample pixels being those that do not need repair. Alternatively, it may label only the sample pixels that do not need to be repaired, with the remaining unlabeled sample pixels being those that need repair.
  • The execution subject inputs the sample video frame sequence into the model to be trained, so that the model to be trained determines the sample inter-frame features and sample intra-frame features. The manner of determining them is similar to the manner of determining the inter-frame feature information and intra-frame feature information, and will not be repeated here.
  • The execution subject can use the sample inter-frame features and sample intra-frame features as the input data of the recurrent convolutional neural module in the model to be trained, so that the recurrent convolutional neural module performs feature analysis based on them to obtain the sample initial category information of each sample pixel.
  • The sample initial category information is used to indicate whether each sample pixel belongs to the category to be repaired; its specific form can be the probability that each sample pixel belongs to the category to be repaired, or the probability that it does not.
  • The recurrent convolutional neural module can be composed of multi-layer ConvLSTM (a combination of convolutional neural network and long short-term memory network) or multi-layer ConvGRU (a combination of convolutional neural network and gated recurrent unit).
  • The execution subject can input the sample initial category information into the attention module of the model to be trained, so that the attention module performs weighted processing on it and obtains the sample target category corresponding to each sample pixel in the sample video frame sequence.
  • The execution subject can use the attention module to multiply the probability corresponding to each sample pixel in the initial category information by the corresponding attention weight, and compare the weighted probability with a preset threshold to obtain the sample target category corresponding to each sample pixel. For example, if the weighted probability that a sample pixel belongs to the category to be repaired is greater than the preset threshold, the sample pixel is determined to belong to the category to be repaired.
  • The output data of the model to be trained here can be the weighted probability that a sample pixel belongs to the sample pixels to be repaired, the weighted probability that it does not, the weighted probability that it belongs to normal sample pixels, or the weighted probability that it does not.
  • Alternatively, the probability data weighted by the attention module can be input into an upsampling convolution module to obtain a probability map as the output of the model to be trained.
  • the upsampling convolution module is used to restore the resolution of the feature map corresponding to the probability data to the resolution of the sample video frame.
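One common way to build such an upsampling module, assuming PyTorch, is a stack of transposed convolutions that doubles the spatial resolution at each stage and ends in a sigmoid so each output pixel is a probability. The layer counts and channel sizes below are illustrative, not from this application.

```python
import torch
import torch.nn as nn

# Illustrative upsampling convolution module: restores a low-resolution
# probability feature map to the resolution of the sample video frame.
upsample = nn.Sequential(
    nn.ConvTranspose2d(16, 8, kernel_size=4, stride=2, padding=1),  # x2
    nn.ReLU(),
    nn.ConvTranspose2d(8, 1, kernel_size=4, stride=2, padding=1),   # x2
    nn.Sigmoid(),                     # per-pixel probability in [0, 1]
)

low_res = torch.randn(1, 16, 8, 8)    # features at 1/4 frame resolution
prob_map = upsample(low_res)
print(prob_map.shape)                 # torch.Size([1, 1, 32, 32])
```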
  • Determining the sample initial category information of each sample pixel in the sample video frame sequence includes: performing a convolution operation on the sample inter-frame features and sample intra-frame features to obtain sample convolution features; and determining, based on the sample convolution features, the sample initial category information of each sample pixel in the sample video frame sequence.
  • After the execution subject obtains the sample inter-frame features and sample intra-frame features, it can perform a convolution operation on them, such as a two-dimensional convolution operation, to obtain the sample convolution features, and the above-mentioned sample initial category information is then determined based on the sample convolution features.
  • This process uses a convolution operation to reduce the resolution of the features, which can improve the speed of model training.
  • Step 403 based on the inter-frame feature information and intra-frame feature information, determine the initial category information corresponding to each pixel in the sequence of video frames to be repaired.
  • The execution subject can input the obtained inter-frame feature information and intra-frame feature information into the recurrent convolutional neural module of the category detection model, so that the recurrent convolutional neural module outputs the initial category information.
  • For the initial category information, please refer to the detailed description of the sample initial category information, which will not be repeated here.
  • For determining the initial category information corresponding to each pixel in the video frame sequence to be repaired based on inter-frame feature information and intra-frame feature information, please also refer to the detailed description of determining the sample initial category information of each sample pixel based on the sample inter-frame features and sample intra-frame features, which will not be repeated here.
  • Determining the initial category information corresponding to each pixel in the video frame sequence to be repaired includes: performing a convolution operation on the inter-frame feature information and intra-frame feature information to obtain the feature information after the convolution operation; and determining, based on the feature information after the convolution operation, the initial category information corresponding to each pixel in the video frame sequence to be repaired.
  • For this step, please refer to the detailed description of performing a convolution operation on the sample inter-frame features and sample intra-frame features to obtain the sample convolution features, and of determining the sample initial category information of each sample pixel in the sample video frame sequence based on the sample convolution features, which will not be repeated here.
  • The convolution operation can reduce the resolution of the inter-frame feature information and intra-frame feature information, which can improve the speed of determining the initial category information.
  • Step 404 weighting the initial category information to obtain the target category corresponding to each pixel in the sequence of video frames to be repaired.
  • For step 404, please also refer to the detailed description of weighting the sample initial category information to obtain the sample target category corresponding to each sample pixel in the sample video frame sequence, which will not be repeated here.
  • Step 405 determine the pixels to be repaired whose target category is the category to be repaired from the sequence of video frames to be repaired.
  • For step 405, please also refer to the detailed description of step 203, which will not be repeated here.
  • Step 406 Determine the area to be repaired based on the position information of the pixels to be repaired.
  • the execution subject may acquire the position coordinates of the pixels to be repaired, and determine the area to be repaired based on the area enclosed by each position coordinate.
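Grouping individual to-repair pixels into regions by their position coordinates can be illustrated with SciPy's connected-component labeling; each component's bounding box then delimits one region to be repaired. The library choice and the toy mask are illustrative.

```python
import numpy as np
from scipy import ndimage

mask = np.zeros((6, 6), dtype=bool)   # per-pixel to-repair mask
mask[1, 1:4] = True                   # a horizontal scratch
mask[4, 4] = True                     # an isolated noise point

# Group connected to-repair pixels into regions, then take each
# region's bounding box from its pixel position coordinates.
labeled, n_regions = ndimage.label(mask)
boxes = ndimage.find_objects(labeled)  # one (row-slice, col-slice) per region

print(n_regions)                       # 2
for sl in boxes:
    print(sl)                          # e.g. (slice(1, 2, None), slice(1, 4, None))
```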
  • Step 407 based on the preset repair software, repair the area to be repaired to obtain the target video frame sequence.
  • The preset repair software can be any of various existing software for repairing regions to be repaired. The execution subject can mark the regions to be repaired in the video frame sequence to be repaired, and import the marked video frame sequence into the preset repair software, so that the preset repair software repairs the regions to be repaired and obtains the target video frame sequence.
  • the method for repairing video provided by the above embodiments of the present disclosure can also determine the category of pixels based on the inter-frame feature information and intra-frame feature information of the video frame sequence to be repaired, which improves the accuracy of determining the pixel category.
  • the initial category information can be obtained first, and then the initial category information can be weighted to obtain the target category, which can further improve the accuracy of category information determination.
  • the area to be repaired is determined, and the preset repair software is used to repair the area to be repaired, which can realize automatic video repair and improve the efficiency of video repair.
  • the present disclosure provides an embodiment of a device for repairing video, which corresponds to the method embodiment shown in FIG. 2 .
  • the device can be specifically applied to various servers or terminal devices.
  • the apparatus 500 for repairing video in this embodiment includes: a video acquisition unit 501 , a category determination unit 502 , a pixel determination unit 503 , and a video repair unit 504 .
  • the video acquisition unit 501 is configured to acquire a sequence of video frames to be repaired.
  • the class determining unit 502 is configured to determine the target class corresponding to each pixel in the video frame sequence to be repaired based on the video frame sequence to be repaired and a preset class detection model.
  • the pixel determining unit 503 is configured to determine pixels to be repaired whose target category is the category to be repaired from the sequence of video frames to be repaired.
  • the video restoration unit 504 is configured to perform restoration processing on the area to be repaired corresponding to the pixel to be repaired, to obtain a sequence of target video frames.
  • The category determining unit 502 is further configured to: determine the inter-frame feature information and intra-frame feature information of the video frame sequence to be repaired based on the video frame sequence to be repaired and the preset category detection model; determine, based on the inter-frame feature information and intra-frame feature information, the initial category information corresponding to each pixel in the video frame sequence to be repaired; and weight the initial category information to obtain the target category corresponding to each pixel in the video frame sequence to be repaired.
  • The category determining unit 502 is further configured to: perform a convolution operation on the inter-frame feature information and intra-frame feature information to obtain the feature information after the convolution operation; and determine, based on the feature information after the convolution operation, the initial category information corresponding to each pixel in the video frame sequence to be repaired.
  • the above-mentioned apparatus further includes a model training unit configured to: acquire a sample video frame sequence and sample labeling information, the sample labeling information being used to label the category of each sample pixel in the sample video frame sequence; determine sample inter-frame features and sample intra-frame features of the sample video frame sequence based on the sample video frame sequence and a model to be trained; determine sample initial category information of each sample pixel in the sample video frame sequence based on the sample inter-frame features and the sample intra-frame features; perform weighting processing on the sample initial category information to obtain a sample target category corresponding to each sample pixel in the sample video frame sequence; and adjust parameters of the model to be trained based on the sample target categories and the sample labeling information until the model to be trained converges, obtaining the trained preset category detection model.
  • the target category includes a category to be repaired and a normal category; and the category determining unit 502 is further configured to: input the video frame sequence to be repaired into the preset category detection model to obtain a probability value image, output by the preset category detection model, for each video frame to be repaired in the sequence, the probability value image representing the probability that each pixel in the video frame belongs to the category to be repaired; and determine the target category corresponding to each pixel in the video frame sequence to be repaired based on the probability value images and a preset probability threshold.
  • the video repair unit 504 is further configured to: determine the area to be repaired based on the position information of the pixels to be repaired; and perform repair processing on the area to be repaired based on preset repair software to obtain the target video frame sequence.
  • the units 501 to 504 described in the apparatus 500 for repairing video correspond respectively to the steps of the method described with reference to FIG. 2 . Therefore, the operations and features described above for the method for repairing video also apply to the apparatus 500 and the units contained therein, and are not repeated here.
  • the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
  • FIG. 6 shows a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure.
  • Electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers.
  • Electronic devices may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smart phones, wearable devices, and other similar computing devices.
  • the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or loaded from a storage unit 608 into a random-access memory (RAM) 603. The RAM 603 may also store various programs and data necessary for the operation of the device 600.
  • the computing unit 601, ROM 602, and RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604 .
  • multiple components in the device 600 are connected to the I/O interface 605, including: an input unit 606, such as a keyboard or a mouse; an output unit 607, such as various types of displays and speakers; a storage unit 608, such as a magnetic disk or an optical disk; and a communication unit 609, such as a network card, a modem, or a wireless communication transceiver.
  • the communication unit 609 allows the device 600 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
  • the computing unit 601 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, microcontrollers, and the like.
  • the computing unit 601 executes the various methods and processes described above, such as the method for repairing a video. For example, in some embodiments, the method for repairing a video may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608 .
  • part or all of the computer program may be loaded and/or installed on the device 600 via the ROM 602 and/or the communication unit 609.
  • when the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the method for repairing video described above may be performed.
  • alternatively, in other embodiments, the computing unit 601 may be configured in any other suitable way (for example, by means of firmware) to execute the method for repairing a video.
  • Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
  • these various implementations may include implementation in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
  • Program codes for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing devices, so that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • the program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • more specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • to provide interaction with a user, the systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer.
  • other kinds of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form (including acoustic, speech, or tactile input).
  • the systems and techniques described herein can be implemented in a computing system that includes a back-end component (e.g., as a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a front-end component (e.g., a user computer having a graphical user interface or web browser through which the user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, eg, a communication network. Examples of communication networks include: Local Area Network (LAN), Wide Area Network (WAN) and the Internet.
  • a computer system may include clients and servers.
  • Clients and servers are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.
  • the server can be a cloud server, a server of a distributed system, or a server combined with a blockchain.
  • it should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above.
  • for example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired results of the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.


Abstract

The present disclosure provides a method, apparatus, device, medium, and product for repairing video, relating to the field of artificial intelligence, in particular to computer vision and deep learning technology, and specifically applicable to image inpainting scenarios. A specific implementation scheme is: acquiring a video frame sequence to be repaired; determining, based on the video frame sequence to be repaired and a preset category detection model, a target category corresponding to each pixel in the video frame sequence to be repaired; determining, from the video frame sequence to be repaired, pixels to be repaired whose target category is a category to be repaired; and performing repair processing on areas to be repaired corresponding to the pixels to be repaired to obtain a target video frame sequence. This implementation can improve video repair efficiency.

Description

Method, Apparatus, Device, Medium, and Product for Repairing Video
Cross-Reference
This patent application claims priority to Chinese Patent Application No. 202110717424.X, filed on June 28, 2021, entitled "Method, Apparatus, Device, Medium, and Product for Repairing Video", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of artificial intelligence technology, in particular to computer vision and deep learning technology, and is specifically applicable to image inpainting scenarios.
Background
At present, old films are usually shot and archived on film stock; old films therefore place high demands on the storage environment.
In reality, however, storage environments rarely reach ideal conditions, which leads to scratches, dirt spots, noise, and other defects in old films. To ensure that old films play back clearly, these defects need to be repaired. The current repair approach relies on experienced technicians manually marking problem areas frame by frame and region by region, and then repairing the marked areas. Manual repair, however, is inefficient.
Summary
The present disclosure provides a method, apparatus, device, medium, and product for repairing video.
According to one aspect of the present disclosure, a method for repairing video is provided, including: acquiring a video frame sequence to be repaired; determining, based on the video frame sequence to be repaired and a preset category detection model, a target category corresponding to each pixel in the video frame sequence to be repaired; determining, from the video frame sequence to be repaired, pixels to be repaired whose target category is a category to be repaired; and performing repair processing on areas to be repaired corresponding to the pixels to be repaired to obtain a target video frame sequence.
According to another aspect of the present disclosure, an apparatus for repairing video is provided, including: a video acquisition unit configured to acquire a video frame sequence to be repaired; a category determination unit configured to determine, based on the video frame sequence to be repaired and a preset category detection model, a target category corresponding to each pixel in the video frame sequence to be repaired; a pixel determination unit configured to determine, from the video frame sequence to be repaired, pixels to be repaired whose target category is a category to be repaired; and a video repair unit configured to perform repair processing on areas to be repaired corresponding to the pixels to be repaired to obtain a target video frame sequence.
According to another aspect of the present disclosure, an electronic device is provided, including: one or more processors; and a memory for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the above methods for repairing video.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium storing computer instructions is provided, where the computer instructions are used to cause a computer to perform any of the above methods for repairing video.
According to another aspect of the present disclosure, a computer program product is provided, including a computer program that, when executed by a processor, implements any of the above methods for repairing video.
According to the technology of the present disclosure, a method for repairing video is provided that can improve video repair efficiency.
It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become easy to understand through the following description.
Brief Description of the Drawings
The accompanying drawings are used for a better understanding of this solution and do not constitute a limitation of the present disclosure, in which:
FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure may be applied;
FIG. 2 is a flowchart of an embodiment of the method for repairing video according to the present disclosure;
FIG. 3 is a schematic diagram of an application scenario of the method for repairing video according to the present disclosure;
FIG. 4 is a flowchart of another embodiment of the method for repairing video according to the present disclosure;
FIG. 5 is a schematic structural diagram of an embodiment of the apparatus for repairing video according to the present disclosure;
FIG. 6 is a block diagram of an electronic device used to implement the method for repairing video of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, including various details of the embodiments to facilitate understanding; they should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and structures are omitted from the following description for clarity and conciseness.
It should be noted that the embodiments of the present disclosure and the features in the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like. The terminal devices 101, 102, 103 may be electronic devices such as mobile phones, computers, and tablets. Software for repairing video is installed on the terminal devices 101, 102, 103; the user may input a video to be repaired, such as an old film, into this software, and the software may output the repaired video, such as the restored old film.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices, including but not limited to televisions, smartphones, tablets, e-book readers, in-vehicle computers, laptop computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above. They may be implemented as multiple software programs or software modules (for example, to provide distributed services) or as a single software program or software module. No specific limitation is imposed here.
The server 105 may be a server providing various services. For example, after the terminal devices 101, 102, 103 obtain the video frame sequence to be repaired input by the user, the server 105 may input the video frame sequence to be repaired into a preset category detection model to obtain the target category corresponding to each pixel in the sequence. Pixels whose target category is the category to be repaired are then determined as pixels to be repaired. By performing repair processing on the areas to be repaired corresponding to the pixels to be repaired, the target video frame sequence, i.e., the repaired video, can be obtained and sent to the terminal devices 101, 102, 103.
It should be noted that the server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server 105 is software, it may be implemented as multiple software programs or software modules (for example, to provide distributed services) or as a single software program or software module. No specific limitation is imposed here.
It should be noted that the method for repairing video provided by the embodiments of the present disclosure may be executed by the terminal devices 101, 102, 103 or by the server 105. Accordingly, the apparatus for repairing video may be disposed in the terminal devices 101, 102, 103 or in the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
Continuing to refer to FIG. 2, a flow 200 of an embodiment of the method for repairing video according to the present disclosure is shown. The method for repairing video of this embodiment includes the following steps:
Step 201: acquire a video frame sequence to be repaired.
In this embodiment, the execution subject (such as the server 105 or the terminal devices 101, 102, 103 in FIG. 1) may acquire the video frame sequence to be repaired from locally stored data, from other electronic devices with which a connection has been established, or from the network; this embodiment imposes no limitation on the source. The video frame sequence to be repaired refers to the sequence composed of the video frames contained in the target video that needs video repair. Optionally, when acquiring the video frame sequence to be repaired, the execution subject may first perform a preliminary screening of the video frames contained in the target video to identify the frames that may need repair, and compose the above-mentioned video frame sequence to be repaired from them. For example, image recognition is performed on each video frame contained in the target video; in response to determining that a target to be repaired exists in a video frame, the video frame is determined as a candidate video frame, and the video frame sequence to be repaired is generated based on the candidate video frames. Existing image recognition techniques may be used here; the purpose is to recognize targets to be repaired, such as scratches and noise, in the image.
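The optional preliminary screening described above can be sketched as follows. This is a minimal illustration, not the disclosure's method: `detect_defect_score` is a hypothetical stand-in for whatever image-recognition routine scores a frame for scratches or noise, and the threshold value is an assumption.

```python
import numpy as np

def detect_defect_score(frame: np.ndarray) -> float:
    # Hypothetical defect detector: a crude proxy that measures the
    # fraction of near-white pixels (film scratches are often bright).
    return float(np.mean(frame > 240))

def screen_frames(frames, threshold=0.01):
    """Keep only frames whose defect score exceeds the threshold, forming
    the candidate 'video frame sequence to be repaired'."""
    return [f for f in frames if detect_defect_score(f) > threshold]

# Toy example: one clean frame, one frame with a bright scratch column.
clean = np.zeros((8, 8), dtype=np.uint8)
scratched = clean.copy()
scratched[:, 3] = 255  # simulated vertical scratch
candidates = screen_frames([clean, scratched])
print(len(candidates))  # 1: only the scratched frame is kept
```

Any per-frame scoring routine with the same signature could be substituted; the screening only assumes a scalar score per frame.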
Step 202: determine, based on the video frame sequence to be repaired and a preset category detection model, a target category corresponding to each pixel in the video frame sequence to be repaired.
In this embodiment, the preset category detection model is used to detect whether each pixel in each video frame to be repaired belongs to the pixels to be repaired, where a pixel to be repaired refers to a pixel of a target to be repaired in the video frame. Targets to be repaired may include, but are not limited to, scratches, dirt spots, noise, and the like, which this embodiment does not limit. To detect whether a pixel is a pixel to be repaired, the output data of the preset category detection model may be the probability that the pixel is a pixel to be repaired, the probability that it is not a pixel to be repaired, the probability that it is a normal pixel, the probability that it is not a normal pixel, etc.; this embodiment does not limit the form. The form of the output data can be configured accordingly during the training stage of the category detection model. After obtaining the output data produced by the preset category detection model for the video frame sequence to be repaired, the execution subject may analyze the output data to determine the target category corresponding to each pixel in the sequence. The target categories include a category requiring repair, i.e., the category to be repaired, and may also include a category requiring no repair, i.e., the normal category. Optionally, the target categories may further include a pending category, i.e., a category that is difficult to judge accurately from the output data. Pixels of this pending category can be labeled and output so that relevant personnel can judge them manually, improving the accuracy of determining the areas that need repair.
In some optional implementations of this embodiment, the target categories include the category to be repaired and the normal category; and determining the target category corresponding to each pixel based on the video frame sequence to be repaired and the preset category detection model includes: inputting the video frame sequence to be repaired into the preset category detection model to obtain a probability value image, output by the model, for each video frame to be repaired in the sequence, where the probability value image represents the probability that each pixel in the frame belongs to the category to be repaired; and determining the target category corresponding to each pixel based on the probability value images and a preset probability threshold.
In this implementation, the category to be repaired refers to a category that needs repair, and the normal category refers to a category that does not. When determining the target category corresponding to each pixel based on the video frame sequence to be repaired and the preset category detection model, the execution subject may first input the video frame sequence into the preset category detection model to obtain the probability value images output by the model. Each video frame to be repaired may correspond to one probability value image, which represents the probability that each pixel in that frame belongs to the category to be repaired. The execution subject may set a probability threshold in advance; by comparing each pixel's probability of belonging to the category to be repaired against the preset probability threshold, each pixel can be determined to belong to either the category to be repaired or the normal category. For example, for each pixel's probability of belonging to the category to be repaired: in response to determining that the probability is greater than the preset probability threshold, the pixel is determined to belong to the category to be repaired; in response to determining that the probability is less than or equal to the preset probability threshold, the pixel is determined to belong to the normal category.
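Assuming the model's probability value image arrives as a per-pixel array in [0, 1], the threshold comparison described above reduces to a single mask operation; the threshold value 0.5 below is an illustrative choice, not one fixed by the disclosure:

```python
import numpy as np

def target_categories(prob_image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return a boolean mask: True = category to be repaired, False = normal.
    A pixel is 'to be repaired' only if its probability strictly exceeds
    the preset threshold, matching the > / <= split in the text."""
    return prob_image > threshold

prob = np.array([[0.1, 0.9],
                 [0.5, 0.7]])
mask = target_categories(prob)
print(int(mask.sum()))  # 2 pixels flagged (0.9 and 0.7; 0.5 is not > 0.5)
```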
Step 203: determine, from the video frame sequence to be repaired, pixels to be repaired whose target category is the category to be repaired.
In this embodiment, the execution subject may determine the pixels whose target category is the category to be repaired as the pixels to be repaired. Alternatively, the execution subject may remove the pixels whose target category is the normal category from all pixels and determine the remaining pixels as the pixels to be repaired.
Step 204: perform repair processing on the areas to be repaired corresponding to the pixels to be repaired to obtain a target video frame sequence.
In this embodiment, the execution subject may determine the areas to be repaired based on the pixels to be repaired; an area to be repaired is composed of pixels to be repaired. By performing repair processing on the areas to be repaired, the target video frame sequence can be obtained. Existing repair techniques may be used here; for example, various existing video repair software can be used to repair these areas and obtain the target video frame sequence.
Continuing to refer to FIG. 3, a schematic diagram of an application scenario of the method for repairing video according to the present disclosure is shown. In the application scenario of FIG. 3, the execution subject may acquire an old film 301 to be repaired and input it into a category detection model 302 to obtain, for each pixel of each video frame of the old film 301, probability information output by the category detection model 302 indicating that the pixel belongs to a scratch, and determine the pixel category 303 of each pixel based on this probability information. The pixel categories 303 are the scratch category and the non-scratch category. The execution subject composes all pixels whose pixel category 303 is the scratch category into a scratch area 304, and then inputs the scratch area 304 into designated repair software for repair, obtaining the repaired old film 305.
The method for repairing video provided by the above embodiment of the present disclosure can use a category detection model to automatically determine the target category corresponding to each pixel in the video frame sequence to be repaired, determine the pixels to be repaired based on the target categories, and then perform repair processing on the corresponding areas to be repaired, realizing automated video repair and improving video repair efficiency.
Continuing to refer to FIG. 4, a flow 400 of another embodiment of the method for repairing video according to the present disclosure is shown. As shown in FIG. 4, the method for repairing video of this embodiment may include the following steps:
Step 401: acquire a video frame sequence to be repaired.
In this embodiment, for a detailed description of step 401, refer to the detailed description of step 201, which is not repeated here.
Step 402: determine inter-frame feature information and intra-frame feature information of the video frame sequence to be repaired based on the video frame sequence to be repaired and the preset category detection model.
In this embodiment, the execution subject may input the video frame sequence to be repaired into the preset category detection model so that the model extracts the inter-frame feature information and intra-frame feature information of the sequence. Inter-frame feature information refers to correlated image features between adjacent video frames, and intra-frame feature information refers to the image features of each individual video frame. Optionally, the category detection model may include a temporal convolutional network module; after the video frame sequence to be repaired is input into the model, the sequence may first pass through the temporal convolutional network module to determine the temporal features between video frames, i.e., the inter-frame feature information. The intra-frame feature information is then obtained based on the image features of each video frame in the sequence. The temporal convolutional network module may be composed of three-dimensional convolution layers or the like.
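The inter-frame feature idea above can be sketched with the simplest possible temporal convolution. This is an assumption-laden illustration, not the patent's module: real temporal convolutional networks use learned 3-D kernels, whereas the hand-picked kernel `[-1, 1]` here merely responds to frame-to-frame change.

```python
import numpy as np

def temporal_conv(frames: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Minimal 1-D convolution along the time axis of a (T, H, W) clip.
    Each output frame t is sum_j kernel[j] * frames[t + j]."""
    T = frames.shape[0]
    k = len(kernel)
    return np.stack([
        sum(kernel[j] * frames[t + j] for j in range(k))
        for t in range(T - k + 1)
    ])

clip = np.zeros((3, 4, 4))
clip[1, 2, 2] = 1.0  # a transient bright pixel (e.g., dirt) in the middle frame
diff = temporal_conv(clip, np.array([-1.0, 1.0]))
print(diff.shape)     # (2, 4, 4)
print(diff[0, 2, 2])  # 1.0: the defect appears between frames 0 and 1
```

Transient defects such as dirt spots light up strongly under such a kernel because they exist in one frame but not its neighbors, which is exactly the cue an inter-frame feature extractor exploits.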
In some optional implementations of this embodiment, the preset category detection model is trained by the following steps: acquiring a sample video frame sequence and sample labeling information, the sample labeling information being used to label the category of each sample pixel in the sample video frame sequence; determining sample inter-frame features and sample intra-frame features of the sample video frame sequence based on the sample video frame sequence and a model to be trained; determining sample initial category information of each sample pixel in the sample video frame sequence based on the sample inter-frame features and the sample intra-frame features; performing weighting processing on the sample initial category information to obtain a sample target category corresponding to each sample pixel in the sample video frame sequence; and adjusting parameters of the model to be trained based on the sample target categories and the sample labeling information until the model to be trained converges, obtaining the trained preset category detection model.
In this implementation, the execution subject may take the pre-repair video frame sequence of an already repaired video as the above sample video frame sequence, and then, for the repaired video, compare the pre-repair video frame sequence against the post-repair video frame sequence to obtain the above sample labeling information. Determining the sample video frame sequence and sample labeling information in this way requires no manual labeling, making model training more efficient. The sample labeling information may label only the sample pixels that need repair, the remaining unlabeled sample pixels being those that do not need repair; alternatively, only the sample pixels that do not need repair may be labeled, the remaining sample pixels being those that need repair. Further, the execution subject inputs the sample video frame sequence into the model to be trained so that the model determines the sample inter-frame features and sample intra-frame features. The way these are determined is similar to the way the inter-frame feature information and intra-frame feature information are determined, and is not repeated here.
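The label-free annotation scheme described above amounts to diffing each pre-repair frame against its repaired counterpart. A sketch under assumptions: the tolerance value is invented here, since exact equality would be too strict once encoding noise or film grain is involved.

```python
import numpy as np

def auto_label(pre: np.ndarray, post: np.ndarray, tol: float = 8.0) -> np.ndarray:
    """Label a sample pixel as 'to be repaired' (True) wherever the repaired
    frame differs from the pre-repair frame beyond a small tolerance."""
    return np.abs(pre.astype(np.int16) - post.astype(np.int16)) > tol

pre = np.full((4, 4), 100, dtype=np.uint8)
post = pre.copy()
post[1, 1] = 30  # the restorer changed this pixel (a blotch was removed)
labels = auto_label(pre, post)
print(int(labels.sum()))  # 1 labeled sample pixel
```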
Afterwards, the execution subject may use the sample inter-frame features and sample intra-frame features as input data for a recurrent convolutional neural module in the model to be trained, so that the recurrent convolutional neural module performs feature analysis on the sample inter-frame features and sample intra-frame features and obtains the sample initial category information of each sample pixel. The sample initial category information indicates whether each sample pixel belongs to the category to be repaired; its concrete form may be the probability that each sample pixel belongs to the category to be repaired, the probability that it does not belong to the category to be repaired, the probability that it belongs to the normal category, the probability that it does not belong to the normal category, etc.; this embodiment does not limit the form. The recurrent convolutional neural module may be composed of multiple layers of convLSTM (a combination of a convolutional neural network and a long short-term memory network) or multiple layers of convGRU (a combination of a convolutional neural network and a gated recurrent unit).
Afterwards, the execution subject may input the initial category information into an attention module of the model to be trained, so that the attention module performs weighting processing on the sample initial category information to obtain the sample target category corresponding to each sample pixel in the sample video frame sequence. Specifically, the execution subject may use the attention module to multiply the probability corresponding to each sample pixel in the initial category information by a corresponding weighting factor, and compare the weighted probability against a preset threshold to obtain the sample target category of each sample pixel. For example, if a sample pixel's weighted probability of belonging to the category to be repaired is greater than the preset threshold, the sample pixel is determined to belong to the category to be repaired. Here the output data of the model to be trained may be the weighted probability that a sample pixel belongs to the sample pixels to be repaired, the weighted probability that it does not belong to the sample pixels to be repaired, the weighted probability that it belongs to the normal sample pixels, or the weighted probability that it does not belong to the normal sample pixels. The sample target category of each sample pixel is judged based on the output data of the model to be trained, and the parameters of the model to be trained are then adjusted based on the sample target categories and the sample labeling information until the model converges, completing the training of the category detection model. Optionally, the output data of the model to be trained may also be a probability map obtained by feeding the probability data weighted by the attention module into an up-sampling convolution module, where the up-sampling convolution module restores the resolution of the feature map corresponding to the probability data to the resolution of the sample video frames.
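The weight-then-threshold step above can be sketched as follows. In a real model the per-pixel weights would be produced by a learned attention module, so the values here are stand-ins:

```python
import numpy as np

def weighted_target_category(probs: np.ndarray,
                             attn_weights: np.ndarray,
                             threshold: float = 0.5) -> np.ndarray:
    """Multiply each pixel's 'to be repaired' probability by its attention
    weight, then threshold the weighted value to pick the target category."""
    return (probs * attn_weights) > threshold

probs = np.array([0.6, 0.6, 0.9])
attn = np.array([0.5, 1.2, 1.0])  # attention can suppress or amplify a pixel
cats = weighted_target_category(probs, attn)
print(cats.tolist())  # [False, True, True]
```

Note how the first two pixels share the same raw probability but end up in different categories, which is the point of the weighting: context can override the raw per-pixel score.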
In other optional implementations of this embodiment, determining the sample initial category information of each sample pixel in the sample video frame sequence based on the sample inter-frame features and sample intra-frame features includes: performing a convolution operation on the sample inter-frame features and the sample intra-frame features to obtain sample convolution features; and determining the sample initial category information of each sample pixel in the sample video frame sequence based on the sample convolution features.
In this implementation, after obtaining the sample inter-frame features and sample intra-frame features, the execution subject may perform a convolution operation on them, such as a two-dimensional convolution operation, to obtain the sample convolution features, and determine the above sample initial category information based on the sample convolution features. This process can use the convolution operation to reduce the resolution of the features, which can improve model training speed.
Step 403: determine initial category information corresponding to each pixel in the video frame sequence to be repaired based on the inter-frame feature information and the intra-frame feature information.
In this embodiment, at the application stage of the category detection model, based on the same principle as the training stage, the execution subject may input the obtained inter-frame feature information and intra-frame feature information into the recurrent convolutional neural module of the category detection model so that the module outputs the initial category information. For a detailed description of the initial category information, refer to the detailed description of the sample initial category information, which is not repeated here. For a detailed description of determining the initial category information corresponding to each pixel based on the inter-frame and intra-frame feature information, also refer to the detailed description of determining the sample initial category information of each sample pixel based on the sample inter-frame and sample intra-frame features, which is not repeated here.
In some optional implementations of this embodiment, determining the initial category information corresponding to each pixel based on the inter-frame feature information and the intra-frame feature information includes: performing a convolution operation on the inter-frame feature information and the intra-frame feature information to obtain convolved feature information; and determining the initial category information corresponding to each pixel in the video frame sequence to be repaired based on the convolved feature information.
In this implementation, for a detailed description of the above steps, refer to the detailed description of performing a convolution operation on the sample inter-frame and sample intra-frame features to obtain sample convolution features and determining the sample initial category information based on them, which is not repeated here. Using a convolution operation can reduce the resolution of the inter-frame and intra-frame feature information, speeding up the determination of the initial category information.
Step 404: perform weighting processing on the initial category information to obtain the target category corresponding to each pixel in the video frame sequence to be repaired.
In this embodiment, for a detailed description of step 404, refer to the detailed description of performing weighting processing on the sample initial category information to obtain the sample target category of each sample pixel, which is not repeated here.
Step 405: determine, from the video frame sequence to be repaired, pixels to be repaired whose target category is the category to be repaired.
In this embodiment, for a detailed description of step 405, refer to the detailed description of step 203, which is not repeated here.
Step 406: determine the areas to be repaired based on the position information of the pixels to be repaired.
In this embodiment, the execution subject may acquire the position coordinates of the pixels to be repaired and determine the areas to be repaired based on the regions enclosed by these position coordinates.
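A minimal way to turn the flagged pixel coordinates into a repair region is the bounding box of those pixels; real pipelines might instead use connected components or a tighter polygon, which the disclosure leaves open.

```python
import numpy as np

def repair_region(mask: np.ndarray):
    """Return the (top, left, bottom, right) bounding box enclosing all
    pixels flagged 'to be repaired' in a boolean mask, or None if empty."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())

mask = np.zeros((6, 6), dtype=bool)
mask[2, 1] = mask[3, 4] = True
print(repair_region(mask))  # (2, 1, 3, 4)
```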
Step 407: perform repair processing on the areas to be repaired based on preset repair software to obtain the target video frame sequence.
In this embodiment, the preset repair software may be any of various existing software programs for repairing the areas to be repaired. The execution subject may mark the areas to be repaired on the video frame sequence to be repaired and import the marked sequence into the preset repair software, so that the preset repair software performs repair processing on the areas to be repaired and obtains the target video frame sequence.
The method for repairing video provided by the above embodiment of the present disclosure can further determine pixel categories based on the inter-frame feature information and intra-frame feature information of the video frame sequence to be repaired, improving the accuracy of pixel category determination. It can also first obtain initial category information and then perform weighting processing on it to obtain the target categories, further improving the accuracy of category determination. And by determining the areas to be repaired based on the position information of the pixels to be repaired and repairing them with preset repair software, automated video repair can be realized, improving video repair efficiency.
The specific implementations described above do not limit the scope of protection of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present disclosure shall be included within the scope of protection of the present disclosure.

Claims (17)

  1. A method for repairing video, comprising:
    acquiring a video frame sequence to be repaired;
    determining, based on the video frame sequence to be repaired and a preset category detection model, a target category corresponding to each pixel in the video frame sequence to be repaired;
    determining, from the video frame sequence to be repaired, pixels to be repaired whose target category is a category to be repaired; and
    performing repair processing on areas to be repaired corresponding to the pixels to be repaired to obtain a target video frame sequence.
  2. The method according to claim 1, wherein the determining, based on the video frame sequence to be repaired and a preset category detection model, a target category corresponding to each pixel in the video frame sequence to be repaired comprises:
    determining inter-frame feature information and intra-frame feature information of the video frame sequence to be repaired based on the video frame sequence to be repaired and the preset category detection model;
    determining initial category information corresponding to each pixel in the video frame sequence to be repaired based on the inter-frame feature information and the intra-frame feature information; and
    performing weighting processing on the initial category information to obtain the target category corresponding to each pixel in the video frame sequence to be repaired.
  3. The method according to claim 2, wherein the determining initial category information corresponding to each pixel in the video frame sequence to be repaired based on the inter-frame feature information and the intra-frame feature information comprises:
    performing a convolution operation on the inter-frame feature information and the intra-frame feature information to obtain convolved feature information; and
    determining, based on the convolved feature information, the initial category information corresponding to each pixel in the video frame sequence to be repaired.
  4. The method according to any one of claims 1-3, wherein the preset category detection model is trained by the following steps:
    acquiring a sample video frame sequence and sample labeling information, the sample labeling information being used to label the category of each sample pixel in the sample video frame sequence;
    determining sample inter-frame features and sample intra-frame features of the sample video frame sequence based on the sample video frame sequence and a model to be trained;
    determining sample initial category information of each sample pixel in the sample video frame sequence based on the sample inter-frame features and the sample intra-frame features;
    performing weighting processing on the sample initial category information to obtain a sample target category corresponding to each sample pixel in the sample video frame sequence; and
    adjusting parameters of the model to be trained based on the sample target categories and the sample labeling information until the model to be trained converges, obtaining the trained preset category detection model.
  5. The method according to claim 4, wherein the determining sample initial category information of each sample pixel in the sample video frame sequence based on the sample inter-frame features and the sample intra-frame features comprises:
    performing a convolution operation on the sample inter-frame features and the sample intra-frame features to obtain sample convolution features; and
    determining the sample initial category information of each sample pixel in the sample video frame sequence based on the sample convolution features.
  6. The method according to any one of claims 1-5, wherein the target category includes the category to be repaired and a normal category; and
    the determining, based on the video frame sequence to be repaired and a preset category detection model, a target category corresponding to each pixel in the video frame sequence to be repaired comprises:
    inputting the video frame sequence to be repaired into the preset category detection model to obtain a probability value image, output by the preset category detection model, of each video frame to be repaired in the video frame sequence to be repaired, the probability value image representing the probability that each pixel in each video frame to be repaired belongs to the category to be repaired; and
    determining the target category corresponding to each pixel in the video frame sequence to be repaired based on the probability value images and a preset probability threshold.
  7. The method according to any one of claims 1-6, wherein the performing repair processing on areas to be repaired corresponding to the pixels to be repaired to obtain a target video frame sequence comprises:
    determining the areas to be repaired based on position information of the pixels to be repaired; and
    performing repair processing on the areas to be repaired based on preset repair software to obtain the target video frame sequence.
  8. An apparatus for repairing video, comprising:
    a video acquisition unit configured to acquire a video frame sequence to be repaired;
    a category determination unit configured to determine, based on the video frame sequence to be repaired and a preset category detection model, a target category corresponding to each pixel in the video frame sequence to be repaired;
    a pixel determination unit configured to determine, from the video frame sequence to be repaired, pixels to be repaired whose target category is a category to be repaired; and
    a video repair unit configured to perform repair processing on areas to be repaired corresponding to the pixels to be repaired to obtain a target video frame sequence.
  9. The apparatus according to claim 8, wherein the category determination unit is further configured to:
    determine inter-frame feature information and intra-frame feature information of the video frame sequence to be repaired based on the video frame sequence to be repaired and the preset category detection model;
    determine initial category information corresponding to each pixel in the video frame sequence to be repaired based on the inter-frame feature information and the intra-frame feature information; and
    perform weighting processing on the initial category information to obtain the target category corresponding to each pixel in the video frame sequence to be repaired.
  10. The apparatus according to claim 9, wherein the category determination unit is further configured to:
    perform a convolution operation on the inter-frame feature information and the intra-frame feature information to obtain convolved feature information; and
    determine, based on the convolved feature information, the initial category information corresponding to each pixel in the video frame sequence to be repaired.
  11. The apparatus according to any one of claims 8-10, further comprising:
    a model training unit configured to: acquire a sample video frame sequence and sample labeling information, the sample labeling information being used to label the category of each sample pixel in the sample video frame sequence; determine sample inter-frame features and sample intra-frame features of the sample video frame sequence based on the sample video frame sequence and a model to be trained; determine sample initial category information of each sample pixel in the sample video frame sequence based on the sample inter-frame features and the sample intra-frame features; perform weighting processing on the sample initial category information to obtain a sample target category corresponding to each sample pixel in the sample video frame sequence; and adjust parameters of the model to be trained based on the sample target categories and the sample labeling information until the model to be trained converges, obtaining the trained preset category detection model.
  12. The apparatus according to claim 11, wherein the model training unit is further configured to:
    perform a convolution operation on the sample inter-frame features and the sample intra-frame features to obtain sample convolution features; and
    determine the sample initial category information of each sample pixel in the sample video frame sequence based on the sample convolution features.
  13. The apparatus according to any one of claims 8-12, wherein the target category includes the category to be repaired and a normal category; and
    the category determination unit is further configured to:
    input the video frame sequence to be repaired into the preset category detection model to obtain a probability value image, output by the preset category detection model, of each video frame to be repaired in the video frame sequence to be repaired, the probability value image representing the probability that each pixel in each video frame to be repaired belongs to the category to be repaired; and
    determine the target category corresponding to each pixel in the video frame sequence to be repaired based on the probability value images and a preset probability threshold.
  14. The apparatus according to any one of claims 8-13, wherein the video repair unit is further configured to:
    determine the areas to be repaired based on position information of the pixels to be repaired; and
    perform repair processing on the areas to be repaired based on preset repair software to obtain the target video frame sequence.
  15. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-7.
  16. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to perform the method according to any one of claims 1-7.
  17. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-7.
PCT/CN2022/075035 2021-06-28 2022-01-29 Method, apparatus, device, medium, and product for repairing video WO2023273342A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020227035706A KR20220146663A (ko) 2021-06-28 2022-01-29 비디오 복구 방법, 장치, 기기, 매체 및 컴퓨터 프로그램
JP2022553168A JP2023535662A (ja) 2021-06-28 2022-01-29 ビデオを修復するための方法、装置、機器、媒体及びコンピュータプログラム
US17/944,745 US20230008473A1 (en) 2021-06-28 2022-09-14 Video repairing methods, apparatus, device, medium and products

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110717424.X 2021-06-28
CN202110717424.XA CN113436100B (zh) 2021-06-28 2021-06-28 用于修复视频的方法、装置、设备、介质和产品

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/944,745 Continuation US20230008473A1 (en) 2021-06-28 2022-09-14 Video repairing methods, apparatus, device, medium and products

Publications (1)

Publication Number Publication Date
WO2023273342A1 true WO2023273342A1 (zh) 2023-01-05

Family

ID=77754882

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/075035 WO2023273342A1 (zh) 2021-06-28 2022-01-29 用于修复视频的方法、装置、设备、介质和产品

Country Status (2)

Country Link
CN (1) CN113436100B (zh)
WO (1) WO2023273342A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436100B (zh) 2021-06-28 2023-11-28 北京百度网讯科技有限公司 Method, apparatus, device, medium, and product for repairing video
CN113989721A (zh) 2021-10-29 2022-01-28 北京百度网讯科技有限公司 Target detection method and training method and apparatus for a target detection model
CN115170400A (zh) 2022-04-06 2022-10-11 腾讯科技(深圳)有限公司 Video repair method, related apparatus, device, and storage medium
CN114549369B (zh) 2022-04-24 2022-07-12 腾讯科技(深圳)有限公司 Data repair method, apparatus, computer, and readable storage medium
CN116866665B (zh) 2023-09-05 2023-11-14 中信建投证券股份有限公司 Video playback method, apparatus, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102436641A (zh) * 2011-10-10 2012-05-02 上海交通大学 Automatic film restoration method based on wave atom transform and non-parametric model
CN110087097A (zh) * 2019-06-05 2019-08-02 西安邮电大学 Method for automatically removing invalid video clips based on an electronic endoscope
US20200082588A1 (en) * 2018-09-07 2020-03-12 Streem, Inc. Context-aware selective object replacement
CN112149459A (zh) * 2019-06-27 2020-12-29 哈尔滨工业大学(深圳) Video salient object detection model and system based on a cross-attention mechanism
CN113436100A (zh) * 2021-06-28 2021-09-24 北京百度网讯科技有限公司 Method, apparatus, device, medium, and product for repairing video

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443824A (zh) * 2018-05-02 2019-11-12 北京京东尚科信息技术有限公司 Method and apparatus for generating information
CN109815979B (zh) * 2018-12-18 2020-11-10 通号通信信息集团有限公司 Method and system for generating weakly-labeled semantic segmentation calibration data
CN112633151B (zh) * 2020-12-22 2024-04-12 浙江大华技术股份有限公司 Method, apparatus, device, and medium for determining zebra crossings in surveillance images
CN112686304B (zh) * 2020-12-29 2023-03-24 山东大学 Target detection method, device, and storage medium based on an attention mechanism and multi-scale feature fusion
CN112749685B (zh) * 2021-01-28 2023-11-03 北京百度网讯科技有限公司 Video classification method, device, and medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU HONGTENG: "Image Transform and Representation Techniques and Their Applications in Image Restoration and Enhancement", MASTER THESIS, TIANJIN POLYTECHNIC UNIVERSITY, CN, no. 7, 15 July 2013 (2013-07-15), CN , XP093017954, ISSN: 1674-0246 *

Also Published As

Publication number Publication date
CN113436100A (zh) 2021-09-24
CN113436100B (zh) 2023-11-28

Similar Documents

Publication Publication Date Title
WO2023273342A1 (zh) 用于修复视频的方法、装置、设备、介质和产品
US11436863B2 (en) Method and apparatus for outputting data
US20210174493A1 (en) Damage identification result optimization method and apparatus
CN113033566B (zh) 模型训练方法、识别方法、设备、存储介质及程序产品
US11216924B2 (en) Method and apparatus for processing image
CN112862877B (zh) 用于训练图像处理网络和图像处理的方法和装置
WO2022213718A1 (zh) 样本图像增量、图像检测模型训练及图像检测方法
US20220245764A1 (en) Method for image super-resolution, device and storage medium
CN112995535B (zh) 用于处理视频的方法、装置、设备以及存储介质
US20230030431A1 (en) Method and apparatus for extracting feature, device, and storage medium
CN113657269A (zh) 人脸识别模型的训练方法、装置及计算机程序产品
CN114187459A (zh) 目标检测模型的训练方法、装置、电子设备以及存储介质
CN113643260A (zh) 用于检测图像质量的方法、装置、设备、介质和产品
CN112989987A (zh) 用于识别人群行为的方法、装置、设备以及存储介质
CN114792355A (zh) 虚拟形象生成方法、装置、电子设备和存储介质
US20230008473A1 (en) Video repairing methods, apparatus, device, medium and products
CN113627361A (zh) 人脸识别模型的训练方法、装置及计算机程序产品
CN114724144B (zh) 文本识别方法、模型的训练方法、装置、设备及介质
CN116363429A (zh) 图像识别模型的训练方法、图像识别方法、装置及设备
KR20230133808A (ko) Roi 검출 모델 훈련 방법, 검출 방법, 장치, 설비 및 매체
CN113591709B (zh) 动作识别方法、装置、设备、介质和产品
CN115376137A (zh) 一种光学字符识别处理、文本识别模型训练方法及装置
CN115457365A (zh) 一种模型的解释方法、装置、电子设备及存储介质
CN114093006A (zh) 活体人脸检测模型的训练方法、装置、设备以及存储介质
CN114120423A (zh) 人脸图像检测方法、装置、电子设备和计算机可读介质

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022553168

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20227035706

Country of ref document: KR

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22831176

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE