CN114742698A - Wiya line erasing method and device based on depth generation model and storage medium - Google Patents

Wiya line erasing method and device based on depth generation model and storage medium Download PDF

Info

Publication number
CN114742698A
CN114742698A (application CN202210433690.4A)
Authority
CN
China
Prior art keywords
video
line
generation model
segment
depth generation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210433690.4A
Other languages
Chinese (zh)
Inventor
冀中
侯嘉诚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202210433690.4A priority Critical patent/CN114742698A/en
Publication of CN114742698A publication Critical patent/CN114742698A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a Weiya line erasing method based on a depth generation model, which comprises the steps of constructing a video depth generation model based on a self-attention mechanism and a convolutional neural network and training it, or selecting an applicable pre-trained video depth generation model; taking any video segment in a target video as the basic erasing unit, using the pre-trained video depth generation model to first match all effective content in the video segment and obtain effective information to generate reasonable content; then cutting the generated content according to its confidence, and cutting out the region with confidence higher than a preset value to cover the position of the Weiya line, thereby erasing the Weiya line; and repeating this process until all the Weiya lines in all the video segments are erased. The invention exploits the correlation between the generated content and other content, together with the inter-frame coherence of the video, to complete video repair progressively and iteratively, segment by segment.

Description

Wiya line erasing method and device based on depth generation model and storage medium
Technical Field
The present invention relates to a video processing method, and more particularly, to a Weiya line erasing method, device and storage medium based on a depth generation model.
Background
At present, in the traditional film and television industry, the needs of plot design mean that actors usually complete high-difficulty actions such as "flying over eaves and walking on walls" with the help of Weiya lines (stunt wires). The captured video then requires a specialist technician to erase the Weiya lines during post-processing. This process is cumbersome because the technician needs to erase the Weiya lines appearing in the video by eye, frame by frame, from the outside in.
In recent years, with the growth of computing power, deep learning methods represented by deep neural networks have been successfully applied to a large number of computer vision tasks, including target removal, and have achieved remarkable results. By means of a deep neural network, a specified area in a video can be automatically and quickly covered, frame by frame, by content the network generates, so as to remove a specified target. However, most existing methods directly use the content generated by a generative model to cover the original content frame by frame in a single pass, lacking reasonable organization and scheduling of that content and any consideration of inter-frame consistency. Because of the viewing requirements of film and television drama, Weiya line erasing not only requires that reasonable content be generated to cover the Weiya line, but also requires that the repaired video have strong temporal consistency, free of artifacts and color differences. Therefore, directly applying a traditional target removal method frame by frame to automate Weiya line erasing faces many problems.
Disclosure of Invention
The invention provides a Weiya line erasing method, device and storage medium based on a depth generation model, which are used for solving the above technical problems in the prior art.
The technical scheme adopted by the invention to solve the technical problems in the prior art is as follows: a Weiya line erasing method based on a depth generation model, in which a video depth generation model based on a self-attention mechanism and a convolutional neural network is constructed and trained, or an applicable pre-trained video depth generation model is selected; any video segment in the target video is taken as the basic erasing unit, and the pre-trained video depth generation model is used to first match all effective content in the video segment and obtain effective information to generate reasonable content; the generated content is then cut according to its confidence, and the region with confidence higher than a preset value is cut out to cover the position of the Weiya line, thereby erasing it; this process is repeated until all the Weiya lines in all the video segments are erased.
Further, the method comprises the following specific steps:
step one, cutting a video segment from the original video, and completing, through a pre-trained video depth generation model, the search for the area to be erased in the video segment and the other effective areas in the segment, so that appropriate content is generated by matching to cover the Weiya line to be erased in the video segment;
step two, judging the credibility of the generated content based on the distance between the edge of the generated content and its center, and cutting the generated content according to this credibility, with the high-credibility area used to cover the corresponding part of the Weiya line and the low-credibility area discarded;
step three, covering the Weiya line with the cut generated content from outside to inside, and inserting the updated video segment back into the original video to complete one update;
and step four, repeating steps one to three until the Weiya line erasing of the whole video is completed.
Further, the first step comprises the following sub-steps:
step A1, selecting a video segment in sequence from the complete video whose Weiya lines are to be erased;
step A2, selecting an applicable pre-trained video depth generation model for generating content to cover a Weiya line;
step A3, for each video frame in the segment, performing the following processing in sequence: Weiya line identification and extraction, binarization processing, an expansion operation and a negation operation, to obtain the frame mask corresponding to each video frame in the segment;
step A4, normalizing the video segment, multiplying it by the inverse of the mask to obtain the effective area of the video segment, inputting the effective area into the video depth generation model, and generating a preliminarily erased video segment Ri' according to the following formula:
Ri' = G[((255 - Vi) ÷ 255) × (1 - Mi)];
Vi ∈ R^(t0×h×w) denotes an arbitrary video segment of length t0 frames;
Mi ∈ {0,1}^(t0×h×w) denotes the mask obtained by frame-by-frame binarization processing;
G(·) denotes any pre-trained video depth generation model;
R denotes the real space;
t0 denotes the length of the video segment to be erased;
h denotes the height of the video to be erased;
w denotes the width of the video to be erased.
Further, the second step comprises the following sub-steps:
step B1, calculating the distance matrix Di, i ∈ {1, …, t0}, corresponding to each frame in the video segment according to the following formula:
Di(a, b) = min over (a', b') with Mi(a', b') = 0 of √((a − a')² + (b − b')²);
step B2, according to a given confidence threshold l, calculating the confidence matrix Ii, i ∈ {1, …, t0}, corresponding to each frame in the segment according to the following formula:
Ii(a, b) = 1 if Di(a, b) > l, and 0 otherwise;
t0 denotes the length of the video sequence to be erased;
a denotes the abscissa of any point in the distance matrix Di;
b denotes the ordinate of any point in the distance matrix Di;
a' denotes the abscissa of any point on the mask Mi other than a;
b' denotes the ordinate of any point on the mask Mi other than b.
Further, in step three, the updated video segment is generated according to the following formula:
Vi′=Vi×(1-Mi)+(Mi-Ii)×Ri
Vi' denotes the video segment after completing one iterative update;
Vi denotes an arbitrary video segment of length t0 frames;
Mi denotes the mask obtained by frame-by-frame binarization processing;
Ii denotes the confidence matrix;
Ri denotes the preliminarily erased video segment.
The invention also provides a device for realizing the Weiya line erasing method based on the depth generation model, comprising a memory and a processor, wherein the memory is used for storing a computer program, and the processor is configured to execute the computer program and, when the computer program is executed, to implement the steps of the above Weiya line erasing method based on the depth generation model.
The invention also provides a storage medium storing a computer program which, when executed by a processor, implements the above Weiya line erasing method based on the depth generation model.
The invention has the advantages and positive effects that: the invention fully utilizes the relevance between the generated content and other contents and the inter-frame continuity in the video and has the following advantages:
the novelty is as follows: the method is used for organizing and taking the content generated by the generation model according to the confidence coefficient for the first time, replaces the traditional one-time repair and frame-by-frame repair, and gradually and iteratively completes video repair segment by segment.
Effectiveness: experiments prove that compared with other existing target removal methods, the intelligent Wiya line erasing method based on the generative model, which is designed by the invention, has improved performance on both the traditional target removal data set and the Wiya line erasing data set, and the effectiveness of the invention is demonstrated.
Universality: the invention mainly focuses on the algorithm angle, is not limited to the generative model, can be used as a plug and play module to be applied to any generative model, and obtains certain performance improvement, thereby indicating that the invention is universal.
Drawings
FIG. 1 is a schematic workflow diagram of a Weiya line erasing method based on a depth generation model according to the present invention.
Fig. 2 is a schematic diagram of the operation steps of obtaining a frame mask corresponding to each video frame in a segment.
Detailed Description
For further understanding of the contents, features and effects of the present invention, the following embodiments are enumerated in conjunction with the accompanying drawings, and the following detailed description is given:
Referring to fig. 1 to 2, in a Weiya line erasing method based on a depth generation model, a video depth generation model based on a self-attention mechanism and a convolutional neural network is constructed and trained, or an applicable pre-trained video depth generation model is selected; any video segment in the target video is taken as the basic erasing unit, and the pre-trained video depth generation model is used to first match all effective content in the video segment and acquire matched effective information to generate reasonable content; the generated content is then cut according to its confidence, and the region with confidence higher than a preset value is cut out to cover the position of the Weiya line, thereby erasing it; this process is repeated until all the Weiya lines in all the video segments are erased.
For any video segment, an outside-to-inside erasing strategy can be adopted, that is, the Weiya line is covered gradually from its outside toward its inside, and each video segment may need one or more iterations to be completely erased. Once the erasing of one video segment is completed, erasing of the next video segment begins, until all the Weiya lines in all the video segments are erased.
The video depth generation model can adopt an applicable video depth generation model in the prior art; or may be implemented by software or components in the prior art and by conventional technical means.
Preferably, a Weiya line erasing method based on the depth generation model may include the following specific steps:
Step one, a video segment can be cut from the original video, and the search for the area to be erased in the video segment and the other effective areas in the segment can be completed through a pre-trained video depth generation model, so that appropriate content can be generated by matching to cover the Weiya line to be erased in the video segment.
Step two, the distance between the edge of the generated content and the center of the content can be used as the basis for judging the credibility of the generated content; the generated content can be cut according to this credibility, with the high-credibility area used to cover the corresponding part of the Weiya line and the low-credibility area discarded.
Step three, the cut generated content covers the Weiya line from outside to inside, that is, gradually from the outside of the Weiya line toward its inside, and the updated video segment is then inserted back into the original video to finish one update.
And step four, steps one to three are repeated until the Weiya line erasing of the whole video is completed.
Preferably, step one may comprise the sub-steps of:
step A1, a video segment is selected in sequence from the complete video whose Weiya lines are to be erased;
step A2, an applicable pre-trained video depth generation model is selected for generating content to cover a Weiya line;
step A3, for each video frame in the segment, the following processing can be performed in sequence: Weiya line identification and extraction, binarization processing, an expansion operation and a negation operation, to obtain the frame mask corresponding to each video frame in the segment;
step A4, the video segment can be normalized and multiplied by the inverse of the mask to obtain the effective area of the video segment; the effective area is input into the video depth generation model, and a preliminarily erased video segment Ri' is generated by the following formula:
Ri' = G[((255 - Vi) ÷ 255) × (1 - Mi)];
Vi ∈ R^(t0×h×w) denotes an arbitrary video segment of length t0 frames;
Mi ∈ {0,1}^(t0×h×w) denotes the mask obtained by frame-by-frame binarization processing;
G(·) denotes any pre-trained video depth generation model;
R denotes the real space;
t0 denotes the length of the video segment to be erased;
h denotes the height of the video to be erased;
w denotes the width of the video to be erased.
The Weiya line identification and extraction, binarization processing, expansion operation, negation operation and normalization processing can adopt applicable modules in the prior art, or may be implemented using software or modules known in the art and conventional technical means.
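As an illustrative sketch only (not part of the patent), the per-frame mask construction of step A3 might look as follows in NumPy. The wire detector itself is assumed to already produce a per-pixel score map, and the 0.5 threshold and dilation radius are arbitrary choices made for this example:

```python
import numpy as np

def frame_mask(wire_map, dilate_radius=2):
    """Binarize a Weiya-line detection map and dilate it, as in step A3.

    wire_map: (H, W) float array, higher where a wire was detected
              (the detector producing it is an assumption, not specified here).
    Returns an (H, W) uint8 mask M with 1 on the dilated wire region; the
    negation operation of step A3 corresponds to forming (1 - M), which
    marks the valid (wire-free) area used later as a multiplier.
    """
    binary = (wire_map > 0.5).astype(np.uint8)  # binarization

    # Expansion (dilation) with a square structuring element, written as a
    # sliding-window maximum so that only NumPy is required.
    r = dilate_radius
    H, W = binary.shape
    padded = np.pad(binary, r, mode="constant")
    dilated = np.zeros_like(binary)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            dilated |= padded[dy:dy + H, dx:dx + W]
    return dilated
```

The dilation margin gives the generator some slack around the detected wire, so that no wire pixels leak into the area treated as valid content.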
Further, the second step may comprise the following sub-steps:
In step B1, the distance matrix Di, i ∈ {1, …, t0}, corresponding to each frame in the video segment can be calculated according to the following formula:
Di(a, b) = min over (a', b') with Mi(a', b') = 0 of √((a − a')² + (b − b')²);
In step B2, according to a given confidence threshold l, the confidence matrix Ii, i ∈ {1, …, t0}, corresponding to each frame in the segment is calculated according to the following formula:
Ii(a, b) = 1 if Di(a, b) > l, and 0 otherwise;
Here min denotes taking the minimum value over the indicated set.
t0 denotes the length of the video sequence to be erased;
a denotes the abscissa of any point in the distance matrix Di;
b denotes the ordinate of any point in the distance matrix Di;
a' denotes the abscissa of any point on the mask Mi other than a;
b' denotes the ordinate of any point on the mask Mi other than b;
l denotes the confidence threshold.
Preferably, in step three, the updated video segment can be generated according to the following formula:
Vi′=Vi×(1-Mi)+(Mi-Ii)×Ri
Vi' denotes the video segment after completing one iterative update;
Vi denotes an arbitrary video segment of length t0 frames;
Mi denotes the mask obtained by frame-by-frame binarization processing;
Ii denotes the confidence matrix;
Ri denotes the preliminarily erased video segment.
The invention also provides a device for realizing the Weiya line erasing method based on the depth generation model, comprising a memory and a processor, wherein the memory is used for storing a computer program, and the processor is configured to execute the computer program and, when the computer program is executed, to implement the steps of the above Weiya line erasing method based on the depth generation model.
The invention also provides a storage medium storing a computer program which, when executed by a processor, implements the above Weiya line erasing method based on the depth generation model.
The working process and working principle of the present invention are further explained by a preferred embodiment of the present invention as follows:
the invention relates to a Wiya line erasing method based on a depth generation model, which takes a video clip as a basic erasing unit, utilizes a pre-trained video depth generation model, firstly matches all effective contents in the video clip, and acquires effective information to generate reasonable contents; and organizing the generated content by using an iterative erasure algorithm to gradually complete erasure.
The invention relates to a Wiya line erasing method based on a depth generation model, which mainly comprises two stages: the task of the first stage is to acquire content by using a pre-trained video depth generation model, and the task of the second stage is mainly to organize the generated content to cover the Weiya lines of the original video segment by segment to complete the Weiya line erasing.
The video depth generation model firstly intercepts a video clip from an original video for operation, and the operation process mainly completes the search of an area to be erased in the video clip and other effective areas in the clip, so that proper content is matched for covering a Weiya line to be erased in the video clip. The generated content is cut from the outside and the inside according to a certain thickness, the area with the certain thickness on the outside is regarded as high credibility to cover the Weiya line of the corresponding part, and the area on the inside is regarded as low credibility and discarded. Inserting the updated video segment into the original video to complete one Wiya line erasing, and repeating the steps for a plurality of times until the erasing of the whole video is completed.
The Weiya line erasing method based on the depth generation model comprises the following specific steps:
1) First, a trained deep generative model G(·) for generating content is selected, together with the complete video to be repaired V ∈ R^(t×h×w); the complete video is then sampled and a video segment Vi ∈ R^(t0×h×w) is selected.
2) As shown in fig. 2, for each video frame in the sampled video segment, Weiya line identification and extraction, binarization processing, an expansion operation and a negation operation are performed in sequence to obtain the frame mask Mi ∈ {0,1}^(t0×h×w) corresponding to each video frame in the segment.
3) The video segment Vi is normalized and multiplied by the inverse of the mask to obtain the effective area of the segment; the effective area is input into the generation model, and a coarsely erased video segment Ri' is generated through formula (1).
We define V ∈ R^(t×h×w) as the original video of length t frames, Vi ∈ R^(t0×h×w) as an arbitrary video segment of length t0 frames, Mi ∈ {0,1}^(t0×h×w) as the mask obtained by frame-by-frame binarization processing, and Ri' ∈ R^(t0×h×w) as the erasing result; G(·) denotes any deep generative model that has completed training. Each iteration first generates an erasing result:
Ri' = G[((255 − Vi) ÷ 255) × (1 − Mi)] (1);
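Formula (1) amounts to inverting and normalizing the segment, zeroing the masked area, and handing the result to the generator. A minimal sketch of that preprocessing, with the pretrained model G replaced by a placeholder callable (a real model would be a video inpainting network, which is not specified by this example):

```python
import numpy as np

def preliminary_erase(V, M, G):
    """Formula (1): Ri' = G[((255 - Vi) / 255) * (1 - Mi)].

    V: (t0, H, W) uint8 video segment.
    M: (t0, H, W) {0,1} mask, 1 on the Weiya-line region.
    G: any callable mapping a (t0, H, W) float array to one of the same
       shape, standing in for the pretrained video depth generation model.
    """
    valid = ((255.0 - V) / 255.0) * (1 - M)  # normalized content, wire area zeroed
    return G(valid)

# Placeholder "model" that simply echoes its input.
identity_G = lambda x: x
```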
4) According to the mask of each frame in the segment, the distance matrix Di, i ∈ {1, …, t0}, corresponding to each frame in the video segment is calculated by formula (2):
Di(a, b) = min over (a', b') with Mi(a', b') = 0 of √((a − a')² + (b − b')²) (2);
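As reconstructed here, formula (2) assigns each masked pixel its Euclidean distance to the nearest valid (unmasked) pixel. A brute-force sketch of that computation (a production version would likely use a distance transform such as `scipy.ndimage.distance_transform_edt`):

```python
import numpy as np

def distance_matrix(M):
    """Formula (2) (as reconstructed): for each pixel inside the mask,
    the Euclidean distance to the nearest pixel with M == 0.

    M: (H, W) {0,1} mask. Returns an (H, W) float distance matrix D,
    zero outside the mask. Brute force, for clarity only.
    """
    H, W = M.shape
    ys, xs = np.nonzero(M == 0)
    valid = np.stack([ys, xs], axis=1)  # coordinates of unmasked pixels
    D = np.zeros((H, W))
    for a in range(H):
        for b in range(W):
            if M[a, b]:
                diff = valid - np.array([a, b])
                D[a, b] = np.sqrt((diff ** 2).sum(axis=1)).min()
    return D
```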
5) According to a given confidence threshold l, the confidence matrix Ii, i ∈ {1, …, t0}, corresponding to each frame in the segment is calculated by formula (3). The erasing result Vi' of this iteration is then obtained through formula (4), and inserting each Vi' back into V in order completes the erasing.
Ii(a, b) = 1 if Di(a, b) > l, and 0 otherwise (3);
l denotes the confidence threshold;
t0 denotes the length of the video sequence to be erased;
a denotes the abscissa of any point in the distance matrix Di;
b denotes the ordinate of any point in the distance matrix Di;
a' denotes the abscissa of any point on the mask Mi other than a;
b' denotes the ordinate of any point on the mask Mi other than b.
Let Vi' denote the video segment after completing one iterative update; then:
Vi' = Vi × (1 − Mi) + (Mi − Ii) × Ri (4);
Vi denotes an arbitrary video segment of length t0 frames;
Mi denotes the mask obtained by frame-by-frame binarization processing;
Ii denotes the confidence matrix;
Ri denotes the preliminarily erased video segment.
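Formulas (3) and (4) split the generated content into a trusted outer ring, applied now, and an untrusted interior, deferred to the next iteration. A per-frame sketch under the same reconstruction assumptions as above:

```python
import numpy as np

def iterate_update(V, M, R, D, l):
    """Formulas (3)-(4) for one frame.

    V: (H, W) frame; M: (H, W) {0,1} mask; R: (H, W) generated content;
    D: distance matrix from formula (2); l: confidence threshold.
    Returns (V', I): the updated frame and the low-confidence interior I,
    which serves as the mask for the next iteration.
    """
    I = ((D > l) & (M == 1)).astype(V.dtype)   # formula (3): interior deeper than l
    V_new = V * (1 - M) + (M - I) * R          # formula (4): cover the outer ring only
    return V_new, I
```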
6) Steps 1) to 5) are repeated until all the video segments are erased, yielding the erasing result of the whole video.
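Putting steps 1) to 6) together for a single frame (the temporal dimension is dropped for brevity), the outside-to-inside iteration can be sketched as follows; `G` is again a stand-in for the pretrained model, and the distance and confidence formulas are the reconstructions used above:

```python
import numpy as np

def erase_wire(V, M, G, l=1.0, max_iters=10):
    """Iteratively cover the masked region from the outside in.

    V: (H, W) float frame; M: (H, W) {0,1} mask; G: stand-in generator;
    l: confidence threshold (roughly the ring thickness per iteration).
    """
    M = M.copy()
    for _ in range(max_iters):
        if M.sum() == 0:              # step 6: nothing left to erase
            break
        R = G(V * (1 - M))            # generate content from the valid area
        # Distance of each masked pixel to the nearest valid pixel (formula (2)).
        ys, xs = np.nonzero(M == 0)
        valid = np.stack([ys, xs], axis=1)
        D = np.zeros_like(V)
        for a, b in zip(*np.nonzero(M)):
            diff = valid - np.array([a, b])
            D[a, b] = np.sqrt((diff ** 2).sum(axis=1)).min()
        I = ((D > l) & (M == 1)).astype(M.dtype)  # low-confidence interior (3)
        V = V * (1 - M) + (M - I) * R             # cover the outer ring (4)
        M = I                                     # interior deferred to next pass
    return V
```

With a constant "generator", a 3×3 masked block is filled over two passes: first the outer ring, then the remaining center pixel.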
The above-mentioned embodiments are only intended to illustrate the technical ideas and features of the present invention, so that those skilled in the art can understand and implement it; they do not limit the scope of the present invention. Equivalent changes or modifications made within the spirit of the present invention shall fall within the protection scope of the present invention.

Claims (7)

1. A Weiya line erasing method based on a depth generation model, characterized in that a video depth generation model based on a self-attention mechanism and a convolutional neural network is constructed and trained, or an applicable pre-trained video depth generation model is selected; any video segment in a target video is taken as the basic erasing unit, and the pre-trained video depth generation model is used to first match all effective content in the video segment and obtain effective information to generate reasonable content; the generated content is then cut according to its confidence, and the region with confidence higher than a preset value is cut out to cover the position of the Weiya line so as to erase the Weiya line; this process is repeated until all the Weiya lines in all the video segments are erased.
2. The Weiya line erasing method based on the depth generation model as claimed in claim 1, comprising the following specific steps:
step one, cutting a video segment from the original video, and completing, through a pre-trained video depth generation model, the search for the area to be erased in the video segment and the other effective areas in the segment, so that appropriate content is generated by matching to cover the Weiya line to be erased in the video segment;
step two, judging the credibility of the generated content based on the distance between the edge of the generated content and its center, and cutting the generated content according to this credibility, with the high-credibility area used to cover the corresponding part of the Weiya line and the low-credibility area discarded;
step three, covering the Weiya line with the cut generated content from outside to inside, and inserting the updated video segment back into the original video to complete one update;
and step four, repeating steps one to three until the Weiya line erasing of the whole video is completed.
3. The Weiya line erasing method based on the depth generation model as claimed in claim 2, wherein the first step comprises the following substeps:
step A1, selecting a video segment in sequence from the complete video whose Weiya lines are to be erased;
step A2, selecting an applicable pre-trained video depth generation model for generating content to cover a Weiya line;
step A3, for each video frame in the segment, performing the following processing in sequence: Weiya line identification and extraction, binarization processing, an expansion operation and a negation operation, to obtain the frame mask corresponding to each video frame in the segment;
step A4, normalizing the video segment, multiplying it by the inverse of the mask to obtain the effective area of the video segment, inputting the effective area into the video depth generation model, and generating a preliminarily erased video segment Ri' according to the following formula:
Ri' = G[((255 - Vi) ÷ 255) × (1 - Mi)];
Vi ∈ R^(t0×h×w) denotes an arbitrary video segment of length t0 frames;
Mi ∈ {0,1}^(t0×h×w) denotes the mask obtained by frame-by-frame binarization processing;
G(·) denotes any pre-trained video depth generation model;
R denotes the real space;
t0 denotes the length of the video segment to be erased;
h denotes the height of the video to be erased;
w denotes the width of the video to be erased.
4. The Weiya line erasing method based on the depth generation model as claimed in claim 2, wherein the second step comprises the following substeps:
step B1, calculating the distance matrix Di, i ∈ {1, …, t0}, corresponding to each frame in the video segment according to the following formula:
Di(a, b) = min over (a', b') with Mi(a', b') = 0 of √((a − a')² + (b − b')²);
step B2, according to a given confidence threshold l, calculating the confidence matrix Ii, i ∈ {1, …, t0}, corresponding to each frame in the segment according to the following formula:
Ii(a, b) = 1 if Di(a, b) > l, and 0 otherwise;
t0 denotes the length of the video sequence to be erased;
a denotes the abscissa of any point in the distance matrix Di;
b denotes the ordinate of any point in the distance matrix Di;
a' denotes the abscissa of any point on the mask Mi other than a;
b' denotes the ordinate of any point on the mask Mi other than b.
5. The Weiya line erasing method based on the depth generation model as claimed in claim 2, wherein in step three, the updated video segment is generated according to the following formula:
Vi′=Vi×(1-Mi)+(Mi-Ii)×Ri
Vi' denotes the video segment after completing one iterative update;
Vi denotes an arbitrary video segment of length t0 frames;
Mi denotes the mask obtained by frame-by-frame binarization processing;
Ii denotes the confidence matrix;
Ri denotes the preliminarily erased video segment.
6. A device for implementing a Weiya line erasing method based on a depth generation model, comprising a memory and a processor, wherein the memory is configured to store a computer program, and the processor is configured to execute the computer program and, when the computer program is executed, to implement the steps of the Weiya line erasing method based on the depth generation model as claimed in any one of claims 1 to 5.
7. A storage medium storing a computer program which, when executed by a processor, carries out the steps of the Weiya line erasing method based on a depth generation model according to any one of claims 1 to 5.
CN202210433690.4A 2022-04-24 2022-04-24 Wiya line erasing method and device based on depth generation model and storage medium Pending CN114742698A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210433690.4A CN114742698A (en) 2022-04-24 2022-04-24 Wiya line erasing method and device based on depth generation model and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210433690.4A CN114742698A (en) 2022-04-24 2022-04-24 Wiya line erasing method and device based on depth generation model and storage medium

Publications (1)

Publication Number Publication Date
CN114742698A true CN114742698A (en) 2022-07-12

Family

ID=82284384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210433690.4A Pending CN114742698A (en) 2022-04-24 2022-04-24 Wiya line erasing method and device based on depth generation model and storage medium

Country Status (1)

Country Link
CN (1) CN114742698A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439737A (en) * 2022-10-13 2022-12-06 哈尔滨市科佳通用机电股份有限公司 Railway box wagon window fault image identification method based on image restoration


Similar Documents

Publication Publication Date Title
Liu et al. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement
Zafeiriou et al. The menpo facial landmark localisation challenge: A step towards the solution
US9418280B2 (en) Image segmentation method and image segmentation device
US11386293B2 (en) Training image signal processors using intermediate loss functions
Wu et al. Deep generative model for image inpainting with local binary pattern learning and spatial attention
CN109522950B (en) Image scoring model training method and device and image scoring method and device
CN109918539B (en) Audio and video mutual retrieval method based on user click behavior
US7522749B2 (en) Simultaneous optical flow estimation and image segmentation
WO2020019591A1 (en) Method and device used for generating information
CN111598796B (en) Image processing method and device, electronic equipment and storage medium
CN110675359A (en) Defect sample generation method and system for steel coil surface and electronic equipment
CN110197183A (en) A kind of method, apparatus and computer equipment of Image Blind denoising
CN113888541B (en) Image identification method, device and storage medium for laparoscopic surgery stage
CN114742698A (en) Wiya line erasing method and device based on depth generation model and storage medium
CN110910445A (en) Object size detection method and device, detection equipment and storage medium
CN114266894A (en) Image segmentation method and device, electronic equipment and storage medium
CN111325212A (en) Model training method and device, electronic equipment and computer readable storage medium
Zhao et al. Towards authentic face restoration with iterative diffusion models and beyond
US20210295016A1 (en) Living body recognition detection method, medium and electronic device
Du et al. Boosting dermatoscopic lesion segmentation via diffusion models with visual and textual prompts
CN114626118A (en) Building indoor model generation method and device
CN110795623B (en) Image enhancement training method and system and computer readable storage medium
CN112819755A (en) Thyroid nodule TI-RADS grading system and method
CN112270747A (en) Face recognition method and device and electronic equipment
US20240127452A1 (en) Learning parameters for neural networks using a semantic discriminator and an object-level discriminator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination