CN114005075A - Construction method and device of optical flow estimation model and optical flow estimation method - Google Patents

Construction method and device of optical flow estimation model and optical flow estimation method

Info

Publication number
CN114005075A
Authority
CN
China
Prior art keywords
image pair
domain image
optical flow
network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111635874.0A
Other languages
Chinese (zh)
Other versions
CN114005075B (en)
Inventor
程飞洋
郑伟
刘国清
杨广
王启程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Youjia Innovation Technology Co.,Ltd.
Original Assignee
Shenzhen Minieye Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Minieye Innovation Technology Co Ltd filed Critical Shenzhen Minieye Innovation Technology Co Ltd
Priority to CN202111635874.0A priority Critical patent/CN114005075B/en
Publication of CN114005075A publication Critical patent/CN114005075A/en
Application granted granted Critical
Publication of CN114005075B publication Critical patent/CN114005075B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a construction method and device of an optical flow estimation model, and an optical flow estimation method. The method comprises: inputting a simulation domain image pair, the optical flow truth values of the simulation domain image pair, and a real domain image pair into an initial neural network model for iterative training to obtain an optical flow estimation model. When the initial neural network model is trained, countermeasure training is performed on the generated countermeasure network with the simulation domain image pair and the real domain image pair as input, so that the generated countermeasure network generates a first conversion domain image pair and a second conversion domain image pair; supervised training is then performed on the optical flow computing network with the simulation domain image pair, the first conversion domain image pair and the optical flow truth values of the simulation domain image pair as input, and unsupervised training is performed on the optical flow computing network with the second conversion domain image pair as input. By implementing the method, the labor cost of constructing the optical flow estimation model can be reduced, and the accuracy of estimating optical flow values for real domain images can be improved.

Description

Construction method and device of optical flow estimation model and optical flow estimation method
Technical Field
The invention relates to the field of computer technology, and in particular to a construction method and device of an optical flow estimation model and an optical flow estimation method.
Background
Current deep learning models rely heavily on task-related truth data for supervised training, but for the optical flow computation task it is extremely difficult to acquire optical flow truth values in the real domain. In the prior art, optical flow computation models therefore generally rely on pre-training with simulation domain data plus fine-tuning with a very small amount of real domain optical flow truth values. Here, different "domains" generally mean different data sources; for example, autonomous driving scene images rendered by game simulation and real-world autonomous driving images captured directly by a camera belong to different "domains".
However, the above method has the following problems: 1. in the actual training process, acquiring even a small amount of real domain optical flow truth data consumes a large amount of labor; 2. a model trained on data from one domain generally tests poorly on data from another domain, and because most of the training data of a model trained in this way comes from the simulation domain, the resulting optical flow computation model has poor generalization ability and low accuracy when computing optical flow on real domain images.
Disclosure of Invention
The embodiments of the invention provide a construction method and device of an optical flow estimation model and an optical flow estimation method, which can reduce the labor cost of constructing the optical flow estimation model and improve the estimation accuracy of optical flow values for real domain images.
An embodiment of the present invention provides a method for constructing an optical flow estimation model, including: acquiring a simulation domain training set and a real domain training set; wherein each simulated training sample in the simulated domain training set comprises: optical flow truth values of simulation domain image pairs of adjacent frames and simulation domain image pairs; each real training sample in the real domain training set comprises a real domain image pair of adjacent frames;
inputting the simulation domain training set and the real domain training set into an initial neural network model for iterative training until a preset training frequency is reached or a total loss function value of the initial neural network model reaches a preset value, and obtaining an optical flow estimation model;
wherein the initial neural network model comprises: generating a countermeasure network and an optical flow computation network;
when the initial neural network model is subjected to iterative training, performing countermeasure training on the generated countermeasure network by taking the simulation domain image pair and the real domain image pair as input, so that the generated countermeasure network converts the simulation domain image pair and the real domain image pair to the same data domain, and generates a first converted domain image pair corresponding to the simulation domain image pair and a second converted domain image pair corresponding to the real domain image pair;
carrying out supervised training on the optical flow computing network by taking the optical flow true values of the simulation domain image pair, the first conversion domain image pair and the simulation domain image pair as input and taking the optical flow estimated values of each recombined image pair generated according to the simulation domain image pair and the first conversion domain image pair as output, and carrying out unsupervised training on the optical flow computing network by taking the second conversion domain image pair as input and taking the optical flow estimated values of the second conversion domain image pair as output; each recombined image pair is generated from the simulated domain image pair and the first transformed domain image pair.
Further, the generated countermeasure network includes a generation network and a discrimination network; the countermeasure training of the generated countermeasure network with the simulation domain image pair and the real domain image pair as input specifically includes:
taking the simulation domain image pair and the real domain image pair as input, and training the generated countermeasure network according to a generation network loss function and a discrimination network loss function;
wherein the generation network loss function is:
L_G = E_{s∼p(s)}[(D(G(s)) − c)²] + E_{t∼p(t)}[(D(G(t)) − c)²]
the discrimination network loss function is:
L_D = E_{t∼p(t)}[(D(G(t)) − a)²] + E_{s∼p(s)}[(D(G(s)) − b)²]
wherein G is the generation network and D is the discrimination network; s∼p(s) denotes a simulation domain image drawn from a simulation domain image pair, and t∼p(t) denotes a real domain image drawn from a real domain image pair; D(G(s)) is the classification score assigned by the discrimination network D to the features of the simulation domain image s encoded by the generation network G, and D(G(t)) is the classification score assigned by the discrimination network D to the features of the real domain image t encoded by the generation network G; E denotes expectation; c is the target value under which the discrimination network D judges that the encoded features of the real domain image and the simulation domain image belong to the same conversion domain; a is the discrimination network output target value corresponding to the features of the real domain image, and b is the discrimination network output target value corresponding to the features of the simulation domain image.
Further, the supervised training of the optical flow computing network is performed by taking the optical flow true values of the simulation domain image pair, the first conversion domain image pair and the simulation domain image pair as input and taking the optical flow estimated values of each recombined image pair generated according to the simulation domain image pair and the first conversion domain image pair as output, and specifically includes:
image pair recombination is carried out on the images in the simulation domain image pair and the first conversion domain image pair to generate a plurality of recombined image pairs;
determining an optical flow truth value of each recombined image pair according to the optical flow truth values of the simulation domain image pairs;
the optical flow truth value of each recombined image pair and each recombined image pair are taken as input, the optical flow estimated value of each recombined image pair is taken as output, and supervised training is performed on the optical flow computing network according to a supervised training loss function;
wherein the supervised training loss function is: L1 = |F′ − F|; F′ is the optical flow estimated value of the recombined image pair, and F is the optical flow truth value of the recombined image pair.
Further, the unsupervised training of the optical flow computing network is performed by taking the second transform domain image pair as input and taking the optical flow estimated value of the second transform domain image pair as output, and specifically includes:
taking the second conversion domain image pair as input, taking the optical flow estimated value of the second conversion domain image pair as output, and carrying out unsupervised training on the optical flow computing network according to an unsupervised training loss function;
L = Σ_{(x,y)} ρ(T1(x, y) − T2(x + u, y + v)) + α·Σ_{(x,y)} ρ(∇u) + β·Σ_{(x,y)} ρ(∇v)
l is an unsupervised training loss function, α and β are preset balance parameters, ρ is a preset penalty function, T1 and T2 are two adjacent images in the second transform domain image pair, (x, y) are coordinates of pixel points in the images, (u, v) are optical flow estimates of the pixel points, and ∇ is a preset gradient operator.
On the basis of the above method embodiments, the invention correspondingly provides a device embodiment.
an embodiment of the present invention provides a device for constructing an optical flow estimation model, including: the system comprises a data acquisition module and a model training module; wherein the model training module comprises a first training module and a second training module;
the data acquisition module is used for acquiring a simulation domain training set and a real domain training set; wherein each simulated training sample in the simulated domain training set comprises: optical flow truth values of simulation domain image pairs of adjacent frames and simulation domain image pairs; each real training sample in the real domain training set comprises a real domain image pair of adjacent frames;
the model training module is used for inputting the simulation domain training set and the real domain training set into an initial neural network model for iterative training until a preset training frequency is reached or a total loss function value of the initial neural network model reaches a preset value, so as to obtain an optical flow estimation model; wherein the initial neural network model comprises: generating a countermeasure network and an optical flow computation network;
when the initial neural network model is subjected to iterative training, the first training module performs countermeasure training on the generation countermeasure network by taking the simulation domain image pair and the real domain image pair as input, so that the generation countermeasure network converts the simulation domain image pair and the real domain image pair to the same data domain, and generates a first conversion domain image pair corresponding to the simulation domain image pair and a second conversion domain image pair corresponding to the real domain image pair;
the second training module takes the optical flow true values of the simulation domain image pair, the first conversion domain image pair and the simulation domain image pair as input, takes the optical flow estimated values of each recombined image pair generated according to the simulation domain image pair and the first conversion domain image pair as output, carries out supervised training on an optical flow computing network, takes the second conversion domain image pair as input, and takes the optical flow estimated values of the second conversion domain image pair as output, and carries out unsupervised training on the optical flow computing network; each recombined image pair is generated from the simulated domain image pair and the first transformed domain image pair.
Further, the generated countermeasure network includes a generation network and a discrimination network; the first training module performing countermeasure training on the generated countermeasure network with the simulation domain image pair and the real domain image pair as input specifically includes:
the first training module takes the simulation domain image pair and the real domain image pair as input and trains the generated countermeasure network according to a generation network loss function and a discrimination network loss function;
wherein the generation network loss function is:
L_G = E_{s∼p(s)}[(D(G(s)) − c)²] + E_{t∼p(t)}[(D(G(t)) − c)²]
the discrimination network loss function is:
L_D = E_{t∼p(t)}[(D(G(t)) − a)²] + E_{s∼p(s)}[(D(G(s)) − b)²]
wherein G is the generation network and D is the discrimination network; s∼p(s) denotes a simulation domain image drawn from a simulation domain image pair, and t∼p(t) denotes a real domain image drawn from a real domain image pair; D(G(s)) is the classification score assigned by the discrimination network D to the features of the simulation domain image s encoded by the generation network G, and D(G(t)) is the classification score assigned by the discrimination network D to the features of the real domain image t encoded by the generation network G; E denotes expectation; c is the target value under which the discrimination network D judges that the encoded features of the real domain image and the simulation domain image belong to the same conversion domain; a is the discrimination network output target value corresponding to the features of the real domain image, and b is the discrimination network output target value corresponding to the features of the simulation domain image.
Further, the second training module takes the optical flow true values of the simulation domain image pair, the first conversion domain image pair and the simulation domain image pair as input, takes the optical flow estimated values of each recombined image pair generated according to the simulation domain image pair and the first conversion domain image pair as output, and performs supervised training on the optical flow computing network, specifically comprising:
the second training module conducts image pair recombination on the simulation domain image pair and images in the first conversion domain image pair to generate a plurality of recombined image pairs;
determining an optical flow truth value of each recombined image pair according to the optical flow truth values of the simulation domain image pairs;
the optical flow truth value of each recombined image pair and each recombined image pair are taken as input, the optical flow estimated value of each recombined image pair is taken as output, and supervised training is performed on the optical flow computing network according to a supervised training loss function;
wherein the supervised training loss function is: L1 = |F′ − F|; F′ is the optical flow estimated value of the recombined image pair, and F is the optical flow truth value of the recombined image pair.
Further, the second training module performs unsupervised training on the optical flow computing network by taking the second conversion domain image pair as input and taking the optical flow estimated value of the second conversion domain image pair as output, and specifically includes:
taking the second conversion domain image pair as input, taking the optical flow estimated value of the second conversion domain image pair as output, and carrying out unsupervised training on the optical flow computing network according to an unsupervised training loss function;
L = Σ_{(x,y)} ρ(T1(x, y) − T2(x + u, y + v)) + α·Σ_{(x,y)} ρ(∇u) + β·Σ_{(x,y)} ρ(∇v)
l is an unsupervised training loss function, α and β are preset balance parameters, ρ is a preset penalty function, T1 and T2 are two adjacent images in the second transform domain image pair, (x, y) are coordinates of pixel points in the images, (u, v) are optical flow estimates of the pixel points, and ∇ is a preset gradient operator.
On the basis of the above embodiment of the method, another embodiment of the present invention provides an optical flow estimation method, including: acquiring a real domain image pair to be estimated, and inputting the real domain image pair to be estimated into the optical flow estimation model constructed by the optical flow estimation model construction method, so that the optical flow estimation model outputs the optical flow estimation value of the real domain image pair to be estimated.
The invention has the following beneficial effects:
the embodiment of the invention provides a construction method and a device of an optical flow estimation model and an optical flow estimation method, in the construction of the optical flow estimation model, based on the simulation domain image pair and the real domain image pair as input, a conversion domain image pair for converting the simulation domain image pair and the real domain image pair to the same data domain is trained, a first conversion domain image pair and a second conversion domain image pair are generated, and then, taking optical flow truth values of the simulation domain image pair, the first conversion domain image pair and the simulation domain image pair as input, outputting an optical flow estimate for each generated recombined image pair generated from the pair of simulated domain images and the pair of first transformed domain images, supervised training is carried out on an optical flow computing network in the optical flow estimation model, a second transform domain image pair is taken as input, carrying out unsupervised training on the optical flow computing network by taking the optical flow estimated value of the second conversion domain image pair as output; compared with the prior art, when the optical flow estimation model provided by the invention is used for estimating the optical flow value, the generation countermeasure network of the optical flow estimation model can convert the real domain image pair to be estimated to a conversion domain to generate a conversion domain image pair, then the optical flow calculation network estimates the optical flow value of the conversion domain image pair to indirectly obtain the optical flow value of the real domain image pair, although the input to the model is a real domain image pair, the final optical flow computation network is trained based on the image pair of the transform domain, there is no model trained based on data on one domain in the computation, the problem that the 
result of testing on the data of the other domain is poor enables the generated optical flow estimation model to generate accurate optical flow estimation values when the optical flow calculation is carried out on the image of the real domain, and the generalization capability of the optical flow estimation model is improved. In addition, in the process of training the optical flow estimation model, the optical flow values of the simulation domain image pair, the conversion domain image pair and the simulation domain image pair are adopted to supervise and train the optical flow calculation network, and meanwhile, the optical flow calculation network is unsupervised and trained only by utilizing the real domain image pair, so that the accuracy of the model in a real domain is improved by utilizing the knowledge of the simulation domain learning, and the optical flow true value data of the real domain image pair is not required to be adopted in the whole training process; in the actual operation process, the simulated domain image pair, the optical flow true value of the simulated domain image pair and the real domain image pair are easy to acquire, so that the labor cost consumed in model training can be reduced.
Drawings
Fig. 1 is a schematic flow chart of a method for constructing an optical flow estimation model according to an embodiment of the present invention.
FIG. 2 is a schematic structural diagram of an optical flow estimation model according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of an apparatus for constructing an optical flow estimation model according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a method for constructing an optical flow estimation model, which at least includes the following steps:
step S101: acquiring a simulation domain training set and a real domain training set; wherein each simulated training sample in the simulated domain training set comprises: optical flow truth values of simulation domain image pairs of adjacent frames and simulation domain image pairs; each real training sample in the real-domain training set comprises a pair of real-domain images of adjacent frames.
Step S102: inputting the simulation domain training set and the real domain training set into an initial neural network model for iterative training until a preset training frequency is reached or a total loss function value of the initial neural network model reaches a preset value, and obtaining an optical flow estimation model; wherein the initial neural network model comprises: generating a countermeasure network and an optical flow computation network; when the initial neural network model is subjected to iterative training, performing countermeasure training on the generated countermeasure network by taking the simulation domain image pair and the real domain image pair as input, so that the generated countermeasure network converts the simulation domain image pair and the real domain image pair to the same data domain, and generates a first converted domain image pair corresponding to the simulation domain image pair and a second converted domain image pair corresponding to the real domain image pair; carrying out supervised training on the optical flow computing network by taking the optical flow true values of the simulation domain image pair, the first conversion domain image pair and the simulation domain image pair as input and taking the optical flow estimated values of each recombined image pair generated according to the simulation domain image pair and the first conversion domain image pair as output, and carrying out unsupervised training on the optical flow computing network by taking the second conversion domain image pair as input and taking the optical flow estimated values of the second conversion domain image pair as output; each recombined image pair is generated from the simulated domain image pair and the first transformed domain image pair.
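The iterative training procedure of step S102 can be sketched as the following minimal loop. The function names, the weighted combination of the three training signals, and the stopping logic are illustrative assumptions, not the patent's actual implementation:

```python
# Sketch of the stopping criterion in step S102: iterate until a preset
# training count is reached or the total loss reaches a preset value.
# `total_loss` and `train` are hypothetical helper names.

def total_loss(l_adversarial, l_supervised, l_unsupervised,
               w_adv=1.0, w_sup=1.0, w_unsup=1.0):
    """Combine the adversarial, supervised and unsupervised signals into
    the total loss value checked against the preset value (weights assumed)."""
    return (w_adv * l_adversarial
            + w_sup * l_supervised
            + w_unsup * l_unsupervised)

def train(step_fn, max_iters, loss_threshold):
    """Run step_fn (one full training round) until the preset iteration
    count is reached or the total loss drops to the preset threshold."""
    loss = float("inf")
    for i in range(max_iters):
        loss = step_fn(i)  # one round: countermeasure + supervised + unsupervised
        if loss <= loss_threshold:
            return i + 1, loss
    return max_iters, loss
```

For example, a step function whose loss decays as 1/(i+1) stops after four iterations with a threshold of 0.25.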
For step S101, two training sets are required to construct the optical flow estimation model: a simulation domain training set obtained by simulation, and a real domain training set acquired from real scenes. A sample of the simulation domain training set comprises an image pair of adjacent frames (i.e., the simulation domain image pair) and the optical flow truth values between the two frames (i.e., the optical flow truth values of the simulation domain image pair); a sample of the real domain training set comprises an image pair of adjacent frames (i.e., the real domain image pair). Each training set contains a plurality of training samples. Illustratively, in the training phase, the simulation domain image pair S1 and S2 is acquired together with the optical flow truth values F1→2 (from S1 to S2) and F2→1 (from S2 to S1), and the real domain image pair T1 and T2 is acquired.
For step S102, in a preferred embodiment, the generated countermeasure network includes a generation network and a discrimination network; the countermeasure training of the generated countermeasure network with the simulation domain image pair and the real domain image pair as input specifically includes: taking the simulation domain image pair and the real domain image pair as input, and training the generated countermeasure network according to a generation network loss function and a discrimination network loss function;
wherein the generation network loss function is:
L_G = E_{s∼p(s)}[(D(G(s)) − c)²] + E_{t∼p(t)}[(D(G(t)) − c)²]
the discrimination network loss function is:
L_D = E_{t∼p(t)}[(D(G(t)) − a)²] + E_{s∼p(s)}[(D(G(s)) − b)²]
wherein G is the generation network and D is the discrimination network; s∼p(s) denotes a simulation domain image drawn from a simulation domain image pair, and t∼p(t) denotes a real domain image drawn from a real domain image pair; D(G(s)) is the classification score assigned by the discrimination network D to the features of the simulation domain image s encoded by the generation network G, and D(G(t)) is the classification score assigned by the discrimination network D to the features of the real domain image t encoded by the generation network G; E denotes expectation; c is the target value under which the discrimination network D judges that the encoded features of the real domain image and the simulation domain image belong to the same conversion domain; a is the discrimination network output target value corresponding to the features of the real domain image, and b is the discrimination network output target value corresponding to the features of the simulation domain image.
In a preferred embodiment, the supervised training of the optical flow computing network with the optical flow truth values of the simulation domain image pair, the first conversion domain image pair and the simulation domain image pair as input and the optical flow estimated values of each recombined image pair generated from the simulation domain image pair and the first conversion domain image pair as output specifically includes: performing image pair recombination on the images in the simulation domain image pair and the first conversion domain image pair to generate a plurality of recombined image pairs; determining the optical flow truth value of each recombined image pair according to the optical flow truth values of the simulation domain image pair; taking each recombined image pair and its optical flow truth value as input and the optical flow estimated value of each recombined image pair as output, and performing supervised training on the optical flow computing network according to a supervised training loss function; wherein the supervised training loss function is: L1 = |F′ − F|; F′ is the optical flow estimated value of the recombined image pair, and F is the optical flow truth value of the recombined image pair.
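A minimal sketch of the recombination and supervised L1 steps described above. `recombine_pairs` and its pairing scheme are hypothetical illustrations of how the conversion-domain frames can be mixed with the original frames while reusing the original truth values (valid only because domain conversion is required to preserve image structure):

```python
import numpy as np

def recombine_pairs(s1, s2, s1c, s2c, f12, f21):
    """Mix original simulation-domain frames (s1, s2) with their
    conversion-domain versions (s1c, s2c) into recombined pairs, each
    labelled with the original forward/backward optical flow truth."""
    pairs = []
    for a in (s1, s1c):          # frame-1 candidates
        for b in (s2, s2c):      # frame-2 candidates
            pairs.append(((a, b), f12))  # forward-flow truth value
            pairs.append(((b, a), f21))  # backward-flow truth value
    return pairs

def supervised_loss(f_pred, f_true):
    """Supervised training loss L1 = |F' - F|, averaged over pixels."""
    return float(np.mean(np.abs(np.asarray(f_pred) - np.asarray(f_true))))
```

With two candidates per frame slot and both flow directions, four image combinations yield eight labelled training pairs from a single simulation-domain sample.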
In a preferred embodiment, the unsupervised training of the optical flow computation network is performed by taking the second transform domain image pair as an input and taking the optical flow estimation value of the second transform domain image pair as an output, and specifically includes: taking the second conversion domain image pair as input, taking the optical flow estimated value of the second conversion domain image pair as output, and carrying out unsupervised training on the optical flow computing network according to an unsupervised training loss function;
L = Σ_{(x,y)} ρ(T1(x, y) − T2(x + u, y + v)) + α·Σ_{(x,y)} ρ(∇u) + β·Σ_{(x,y)} ρ(∇v)
l is an unsupervised training loss function, α and β are preset balance parameters, ρ is a preset penalty function, T1 and T2 are two adjacent images in the second transform domain image pair, (x, y) are coordinates of pixel points in the images, (u, v) are optical flow estimates of the pixel points, and ∇ is a preset gradient operator.
Illustratively, as shown in FIG. 2, the simulation domain image pair S1 and S2, the optical flow truth values F1→2 (between S1 and S2) and F2→1 (between S2 and S1), and the real domain image pair T1 and T2 acquired in step S101 are encoded and decoded by the generation network G of the generated countermeasure network. In the domain conversion process, the invention encodes the simulation domain image pair S1 and S2 and the real domain image pair T1 and T2 into a common conversion domain space through an encoder; that is, images from different domains are encoded into a common feature space. The decoder then decodes the encoded features into transform-domain-style images S1′ and S2′ (i.e., the first conversion domain image pair) and T1′ and T2′ (i.e., the second conversion domain image pair), thereby converting the simulation domain image pair and the real domain image pair into the same data domain. Meanwhile, domain conversion judgment is performed by the discrimination network D during training: the features of the simulation domain image pair S1 and S2 in the conversion domain and the features of the real domain image pair T1 and T2 in the conversion domain are input into the discrimination network D for countermeasure training, i.e., the discrimination network D judges whether the generation network G has converted the images from different domains into the same feature space. The discrimination network D and the generation network G together form the generated countermeasure network, which is trained adversarially.
For the countermeasure training, training a countermeasure generation network with the cross entropy loss function can cause the gradient to vanish, so following the least-squares countermeasure generation network (LSGAN), the loss function for training the generation network G adopts a least-squares form:
$$L_G = \mathbb{E}_{s\sim p(S)}\left[(D(G(s)) - c)^2\right] + \mathbb{E}_{t\sim p(T)}\left[(D(G(t)) - c)^2\right]$$
wherein c is the target value with which the discrimination network D judges that the features of the real domain image T and of the simulation domain image S encoded by the generation network G belong to the same conversion domain.
The training objective of the discrimination network D is to distinguish the coding features of the simulated domain samples from those of the real domain samples as far as possible, so the loss function to be minimized is:
$$L_D = \mathbb{E}_{t\sim p(T)}\left[(D(G(t)) - a)^2\right] + \mathbb{E}_{s\sim p(S)}\left[(D(G(s)) - b)^2\right]$$
wherein a is the target output of the discrimination network for the features of the real domain image T, and b is the target output for the features of the simulation domain image S. Minimizing this loss function enables the discrimination network D to clearly distinguish data from the simulation domain and the real domain.
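The two least-squares losses above can be sketched in a few lines of NumPy; the function names, the scalar discriminator scores, and the concrete target values c = a = 1, b = 0 are illustrative assumptions, not part of the patent:

```python
import numpy as np

def lsgan_generator_loss(d_scores_sim, d_scores_real, c=1.0):
    """Least-squares loss for the generation network G: push the
    discrimination network's scores for both encoded domains toward the
    common target value c, so the two domains become indistinguishable."""
    return np.mean((d_scores_sim - c) ** 2) + np.mean((d_scores_real - c) ** 2)

def lsgan_discriminator_loss(d_scores_sim, d_scores_real, a=1.0, b=0.0):
    """Least-squares loss for the discrimination network D: push scores for
    real-domain features toward a and simulated-domain features toward b,
    so D separates the two domains."""
    return np.mean((d_scores_real - a) ** 2) + np.mean((d_scores_sim - b) ** 2)

# Example: D currently outputs mid-range scores for both domains.
d_sim = np.array([0.4, 0.6])
d_real = np.array([0.5, 0.5])
print(lsgan_generator_loss(d_sim, d_real))      # 0.51
print(lsgan_discriminator_loss(d_sim, d_real))  # 0.51
```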
The whole countermeasure training proceeds cyclically: one step of training the generation network G, then one step of training the discrimination network D, and so on.
The conversion-domain-style images S1′ and S2′ (i.e., the first conversion domain image pair) and T1′ and T2′ (i.e., the second conversion domain image pair) generated by the countermeasure generation network, together with the original simulation domain image pair S1 and S2 and its optical flow truth values F1-2 and F2-1, are used as input to train the optical flow calculation module (namely, the optical flow calculation network). In domain conversion, preserving the original structure of the image is crucial to the final task result. Optical flow calculation imposes a particularly strict requirement on structure preservation, because the flow must be accurate to the pixel or even sub-pixel level; if the structures of two adjacent images from the same domain become misaligned during domain conversion, the subsequent flow accuracy is directly degraded. To this end, the invention proposes a cross consistency supervised training method for the optical flow calculation network that constrains structure preservation during training. For the simulation domain image pair S1 and S2, the generation network G converts them into the conversion-domain-style images S1′ and S2′. Since the optical flow truth value from S1 to S2 is F1-2 and from S2 to S1 is F2-1, the truth values between S1 and S2′, between S1′ and S2, and between S1′ and S2′ must likewise be F1-2 (and F2-1 in the reverse direction). These supervised training truth values force the image structure essential to optical flow calculation to remain unchanged through the domain conversion.
The supervised training loss function of the optical flow adopts the L1 = |F′ − F| loss, where F′ denotes the predicted optical flow value and F the true optical flow value. As described above, the invention recombines the simulation domain image pair S1 and S2 with the first conversion domain image pair S1′ and S2′ to obtain four recombined image pairs in total: (S1, S2), (S1, S2′), (S1′, S2) and (S1′, S2′). Each pair corresponds to the same two optical flow truth values, giving eight groups of supervised training loss terms: from S1 to S2, S1 to S2′, S1′ to S2, S1′ to S2′, and from S2 to S1, S2′ to S1, S2 to S1′, S2′ to S1′, thereby imposing the cross consistency constraint.
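The recombination and the eight L1 terms can be sketched as follows; `flow_net` is a stand-in for the optical flow calculation network, and all names are illustrative:

```python
import numpy as np

def recombine_pairs(s1, s2, s1t, s2t):
    """Cross-recombine the simulation domain pair (s1, s2) with its
    conversion-domain counterpart (s1t, s2t) into four image pairs that
    all share the same optical flow truth values F1-2 / F2-1."""
    return [(s1, s2), (s1, s2t), (s1t, s2), (s1t, s2t)]

def supervised_loss(pairs, f12_true, f21_true, flow_net):
    """Sum the eight L1 = |F' - F| terms: forward and backward flow for
    each of the four recombined pairs."""
    loss = 0.0
    for a, b in pairs:
        loss += np.abs(flow_net(a, b) - f12_true).mean()  # forward term
        loss += np.abs(flow_net(b, a) - f21_true).mean()  # backward term
    return loss

# Toy check: a stub network that always predicts the forward truth has zero
# error on the four forward terms and |F1-2 - F2-1| = 2 on each backward term.
f12 = np.ones((4, 4, 2))
f21 = -f12
imgs = [np.zeros((4, 4)) for _ in range(4)]
pairs = recombine_pairs(*imgs)
stub_net = lambda a, b: f12
print(supervised_loss(pairs, f12, f21, stub_net))  # 8.0
```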
Among the difficulties that optical flow calculation in the real domain must overcome are illumination change and shadow. It can therefore be assumed that, after domain conversion, an image in the conversion domain retains the structure required by optical flow calculation while these nuisances are removed. Accordingly, the invention adds an unsupervised training loss between T1′ and T2′, the converted images of the real domain image pair, and performs unsupervised training of the optical flow calculation network with the following loss function:
$$L = \sum_{x,y} \Big[\, \rho\big(T_1(x,y) - T_2(x+u,\ y+v)\big) + \alpha\,\rho\big(\nabla u\big) + \beta\,\rho\big(\nabla v\big) \,\Big]$$
l is an unsupervised training loss function, α and β are preset balance parameters, ρ is a preset penalty function, T1 and T2 are two adjacent images in the second transform domain image pair, (x, y) are coordinates of pixel points in the images, (u, v) are optical flow estimates of the pixel points, and ∇ is a preset gradient operator.
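A minimal NumPy sketch of this loss is given below. The Charbonnier function is assumed as a concrete choice for the penalty ρ, and the warp is rounded to integer displacements for clarity (a real implementation would use bilinear sampling); all function names are illustrative:

```python
import numpy as np

def charbonnier(x, eps=1e-3):
    """A common concrete choice for the robust penalty function rho."""
    return np.sqrt(x ** 2 + eps ** 2)

def unsupervised_flow_loss(t1, t2, u, v, alpha=1.0, beta=1.0):
    """Photometric term rho(T1(x,y) - T2(x+u, y+v)) plus smoothness terms
    alpha * rho(grad u) and beta * rho(grad v), averaged over pixels."""
    h, w = t1.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Warp T2 by the flow (rounded and clipped to the image bounds).
    xw = np.clip(np.round(xs + u).astype(int), 0, w - 1)
    yw = np.clip(np.round(ys + v).astype(int), 0, h - 1)
    photometric = charbonnier(t1 - t2[yw, xw]).mean()
    gu = np.linalg.norm(np.stack(np.gradient(u)), axis=0)  # |grad u|
    gv = np.linalg.norm(np.stack(np.gradient(v)), axis=0)  # |grad v|
    return photometric + alpha * charbonnier(gu).mean() + beta * charbonnier(gv).mean()

# The correct flow (a uniform one-pixel shift) should score far lower than
# assuming no motion at all.
t1 = np.random.default_rng(0).random((8, 8))
t2 = np.roll(t1, 1, axis=1)                  # T2: content of T1 moved right by one pixel
u, v = np.ones((8, 8)), np.zeros((8, 8))
loss_good = unsupervised_flow_loss(t1, t2, u, v)
loss_zero = unsupervised_flow_loss(t1, t2, np.zeros((8, 8)), np.zeros((8, 8)))
print(loss_good < loss_zero)  # True
```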
The total loss function is the sum of the generation network loss, the discrimination network loss, the supervised training loss and the unsupervised training loss. The whole initial neural network model is trained with this total loss until its value converges to a preset value, or until the number of training iterations reaches a preset number, yielding the optical flow estimation model.
In training the optical flow estimation model constructed by this method, the optical flow calculation network receives supervised training from the simulation domain image pair, the conversion domain image pair and the optical flow truth values of the simulation domain image pair, and simultaneously receives unsupervised training using only the real domain image pair. Knowledge learned from the simulation domain thus improves the accuracy of the model in the real domain, and no optical flow truth values of real domain image pairs are required at any point in training. Since the simulation domain image pair, its optical flow truth values and the real domain image pair are all easy to acquire in practice, the labor cost of model training is reduced.
In addition, when the optical flow estimation model provided by the invention estimates an optical flow value, its countermeasure generation network first converts the real domain image pair to be estimated into the conversion domain to generate a conversion domain image pair, and the optical flow calculation network then estimates the optical flow of that conversion domain image pair, indirectly obtaining the optical flow of the real domain image pair. Although the input to the model is a real domain image pair, the final optical flow calculation network is trained on conversion domain image pairs, which avoids the problem of a model trained on data of one domain performing poorly when tested on data of another domain. The resulting optical flow estimation model therefore produces accurate optical flow estimates on real domain images, improving its generalization capability.
As shown in fig. 3, on the basis of the above embodiments, the present invention provides an apparatus for constructing an optical flow estimation model, including: a data acquisition module and a model training module; wherein the model training module comprises a first training module and a second training module;
the data acquisition module is used for acquiring a simulation domain training set and a real domain training set; wherein each simulated training sample in the simulated domain training set comprises: optical flow truth values of simulation domain image pairs of adjacent frames and simulation domain image pairs; each real training sample in the real domain training set comprises a real domain image pair of adjacent frames;
the model training module is used for inputting the simulation domain training set and the real domain training set into an initial neural network model for iterative training until a preset training frequency is reached or a total loss function value of the initial neural network model reaches a preset value, so as to obtain an optical flow estimation model; wherein the initial neural network model comprises: generating a countermeasure network and an optical flow computation network;
when the initial neural network model is subjected to iterative training, the first training module performs countermeasure training on the generation countermeasure network by taking the simulation domain image pair and the real domain image pair as input, so that the generation countermeasure network converts the simulation domain image pair and the real domain image pair to the same data domain, and generates a first conversion domain image pair corresponding to the simulation domain image pair and a second conversion domain image pair corresponding to the real domain image pair;
the second training module takes the optical flow true values of the simulation domain image pair, the first conversion domain image pair and the simulation domain image pair as input, takes the optical flow estimated values of each recombined image pair generated according to the simulation domain image pair and the first conversion domain image pair as output, carries out supervised training on an optical flow computing network, takes the second conversion domain image pair as input, and takes the optical flow estimated values of the second conversion domain image pair as output, and carries out unsupervised training on the optical flow computing network; each recombined image pair is generated from the simulated domain image pair and the first transformed domain image pair.
In a preferred embodiment, the generating the countermeasure network includes: generating a network and judging the network; the first training module is used for performing countermeasure training on the generated countermeasure network by taking the simulation domain image pair and the real domain image pair as input, and specifically comprises the following steps: the first training module takes the simulation domain image pair and the real domain image pair as input and trains the generated countermeasure network according to a generated network loss function and a discriminant network loss function;
wherein the generating a network loss function is:
$$L_G = \mathbb{E}_{s\sim p(S)}\left[(D(G(s)) - c)^2\right] + \mathbb{E}_{t\sim p(T)}\left[(D(G(t)) - c)^2\right]$$
the discriminant network loss function is:
$$L_D = \mathbb{E}_{t\sim p(T)}\left[(D(G(t)) - a)^2\right] + \mathbb{E}_{s\sim p(S)}\left[(D(G(s)) - b)^2\right]$$
g is a generation network, D is a discrimination network, S-p (S) represents a simulation domain image from a simulation domain image pair, T-p (T) represents a real domain image from a real domain image pair, D (G (S)) represents a classification score of the discrimination network D for the features of the simulation domain image S encoded by the generation network G, D (G (T)) represents a classification score of the features of the real domain image T encoded by the discrimination network D for the generation network G, E is expectation, c is a target value of the same conversion domain for which the discrimination network D determines the features of the real domain image T encoded by the generation network G and the features of the simulation domain image S belong, a is a discrimination network output target value corresponding to the features of the real domain image T, and b is a discrimination network output target value corresponding to the features of the simulation domain image S.
In a preferred embodiment, the second training module takes optical flow truth values of the simulation domain image pair, the first conversion domain image pair and the simulation domain image pair as input, takes optical flow estimated values of each recombined image pair generated according to the simulation domain image pair and the first conversion domain image pair as output, and performs supervised training on the optical flow computing network, specifically comprising: the second training module conducts image pair recombination on the simulation domain image pair and images in the first conversion domain image pair to generate a plurality of recombined image pairs; determining an optical flow truth value of each recombined image pair according to the optical flow truth values of the simulation domain image pairs; the light stream true value of each recombined image pair and each recombined image pair are taken as input, the light stream estimated value of each recombined image pair is taken as output, and supervised training is carried out on the light stream computing network according to a supervised training loss function; wherein the supervised training loss function is: l1= | F' -F |; f' is the estimated value of the optical flow of the recombined image pair, and F is the true value of the optical flow of the recombined image pair.
In a preferred embodiment, the second training module performs unsupervised training on the optical flow computing network by using the second transform domain image pair as an input and using the optical flow estimation value of the second transform domain image pair as an output, and specifically includes:
taking the second conversion domain image pair as input, taking the optical flow estimated value of the second conversion domain image pair as output, and carrying out unsupervised training on the optical flow computing network according to an unsupervised training loss function;
$$L = \sum_{x,y} \Big[\, \rho\big(T_1(x,y) - T_2(x+u,\ y+v)\big) + \alpha\,\rho\big(\nabla u\big) + \beta\,\rho\big(\nabla v\big) \,\Big]$$
l is an unsupervised training loss function, α and β are preset balance parameters, ρ is a preset penalty function, T1 and T2 are two adjacent images in the second transform domain image pair, (x, y) are coordinates of pixel points in the images, (u, v) are optical flow estimates of the pixel points, and ∇ is a preset gradient operator.
On the basis of the above-described embodiments, an embodiment of the present invention provides an optical flow estimation method, including: acquiring a real domain image pair to be estimated, and inputting the real domain image pair to be estimated into the optical flow estimation model constructed by the optical flow estimation model construction method, so that the optical flow estimation model outputs the optical flow estimation value of the real domain image pair to be estimated.
After the real domain image pair to be estimated is input into the optical flow estimation model, the optical flow estimation model firstly converts the real domain image pair to be estimated into the conversion domain image pair to be estimated through a generation network in the generation countermeasure network, then inputs the conversion domain image pair to be estimated into the optical flow calculation network, calculates the optical flow value of the conversion domain image pair to be estimated through the optical flow calculation network, and then takes the optical flow value of the conversion domain image pair to be estimated as the optical flow value of the real domain image pair to be estimated.
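The two-stage inference described above can be sketched with stand-in networks; `generator` and `flow_net` are placeholders for the trained generation network and optical flow calculation network:

```python
import numpy as np

def estimate_flow(real_pair, generator, flow_net):
    """Two-stage inference: the generation network converts the real domain
    image pair into the conversion domain, then the optical flow calculation
    network computes flow on the converted pair; that flow is returned as
    the estimate for the original real domain pair."""
    t1, t2 = real_pair
    t1c, t2c = generator(t1), generator(t2)   # real domain -> conversion domain
    return flow_net(t1c, t2c)                 # flow of the converted pair

# Stand-ins: an identity "generator" and a flow network reporting no motion.
identity = lambda img: img
zero_flow = lambda a, b: np.zeros(a.shape + (2,))
pair = (np.zeros((4, 4)), np.zeros((4, 4)))
flow = estimate_flow(pair, identity, zero_flow)
print(flow.shape)  # (4, 4, 2)
```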
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (9)

1. A method for constructing an optical flow estimation model, comprising:
acquiring a simulation domain training set and a real domain training set; wherein each simulated training sample in the simulated domain training set comprises: optical flow truth values of simulation domain image pairs of adjacent frames and simulation domain image pairs; each real training sample in the real domain training set comprises a real domain image pair of adjacent frames;
inputting the simulation domain training set and the real domain training set into an initial neural network model for iterative training until a preset training frequency is reached or a total loss function value of the initial neural network model reaches a preset value, and obtaining an optical flow estimation model;
wherein the initial neural network model comprises: generating a countermeasure network and an optical flow computation network;
when the initial neural network model is subjected to iterative training, performing countermeasure training on the generated countermeasure network by taking the simulation domain image pair and the real domain image pair as input, so that the generated countermeasure network converts the simulation domain image pair and the real domain image pair to the same data domain, and generates a first converted domain image pair corresponding to the simulation domain image pair and a second converted domain image pair corresponding to the real domain image pair;
and carrying out supervised training on the optical flow computing network by taking the optical flow true values of the simulation domain image pair, the first conversion domain image pair and the simulation domain image pair as input, taking the optical flow estimated values of the recombined image pairs generated according to the simulation domain image pair and the first conversion domain image pair as output, taking the second conversion domain image pair as input, taking the optical flow estimated values of the second conversion domain image pair as output, and carrying out unsupervised training on the optical flow computing network.
2. The method of constructing an optical flow estimation model of claim 1, wherein the generating a countermeasure network comprises: generating a network and judging the network; the countermeasure training of the generated countermeasure network with the simulation domain image pair and the real domain image pair as input specifically includes:
taking the simulation domain image pair and the real domain image pair as input, and training the generated countermeasure network according to a generated network loss function and a discriminant network loss function;
wherein the generating a network loss function is:
$$L_G = \mathbb{E}_{s\sim p(S)}\left[(D(G(s)) - c)^2\right] + \mathbb{E}_{t\sim p(T)}\left[(D(G(t)) - c)^2\right]$$
the discriminant network loss function is:
$$L_D = \mathbb{E}_{t\sim p(T)}\left[(D(G(t)) - a)^2\right] + \mathbb{E}_{s\sim p(S)}\left[(D(G(s)) - b)^2\right]$$
g is a generation network, D is a discrimination network, S-p (S) represents a simulation domain image from a simulation domain image pair, T-p (T) represents a real domain image from a real domain image pair, D (G (S)) represents a classification score of the discrimination network D for the features of the simulation domain image S encoded by the generation network G, D (G (T)) represents a classification score of the features of the real domain image T encoded by the discrimination network D for the generation network G, E is expectation, c is a target value of the same conversion domain for which the discrimination network D determines the features of the real domain image T encoded by the generation network G and the features of the simulation domain image S belong, a is a discrimination network output target value corresponding to the features of the real domain image T, and b is a discrimination network output target value corresponding to the features of the simulation domain image S.
3. The method for constructing an optical flow estimation model according to claim 1, wherein the supervised training of the optical flow computation network is performed by taking optical flow truth values of the simulation domain image pair, the first conversion domain image pair and the simulation domain image pair as input and taking optical flow estimation values of each recombined image pair generated from the simulation domain image pair and the first conversion domain image pair as output, and specifically comprises:
image pair recombination is carried out on the images in the simulation domain image pair and the first conversion domain image pair to generate a plurality of recombined image pairs;
determining an optical flow truth value of each recombined image pair according to the optical flow truth values of the simulation domain image pairs;
the light stream true value of each recombined image pair and each recombined image pair are taken as input, the light stream estimated value of each recombined image pair is taken as output, and supervised training is carried out on the light stream computing network according to a supervised training loss function;
wherein the supervised training loss function is: l1= | F' -F |; f' is the estimated value of the optical flow of the recombined image pair, and F is the true value of the optical flow of the recombined image pair.
4. The method for constructing an optical flow estimation model according to claim 1, wherein the unsupervised training of the optical flow calculation network is performed by taking the second transformed-domain image pair as an input and taking the optical flow estimation value of the second transformed-domain image pair as an output, and specifically comprises:
taking the second conversion domain image pair as input, taking the optical flow estimated value of the second conversion domain image pair as output, and carrying out unsupervised training on the optical flow computing network according to an unsupervised training loss function;
$$L = \sum_{x,y} \Big[\, \rho\big(T_1(x,y) - T_2(x+u,\ y+v)\big) + \alpha\,\rho\big(\nabla u\big) + \beta\,\rho\big(\nabla v\big) \,\Big]$$
l is an unsupervised training loss function, α and β are preset balance parameters, ρ is a preset penalty function, T1 and T2 are two adjacent images in the second transform domain image pair, (x, y) are coordinates of pixel points in the images, (u, v) are optical flow estimates of the pixel points, and ∇ is a preset gradient operator.
5. The device for constructing the optical flow estimation model is characterized by comprising a data acquisition module and a model training module; wherein the model training module comprises a first training module and a second training module;
the data acquisition module is used for acquiring a simulation domain training set and a real domain training set; wherein each simulated training sample in the simulated domain training set comprises: optical flow truth values of simulation domain image pairs of adjacent frames and simulation domain image pairs; each real training sample in the real domain training set comprises a real domain image pair of adjacent frames;
the model training module is used for inputting the simulation domain training set and the real domain training set into an initial neural network model for iterative training until a preset training frequency is reached or a total loss function value of the initial neural network model reaches a preset value, so as to obtain an optical flow estimation model; wherein the initial neural network model comprises: generating a countermeasure network and an optical flow computation network;
when the initial neural network model is subjected to iterative training, the first training module performs countermeasure training on the generation countermeasure network by taking the simulation domain image pair and the real domain image pair as input, so that the generation countermeasure network converts the simulation domain image pair and the real domain image pair to the same data domain, and generates a first conversion domain image pair corresponding to the simulation domain image pair and a second conversion domain image pair corresponding to the real domain image pair;
the second training module takes the optical flow true values of the simulation domain image pair, the first conversion domain image pair and the simulation domain image pair as input, takes the optical flow estimated values of each recombined image pair generated according to the simulation domain image pair and the first conversion domain image pair as output, carries out supervised training on an optical flow computing network, takes the second conversion domain image pair as input, and takes the optical flow estimated values of the second conversion domain image pair as output, and carries out unsupervised training on the optical flow computing network; each recombined image pair is generated from the simulated domain image pair and the first transformed domain image pair.
6. The apparatus for constructing an optical flow estimation model according to claim 5, wherein the generating a countermeasure network includes: generating a network and judging the network; the first training module is used for performing countermeasure training on the generated countermeasure network by taking the simulation domain image pair and the real domain image pair as input, and specifically comprises the following steps:
taking the simulation domain image pair and the real domain image pair as input, and training the generated countermeasure network according to a generated network loss function and a discriminant network loss function;
wherein the generating a network loss function is:
$$L_G = \mathbb{E}_{s\sim p(S)}\left[(D(G(s)) - c)^2\right] + \mathbb{E}_{t\sim p(T)}\left[(D(G(t)) - c)^2\right]$$
the discriminant network loss function is:
$$L_D = \mathbb{E}_{t\sim p(T)}\left[(D(G(t)) - a)^2\right] + \mathbb{E}_{s\sim p(S)}\left[(D(G(s)) - b)^2\right]$$
g is a generation network, D is a discrimination network, S-p (S) represents a simulation domain image from a simulation domain image pair, T-p (T) represents a real domain image from a real domain image pair, D (G (S)) represents a classification score of the discrimination network D for the features of the simulation domain image S encoded by the generation network G, D (G (T)) represents a classification score of the features of the real domain image T encoded by the discrimination network D for the generation network G, E is expectation, c is a target value of the same conversion domain for which the discrimination network D determines the features of the real domain image T encoded by the generation network G and the features of the simulation domain image S belong, a is a discrimination network output target value corresponding to the features of the real domain image T, and b is a discrimination network output target value corresponding to the features of the simulation domain image S.
7. The apparatus for constructing an optical flow estimation model according to claim 5, wherein the second training module takes optical flow truth values of the pair of simulation domain images, the pair of first transformed domain images, and the pair of simulation domain images as input, and takes optical flow estimated values of each reconstructed image pair generated from the pair of simulation domain images and the pair of first transformed domain images as output, and performs supervised training on the optical flow computing network, specifically comprising:
the second training module conducts image pair recombination on the simulation domain image pair and images in the first conversion domain image pair to generate a plurality of recombined image pairs;
determining an optical flow truth value of each recombined image pair according to the optical flow truth values of the simulation domain image pairs;
the light stream true value of each recombined image pair and each recombined image pair are taken as input, the light stream estimated value of each recombined image pair is taken as output, and supervised training is carried out on the light stream computing network according to a supervised training loss function;
wherein the supervised training loss function is: l1= | F' -F |; f' is the estimated value of the optical flow of the recombined image pair, and F is the true value of the optical flow of the recombined image pair.
8. The apparatus for constructing an optical flow estimation model according to claim 5, wherein the second training module performs unsupervised training on the optical flow computation network by taking the second transform domain image pair as input and taking the optical flow estimation value of the second transform domain image pair as output, and specifically comprises:
taking the second conversion domain image pair as input, taking the optical flow estimated value of the second conversion domain image pair as output, and carrying out unsupervised training on the optical flow computing network according to an unsupervised training loss function;
$$L = \sum_{x,y} \Big[\, \rho\big(T_1(x,y) - T_2(x+u,\ y+v)\big) + \alpha\,\rho\big(\nabla u\big) + \beta\,\rho\big(\nabla v\big) \,\Big]$$
l is an unsupervised training loss function, α and β are preset balance parameters, ρ is a preset penalty function, T1 and T2 are two adjacent images in the second transform domain image pair, (x, y) are coordinates of pixel points in the images, (u, v) are optical flow estimates of the pixel points, and ∇ is a preset gradient operator.
9. An optical flow estimation method, comprising: acquiring a real-domain image pair to be estimated, and inputting the real-domain image pair to be estimated into the optical flow estimation model constructed by the optical flow estimation model construction method according to any one of claims 1 to 4, so that the optical flow estimation model outputs the optical flow estimation value of the real-domain image pair to be estimated.
CN202111635874.0A 2021-12-30 2021-12-30 Construction method and device of optical flow estimation model and optical flow estimation method Active CN114005075B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111635874.0A CN114005075B (en) 2021-12-30 2021-12-30 Construction method and device of optical flow estimation model and optical flow estimation method

Publications (2)

Publication Number Publication Date
CN114005075A true CN114005075A (en) 2022-02-01
CN114005075B CN114005075B (en) 2022-04-05

Family

ID=79932143

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111635874.0A Active CN114005075B (en) 2021-12-30 2021-12-30 Construction method and device of optical flow estimation model and optical flow estimation method

Country Status (1)

Country Link
CN (1) CN114005075B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116563169A (en) * 2023-07-07 2023-08-08 成都理工大学 Ground penetrating radar image abnormal region enhancement method based on hybrid supervised learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106022229A (en) * 2016-05-11 2016-10-12 Beihang University Abnormal behavior identification method in error BP Adaboost network based on video motion information feature extraction and adaptive boost algorithm
US20190355128A1 (en) * 2017-01-06 2019-11-21 Board Of Regents, The University Of Texas System Segmenting generic foreground objects in images and videos
CN111369595A (en) * 2019-10-15 2020-07-03 Northwestern Polytechnical University Optical flow calculation method based on self-adaptive correlation convolution neural network
CN112396074A (en) * 2019-08-15 2021-02-23 Guangzhou Huya Technology Co., Ltd. Model training method and device based on monocular image and data processing equipment
CN113920581A (en) * 2021-09-29 2022-01-11 Jiangxi University of Science and Technology Method for recognizing motion in video by using space-time convolution attention network
CN113947732A (en) * 2021-12-21 2022-01-18 Hangzhou Innovation Institute of Beihang University Aerial visual angle crowd counting method based on reinforcement learning image brightness adjustment


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116563169A (en) * 2023-07-07 2023-08-08 Chengdu University of Technology Ground penetrating radar image abnormal region enhancement method based on hybrid supervised learning
CN116563169B (en) * 2023-07-07 2023-09-05 Chengdu University of Technology Ground penetrating radar image abnormal region enhancement method based on hybrid supervised learning

Also Published As

CN114005075B (en), published 2022-04-05

Similar Documents

Publication Publication Date Title
Golts et al. Unsupervised single image dehazing using dark channel prior loss
CN113658051B (en) Image defogging method and system based on cyclic generation countermeasure network
WO2020037965A1 (en) Method for multi-motion flow deep convolutional network model for video prediction
CN110533044B (en) Domain adaptive image semantic segmentation method based on GAN
CN109800710B (en) Pedestrian re-identification system and method
CN109636721B (en) Video super-resolution method based on countermeasure learning and attention mechanism
Aliakbarian et al. Flag: Flow-based 3d avatar generation from sparse observations
CN110689599A (en) 3D visual saliency prediction method for generating countermeasure network based on non-local enhancement
CN112258625B (en) Method and system for reconstructing single image to three-dimensional point cloud model based on attention mechanism
CN110599468A (en) No-reference video quality evaluation method and device
CN114005075B (en) Construction method and device of optical flow estimation model and optical flow estimation method
CN111724400A (en) Automatic video matting method and system
Mukherjee et al. Predicting video-frames using encoder-convlstm combination
CN111898482A (en) Face prediction method based on progressive generation confrontation network
CN113283577A (en) Industrial parallel data generation method based on meta-learning and generation countermeasure network
CN112102424A (en) License plate image generation model construction method, generation method and device
CN111738435B (en) Online sparse training method and system based on mobile equipment
CN117351542A (en) Facial expression recognition method and system
CN116229106A (en) Video significance prediction method based on double-U structure
CN117291232A (en) Image generation method and device based on diffusion model
Lu et al. Environment-aware multiscene image enhancement for internet of things enabled edge cameras
CN116958192A (en) Event camera image reconstruction method based on diffusion model
CN115272423B (en) Method and device for training optical flow estimation model and readable storage medium
CN116309171A (en) Method and device for enhancing monitoring image of power transmission line
CN115630612A (en) Software measurement defect data augmentation method based on VAE and WGAN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Floor 25, Block A, Zhongzhou Binhai Commercial Center Phase II, No. 9285, Binhe Boulevard, Shangsha Community, Shatou Street, Futian District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Youjia Innovation Technology Co., Ltd.

Address before: 518051 401, building 1, Shenzhen new generation industrial park, No. 136, Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong Province

Patentee before: SHENZHEN MINIEYE INNOVATION TECHNOLOGY Co.,Ltd.