CN108596334A - The judgement of data correspondence, generation method and system based on two-way deep learning - Google Patents
- Publication number: CN108596334A (application CN201810244713.0A)
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
- G06N3/088—Non-supervised learning, e.g. competitive learning
Abstract
The invention discloses a data correspondence judgment method, a data generation method, and corresponding systems based on two-way deep learning. The data correspondence judgment method includes: establishing a two-way deep learning neural network and learning the preset correspondence between two classes of input data; and using the established two-way deep learning neural network to judge whether two classes of test input data have the preset correspondence. The data generation method includes: establishing a two-way deep learning neural network and learning the preset correspondence between two classes of input data; and using the established two-way deep learning neural network to generate one class of input data from the other class. The data correspondence judgment system includes a first two-way deep learning neural network module and a data correspondence judgment module; the data generation system includes a second two-way deep learning neural network module and a data generation module.
Description
Technical field
The present invention relates to a data correspondence judgment method, a data generation method, and corresponding systems based on two-way deep learning, and belongs to the technical field of data correspondence judgment and data generation.
Background art
The resolution of acquired data is often constrained by the environment and the acquisition equipment and fails to meet demand. For example, a photograph may have low resolution because of haze, poor lighting, or limited camera resolution, and a scanned picture may have low resolution because of limited scanner resolution. In such cases the resolution of the data needs to be increased. The existing way to increase data resolution is interpolation or fitting, which typically computes the unknown values to be inserted as weighted averages of neighboring known values. However, the distribution of real data is not fixed but varies, while interpolation and fitting can only apply fixed formulas; applying a fixed interpolation or fitting formula to data whose distribution varies therefore distorts the data while increasing its resolution. In the opposite direction, going from high resolution to low resolution, the prior art either deletes some of the data, which can make the resulting low-resolution data discontinuous, or obtains the low-resolution data by weighted averaging or fitting of the high-resolution data; again, because the distribution of real data is not fixed but varies and averaging or fitting can only apply fixed formulas, reducing resolution in a fixed manner also distorts the data. Humans, by contrast, can imagine high-resolution data from low-resolution data, and low-resolution data from high-resolution data. The human brain resembles a neural network, and deep learning technology developed out of neural network technology.
Existing deep learning technology can obtain an output label from input data (for example, obtaining a person's ID number from a head portrait, or from a voice sample), but its top-down supervised training stage requires labeled data (for example, head portraits labeled with ID numbers, or voice samples labeled with ID numbers).
If both head portraits labeled with ID numbers and voice samples labeled with ID numbers are available, a deep learning neural network can obtain the ID number corresponding to a head portrait and the ID number corresponding to a voice sample, and then judge whether the two ID numbers are the same: if they are the same, the head portrait and the voice belong to the same person; if they are different, they do not. But if labeled head portraits and labeled voice samples are not both available, it is impossible to judge whether a head portrait and a voice belong to the same person, because deep learning cannot be completed without labels. And if the output label layer of the deep learning neural network is omitted, the features the network recognizes from different types of input data are of different types, and features of different types cannot be matched against one another (for example, image features cannot be matched against sound features). So if only head portraits, voice samples, and the correspondence between some head portraits and voice samples (belonging to the same person) are available, existing deep learning technology cannot judge whether a given head portrait and a given voice sample belong to the same person.
Likewise, if both head portraits labeled with ID numbers and voice samples labeled with ID numbers are available, a deep learning neural network can obtain the ID number corresponding to a head portrait and then the voice corresponding to that ID number, thereby obtaining the voice corresponding to the image. But if labeled head portraits and labeled voice samples are not both available, deep learning cannot be completed without labels. And if the output label layer of the deep learning neural network is omitted, the features recognized from different types of input data are of different types, and features of different types cannot be converted into one another (for example, image features cannot be converted into sound features). So if only head portraits, voice samples, and the correspondence between some head portraits and voice samples (belonging to the same person) are available, existing deep learning technology cannot generate a voice from a head portrait or an image from a voice.
In addition, existing deep learning technology cannot take one class of data as input and produce an overly complex class of label data as output, because only fairly simple label data (such as class labels) can be used with existing deep learning technology: overly complex label data as output would make the computational complexity of deep learning surge beyond what can be computed in useful time, so overly complex label data cannot serve as the output of an existing deep learning neural network. Consequently, existing deep learning technology cannot take low-resolution data as input and produce high-resolution data as output, or take high-resolution data as input and produce low-resolution data as output. Meanwhile, the cognitive process of a deep learning neural network can only recognize features of the input data; it cannot recognize, from the input data, data of higher or lower resolution than the input data, so the cognitive process of a deep learning neural network also cannot obtain high-resolution data from low-resolution data or low-resolution data from high-resolution data.
Summary of the invention
The first object of the present invention is to provide a data correspondence judgment method based on two-way deep learning. The method can establish the preset correspondence between two classes of input data through two-way deep learning, and can then judge whether two classes of test input data have that correspondence.
The second object of the present invention is to provide a data generation method based on two-way deep learning. The method can establish the preset correspondence between two classes of input data through two-way deep learning, and can then generate one class of input data from the other class of input data.
The third object of the present invention is to provide a data correspondence judgment system based on two-way deep learning.
The fourth object of the present invention is to provide a data generation system based on two-way deep learning.
The first object of the present invention can be achieved by adopting the following technical scheme:
A data correspondence judgment method based on two-way deep learning, the method including:
establishing a two-way deep learning neural network and learning the preset correspondence between two classes of input data;
using the established two-way deep learning neural network to judge whether two classes of test input data have the preset correspondence.
Further, establishing the two-way deep learning neural network and learning the preset correspondence between the two classes of input data specifically includes:
obtaining first-class input data, second-class input data, and pairs of first-class and second-class input data that have the preset correspondence;
taking the first-class input data as the input of a first deep learning neural network and performing bottom-up unsupervised training;
taking the second-class input data as the input of a second deep learning neural network and performing bottom-up unsupervised training;
taking the first-class and second-class input data that have the preset correspondence as the inputs of the first and second deep learning neural networks respectively, obtaining the output of the first deep learning neural network as a first output and the output of the second deep learning neural network as a second output, and establishing the preset correspondence among the first-class input data, the second-class input data, the first output, and the second output;
taking the first-class input data and the second output that have the preset correspondence as the input data and output label of the first deep learning neural network respectively, and performing top-down supervised training;
taking the second-class input data and the first output that have the preset correspondence as the input data and output label of the second deep learning neural network respectively, and performing top-down supervised training;
stopping the top-down supervised training when the ratio of the amount of differing data to the amount of identical data between the first output and the second output that have the preset correspondence is less than a first preset threshold.
Further, using the established two-way deep learning neural network to judge whether the two classes of test input data have the preset correspondence specifically includes:
taking the first-class test input data as the input of the first deep learning neural network and obtaining, by deep learning, a third output of the first deep learning neural network;
taking the second-class test input data as the input of the second deep learning neural network and obtaining, by deep learning, a fourth output of the second deep learning neural network;
comparing the amount of differing data and the amount of identical data between the third output and the fourth output: if the ratio of the amount of differing data to the amount of identical data is less than a second preset threshold, the first-class test input data and the second-class test input data have the preset correspondence; otherwise they do not.
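The judgment step can be sketched as below. The two trained networks are replaced by hand-built stand-ins (`f1`, `f2`), and the tolerance and second preset threshold are illustrative assumptions; the point is only the comparison of the third and fourth outputs by the differing-to-identical ratio.

```python
import numpy as np

# Stand-ins for the two trained networks' forward passes: each maps its
# modality to the same shared code (illustrative, not a trained model).
f1 = lambda x: x[:, :1]            # recovers the latent factor from class 1
f2 = lambda y: -y[:, :1]           # recovers the latent factor from class 2

def corresponds(x, y, tol=0.1, second_threshold=0.2):
    o3, o4 = f1(x), f2(y)          # third output and fourth output
    differing = np.abs(o3 - o4) > tol
    identical = (~differing).sum()
    return differing.sum() / max(identical, 1) < second_threshold

z = np.array([[0.7]])
x_match = np.hstack([z, 2 * z])         # class-1 sample for latent factor z
y_match = np.hstack([-z, z + 1])        # matching class-2 sample
y_other = np.hstack([-(z + 5), z + 6])  # class-2 sample from a different factor

print(corresponds(x_match, y_match))    # matching pair: preset correspondence
print(corresponds(x_match, y_other))    # non-matching pair
```

The matching pair yields a zero difference ratio and is judged to have the preset correspondence; the mismatched pair exceeds the second preset threshold and is rejected.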
The second object of the present invention can be achieved by adopting the following technical scheme:
A data generation method based on two-way deep learning, the method including:
establishing a two-way deep learning neural network and learning the preset correspondence between two classes of input data;
using the established two-way deep learning neural network to generate one class of input data from the other class of input data.
As one embodiment, establishing the two-way deep learning neural network and learning the preset correspondence between the two classes of input data specifically includes:
obtaining first-class input data, second-class input data, and pairs of first-class and second-class input data that have the preset correspondence;
taking the first-class input data as the input of a first deep learning neural network and performing bottom-up unsupervised training;
taking the second-class input data as the input of a second deep learning neural network and performing bottom-up unsupervised training;
taking the first-class and second-class input data that have the preset correspondence as the inputs of the first and second deep learning neural networks respectively, obtaining the output of the first deep learning neural network as a first output and the output of the second deep learning neural network as a second output, and establishing the preset correspondence among the first-class input data, the second-class input data, the first output, and the second output;
taking the first-class input data and the second output that have the preset correspondence as the input data and output label of the first deep learning neural network respectively, and performing top-down supervised training;
taking the second-class input data and the first output that have the preset correspondence as the input data and output label of the second deep learning neural network respectively, and performing top-down supervised training;
stopping the top-down supervised training when the ratio of the amount of differing data to the amount of identical data between the first output and the second output that have the preset correspondence is less than a first preset threshold.
Further, using the established two-way deep learning neural network to generate one class of input data from the other specifically includes:
taking the first-class test input data as the input of the first deep learning neural network and obtaining, by deep learning, a third output of the first deep learning neural network;
taking the third output as the output label of the second deep learning neural network and obtaining, by reverse generation, second-class input data of the second deep learning neural network;
or
taking the second-class test input data as the input of the second deep learning neural network and obtaining, by deep learning, a fourth output of the second deep learning neural network;
taking the fourth output as the output label of the first deep learning neural network and obtaining, by reverse generation, first-class input data of the first deep learning neural network.
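Reverse generation can be sketched with linear stand-ins for the two trained networks, where running the second network backwards from an output label becomes an explicit pseudo-inverse. The matrices `A1` and `A2` are illustrative assumptions; a real deep network would generate top-down through its layers rather than by a closed-form inverse.

```python
import numpy as np

# Linear stand-ins for the trained networks' forward passes.
A1 = np.array([[1.0, 0.5]])    # network 1: first-class input (2-d) -> output (1-d)
A2 = np.array([[2.0, 1.0]])    # network 2: second-class input (2-d) -> output (1-d)

def generate_class2(x1):
    """Generate second-class input from first-class test input."""
    third_output = A1 @ x1                 # deep learning through network 1
    # Reverse generation: a least-norm x2 with A2 @ x2 == third_output.
    return np.linalg.pinv(A2) @ third_output

x1 = np.array([1.0, 2.0])
x2 = generate_class2(x1)
print(x2, A2 @ x2)   # generated second-class input reproduces the label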
As another embodiment, establishing the two-way deep learning neural network and learning the preset correspondence between the two classes of input data specifically includes:
obtaining first-class input data, second-class input data, and pairs of first-class and second-class input data that have the preset correspondence;
taking the first-class input data as the input of a first deep learning neural network and performing bottom-up unsupervised training;
taking the second-class input data as the input of a second deep learning neural network and performing bottom-up unsupervised training;
taking the first-class and second-class input data that have the preset correspondence as the inputs of the first and second deep learning neural networks respectively, obtaining through the cognitive process of deep learning the top-layer concept of the first deep learning neural network as a first top-layer concept and the top-layer concept of the second deep learning neural network as a second top-layer concept, establishing the preset correspondence among the first-class input data, the second-class input data, the first top-layer concept, and the second top-layer concept, and calculating the ratio of differing data to identical data between the first top-layer concept and the second top-layer concept;
taking the first-class input data and the second top-layer concept that have the preset correspondence as the input data and expected top-layer concept of the first deep learning neural network respectively, and performing top-down supervised training;
taking the second-class input data and the first top-layer concept that have the preset correspondence as the input data and expected top-layer concept of the second deep learning neural network respectively, and performing top-down supervised training;
stopping the top-down supervised training when the ratio of differing data to identical data between the first top-layer concept and the second top-layer concept that have the preset correspondence is less than a preset difference threshold.
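A minimal sketch of this embodiment under strong simplifying assumptions that the patent does not prescribe: single linear layers stand in for each network's cognitive pass, the toy data is noise-free, and the two supervised trainings are alternated (each network fitted by least squares to the other's current top-layer concept) until the concept difference ratio falls below the preset difference threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noise-free toy pairs sharing one latent factor (all sizes are illustrative).
z = rng.normal(size=(100, 1))
x1 = np.hstack([z, 2 * z])        # first-class input data
x2 = np.hstack([-z, z + 1])       # second-class input data

W1 = rng.normal(scale=0.5, size=(2, 1))
W2 = rng.normal(scale=0.5, size=(2, 1))

def diff_ratio(a, b, tol=0.1):
    differing = np.abs(a - b) > tol
    return differing.sum() / max((~differing).sum(), 1)

preset_diff_threshold = 0.05
for rnd in range(50):
    c1 = x1 @ W1                                   # first top-layer concept
    # Supervised training: network 2's expected top-layer concept is c1.
    W2 = np.linalg.lstsq(x2, c1, rcond=None)[0]
    c2 = x2 @ W2                                   # second top-layer concept
    # Supervised training: network 1's expected top-layer concept is c2.
    W1 = np.linalg.lstsq(x1, c2, rcond=None)[0]
    if diff_ratio(x1 @ W1, x2 @ W2) < preset_diff_threshold:
        break

print("concept difference ratio:", diff_ratio(x1 @ W1, x2 @ W2))
```

With this idealized linear data a single alternation already brings the two top-layer concepts into agreement; real deep networks would need many rounds of gradient-based supervised training.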
Further, using the established two-way deep learning neural network to generate one class of input data from the other specifically includes:
taking the first-class test input data as the input of the first deep learning neural network and obtaining, through the cognitive process of deep learning, a third top-layer concept of the first deep learning neural network;
taking the third top-layer concept as the top-layer concept of the second deep learning neural network and generating second-class input data of the second deep learning neural network, as the second-class input data corresponding to the first-class test input data;
or
taking the second-class test input data as the input of the second deep learning neural network and obtaining, by cognition, a fourth top-layer concept of the second deep learning neural network;
taking the fourth top-layer concept as the top-layer concept of the first deep learning neural network and generating first-class input data of the first deep learning neural network.
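Generation through a shared top-layer concept can be sketched with an encode/decode pair. Everything here is an illustrative stand-in (`enc1`, `dec2`, and all weights are invented): `enc1` plays the first network's cognitive (bottom-up) pass and `dec2` the second network's generative (top-down) pass.

```python
import numpy as np

enc1_W = np.array([[1.0], [0.5]])   # illustrative weights
dec2_W = np.array([[0.5, 2.0]])

def enc1(x1):
    """Cognition: first-class input -> top-layer concept."""
    return np.tanh(x1 @ enc1_W)

def dec2(concept):
    """Generation: top-layer concept -> second-class input."""
    return concept @ dec2_W

def generate_class2(x1):
    concept = enc1(x1)      # third top-layer concept, via cognition
    return dec2(concept)    # second-class input generated from that concept

x1 = np.array([[1.0, 2.0]])
print(generate_class2(x1))
```

The same pattern run the other way (the second network's encoder feeding the first network's decoder) gives the fourth-top-layer-concept branch of the method.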
The third object of the present invention can be achieved by adopting the following technical scheme:
A data correspondence judgment system based on two-way deep learning, the system including:
a first two-way deep learning neural network module, configured to establish a two-way deep learning neural network and learn the preset correspondence between two classes of input data;
a data correspondence judgment module, configured to use the established two-way deep learning neural network to judge whether two classes of test input data have the preset correspondence.
Further, the first two-way deep learning neural network module specifically includes:
a first acquisition unit, configured to obtain first-class input data, second-class input data, and pairs of first-class and second-class input data that have the preset correspondence;
a first unsupervised training unit, configured to take the first-class input data as the input of a first deep learning neural network and perform bottom-up unsupervised training;
a second unsupervised training unit, configured to take the second-class input data as the input of a second deep learning neural network and perform bottom-up unsupervised training;
a first preset correspondence establishing unit, configured to take the first-class and second-class input data that have the preset correspondence as the inputs of the first and second deep learning neural networks respectively, obtain the output of the first deep learning neural network as a first output and the output of the second deep learning neural network as a second output, and establish the preset correspondence among the first-class input data, the second-class input data, the first output, and the second output;
a first supervised training unit, configured to take the first-class input data and the second output that have the preset correspondence as the input data and output label of the first deep learning neural network respectively, and perform top-down supervised training;
a second supervised training unit, configured to take the second-class input data and the first output that have the preset correspondence as the input data and output label of the second deep learning neural network respectively, and perform top-down supervised training;
a first supervised training stopping unit, configured to stop the top-down supervised training when the ratio of the amount of differing data to the amount of identical data between the first output and the second output that have the preset correspondence is less than the first preset threshold.
Further, the data correspondence judgment module specifically includes:
a first deep learning unit, configured to take the first-class test input data as the input of the first deep learning neural network and obtain, by deep learning, a third output of the first deep learning neural network;
a second deep learning unit, configured to take the second-class test input data as the input of the second deep learning neural network and obtain, by deep learning, a fourth output of the second deep learning neural network;
a comparing unit, configured to compare the amount of differing data and the amount of identical data between the third output and the fourth output: if the ratio of the amount of differing data to the amount of identical data is less than the second preset threshold, the first-class test input data and the second-class test input data have the preset correspondence; otherwise they do not.
The fourth object of the present invention can be achieved by adopting the following technical scheme:
A data generation system based on two-way deep learning, the system including:
a second two-way deep learning neural network module, configured to establish a two-way deep learning neural network and learn the preset correspondence between two classes of input data;
a data generation module, configured to use the established two-way deep learning neural network to generate one class of input data from the other class of input data.
As one embodiment, the second two-way deep learning neural network module specifically includes:
a second acquisition unit, configured to obtain first-class input data, second-class input data, and pairs of first-class and second-class input data that have the preset correspondence;
a third unsupervised training unit, configured to take the first-class input data as the input of the first deep learning neural network and perform bottom-up unsupervised training;
a fourth unsupervised training unit, configured to take the second-class input data as the input of the second deep learning neural network and perform bottom-up unsupervised training;
a second preset correspondence establishing unit, configured to take the first-class and second-class input data that have the preset correspondence as the inputs of the first and second deep learning neural networks respectively, obtain the output of the first deep learning neural network as a first output and the output of the second deep learning neural network as a second output, and establish the preset correspondence among the first-class input data, the second-class input data, the first output, and the second output;
a third supervised training unit, configured to take the first-class input data and the second output that have the preset correspondence as the input data and output label of the first deep learning neural network respectively, and perform top-down supervised training;
a fourth supervised training unit, configured to take the second-class input data and the first output that have the preset correspondence as the input data and output label of the second deep learning neural network respectively, and perform top-down supervised training;
a second supervised training stopping unit, configured to stop the top-down supervised training when the ratio of the amount of differing data to the amount of identical data between the first output and the second output that have the preset correspondence is less than the first preset threshold.
Further, the data generation module specifically includes:
a third deep learning unit, configured to take the first-class test input data as the input of the first deep learning neural network and obtain, by deep learning, a third output of the first deep learning neural network;
a first reverse generation unit, configured to take the third output as the output label of the second deep learning neural network and obtain, by reverse generation, second-class input data of the second deep learning neural network;
or
a fourth deep learning unit, configured to take the second-class test input data as the input of the second deep learning neural network and obtain, by deep learning, a fourth output of the second deep learning neural network;
a second reverse generation unit, configured to take the fourth output as the output label of the first deep learning neural network and obtain, by reverse generation, first-class input data of the first deep learning neural network.
As another embodiment, the second two-way deep learning neural network module specifically includes:
a third acquisition unit, configured to obtain first-class input data, second-class input data, and pairs of first-class and second-class input data that have the preset correspondence;
a fifth unsupervised training unit, configured to take the first-class input data as the input of the first deep learning neural network and perform bottom-up unsupervised training;
a sixth unsupervised training unit, configured to take the second-class input data as the input of the second deep learning neural network and perform bottom-up unsupervised training;
a third preset correspondence establishing unit, configured to take the first-class and second-class input data that have the preset correspondence as the inputs of the first and second deep learning neural networks respectively, obtain through the cognitive process of deep learning the top-layer concept of the first deep learning neural network as a first top-layer concept and the top-layer concept of the second deep learning neural network as a second top-layer concept, establish the preset correspondence among the first-class input data, the second-class input data, the first top-layer concept, and the second top-layer concept, and calculate the ratio of differing data to identical data between the first top-layer concept and the second top-layer concept;
a fifth supervised training unit, configured to take the first-class input data and the second top-layer concept that have the preset correspondence as the input data and expected top-layer concept of the first deep learning neural network respectively, and perform top-down supervised training;
a sixth supervised training unit, configured to take the second-class input data and the first top-layer concept that have the preset correspondence as the input data and expected top-layer concept of the second deep learning neural network respectively, and perform top-down supervised training;
a third supervised training stopping unit, configured to stop the top-down supervised training when the ratio of differing data to identical data between the first top-layer concept and the second top-layer concept that have the preset correspondence is less than the preset difference threshold.
Further, the data generation module specifically includes:
a first deep learning cognition unit, configured to take the first-class test input data as the input of the first deep learning neural network and obtain, through the cognitive process of deep learning, a third top-layer concept of the first deep learning neural network;
a first generation unit, configured to take the third top-layer concept as the top-layer concept of the second deep learning neural network and generate second-class input data of the second deep learning neural network, as the second-class input data corresponding to the first-class test input data;
or
a second deep learning cognition unit, configured to take the second-class test input data as the input of the second deep learning neural network and obtain, by cognition, a fourth top-layer concept of the second deep learning neural network;
a second generation unit, configured to take the fourth top-layer concept as the top-layer concept of the first deep learning neural network and generate first-class input data of the first deep learning neural network.
Compared with the prior art, the present invention has the following advantageous effects:
1. The data correspondence judgment method of the invention can establish the preset correspondence between two classes of input data through two-way deep learning, and can then judge whether two classes of test input data have that correspondence. For example, it can establish the correspondence between head portrait data and voice data (belonging to the same person) through two-way deep learning, and can then judge whether a given head portrait in the head portrait data and a given voice sample in the voice data belong to the same person.
2. The data generation method of the invention can establish the preset correspondence between two classes of input data through two-way deep learning, and can then generate one class of input data from the other. For example, it can establish the correspondence between head portrait data and voice data (belonging to the same person) through two-way deep learning, and can then generate the corresponding voice from a given head portrait or the corresponding head portrait from a given voice sample.
3. The data generation method of the invention can likewise establish, through two-way deep learning, the correspondence between low-resolution or defective image data and high-resolution or defect-free image data (belonging to the same head portrait), and can then generate a high-resolution or defect-free image from a low-resolution or defective head portrait, or a low-resolution or defective image from a high-resolution or defect-free head portrait.
Description of the drawings
Fig. 1 is the flow chart of the data correspondence judgment method of the embodiment of the present invention 1.
Fig. 2 is the flow chart for establishing two-way deep learning neural network of the embodiment of the present invention 1.
Fig. 3 is the flow chart of judging whether two classes of test input data have the preset correspondence in embodiment 1 of the present invention.
Fig. 4 is the flow chart of the data creation method of the embodiment of the present invention 2.
Fig. 5 is the flow chart that the second class input data is generated according to the first class testing input data of the embodiment of the present invention 2.
Fig. 6 is the flow chart that first kind input data is generated according to the second class testing input data of the embodiment of the present invention 2.
Fig. 7 is the flow chart for establishing two-way deep learning neural network of the embodiment of the present invention 3.
Fig. 8 is the flow chart that the second class input data is generated according to the first class testing input data of the embodiment of the present invention 3.
Fig. 9 is the flow chart that first kind input data is generated according to the second class testing input data of the embodiment of the present invention 3.
Figure 10 is the structure diagram of the data correspondence judgment system of embodiment 4 of the present invention.
Figure 11 is the structure diagram of the first two-way deep learning neural network module of embodiment 4 of the present invention.
Figure 12 is the structure diagram of the data correspondence judgment module of embodiment 4 of the present invention.
Figure 13 is the structure diagram of the data generation system of embodiment 5 of the present invention.
Figure 14 is the structure diagram of the second two-way deep learning neural network module of embodiment 5 of the present invention.
Figure 15 is the structure diagram of one generation scheme of the data generation module of embodiment 5 of the present invention.
Figure 16 is the structure diagram of another generation scheme of the data generation module of embodiment 5 of the present invention.
Figure 17 is the structure diagram of the two-way deep learning neural network module of embodiment 6 of the present invention.
Figure 18 is the structure diagram of one generation scheme of the data generation module of embodiment 6 of the present invention.
Figure 19 is the structure diagram of another generation scheme of the data generation module of embodiment 6 of the present invention.
Specific implementation modes
The present invention will now be described in further detail with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment 1:
Deep learning is explained, and its establishment process described, as follows:
The computation involved in producing an output from an input can be represented by a flow graph: a flow graph is a graph that represents a computation, in which each node represents a basic computation and a computed value, and the result of each computation is fed to the values of that node's child nodes. Consider the set of computations allowed at each node, together with the possible graph structures; this defines a family of functions. Input nodes have no parent nodes, and output nodes have no child nodes.
A particular attribute of such a flow graph is its depth: the length of the longest path from an input to an output.
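The depth defined above can be computed directly from a small flow graph. The following is an illustrative sketch; the four-node toy graph is a made-up example, not one from this patent:

```python
from functools import lru_cache

# Toy flow graph as an adjacency list: each edge points from a node to the
# nodes computed from it. Input nodes have no parents; output nodes have
# no children, matching the definitions in the text.
graph = {
    "x": ["h1"],          # input node
    "h1": ["h2", "y"],
    "h2": ["y"],
    "y": [],              # output node
}

def depth(g):
    """Length (in edges) of the longest path from any input to any output."""
    @lru_cache(maxsize=None)
    def longest_from(node):
        children = g[node]
        if not children:
            return 0
        return 1 + max(longest_from(c) for c in children)
    all_children = {c for cs in g.values() for c in cs}
    inputs = set(g) - all_children       # nodes with no parent
    return max(longest_from(n) for n in inputs)

print(depth(graph))  # longest path x -> h1 -> h2 -> y has 3 edges
```

For a layered neural network viewed as a flow graph, this quantity is simply the number of layers traversed from input to output.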
Viewing the learning structure as a network, the core idea of deep learning is as follows:
Step 1: bottom-up unsupervised training
1) Build the single-layer neurons layer by layer.
2) Tune each layer with the wake-sleep algorithm, adjusting only one layer at a time, layer by layer.
This process can be regarded as a feature-learning process, and it is the part that differs most from a traditional neural network.
The wake-sleep algorithm is as follows:
1) Wake phase: the cognitive process. The input features (Input) of the lower layer and the upward cognition (Encoder) weights generate each layer's abstract representation (Code); the current generation (Decoder) weights then produce a reconstruction (Reconstruction). The residual between the input features and the reconstruction is computed, and gradient descent is used to modify the downward generation (Decoder) weights between the layers. In other words: "if reality differs from what I imagine, change my generation weights so that what I imagine becomes like reality."
2) Sleep phase: the generative process. The upper-layer concept (Code) and the downward generation (Decoder) weights generate the state of the lower layer, i.e. an abstract scene, and the cognition (Encoder) weights then re-recognize it. The residual between the initial upper-layer concept and the concept recognized from the created scene is used, with gradient descent, to modify the upward cognition (Encoder) weights between the layers. In other words: "if the scene in my dream is not the corresponding concept in my brain, change my cognition weights so that, in my view, this scene is exactly that concept."
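The two phases above can be sketched for a single layer with simple linear/tanh maps. This is an illustrative toy only; the dimensions, learning rate, and the separation of the two phases into distinct loops are assumptions for the sketch, not parameters from this patent:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 8, 4, 0.1
W_enc = rng.normal(0, 0.1, (n_hid, n_vis))   # upward cognition (Encoder)
W_dec = rng.normal(0, 0.1, (n_vis, n_hid))   # downward generation (Decoder)

def wake_step(x):
    """Wake phase: recognize upward, reconstruct downward, then adjust the
    generation weights so what is imagined matches reality."""
    global W_dec
    code = np.tanh(W_enc @ x)             # abstract representation (Code)
    recon = W_dec @ code                  # Reconstruction
    err = x - recon                       # residual vs. input features
    W_dec += lr * np.outer(err, code)     # gradient step on Decoder weights
    return float(np.mean(err ** 2))

def sleep_step():
    """Sleep phase: dream a code, generate an abstract scene, then adjust
    the cognition weights so the scene is recognized as that code."""
    global W_enc
    code = rng.normal(0, 1, n_hid)        # upper-layer concept
    scene = W_dec @ code                  # generated lower-layer state
    code_hat = np.tanh(W_enc @ scene)     # what cognition makes of the scene
    err = code - code_hat
    W_enc += lr * np.outer(err * (1 - code_hat ** 2), scene)

x = rng.normal(0, 1, n_vis)
wake_errs = [wake_step(x) for _ in range(200)]
for _ in range(200):
    sleep_step()
print(wake_errs[-1] < wake_errs[0])       # reconstruction residual shrinks
```

In the full algorithm the two phases alternate layer by layer; they are separated here only so that the wake-phase reconstruction residual can be checked in isolation.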
Specifically, the first layer is trained first with unlabeled data, learning its parameters first (this layer can be seen as the hidden layer of a three-layer neural network that minimizes the difference between output and input). Owing to the limits on model capacity and the sparsity constraints, the resulting model learns the structure of the data itself and thus obtains features with more expressive power than the input. After layer n-1 has been learned, its output is used as the input for training layer n, and the parameters of each layer are obtained in turn.
Step 2: top-down supervised training
In this step, on the basis of the layer parameters learned in step 1, a classifier (e.g. logistic regression, SVM, etc.) is added at the topmost coding layer; the parameters of the whole network are then fine-tuned by gradient descent through supervised training on labeled data.
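The two steps — greedy bottom-up pretraining, then a supervised classifier on the topmost codes — can be sketched with linear autoencoder layers. Everything here (dimensions, learning rates, the toy labels, and the use of plain logistic regression without full backpropagation fine-tuning) is an assumption for illustration, not this patent's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_layer(X, n_hid, lr=0.05, epochs=300):
    """One greedy layer: an untied linear autoencoder trained to minimize
    the output/input difference. Returns the encoder weights and the error
    curve; the codes X @ W1.T become the next layer's input."""
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.1, (n_hid, n_in))     # encoder
    W2 = rng.normal(0, 0.1, (n_hid, n_in))     # decoder
    errs = []
    for _ in range(epochs):
        H = X @ W1.T                           # encode
        R = H @ W2                             # decode
        E = X - R                              # reconstruction residual
        errs.append(float(np.mean(E ** 2)))
        W2 += lr * H.T @ E / len(X)            # gradient step on decoder
        W1 += lr * (E @ W2.T).T @ X / len(X)   # gradient step on encoder
    return W1, errs

# Step 1: greedy bottom-up pretraining; layer n trains on layer n-1's output.
X = rng.normal(0, 1, (40, 8))
W_a, errs_a = train_layer(X, 5)
H_a = X @ W_a.T
W_b, errs_b = train_layer(H_a, 3)
H_b = H_a @ W_b.T

# Step 2: add a classifier (logistic regression here) on the topmost codes
# and train it on labeled data; whole-network fine-tuning is omitted.
y = (X[:, 0] > 0).astype(float)                # toy labels (assumption)
w, b = np.zeros(H_b.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(H_b @ w + b)))
    g = p - y
    w -= 0.05 * H_b.T @ g / len(y)
    b -= 0.05 * float(g.mean())

print(errs_a[-1] < errs_a[0], errs_b[-1] < errs_b[0])
```

Each layer's reconstruction error falls during its own unsupervised phase, which is what makes the learned parameters a useful initialization for the supervised step.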
The first step of deep learning is essentially a network-parameter initialization process. Unlike the random initialization of a traditional neural network, the initial values of a deep learning neural network are obtained through unsupervised training on the structure of the input data; this initial value is therefore closer to the global optimum, so better results can be obtained.
Judging whether two classes of input data have a correspondence has practical application value. For example, from the head image and voice of a person in a video, one can judge whether the head image really belongs to that person, or whether the voice is dubbed, and thereby determine whether the video has been forged or tampered with. As another example, the cry of an animal can be generated from its photo, or the photo of an animal generated from its cry.
As shown in Figure 1, this embodiment provides a data correspondence judgment method based on two-way deep learning, which includes the following steps:
S101. Establish a two-way deep learning neural network and learn the preset correspondence between two classes of input data.
This step is shown in Fig. 2 and specifically includes:
S1011. Obtain first-class input data, second-class input data, and pairs of first-class and second-class input data having the preset correspondence.
In the first example of this embodiment, the first-class input data are head images, the second-class input data are voices, and the preset correspondence is "belonging to the same person"; multiple head images, multiple voices, and multiple head image/voice pairs of the same person are obtained.
In the second example of this embodiment, the first-class input data are low-resolution images, the second-class input data are high-resolution images, and the preset correspondence is "belonging to the same image"; multiple low-resolution images, multiple high-resolution images, and multiple low-resolution/high-resolution pairs of the same image are obtained.
S1012. Use the first-class input data as the input of the first deep learning neural network and carry out bottom-up unsupervised training.
In the first example of this embodiment, each head image is used as the input of the first deep learning neural network for bottom-up unsupervised training.
In the second example of this embodiment, each low-resolution image is used as the input of the first deep learning neural network for bottom-up unsupervised training.
S1013. Use the second-class input data as the input of the second deep learning neural network and carry out bottom-up unsupervised training.
In the first example of this embodiment, each voice is used as the input of the second deep learning neural network for bottom-up unsupervised training.
In the second example of this embodiment, each high-resolution image is used as the input of the second deep learning neural network for bottom-up unsupervised training.
S1014. Use the first-class and second-class input data having the preset correspondence as the inputs of the first and second deep learning neural networks respectively; take the output of the first deep learning neural network as the first output and the output of the second deep learning neural network as the second output; and establish the preset correspondence among the first-class input data, the second-class input data, the first output, and the second output.
In the first example of this embodiment, for each head image/voice pair of the same person, the head image and the voice are used as the inputs of the first and second deep learning neural networks respectively; the output of the first deep learning neural network is taken as the first output and the output of the second deep learning neural network as the second output; and the preset correspondence "belonging to the same person" is established among the head image, the voice, the first output, and the second output.
In the second example of this embodiment, for each low-resolution/high-resolution pair of the same image, the two images are used as the inputs of the first and second deep learning neural networks respectively; the output of the first deep learning neural network is taken as the first output and the output of the second deep learning neural network as the second output; and the preset correspondence "belonging to the same image" is established among the low-resolution image, the high-resolution image, the first output, and the second output.
S1015. Use the first-class input data having the preset correspondence and the second output as the input data and the output label of the first deep learning neural network respectively, and carry out top-down supervised training.
In the first example of this embodiment, each pair consisting of a person's head image and the second output is used as the input data and output label of the first deep learning neural network for top-down supervised training.
In the second example of this embodiment, each pair consisting of a low-resolution image and the second output is used as the input data and output label of the first deep learning neural network for top-down supervised training.
S1016. Use the second-class input data having the preset correspondence and the first output as the input data and the output label of the second deep learning neural network respectively, and carry out top-down supervised training.
In the first example of this embodiment, each pair consisting of a person's voice and the first output is used as the input data and output label of the second deep learning neural network for top-down supervised training.
S1017. Judge whether, for the data having the preset correspondence, the ratio of the amount of differing data to the amount of identical data between the first output and the second output is less than a first preset threshold; if so, stop the top-down supervised training; if not, repeat steps S1014–S1016.
In the first example of this embodiment, judge whether, for each pair belonging to the same person, the ratio of differing to identical data between the first and second outputs is less than the first preset threshold; if so, stop the top-down supervised training; if not, repeat steps S1014–S1016.
In the second example of this embodiment, judge whether, for each pair belonging to the same image, the ratio of differing to identical data between the first and second outputs is less than the first preset threshold; if so, stop the top-down supervised training; if not, repeat steps S1014–S1016.
S102. Use the established two-way deep learning neural network to judge whether two classes of test input data have the preset correspondence.
Step S102 is shown in Figure 3 and specifically includes:
S1021. Use the first-class test input data as the input of the first deep learning neural network; deep learning yields the third output of the first deep learning neural network.
In the first example of this embodiment, the test head image P is used as the input of the first deep learning neural network, and deep learning yields the third output of the first deep learning neural network.
In the second example of this embodiment, the low-resolution test image P is used as the input of the first deep learning neural network, and deep learning yields the third output of the first deep learning neural network.
S1022. Use the second-class test input data as the input of the second deep learning neural network; deep learning yields the fourth output of the second deep learning neural network.
In the first example of this embodiment, the test voice Q is used as the input of the second deep learning neural network, and deep learning yields the fourth output of the second deep learning neural network.
In the second example of this embodiment, the high-resolution test image Q is used as the input of the second deep learning neural network, and deep learning yields the fourth output of the second deep learning neural network.
S1023. Compare the amount of differing data and the amount of identical data between the third output and the fourth output; if the ratio of differing to identical data is less than a second preset threshold, the first-class test input data and the second-class test input data have the preset correspondence; otherwise they do not.
In the first example of this embodiment, the third and fourth outputs are compared; if the ratio of differing to identical data is less than the second preset threshold, image P and voice Q correspond to the same person, otherwise they correspond to different people. By default, the second preset threshold is identical to the first preset threshold.
In the second example of this embodiment, the third and fourth outputs are compared; if the ratio of differing to identical data is less than the second preset threshold, low-resolution image P and high-resolution image Q correspond to the same image, otherwise they correspond to different images. By default, the second preset threshold is identical to the first preset threshold.
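The comparison in S1023 reduces to an elementwise difference ratio checked against the second preset threshold. A minimal sketch; the binary top-level codes and the threshold value 0.1 are assumptions for illustration:

```python
import numpy as np

def has_correspondence(third, fourth, threshold=0.1):
    """S1023: correspondence holds when the ratio of differing elements
    to identical elements is below the second preset threshold."""
    third, fourth = np.asarray(third), np.asarray(fourth)
    diff = int(np.count_nonzero(third != fourth))
    same = third.size - diff
    if same == 0:
        return False              # everything differs: no correspondence
    return diff / same < threshold

third = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0])   # network 1 output
fourth = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1])  # one element differs
print(has_correspondence(third, fourth))      # 1/11 < 0.1 -> True
print(has_correspondence(third, 1 - third))   # all elements differ -> False
```

Note that the ratio uses identical data, not total data, as the denominator, exactly as the step describes; a fully differing pair is rejected outright.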
Embodiment 2:
This embodiment provides a data generation method based on two-way deep learning, which is of great practical significance. For example, a recording of a suspect (such as one committing telephone fraud) may exist even though nobody has seen the suspect's face; if the suspect's head image can be generated from the voice, it plays a particularly important role in apprehending the suspect. As another example, the cry of an animal can be generated from its photo, or the photo of an animal generated from its cry.
As shown in Figure 4, the data generation method based on two-way deep learning of this embodiment includes the following steps:
S401. Establish a two-way deep learning neural network and learn the preset correspondence between two classes of input data.
The detailed process of this step is the same as in embodiment 1 and is not repeated here.
S402. Use the established two-way deep learning neural network to generate one class of input data from the other class of input data.
Step S402 is shown in Figure 5 and specifically includes:
S4021. Use the first-class test input data as the input of the first deep learning neural network; deep learning yields the third output of the first deep learning neural network.
In the first example of this embodiment, the test head image is used as the input of the first deep learning neural network, and deep learning yields the third output of the first deep learning neural network.
In the second example of this embodiment, the low-resolution test image is used as the input of the first deep learning neural network, and deep learning yields the third output of the first deep learning neural network.
S4022. Use the third output as the output label of the second deep learning neural network, and reversely generate the second-class input data of the second deep learning neural network.
Reversely generating the second-class input data of the second deep learning neural network can be realized in the following way:
L1. Match the output label against all output labels of the second deep learning neural network.
L2. Take the second-class input data corresponding to the successfully matched output label as the second-class input data of the second deep learning neural network.
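The first reverse-generation variant (steps L1–L2) amounts to a label lookup over the second network's stored training pairs. A hypothetical sketch; the dictionary, the tuple labels, and the file names are all invented for illustration:

```python
# Hypothetical lookup table: each output label of the second deep learning
# neural network mapped to the class-2 input that produced it.
label_to_input = {
    (1, 0, 1): "voice_A.wav",
    (0, 1, 1): "voice_B.wav",
    (1, 1, 0): "voice_C.wav",
}

def reverse_generate(target_label):
    """L1: match the target label against all stored output labels.
    L2: return the class-2 input of the successfully matched label."""
    key = tuple(target_label)
    return label_to_input.get(key)   # None when no label matches

print(reverse_generate([0, 1, 1]))   # matched label -> voice_B.wav
print(reverse_generate([0, 0, 0]))   # no match -> None
```

This exact-match form only returns inputs seen during training; the genetic-algorithm variant below is what allows genuinely new inputs to be generated.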
Reversely generating the second-class input data of the second deep learning neural network can also be realized in the following way:
L1. Use all top-layer concepts in the top-layer concept big data, or a selected portion of them, as the current population of a genetic algorithm.
L2. Use each top-layer concept in the current population as the input of the classifier in the second deep learning neural network, and compute the output label corresponding to that top-layer concept through the classifier. Here the topmost coding layer of the deep learning neural network is a classifier (e.g. logistic regression, SVM, etc.), and the top-layer concept is the output of the last hidden layer and the input of the classifier.
L3. Use the similarity between each top-layer concept's corresponding output label and the output label of the second-class input data as the fitness of that top-layer concept in the current population, where each top-layer concept's corresponding output label was obtained in the previous step through the topmost coding layer (the classifier) of the deep learning neural network.
L4. Breed from the current population (including offspring mutation) to obtain a new current population, then return to step L1 and repeat until a preset condition is met (for example, the number of repetitions exceeds a preset count, or the number of individuals in the population whose fitness exceeds a preset value exceeds a preset number of individuals). Because the individuals in the current population are top-layer concepts, the new individuals obtained by breeding from the current population (including offspring mutation) are also top-layer concepts. Breeding consists of three steps: selection, crossover, and mutation; the current population becomes the new current population after the selection, crossover, and mutation operators. For example, if the top-layer concepts are feature images of animal pictures, a predetermined number of feature images is selected from the current population in descending order of fitness, the selected feature images are pairwise weighted-averaged to obtain new feature images, a few pixel values in some of the new feature images are complemented, and all the new individuals obtained are added to the new current population.
L5. Use the top-layer concept with the greatest fitness in the final current population, or each top-layer concept whose fitness exceeds the preset value (there may be several), as the top-layer concept of the second deep learning neural network, and generate the second-class input data of the second deep learning neural network.
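Steps L1–L5 can be sketched as a small genetic algorithm over concept vectors. This is an illustrative toy: the classifier is replaced by a fixed linear map, and the fitness function, crossover-by-averaging, noise mutation, and all dimensions are assumptions, not this patent's concrete operators:

```python
import numpy as np

rng = np.random.default_rng(2)

def similarity(label_a, label_b):
    """L3's fitness: higher when the two output labels are more alike."""
    return -float(np.sum((np.asarray(label_a) - np.asarray(label_b)) ** 2))

def evolve_concepts(classifier, target_label, pop, generations=60,
                    keep=6, mut_rate=0.1):
    """L1-L5 over top-layer concepts: selection by fitness, crossover by
    pairwise averaging, mutation by small noise; parents are kept, so the
    best individual never degrades across generations."""
    for _ in range(generations):
        fit = np.array([similarity(classifier(c), target_label) for c in pop])
        parents = pop[np.argsort(fit)[::-1][:keep]]           # selection
        children = np.array([(parents[i] + parents[j]) / 2    # crossover
                             for i in range(keep) for j in range(i + 1, keep)])
        mask = rng.random(children.shape) < mut_rate          # mutation
        children = children + mask * rng.normal(0, 0.3, children.shape)
        pop = np.vstack([parents, children])                  # new population
    fit = np.array([similarity(classifier(c), target_label) for c in pop])
    return pop[int(np.argmax(fit))]

# Toy stand-in for the topmost classifier: a fixed linear map (assumption).
A = rng.normal(0, 1, (3, 5))
classify = lambda concept: A @ concept

target = classify(rng.normal(0, 1, 5))    # label of a hidden "true" concept
pop0 = rng.normal(0, 1, (20, 5))
best = evolve_concepts(classify, target, pop0)

init_best = max(similarity(classify(c), target) for c in pop0)
print(similarity(classify(best), target) >= init_best)  # elitism guarantees this
```

The returned `best` concept plays the role of the fittest top-layer concept in L5, which the generative (decoder) path would then expand into second-class input data.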
In the first example of this embodiment, the third output is used as the output label of the second deep learning neural network, and the input voice of the second deep learning neural network is reversely generated as the voice corresponding to the test head image.
In the second example of this embodiment, the third output is used as the output label of the second deep learning neural network, and the input data of the second deep learning neural network is reversely generated as the high-resolution image corresponding to the low-resolution test image.
Or
Step S402 is shown in Fig. 6 and specifically includes the following steps:
S4023. Use the second-class test input data as the input of the second deep learning neural network; deep learning yields the fourth output of the second deep learning neural network.
In the first example of this embodiment, the test voice is used as the input of the second deep learning neural network, and deep learning yields the fourth output of the second deep learning neural network.
In the second example of this embodiment, the high-resolution test image is used as the input of the second deep learning neural network, and deep learning yields the fourth output of the second deep learning neural network.
S4024. Use the fourth output as the output label of the first deep learning neural network, and reversely generate the first-class input data of the first deep learning neural network.
In the first example of this embodiment, the fourth output is used as the output label of the first deep learning neural network, and the input head image of the first deep learning neural network is reversely generated as the head image corresponding to the test voice.
In the second example of this embodiment, the fourth output is used as the output label of the first deep learning neural network, and the input image of the first deep learning neural network is reversely generated as the low-resolution image corresponding to the high-resolution test image.
Embodiment 3:
With two-way generative deep learning technology, two-way deep learning can establish the preset correspondence between first-resolution input data and second-resolution input data, and can then generate the second input data in the second-resolution input data from the first input data in the first-resolution input data, or generate the first input data in the first-resolution input data from the second input data in the second-resolution input data. In reality, however, mutually corresponding high-resolution and low-resolution sample data are often unavailable; there may be only high-resolution data samples or only low-resolution data samples. The existing two-way generative deep learning technology, which requires mutually corresponding high-resolution and low-resolution sample data, therefore cannot realize inputting low-resolution data to obtain a high-resolution output, or inputting high-resolution data to obtain a low-resolution output.
Moreover, in two-way generative deep learning technology, an output label is first obtained from the input data of one class, and the input data of the other class is then obtained from that output label. There are generally fewer distinct output labels than distinct input data, so the cognitive process from input data to output label is convergent (many to few), while the generative process from output label to input data is divergent (few to many). The cognitive process from low-resolution data to output label is thus convergent (many to few), and the generative process from output label to high-resolution data is divergent (few to many). As a result, inputting a low-resolution image of a pig yields the output label "pig", and the label "pig" then yields a high-resolution image of a pig, but the pigs in the two images are not the same pig: different pigs all carry the label "pig", yet their appearances differ in the details. Therefore, although the existing two-way generative deep learning neural network can obtain high-resolution data from low-resolution data, or low-resolution data from high-resolution data, with the output labels of the two kept consistent, it cannot guarantee that the data details of the two are also consistent; this obviously cannot satisfy the demand for detail consistency when converting data resolution. Similarly, the prior art cannot realize the conversion between defect-free data and defective data.
The general steps of the data generation method based on two-way deep learning of this embodiment are the same as in embodiment 2 above; the difference lies in the detailed processes of steps S401 and S402. The application is of great significance. For example, increasing or decreasing the resolution of data such as photos, remote-sensing images, voices, and point clouds makes it easier for people to examine details or the overall picture, and helps computers process the data with more refinement or with less computing resource. As another example, defective data such as photos, remote-sensing images, voices, and point clouds with occlusions can be repaired.
Step S401 of this embodiment is shown in Fig. 7 and specifically includes:
S4011. Obtain first-class input data, second-class input data, and pairs of first-class and second-class input data having the preset correspondence.
In the first example of this embodiment, the first-class input data are head images, the second-class input data are voices, and the preset correspondence is "belonging to the same person"; multiple head images, multiple voices, and multiple head image/voice pairs of the same person are obtained.
In the second example of this embodiment, the first-class input data are low-resolution images, the second-class input data are high-resolution images, and the preset correspondence is "belonging to the same image"; multiple low-resolution images, multiple high-resolution images, and multiple low-resolution/high-resolution pairs of the same image are obtained.
In the third example of this embodiment, the first-class input data are defective data, the second-class input data are defect-free data, and the preset correspondence is "belonging to the same data"; multiple defective data, multiple defect-free data, and multiple defective/defect-free pairs of the same data are obtained.
S4012. Use the first-class input data as the input of the first deep learning neural network and carry out bottom-up unsupervised training.
In the first example of this embodiment, each head image is used as the input of the first deep learning neural network for bottom-up unsupervised training.
In the second example of this embodiment, each low-resolution image is used as the input of the first deep learning neural network for bottom-up unsupervised training.
In the third example of this embodiment, each defective datum is used as the input of the first deep learning neural network for bottom-up unsupervised training.
S4013. Use the second-class input data as the input of the second deep learning neural network and carry out bottom-up unsupervised training.
In the first example of this embodiment, each voice is used as the input of the second deep learning neural network for bottom-up unsupervised training.
In the second example of this embodiment, each high-resolution image is used as the input of the second deep learning neural network for bottom-up unsupervised training.
In the third example of this embodiment, each defect-free datum is used as the input of the second deep learning neural network for bottom-up unsupervised training.
S4014, the first kind input data that will be provided with default correspondence and the second class input data are respectively as first
The input of the input and the second deep learning neural network of deep learning neural network, is obtained by the cognitive process of deep learning
The top layer concept of first deep learning neural network obtains second as the first top layer concept, by the cognitive process of deep learning
The top layer concept of deep learning neural network is as the second top layer concept, in first kind input data, the second class input data,
It is established between one top layer concept, the second top layer concept and presets correspondence, and calculate the first top layer concept therein and the second top
The layer variance data of concept and the ratio of identical data.
First example of the present embodiment, using the head portrait of every a pair of corresponding same people and voice as the first depth
It is deep to obtain first by the cognitive process of deep learning for the input for practising the input and the second deep learning neural network of neural network
The top layer concept of learning neural network is spent as the first top layer concept, and the second depth is obtained by the cognitive process of deep learning
The top layer concept of neural network is practised as the second top layer concept, on the head portrait, the voice, the first top layer concept, second top
It is established between layer concept and presets correspondence " corresponding same people ", and calculate the first top layer concept therein and the second top layer concept
Variance data and identical data ratio.
In the second example of this embodiment, for each pair corresponding to the same image, the low-resolution image and the high-resolution image are used as the inputs of the first deep learning neural network and the second deep learning neural network respectively. The top-layer concept of the first deep learning neural network obtained through the cognitive process of deep learning is taken as the first top-layer concept, and the top-layer concept of the second deep learning neural network obtained through the cognitive process of deep learning is taken as the second top-layer concept. The preset correspondence "corresponding to the same image" is established among the low-resolution image, the high-resolution image, the first top-layer concept and the second top-layer concept, and the ratio of differing data to identical data between the first top-layer concept and the second top-layer concept is calculated.
In the third example of this embodiment, for each pair corresponding to the same data, the defective data and the defect-free data are used as the inputs of the first deep learning neural network and the second deep learning neural network respectively. The top-layer concept of the first deep learning neural network obtained through the cognitive process of deep learning is taken as the first top-layer concept, and the top-layer concept of the second deep learning neural network obtained through the cognitive process of deep learning is taken as the second top-layer concept. The preset correspondence "corresponding to the same data" is established among the defective data, the defect-free data, the first top-layer concept and the second top-layer concept, and the ratio of differing data to identical data between the first top-layer concept and the second top-layer concept is calculated.
The cognitive process of deep learning specifically includes:
The concept of each upper layer is generated from the concept of the layer below and the upward cognition (Encoder) weights; an upper-layer concept is more abstract than a lower-layer concept; the bottom-layer concept is the input data, and the concept of the top layer is the top-layer concept. First, the concept of the first hidden layer is cognized from the input data; then the concept of the second hidden layer is cognized from the concept of the first hidden layer; and so on, until the concept of the last hidden layer, i.e. the top-layer concept, is obtained.
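The bottom-up cognitive pass described above can be sketched in a few lines of NumPy. This is an illustrative assumption, not the patent's implementation: the layer sizes, the sigmoid activation, and the matrix form of the Encoder weights are all choices the patent leaves open.

```python
import numpy as np

def sigmoid(x):
    # logistic activation; an assumed choice, the patent does not fix one
    return 1.0 / (1.0 + np.exp(-x))

def cognize(input_data, encoder_weights):
    """Bottom-up cognition: the input data is the bottom-layer concept;
    each hidden layer's concept is computed from the layer below via the
    upward cognition (Encoder) weights; the last hidden layer's
    activation is the top-layer concept."""
    concept = input_data
    for w in encoder_weights:        # ordered from the input layer upward
        concept = sigmoid(concept @ w)
    return concept

# toy example: 4-dimensional input, two hidden layers (sizes assumed)
rng = np.random.default_rng(0)
encoder = [rng.normal(size=(4, 3)), rng.normal(size=(3, 2))]
top_concept = cognize(rng.normal(size=(1, 4)), encoder)
```

Each pass through the loop yields a more abstract concept, matching the layer-by-layer description above.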
S4015: The first-class input data that have the preset correspondence and the second top-layer concept are used as the input data and the expected top-layer concept of the first deep learning neural network respectively, and top-down supervised training is carried out.
In the first example of this embodiment, for each pair corresponding to the same person, the head portrait and the second top-layer concept are used as the input data and the expected top-layer concept of the first deep learning neural network, and top-down supervised training is carried out.
In the second example of this embodiment, for each pair corresponding to the same image, the low-resolution image and the second top-layer concept are used as the input data and the expected top-layer concept of the first deep learning neural network respectively, and top-down supervised training is carried out.
In the third example of this embodiment, for each pair corresponding to the same data, the defective data and the second top-layer concept are used as the input data and the expected top-layer concept of the first deep learning neural network respectively, and top-down supervised training is carried out.
The detailed process of supervised training with input data and a top-layer concept is similar to the process of supervised training with input data and an output label: on the basis of the layer parameters obtained by unsupervised training, gradient descent is used to fine-tune the network parameters. The difference is that supervised training with input data and a top-layer concept does not need the encoding-decoding process between the output label and the top-layer concept.
The detailed process of supervised training with one input data item and its corresponding expected top-layer concept includes:
L1: The input data is fed into the deep learning neural network and the actual top-layer concept is computed.
L2: The residual between the expected top-layer concept and the actual top-layer concept is calculated, and gradient descent is used to fine-tune the parameters of the whole network according to the residual; the goal of the adjustment is to bring the actual top-layer concept closer to the expected top-layer concept.
L3: L1 and L2 are repeated until the residual between the expected top-layer concept and the actual top-layer concept is less than a preset residual threshold; the process of supervised training with this input data item and its corresponding expected top-layer concept then ends.
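Steps L1 to L3 amount to iterated gradient descent on the residual between the expected and actual top-layer concepts. The sketch below is a hedged single-layer illustration of the loop structure; the network depth, learning rate, and squared-error form of the residual are assumptions, not taken from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def supervised_finetune(x, expected_top, w, lr=0.2, threshold=1e-3, max_iter=5000):
    """L1: compute the actual top-layer concept from the input data.
    L2: compute the residual against the expected top-layer concept and
        fine-tune the parameters by gradient descent.
    L3: repeat until the mean squared residual is below the threshold."""
    actual = sigmoid(x @ w)                            # L1: actual top-layer concept
    for _ in range(max_iter):
        residual = expected_top - actual               # L2: residual
        if np.mean(residual ** 2) < threshold:         # L3: stopping test
            break
        grad = -(x.T @ (residual * actual * (1.0 - actual)))
        w -= lr * grad                                 # L2: fine-tune parameters
        actual = sigmoid(x @ w)                        # L1: recompute actual concept
    return w, actual

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 4))
expected = sigmoid(x @ rng.normal(size=(4, 2)))        # reachable expected concepts
w0 = rng.normal(size=(4, 2))
w, actual = supervised_finetune(x, expected, w0.copy())
```

The loop leaves the parameters closer to a setting where the actual top-layer concept matches the expected one, which is all that steps L1 to L3 require.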
S4016: The second-class input data that have the preset correspondence and the first top-layer concept are used as the input data and the expected top-layer concept of the second deep learning neural network respectively, and top-down supervised training is carried out.
In the first example of this embodiment, for each pair corresponding to the same person, the voice and the first top-layer concept are used as the input data and the expected top-layer concept of the second deep learning neural network, and top-down supervised training is carried out.
In the second example of this embodiment, for each pair corresponding to the same image, the high-resolution image and the first top-layer concept are used as the input data and the expected top-layer concept of the second deep learning neural network respectively, and top-down supervised training is carried out.
In the third example of this embodiment, for each pair corresponding to the same data, the defect-free data and the first top-layer concept are used as the input data and the expected top-layer concept of the second deep learning neural network respectively, and top-down supervised training is carried out.
The detailed process is the same as in step S4015 above.
S4017: It is judged whether the ratio of differing data to identical data between the first top-layer concept and the second top-layer concept that have the preset correspondence is less than a preset difference threshold; if so, the top-down supervised training is stopped; if not, steps S4014 to S4016 are repeated.
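The patent does not define how the ratio of differing data to identical data is computed. One plausible reading, comparing the two top-layer concepts component-wise within a tolerance, can be sketched as follows; the tolerance and the comparison rule are assumptions.

```python
import numpy as np

def difference_ratio(concept_a, concept_b, tol=0.1):
    """Ratio of differing components to identical components between two
    top-layer concepts, as used by the stopping test in S4017.
    Components that agree within tol count as identical (assumed rule)."""
    differing = np.abs(np.asarray(concept_a) - np.asarray(concept_b)) > tol
    n_diff = int(differing.sum())
    n_same = differing.size - n_diff
    return n_diff / max(n_same, 1)   # guard against all components differing

first_top = np.array([0.9, 0.1, 0.5, 0.8])
second_top = np.array([0.85, 0.1, 0.9, 0.8])
ratio = difference_ratio(first_top, second_top)   # one of four components differs
```

Under this reading, the alternating training of S4014 to S4016 would stop once this ratio drops below the preset difference threshold.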
Step S402 of this embodiment, as shown in Fig. 8, specifically includes:
S4021: The first-class test input data is used as the input of the first deep learning neural network, and the third top-layer concept of the first deep learning neural network is obtained through the cognitive process of deep learning.
In the first example of this embodiment, a test head portrait is used as the input of the first deep learning neural network, and the third top-layer concept of the first deep learning neural network is obtained through the cognitive process of deep learning.
In the second example of this embodiment, a low-resolution image for testing is used as the input of the first deep learning neural network, and the third top-layer concept of the first deep learning neural network is obtained through the cognitive process of deep learning.
In the third example of this embodiment, defective data for testing is used as the input of the first deep learning neural network, and the third top-layer concept of the first deep learning neural network is obtained through the cognitive process of deep learning.
The cognitive process of deep learning specifically includes:
The concept of each upper layer is generated from the concept of the layer below and the upward cognition (Encoder) weights; an upper-layer concept is more abstract than a lower-layer concept; the bottom-layer concept is the input data, and the concept of the top layer is the top-layer concept. First, the concept of the first hidden layer is cognized from the input data; then the concept of the second hidden layer is cognized from the concept of the first hidden layer; and so on, until the concept of the last hidden layer, i.e. the top-layer concept, is obtained.
S4022: The third top-layer concept is used as the top-layer concept of the second deep learning neural network, and the second-class input data of the second deep learning neural network is obtained by generation, as the second-class input data corresponding to the first-class test input data.
In the first example of this embodiment, the third top-layer concept is used as the top-layer concept of the second deep learning neural network, and input data is generated through the generative process of deep learning as the voice corresponding to the test head portrait.
In the second example of this embodiment, the third top-layer concept is used as the top-layer concept of the second deep learning neural network, and input data is generated through the generative process of deep learning as the high-resolution image corresponding to the low-resolution test image.
In the third example of this embodiment, the third top-layer concept is used as the top-layer concept of the second deep learning neural network, and input data is generated through the generative process of deep learning as the defect-free data corresponding to the defective test data.
The generative process of deep learning specifically includes:
The concept of each lower layer is generated from the concept of the layer above and the downward generation (Decoder) weights; a lower-layer concept is more concrete than an upper-layer concept; the concept of the top layer is the top-layer concept, and the bottom-layer concept is the input data. First, the concept of the second-to-last hidden layer is generated from the top-layer concept (the last hidden layer); then the concept of the third-to-last hidden layer is generated from the concept of the second-to-last hidden layer; and so on, until the concept of the input layer, i.e. the input data, is obtained.
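The top-down generative pass mirrors the cognitive pass, but with the Decoder weights. A minimal NumPy sketch, under the same assumed activations and matrix-form weights as noted earlier:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate(top_concept, decoder_weights):
    """Top-down generation: starting from the top-layer concept (the last
    hidden layer), each lower layer's concept is generated from the layer
    above via the downward generation (Decoder) weights, ending with the
    input-layer concept, i.e. the generated input data."""
    concept = top_concept
    for w in decoder_weights:        # ordered from the top layer down to the input
        concept = sigmoid(concept @ w)
    return concept

# toy example: 2-dimensional top-layer concept decoded to 4-dimensional data
rng = np.random.default_rng(2)
decoder = [rng.normal(size=(2, 3)), rng.normal(size=(3, 4))]
generated = generate(np.array([[0.2, 0.9]]), decoder)
```

Each pass through the loop yields a more concrete concept, ending at the input-data layer.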
Or
Step S402 of this embodiment, as shown in Fig. 9, specifically includes:
S4023: The second-class test input data is used as the input of the second deep learning neural network, and the fourth top-layer concept of the second deep learning neural network is obtained through cognition.
In the first example of this embodiment, a test voice is used as the input of the second deep learning neural network, and the fourth top-layer concept of the second deep learning neural network is obtained through the cognitive process of deep learning.
In the second example of this embodiment, a high-resolution image for testing is used as the input of the second deep learning neural network, and the fourth top-layer concept of the second deep learning neural network is obtained through the cognitive process of deep learning.
In the third example of this embodiment, defect-free data for testing is used as the input of the second deep learning neural network, and the fourth top-layer concept of the second deep learning neural network is obtained through the cognitive process of deep learning.
The cognitive process of deep learning specifically includes:
The concept of each upper layer is generated from the concept of the layer below and the upward cognition (Encoder) weights; an upper-layer concept is more abstract than a lower-layer concept; the bottom-layer concept is the input data, and the concept of the top layer is the top-layer concept. First, the concept of the first hidden layer is cognized from the input data; then the concept of the second hidden layer is cognized from the concept of the first hidden layer; and so on, until the concept of the last hidden layer, i.e. the top-layer concept, is obtained.
S4024: The fourth top-layer concept is used as the top-layer concept of the first deep learning neural network, and the first-class input data of the first deep learning neural network is obtained by generation.
In the first example of this embodiment, the fourth top-layer concept is used as the top-layer concept of the first deep learning neural network, and input data is generated through the generative process of deep learning as the head portrait corresponding to the test voice.
In the second example of this embodiment, the fourth top-layer concept is used as the top-layer concept of the first deep learning neural network, and input data is generated through the generative process of deep learning as the low-resolution image corresponding to the high-resolution test image.
In the third example of this embodiment, the fourth top-layer concept is used as the top-layer concept of the first deep learning neural network, and input data is generated through the generative process of deep learning as the defective data corresponding to the defect-free test data.
The generative process of deep learning specifically includes:
The concept of each lower layer is generated from the concept of the layer above and the downward generation (Decoder) weights; a lower-layer concept is more concrete than an upper-layer concept; the concept of the top layer is the top-layer concept, and the bottom-layer concept is the input data. First, the concept of the second-to-last hidden layer is generated from the top-layer concept (the last hidden layer); then the concept of the third-to-last hidden layer is generated from the concept of the second-to-last hidden layer; and so on, until the concept of the input layer, i.e. the input data, is obtained.
It can be understood that, although their data types differ, first-class input data and second-class input data that have a correspondence can be matched to the same top-layer concept by bidirectional deep learning: the two classes of input data that have the correspondence differ only in class, which causes differences in detail, but their key features are consistent, so their top-layer concepts can be brought into agreement. Through bidirectional supervised training, the two classes of input data are matched at the top-layer concept, so that the correspondence can be stored and expressed by the bidirectional deep learning neural network and can then be applied to conversion between different classes of data that have a correspondence.
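The cross-class conversion described above, cognizing with one network and generating with the other, can be sketched end to end. The dimensions, activations, and single-layer networks are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cognize(data, encoder_weights):
    """Bottom-up pass with the first network's Encoder weights."""
    for w in encoder_weights:
        data = sigmoid(data @ w)
    return data

def generate(concept, decoder_weights):
    """Top-down pass with the second network's Decoder weights."""
    for w in decoder_weights:
        concept = sigmoid(concept @ w)
    return concept

def convert(first_class_input, encoder1, decoder2):
    """Cognize the first-class test input to its top-layer concept (the
    'third top-layer concept' of S4021), then treat that concept as the
    second network's top-layer concept and generate the corresponding
    second-class data (S4022)."""
    return generate(cognize(first_class_input, encoder1), decoder2)

# toy shapes: 4-dim class-1 data, 2-dim shared concept, 6-dim class-2 data
rng = np.random.default_rng(3)
encoder1 = [rng.normal(size=(4, 2))]
decoder2 = [rng.normal(size=(2, 6))]
second_class_data = convert(rng.normal(size=(1, 4)), encoder1, decoder2)
```

After the bidirectional training above has aligned the two networks' top-layer concepts, this composition is what converts, say, a test head portrait into the corresponding voice.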
Embodiment 4:
As shown in Fig. 10, this embodiment provides a data correspondence judgment system based on bidirectional deep learning. The system includes a first bidirectional deep learning neural network module 1001 and a data correspondence judgment module 1002.
The first bidirectional deep learning neural network module 1001 is used to establish a bidirectional deep learning neural network and learn the preset correspondence between two classes of input data. As shown in Fig. 11, the first bidirectional deep learning neural network module 1001 specifically includes:
a first acquisition unit 10011, for acquiring first-class input data, second-class input data, and first-class input data and second-class input data that have the preset correspondence;
a first unsupervised training unit 10012, for using the first-class input data as the input of the first deep learning neural network and carrying out bottom-up unsupervised training;
a second unsupervised training unit 10013, for using the second-class input data as the input of the second deep learning neural network and carrying out bottom-up unsupervised training;
a first preset-correspondence establishing unit 10014, for using the first-class input data and the second-class input data that have the preset correspondence as the inputs of the first deep learning neural network and the second deep learning neural network respectively, obtaining the output of the first deep learning neural network as a first output, obtaining the output of the second deep learning neural network as a second output, and establishing the preset correspondence among the first-class input data, the second-class input data, the first output and the second output;
a first supervised training unit 10015, for using the first-class input data that have the preset correspondence and the second output as the input data and the output label of the first deep learning neural network respectively, and carrying out top-down supervised training;
a second supervised training unit 10016, for using the second-class input data that have the preset correspondence and the first output as the input data and the output label of the second deep learning neural network respectively, and carrying out top-down supervised training;
a first supervised-training stopping unit 10017, for stopping the top-down supervised training when the ratio of the amount of differing data to the amount of identical data between the first output and the second output that have the preset correspondence is less than a first preset threshold.
The data correspondence judgment module 1002 is used to judge, using the established bidirectional deep learning neural network, whether two classes of test input data have the preset correspondence. As shown in Fig. 12, the data correspondence judgment module 1002 specifically includes:
a first deep learning unit 10021, for using the first-class test input data as the input of the first deep learning neural network and obtaining a third output of the first deep learning neural network by deep learning;
a second deep learning unit 10022, for using the second-class test input data as the input of the second deep learning neural network and obtaining a fourth output of the second deep learning neural network by deep learning;
a comparing unit 10023, for comparing the amount of differing data and the amount of identical data between the third output and the fourth output; if the ratio of the amount of differing data to the amount of identical data between the third output and the fourth output is less than a second preset threshold, the first-class test input data and the second-class test input data have the preset correspondence; otherwise they do not have the preset correspondence.
Embodiment 5:
As shown in Fig. 13, this embodiment provides a data generation system based on bidirectional deep learning. The system includes a second bidirectional deep learning neural network module 1301 and a data generation module 1302.
The second bidirectional deep learning neural network module 1301 is used to establish a bidirectional deep learning neural network and learn the preset correspondence between two classes of input data. As shown in Fig. 14, the second bidirectional deep learning neural network module 1301 specifically includes:
a second acquisition unit 13011, for acquiring first-class input data, second-class input data, and first-class input data and second-class input data that have the preset correspondence;
a third unsupervised training unit 13012, for using the first-class input data as the input of the first deep learning neural network and carrying out bottom-up unsupervised training;
a fourth unsupervised training unit 13013, for using the second-class input data as the input of the second deep learning neural network and carrying out bottom-up unsupervised training;
a second preset-correspondence establishing unit 13014, for using the first-class input data and the second-class input data that have the preset correspondence as the inputs of the first deep learning neural network and the second deep learning neural network respectively, obtaining the output of the first deep learning neural network as a first output, obtaining the output of the second deep learning neural network as a second output, and establishing the preset correspondence among the first-class input data, the second-class input data, the first output and the second output;
a third supervised training unit 13015, for using the first-class input data that have the preset correspondence and the second output as the input data and the output label of the first deep learning neural network respectively, and carrying out top-down supervised training;
a fourth supervised training unit 13016, for using the second-class input data that have the preset correspondence and the first output as the input data and the output label of the second deep learning neural network respectively, and carrying out top-down supervised training;
a second supervised-training stopping unit 13017, for stopping the top-down supervised training when the ratio of the amount of differing data to the amount of identical data between the first output and the second output that have the preset correspondence is less than the first preset threshold.
The data generation module 1302 is used to generate, using the established bidirectional deep learning neural network, one class of input data from the other class of input data.
As shown in Fig. 15, the data generation module 1302 specifically includes:
a third deep learning unit 13021, for using the first-class test input data as the input of the first deep learning neural network and obtaining a third output of the first deep learning neural network by deep learning;
a first reverse generation unit 13022, for using the third output as the output label of the second deep learning neural network and obtaining the second-class input data of the second deep learning neural network by reverse generation;
or
as shown in Fig. 16, the data generation module 1302 specifically includes:
a fourth deep learning unit 13023, for using the second-class test input data as the input of the second deep learning neural network and obtaining a fourth output of the second deep learning neural network by deep learning;
a second reverse generation unit 13024, for using the fourth output as the output label of the first deep learning neural network and obtaining the first-class input data of the first deep learning neural network by reverse generation.
Embodiment 6:
The overall structure of the data generation system based on bidirectional deep learning in this embodiment is the same as in Embodiment 5; the difference lies in the specific components of the second bidirectional deep learning neural network module 1301 and the data generation module 1302.
As shown in Fig. 17, the second bidirectional deep learning neural network module 1301 of this embodiment specifically includes:
a third acquisition unit 13011, for acquiring first-class input data, second-class input data, and first-class input data and second-class input data that have the preset correspondence;
a fifth unsupervised training unit 13012, for using the first-class input data as the input of the first deep learning neural network and carrying out bottom-up unsupervised training;
a sixth unsupervised training unit 13013, for using the second-class input data as the input of the second deep learning neural network and carrying out bottom-up unsupervised training;
a third preset-correspondence establishing unit 13014, for using the first-class input data and the second-class input data that have the preset correspondence as the inputs of the first deep learning neural network and the second deep learning neural network respectively, obtaining the top-layer concept of the first deep learning neural network through the cognitive process of deep learning as the first top-layer concept, obtaining the top-layer concept of the second deep learning neural network through the cognitive process of deep learning as the second top-layer concept, establishing the preset correspondence among the first-class input data, the second-class input data, the first top-layer concept and the second top-layer concept, and calculating the ratio of differing data to identical data between the first top-layer concept and the second top-layer concept;
a fifth supervised training unit 13015, for using the first-class input data that have the preset correspondence and the second top-layer concept as the input data and the expected top-layer concept of the first deep learning neural network respectively, and carrying out top-down supervised training;
a sixth supervised training unit 13016, for using the second-class input data that have the preset correspondence and the first top-layer concept as the input data and the expected top-layer concept of the second deep learning neural network respectively, and carrying out top-down supervised training;
a third supervised-training stopping unit 13017, for stopping the top-down supervised training when the ratio of differing data to identical data between the first top-layer concept and the second top-layer concept that have the preset correspondence is less than the preset difference threshold.
As shown in Fig. 18, the data generation module 1302 of this embodiment specifically includes:
a first deep learning cognition unit 13021, for using the first-class test input data as the input of the first deep learning neural network and obtaining the third top-layer concept of the first deep learning neural network through the cognitive process of deep learning;
a first generation unit 13022, for using the third top-layer concept as the top-layer concept of the second deep learning neural network and generating the second-class input data of the second deep learning neural network, as the second-class input data corresponding to the first-class test input data;
or
as shown in Fig. 19, the data generation module 1302 of this embodiment specifically includes:
a second deep learning cognition unit 13023, for using the second-class test input data as the input of the second deep learning neural network and obtaining the fourth top-layer concept of the second deep learning neural network through cognition;
a second generation unit 13024, for using the fourth top-layer concept as the top-layer concept of the first deep learning neural network and generating the first-class input data of the first deep learning neural network.
It can be appreciated that the terms "first", "second", etc. used in the systems of the above embodiments may be used to describe various units, but these units are not limited by these terms; the terms are only used to distinguish one unit from another. For example, without departing from the scope of the invention, the first unsupervised training unit could be renamed the second unsupervised training unit, and similarly the second unsupervised training unit could be renamed the first unsupervised training unit; both are unsupervised training units, but they are not the same unsupervised training unit.
The above are only preferred embodiments of the present patent, but the protection scope of the present patent is not limited thereto. Any equivalent substitution or change made by a person skilled in the art, within the scope disclosed by the present patent and according to the technical solution and inventive concept of the present patent, falls within the protection scope of the present patent.
Claims (10)
1. A data correspondence judgment method based on bidirectional deep learning, characterized in that the method includes:
establishing a bidirectional deep learning neural network and learning a preset correspondence between two classes of input data;
judging, using the established bidirectional deep learning neural network, whether two classes of test input data have the preset correspondence.
2. The data correspondence judgment method based on bidirectional deep learning according to claim 1, characterized in that establishing the bidirectional deep learning neural network and learning the preset correspondence between the two classes of input data specifically includes:
acquiring first-class input data, second-class input data, and first-class input data and second-class input data that have the preset correspondence;
using the first-class input data as the input of a first deep learning neural network and carrying out bottom-up unsupervised training;
using the second-class input data as the input of a second deep learning neural network and carrying out bottom-up unsupervised training;
using the first-class input data and the second-class input data that have the preset correspondence as the inputs of the first deep learning neural network and the second deep learning neural network respectively, obtaining the output of the first deep learning neural network as a first output, obtaining the output of the second deep learning neural network as a second output, and establishing the preset correspondence among the first-class input data, the second-class input data, the first output and the second output;
using the first-class input data that have the preset correspondence and the second output as the input data and the output label of the first deep learning neural network respectively, and carrying out top-down supervised training;
using the second-class input data that have the preset correspondence and the first output as the input data and the output label of the second deep learning neural network respectively, and carrying out top-down supervised training;
stopping the top-down supervised training when the ratio of the amount of differing data to the amount of identical data between the first output and the second output that have the preset correspondence is less than a first preset threshold.
3. The data correspondence judgment method based on bidirectional deep learning according to claim 2, characterized in that judging, using the established bidirectional deep learning neural network, whether the two classes of test input data have the preset correspondence specifically includes:
using the first-class test input data as the input of the first deep learning neural network and obtaining a third output of the first deep learning neural network by deep learning;
using the second-class test input data as the input of the second deep learning neural network and obtaining a fourth output of the second deep learning neural network by deep learning;
comparing the amount of differing data and the amount of identical data between the third output and the fourth output; if the ratio of the amount of differing data to the amount of identical data between the third output and the fourth output is less than a second preset threshold, the first-class test input data and the second-class test input data have the preset correspondence; otherwise they do not have the preset correspondence.
4. A data generation method based on bidirectional deep learning, characterized in that the method includes:
establishing a bidirectional deep learning neural network and learning a preset correspondence between two classes of input data;
generating, using the established bidirectional deep learning neural network, one class of input data from the other class of input data.
5. The data generation method based on bidirectional deep learning according to claim 4, characterized in that establishing the bidirectional deep learning neural network and learning the preset correspondence between the two classes of input data specifically comprises:
obtaining first-class input data, second-class input data, and pairs of first-class and second-class input data that have the preset correspondence;
using the first-class input data as the input of a first deep learning neural network and performing bottom-up unsupervised training;
using the second-class input data as the input of a second deep learning neural network and performing bottom-up unsupervised training;
using the paired first-class and second-class input data respectively as the inputs of the first and second deep learning neural networks, taking the output of the first deep learning neural network as a first output and the output of the second deep learning neural network as a second output, and establishing the preset correspondence among the first-class input data, the second-class input data, the first output, and the second output;
using the paired first-class input data and second outputs respectively as the input data and output labels of the first deep learning neural network and performing top-down supervised training;
using the paired second-class input data and first outputs respectively as the input data and output labels of the second deep learning neural network and performing top-down supervised training; and
stopping the top-down supervised training when, for first and second outputs having the preset correspondence, the ratio of the amount of differing data to the amount of identical data is less than a first preset threshold.
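A minimal runnable sketch of the paired top-down supervised training of claim 5, with each deep network reduced to a hypothetical one-parameter linear model and the stopping test simplified to round-trip agreement (the claim instead compares the first and second outputs against a first preset threshold):

```python
# Sketch of claim 5's paired training with scalar stand-in "networks".
# Network A learns class-1 -> class-2; network B learns class-2 -> class-1.

# Hypothetical corresponding pairs: class-2 data is twice the class-1 data.
pairs = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0, 4.0)]

w_a, w_b = 0.5, 0.5   # weights of the two one-parameter models
lr = 0.01             # learning rate for the supervised phase

for epoch in range(10000):
    for x, y in pairs:
        # First network: class-1 input, paired class-2 datum as output label.
        w_a -= lr * 2 * (w_a * x - y) * x
        # Second network: class-2 input, paired class-1 datum as output label.
        w_b -= lr * 2 * (w_b * y - x) * y
    # Stopping test, simplified here to round-trip agreement B(A(x)) ~ x.
    if all(abs(w_b * (w_a * x) - x) < 1e-4 for x, _ in pairs):
        break

print(round(w_a, 3), round(w_b, 3))  # converges toward 2.0 and 0.5
```

The two models converge to mutually inverse mappings, which is the point of training both directions against each other's data.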
6. The data generation method based on bidirectional deep learning according to claim 5, characterized in that using the established bidirectional deep learning neural network to generate another class of input data from one class of input data specifically comprises:
using first-class test input data as the input of the first deep learning neural network and obtaining, through deep learning, a third output of the first deep learning neural network, and
using the third output as the output label of the second deep learning neural network and reverse-generating second-class input data of the second deep learning neural network;
or
using second-class test input data as the input of the second deep learning neural network and obtaining, through deep learning, a fourth output of the second deep learning neural network, and
using the fourth output as the output label of the first deep learning neural network and reverse-generating first-class input data of the first deep learning neural network.
7. The data generation method based on bidirectional deep learning according to claim 4, characterized in that establishing the bidirectional deep learning neural network and learning the preset correspondence between the two classes of input data specifically comprises:
obtaining first-class input data, second-class input data, and pairs of first-class and second-class input data that have the preset correspondence;
using the first-class input data as the input of a first deep learning neural network and performing bottom-up unsupervised training;
using the second-class input data as the input of a second deep learning neural network and performing bottom-up unsupervised training;
using the paired first-class and second-class input data respectively as the inputs of the first and second deep learning neural networks, obtaining through the cognitive process of deep learning the top-layer concept of the first deep learning neural network as a first top-layer concept and the top-layer concept of the second deep learning neural network as a second top-layer concept, establishing the preset correspondence among the first-class input data, the second-class input data, the first top-layer concept, and the second top-layer concept, and calculating the ratio of differing data to identical data between the first and second top-layer concepts;
using the paired first-class input data and second top-layer concepts respectively as the input data and expected top-layer concepts of the first deep learning neural network and performing top-down supervised training;
using the paired second-class input data and first top-layer concepts respectively as the input data and expected top-layer concepts of the second deep learning neural network and performing top-down supervised training; and
stopping the top-down supervised training when the ratio of differing data to identical data between first and second top-layer concepts having the preset correspondence is less than a preset difference threshold.
8. The data generation method based on bidirectional deep learning according to claim 7, characterized in that using the established bidirectional deep learning neural network to generate another class of input data from one class of input data specifically comprises:
using first-class test input data as the input of the first deep learning neural network and obtaining, through the cognitive process of deep learning, a third top-layer concept of the first deep learning neural network, and
using the third top-layer concept as the top-layer concept of the second deep learning neural network and generating the second-class input data of the second deep learning neural network as the second-class input data corresponding to the first-class test input data;
or
using second-class test input data as the input of the second deep learning neural network and obtaining, through cognition, a fourth top-layer concept of the second deep learning neural network, and
using the fourth top-layer concept as the top-layer concept of the first deep learning neural network and generating the first-class input data of the first deep learning neural network.
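Claims 6 and 8 describe the same reverse-generation mechanism: cognize bottom-up through one network to a top-level output or concept, then run the other network top-down (generatively) from that concept. A minimal sketch with hypothetical scalar weights standing in for the two trained deep networks:

```python
# Illustrative stand-ins for the trained networks: each maps its class of
# data to a shared top-layer concept, and generation inverts that mapping.
W_A = 4.0   # first network: class-1 datum -> top-layer concept
W_B = 2.0   # second network: class-2 datum -> top-layer concept

def cognize_a(x):
    """Bottom-up cognitive pass of the first network."""
    return W_A * x

def generate_a(concept):
    """Top-down generative pass of the first network (inverse of cognition)."""
    return concept / W_A

def cognize_b(y):
    """Bottom-up cognitive pass of the second network."""
    return W_B * y

def generate_b(concept):
    """Top-down generative pass of the second network."""
    return concept / W_B

def class1_to_class2(x):
    # Claim 8, first branch: cognize with network A, generate with network B.
    return generate_b(cognize_a(x))

def class2_to_class1(y):
    # Claim 8, second branch: cognize with network B, generate with network A.
    return generate_a(cognize_b(y))

print(class1_to_class2(3.0), class2_to_class1(6.0))  # 6.0 3.0
```

Because both networks share the concept space, the two directions are exact inverses of each other in this sketch.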
9. A data correspondence judgment system based on bidirectional deep learning, characterized in that the system comprises:
a first bidirectional deep learning neural network module, configured to establish a bidirectional deep learning neural network and learn a preset correspondence between two classes of input data; and
a data correspondence judgment module, configured to judge, using the established bidirectional deep learning neural network, whether two classes of test input data have the preset correspondence.
10. A data generation system based on bidirectional deep learning, characterized in that the system comprises:
a second bidirectional deep learning neural network module, configured to establish a bidirectional deep learning neural network and learn a preset correspondence between two classes of input data; and
a data generation module, configured to generate, using the established bidirectional deep learning neural network, another class of input data from one class of input data.
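The systems of claims 9 and 10 each pair a bidirectional network module with a judgment or generation module. A structural sketch, with the network module reduced to a placeholder pair of mappings and an illustrative closeness threshold (neither is specified by the claims):

```python
# Structural sketch of the two claimed systems. The mappings and threshold
# are hypothetical; a real system would hold the trained deep networks.

class BidirectionalDeepLearningModule:
    """Stands in for the first/second bidirectional network module."""
    def __init__(self, forward, backward):
        self.forward = forward      # class-1 datum -> class-2 datum
        self.backward = backward    # class-2 datum -> class-1 datum

class CorrespondenceJudgmentModule:
    """Claim 9's judgment module (illustrative closeness test)."""
    def __init__(self, network, threshold=0.1):
        self.network = network
        self.threshold = threshold
    def judge(self, x, y):
        # Accept when the generated counterpart of x is close to y.
        return abs(self.network.forward(x) - y) < self.threshold

class DataGenerationModule:
    """Claim 10's generation module."""
    def __init__(self, network):
        self.network = network
    def generate_class2(self, x):
        return self.network.forward(x)
    def generate_class1(self, y):
        return self.network.backward(y)

net = BidirectionalDeepLearningModule(lambda x: 2 * x, lambda y: y / 2)
judge = CorrespondenceJudgmentModule(net)
gen = DataGenerationModule(net)
print(judge.judge(3.0, 6.0), gen.generate_class2(3.0))
```

The same network module serves both systems, which mirrors how claims 9 and 10 share the learning step and differ only in the module attached on top.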
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810244713.0A CN108596334B (en) | 2018-03-23 | 2018-03-23 | Data corresponding relation judging and generating method and system based on bidirectional deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108596334A true CN108596334A (en) | 2018-09-28 |
CN108596334B CN108596334B (en) | 2021-01-01 |
Family
ID=63627307
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810244713.0A Active CN108596334B (en) | 2018-03-23 | 2018-03-23 | Data corresponding relation judging and generating method and system based on bidirectional deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108596334B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112133426A (en) * | 2020-09-11 | 2020-12-25 | 上海朔茂网络科技有限公司 | Respiratory system disease auxiliary diagnosis method based on deep learning |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105072373A (en) * | 2015-08-28 | 2015-11-18 | 中国科学院自动化研究所 | Bilateral-circulation convolution network-based video super-resolution method and system |
CN105976056A (en) * | 2016-05-03 | 2016-09-28 | 成都数联铭品科技有限公司 | Information extraction system based on bidirectional RNN |
CN106127684A (en) * | 2016-06-22 | 2016-11-16 | 中国科学院自动化研究所 | Image super-resolution Enhancement Method based on forward-backward recutrnce convolutional neural networks |
CN106126492A (en) * | 2016-06-07 | 2016-11-16 | 北京高地信息技术有限公司 | Statement recognition methods based on two-way LSTM neutral net and device |
CN106560848A (en) * | 2016-10-09 | 2017-04-12 | 辽宁工程技术大学 | Novel neural network model for simulating biological bidirectional cognition capability, and training method |
US20170372199A1 (en) * | 2016-06-23 | 2017-12-28 | Microsoft Technology Licensing, Llc | Multi-domain joint semantic frame parsing |
CN107798351A (en) * | 2017-11-09 | 2018-03-13 | 大国创新智能科技(东莞)有限公司 | A kind of personal identification method and system based on deep learning neutral net |
- 2018-03-23: application CN201810244713.0A filed; granted as CN108596334B (active)
Non-Patent Citations (2)
Title |
---|
YAN HUANG等: "Video Super-Resolution via Bidirectional Recurrent Convolutional Networks", 《 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 》 * |
沈先耿: "深度学习综述", 《教育前沿》 * |
Also Published As
Publication number | Publication date |
---|---|
CN108596334B (en) | 2021-01-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108875807B (en) | Image description method based on multiple attention and multiple scales | |
CN103605972B (en) | Non-restricted environment face verification method based on block depth neural network | |
CN109241834A (en) | A kind of group behavior recognition methods of the insertion based on hidden variable | |
CN106295506A (en) | A kind of age recognition methods based on integrated convolutional neural networks | |
CN109711426A (en) | A kind of pathological picture sorter and method based on GAN and transfer learning | |
CN106503654A (en) | A kind of face emotion identification method based on the sparse autoencoder network of depth | |
CN108398268A (en) | A kind of bearing performance degradation assessment method based on stacking denoising self-encoding encoder and Self-organizing Maps | |
CN108765512B (en) | Confrontation image generation method based on multi-level features | |
CN106056059B (en) | The face identification method of multi-direction SLGS feature description and performance cloud Weighted Fusion | |
CN104778466B (en) | A kind of image attention method for detecting area for combining a variety of context cues | |
CN107145841A (en) | A kind of low-rank sparse face identification method and its system based on matrix | |
CN106909938A (en) | Viewing angle independence Activity recognition method based on deep learning network | |
CN114549850B (en) | Multi-mode image aesthetic quality evaluation method for solving modal missing problem | |
CN109598671A (en) | Image generating method, device, equipment and medium | |
Ocquaye et al. | Dual exclusive attentive transfer for unsupervised deep convolutional domain adaptation in speech emotion recognition | |
CN110619347A (en) | Image generation method based on machine learning and method thereof | |
CN111598153B (en) | Data clustering processing method and device, computer equipment and storage medium | |
CN109859310A (en) | A kind of model and its method for building up can be used for generating MR image | |
WO2022166840A1 (en) | Face attribute editing model training method, face attribute editing method and device | |
CN108520201A (en) | A kind of robust human face recognition methods returned based on weighted blend norm | |
CN108229505A (en) | Image classification method based on FISHER multistage dictionary learnings | |
CN115346149A (en) | Rope skipping counting method and system based on space-time diagram convolution network | |
CN110428846A (en) | Voice-over-net stream steganalysis method and device based on bidirectional circulating neural network | |
CN108596334A (en) | The judgement of data correspondence, generation method and system based on two-way deep learning | |
CN116311483B (en) | Micro-expression recognition method based on local facial area reconstruction and memory contrast learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||