CN107729993A - Method for constructing a 3D convolutional neural network using training samples and a compromise metric - Google Patents
Method for constructing a 3D convolutional neural network using training samples and a compromise metric Download PDF Info
- Publication number
- CN107729993A CN107729993A CN201711033085.3A CN201711033085A CN107729993A CN 107729993 A CN107729993 A CN 107729993A CN 201711033085 A CN201711033085 A CN 201711033085A CN 107729993 A CN107729993 A CN 107729993A
- Authority
- CN
- China
- Prior art keywords
- sample
- loss
- neural networks
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a method for constructing a 3D convolutional neural network using training samples and a compromise metric. Its technical features are: constructing a 3D convolutional neural network with a twin (Siamese) structure; setting the loss function of the network, the loss function being composed of a positive-sample loss, a negative-sample loss and a regularization loss, with the Mahalanobis distance and the Euclidean distance combined in the regularization loss; pre-training the network on a data set of video sequences using the softmax loss function; constructing positive sample pairs and negative sample pairs, and pre-processing and segmenting the images; and training the network with selectively chosen training samples. The invention is reasonably designed: it selectively uses training samples to improve training efficiency and suppress over-fitting, and it trades off the Euclidean distance against the Mahalanobis distance when measuring features, thereby building a 3D convolutional neural network model. Experiments show that the model and the training strategy of the invention greatly improve the overall matching rate of the system.
Description
Technical field
The invention belongs to the technical field of visual pedestrian re-identification, and in particular relates to a method for constructing a 3D convolutional neural network using training samples and a compromise metric.
Background technology
As surveillance coverage grows, surveillance data grows explosively. Identifying pedestrians in surveillance footage by eye is clearly inefficient; the task of pedestrian re-identification is to use computer-vision techniques to match pedestrian identities across non-overlapping camera views.
Conventional pedestrian re-identification methods consist of two main steps: features are first extracted from the image or video, and the similarity/distance between different samples is then obtained by metric learning. With the rise of convolutional neural networks, which have shown outstanding performance in visual tasks such as pedestrian detection and target tracking, pedestrian re-identification based on deep learning has also become a research direction of wide interest. However, existing convolutional neural networks have a limitation: they process only single images and do not exploit the inter-frame information of surveillance video, so their matching efficiency is relatively low.
Content of the invention
The object of the present invention is to overcome the deficiencies of the prior art and to propose a method for constructing a 3D convolutional neural network using training samples and a compromise metric that is reasonably designed, efficient in matching and stable in performance.
The present invention solves its technical problem by adopting the following technical scheme:
A 3D convolutional neural network construction method using training samples and a compromise metric comprises the following steps:
Step 1: construct a 3D convolutional neural network with a twin (Siamese) structure;
Step 2: set the loss function of the network, the loss function being composed of a positive-sample loss, a negative-sample loss and a regularization loss, with the Mahalanobis distance and the Euclidean distance combined in the regularization loss;
Step 3: using the softmax loss function, pre-train the network on a data set of video sequences;
Step 4: construct positive sample pairs and negative sample pairs, and pre-process and segment the images;
Step 5: train the network with selectively chosen training samples.
The 3D convolutional neural network built in step 1 comprises the following two identical branch network structures:
3D convolutional layer → batch normalization layer → activation layer → Dropout layer → 3D convolutional layer → batch normalization layer → activation layer → Dropout layer → max-pooling layer → 3D convolutional layer → batch normalization layer → activation layer → Dropout layer → max-pooling layer → 3D convolutional layer → batch normalization layer → activation layer → Dropout layer → 3D convolutional layer → batch normalization layer → activation layer → Dropout layer → max-pooling layer → first fully connected layer → second fully connected layer.
The kernel size of each 3D convolutional layer is 3*3*3; the activation layers use ReLU; the Dropout rate is 0.2; the max-pooling window is 1*2*2; the first fully connected layer is 4096*4096; the second fully connected layer is 4096*1000.
The specific processing of step 2 is as follows:
Let the two outputs of the twin network be Ψ(x_1) and Ψ(x_2), where x_1 and x_2 are the original input data of the network, and Ψ(x_1) and Ψ(x_2) are the 1000-dimensional features output by the last fully connected layer of the network. The distance between the two samples is then defined as:

$$d(x_1, x_2) = \|\Psi(x_1) - \Psi(x_2)\|_2$$
The sign (positive or negative) of a pair is labelled according to the following formula:

$$\mathrm{sign}(x_1, x_2) = \begin{cases} 1, & \text{if } I(x_1) = I(x_2), \\ 0, & \text{if } I(x_1) \neq I(x_2) \end{cases}$$

where I(x_k) (k = 1, 2) is the pedestrian identity of x_k.
A pair of samples with the same pedestrian identity is a positive pair, and a pair with different identities is a negative pair. The positive-sample loss is then defined as:

$$L_p = \frac{1}{N_p} \sum_{\substack{\mathrm{sign}(x_1, x_2) = 1 \\ d(x_1, x_2) > D_n - m}} d(x_1, x_2)$$

where N_p is the number of positive pairs, m is a margin parameter, and D_n is the minimum distance among the negative pairs of the batch.
The negative-sample loss is defined as:

$$L_n = \frac{1}{N_n} \sum_{\substack{\mathrm{sign}(x_1, x_2) = 0 \\ d(x_1, x_2) < D_p + m}} \max\left(t - d(x_1, x_2),\ 0\right)$$

where N_n is the number of negative pairs, D_p is the maximum distance among the positive pairs of the batch, and t is a threshold used to decide whether a negative-pair distance is penalized.
The regularization loss is defined as:

$$L_b = \frac{2}{\lambda} \left\| W W^{T} - I \right\|_F^2$$

where W is the parameter matrix of the last fully connected layer and λ is a balance parameter: when λ is large the metric is mainly the Euclidean distance, and when λ is small the metric is mainly the Mahalanobis distance.
The whole loss function is:

$$L = L_p + L_n + L_b.$$
The specific processing of step 4 is: first, resize each input image to a width of 128 pixels and a height of 64 pixels, and apply Retinex processing to the original image; then divide the image into three overlapping parts (upper, middle and lower), each of size 64*64; finally, superimpose the image sequences of the three parts to form the input data.
The specific processing of step 5 is: according to the positive-sample loss and the negative-sample loss in step 2, compute the loss function for the sample pairs that satisfy the conditions and update the model parameters with stochastic gradient descent.
The advantages and positive effects of the present invention are as follows:
The present invention is reasonably designed. It selectively uses training samples to improve training efficiency and suppress over-fitting, and it trades off the Euclidean distance against the Mahalanobis distance when measuring features, thereby building a 3D convolutional neural network model. Experiments show that the model and the training strategy of the invention greatly improve the overall matching rate of the system.
Brief description of the drawings
Fig. 1 is the overall system architecture diagram of the present invention;
Fig. 2 is a schematic diagram of training-sample selection;
Fig. 3a to Fig. 3f are performance comparison charts for the different factors evaluated in the experiments of the present invention.
Embodiment
Embodiments of the present invention are further described below with reference to the accompanying drawings.
A 3D convolutional neural network construction method using training samples and a compromise metric comprises the following steps:
Step 1: construct a 3D convolutional neural network with a twin (Siamese) structure.
Traditional 2D convolutional neural networks convolve an image only along its height and width, so they can extract the spatial information within a single image but not the temporal and spatial information between images. A 3D convolutional neural network can additionally convolve an image sequence along the time dimension and can therefore exploit the spatio-temporal information between successive frames. Since the real data of pedestrian re-identification come in the form of video, a 3D convolutional neural network suits this scenario better than a 2D one, which is why the present invention uses one. The specific construction of this step is: build the 3D convolutional neural network shown in Fig. 1; the two branch networks are identical, each being:
3D convolutional layer (3*3*3) → batch normalization layer → activation layer (ReLU) → Dropout layer (0.2) → 3D convolutional layer (3*3*3) → batch normalization layer → activation layer (ReLU) → Dropout layer (0.2) → max-pooling layer (1*2*2) → 3D convolutional layer (3*3*3) → batch normalization layer → activation layer (ReLU) → Dropout layer (0.2) → max-pooling layer (1*2*2) → 3D convolutional layer (3*3*3) → batch normalization layer → activation layer (ReLU) → Dropout layer (0.2) → 3D convolutional layer (3*3*3) → batch normalization layer → activation layer (ReLU) → Dropout layer (0.2) → max-pooling layer (1*2*2) → fully connected layer (4096*4096) → fully connected layer (4096*1000).
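For illustration, one branch of the twin network can be sketched in PyTorch as below. The kernel size (3*3*3), Dropout rate (0.2), pooling window (1*2*2) and fully connected sizes follow the patent; the channel widths (16, 16, 32, 32, 8) and the input of 8 RGB frames of 64*64 pixels are assumptions chosen so that the flattened feature has exactly the 4096 dimensions expected by the first fully connected layer.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # 3D convolution (3*3*3) -> batch normalization -> ReLU -> Dropout (0.2),
    # following the patent's layer chain; padding=1 keeps the spatio-temporal size.
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm3d(c_out),
        nn.ReLU(inplace=True),
        nn.Dropout(0.2),
    )

class Branch3DCNN(nn.Module):
    """One branch of the twin 3D CNN (channel widths are assumed)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 16),
            conv_block(16, 16), nn.MaxPool3d((1, 2, 2)),  # 64x64 -> 32x32
            conv_block(16, 32), nn.MaxPool3d((1, 2, 2)),  # 32x32 -> 16x16
            conv_block(32, 32),
            conv_block(32, 8),  nn.MaxPool3d((1, 2, 2)),  # 16x16 -> 8x8
        )
        self.fc1 = nn.Linear(4096, 4096)   # first fully connected layer
        self.fc2 = nn.Linear(4096, 1000)   # second fully connected layer

    def forward(self, x):                  # x: (N, 3, T=8, 64, 64)
        f = self.features(x)               # -> (N, 8, 8, 8, 8) = 4096 values
        f = f.flatten(1)
        return self.fc2(self.fc1(f))       # 1000-dimensional feature

x = torch.randn(2, 3, 8, 64, 64)
print(Branch3DCNN()(x).shape)  # torch.Size([2, 1000])
```

The twin structure would apply the same branch (with shared weights) to both inputs of a pair.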
Step 2: set the loss function of the network. The loss function consists of three parts, namely the positive-sample loss, the negative-sample loss and the regularization loss, with the Mahalanobis distance and the Euclidean distance combined in the regularization loss.
The specific processing of this step is as follows:
Assume the two outputs of the twin network are Ψ(x_1) and Ψ(x_2), where x_1 and x_2 are the original input data of the network, and Ψ(x_1) and Ψ(x_2) are the 1000-dimensional features output by the last fully connected layer of the network. The distance between the two samples is then defined as:

$$d(x_1, x_2) = \|\Psi(x_1) - \Psi(x_2)\|_2$$
The sign (positive or negative) of a pair is labelled according to the following formula:

$$\mathrm{sign}(x_1, x_2) = \begin{cases} 1, & \text{if } I(x_1) = I(x_2), \\ 0, & \text{if } I(x_1) \neq I(x_2) \end{cases}$$

where I(x_k) (k = 1, 2) is the pedestrian identity of x_k.
We define a pair of samples with the same pedestrian identity as a positive pair and a pair with different identities as a negative pair. Given a batch of input data, the pairwise distances between all samples output by the two branches are computed first, and the maximum positive-pair distance D_p and the minimum negative-pair distance D_n are found. The positive-sample loss is then defined as:

$$L_p = \frac{1}{N_p} \sum_{\substack{\mathrm{sign}(x_1, x_2) = 1 \\ d(x_1, x_2) > D_n - m}} d(x_1, x_2)$$

where N_p is the number of positive pairs and m is a margin parameter. The negative-sample loss is defined as:

$$L_n = \frac{1}{N_n} \sum_{\substack{\mathrm{sign}(x_1, x_2) = 0 \\ d(x_1, x_2) < D_p + m}} \max\left(t - d(x_1, x_2),\ 0\right)$$
where N_n is the number of negative pairs and t is a threshold that decides whether a negative-pair distance is penalized. The valid samples considered in this process are shown in Fig. 2.
The regularization loss is defined as:

$$L_b = \frac{2}{\lambda} \left\| W W^{T} - I \right\|_F^2$$

where W is the parameter matrix of the last fully connected layer and λ is a balance parameter: when λ is large, the metric of the invention is mainly the Euclidean distance; when λ is small, it is mainly the Mahalanobis distance.
Taking the above parts together, the whole loss function of the system is:

$$L = L_p + L_n + L_b$$
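The three loss terms defined above can be sketched in NumPy for one batch as follows; the function name, the batch layout (one pair per row) and the default values of m, t and λ are illustrative assumptions, not values from the patent.

```python
import numpy as np

def total_loss(feats1, feats2, ids1, ids2, W, m=0.5, t=2.0, lam=1.0):
    """L = Lp + Ln + Lb for one batch of paired branch outputs.

    feats1, feats2: (N, D) features from the two branches, one pair per row.
    ids1, ids2:     (N,) pedestrian identities of the paired samples.
    W:              parameter matrix of the last fully connected layer.
    """
    d = np.linalg.norm(feats1 - feats2, axis=1)     # d(x1, x2), Euclidean
    pos = ids1 == ids2                              # sign(x1, x2) = 1
    neg = ~pos                                      # sign(x1, x2) = 0
    Dp = d[pos].max()                               # maximum positive distance
    Dn = d[neg].min()                               # minimum negative distance

    # Positive loss: average distance over "hard" positive pairs (d > Dn - m).
    hard_pos = pos & (d > Dn - m)
    Lp = d[hard_pos].sum() / max(pos.sum(), 1)

    # Negative loss: hinge max(t - d, 0) over "hard" negative pairs (d < Dp + m).
    hard_neg = neg & (d < Dp + m)
    Ln = np.maximum(t - d[hard_neg], 0).sum() / max(neg.sum(), 1)

    # Regularization: (2 / lambda) * ||W W^T - I||_F^2.
    I = np.eye(W.shape[0])
    Lb = (2.0 / lam) * np.linalg.norm(W @ W.T - I, ord='fro') ** 2
    return Lp + Ln + Lb
```

Note that when W is orthogonal the regularization term vanishes, and pairs outside the "hard" bands contribute nothing to Lp and Ln.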
Step 3: using the softmax loss function, pre-train the network on a data set of video sequences for about 500 iterations.
Step 4: construct positive sample pairs and negative sample pairs, and pre-process and segment the images.
The specific processing of this step is: first, resize each input image to a width of 128 pixels and a height of 64 pixels, and apply Retinex processing to the original image to reduce the influence of factors such as illumination and bring the image closer to human visual perception. Then segment the image: as shown in Fig. 1, divide it into three overlapping parts (upper, middle and lower), each of size 64*64. Finally, superimpose the image sequences of the three parts to form the input data.
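The three-part split can be sketched as follows. The crop offsets (rows 0, 32 and 64) are an assumption about how the overlapping upper, middle and lower 64*64 parts are taken, and the code assumes the 128-pixel dimension is the one being divided vertically, since three overlapping 64*64 crops only fit along a 128-pixel axis; Retinex processing itself is omitted.

```python
import numpy as np

def split_three_parts(img):
    """Split a 128x64 image into overlapping upper/middle/lower 64x64 crops.

    img: array of shape (128, 64[, channels]) after resizing and Retinex
    processing. The offsets 0/32/64 give a 32-row overlap between
    neighbouring parts (assumed; the patent only states that the three
    64*64 parts overlap).
    """
    starts = (0, 32, 64)                       # upper, middle, lower
    parts = [img[s:s + 64] for s in starts]    # each part is 64x64
    return np.stack(parts)                     # superimposed: (3, 64, 64[, C])

frame = np.zeros((128, 64))
print(split_three_parts(frame).shape)  # (3, 64, 64)
```

Applying this to every frame of a sequence yields the superimposed input data for one branch of the network.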
Step 5: train the network with selectively chosen training samples.
The specific processing of this step is: according to the positive-sample loss function and the negative-sample loss function in step 2, compute the loss function for the qualified sample pairs and update the model parameters with stochastic gradient descent.
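The selection rule behind this step (keep only positive pairs with d > D_n − m and negative pairs with d < D_p + m, i.e. the "hard" pairs near the decision boundary) can be sketched as below; the function and variable names are illustrative.

```python
import numpy as np

def select_pairs(d, is_pos, m):
    """Return boolean masks of the sample pairs that enter the loss.

    d:      (N,) distances of the N pairs in the batch.
    is_pos: (N,) True for positive pairs (same pedestrian identity).
    m:      margin parameter.
    """
    Dp = d[is_pos].max()                 # maximum positive-pair distance
    Dn = d[~is_pos].min()                # minimum negative-pair distance
    keep_pos = is_pos & (d > Dn - m)     # hard positives
    keep_neg = ~is_pos & (d < Dp + m)    # hard negatives
    return keep_pos, keep_neg
```

Easy pairs (positives already well below D_n − m, negatives already well above D_p + m) are excluded from the update, which is what improves training efficiency and suppresses over-fitting.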
The experiments below illustrate the effect of the method of the invention.
Experimental environment: Ubuntu 14.04, MATLAB R2016a
Experimental data: the selected data sets are iLIDs-VID and Prid2011, two image-sequence data sets for pedestrian re-identification.
Evaluation index:
The present invention uses the Cumulated Matching Characteristics (CMC) curve as the evaluation index. This index expresses the similarity rank at which the correctly matched sample appears in the candidate set. The results are shown in Fig. 3; the closer a curve is to 100%, the better the performance. The first column of the figure shows that selectively using training samples benefits the whole algorithm; the second column shows that the compromise between the Euclidean distance and the Mahalanobis distance improves the performance of the neural network; and the third column shows that the 3D convolutional neural network outperforms the 2D convolutional neural network.
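For reference, a CMC curve can be computed from a matrix of query-to-gallery distances as below; this is a minimal sketch assuming each query identity appears in the gallery, not the evaluation code of the patent.

```python
import numpy as np

def cmc_curve(dist, query_ids, gallery_ids):
    """CMC: fraction of queries whose correct match appears within rank k.

    dist: (num_query, num_gallery) distance matrix.
    Returns an array whose k-th entry is the rank-(k+1) matching rate.
    """
    num_query, num_gallery = dist.shape
    hits = np.zeros(num_gallery)
    for q in range(num_query):
        order = np.argsort(dist[q])           # gallery sorted by distance
        rank = np.where(gallery_ids[order] == query_ids[q])[0][0]
        hits[rank] += 1                       # correct match found at this rank
    return np.cumsum(hits) / num_query        # cumulative over ranks 1..num_gallery
```

A curve closer to 100% at low ranks indicates better re-identification performance, which is how Fig. 3 and Tables 1 and 2 are read.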
Tables 1 and 2 compare the performance of the invention with existing algorithms. It can be seen that the algorithm used by the invention outperforms existing algorithms in similarity ranking.
Table 1: performance comparison on the iLIDs-VID image-sequence data set
Table 2: performance comparison on the Prid2011 image-sequence data set
It should be emphasized that the embodiments described are illustrative rather than limiting. The present invention therefore includes, but is not limited to, the embodiments described in the detailed description; any other embodiment derived by those skilled in the art from the technical scheme of the invention also falls within the scope of protection of the invention.
Claims (6)
1. A 3D convolutional neural network construction method using training samples and a compromise metric, characterized by comprising the following steps:
Step 1: construct a 3D convolutional neural network with a twin (Siamese) structure;
Step 2: set the loss function of the network, the loss function being composed of a positive-sample loss, a negative-sample loss and a regularization loss, with the Mahalanobis distance and the Euclidean distance combined in the regularization loss;
Step 3: using the softmax loss function, pre-train the network on a data set of video sequences;
Step 4: construct positive sample pairs and negative sample pairs, and pre-process and segment the images;
Step 5: train the network with selectively chosen training samples.
2. The 3D convolutional neural network construction method using training samples and a compromise metric according to claim 1, characterized in that the 3D convolutional neural network built in step 1 comprises the following two identical branch network structures:
3D convolutional layer → batch normalization layer → activation layer → Dropout layer → 3D convolutional layer → batch normalization layer → activation layer → Dropout layer → max-pooling layer → 3D convolutional layer → batch normalization layer → activation layer → Dropout layer → max-pooling layer → 3D convolutional layer → batch normalization layer → activation layer → Dropout layer → 3D convolutional layer → batch normalization layer → activation layer → Dropout layer → max-pooling layer → first fully connected layer → second fully connected layer.
3. The 3D convolutional neural network construction method using training samples and a compromise metric according to claim 2, characterized in that: the kernel size of each 3D convolutional layer is 3*3*3; the activation layers use ReLU; the Dropout rate is 0.2; the max-pooling window is 1*2*2; the first fully connected layer is 4096*4096; and the second fully connected layer is 4096*1000.
4. The 3D convolutional neural network construction method using training samples and a compromise metric according to claim 1, characterized in that the specific processing of step 2 is:
let the two outputs of the twin network be Ψ(x_1) and Ψ(x_2), where x_1 and x_2 are the original input data of the network, and Ψ(x_1) and Ψ(x_2) are the 1000-dimensional features output by the last fully connected layer of the network; the distance between the two samples is then defined as:

$$d(x_1, x_2) = \|\Psi(x_1) - \Psi(x_2)\|_2$$

and the sign (positive or negative) of a pair is labelled according to the following formula:
$$\mathrm{sign}(x_1, x_2) = \begin{cases} 1, & \text{if } I(x_1) = I(x_2), \\ 0, & \text{if } I(x_1) \neq I(x_2) \end{cases}$$
where I(x_k) (k = 1, 2) is the pedestrian identity of x_k;
a pair of samples with the same pedestrian identity is a positive pair, and a pair with different identities is a negative pair; the positive-sample loss is then defined as:
$$L_p = \frac{1}{N_p} \sum_{\substack{\mathrm{sign}(x_1, x_2) = 1 \\ d(x_1, x_2) > D_n - m}} d(x_1, x_2)$$
where N_p is the number of positive pairs and m is a margin parameter;
the negative-sample loss is defined as:
$$L_n = \frac{1}{N_n} \sum_{\substack{\mathrm{sign}(x_1, x_2) = 0 \\ d(x_1, x_2) < D_p + m}} \max\left(t - d(x_1, x_2),\ 0\right)$$
where t is a threshold used to decide whether a negative-pair distance is penalized;
the regularization loss is defined as:
$$L_b = \frac{2}{\lambda} \left\| W W^{T} - I \right\|_F^2$$
where W is the parameter matrix of the last fully connected layer and λ is a balance parameter: when λ is large the metric is mainly the Euclidean distance, and when λ is small the metric is mainly the Mahalanobis distance;
the whole loss function is:

$$L = L_p + L_n + L_b.$$
5. The 3D convolutional neural network construction method using training samples and a compromise metric according to claim 1, characterized in that the specific processing of step 4 is: first, resize each input image to a width of 128 pixels and a height of 64 pixels, and apply Retinex processing to the original image; then divide the image into three overlapping parts (upper, middle and lower), each of size 64*64; finally, superimpose the image sequences of the three parts to form the input data.
6. The 3D convolutional neural network construction method using training samples and a compromise metric according to claim 1, characterized in that the specific processing of step 5 is: according to the positive-sample loss and the negative-sample loss in step 2, for the sample pairs that satisfy the conditions, compute the loss function and update the model parameters with stochastic gradient descent.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711033085.3A CN107729993A (en) | 2017-10-30 | 2017-10-30 | Method for constructing a 3D convolutional neural network using training samples and a compromise metric |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107729993A true CN107729993A (en) | 2018-02-23 |
Family
ID=61203231
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711033085.3A Pending CN107729993A (en) | 2017-10-30 | 2017-10-30 | Utilize training sample and the 3D convolutional neural networks construction methods of compromise measurement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107729993A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108960258A (en) * | 2018-07-06 | 2018-12-07 | 江苏迪伦智能科技有限公司 | A kind of template matching method based on self study depth characteristic |
CN109190446A (en) * | 2018-07-06 | 2019-01-11 | 西北工业大学 | Pedestrian's recognition methods again based on triple focused lost function |
CN109376990A (en) * | 2018-09-14 | 2019-02-22 | 中国电力科学研究院有限公司 | A kind of method and system for the critical clearing time determining electric system based on Siamese network model |
CN109711316A (en) * | 2018-12-21 | 2019-05-03 | 广东工业大学 | A kind of pedestrian recognition methods, device, equipment and storage medium again |
CN110175247A (en) * | 2019-03-13 | 2019-08-27 | 北京邮电大学 | A method of abnormality detection model of the optimization based on deep learning |
CN110414586A (en) * | 2019-07-22 | 2019-11-05 | 杭州沃朴物联科技有限公司 | Antifalsification label based on deep learning tests fake method, device, equipment and medium |
CN110610191A (en) * | 2019-08-05 | 2019-12-24 | 深圳优地科技有限公司 | Elevator floor identification method and device and terminal equipment |
CN111027394A (en) * | 2019-11-12 | 2020-04-17 | 天津大学 | Behavior classification method based on twin three-dimensional convolution neural network |
CN112185543A (en) * | 2020-09-04 | 2021-01-05 | 南京信息工程大学 | Construction method of medical induction data flow classification model |
CN113326494A (en) * | 2021-05-31 | 2021-08-31 | 湖北微特传感物联研究院有限公司 | Identity information authentication method, system, computer equipment and readable storage medium |
CN113870254A (en) * | 2021-11-30 | 2021-12-31 | 中国科学院自动化研究所 | Target object detection method and device, electronic equipment and storage medium |
CN115114844A (en) * | 2022-05-09 | 2022-09-27 | 东南大学 | Meta learning prediction model for reinforced concrete bonding slip curve |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102722714A (en) * | 2012-05-18 | 2012-10-10 | 西安电子科技大学 | Artificial neural network expanding type learning method based on target tracking |
CN104281858A (en) * | 2014-09-15 | 2015-01-14 | 中安消技术有限公司 | Three-dimensional convolutional neutral network training method and video anomalous event detection method and device |
CN107292259A (en) * | 2017-06-15 | 2017-10-24 | 国家新闻出版广电总局广播科学研究院 | The integrated approach of depth characteristic and traditional characteristic based on AdaRank |
- 2017-10-30: CN CN201711033085.3A patent/CN107729993A/en, status: Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102722714A (en) * | 2012-05-18 | 2012-10-10 | 西安电子科技大学 | Artificial neural network expanding type learning method based on target tracking |
CN104281858A (en) * | 2014-09-15 | 2015-01-14 | 中安消技术有限公司 | Three-dimensional convolutional neutral network training method and video anomalous event detection method and device |
CN107292259A (en) * | 2017-06-15 | 2017-10-24 | 国家新闻出版广电总局广播科学研究院 | The integrated approach of depth characteristic and traditional characteristic based on AdaRank |
Non-Patent Citations (1)
Title |
---|
HAILIN SHI et al.: "Embedding Deep Metric for Person Re-identification: A Study Against Large Variations", 2016 European Conference on Computer Vision *
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108960258A (en) * | 2018-07-06 | 2018-12-07 | 江苏迪伦智能科技有限公司 | A kind of template matching method based on self study depth characteristic |
CN109190446A (en) * | 2018-07-06 | 2019-01-11 | 西北工业大学 | Pedestrian's recognition methods again based on triple focused lost function |
CN109376990A (en) * | 2018-09-14 | 2019-02-22 | 中国电力科学研究院有限公司 | A kind of method and system for the critical clearing time determining electric system based on Siamese network model |
CN109376990B (en) * | 2018-09-14 | 2022-03-04 | 中国电力科学研究院有限公司 | Method and system for determining critical removal time of power system based on Simese network model |
CN109711316A (en) * | 2018-12-21 | 2019-05-03 | 广东工业大学 | A kind of pedestrian recognition methods, device, equipment and storage medium again |
CN109711316B (en) * | 2018-12-21 | 2022-10-21 | 广东工业大学 | Pedestrian re-identification method, device, equipment and storage medium |
CN110175247B (en) * | 2019-03-13 | 2021-06-08 | 北京邮电大学 | Method for optimizing anomaly detection model based on deep learning |
CN110175247A (en) * | 2019-03-13 | 2019-08-27 | 北京邮电大学 | A method of abnormality detection model of the optimization based on deep learning |
CN110414586A (en) * | 2019-07-22 | 2019-11-05 | 杭州沃朴物联科技有限公司 | Antifalsification label based on deep learning tests fake method, device, equipment and medium |
CN110414586B (en) * | 2019-07-22 | 2021-10-26 | 杭州沃朴物联科技有限公司 | Anti-counterfeit label counterfeit checking method, device, equipment and medium based on deep learning |
CN110610191A (en) * | 2019-08-05 | 2019-12-24 | 深圳优地科技有限公司 | Elevator floor identification method and device and terminal equipment |
CN111027394A (en) * | 2019-11-12 | 2020-04-17 | 天津大学 | Behavior classification method based on twin three-dimensional convolution neural network |
CN111027394B (en) * | 2019-11-12 | 2023-07-07 | 天津大学 | Behavior classification method based on twin three-dimensional convolutional neural network |
CN112185543A (en) * | 2020-09-04 | 2021-01-05 | 南京信息工程大学 | Construction method of medical induction data flow classification model |
CN113326494A (en) * | 2021-05-31 | 2021-08-31 | 湖北微特传感物联研究院有限公司 | Identity information authentication method, system, computer equipment and readable storage medium |
CN113326494B (en) * | 2021-05-31 | 2023-08-18 | 湖北微特传感物联研究院有限公司 | Identity information authentication method, system, computer device and readable storage medium |
CN113870254A (en) * | 2021-11-30 | 2021-12-31 | 中国科学院自动化研究所 | Target object detection method and device, electronic equipment and storage medium |
CN115114844A (en) * | 2022-05-09 | 2022-09-27 | 东南大学 | Meta learning prediction model for reinforced concrete bonding slip curve |
CN115114844B (en) * | 2022-05-09 | 2023-09-19 | 东南大学 | Meta-learning prediction model of reinforced concrete bonding slip curve |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107729993A (en) | Method for constructing a 3D convolutional neural network using training samples and a compromise metric | |
CN108537743B (en) | Face image enhancement method based on generation countermeasure network | |
CN110348399B (en) | Hyperspectral intelligent classification method based on prototype learning mechanism and multidimensional residual error network | |
CN105512289B (en) | Image search method based on deep learning and Hash | |
CN110210539B (en) | RGB-T image saliency target detection method based on multi-level depth feature fusion | |
CN109670528B (en) | Data expansion method facing pedestrian re-identification task and based on paired sample random occlusion strategy | |
CN107341452A (en) | Human bodys' response method based on quaternary number space-time convolutional neural networks | |
CN112308158A (en) | Multi-source field self-adaptive model and method based on partial feature alignment | |
CN108960140A (en) | The pedestrian's recognition methods again extracted and merged based on multi-region feature | |
CN110097178A (en) | It is a kind of paid attention to based on entropy neural network model compression and accelerated method | |
CN108614997B (en) | Remote sensing image identification method based on improved AlexNet | |
CN109063649B (en) | Pedestrian re-identification method based on twin pedestrian alignment residual error network | |
CN106778604A (en) | Pedestrian's recognition methods again based on matching convolutional neural networks | |
CN110097029B (en) | Identity authentication method based on high way network multi-view gait recognition | |
CN113343901A (en) | Human behavior identification method based on multi-scale attention-driven graph convolutional network | |
CN108960288B (en) | Three-dimensional model classification method and system based on convolutional neural network | |
CN110852369B (en) | Hyperspectral image classification method combining 3D/2D convolutional network and adaptive spectrum unmixing | |
CN110222718A (en) | The method and device of image procossing | |
CN112800882B (en) | Mask face pose classification method based on weighted double-flow residual error network | |
CN114187308A (en) | HRNet self-distillation target segmentation method based on multi-scale pooling pyramid | |
Liu et al. | APSNet: Toward adaptive point sampling for efficient 3D action recognition | |
Chen et al. | A pornographic images recognition model based on deep one-class classification with visual attention mechanism | |
CN107832753B (en) | Face feature extraction method based on four-value weight and multiple classification | |
CN110991563B (en) | Capsule network random routing method based on feature fusion | |
CN104966075A (en) | Face recognition method and system based on two-dimensional discriminant features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
WD01 | Invention patent application deemed withdrawn after publication | | Application publication date: 20180223 |