CN108805151A - Image classification method based on a depth similarity network - Google Patents
Image classification method based on a depth similarity network
- Publication number
- CN108805151A CN108805151A CN201710313616.8A CN201710313616A CN108805151A CN 108805151 A CN108805151 A CN 108805151A CN 201710313616 A CN201710313616 A CN 201710313616A CN 108805151 A CN108805151 A CN 108805151A
- Authority
- CN
- China
- Prior art keywords
- image
- training
- picture
- network
- image classification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The present invention relates to an image classification method based on a depth similarity network, comprising: inputting training images into a training model; training the model a predetermined number of times; initializing an image feature extraction model; inputting a target image and extracting its features; performing a similarity calculation that incorporates the training-image features; and classifying the image using the image-feature similarity. The invention builds a deep image-feature-extraction training model to perform deep training on specified training images and optimizes the model with a cross-entropy loss function; during actual image feature extraction, the feature values extracted from the training images are incorporated into a similarity calculation, and accurate image classification is achieved with this computation model. The invention is the first to add a similarity calculation to image classification, effectively improving classification accuracy.
Description
Technical field
The present invention relates to an image classification method based on a depth similarity network, and in particular to a deep-network image classification technique in the field of video image recognition and classification processing.
Background technology
With the continuous development and convergence of multimedia and Internet technology, pictures, as one of the most intuitive media for conveying information, are appearing on the Internet and before people's eyes in geometrically growing numbers. While they provide rich, multi-dimensional information, their sheer volume also produces many redundant and junk pictures. How to classify and identify pictures is therefore a prerequisite for efficient picture retrieval and management, and a technical task that requires continuous optimization.
Relatively mature deep-learning-based picture classification methods already exist and have achieved good results. However, these deep-learning models are typically first trained on pictures from a training picture library and, once the model parameters are obtained, applied to actual picture classification. This weakens the relevance between the training picture library and the pictures actually being classified, which in turn lowers the precision of actual picture classification; the results still leave room for improvement.
Summary of the invention
To address the deficiencies of current picture classification techniques, the present invention relates to an image classification method based on a depth similarity network. Specific training pictures are randomly selected from a specific training picture pool and input one by one into a specific training model; training is carried out continuously, with model parameter values adjusted according to the result of the cross-entropy loss function at each training pass, and after training the image feature values of the training pictures are extracted. The trained parameters are then used to initialize an actual picture classification model; the pictures to be classified are input into this model for feature extraction, and the extracted features are combined with the training-image feature values in a specific similarity calculation, yielding more accurate feature values, from which the pictures are finally classified. By building a deep image-feature-extraction training model, performing deep training on specified training images, optimizing the model with a cross-entropy loss function, and incorporating the training-image feature values into a similarity calculation during actual image feature extraction, the invention achieves accurate image classification with this computation model; it is the first to add a similarity calculation to image classification, effectively improving classification accuracy.
The technical solution adopted by the present invention to solve the above problems comprises the following steps:
A training picture input step: randomly selecting a given number of pictures of a specified type from the specified training images as training images.
A model training step: building a specific training model and inputting the training pictures into the model for a specific number of training passes to obtain the image-feature-extraction model parameters.
Preferably, the training model comprises: a single convolution layer, a ReLU activation layer, a pooling layer, fully connected layers, a cross-entropy loss-function decision layer, and an image feature extraction layer.
Preferably, the single convolution layer has 64 convolution kernels of size 5*5, and the convolution formula is:
Zl(Xi) = Wl * Fl-1(Xi) + Bl
where l denotes the layer index, the input Fl-1 denotes the output of layer l-1, Wl is the weight parameter of layer l, Bl is the bias parameter of layer l, and Fl denotes the output of layer l.
Preferably, the pooling layer uses a kernel of size 3*3 with a stride of 2 pixels; the first fully connected layer uses 384 output nodes and the second uses 192 output nodes.
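As a sanity check on these layer sizes, the shape arithmetic can be traced with a short sketch. The 32*32 input size and the "valid" (no-padding) convolution are assumptions for illustration only, since the patent does not state them:

```python
def out_size(n, k, s=1):
    """Spatial size after a valid convolution or pooling: floor((n - k) / s) + 1."""
    return (n - k) // s + 1

h = 32                        # assumed square input size (not given in the patent)
h = out_size(h, 5)            # single conv layer, 5*5 kernels -> 28
channels = 64                 # 64 kernels -> 64 feature maps
h = out_size(h, 3, s=2)       # 3*3 pooling, stride 2 pixels -> 13
flat = h * h * channels       # flattened input to the fully connected layers
fc1, fc2 = 384, 192           # fully connected layer widths per the patent
print(h, flat, fc1, fc2)      # -> 13 10816 384 192
```

Under these assumptions the first fully connected layer would map a 10816-dimensional flattened vector down to 384 nodes, and the second down to 192.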
Preferably, the training model is trained as follows: using the class labels predicted by the network and the true class labels, a two-channel convolutional neural network is trained with the back-propagation algorithm.
Preferably, the cross-entropy loss function is:
L = -(1/n) Σi Σc tic·log(yic)
where i denotes the i-th sample picture, n denotes the size of the sample training set, yic denotes the network's predicted value for class c of the i-th picture, and tic denotes the true value for class c of that picture.
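A minimal sketch of this loss, assuming the standard averaged multi-class cross-entropy implied by the variable definitions (the prediction probabilities and one-hot labels below are illustration data):

```python
import numpy as np

def cross_entropy(y, t):
    """L = -(1/n) * sum_i sum_c t_ic * log(y_ic), averaged over n sample pictures."""
    eps = 1e-12                           # guard against log(0)
    return float(-np.mean(np.sum(t * np.log(y + eps), axis=1)))

y = np.array([[0.7, 0.2, 0.1],            # network predictions y_ic (rows sum to 1)
              [0.1, 0.8, 0.1]])
t = np.array([[1.0, 0.0, 0.0],            # true labels t_ic (one-hot)
              [0.0, 1.0, 0.0]])
loss = cross_entropy(y, t)                # mean of -log(0.7) and -log(0.8)
print(loss)
```

During training, the gradient of this loss with respect to the network outputs is what back-propagation uses to adjust the model parameter values.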
Preferably, the specific training steps are: performing one single-convolution operation, one ReLU activation operation, one pooling operation, one fully connected operation, and one cross-entropy loss-function decision that adjusts the model parameters.
Preferably, the image feature extraction operation is executed after the above training steps have been repeated in sequence.
A feature-extraction-network initialization step: using the network model parameter set obtained from training to initialize the network model used for image feature extraction.
A feature extraction step: extracting the image features of the input picture using the initialized network model.
Preferably, after the feature extraction operation on the actual picture is completed, a similarity calculation operation is added to compute the similarity relation between the training-picture features and the features of the actually detected picture.
A similarity calculation step: performing a specific calculation on the image features obtained from training and the actually extracted image features to obtain the similarity.
Preferably, the similarity calculation uses the following formula:
where K is the number of convolution layers, the first operand is the last-layer output of the final fully connected layer for the actual picture, and f(RK) is the feature value of the training image.
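The formula image itself is not reproduced in this text. As an illustration only, the sketch below compares the actual picture's fully-connected output against each stored training feature f(R_k) using cosine similarity; this is a plausible stand-in for a feature-similarity rule, not necessarily the patent's exact formula:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(actual_feat, training_feats):
    """Index and score of the most similar stored training feature f(R_k)."""
    sims = [cosine_sim(actual_feat, f) for f in training_feats]
    k = int(np.argmax(sims))
    return k, sims[k]

training_feats = [np.array([1.0, 0.0]),   # toy f(R_1)
                  np.array([0.0, 1.0])]   # toy f(R_2)
actual = np.array([0.9, 0.1])             # toy fully-connected output
print(best_match(actual, training_feats)) # closest to f(R_1), index 0
```

The resulting similarity scores could then be used to re-weight or refine the actual picture's feature values before classification, which matches the patent's stated intent of increasing correlation between training and actual images.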
An image classification step: classifying the image using the image features obtained from the similarity calculation.
With the above technical solution, the present invention has the following advantages:
The present invention relates to an image classification method based on a depth similarity network. Specific training pictures are randomly selected from a specific training picture pool and input in sequence into a specific training model; training is carried out repeatedly, with model parameter values continuously adjusted according to the result of the cross-entropy loss function, and after training the image feature values of the training pictures are extracted. The trained parameters are then used to initialize an actual picture classification model; the actual pictures to be classified are input into this model for feature extraction, and the extracted features are combined with the training-image feature values in a specific similarity calculation, yielding more accurate feature values, according to which the pictures are finally classified. By building a deep image-feature-extraction training model, performing deep training on specified training images, optimizing the model with a cross-entropy loss function, and incorporating the training-image feature values into a similarity calculation during actual image feature extraction, the invention achieves accurate image classification with this computation model; it is the first to add a similarity calculation to image classification, effectively improving classification accuracy.
Description of the drawings
Fig. 1 is a schematic diagram of the steps of an image classification method based on a depth similarity network according to a preferred embodiment of the present invention.
Fig. 2 is a detailed flowchart of an image classification method based on a depth similarity network according to a preferred embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, and not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, an embodiment of the invention discloses an image classification method based on a depth similarity network, the method comprising:
Step S1: training picture input — randomly selecting a given number of pictures of a specified type from the specified training images as training images.
Step S2: model training — building a specific training model and inputting the training pictures into it for a specific number of training passes to obtain the image-feature-extraction model parameters.
Step S3: feature-extraction-network initialization — using the network model parameter set obtained from training to initialize the network model used for image feature extraction.
Step S4: feature extraction — extracting the image features of the input picture using the initialized network model.
Step S5: similarity calculation — performing a specific calculation on the image features obtained from training and the actually extracted image features to obtain the similarity.
Step S6: image classification — classifying the image using the image features obtained from the similarity calculation.
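Steps S1 to S6 can be organized as a control-flow skeleton like the following. The CNN itself is replaced by a stub feature extractor, and the nearest-neighbour cosine rule is an illustrative assumption, so this sketches only the flow of the method, not the patented model:

```python
import numpy as np

class DepthSimilarityClassifier:
    """Control-flow sketch of steps S1-S6 (stub features, assumed cosine rule)."""

    def __init__(self):
        self.train_feats = []   # f(R_1) ... f(R_k) kept from the training phase
        self.labels = []

    def extract(self, image):
        # S4 stand-in: a real system would run the trained CNN here.
        v = np.asarray(image, dtype=float).ravel()
        return v / (np.linalg.norm(v) + 1e-12)

    def fit(self, images, labels):
        # S1-S3 stand-in: "train", then keep the training-image feature values.
        for img, lab in zip(images, labels):
            self.train_feats.append(self.extract(img))
            self.labels.append(lab)

    def classify(self, image):
        # S5-S6: similarity against the stored training features, then pick a class.
        q = self.extract(image)
        sims = [float(q @ f) for f in self.train_feats]
        return self.labels[int(np.argmax(sims))]

clf = DepthSimilarityClassifier()
clf.fit([[1, 0, 0, 0], [0, 0, 1, 1]], ["cat", "dog"])
print(clf.classify([1, 0.1, 0, 0]))   # most similar to the "cat" training feature
```

The key structural point matching the patent is that the training-image features are retained after training and consulted again at classification time, rather than being discarded once the model parameters are fixed.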
In this embodiment of the invention, sample pictures of a certain number and type are chosen as training pictures, and a specific image-feature-extraction training model is built from processing steps such as convolution, pooling, full connection, and a cross-entropy loss-function check. The training pictures are input into the training model for feature extraction, and the configuration parameters of the training model are adjusted through the cross-entropy loss function. The trained parameters are then used to initialize the actual-image feature extraction model, and the pictures whose features are to be extracted are input into this model; the extracted image features and the feature values extracted from the training images undergo a specific similarity calculation, from which the final image feature values are extracted, and image classification is performed according to these feature values.
It can be seen that, in this embodiment, incorporating the training-image feature values into a rule-based image-feature similarity calculation during actual image feature extraction yields new image feature values from the similarity calculation rule. Compared with a conventional image-feature training model, this embodiment increases the correlation between the training images and the actually extracted images through the relatedness computation, effectively improving the accuracy of image feature extraction and hence the accuracy, precision, and efficiency of image classification.
An embodiment of the invention discloses an image classification method based on a depth similarity network; referring to Fig. 2, this embodiment further explains and optimizes the technical solution relative to the previous embodiment. Specifically, in this embodiment the image classification method based on a depth similarity network comprises the following steps:
S1:Training picture input.
Preferably, a given number of pictures of a specified type are randomly selected from the specified training images as training images.
S2:Model training.
Preferably, a specific training model is built and the training pictures are input into the model for a specific number of training passes to obtain the image-feature-extraction model parameters.
Preferably, a single convolution processing operation is performed on the training image by executing step S21, using 64 convolution kernels of size 5*5; the convolution formula is:
Zl(Xi) = Wl * Fl-1(Xi) + Bl
where l denotes the layer index, the input Fl-1 denotes the output of layer l-1, Wl is the weight parameter of layer l, Bl is the bias parameter of layer l, and Fl denotes the output of layer l. The result of the convolution operation is input to step S22.
Preferably, by executing step S22, ReLU activation processing is applied to the input data to ensure the sparsity of the network, and the processing result is input to step S23.
Preferably, by executing step S23, deep pooling processing is applied to the input data; the pooling layer uses a kernel of size 3*3 with a stride of 2 pixels. The result of the pooling operation is input to step S24.
Preferably, by executing step S24, a fully connected processing operation with 384 output nodes is applied to the input data, and the result is input to step S25.
Preferably, by executing step S25, the cross-entropy loss function is computed on the input results to realize adaptive adjustment of the training-model parameters; the cross-entropy loss function is:
L = -(1/n) Σi Σc tic·log(yic)
where i denotes the i-th sample picture, n denotes the size of the sample training set, yic denotes the network's predicted value for class c of the i-th picture, and tic denotes the true value for class c of that picture. The result of the cross-entropy loss computation is then input to step S26.
Preferably, by executing step S26, steps S1 and S21 to S24 are executed repeatedly in a loop, and the execution result is input to step S27.
Preferably, when step S26 is executed, the fully connected layer uses 192 output nodes.
Preferably, step S27 judges whether training has finished; the termination condition is a specific number of training passes. If training has not yet finished, all operations from step S1 to step S26 are repeated; otherwise step S28 is executed.
Preferably, by executing step S28, the image-feature-extraction operation on the training images is realized, generating the specific feature value set {f(R1), f(R2) ... f(Rk)}.
S3: feature-extraction-network initialization.
Preferably, the network model parameter set trained in step S2 is used to initialize the network model for image feature extraction.
S4:Feature extraction.
Preferably, the image features of the input picture are extracted using the network model initialized in step S3, and the extraction result is input to step S5.
S5:Similarity calculation.
Preferably, a specific calculation is performed on the image features obtained from training and the image features actually extracted in step S4 to obtain the similarity.
Preferably, after the feature extraction of step S4, a specific similarity calculation is performed between the training-image feature set obtained in step S28 and the actually extracted image feature set; the calculation formula is:
where K is the number of convolution layers, the first operand is the last-layer output of the final fully connected layer for the actual picture, and f(RK) is the feature value of the training image.
Preferably, the new image feature values extracted according to the similarity relation are computed with the above similarity formula.
S6:Image classification.
Preferably, image classification is performed using the image features extracted on the basis of the similarity calculation obtained in step S5.
In summary: step S1 completes the random selection of training images of a specific number and type, and the result is input to step S21, which performs a single convolution operation with 64 5*5 convolution kernels; step S22 then applies one ReLU activation to the convolution result to ensure the sparsity of the network; step S23 applies one pooling operation with a 3*3 kernel and a stride of 2 pixels; step S24 applies one fully connected operation with 384 output nodes; step S25 applies the cross-entropy loss-function decision to the result, realizing adaptive adjustment of the network configuration parameters; based on the decision result, step S26 repeats all operations from step S1 to step S25; step S27 then judges whether the target number of training passes has been reached — if not, all operations from step S1 to step S26 are repeated, otherwise step S28 performs the feature extraction on the training pictures. Step S3 initializes the actual feature extraction model with the parameter set obtained from training; step S4 performs feature extraction on the actual pictures; step S5 performs the similarity calculation on all image features, specifically computing, with the similarity formula, the similarity between the extracted features and the features extracted during training, and obtaining the final picture feature values from the result; finally, step S6 classifies the images according to the image feature values extracted with the similarity algorithm. By incorporating the training-image feature values into a rule-based image-feature similarity calculation during actual image feature extraction, new image feature values are obtained from the similarity calculation rule; compared with a conventional image-feature training model, this embodiment increases the correlation between training images and actually extracted images through the relatedness computation, effectively improving the accuracy of image feature extraction and hence the accuracy, precision, and efficiency of image classification.
The foregoing is merely illustrative and not restrictive. Those skilled in the art may make various changes and modifications to the invention without departing from its spirit and scope. Accordingly, if such modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to embrace them as well.
Claims (10)
1. An image classification method based on a depth similarity network, characterized in that the method comprises the following steps:
a training picture input step: randomly selecting a given number of pictures of a specified type from specified training images as training images;
a model training step: building a specific training model and inputting the training pictures into the model for a specific number of training passes to obtain image-feature-extraction model parameters;
a feature-extraction-network initialization step: using the network model parameter set obtained from training to initialize the network model used for image feature extraction;
a feature extraction step: extracting the image features of the input picture using the initialized network model;
a similarity calculation step: performing a specific calculation on the image features obtained from training and the actually extracted image features to obtain the similarity;
an image classification step: classifying the image using the image features obtained from the similarity calculation.
2. The image classification method based on a depth similarity network according to claim 1, characterized in that, in the model training step, the training model comprises: a single convolution layer, a ReLU activation layer, a pooling layer, fully connected layers, a cross-entropy loss-function decision layer, and an image feature extraction layer.
3. The image classification method based on a depth similarity network according to claim 2, characterized in that the single convolution layer has 64 convolution kernels of size 5*5, and the convolution formula is:
Zl(Xi) = Wl * Fl-1(Xi) + Bl
where l denotes the layer index, the input Fl-1 denotes the output of layer l-1, Wl is the weight parameter of layer l, Bl is the bias parameter of layer l, and Fl denotes the output of layer l.
4. The image classification method based on a depth similarity network according to claim 2, characterized in that the pooling layer uses a kernel of size 3*3 with a stride of 2 pixels; the first fully connected layer uses 384 output nodes and the second uses 192 output nodes.
5. The image classification method based on a depth similarity network according to claim 2, characterized in that the training model is trained as follows: using the class labels predicted by the network and the true class labels, a two-channel convolutional neural network is trained with the back-propagation algorithm.
6. The image classification method based on a depth similarity network according to claim 2, characterized in that the cross-entropy loss function is:
L = -(1/n) Σi Σc tic·log(yic)
where i denotes the i-th sample picture, n denotes the size of the sample training set, yic denotes the network's predicted value for class c of the i-th picture, and tic denotes the true value for class c of that picture.
7. The image classification method based on a depth similarity network according to claim 1, characterized in that, in the model training step, the specific training steps are: performing one single-convolution operation, one ReLU activation operation, one pooling operation, one fully connected operation, and one cross-entropy loss-function decision that adjusts the model parameters.
8. The image classification method based on a depth similarity network according to claim 7, characterized in that the image feature extraction operation is executed after the training steps have been repeated in sequence.
9. The image classification method based on a depth similarity network according to claim 1, characterized in that, in the feature extraction step, after the feature extraction operation on the actual picture is completed, a similarity calculation operation is added to compute the similarity relation between the training-picture features and the features of the actually detected picture.
10. The image classification method based on a depth similarity network according to claim 9, characterized in that the similarity calculation uses the following formula:
where K is the number of convolution layers, the first operand is the last-layer output of the final fully connected layer for the actual picture, and f(RK) is the feature value of the training image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710313616.8A CN108805151B (en) | 2017-05-05 | 2017-05-05 | Image classification method based on depth similarity network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710313616.8A CN108805151B (en) | 2017-05-05 | 2017-05-05 | Image classification method based on depth similarity network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108805151A true CN108805151A (en) | 2018-11-13 |
CN108805151B CN108805151B (en) | 2021-05-25 |
Family
ID=64054787
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710313616.8A Active CN108805151B (en) | 2017-05-05 | 2017-05-05 | Image classification method based on depth similarity network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108805151B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109766920A (en) * | 2018-12-18 | 2019-05-17 | 任飞翔 | Article characteristics Model Calculating Method and device based on deep learning |
CN112580546A (en) * | 2020-12-24 | 2021-03-30 | 电子科技大学 | Cross-view image matching method for unmanned aerial vehicle image and satellite image |
CN112836719A (en) * | 2020-12-11 | 2021-05-25 | 南京富岛信息工程有限公司 | Indicator diagram similarity detection method fusing two classifications and three groups |
CN112906724A (en) * | 2019-11-19 | 2021-06-04 | 华为技术有限公司 | Image processing device, method, medium and system |
CN114677545A (en) * | 2022-03-29 | 2022-06-28 | 电子科技大学 | Lightweight image classification method based on similarity pruning and efficient module |
CN115690443A (en) * | 2022-09-29 | 2023-02-03 | 北京百度网讯科技有限公司 | Feature extraction model training method, image classification method and related device |
CN117151549A (en) * | 2023-10-31 | 2023-12-01 | 南通鸿图健康科技有限公司 | Multi-dimensional detection method and device for production quality of fitness equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104408405A (en) * | 2014-11-03 | 2015-03-11 | 北京畅景立达软件技术有限公司 | Face representation and similarity calculation method |
CN106022285A (en) * | 2016-05-30 | 2016-10-12 | 北京智芯原动科技有限公司 | Vehicle type identification method and vehicle type identification device based on convolutional neural network |
-
2017
- 2017-05-05 CN CN201710313616.8A patent/CN108805151B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104408405A (en) * | 2014-11-03 | 2015-03-11 | 北京畅景立达软件技术有限公司 | Face representation and similarity calculation method |
CN106022285A (en) * | 2016-05-30 | 2016-10-12 | 北京智芯原动科技有限公司 | Vehicle type identification method and vehicle type identification device based on convolutional neural network |
Non-Patent Citations (2)
Title |
---|
DAYDAYUP_668819: "Deep Learning Notes 2: Pooling, Fully Connected Layers, Activation Functions, Softmax", 《HTTPS://BLOG.CSDN.NET/DAYDAYUP_668819/ARTICLE/DETAILS/59486223》 * |
归喆: "Face feature extraction and matching based on deep learning", China Master's Theses Full-text Database * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109766920A (en) * | 2018-12-18 | 2019-05-17 | 任飞翔 | Article characteristics Model Calculating Method and device based on deep learning |
CN112906724A (en) * | 2019-11-19 | 2021-06-04 | 华为技术有限公司 | Image processing device, method, medium and system |
CN112836719A (en) * | 2020-12-11 | 2021-05-25 | 南京富岛信息工程有限公司 | Indicator diagram similarity detection method fusing two classifications and three groups |
CN112836719B (en) * | 2020-12-11 | 2024-01-05 | 南京富岛信息工程有限公司 | Indicator diagram similarity detection method integrating two classifications and triplets |
CN112580546A (en) * | 2020-12-24 | 2021-03-30 | 电子科技大学 | Cross-view image matching method for unmanned aerial vehicle image and satellite image |
CN114677545A (en) * | 2022-03-29 | 2022-06-28 | 电子科技大学 | Lightweight image classification method based on similarity pruning and efficient module |
CN114677545B (en) * | 2022-03-29 | 2023-05-23 | 电子科技大学 | Lightweight image classification method based on similarity pruning and efficient module |
CN115690443A (en) * | 2022-09-29 | 2023-02-03 | 北京百度网讯科技有限公司 | Feature extraction model training method, image classification method and related device |
CN117151549A (en) * | 2023-10-31 | 2023-12-01 | 南通鸿图健康科技有限公司 | Multi-dimensional detection method and device for production quality of fitness equipment |
CN117151549B (en) * | 2023-10-31 | 2023-12-22 | 南通鸿图健康科技有限公司 | Multi-dimensional detection method and device for production quality of fitness equipment |
Also Published As
Publication number | Publication date |
---|---|
CN108805151B (en) | 2021-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108805151A (en) | Image classification method based on a depth similarity network | |
Bulat et al. | Toward fast and accurate human pose estimation via soft-gated skip connections | |
Peng et al. | Syn2real: A new benchmark forsynthetic-to-real visual domain adaptation | |
Sadegh Aliakbarian et al. | Encouraging lstms to anticipate actions very early | |
CN106778854B (en) | Behavior identification method based on trajectory and convolutional neural network feature extraction | |
CN108510012A (en) | A kind of target rapid detection method based on Analysis On Multi-scale Features figure | |
Liu et al. | Learning spatio-temporal representations for action recognition: A genetic programming approach | |
CN107229904A (en) | A kind of object detection and recognition method based on deep learning | |
CN106779073A (en) | Media information sorting technique and device based on deep neural network | |
CN104408760B (en) | A kind of high-precision virtual assembly system algorithm based on binocular vision | |
CN104036255A (en) | Facial expression recognition method | |
CN107480642A (en) | A kind of video actions recognition methods based on Time Domain Piecewise network | |
Puranik et al. | Human perception-based color image segmentation using comprehensive learning particle swarm optimization | |
KR102199467B1 (en) | Method for collecting data for machine learning | |
CN107423747A (en) | A kind of conspicuousness object detection method based on depth convolutional network | |
Li et al. | Modelling human body pose for action recognition using deep neural networks | |
CN113221663A (en) | Real-time sign language intelligent identification method, device and system | |
Li et al. | Meta learning for task-driven video summarization | |
WO2022170046A1 (en) | System and method for evaluating defensive performance using graph convolutional network | |
CN112329327A (en) | Hardware-aware liquid state machine network generation method and system | |
Huang et al. | Feature context learning for human parsing | |
Park et al. | Binary dense sift flow based two stream CNN for human action recognition | |
Zhao et al. | A survey of deep learning in sports applications: Perception, comprehension, and decision | |
CN108875555B (en) | Video interest area and salient object extracting and positioning system based on neural network | |
CN110110812A (en) | A kind of crossfire depth network model for video actions identification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |