CN108681708A - Palm vein image recognition method, device and storage medium based on an Inception neural network model - Google Patents
Palm vein image recognition method, device and storage medium based on an Inception neural network model
- Publication number
- CN108681708A CN108681708A CN201810468940.1A CN201810468940A CN108681708A CN 108681708 A CN108681708 A CN 108681708A CN 201810468940 A CN201810468940 A CN 201810468940A CN 108681708 A CN108681708 A CN 108681708A
- Authority
- CN
- China
- Prior art keywords
- neural network
- network models
- image
- inception
- inception neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/14—Vascular patterns
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a palm vein image recognition method, device, and storage medium based on an Inception neural network model. By training on palm vein image data with an Inception neural network model, the method reduces the influence of image noise on feature extraction and improves the accuracy and speed of image recognition.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a palm vein image recognition method, device, and storage medium based on an Inception neural network model.
Background technology
In palm vein image recognition, the features of each image must first be extracted and stored in a database before recognition can be performed; these features later serve as the key elements for identity comparison. The reliability of the feature points extracted from a vein image therefore directly affects the precision of vein matching. Traditional vein recognition algorithms typically use classical image feature extraction methods, such as feature extraction based on Hu invariant moments. However, extensive experiments show that poor vein image quality makes vein feature extraction very difficult. Although vein image enhancement can improve image quality, enhancement can rarely restore the clarity of the vein skeleton and itself introduces error. As a result, palm vein recognition based on feature values obtained by traditional feature extraction methods yields results of limited accuracy and stability.
In the present technique, a deep learning method based on an Inception convolutional neural network model is used for image classification and verification. By continually adjusting the parameters of the neural network model, this method extracts image feature values more accurately, and thus achieves faster and more accurate image recognition and classification than previous methods.
Summary of the invention
In view of this, the purpose of the present invention is to propose a palm vein image recognition method, device, and storage medium based on an Inception neural network model. By training on palm vein image data with an Inception neural network model, the method reduces the influence of image noise on feature extraction and improves the accuracy and speed of image recognition.
The present invention is realized by the following scheme: a palm vein image recognition method based on an Inception neural network model, comprising the following steps:
Step S1: Obtain palm vein images of people of different identities;
Step S2: Collect the acquired palm vein images into folders, one per identity, as training data; each folder is named with the corresponding identity label;
Step S3: Perform transfer learning on an existing trained Inception neural network model using the training data from step S2 to build the required Inception neural network model;
Step S4: Save the built Inception neural network model by model persistence, and recognize the images in the user-specified path: import the saved Inception neural network model and the test pictures into a program and, after running it, obtain the identity label corresponding to each test picture and the recognition accuracy.
Further, in step S1, the palms of the people of different identities are photographed with a camera fitted with filters of different wavelengths, and the combination of filters is adjusted so that the camera captures palm vein images of good imaging quality.
Further, in step S2, when the palm vein images are collected, the folder for each person of different identity is named with that person's name, so the identity label in step S4 is the person's name.
Further, step S3 specifically comprises the following steps:
Step S31: Define the number of nodes in the bottleneck layer of the Inception neural network model;
Step S32: Define the name of the tensor that represents the bottleneck layer in the Inception neural network model;
Step S33: Define the name of the image input tensor;
Step S34: Using the file directory of the existing trained Inception neural network model, compute the feature vector of each original image with the Inception neural network model and save it to a file;
Step S35: Define the file location of the input images;
Step S36: Define the percentage of data used for validation;
Step S37: Define the percentage of data used for testing;
Step S38: Define the learning rate of the Inception neural network, the number of training iterations, and the amount of data used in each iteration;
Step S39: Define a method function that fetches and processes the images captured by the camera;
Step S310: Process one picture with the existing trained Inception neural network model to obtain its feature vector, and compress the feature vector from a four-dimensional array into a one-dimensional array;
Step S311: Obtain the data of a random batch of pictures by the method of step S310, as training data;
Step S312: Obtain all the test set data by the procedure of step S311;
Step S313: Define the training process and build the required Inception neural network model.
Further, the method function in step S39 specifically comprises the following steps:
Step S391: Store all pictures in a dictionary data type in the running program;
Step S392: Obtain all subdirectories of the current directory and traverse them;
Step S393: Obtain all valid picture file formats under the current directory; the folders store the valid photos, recorded by picture name;
Step S394: Return the final file name; the folder name at this point is the class name;
Step S395: Return the list of all matching file paths, add the pictures found to the file list, and obtain the class name from the directory name;
Step S396: Initialize the current class and generate the validation set, test set, and training set.
Further, step S313 specifically comprises the following steps:
Step S3131: Obtain all pictures and the number of classes, read the existing trained Inception neural network model, load this model, and obtain the tensors corresponding to the bottleneck layer and the data input;
Step S3132: Define the new neural network input, namely the node values of the bottleneck layer reached by forward propagation when a new picture passes through the model, which performs feature extraction; define the expected input of the new model, define one fully connected neural network layer with its weights and biases, and compute the result of the forward propagation algorithm;
Step S3133: Apply an activation function for non-linearity, define a cross-entropy loss function, compute the average loss, optimize the loss function with an optimization algorithm, and train to obtain the required Inception neural network model.
To achieve the above object, the present invention also provides a palm vein image recognition device based on an Inception neural network model. The device comprises a processor, a storage medium, and instructions stored on the storage medium for execution by the processor; the instructions are executed by the processor to implement the steps of the method described above.
In addition, to achieve the above object, the present invention also provides a storage medium on which instructions are stored; the instructions are executed by a processor to implement the steps of the method described above.
Compared with the prior art, the present invention has the following advantageous effects: unlike traditional palm vein recognition, which extracts features from a specific single image with a hand-crafted algorithm, the image feature extraction of the present palm vein image recognition method based on an Inception neural network model is the retraining and optimization of a deep learning model that already has very high recognition capability. Feature extraction is therefore more accurate, and the accuracy of vein recognition is correspondingly higher. This greatly reduces the recognition error rate of vein recognition applications in everyday scenarios, substantially improves security and reliability, and allows the method to be applied in more everyday scenarios, expanding its range of application.
Description of the drawings
Fig. 1 is a structural schematic diagram of the Inception neural network model used for palm vein recognition in an embodiment of the present invention.
Fig. 2 is a flow chart of the palm vein image recognition method in an embodiment of the present invention.
Detailed description of the embodiments
The present invention will be further described below with reference to the accompanying drawings and embodiments.
To solve the problems of accuracy and speed in conventional palm vein image recognition, the present embodiment provides a palm vein image recognition method based on an Inception neural network model. Fig. 1 is a structural schematic diagram of the Inception neural network model built in this method:
1. The Inception model is a convolutional neural network (CNN) model.
2. Bottleneck layer: the layer before the final fully connected layer is called the bottleneck layer. The process of passing a new image through a trained convolutional neural network up to the bottleneck layer can be regarded as feature extraction on the image, and the output node vector can be regarded as a more compact and more expressive feature vector of the image.
3. Fully connected layer: a neural network layer in which every node is connected to all nodes of the adjacent layer.
The model used in this embodiment adjusts the last fully connected layer to produce a better model.
In the present embodiment, as shown in Fig. 2, a palm vein image recognition method based on an Inception neural network model comprises the following steps:
Step S1: Photograph the palms of people of different identities with a camera fitted with filters of different wavelengths, and adjust the combination of filters so that the camera captures palm vein images of good imaging quality;
Step S2: Collect the acquired palm vein images into folders, one per identity, as training data; each folder is named with the corresponding identity label;
In particular, before the subsequent modelling steps, the images to be recognized need to be sorted by class (the classes here depend on the actual demand: they may be different objects of different kinds, such as animals, people, and buildings, or different individuals of the same kind, for example flowers of the same petal type or, in the embodiment of the present invention, the palm veins of different people) and stored in folders named with the corresponding class names.
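As an illustration of this folder-per-class layout, the following Python sketch sorts a flat directory of captured images into one folder per identity. The file-naming convention (`name_NN.png`) and the helper name `organize_by_identity` are hypothetical, introduced only for this example; the patent itself only requires that each folder carry an identity label.

```python
import os
import shutil

def organize_by_identity(src_dir: str, dst_dir: str) -> dict:
    """Sort captured images into one folder per identity.

    Assumes (hypothetically) that each file name starts with the
    person's name followed by an underscore, e.g. "alice_01.png".
    Returns a mapping from identity label to the list of moved files.
    """
    classes = {}
    for name in sorted(os.listdir(src_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in {".jpg", ".jpeg", ".png"}:
            continue  # skip non-image files
        label = stem.split("_")[0]  # folder name = identity label
        os.makedirs(os.path.join(dst_dir, label), exist_ok=True)
        shutil.move(os.path.join(src_dir, name),
                    os.path.join(dst_dir, label, name))
        classes.setdefault(label, []).append(name)
    return classes
```

After this step, each subfolder of `dst_dir` is one identity class, ready to serve as training data.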
Step S3: Perform transfer learning on the existing trained Inception neural network model using the training data from step S2 to build the required Inception neural network model, which specifically comprises the following steps:
Step S31: Define the number of nodes in the bottleneck layer of the Inception neural network model;
Step S32: Define the name of the tensor that represents the bottleneck layer in the Inception neural network model;
Step S33: Define the name of the image input tensor;
Step S34: Using the file directory of the existing trained Inception neural network model, compute the feature vector of each original image with the Inception neural network model and save it to a file;
Step S35: Define the file location of the input images;
Step S36: Define the percentage of data used for validation;
Step S37: Define the percentage of data used for testing;
Step S38: Define the learning rate of the Inception neural network, the number of training iterations, and the amount of data used in each iteration;
Step S39: Define a method function that fetches and processes the images captured by the camera; the method function specifically comprises the following steps:
Step S391: Store all pictures in a dictionary data type in the running program;
Step S392: Obtain all subdirectories of the current directory and traverse them;
Step S393: Obtain all valid picture file formats under the current directory; the folders store the valid photos, recorded by picture name;
Step S394: Return the final file name; the folder name at this point is the class name;
Step S395: Return the list of all matching file paths, add the pictures found to the file list, and obtain the class name from the directory name;
Step S396: Initialize the current class and generate the validation set, test set, and training set.
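A minimal Python sketch of the method function of steps S391–S396: it traverses the subdirectories of the image directory, collects valid picture files, takes each folder name as the class name, and splits the pictures into training, validation, and test sets. The stable-hash split and the percentage defaults are illustrative choices, not specified by the patent.

```python
import glob
import hashlib
import os

def create_image_lists(image_dir: str, val_pct: int = 10, test_pct: int = 10) -> dict:
    """Scan per-class subfolders and split pictures into training,
    validation and testing sets (a sketch of steps S391-S396)."""
    result = {}                                       # S391: dictionary of all pictures
    for sub in sorted(os.listdir(image_dir)):         # S392: traverse subdirectories
        sub_path = os.path.join(image_dir, sub)
        if not os.path.isdir(sub_path):
            continue
        files = []
        for ext in ("jpg", "jpeg", "png"):            # S393: valid picture formats
            files.extend(glob.glob(os.path.join(sub_path, "*." + ext)))
        label = sub.lower()                           # S394: folder name = class name
        sets = {"training": [], "validation": [], "testing": []}
        for path in sorted(files):                    # S395: matched file paths
            name = os.path.basename(path)
            # Stable hash so a given picture always lands in the same set.
            h = int(hashlib.sha1(name.encode()).hexdigest(), 16) % 100
            if h < val_pct:
                sets["validation"].append(name)
            elif h < val_pct + test_pct:
                sets["testing"].append(name)
            else:
                sets["training"].append(name)
        result[label] = sets                          # S396: per-class splits
    return result
```

Hashing the file name (rather than drawing randomly each run) keeps a picture in the same split across repeated training runs, which stabilizes validation figures.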
Step S310: Process one picture with the existing trained Inception neural network model to obtain its feature vector, and compress the feature vector from a four-dimensional array into a one-dimensional array;
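The compression in step S310 from a four-dimensional array to a one-dimensional array is simply the removal of singleton axes. A NumPy sketch (the 2048-wide bottleneck is the Inception-v3 value and is used here only as an illustration):

```python
import numpy as np

# The bottleneck output for one picture arrives as a four-dimensional
# array of shape (batch, height, width, channels); (1, 1, 1, 2048) is
# the Inception-v3 case and is used here only for illustration.
bottleneck_4d = np.ones((1, 1, 1, 2048), dtype=np.float32)
feature_vec = np.squeeze(bottleneck_4d)  # drop all singleton axes
```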
Step S311: Obtain the data of a random batch of pictures by the method of step S310, as training data;
Step S312: Obtain all the test set data by the procedure of step S311;
Step S313: Define the training process and build the required Inception neural network model, which specifically comprises the following steps:
Step S3131: Obtain all pictures and the number of classes, read the existing trained Inception neural network model, load this model, and obtain the tensors corresponding to the bottleneck layer and the data input;
Step S3132: Define the new neural network input, namely the node values of the bottleneck layer reached by forward propagation when a new picture passes through the model, which performs feature extraction; define the expected input of the new model, define one fully connected neural network layer with its weights and biases, and compute the result of the forward propagation algorithm;
Step S3133: Apply an activation function for non-linearity, define a cross-entropy loss function, compute the average loss, optimize the loss function with an optimization algorithm, and train to obtain the required Inception neural network model.
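Steps S3132–S3133 amount to training a single fully connected softmax layer on the bottleneck feature vectors with a cross-entropy loss. A NumPy sketch under that reading (the toy data, learning rate, and step count are illustrative, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_head(feats, labels, n_classes, lr=0.1, steps=200):
    """Train one fully connected layer (weights + bias) on bottleneck
    feature vectors with a cross-entropy loss and gradient descent."""
    n, d = feats.shape
    W = np.zeros((d, n_classes))          # weights of the new layer
    b = np.zeros(n_classes)               # bias of the new layer
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        probs = softmax(feats @ W + b)    # forward propagation
        grad = (probs - onehot) / n       # gradient of the mean cross-entropy
        W -= lr * feats.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

# Toy "bottleneck features": two well-separated identities.
feats = np.vstack([rng.normal(3.0, 1.0, (50, 8)),
                   rng.normal(-3.0, 1.0, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
W, b = train_head(feats, labels, n_classes=2)
pred = softmax(feats @ W + b).argmax(axis=1)
```

Only the weights and biases of this new layer are trained; all convolutional layer parameters of the Inception model stay frozen, which is what makes the retraining fast.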
Step S4: Save the built Inception neural network model by model persistence, and recognize the images in the user-specified path: import the saved Inception neural network model and the test pictures into a program and, after running it, obtain the identity label corresponding to each test picture and the recognition accuracy.
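The model persistence and path-based recognition of step S4 can be sketched as follows. Pickling the trained head and reporting a softmax probability as the confidence are illustrative choices; the patent does not fix a serialization format, and `save_model` and `recognize` are hypothetical helper names.

```python
import pickle
import numpy as np

def save_model(path, W, b, class_names):
    """Persist the trained classification head and its identity labels."""
    with open(path, "wb") as f:
        pickle.dump({"W": W, "b": b, "classes": class_names}, f)

def recognize(path, feature_vec):
    """Load the persisted model and return (identity label, confidence)
    for one bottleneck feature vector."""
    with open(path, "rb") as f:
        m = pickle.load(f)
    scores = feature_vec @ m["W"] + m["b"]
    e = np.exp(scores - scores.max())  # numerically stable softmax
    probs = e / e.sum()
    k = int(probs.argmax())
    return m["classes"][k], float(probs[k])
```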
As the above method steps show, transfer learning is the key to the image recognition method of this embodiment: it greatly reduces the time required to train the model. Under the TensorFlow deep learning framework, transfer learning on Inception yields the deep learning neural network model used by the program for recognition. In transfer learning, passing a new image through the trained convolutional neural network up to the bottleneck layer can be regarded as performing feature extraction on the image, which differs from traditional feature extraction methods. In the trained Inception model, feeding the bottleneck output through a new single-layer fully connected neural network distinguishes images of many classes well, so the node vector output by the bottleneck layer can be regarded as a more compact and more expressive feature vector of any image.
Taking this embodiment as an example, transfer learning can be summarized in the following steps:
First, obtain the Inception model file provided by Google, which has been trained on the mature ImageNet dataset; the parameters of its neural network have already been adjusted and optimized over a large number of training passes on images of many classes, so it is already capable of recognizing images;
Secondly, acquire the new dataset with the camera, i.e., the palm vein image dataset of the different people in this embodiment, and feed the dataset together with the Inception model file into the program for the feature extraction operation;
Then, keeping all convolutional layer parameters of the Inception model, use the extracted feature vectors as input to train a new single-layer fully connected neural network on top of the Inception model.
The above operations yield a new image recognition model dedicated to palm vein recognition. Importing the new model and the vein images to be recognized into the program and running it gives the recognition result, i.e., the name corresponding to each image.
To achieve the above object, this embodiment also provides a palm vein image recognition device based on an Inception neural network model. The device comprises a processor, a storage medium, and instructions stored on the storage medium for execution by the processor; the instructions are executed by the processor to implement the steps of the method described above.
In addition, to achieve the above object, this embodiment also provides a storage medium on which instructions are stored; the instructions are executed by a processor to implement the steps of the method described above.
By building a deep learning Inception neural network model to process and recognize palm vein image data, the embodiment of the present invention realizes the recognition of the identity corresponding to a palm vein, effectively avoiding the drawbacks of conventional methods, namely low efficiency and insufficient precision.
It should be noted that the division into functional modules used above to illustrate the deep learning palm vein recognition method is only an example. In practical applications, the functions may be assigned to different functional modules as needed, i.e., the internal structure of the classification model may be designed more precisely to complete all or part of the functions described above. Different architectures may also be used, which are not described again here.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments can be completed by hardware combined with software. The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (8)
1. A palm vein image recognition method based on an Inception neural network model, characterized by comprising the following steps:
Step S1: obtaining palm vein images of people of different identities;
Step S2: collecting the acquired palm vein images into folders, one per identity, as training data, each folder being named with the corresponding identity label;
Step S3: performing transfer learning on an existing trained Inception neural network model using the training data from step S2 to build the required Inception neural network model;
Step S4: saving the built Inception neural network model by model persistence, and recognizing the images in the user-specified path by importing the saved Inception neural network model and the test pictures into a program, obtaining, after running it, the identity label corresponding to each test picture and the recognition accuracy.
2. The palm vein image recognition method based on an Inception neural network model according to claim 1, characterized in that: in step S1, the palms of the people of different identities are photographed with a camera fitted with filters of different wavelengths, and the combination of filters is adjusted so that the camera captures palm vein images of good imaging quality.
3. The palm vein image recognition method based on an Inception neural network model according to claim 1, characterized in that: in step S2, when the palm vein images are collected, the folder for each person of different identity is named with that person's name, so the identity label in step S4 is the person's name.
4. The palm vein image recognition method based on an Inception neural network model according to claim 1, characterized in that step S3 specifically comprises the following steps:
Step S31: defining the number of nodes in the bottleneck layer of the Inception neural network model;
Step S32: defining the name of the tensor that represents the bottleneck layer in the Inception neural network model;
Step S33: defining the name of the image input tensor;
Step S34: using the file directory of the existing trained Inception neural network model, computing the feature vector of each original image with the Inception neural network model and saving it to a file;
Step S35: defining the file location of the input images;
Step S36: defining the percentage of data used for validation;
Step S37: defining the percentage of data used for testing;
Step S38: defining the learning rate of the Inception neural network, the number of training iterations, and the amount of data used in each iteration;
Step S39: defining a method function that fetches and processes the images captured by the camera;
Step S310: processing one picture with the existing trained Inception neural network model to obtain its feature vector, and compressing the feature vector from a four-dimensional array into a one-dimensional array;
Step S311: obtaining the data of a random batch of pictures by the method of step S310, as training data;
Step S312: obtaining all the test set data by the procedure of step S311;
Step S313: defining the training process and building the required Inception neural network model.
5. The palm vein image recognition method based on an Inception neural network model according to claim 1, characterized in that the method function in step S39 specifically comprises the following steps:
Step S391: storing all pictures in a dictionary data type in the running program;
Step S392: obtaining all subdirectories of the current directory and traversing them;
Step S393: obtaining all valid picture file formats under the current directory, the folders storing the valid photos, recorded by picture name;
Step S394: returning the final file name, the folder name at this point being the class name;
Step S395: returning the list of all matching file paths, adding the pictures found to the file list, and obtaining the class name from the directory name;
Step S396: initializing the current class and generating the validation set, test set, and training set.
6. The palm vein image recognition method based on an Inception neural network model according to claim 1, characterized in that step S313 specifically comprises the following steps:
Step S3131: obtaining all pictures and the number of classes, reading the existing trained Inception neural network model, loading this model, and obtaining the tensors corresponding to the bottleneck layer and the data input;
Step S3132: defining the new neural network input, namely the node values of the bottleneck layer reached by forward propagation when a new picture passes through the model, which performs feature extraction; defining the expected input of the new model, defining one fully connected neural network layer with its weights and biases, and computing the result of the forward propagation algorithm;
Step S3133: applying an activation function for non-linearity, defining a cross-entropy loss function, computing the average loss, optimizing the loss function with an optimization algorithm, and training to obtain the required Inception neural network model.
7. A palm vein image recognition device based on an Inception neural network model, characterized by comprising a processor, a storage medium, and instructions stored on the storage medium for execution by the processor, the instructions being executed by the processor to implement the steps of the method of any one of claims 1-6.
8. A storage medium on which instructions are stored, characterized in that the instructions are executed by a processor to implement the steps of the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810468940.1A CN108681708A (en) | 2018-05-16 | 2018-05-16 | Palm vein image recognition method, device and storage medium based on Inception neural network model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108681708A true CN108681708A (en) | 2018-10-19 |
Family
ID=63805581
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810468940.1A Pending CN108681708A (en) | 2018-05-16 | 2018-05-16 | Palm vein image recognition method, device and storage medium based on Inception neural network model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108681708A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110263630A (en) * | 2019-05-10 | 2019-09-20 | 中国地质大学(武汉) | A kind of aluminium flaw identification equipment of the convolutional neural networks based on Inception model |
CN110347851A (en) * | 2019-05-30 | 2019-10-18 | 中国地质大学(武汉) | Image search method and system based on convolutional neural networks |
CN111274924A (en) * | 2020-01-17 | 2020-06-12 | 厦门中控智慧信息技术有限公司 | Palm vein detection model modeling method, palm vein detection method and palm vein detection device |
CN112949780A (en) * | 2020-04-21 | 2021-06-11 | 佳都科技集团股份有限公司 | Feature model training method, device, equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105631482A (en) * | 2016-03-03 | 2016-06-01 | 中国民航大学 | Convolutional neural network model-based dangerous object image classification method |
CN105760841A (en) * | 2016-02-22 | 2016-07-13 | 桂林航天工业学院 | Identify recognition method and identify recognition system |
CN106529468A (en) * | 2016-11-07 | 2017-03-22 | 重庆工商大学 | Finger vein identification method and system based on convolutional neural network |
CN106971174A (en) * | 2017-04-24 | 2017-07-21 | 华南理工大学 | A kind of CNN models, CNN training methods and the vein identification method based on CNN |
- 2018-05-16: CN application CN201810468940.1A, publication CN108681708A (en), status active, Pending
Non-Patent Citations (1)
Title |
---|
JUEZHANANGLE: "Inception-v3 transfer learning (inception-v3迁移学习)", CSDN blog, blog.csdn.net/juezhangle/article/details/78747252 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110263630A (en) * | 2019-05-10 | 2019-09-20 | 中国地质大学(武汉) | A kind of aluminium flaw identification equipment of the convolutional neural networks based on Inception model |
CN110347851A (en) * | 2019-05-30 | 2019-10-18 | 中国地质大学(武汉) | Image search method and system based on convolutional neural networks |
CN111274924A (en) * | 2020-01-17 | 2020-06-12 | 厦门中控智慧信息技术有限公司 | Palm vein detection model modeling method, palm vein detection method and palm vein detection device |
CN112949780A (en) * | 2020-04-21 | 2021-06-11 | 佳都科技集团股份有限公司 | Feature model training method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108681708A (en) | Palm vein image recognition method, device and storage medium based on Inception neural network model | |
CN105320602B (en) | A kind of test method and device of application starting speed | |
CN110555050A (en) | heterogeneous network node representation learning method based on meta-path | |
WO2022121485A1 (en) | Image multi-tag classification method and apparatus, computer device, and storage medium | |
US10769208B2 (en) | Topical-based media content summarization system and method | |
CN104834713A (en) | Method and system for storing and transmitting image data of terminal equipment | |
CN107680053A (en) | A kind of fuzzy core Optimized Iterative initial value method of estimation based on deep learning classification | |
CN105635256B (en) | Multimedia synchronization method, device and system | |
CN109753884A (en) | A kind of video behavior recognition methods based on key-frame extraction | |
CN104079926A (en) | Video performance testing method for remote desktop software | |
CN109886283A (en) | A kind of plant image intelligent identifying system based on AR | |
US10929655B2 (en) | Portrait image evaluation based on aesthetics | |
WO2017010514A1 (en) | Image retrieval device and method, photograph time estimation device and method, iterative structure extraction device and method, and program | |
US8526673B2 (en) | Apparatus, system and method for recognizing objects in images using transmitted dictionary data | |
CN109753889A (en) | Service evaluation method, apparatus, computer equipment and storage medium | |
KR102299095B1 (en) | Method of searching and providing data of similar fashion goods and computing device therefor | |
CN116883740A (en) | Similar picture identification method, device, electronic equipment and storage medium | |
CN107277228A (en) | Storage device, mobile terminal and its image processing method | |
CN106652023B (en) | A kind of method and system of the extensive unordered quick exercise recovery structure of image | |
CN109598227A (en) | A kind of single image mobile phone source weight discrimination method based on deep learning | |
CN115171014A (en) | Video processing method and device, electronic equipment and computer readable storage medium | |
CN111726592A (en) | Method and apparatus for obtaining architecture of image signal processor | |
CN113129252A (en) | Image scoring method and electronic equipment | |
CN106296568A (en) | Determination method, device and the client of a kind of lens type | |
CN113516615B (en) | Sample generation method, system, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20181019 |