CN108197605A - Yak personal identification method based on deep learning - Google Patents
- Publication number
- CN108197605A (application CN201810094183.6A)
- Authority
- CN
- China
- Prior art keywords
- yak
- face
- picture
- feature vector
- similarity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/08—Insurance
Abstract
The invention discloses a deep-learning-based method for identifying individual yaks, comprising the following steps: S1, collect yak pictures and videos, and decode the videos into pictures; S2, perform classification regression and position regression on the pictures with the object-detection network Faster R-CNN to obtain the pixel position of the cattle face on each picture together with its confidence, then crop the obtained face position to extract the cattle face; S3, feed the cattle face obtained in step S2 into a feature-extraction network, perform feature extraction, and output the corresponding feature vector; S4, match the feature vector extracted in step S3 against the cattle-face feature vectors in a database, compute the face similarity, and output the database picture with the highest similarity to the query yak, completing the identification. The invention avoids the uncertainty of manually extracted features in traditional recognition methods and the limitations of ear-tag identification, effectively improves the efficiency of yak identification, and can reduce the risk of insurance fraud.
Description
Technical field
The invention belongs to the fields of deep-learning image processing and yak identification, and in particular relates to a deep-learning-based method for identifying individual yaks.
Background technology
With the progress of technology and the development of society, livestock insurance has become increasingly common. The yak, the major source of income for herdsmen in highland areas, is raised extensively there. However, herdsmen have little capacity to withstand natural disasters and epidemics, so yak breeding faces huge risks. Yak insurance is therefore being actively implemented in the yak-breeding industry, which in turn introduces the risk of insurance fraud, and such fraud is currently spreading in China.
Traditional yak identification is realized by attaching ear tags. This method has many disadvantages: first, ear tags are easy to reproduce and carry no biological distinctiveness; second, ear tags can be reused, so the recycling of a tag after a yak's death cannot be prevented. A more accurate and effective method for identifying individual yaks is therefore needed.
Invention content
The object of the invention is to overcome the uncertainty of manually extracted features and the limitations of ear-tag identification in existing recognition methods, and to provide a deep-learning-based yak identification method that effectively improves the efficiency of yak identification and reduces the risk of insurance fraud.
This object is achieved through the following technical solution. The deep-learning-based yak identification method includes the following steps:
S1, collect yak pictures and videos, and decode the videos into pictures;
S2, perform classification regression and position regression on the collected yak pictures with the object-detection network Faster R-CNN to obtain the pixel position of the cattle face on the picture and its confidence; then crop the obtained face position to extract the cattle face;
S3, feed the cattle face obtained in step S2 into the feature-extraction network, perform feature extraction, and output the corresponding feature vector;
S4, match the feature vector extracted in step S3 against the cattle-face feature vectors in the database, compute the face similarity, and output the database picture with the highest similarity to the query yak, completing the identification.
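Steps S1 to S4 amount to a detect, embed, and match pipeline. A minimal runnable sketch is given below; `detect_face` and `extract_features` are hypothetical stand-ins (a whole-picture box and a fixed random projection) for the trained Faster R-CNN detector and VGG16 feature network described later, not the actual models:

```python
import numpy as np

def detect_face(picture):
    # Stand-in for step S2: a trained Faster R-CNN would return the face
    # box and its confidence; here the whole picture is treated as the face.
    h, w = picture.shape[:2]
    return (0, 0, w, h), 1.0

def extract_features(face):
    # Stand-in for step S3: a trained VGG16 would output a 4096-d vector;
    # a fixed random projection keeps the sketch runnable and deterministic.
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((face.size, 4096))
    return face.reshape(-1) @ proj

def cos_sim(a, b):
    # Step S4 similarity: cosine of the angle between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(picture, database):
    # Steps S2-S4: detect and crop the face, embed it, match by similarity.
    (x1, y1, x2, y2), _conf = detect_face(picture)
    vec = extract_features(picture[y1:y2, x1:x2])
    return max(database, key=lambda yak_id: cos_sim(vec, database[yak_id]))
```

In use, `database` maps each enrolled yak's number to its stored feature vector, and `identify` returns the number with the highest cosine similarity, mirroring step S4.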
Further, step S2 includes the following sub-steps:
S21, normalize the picture to 224*224;
S22, pass it through 13 convolutional layers with 5 down-samplings to obtain 512 feature maps of size 14*14;
S23, process each feature map as follows: slide a convolution kernel of size 3*3 over the feature map, take each kernel center as a reference point, and around each reference point choose 3 different area sizes and 3 different aspect ratios, generating 9 candidate regions;
S24, remove candidate frames whose mapping back to the original picture exceeds the picture boundary;
S25, map the candidate regions onto the feature map of the last convolutional layer, use an ROI pooling layer to generate a fixed-size feature map for each candidate region, classify and position-refine the generated feature maps, and, combined with the picture size produced in step S1, compute the pixel position of the cattle face on the picture and its confidence;
S26, crop the obtained face position to extract the cattle face.
Further, step S3 includes the following sub-steps:
S31, resize the cattle-face picture to 224*224 and feed it into the feature-extraction network for feature extraction;
S32, the feature-extraction network outputs a 4096-dimensional feature vector through its convolutional, pooling and fully connected layers.
Further, step S4 includes the following sub-steps:
S41, compare the feature vector extracted in step S3 with the data in the cattle-face library and compute the similarity, where the similarity is calculated as the cosine of the angle between the two vectors:

cos θ = (a · b) / (‖a‖ ‖b‖)

where cos θ represents the cosine of the angle between the two feature vectors, and a and b represent the feature vector extracted in step S3 and a feature vector in the cattle-face database, respectively;
S42, output the cos θ value of the two yaks as the similarity, and output the number corresponding to the database yak with the greatest similarity to the yak to be identified as the identification result.
The beneficial effects of the invention are as follows: the invention performs cattle-face detection on yaks with a deep-learning method, automatically extracts face features with a convolutional neural network, and finally computes the similarity of two faces from the cosine of the angle between their feature vectors, outputting the yak number with the highest similarity as the matching result, which achieves a comparatively high accuracy. The method avoids the uncertainty of manually extracted features in traditional recognition methods and the limitations of ear-tag identification, effectively improves the efficiency of yak identification, and can reduce the risk of insurance fraud.
Description of the drawings
Fig. 1 is the flow chart of the deep-learning-based yak identification method of the invention;
Fig. 2 shows a collected cattle-face picture in this embodiment;
Fig. 3 shows the cropped cattle-face picture in this embodiment;
Fig. 4 shows the cattle-face identification output in this embodiment.
Specific embodiment
The technical solution of the invention is further illustrated below with reference to the accompanying drawings.
As shown in Fig. 1, the deep-learning-based yak identification method includes the following steps:
S1, collect yak pictures and videos, and decode the videos into pictures;
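A sketch of the frame decoding in step S1; the patent does not name a decoder or a sampling rate, so OpenCV (`cv2`) and the keep-every-n-th-frame policy are assumptions:

```python
def sample_indices(n_frames: int, every_n: int = 10) -> list:
    """Indices of the frames kept when sampling every n-th decoded frame."""
    return list(range(0, n_frames, every_n))

def decode_video(path: str, every_n: int = 10) -> list:
    """Step S1: decode a video file into a list of pictures (frames)."""
    import cv2  # assumed dependency; any frame grabber with read() works
    cap = cv2.VideoCapture(path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:              # end of stream or decode failure
            break
        if idx % every_n == 0:  # subsample to limit near-duplicate frames
            frames.append(frame)
        idx += 1
    cap.release()
    return frames
```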
S2, perform classification regression and position regression on the collected yak pictures with the object-detection network Faster R-CNN to obtain the pixel position of the cattle face on the picture and its confidence; then crop the obtained face position to extract the cattle face. This step specifically includes the following sub-steps:
S21, normalize the picture to 224*224;
S22, pass it through 13 convolutional layers with 5 down-samplings to obtain 512 feature maps of size 14*14;
S23, process each feature map as follows: slide a convolution kernel of size 3*3 over the feature map and apply an anchoring (anchor) mechanism, i.e., take each kernel center as a reference point and, around each reference point, choose 3 different area sizes (128, 256 and 512, corresponding on the feature map to 3, 6 and 12, respectively) and 3 different aspect ratios (1:1, 1:2 and 2:1), generating 9 candidate regions;
S24, remove candidate frames whose mapping back to the original picture exceeds the picture boundary;
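Sub-steps S23 and S24 follow the standard region-proposal anchor scheme: 3 areas times 3 aspect ratios give 9 candidates per reference point, and candidates crossing the picture boundary are discarded. A sketch using the scales and ratios stated in the text (anchor coordinates are in original-picture pixels):

```python
import math

def make_anchors(cx, cy, scales=(128, 256, 512), ratios=(1.0, 0.5, 2.0)):
    """Sub-step S23: 9 anchor boxes (x1, y1, x2, y2) around one point."""
    boxes = []
    for s in scales:
        for r in ratios:
            # hold the area at s*s while varying the width:height ratio r
            w = s * math.sqrt(r)
            h = s / math.sqrt(r)
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return boxes

def drop_cross_boundary(boxes, width, height):
    """Sub-step S24: remove anchors that exceed the picture boundary."""
    return [(x1, y1, x2, y2) for (x1, y1, x2, y2) in boxes
            if x1 >= 0 and y1 >= 0 and x2 <= width and y2 <= height]
```

For a reference point at the centre of a 224*224 picture, only the three 128-scale anchors survive the boundary filter; the 256- and 512-scale anchors all spill outside the picture.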
S25, map the candidate regions onto the feature map of the last convolutional layer, use an ROI pooling layer to generate a fixed-size feature map for each candidate region, classify and position-refine the generated feature maps, and, combined with the picture size produced in step S1, compute the pixel position of the cattle face on the picture and its confidence;
S26, crop the obtained face position to extract the cattle face.
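Sub-step S26 is a plain array crop of the detected pixel position; a sketch assuming the detector outputs the box as (x1, y1, x2, y2) in pixel coordinates:

```python
import numpy as np

def crop_face(picture: np.ndarray, box) -> np.ndarray:
    """Sub-step S26: cut the detected face region, clamped to the picture."""
    x1, y1, x2, y2 = (int(round(v)) for v in box)
    h, w = picture.shape[:2]
    x1, y1 = max(0, x1), max(0, y1)  # clamp to the top-left corner
    x2, y2 = min(w, x2), min(h, y2)  # clamp to the bottom-right corner
    return picture[y1:y2, x1:x2]
```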
S3, feed the cattle face obtained in step S2 into the feature-extraction network, perform feature extraction, and output the corresponding feature vector. This step includes the following sub-steps:
S31, resize the cattle-face picture to 224*224 and feed it into the feature-extraction network for feature extraction;
S32, the feature-extraction network outputs a 4096-dimensional feature vector through its convolutional, pooling and fully connected layers.
S4, match the feature vector extracted in step S3 against the cattle-face feature vectors in the database, compute the face similarity, and output the database picture with the highest similarity to the query yak, completing the identification. This step specifically includes the following sub-steps:
S41, compare the feature vector extracted in step S3 with the data in the cattle-face library and compute the similarity, where the similarity is calculated as the cosine of the angle between the two vectors:

cos θ = (a · b) / (‖a‖ ‖b‖)

where cos θ represents the cosine of the angle between the two feature vectors, and a and b represent the feature vector extracted in step S3 and a feature vector in the cattle-face database, respectively;
S42, output the cos θ value of the two yaks as the similarity, and output the number corresponding to the database yak with the greatest similarity to the yak to be identified as the identification result.
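Sub-steps S41 and S42 reduce to a nearest-neighbour search under cosine similarity; a minimal sketch:

```python
import numpy as np

def cos_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Sub-step S41: cos(theta) = a.b / (|a| |b|)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(query: np.ndarray, database: dict):
    """Sub-step S42: number and similarity of the most similar yak."""
    number = max(database, key=lambda k: cos_similarity(query, database[k]))
    return number, cos_similarity(query, database[number])
```

Because cosine similarity depends only on the angle between the vectors, the match is insensitive to the overall magnitude of the 4096-dimensional features.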
Application example:
1. Sample preparation: 3988 yak pictures were collected in this embodiment, of which 2900 were randomly selected as model-training samples. The positions of all cattle faces in the pictures were marked with an annotation tool to produce the object-detection samples.
2. Training stage:
(1) The object-detection model Faster R-CNN based on the deep convolutional network VGG16-Net was designed and trained on the 2900 selected pictures for 60000 iterations in total; Faster R-CNN is a common object-detection model in the art, so its specific training process is not repeated here.
(2) The deep convolutional network VGG16-Net was designed and trained on the cropped cattle faces for 100000 iterations in total.
3. Test stage:
(1) A yak database containing 315 yaks was built, and 455 yak samples were randomly picked for accuracy and similarity testing;
(2) The 455 yak samples were fed into the trained Faster R-CNN network model, which output the detected cattle-face pictures, as shown in Fig. 2. According to the face coordinates output by the detector, the corresponding cattle faces were cropped, as shown in Fig. 3;
(3) The cropped cattle faces were fed into the cattle-face feature-extraction model, which output 4096-dimensional feature vectors;
(4) The feature vectors output in (3) were compared for similarity with the yak data in the database, and the number of the most similar yak was output as the recognition result; Fig. 4 shows a test picture and the database face picture with the highest similarity (0.923420). This test comprised 416 samples in total, of which 31 yak pictures were identified incorrectly, giving an identification accuracy of 0.92548.
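The reported accuracy follows directly from the error count: 31 errors among 416 test samples give 1 - 31/416 = 385/416, about 0.92548:

```python
errors, total = 31, 416
accuracy = 1 - errors / total
print(round(accuracy, 5))  # 0.92548
```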
In summary, the invention first performs cattle-face detection on yaks with a deep-learning method, then automatically extracts face features with a convolutional neural network, and finally computes the similarity of two faces from the cosine of the angle between their feature vectors, outputting the yak number with the highest similarity as the matching result, which achieves a comparatively high accuracy. The method avoids the uncertainty of manually extracted features in traditional recognition methods and the limitations of ear-tag identification, and effectively improves the efficiency of yak identification.
Those of ordinary skill in the art will understand that the embodiments described herein are intended to help the reader understand the principle of the invention, and the protection scope of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art can, according to the technical teachings disclosed herein, make various other specific variations and combinations that do not depart from the essence of the invention, and these variations and combinations remain within the protection scope of the invention.
Claims (4)
1. A deep-learning-based yak identification method, characterized by comprising the following steps:
S1, collecting yak pictures and videos, and decoding the videos into pictures;
S2, performing classification regression and position regression on the collected yak pictures with the object-detection network Faster R-CNN to obtain the pixel position of the cattle face on the picture and its confidence; then cropping the obtained face position to extract the cattle face;
S3, feeding the cattle face obtained in step S2 into a feature-extraction network, performing feature extraction, and outputting the corresponding feature vector;
S4, matching the feature vector extracted in step S3 against the cattle-face feature vectors in a database, computing the face similarity, and outputting the database picture with the highest similarity to the query yak, completing the identification.
2. The deep-learning-based yak identification method according to claim 1, characterized in that step S2 comprises the following sub-steps:
S21, normalizing the picture to 224*224;
S22, passing it through 13 convolutional layers with 5 down-samplings to obtain 512 feature maps of size 14*14;
S23, processing each feature map as follows: sliding a convolution kernel of size 3*3 over the feature map, taking each kernel center as a reference point, and choosing 3 different area sizes and 3 different aspect ratios around each reference point to generate 9 candidate regions;
S24, removing candidate frames whose mapping back to the original picture exceeds the picture boundary;
S25, mapping the candidate regions onto the feature map of the last convolutional layer, generating a fixed-size feature map for each candidate region with an ROI pooling layer, classifying and position-refining the generated feature maps, and computing, in combination with the picture size produced in step S1, the pixel position of the cattle face on the picture and its confidence;
S26, cropping the obtained face position to extract the cattle face.
3. The deep-learning-based yak identification method according to claim 2, characterized in that step S3 comprises the following sub-steps:
S31, resizing the cattle-face picture to 224*224 and feeding it into the feature-extraction network for feature extraction;
S32, the feature-extraction network outputting a 4096-dimensional feature vector through its convolutional, pooling and fully connected layers.
4. The deep-learning-based yak identification method according to claim 3, characterized in that step S4 comprises the following sub-steps:
S41, comparing the feature vector extracted in step S3 with the data in the cattle-face library and computing the similarity, the similarity being calculated as the cosine of the angle between the two vectors:

cos θ = (a · b) / (‖a‖ ‖b‖)

where cos θ represents the cosine of the angle between the two feature vectors, and a and b represent the feature vector extracted in step S3 and a feature vector in the cattle-face database, respectively;
S42, outputting the cos θ value of the two yaks as the similarity, and outputting the number corresponding to the database yak with the greatest similarity to the yak to be identified as the identification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810094183.6A CN108197605A (en) | 2018-01-31 | 2018-01-31 | Yak personal identification method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810094183.6A CN108197605A (en) | 2018-01-31 | 2018-01-31 | Yak personal identification method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108197605A true CN108197605A (en) | 2018-06-22 |
Family
ID=62591464
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810094183.6A Pending CN108197605A (en) | 2018-01-31 | 2018-01-31 | Yak personal identification method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108197605A (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109063589A (en) * | 2018-07-12 | 2018-12-21 | 杭州电子科技大学 | Instrument and equipment on-line monitoring method neural network based and system |
CN109190477A (en) * | 2018-08-02 | 2019-01-11 | 平安科技(深圳)有限公司 | Settlement of insurance claim method, apparatus, computer equipment and storage medium based on the identification of ox face |
CN109508907A (en) * | 2018-12-24 | 2019-03-22 | 中国科学院合肥物质科学研究院 | Milk cow body condition intelligent scoring system based on deep learning and long-distance video |
CN110795980A (en) * | 2019-05-10 | 2020-02-14 | 深圳市睿策者科技有限公司 | Network video-based evasion identification method, equipment, storage medium and device |
CN111291683A (en) * | 2020-02-08 | 2020-06-16 | 内蒙古大学 | Dairy cow individual identification system based on deep learning and identification method thereof |
CN111368766A (en) * | 2020-03-09 | 2020-07-03 | 云南安华防灾减灾科技有限责任公司 | Cattle face detection and identification method based on deep learning |
CN111881906A (en) * | 2020-06-18 | 2020-11-03 | 广州万维创新科技有限公司 | LOGO identification method based on attention mechanism image retrieval |
CN112001324A (en) * | 2020-08-25 | 2020-11-27 | 北京影谱科技股份有限公司 | Method, device and equipment for identifying actions of players of basketball game video |
CN112101333A (en) * | 2020-11-23 | 2020-12-18 | 四川圣点世纪科技有限公司 | Smart cattle farm monitoring and identifying method and device based on deep learning |
CN112183332A (en) * | 2020-09-28 | 2021-01-05 | 成都希盟泰克科技发展有限公司 | Yak face identification method based on transfer learning |
CN112766404A (en) * | 2021-01-29 | 2021-05-07 | 安徽工大信息技术有限公司 | Chinese mitten crab authenticity identification method and system based on deep learning |
CN113780207A (en) * | 2021-09-16 | 2021-12-10 | 中国农业科学院草原研究所 | System and method for goat face recognition |
CN114283366A (en) * | 2021-12-24 | 2022-04-05 | 东北农业大学 | Method and device for identifying individual identity of dairy cow and storage medium |
CN115457593A (en) * | 2022-07-26 | 2022-12-09 | 南京清湛人工智能研究院有限公司 | Cow face identification method, system, storage medium and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160078283A1 (en) * | 2014-09-16 | 2016-03-17 | Samsung Electronics Co., Ltd. | Method of extracting feature of input image based on example pyramid, and facial recognition apparatus |
CN106845432A (en) * | 2017-02-07 | 2017-06-13 | 深圳市深网视界科技有限公司 | The method and apparatus that a kind of face is detected jointly with human body |
CN107066990A (en) * | 2017-05-04 | 2017-08-18 | 厦门美图之家科技有限公司 | A kind of method for tracking target and mobile device |
CN107292298A (en) * | 2017-08-09 | 2017-10-24 | 北方民族大学 | Ox face recognition method based on convolutional neural networks and sorter model |
Non-Patent Citations (1)
Title |
---|
董兰芳 et al.: "基于Faster R-CNN的人脸检测方法" ("Face detection method based on Faster R-CNN"), 《计算机系统应用》 (Computer Systems & Applications) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180622 |