CN106384083A - Automatic face expression identification and information recommendation method - Google Patents
Automatic face expression identification and information recommendation method
- Publication number
- CN106384083A CN106384083A CN201610789988.3A CN201610789988A CN106384083A CN 106384083 A CN106384083 A CN 106384083A CN 201610789988 A CN201610789988 A CN 201610789988A CN 106384083 A CN106384083 A CN 106384083A
- Authority
- CN
- China
- Prior art keywords
- facial
- feature points
- key feature
- face
- expression
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Abstract
The invention relates to an automatic facial expression recognition and information recommendation method. The method comprises the following steps: (1) face detection is carried out, the face position is identified, and the key feature points of the face are located; (2) facial expression features are extracted from the key feature points of the face, yielding feature values for the key feature points; (3) the facial expression is classified according to these feature values, and a facial expression coding sequence is determined; (4) the facial expression coding sequence is analyzed to obtain a facial expression label; and (5) information content corresponding to the facial expression label is obtained from a preset information recommendation library and pushed to the user, the library containing a recommendation list mapping facial expression labels to information content. Compared with the prior art, the method is simple and convenient, recognition accuracy is high, and the recommended information is accurate.
Description
Technical field
The present invention relates to a method of human-computer interaction, and in particular to a method of automatic facial expression recognition and information recommendation.
Background technology
Human-computer interaction (HCI) technology has long been a topic of significant attention. For the past few decades, HCI has usually relied on legacy interface devices such as the keyboard and mouse, emphasizing the transmission of explicit messages while ignoring implicit information about the user, such as changes in affective state. In recent years, to meet the demand for intelligent, user-friendly interaction that is no longer limited to traditional interface devices, computer vision (CV) has become an important component of human-computer interaction technology.
Automatic facial expression recognition (AFER) is an emerging research topic in computer vision. Its goal is to enable a computer to automatically extract expression information from image sequences of the face and thereby analyze a person's emotion. Emotion is an important part of everyday human communication, and a person's expression largely reflects that emotion: for example, we may raise our eyebrows to signal the importance of verbal information, or express reluctance by frowning and lowering the corners of the mouth. This emotion information enables many practical applications. It may give rise to a brand-new mode of human-computer interaction; in psychology, it can help experts analyze a person's facial expression sequences more effectively; in education, parents and teachers can better understand a child through his or her emotional changes and thus adopt an appropriate educational approach. Automatic facial expression recognition therefore has significant research and application value.
Summary of the invention
The purpose of the present invention is to overcome the defects of the prior art described above by providing a method of automatic facial expression recognition and information recommendation.
The purpose of the present invention can be achieved through the following technical solutions:
A method of automatic facial expression recognition, comprising the following steps:
(1) performing face detection, identifying the face position, and locating the key feature points of the face;
(2) extracting facial expression features from the key feature points of the face to obtain feature values of the key feature points;
(3) classifying the facial expression according to the feature values of the key feature points and determining a facial expression coding sequence;
(4) analyzing the facial expression coding sequence to obtain a facial expression label.
In step (1), the key feature points of the face are located by a discriminative facial deformation model with a cascaded linear-regression-classifier update strategy.
Step (2) specifically comprises framing the key-feature-point region images of the face and applying the uniform local binary patterns algorithm to each region image to determine the feature values of the key feature points.
In step (3), the feature values of the key feature points are input to a pre-trained support vector machine model, which identifies for each key feature point its corresponding facial expression code in a facial expression coding system; the facial expression codes of all key feature points are then combined into a facial expression coding sequence.
In step (4), a hidden Markov model analyzes the facial expression coding sequence to obtain the corresponding facial expression label; the labels include fear, sadness, happiness, anger, disgust, surprise, and neutral.
A method of information recommendation, which, after performing automatic facial expression recognition with the above method, continues with the following step: obtaining, from a preset information recommendation library, information content corresponding to the facial expression label and pushing it to the user, the recommendation library containing a recommendation list mapping facial expression labels to information content.
Compared with the prior art, the invention has the following advantages:
(1) The invention uses the uniform local binary patterns algorithm to determine the feature values of the facial key feature points and recognizes them with a support vector machine model, so recognition is fast and accurate;
(2) The invention performs information recommendation based on facial expression labels, with a high degree of intelligence and accurate recommendations.
Brief description of the drawings
Fig. 1 is a flow block diagram of the automatic facial expression recognition and information recommendation method of the present invention;
Fig. 2 is a structural schematic of the key feature points of a human face;
Fig. 3 is a calculation example of the uniform local binary patterns algorithm.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment.
Embodiment
As shown in Fig. 1, a method of automatic facial expression recognition comprises the following steps:
Step 1: perform face detection, identify the face position, and locate the key feature points of the face;
Step 2: extract facial expression features from the key feature points to obtain the feature values of the key feature points;
Step 3: classify the facial expression according to the feature values of the key feature points and determine a facial expression coding sequence;
Step 4: analyze the facial expression coding sequence to obtain a facial expression label.
In step 1, the key feature points of the face are located by a discriminative facial deformation model with a cascaded linear-regression-classifier update strategy.
A method of information recommendation uses the above automatic facial expression recognition method and, after automatic recognition, continues with step 5:
Step 5: obtain, from a preset information recommendation library, information content corresponding to the facial expression label and push it to the user; the recommendation library contains a recommendation list mapping facial expression labels to information content.
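The step-5 lookup can be sketched as a simple mapping from expression labels to content lists. The label names and sample content below are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of the step-5 lookup: a preset recommendation library
# maps each facial expression label to a list of content items.
# All labels and content entries here are illustrative placeholders.

RECOMMENDATION_LIBRARY = {
    "happiness": ["upbeat song A", "comedy clip B"],
    "sadness":   ["healing song C", "calm playlist D"],
    "neutral":   ["daily news digest"],
}

def recommend(expression_label, library=RECOMMENDATION_LIBRARY):
    """Return the content list for a label, or an empty list if absent."""
    return library.get(expression_label, [])

print(recommend("sadness"))  # content pushed to the user
print(recommend("anger"))    # label missing from the library -> []
```
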
Step 2 specifically comprises framing the key-feature-point region images of the face and applying the uniform local binary patterns algorithm to each region image to determine the feature values of the key feature points.
In step 3, the feature values of the key feature points are input to a pre-trained support vector machine model, which identifies for each key feature point its corresponding facial expression code in a facial expression coding system; the facial expression codes of all key feature points are then combined into a facial expression coding sequence.
In step 4, a hidden Markov model analyzes the facial expression coding sequence to obtain the corresponding facial expression label; the labels include fear, sadness, happiness, anger, disgust, surprise, and neutral.
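The patent does not disclose its HMM parameters, but one common realization of this kind of step-4 analysis is to train one discrete HMM per expression label and pick the label whose model assigns the coding sequence the highest likelihood, computed with the forward algorithm. The sketch below follows that scheme; the two toy models, their probabilities, and the 3-symbol code alphabet are invented for illustration.

```python
# Toy sketch of step 4: score a facial expression coding sequence with a
# discrete HMM (forward algorithm) and pick the best-scoring label.
# All probabilities below are made-up illustrative values.

def forward_likelihood(obs, pi, A, B):
    """P(obs | model) for a discrete HMM.
    pi: initial state probs, A: state transition matrix, B: emission matrix."""
    n_states = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[p] * A[p][s] for p in range(n_states)) * B[s][o]
            for s in range(n_states)
        ]
    return sum(alpha)

# Two hypothetical 2-state models over a 3-symbol expression-code alphabet.
models = {
    "happiness": ([0.6, 0.4],
                  [[0.7, 0.3], [0.4, 0.6]],
                  [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]]),
    "sadness":   ([0.5, 0.5],
                  [[0.6, 0.4], [0.3, 0.7]],
                  [[0.1, 0.2, 0.7], [0.2, 0.1, 0.7]]),
}

sequence = [0, 1, 0, 0]  # a facial expression coding sequence (symbol ids)
label = max(models, key=lambda m: forward_likelihood(sequence, *models[m]))
print(label)
```
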
Locating the facial key feature points in step 1 is the preprocessing stage of the facial expression recognition system: the position of the face is identified in the picture and the key feature points of the face are located. Fig. 2 shows a structural schematic of the key feature points of a human face.
Facial expression feature extraction in step 2 is the most important stage of a facial expression recognition system. Its main purpose is to extract, from the gray values of the facial pixels, features that are more compact and robust for machine learning; it is the key to the success or failure of facial expression recognition. According to the data processed, feature extraction divides into two classes: extraction from static images and extraction from dynamic images. Static-image feature extraction operates on a single still picture, while dynamic-image feature extraction operates on a sequence of pictures and can therefore also capture the motion features of the face.
Facial expression classification in step 3 uses the expression features and a pattern recognition algorithm to perform the classification task. Depending on the features extracted and on whether temporal data are introduced, different models may be used. Among them, the support vector machine (SVM) model is widely used for expression classification; by choosing a suitable kernel function, an SVM can achieve good classification results. Besides the choice of model, the target of expression classification also divides into two kinds. The first kind of target is the expression prototype, typically the six prototypes: fear, sadness, happiness, anger, disgust, and surprise. The second kind of target is a sequence of action units (AUs). An AU is a concept from the Facial Action Coding System (FACS), which defines 55 AUs to describe the facial actions caused by facial muscle movements; for example, AU1 denotes raising the inner eyebrow and AU2 denotes raising the outer eyebrow.
The local binary pattern (LBP) is a nonparametric operator describing the local spatial structure of an image; it reflects the relationship between each pixel and its surrounding pixels. T. Ojala et al. of the University of Oulu, Finland, proposed the operator in 1996 for analyzing image texture features and demonstrated its strong discriminative power in texture classification. After continual improvement and optimization, the uniform local binary patterns method is now the more commonly used variant.
The uniform local binary patterns feature extraction algorithm proceeds as follows:
(1) Convert the target picture to grayscale, that is, convert the 3-channel color picture to a single-channel gray-scale map;
(2) Compute the LBP value of each pixel. A simple example demonstrates the calculation:
As shown in Fig. 3, Fig. 3(a) shows the gray value of each pixel in a 3*3 local region. The gray value of the center pixel is compared with the gray values of the 8 surrounding pixels: if a surrounding pixel's gray value is greater than or equal to the center pixel's, that position is recorded as 1, otherwise as 0, yielding the thresholded map of Fig. 3(b). Concatenating the 8 comparison results counter-clockwise gives the binary string "11111000", which converts to 248 in decimal (8+16+32+64+128=248). This 248 is the LBP value of the center pixel.
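The per-pixel computation above fits in a few lines. The sketch below uses the same comparison rule (neighbor >= center becomes 1) and reads the neighbors counter-clockwise; the concrete 3x3 gray values and the starting neighbor position are illustrative assumptions, not the values from Fig. 3.

```python
# Sketch of the per-pixel LBP computation from the worked example:
# threshold the 8 neighbors of the center against the center's gray value
# (neighbor >= center -> bit 1), concatenate the bits counter-clockwise,
# and read the 8-bit string as a decimal LBP value.

def lbp_3x3(patch):
    """patch: 3x3 list of gray values; returns the center pixel's LBP value."""
    center = patch[1][1]
    # (row, col) neighbor coordinates in counter-clockwise order,
    # starting at the top-left corner (an assumed starting point)
    order = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
    bits = "".join("1" if patch[r][c] >= center else "0" for r, c in order)
    return int(bits, 2)

patch = [[200,  40,  50],
         [210,  90,  60],
         [220, 150, 100]]
print(lbp_3x3(patch))  # -> 248, the same value as in the worked example
```
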
(3) Perform uniform pattern screening on the LBP values from step (2). A so-called uniform pattern treats the binary string as circular (its end joined to its start): if the string makes at most 2 transitions from 0 to 1 or from 1 to 0, it is uniform; otherwise it is not. Each pixel's LBP value is an 8-bit binary string, and among the 8-bit strings there are 58 uniform ones. All LBP values can therefore be divided into 59 classes: each uniform string is its own class, and all remaining non-uniform strings form one class. This classification is adopted because researchers found, after extensive experiments, that most LBP values in real pictures are uniform patterns, accounting for more than 90%, so the classification removes redundancy and reduces dimensionality.
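The uniform-pattern screening above can be verified directly in code: counting circular bit transitions confirms that exactly 58 of the 256 possible 8-bit patterns are uniform, which gives the 59 classes (58 uniform bins plus one shared non-uniform bin).

```python
# Sketch of the uniform-pattern screening: a circular 8-bit string is
# "uniform" if it has at most two 0->1 / 1->0 transitions. The 58 uniform
# patterns each get their own bin; all non-uniform values share bin 58,
# giving the 59 classes mentioned in the text.

def is_uniform(value):
    """True if the circular 8-bit pattern has at most 2 bit transitions."""
    bits = f"{value:08b}"
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

uniform_values = [v for v in range(256) if is_uniform(v)]
print(len(uniform_values))  # 58 uniform patterns

# Map every LBP value to one of 59 bins (58 uniform + 1 shared non-uniform).
bin_of = {v: i for i, v in enumerate(uniform_values)}

def lbp_bin(value):
    return bin_of.get(value, 58)
```
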
(4) Split the picture: first divide the picture into several (e.g. 5*5) subregions, and compute for each subregion a histogram of the LBP values of all its pixels; this histogram is a 59-dimensional vector. Concatenating the histograms of all subregions yields the LBP feature of the whole picture, a 5*5*59=1475-dimensional feature vector. Picture splitting and histogram statistics are used because, when judging picture similarity, they avoid the problem that pictures are not perfectly aligned within a given range.
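The splitting-and-histogram step can be sketched end to end: given a 2D array of per-pixel LBP values, split it into a 5x5 grid, build a 59-bin uniform-LBP histogram per cell, and concatenate. The random 50x50 "image" below stands in for a real LBP-coded face picture; it is an illustrative input only.

```python
# Sketch of the picture-splitting step: 5x5 regions, a 59-bin histogram
# per region, concatenated into a 5*5*59 = 1475-dimensional feature vector.
import random

def uniform_bins():
    """Bin index for each uniform 8-bit pattern (58 uniform bins)."""
    def transitions(v):
        b = f"{v:08b}"
        return sum(b[i] != b[(i + 1) % 8] for i in range(8))
    uniform = [v for v in range(256) if transitions(v) <= 2]
    return {v: i for i, v in enumerate(uniform)}

BIN = uniform_bins()

def lbp_feature(lbp_image, grid=5):
    """Concatenated per-region uniform-LBP histograms of an LBP-value image."""
    h, w = len(lbp_image), len(lbp_image[0])
    feature = []
    for gy in range(grid):
        for gx in range(grid):
            hist = [0] * 59  # 58 uniform bins + 1 non-uniform bin
            for y in range(gy * h // grid, (gy + 1) * h // grid):
                for x in range(gx * w // grid, (gx + 1) * w // grid):
                    hist[BIN.get(lbp_image[y][x], 58)] += 1
            feature.extend(hist)
    return feature

random.seed(0)
img = [[random.randrange(256) for _ in range(50)] for _ in range(50)]
vec = lbp_feature(img)
print(len(vec))  # 5 * 5 * 59 = 1475
```
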
Content is recommended according to expression. Expression recognition can drive recommendation systems for many forms of content, such as articles, music, and video. The present embodiment implements a music recommendation system based on expression recognition. The music comes from the Douban music API. Douban music has a powerful tag system with corresponding tags for every song, which greatly facilitates our music recommendation. Music recommendation includes two modes:
In like/dislike mode, the system mainly infers the tag ranking of the music the user likes after the user has listened to several songs and each song has been scored (according to expression). While the user listens, the program captures the user's expression. A status bar is kept for the playing song: when the user shows a liked expression, the status value increases; when the user shows a disgusted expression, it decreases. When the song is switched (manually, or automatically when the status value bottoms out), a corresponding score is assigned to the song's tags. Afterwards the system can choose to play songs whose tags score highly.
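The like/dislike mode just described can be sketched as a small scorer: a status value per song rises and falls with observed expressions, and on song switch the final value is credited to the song's tags. The expression names, tags, and unit increments below are illustrative assumptions.

```python
# Sketch of the like/dislike mode: accumulate a per-song status value from
# observed expressions, credit it to the song's tags on switch, and rank tags.

class LikeDislikeRecommender:
    def __init__(self):
        self.tag_scores = {}  # tag -> accumulated score
        self.status = 0       # status bar for the current song

    def observe(self, expression):
        """Update the status bar from one captured expression."""
        if expression == "happiness":
            self.status += 1
        elif expression == "disgust":
            self.status -= 1

    def switch_song(self, tags):
        """Credit the final status value to the finished song's tags."""
        for tag in tags:
            self.tag_scores[tag] = self.tag_scores.get(tag, 0) + self.status
        self.status = 0

    def best_tag(self):
        return max(self.tag_scores, key=self.tag_scores.get)

rec = LikeDislikeRecommender()
for expr in ["happiness", "happiness", "disgust"]:
    rec.observe(expr)
rec.switch_song(["pop", "upbeat"])  # net status +1 credited to these tags
for expr in ["disgust", "disgust"]:
    rec.observe(expr)
rec.switch_song(["metal"])          # net status -2 credited
print(rec.best_tag())
```
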
In emotion mode, songs of a matching type are mainly recommended according to the user's mood: the user's expression is used to infer what kind of song the user wants to hear, e.g. healing songs when the user is sad.
Claims (6)
1. A method of automatic facial expression recognition, characterized in that the method comprises the following steps:
(1) performing face detection, identifying the face position, and locating key feature points of the face;
(2) extracting facial expression features from the key feature points of the face to obtain feature values of the key feature points;
(3) classifying the facial expression according to the feature values of the key feature points and determining a facial expression coding sequence;
(4) analyzing the facial expression coding sequence to obtain a facial expression label.
2. The method of automatic facial expression recognition according to claim 1, characterized in that in step (1) the facial key feature points are located by a discriminative facial deformation model with a cascaded linear-regression-classifier update strategy.
3. The method of automatic facial expression recognition according to claim 1, characterized in that step (2) specifically comprises framing the key-feature-point region images of the face and applying the uniform local binary patterns algorithm to each region image to determine the feature values of the key feature points.
4. The method of automatic facial expression recognition according to claim 1, characterized in that in step (3) the feature values of the key feature points are input to a pre-trained support vector machine model, which identifies for each key feature point its corresponding facial expression code in a facial expression coding system; the facial expression codes of all key feature points are then combined into a facial expression coding sequence.
5. The method of automatic facial expression recognition according to claim 1, characterized in that in step (4) a hidden Markov model analyzes the facial expression coding sequence to obtain the corresponding facial expression label, the labels including fear, sadness, happiness, anger, disgust, surprise, and neutral.
6. A method of information recommendation, characterized in that, after performing automatic facial expression recognition using the method of claim 1, the method continues with the following step: obtaining, from a preset information recommendation library, information content corresponding to the facial expression label and pushing it to the user, the recommendation library containing a recommendation list mapping facial expression labels to information content.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610789988.3A CN106384083A (en) | 2016-08-31 | 2016-08-31 | Automatic face expression identification and information recommendation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106384083A true CN106384083A (en) | 2017-02-08 |
Family
ID=57938758
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610789988.3A Pending CN106384083A (en) | 2016-08-31 | 2016-08-31 | Automatic face expression identification and information recommendation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106384083A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101877056A (en) * | 2009-12-21 | 2010-11-03 | 北京中星微电子有限公司 | Facial expression recognition method and system, and training method and system of expression classifier |
CN103065122A (en) * | 2012-12-21 | 2013-04-24 | 西北工业大学 | Facial expression recognition method based on facial motion unit combination features |
CN104376333A (en) * | 2014-09-25 | 2015-02-25 | 电子科技大学 | Facial expression recognition method based on random forests |
Non-Patent Citations (4)
Title |
---|
M.F. VALSTAR et al.: "Combined Support Vector Machines and Hidden Markov Models for Modeling Facial Action Temporal Dynamics", Human-Computer Interaction *
SHI YI: "Research on facial expression recognition based on active appearance models", China Masters' Theses Full-text Database *
GU WENJUAN: "Research on facial expression recognition based on infrared thermal imaging", China Masters' Theses Full-text Database *
ZHAO HUI et al.: "A survey of automatic facial action unit recognition", Journal of Computer-Aided Design & Computer Graphics *
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107169427A (en) * | 2017-04-27 | 2017-09-15 | 深圳信息职业技术学院 | One kind is applied to psychologic face recognition method and device |
CN107169427B (en) * | 2017-04-27 | 2020-03-17 | 深圳信息职业技术学院 | Face recognition method and device suitable for psychology |
CN110998598A (en) * | 2017-06-30 | 2020-04-10 | 挪威科技大学 | Detection of manipulated images |
CN107688639A (en) * | 2017-08-24 | 2018-02-13 | 努比亚技术有限公司 | Using recommendation method, server and computer-readable recording medium |
US11369297B2 (en) | 2018-01-04 | 2022-06-28 | Microsoft Technology Licensing, Llc | Providing emotional care in a session |
WO2019134091A1 (en) * | 2018-01-04 | 2019-07-11 | Microsoft Technology Licensing, Llc | Providing emotional care in a session |
CN108197593A (en) * | 2018-01-23 | 2018-06-22 | 深圳极视角科技有限公司 | More size face's expression recognition methods and device based on three-point positioning method |
CN108197593B (en) * | 2018-01-23 | 2022-02-18 | 深圳极视角科技有限公司 | Multi-size facial expression recognition method and device based on three-point positioning method |
CN110084657A (en) * | 2018-01-25 | 2019-08-02 | 北京京东尚科信息技术有限公司 | A kind of method and apparatus for recommending dress ornament |
CN108564016A (en) * | 2018-04-04 | 2018-09-21 | 北京红云智胜科技有限公司 | A kind of AU categorizing systems based on computer vision and method |
CN109271549A (en) * | 2018-09-30 | 2019-01-25 | 百度在线网络技术(北京)有限公司 | Song recommendations method, apparatus, terminal and computer readable storage medium |
CN109766765A (en) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | Audio data method for pushing, device, computer equipment and storage medium |
CN109766767A (en) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | Behavioral data method for pushing, device, computer equipment and storage medium |
WO2020125217A1 (en) * | 2018-12-18 | 2020-06-25 | 深圳云天励飞技术有限公司 | Expression recognition method and apparatus and recommendation method and apparatus |
CN109640119A (en) * | 2019-02-21 | 2019-04-16 | 百度在线网络技术(北京)有限公司 | Method and apparatus for pushed information |
CN109640119B (en) * | 2019-02-21 | 2021-06-11 | 百度在线网络技术(北京)有限公司 | Method and device for pushing information |
CN112104914A (en) * | 2019-06-18 | 2020-12-18 | 中国移动通信集团浙江有限公司 | Video recommendation method and device |
CN112104914B (en) * | 2019-06-18 | 2022-09-13 | 中国移动通信集团浙江有限公司 | Video recommendation method and device |
CN111144266A (en) * | 2019-12-20 | 2020-05-12 | 北京达佳互联信息技术有限公司 | Facial expression recognition method and device |
CN111144266B (en) * | 2019-12-20 | 2022-11-22 | 北京达佳互联信息技术有限公司 | Facial expression recognition method and device |
CN111428666A (en) * | 2020-03-31 | 2020-07-17 | 齐鲁工业大学 | Intelligent family accompanying robot system and method based on rapid face detection |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106384083A (en) | Automatic face expression identification and information recommendation method | |
CN111897908B (en) | Event extraction method and system integrating dependency information and pre-training language model | |
CN106096557B (en) | A kind of semi-supervised learning facial expression recognizing method based on fuzzy training sample | |
CN107273295B (en) | Software problem report classification method based on text chaos | |
CN112085012A (en) | Project name and category identification method and device | |
Wallraven et al. | Categorizing art: Comparing humans and computers | |
Basnin et al. | An integrated CNN-LSTM model for micro hand gesture recognition | |
CN106934055B (en) | Semi-supervised webpage automatic classification method based on insufficient modal information | |
CN109086351B (en) | Method for acquiring user tag and user tag system | |
Dey et al. | A two-stage CNN-based hand-drawn electrical and electronic circuit component recognition system | |
CN114818710A (en) | Form information extraction method, device, equipment and medium | |
Iqbal et al. | Classifier comparison for MSER-based text classification in scene images | |
Kındıroglu et al. | Aligning accumulative representations for sign language recognition | |
CN102609715A (en) | Object type identification method combining plurality of interest point testers | |
Clavier et al. | DocMining: A cooperative platform for heterogeneous document interpretation according to user-defined scenarios | |
TW202232437A (en) | Method and system for classifying and labeling images | |
Song et al. | Hey, AI! Can You See What I See? Multimodal Transfer Learning-Based Design Metrics Prediction for Sketches With Text Descriptions | |
Pohudina et al. | Method for identifying and counting objects | |
CN101436213A (en) | Method for evaluating three-dimensional model search performance based on content | |
CN110472032A (en) | More classification intelligent answer search methods of medical custom entities word part of speech label | |
CN115017908A (en) | Named entity identification method and system | |
CN111814922B (en) | Video clip content matching method based on deep learning | |
CN115269633A (en) | Method for intelligently inquiring commodities based on CAD (computer-aided design) drawing | |
TW202232388A (en) | Learning system, learning method, and program | |
CN112329389B (en) | Chinese character stroke automatic extraction method based on semantic segmentation and tabu search |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170208 |