CN106650610A - Human face expression data collection method and device - Google Patents
Human face expression data collection method and device
- Publication number
- CN106650610A (application CN201610948110.XA / CN201610948110A)
- Authority
- CN
- China
- Prior art keywords
- human face
- expression information
- target
- face data
- expression
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
An embodiment of the invention discloses a facial expression data collection method and device. The method comprises: collecting at least one item of face data; obtaining at least one class of reference expression information of target face data, and obtaining the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class, wherein the target face data is any item of face data among the at least one item of face data; and determining the reference expression information with the largest probability as the target expression information of the target face data, and storing the target face data in association with the target expression information to complete the establishment of a facial expression database. The method and device can overcome the clumsiness and lack of intelligence of conventional facial expression data collection, and can complete the classification and acquisition of face data automatically by means of existing tools.
Description
Technical field
The present invention relates to the field of communications, and in particular to a facial expression data collection method and device.
Background technology
With the continuous development of artificial intelligence technology, facial expression recognition technology is also making constant breakthroughs. Facial expression recognition is widely applied, and people's behaviour can be analysed through it: for example, a shopper's reaction to goods can be learned from his or her expressions, so that goods can be placed in targeted positions in a store and sell better; or expression recognition can be used to prevent drowsiness, and so on. Facial expression recognition is built on model training with expression data, so establishing a facial expression database is of great significance. Traditional expression data collection falls into the following three modes: (1) using image capture equipment to collect particular expressions from volunteers or other people; (2) purchasing an existing expression database; (3) downloading expression pictures from the network and sorting them manually.
Recruiting volunteers is time-consuming and labour-intensive and requires the cooperation of the subjects, and most of the collected pictures are of the picture-in-the-lab form (background, illumination and so on are very uniform), so models trained on them cannot recognise picture-in-the-wild expression data (in which background, scale, illumination and so on are all diverse) well. Purchasing a database is expensive. And whichever collection mode is adopted, it is a manual collection mode that lacks intelligence and efficiency.
Summary of the invention
Embodiments of the present invention provide a facial expression data collection method and device, to overcome the clumsiness and lack of intelligence of collecting facial expression data in the traditional manner, and to complete the classification and collection of face data automatically by calling existing tools.
A first aspect of an embodiment of the present invention provides a facial expression data collection method, including:
collecting at least one item of face data;
obtaining at least one class of reference expression information of target face data, and obtaining the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class of reference expression information, wherein the target face data is any item of face data among the at least one item of face data;
determining the reference expression information with the largest probability as the target expression information of the target face data, and storing the target face data in association with the target expression information in a facial expression database.
With reference to the first aspect, in some possible implementations, collecting at least one item of face data includes: collecting at least one item of face data from a preset picture website; and/or crawling at least one item of face data from web pages using web-crawler technology.
With reference to the first aspect, in some possible implementations, the reference expression information is at least one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise.
With reference to the first aspect, in some possible implementations, obtaining at least one class of reference expression information of the target face data, and obtaining the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class, includes: calling a facial expression recognition application to obtain at least one class of reference expression information of the target face data, and to obtain the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class.
With reference to the first aspect, in some possible implementations, storing the target face data in association with the target expression information in the facial expression database includes: establishing a folder corresponding to each class of expression information, where the expression information is one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise, and the expression information corresponding to each folder is different; and storing the target face data in the folder corresponding to the target expression information.
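For illustration only, the following Python sketch shows one way the folder-per-class storage described above might be prepared. The function name, database root path and lower-case class labels are assumptions of this sketch, not part of the disclosure.

```python
import os

# The eight reference expression classes named in this disclosure.
EXPRESSIONS = ["anger", "contempt", "disgust", "fear",
               "happiness", "neutral", "sadness", "surprise"]

def create_expression_folders(root="face_expression_db"):
    """Create one folder per expression class under the database root."""
    for expression in EXPRESSIONS:
        os.makedirs(os.path.join(root, expression), exist_ok=True)
    return root
```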
A second aspect of an embodiment of the present invention provides a facial expression data collection device, including:
a collection unit, configured to collect at least one item of face data;
an obtaining unit, configured to obtain at least one class of reference expression information of target face data, and to obtain the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class of reference expression information, wherein the target face data is any item of face data among the at least one item of face data;
an establishing unit, configured to determine the reference expression information with the largest probability as the target expression information of the target face data, and to store the target face data in association with the target expression information in a facial expression database.
With reference to the second aspect, in some possible implementations, the collection unit is specifically configured to collect at least one item of face data from a preset picture website; and/or to crawl at least one item of face data from web pages using web-crawler technology.
With reference to the second aspect, in some possible implementations, the reference expression information is at least one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise.
With reference to the second aspect, in some possible implementations, the obtaining unit is specifically configured to call a facial expression recognition application to obtain at least one class of reference expression information of the target face data, and to obtain the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class.
With reference to the second aspect, in some possible implementations, when storing the target face data in association with the target expression information, the establishing unit is specifically configured to establish a folder corresponding to each class of expression information, where the expression information is one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise, and the expression information corresponding to each folder is different; and to store the target face data in the folder corresponding to the target expression information.
In a third aspect, an embodiment of the present invention provides a facial expression data collection device, including a processor, a memory, a receiver, a transmitter and a communication bus, where the processor, the memory, the receiver and the transmitter are connected through the communication bus and communicate with one another; the processor is configured to call executable program code stored in the memory to perform some or all of the steps described in any method of the first aspect of the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where the computer-readable storage medium stores program code to be executed by a terminal device, the program code includes execution instructions, and the execution instructions are used to perform some or all of the steps described in any method of the first aspect of the embodiments of the present invention.
It can be seen that, in the technical solution of the embodiments of the present invention, at least one item of face data is collected; at least one class of reference expression information of target face data is obtained, together with the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class; the reference expression information with the largest probability is determined as the target expression information of the target face data; and the target face data is stored in association with the target expression information in a facial expression database. By implementing the embodiments of the present invention, the clumsiness and lack of intelligence of collecting facial expression data in the traditional manner can be overcome, and the classification and collection of face data can be completed automatically by calling existing tools.
Description of the drawings
In order to describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below show some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a facial expression data collection method according to a first embodiment of the present invention;
Fig. 1-1 is a schematic diagram of the effect of obtaining target face data and the probabilities by calling Microsoft's expression recognition API according to the first embodiment of the present invention;
Fig. 1-2 is a schematic flowchart of obtaining at least one class of reference expression information of target face data, and the probabilities, by calling Microsoft's expression recognition API according to the first embodiment of the present invention;
Fig. 1-3 is a schematic diagram of the effect of determining the expression information with the largest probability as the target expression information by calling Microsoft's expression recognition API according to the first embodiment of the present invention;
Fig. 1-4 is a schematic diagram of the effect of storing target face data in association with target expression information by calling Microsoft's expression recognition API according to the first embodiment of the present invention;
Fig. 1-5 is a further schematic diagram of the effect of storing target face data in association with target expression information by calling Microsoft's expression recognition API according to the first embodiment of the present invention;
Fig. 2 is a schematic flowchart of a facial expression data collection method according to a second embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a facial expression data collection device according to a third embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a facial expression data collection device according to a fourth embodiment of the present invention.
Specific embodiment
In order to make a person skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Term " first ", " second ", " the 3rd " in description and claims of this specification and above-mentioned accompanying drawing, "
Four " it is etc. for distinguishing different objects, rather than for describing particular order.Additionally, " comprising " and " having " and they appoint
What deforms, it is intended that cover non-exclusive including.For example contain process, method, system, the product of series of steps or unit
Product or equipment are not limited to the step of listing or unit, but alternatively also include the step of not listing or unit, or
Alternatively also include other steps intrinsic for these processes, method, product or equipment or unit.
Reference herein to "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. The appearances of the phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments that are mutually exclusive with other embodiments. A person skilled in the art understands, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
Refer to Fig. 1, which is a schematic flowchart of a facial expression data collection method according to a first embodiment of the present invention. As shown in Fig. 1, building the facial expression database in this embodiment of the present invention comprises the following steps:
S101. Collect at least one item of face data.
Specifically, the face data may refer to face pictures. The user can download face pictures to obtain face data of a certain scale, for example by batch-downloading facial expression data from picture websites such as Google Image Search, Bing or Baidu, or by crawling expression pictures on web pages with a web crawler. However, the facial expression data collected at this point is of mixed quality, or lacks expression labels.
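As a minimal sketch of this collection step, the Python snippet below downloads candidate pictures from a list of image URLs; gathering the URL list itself (from a picture website or a crawler) is outside its scope, and the function and directory names are illustrative assumptions.

```python
import os
import requests

def download_face_images(image_urls, out_dir="raw_faces"):
    """Download candidate face pictures from a list of URLs."""
    os.makedirs(out_dir, exist_ok=True)
    saved = []
    for i, url in enumerate(image_urls):
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
        except requests.RequestException:
            continue  # skip unreachable or broken links
        path = os.path.join(out_dir, f"face_{i:06d}.jpg")
        with open(path, "wb") as f:
            f.write(resp.content)  # raw image bytes
        saved.append(path)
    return saved
```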
S102. Obtain at least one class of reference expression information of target face data, and obtain the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class of reference expression information, where the target face data is any item of face data among the at least one item of face data.
Specifically, existing facial expression recognition technology may be used (for example, a facial expression recognition tool may be called, such as the facial expression recognition APIs provided by Emotient, Affectiva, EmoVu, Microsoft and others) to perform expression recognition on each item of face data collected in step S101 and to record the recognition result. Taking a call to Microsoft's expression recognition API as an example, step S102 proceeds as follows: an HTTP request packet is first encapsulated; the key applied for from Microsoft's expression recognition API is placed in the HTTP header; the binary stream of the picture is read and passed into the HTTP body; the request is sent by POST; a response is obtained; and the content of the response is parsed. The content includes the expression recognition result, which contains the face bounding-box information of the current image and the probability of each expression, as shown in Fig. 1-1. A schematic flowchart of calling Microsoft's expression recognition API to obtain at least one class of reference expression information of the target face data, and the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class, is shown in Fig. 1-2.
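A minimal sketch of this request/response flow is given below. The endpoint URL, header names and response shape are modelled on the historical Microsoft Emotion API and are assumptions of this sketch; substitute whatever recognition service is actually used.

```python
import requests

# Assumed endpoint and header names (modelled on the retired Microsoft
# Emotion API); replace with the service you actually call.
EMOTION_API_URL = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"
API_KEY = "<your-subscription-key>"

def recognize_expression(image_path):
    """POST the raw image bytes and return the parsed recognition result.

    Assumed response shape: a list with one entry per detected face, each
    containing a 'faceRectangle' and a 'scores' dict mapping every
    reference expression to a probability.
    """
    headers = {
        "Ocp-Apim-Subscription-Key": API_KEY,       # key goes in the HTTP header
        "Content-Type": "application/octet-stream"  # body carries the binary stream
    }
    with open(image_path, "rb") as f:
        body = f.read()                             # binary stream of the picture
    resp = requests.post(EMOTION_API_URL, headers=headers, data=body, timeout=10)
    resp.raise_for_status()
    return resp.json()
```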
S103. Determine the reference expression information with the largest probability as the target expression information of the target face data, and store the target face data in association with the target expression information in the facial expression database.
Specifically, according to the probabilities obtained in step S102 that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class, the probabilities are compared, and the reference expression information with the largest probability is taken as the target expression information of the target face data, as shown in Fig. 1-3. Each item of face data is then placed, according to its corresponding target expression information, into the corresponding folder, thereby completing the extraction of data for the corresponding expression, as shown in Fig. 1-4 and Fig. 1-5.
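A minimal sketch of this selection-and-filing step, assuming `scores` is the per-expression probability dictionary returned by the recognition call (all names are illustrative):

```python
import os
import shutil

def file_by_expression(image_path, scores, root="face_expression_db"):
    """Pick the expression with the largest probability and move the
    picture into the matching class folder."""
    target_expression = max(scores, key=scores.get)  # argmax over the probabilities
    target_dir = os.path.join(root, target_expression)
    os.makedirs(target_dir, exist_ok=True)
    shutil.move(image_path, os.path.join(target_dir, os.path.basename(image_path)))
    return target_expression
```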
A specific implementation of collecting at least one item of face data may be: collecting at least one item of face data from a preset picture website; and/or crawling at least one item of face data from web pages using web-crawler technology.
The reference expression information is at least one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise.
A specific implementation of obtaining at least one class of reference expression information of the target face data, and obtaining the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class, may be: calling a facial expression recognition application to obtain at least one class of reference expression information of the target face data, and to obtain the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class.
A specific implementation of storing the target face data in association with the target expression information in the facial expression database may be: establishing a folder corresponding to each class of expression information, where the expression information is one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise, and the expression information corresponding to each folder is different; and storing the target face data in the folder corresponding to the target expression information.
It can be seen that, in the technical solution of the embodiments of the present invention, at least one item of face data is collected; at least one class of reference expression information of target face data is obtained, together with the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class; the reference expression information with the largest probability is determined as the target expression information of the target face data; and the target face data is stored in association with the target expression information in a facial expression database. By implementing the embodiments of the present invention, the clumsiness and lack of intelligence of collecting facial expression data in the traditional manner can be overcome, and the classification and collection of face data can be completed automatically by calling existing tools.
Refer to Fig. 2, which is a schematic flowchart of a facial expression data collection method according to a second embodiment of the present invention. As shown in Fig. 2, building the facial expression database in this embodiment of the present invention comprises the following steps:
S201. Collect at least one item of face data from a preset picture website, and/or crawl at least one item of face data from web pages using web-crawler technology.
S202. Call a facial expression recognition application to obtain at least one class of reference expression information of target face data, and obtain the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class of reference expression information, where the target face data is any item of face data among the at least one item of face data.
The reference expression information is at least one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise.
S203. Determine the reference expression information with the largest probability as the target expression information of the target face data.
S204. Establish a folder corresponding to each class of expression information, where the expression information is one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise, and the expression information corresponding to each folder is different.
S205. Store the target face data in the folder corresponding to the target expression information.
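For illustration, the following self-contained Python sketch strings steps S201–S205 together for a directory of already-downloaded pictures. The endpoint, header names, response shape and paths are assumptions of the sketch, not part of the disclosure.

```python
import os
import shutil
import requests

API_URL = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"  # assumed endpoint
API_KEY = "<your-subscription-key>"
DB_ROOT = "face_expression_db"

def build_expression_database(image_dir):
    """Classify every collected picture and file it under the folder of
    its most probable expression."""
    headers = {"Ocp-Apim-Subscription-Key": API_KEY,
               "Content-Type": "application/octet-stream"}
    for name in os.listdir(image_dir):
        path = os.path.join(image_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as f:
            resp = requests.post(API_URL, headers=headers, data=f.read(), timeout=10)
        if resp.status_code != 200 or not resp.json():
            continue                        # request failed or no face detected
        scores = resp.json()[0]["scores"]   # per-expression probabilities (assumed shape)
        label = max(scores, key=scores.get)
        target_dir = os.path.join(DB_ROOT, label)
        os.makedirs(target_dir, exist_ok=True)
        shutil.move(path, os.path.join(target_dir, name))

# Example usage:
# build_expression_database("raw_faces")
```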
It can be seen that, in the technical solution of the embodiments of the present invention, at least one item of face data is collected; at least one class of reference expression information of target face data is obtained, together with the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class; the reference expression information with the largest probability is determined as the target expression information of the target face data; and the target face data is stored in association with the target expression information in a facial expression database. By implementing the embodiments of the present invention, the clumsiness and lack of intelligence of collecting facial expression data in the traditional manner can be overcome, and the classification and collection of face data can be completed automatically by calling existing tools.
The following are device embodiments of the present invention, which are used to perform the methods implemented in method embodiments one and two of the present invention. For ease of description, only the parts related to the embodiments of the present invention are shown; for specific technical details that are not disclosed, refer to embodiment one and embodiment two of the present invention.
Refer to Fig. 3, which is a schematic structural diagram of a facial expression data collection device according to a third embodiment of the present invention. As shown in Fig. 3, the facial expression data collection device in this embodiment of the present invention includes the following units:
a collection unit 301, configured to collect at least one item of face data;
an obtaining unit 302, configured to obtain at least one class of reference expression information of target face data, and to obtain the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class of reference expression information, where the target face data is any item of face data among the at least one item of face data;
an establishing unit 303, configured to determine the reference expression information with the largest probability as the target expression information of the target face data, and to store the target face data in association with the target expression information in a facial expression database.
Optionally, the collection unit 301 is specifically configured to collect at least one item of face data from a preset picture website; and/or to crawl at least one item of face data from web pages using web-crawler technology.
Optionally, the reference expression information is at least one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise.
Optionally, the obtaining unit 302 is specifically configured to call a facial expression recognition application to obtain at least one class of reference expression information of the target face data, and to obtain the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class.
Optionally, when storing the target face data in association with the target expression information, the establishing unit 303 is specifically configured to establish a folder corresponding to each class of expression information, where the expression information is one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise, and the expression information corresponding to each folder is different; and to store the target face data in the folder corresponding to the target expression information.
Specifically, for the implementation of the above units, refer to the descriptions of the related steps in the embodiments corresponding to Fig. 1 and Fig. 2, which are not repeated here.
It can be seen that, in the technical solution of the embodiments of the present invention, at least one item of face data is collected; at least one class of reference expression information of target face data is obtained, together with the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class; the reference expression information with the largest probability is determined as the target expression information of the target face data; and the target face data is stored in association with the target expression information in a facial expression database. By implementing the embodiments of the present invention, the clumsiness and lack of intelligence of collecting facial expression data in the traditional manner can be overcome, and the classification and collection of face data can be completed automatically by calling existing tools.
Refer to Fig. 4, which is a schematic structural diagram of a facial expression data collection device according to a fourth embodiment of the present invention. As shown in Fig. 4, the facial expression data collection device in this embodiment of the present invention includes: at least one processor 401, for example a CPU, at least one receiver 403, at least one memory 404, at least one transmitter 405 and at least one communication bus 402. The communication bus 402 is used to implement connection and communication between these components. The receiver 403 and the transmitter 405 of the device in this embodiment of the present invention may be wired transmission ports, or may be wireless devices, for example including an antenna apparatus, for communicating signalling or data with other node devices. The memory 404 may be a high-speed RAM memory or a non-volatile memory, for example at least one disk memory. Optionally, the memory 404 may also be at least one storage device located away from the processor 401. A set of program code is stored in the memory 404, and the processor 401 calls the code stored in the memory 404 through the communication bus 402 to perform the related functions.
The processor 401 is configured to: collect at least one item of face data; obtain at least one class of reference expression information of target face data, and obtain the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class of reference expression information, where the target face data is any item of face data among the at least one item of face data; determine the reference expression information with the largest probability as the target expression information of the target face data; and store the target face data in association with the target expression information in a facial expression database.
Optionally, when collecting at least one item of face data, the processor 401 is specifically configured to collect at least one item of face data from a preset picture website; and/or to crawl at least one item of face data from web pages using web-crawler technology.
Optionally, the reference expression information is at least one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise.
Optionally, when obtaining at least one class of reference expression information of the target face data, and obtaining the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class, the processor 401 is specifically configured to call a facial expression recognition application to obtain at least one class of reference expression information of the target face data, and to obtain the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class.
Optionally, when storing the target face data in association with the target expression information in the facial expression database, the processor 401 is specifically configured to establish a folder corresponding to each class of expression information, where the expression information is one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise, and the expression information corresponding to each folder is different; and to store the target face data in the folder corresponding to the target expression information.
Specifically, for the implementation of the above units, refer to the descriptions of the related steps in the embodiments corresponding to Fig. 1 and Fig. 2, which are not repeated here.
It can be seen that, in the technical solution of the embodiments of the present invention, at least one item of face data is collected; at least one class of reference expression information of target face data is obtained, together with the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class; the reference expression information with the largest probability is determined as the target expression information of the target face data; and the target face data is stored in association with the target expression information in a facial expression database. By implementing the embodiments of the present invention, the clumsiness and lack of intelligence of collecting facial expression data in the traditional manner can be overcome, and the classification and collection of face data can be completed automatically by calling existing tools.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium may store a program, and when the program is executed, it performs some or all of the steps of any of the methods described in the foregoing method embodiments.
It should be noted that, for brevity, the foregoing method embodiments are all described as a series of action combinations. However, a person skilled in the art should know that the present invention is not limited by the described sequence of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Secondly, a person skilled in the art should also know that the embodiments described in the specification are preferred embodiments, and the actions and units involved are not necessarily required by the present invention.
The sequence of the steps of the methods of the embodiments of the present invention may be adjusted, and steps may be merged or deleted according to actual needs. The units of the terminal of the embodiments of the present invention may be integrated, further divided or deleted according to actual needs.
In the above embodiments, the description of each embodiment has its own emphasis. For a part not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed device may be implemented in other ways. For example, the device embodiments described above are merely schematic; the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a terminal device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a portable hard drive, a magnetic disk or an optical disc.
A person of ordinary skill in the art may understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or the like.
The facial expression data collection method and device provided by the embodiments of the present invention are described in detail above. Specific examples are used herein to set forth the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, for a person of ordinary skill in the art, there will be changes in the specific implementations and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation on the present invention.
Claims (10)
1. A facial expression data collection method, characterised in that the method comprises:
collecting at least one item of face data;
obtaining at least one class of reference expression information of target face data, and obtaining the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class of reference expression information, wherein the target face data is any item of face data among the at least one item of face data;
determining the reference expression information with the largest probability as the target expression information of the target face data, and storing the target face data in association with the target expression information in a facial expression database.
2. The method according to claim 1, characterised in that collecting at least one item of face data comprises:
collecting at least one item of face data from a preset picture website;
and/or crawling at least one item of face data from web pages using web-crawler technology.
3. The method according to claim 1, characterised in that the reference expression information is at least one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise.
4. The method according to claim 1, characterised in that obtaining at least one class of reference expression information of the target face data, and obtaining the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class of reference expression information, comprises:
calling a facial expression recognition application to obtain at least one class of reference expression information of the target face data, and to obtain the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class of reference expression information.
5. The method according to claim 1, characterised in that storing the target face data in association with the target expression information in the facial expression database comprises:
establishing a folder corresponding to each class of expression information, wherein the expression information is one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise, and the expression information corresponding to each folder is different;
storing the target face data in the folder corresponding to the target expression information.
6. A facial expression data collection device, characterised in that the device comprises:
a collection unit, configured to collect at least one item of face data;
an obtaining unit, configured to obtain at least one class of reference expression information of target face data, and to obtain the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class of reference expression information, wherein the target face data is any item of face data among the at least one item of face data;
an establishing unit, configured to determine the reference expression information with the largest probability as the target expression information of the target face data, and to store the target face data in association with the target expression information in a facial expression database.
7. The device according to claim 6, characterised in that the collection unit is specifically configured to collect at least one item of face data from a preset picture website; and/or to crawl at least one item of face data from web pages using web-crawler technology.
8. The device according to claim 6, characterised in that the reference expression information is at least one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise.
9. The device according to claim 6, characterised in that the obtaining unit is specifically configured to call a facial expression recognition application to obtain at least one class of reference expression information of the target face data, and to obtain the probability that the target expression information corresponding to the target face data is each class of reference expression information among the at least one class of reference expression information.
10. The device according to claim 6, characterised in that, when storing the target face data in association with the target expression information, the establishing unit is specifically configured to establish a folder corresponding to each class of expression information, wherein the expression information is one of anger, contempt, disgust, fear, happiness, neutral, sadness and surprise, and the expression information corresponding to each folder is different; and to store the target face data in the folder corresponding to the target expression information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610948110.XA CN106650610A (en) | 2016-11-02 | 2016-11-02 | Human face expression data collection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610948110.XA CN106650610A (en) | 2016-11-02 | 2016-11-02 | Human face expression data collection method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106650610A true CN106650610A (en) | 2017-05-10 |
Family
ID=58821318
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610948110.XA Pending CN106650610A (en) | 2016-11-02 | 2016-11-02 | Human face expression data collection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106650610A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101859367A (en) * | 2009-04-07 | 2010-10-13 | 北京算通数字技术研究中心有限公司 | Digital photo sorting method, device and application system thereof |
CN102890777A (en) * | 2011-07-21 | 2013-01-23 | 爱国者电子科技(天津)有限公司 | Computer system capable of identifying facial expressions |
CN105373784A (en) * | 2015-11-30 | 2016-03-02 | 北京光年无限科技有限公司 | Intelligent robot data processing method, intelligent robot data processing device and intelligent robot system |
CN105956059A (en) * | 2016-04-27 | 2016-09-21 | 乐视控股(北京)有限公司 | Emotion recognition-based information recommendation method and apparatus |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107358169A (en) * | 2017-06-21 | 2017-11-17 | 厦门中控智慧信息技术有限公司 | A kind of facial expression recognizing method and expression recognition device |
WO2020006863A1 (en) * | 2018-07-06 | 2020-01-09 | 平安科技(深圳)有限公司 | Automatic approval comment input method and apparatus, computer device, and storage medium |
CN109150984A (en) * | 2018-07-27 | 2019-01-04 | 平安科技(深圳)有限公司 | The method and apparatus for obtaining data resource |
CN109150984B (en) * | 2018-07-27 | 2021-11-02 | 平安科技(深圳)有限公司 | Method and device for acquiring data resources |
CN109831618A (en) * | 2018-12-10 | 2019-05-31 | 平安科技(深圳)有限公司 | Photographic method, computer readable storage medium and terminal device based on expression |
CN111612654A (en) * | 2020-05-08 | 2020-09-01 | 快猪侠信息技术(杭州)有限公司 | Smart hotel information interaction system and interaction method thereof |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170510 |