CN109298783A - Mark monitoring method, device and electronic equipment based on Expression Recognition - Google Patents
- Publication number
- CN109298783A (application number CN201811021970.4A)
- Authority
- CN
- China
- Prior art keywords
- mark
- degree
- fatigue
- facial image
- expression recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
Abstract
This application provides an annotation monitoring method, apparatus, and electronic device based on expression recognition, relating to the technical field of image processing. The annotation monitoring method based on expression recognition first obtains a facial image to be processed; it then performs expression recognition on the facial image to determine the fatigue degree corresponding to that image; finally, according to the fatigue degree, it executes a corresponding annotation strategy. The method performs expression analysis on annotation personnel, quantifies their annotation effectiveness using the obtained fatigue degree, and then determines, in a targeted manner, the annotation strategy to be executed. This realizes intelligent control of the annotation mechanism, reduces human intervention, saves substantial human resources and time, effectively improves the annotation quality of annotation personnel, and ensures a good annotation result.
Description
Technical field
This application relates to the technical field of image processing, and in particular to an annotation monitoring method, apparatus, and electronic device based on expression recognition.
Background
Data annotation is the manual labeling of pictures, videos, and audio content, i.e., making marks according to actual needs. The annotated data is used to train algorithm models, which are then applied in fields such as image recognition and speech recognition. As the sole input source for a model, the quality of the annotated data directly determines the quality of the model's inference. Generally, the more accurate and the more plentiful the annotated data, the better the model performs.
Data annotation, an important part of the data supply during the development of artificial intelligence, is highly repetitive, mechanical work. For annotation personnel, the annotation experience provided by existing annotation platforms is monotonous, and the assembly-line nature of the work not only causes an obvious decline in annotator efficiency but also a corresponding increase in the probability of errors. The quality of this annotation work is difficult to quantify by conventional means; existing quality control depends on inspection by acceptors and pre-delivery inspection by inspectors.
The quality-inspection mechanism currently formed by annotators, inspectors, and acceptors is difficult to bring under closed-loop control and involves excessive human intervention, so annotation quality cannot be guaranteed; moreover, annotators' pursuit of speed makes them prone to fatigue and loss of quality. No effective solution has yet been proposed for improving the annotation quality of annotators and ensuring a good annotation result.
Summary of the invention
In view of this, the purpose of the application is to provide an annotation monitoring method, apparatus, and electronic device based on expression recognition, so as to realize intelligent control of the annotation mechanism, reduce human intervention, save substantial human resources and time, effectively improve the annotation quality of annotation personnel, and ensure a good annotation result.
In a first aspect, an embodiment of the present application provides an annotation monitoring method based on expression recognition, comprising:
obtaining a facial image to be processed;
performing expression recognition on the facial image to determine the fatigue degree corresponding to the facial image;
executing a corresponding annotation strategy according to the fatigue degree.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation of the first aspect, wherein executing a corresponding annotation strategy according to the fatigue degree comprises:
calculating the average of the fatigue degrees corresponding to all facial images obtained within a preset duration;
executing a corresponding annotation strategy according to that average.
With reference to the first aspect, an embodiment of the present application provides a second possible implementation of the first aspect, wherein executing a corresponding annotation strategy according to the fatigue degree comprises:
when the fatigue degree is greater than or equal to a first preset threshold and less than a second preset threshold, reducing the difficulty of the annotation task according to the difficulty coefficients of the objects to be annotated; or
when the fatigue degree is greater than or equal to the second preset threshold and less than a third preset threshold, generating fatigue prompt information to prompt the annotation personnel; or
when the fatigue degree is greater than or equal to the third preset threshold, terminating the current annotation task.
With reference to the second possible implementation of the first aspect, an embodiment of the present application provides a third possible implementation of the first aspect, wherein before reducing the difficulty of the annotation task according to the difficulty coefficient of an object to be annotated, the method further comprises:
obtaining the annotation type of the object to be annotated;
performing predictive recognition on the object to be annotated based on a pre-established neural network model to obtain the confidence corresponding to the prediction result;
determining the difficulty coefficient of the object to be annotated according to the confidence and the annotation type.
With reference to the second possible implementation of the first aspect, an embodiment of the present application provides a fourth possible implementation of the first aspect, wherein the fatigue prompt information comprises one or more of a text prompt, a picture prompt, a voice prompt, and a vibration prompt.
With reference to the first aspect, an embodiment of the present application provides a fifth possible implementation of the first aspect, wherein performing expression recognition on the facial image to determine the corresponding fatigue degree comprises:
performing emotion recognition on the facial image to determine the emotion type corresponding to the facial image;
performing key-point extraction on the facial image to determine the degree of eye closure in the facial image;
determining the fatigue degree corresponding to the facial image according to the emotion type and the degree of eye closure.
With reference to the fifth possible implementation of the first aspect, an embodiment of the present application provides a sixth possible implementation of the first aspect, wherein determining the fatigue degree corresponding to the facial image according to the emotion type and the degree of eye closure comprises:
obtaining the annotation rate and annotation accuracy corresponding to the annotation personnel;
determining, according to the annotation rate and the annotation accuracy, the weight values respectively corresponding to the emotion type and the degree of eye closure;
determining the fatigue degree corresponding to the facial image according to the emotion type and the degree of eye closure and their respective weight values.
In a second aspect, an embodiment of the present application further provides an annotation monitoring apparatus based on expression recognition, comprising:
an image acquisition module for obtaining a facial image to be processed;
a fatigue determination module for performing expression recognition on the facial image to determine the fatigue degree corresponding to the facial image;
a strategy execution module for executing a corresponding annotation strategy according to the fatigue degree.
In a third aspect, an embodiment of the present application further provides an electronic device comprising a memory and a processor, the memory storing a computer program executable on the processor, wherein the processor, when executing the computer program, implements the method described in the first aspect and any possible implementation thereof.
In a fourth aspect, an embodiment of the present application further provides a computer-readable medium bearing non-volatile program code executable by a processor, the program code causing the processor to execute the method of the first aspect and any possible implementation thereof.
The embodiments of the present application bring the following beneficial effects:
In the embodiments of the present application, the annotation monitoring method based on expression recognition first obtains a facial image to be processed; it then performs expression recognition on the facial image to determine the corresponding fatigue degree; finally, it executes a corresponding annotation strategy according to that fatigue degree. The method performs expression analysis on annotation personnel, quantifies their annotation effectiveness using the obtained fatigue degree, and then determines, in a targeted manner, the annotation strategy to be executed. This realizes intelligent control of the annotation mechanism, reduces human intervention, saves substantial human resources and time, effectively improves the annotation quality of annotation personnel, and ensures a good annotation result.
Other features and advantages of the application will be set forth in the following description and will in part become apparent from the description or be understood by practicing the application. The objectives and other advantages of the application are realized and attained by the structure particularly pointed out in the description, the claims, and the accompanying drawings.
To make the above objectives, features, and advantages of the application clearer and more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
In order to more clearly illustrate the specific embodiments of the application or the technical solutions in the prior art, the drawings needed in the description of the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the application, and for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an annotation monitoring method based on expression recognition provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of annotation personnel performing an annotation task provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of another annotation monitoring method based on expression recognition provided by an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an annotation monitoring apparatus based on expression recognition provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of another annotation monitoring apparatus based on expression recognition provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Specific embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions of the application are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are some, rather than all, of the embodiments of the present application. Based on the embodiments in the application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the application.
The quality-inspection mechanism currently formed by annotators, inspectors, and acceptors is difficult to bring under closed-loop control and involves excessive human intervention, so annotation quality cannot be guaranteed; moreover, annotators' pursuit of speed makes them prone to fatigue and loss of quality. On this basis, the annotation monitoring method, apparatus, and electronic device based on expression recognition provided by the embodiments of the present application can perform expression analysis on annotation personnel, quantify their annotation effectiveness using the obtained fatigue degree, and then determine, in a targeted manner, the annotation strategy to be executed, realizing intelligent control of the annotation mechanism, reducing human intervention, saving substantial human resources and time, effectively improving annotation quality, and ensuring a good annotation result.
To facilitate understanding of the present embodiment, the annotation monitoring method based on expression recognition disclosed in the embodiments of the present application is first described in detail. The method is applied during data annotation and is implemented by related hardware and software, for example by an electronic device used for annotation, such as a computer, tablet, or mobile phone.
Referring to the schematic flowchart of an annotation monitoring method based on expression recognition shown in Fig. 1, the method includes:
Step S101: obtain a facial image to be processed.
For example, as shown in Fig. 2, the computer used for annotation is provided or connected with a camera, which can be used to capture facial images of the annotation personnel performing the annotation task. The facial image may be, for example, an image in bmp, jpg, or png format. During an annotation task, each object to be annotated is labeled; objects to be annotated may be pictures, videos, audio content, and the like.
Step S102: perform expression recognition on the facial image to determine the corresponding fatigue degree.
For example, the fatigue degree corresponding to the facial image can be determined by a fatigue evaluation function. The fatigue evaluation function is a linear evaluation function combining facial features derived from emotion-label recognition and facial key-point recognition, and it outputs a fatigue score characterizing the fatigue degree.
Step S103: execute a corresponding annotation strategy according to the fatigue degree.
Different annotation strategies are determined for different fatigue degrees. For example, when the fatigue degree is greater than or equal to a first preset threshold and less than a second preset threshold, the difficulty of the annotation task is reduced; when the fatigue degree is greater than or equal to the second preset threshold and less than a third preset threshold, a fatigue reminder is issued; and when the fatigue degree is greater than or equal to the third preset threshold, the annotation task is forcibly terminated. These annotation strategies can coexist in the same embodiment or be executed separately in different embodiments. The first, second, and third preset thresholds can be set according to actual conditions.
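The three-tier strategy can be sketched as a simple dispatch function; the threshold values below (0.3, 0.6, 0.8, taken from the examples later in this description) and the strategy names are illustrative, not prescribed by the application.

```python
def select_strategy(fatigue, t1=0.3, t2=0.6, t3=0.8):
    """Map a fatigue degree to an annotation strategy using the three
    preset thresholds (illustrative example values: 0.3 / 0.6 / 0.8)."""
    if fatigue < t1:
        return "normal"             # continue annotating in the preset order
    if fatigue < t2:
        return "reduce_difficulty"  # serve lower-difficulty objects
    if fatigue < t3:
        return "fatigue_prompt"     # text/picture/voice/vibration reminder
    return "terminate_task"         # force-stop the current annotation task
```

In a deployment the thresholds would be configured per task rather than hard-coded, as the description notes they are set according to actual conditions.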
To guarantee the accuracy of the fatigue degree and prevent misjudgments caused by short-term changes in the facial expressions of annotation personnel, in one possible embodiment the above step S103 includes: calculating the average of the fatigue degrees corresponding to all facial images obtained within a preset duration, and executing a corresponding annotation strategy according to that average.
For example, the facial images of the annotation personnel within a 5-second window can be detected; the average fatigue degree of all facial images obtained in those 5 seconds is calculated, and the corresponding annotation strategy is executed according to that average. For instance, when the average is greater than or equal to 0.3 and less than 0.6, the difficulty of the annotation task is reduced; when the average is greater than or equal to 0.6 and less than 0.8, a fatigue prompt is issued to the annotation personnel. By executing the above annotation strategies, the annotation quality of the personnel can be effectively guaranteed.
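A minimal sketch of this windowed averaging, assuming per-frame fatigue scores arrive as a stream; the 5-second window is represented here as a fixed count of recent frames (window length would be the frame rate times 5 in practice):

```python
from collections import deque

class FatigueWindow:
    """Keep the fatigue scores of the most recent frames and act on their
    average, so a momentary expression change does not trigger a strategy."""
    def __init__(self, max_frames):
        self.scores = deque(maxlen=max_frames)  # e.g. 5 s x frame rate

    def add(self, score):
        self.scores.append(score)  # oldest score drops out automatically

    def average(self):
        return sum(self.scores) / len(self.scores) if self.scores else 0.0
```

The `deque(maxlen=...)` makes the window self-trimming, so each new frame only requires an append plus one pass over a small fixed-size buffer.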
The annotation monitoring method based on expression recognition provided by the embodiments of the present application performs expression analysis on annotation personnel, quantifies their annotation effectiveness using the obtained fatigue degree, and then determines, in a targeted manner, the annotation strategy to be executed. It thereby realizes intelligent control of the annotation mechanism, reduces human intervention, saves substantial human resources and time, effectively improves annotation quality, and ensures a good annotation result.
On the basis of the above embodiments, an embodiment of the present application provides another annotation monitoring method based on expression recognition. As shown in Fig. 3, this method includes:
Step S301: obtain a facial image to be processed.
When entering or starting an annotation task, the annotation personnel either automatically or manually open the camera of the electronic device used for annotation, such as a computer or tablet.
In one possible embodiment, the electronic device performs face detection on the image captured by the camera. When it is determined that the image contains no face, a posture-adjustment prompt is generated to prompt the annotation personnel to adjust their sitting posture, or the shooting angle of the camera is adjusted.
Step S302: perform emotion recognition on the facial image to determine the corresponding emotion type.
For example, a pre-established emotion recognition model can be used to identify the facial image, thereby determining the emotion type corresponding to the facial image. Emotion types may include, but are not limited to, positive emotions and negative emotions, where positive emotions can be further divided into happy, surprised, etc., and negative emotions into sad, disgusted, etc.
Step S303: perform key-point extraction on the facial image to determine the degree of eye closure in the facial image.
Specifically, after the annotation task is opened, facial images of the annotation personnel are obtained in real time, and key-point extraction is performed on each frame. According to the key points extracted from the preceding frames, the degree of eye closure corresponding to the current frame is determined; that is, the current degree of eye closure is determined by continuously learning from the facial images.
Step S304: determine the fatigue degree corresponding to the facial image according to the emotion type and the degree of eye closure.
For example, the fatigue degree can be expressed by the formula:
T = ((ΣPositive × wkn1 − ΣNegative × wkn2) × w1 + Landmark × w2) / f    (1)
where T denotes the fatigue degree; Positive denotes the marker of each positive emotion: when that positive emotion type is present, e.g., when the emotion "happy" is included, the Positive corresponding to "happy" is 1, and when it is absent, the Positive corresponding to "happy" is 0; wkn1 denotes the weight value corresponding to each emotion type among the positive emotions. Negative denotes the marker of each negative emotion: when that negative emotion type is present, e.g., when the emotion "disgust" is included, the Negative corresponding to "disgust" is 1, and when it is absent, the Negative corresponding to "disgust" is 0; wkn2 denotes the weight value corresponding to each emotion type among the negative emotions. Landmark denotes the degree of eye closure; w1 denotes the total weight value corresponding to emotion; w2 denotes the weight value corresponding to the degree of eye closure; and f denotes a normalization factor, which can be obtained from prior knowledge.
In one possible embodiment, the above step S304 includes:
(a1) Obtain the annotation rate and annotation accuracy corresponding to the annotation personnel.
According to the audits of the annotation results by inspectors and acceptors, the annotation accuracy of the annotation personnel is determined, e.g., 90% or 95%; the annotation rate of the annotation personnel is determined from their annotation records, e.g., 30 pictures annotated per minute.
(a2) Determine, according to the annotation rate and the annotation accuracy, the weight values respectively corresponding to the emotion type and the degree of eye closure.
For example, if an annotator works quickly and with high accuracy even while their degree of eye opening is low, eye closure is a weak fatigue signal for that person, so the weight value corresponding to the degree of eye closure is made smaller and the weight value corresponding to the emotion type larger. In one possible embodiment, the weight values in formula (1) can be adjusted based on this principle.
(a3) Determine the fatigue degree corresponding to the facial image according to the emotion type, the degree of eye closure, and their respective weight values.
Specifically, referring to formula (1), after the weight values have been determined in step (a2), the fatigue degree corresponding to the facial image is determined according to those weight values.
In a concrete implementation, by obtaining the annotation rate and annotation accuracy in real time, the weight values corresponding to the emotion type and the eye-closure degree of the facial image are updated in real time, and the fatigue degree corresponding to the facial image is updated accordingly, so that the fatigue degree is accurately located through continuous learning.
Step S305: judge whether the fatigue degree is less than the first preset threshold.
The first preset threshold is set according to actual conditions and serves as a boundary for the fatigue degree.
If the fatigue degree is less than the first preset threshold, step S306 is executed; if the fatigue degree is greater than or equal to the first preset threshold, step S307 is executed.
Step S306: execute the normal annotation operation in the preset order.
When the fatigue degree is less than the first preset threshold, e.g., less than 0.3, the annotation personnel are in good condition; no change of annotation strategy is needed, and annotation proceeds normally in the preset order.
Step S307: judge whether the fatigue degree is greater than or equal to the first preset threshold and less than the second preset threshold.
The second preset threshold is greater than the first. For example, if the first preset threshold is 0.3, the second is 0.6. When the fatigue degree is greater than or equal to the first preset threshold and less than the second, the annotation personnel can no longer continue with the more difficult annotation tasks.
If the fatigue degree is greater than or equal to the first preset threshold and less than the second preset threshold, step S308 is executed; if it is greater than or equal to the second preset threshold, step S309 is executed.
Step S308: reduce the difficulty of the annotation task according to the difficulty coefficients of the objects to be annotated.
The difficulty coefficient of an object to be annotated is calculated in advance by a model. In one possible embodiment, the difficulty coefficient is calculated before step S308; the calculation process includes:
(b1) Obtain the annotation type of the object to be annotated.
The annotation type may be pre-assigned to the object to be annotated by the annotation personnel. Annotation types may include, but are not limited to, binary-classification annotation and bounding-box annotation. Under otherwise equal conditions, the difficulty coefficient of an object to be annotated with binary classification is less than that of an object to be annotated with bounding boxes.
(b2) Perform predictive recognition on the object to be annotated based on the pre-established neural network model to obtain the confidence corresponding to the prediction result.
For example, in a scenario of classifying images into positive and negative samples, the pre-established neural network model performs predictive recognition on the image and obtains a prediction result (positive sample or negative sample) and the confidence corresponding to that prediction result. Under otherwise equal conditions, the higher the confidence of an object to be annotated, the larger its difficulty coefficient.
(b3) Determine the difficulty coefficient of the object to be annotated according to the confidence and the annotation type.
Corresponding weight values can be set for the confidence and for the annotation types, and the difficulty coefficient of the object to be annotated is determined from these weight values. For example, the difficulty coefficient can be expressed as:
S = (Σ_{i=1..n} label_i × l_i) × X + Y × y
where S denotes the difficulty coefficient; n denotes the number of annotation types; label_i denotes the marker of the i-th annotation type: when the object to be annotated belongs to the i-th annotation type, label_i is 1, and when it does not, label_i is 0; l_i denotes the weight value corresponding to the i-th annotation type; X denotes the total weight value corresponding to the annotation types; y denotes the confidence; and Y denotes the weight value corresponding to the confidence.
For example, when the annotation type of a certain object to be annotated is binary classification, the corresponding confidence is 0.9, the weight value corresponding to the binary-classification annotation type is 0.4, the total weight value corresponding to the annotation types is 0.3, and the weight value corresponding to the confidence is 0.7, then the difficulty coefficient of the object is 1 × 0.4 × 0.3 + 0.7 × 0.9 = 0.75.
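The difficulty coefficient can be sketched as follows; the weight values are the illustrative ones from the worked example above:

```python
def difficulty_coefficient(type_markers, type_weights, total_type_weight,
                           confidence, confidence_weight):
    """Difficulty coefficient S: the one-hot annotation-type markers weighted
    per type and scaled by the total type weight X, plus the model confidence
    y weighted by Y. All weight values here are illustrative."""
    type_term = sum(m * w for m, w in zip(type_markers, type_weights))
    return type_term * total_type_weight + confidence_weight * confidence
```

With the example values (binary classification among two types, type weight 0.4, total type weight 0.3, confidence 0.9, confidence weight 0.7), `difficulty_coefficient([1, 0], [0.4, 0.6], 0.3, 0.9, 0.7)` reproduces the 0.75 of the worked example; the second type weight 0.6 is a hypothetical value for a bounding-box type.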
Therefore, in step S308, for example when the fatigue degree is greater than or equal to 0.3 and less than 0.6, the difficulty of the annotation task can be reduced by screening out objects to be annotated with lower difficulty coefficients for annotation. Objects to be annotated with larger difficulty coefficients require a larger workload, while those with smaller difficulty coefficients require a smaller workload.
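The screening step amounts to filtering the pending queue by precomputed difficulty coefficient; the threshold value and the dictionary shape of a queue item are illustrative assumptions:

```python
def reduce_task_difficulty(objects, max_difficulty=0.6):
    """Keep only queue items whose precomputed difficulty coefficient is
    below the threshold, so a fatigued annotator sees easier objects."""
    return [o for o in objects if o["difficulty"] < max_difficulty]
```

The harder items are not discarded: they stay in the pool to be served again once the annotator's fatigue degree drops back below the first threshold.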
Step S309: judge whether the fatigue degree is greater than or equal to the second preset threshold and less than the third preset threshold.
The third preset threshold is greater than the second. For example, if the first preset threshold is 0.3 and the second is 0.6, the third preset threshold is 0.8. When the fatigue degree is greater than or equal to the second preset threshold and less than the third, the annotation personnel are fatigued and need to rest in due course.
If the fatigue degree is greater than or equal to the second preset threshold and less than the third preset threshold, step S310 is executed; if it is greater than or equal to the third preset threshold, step S311 is executed.
Step S310: generate fatigue prompt information to prompt the annotation personnel.
The fatigue prompt information includes one or more of a text prompt, a picture prompt, a voice prompt, and a vibration prompt; for example, prompt text can be displayed in a highlighted color on the display screen of the electronic device.
Step S311: terminate the current annotation task.
When the fatigue degree is greater than or equal to the third preset threshold, the annotation personnel can no longer perform the annotation task, and the annotation task is terminated. For example, the electronic device used for data annotation directly closes the current annotation page, or displays another prompt page to indicate that annotation cannot currently continue.
In a possible embodiment, the suspension duration of the annotation task can be set according to the actual situation; when the current annotation task is terminated, the time at which the annotation task will next be opened is displayed at the same time.
In practical applications, steps S301 to S311 can be executed in a continuous loop, so that the annotation state is monitored in real time and different annotation strategies are executed.
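Taken together, steps S308 to S311 amount to a threshold dispatch on the degree of fatigue. A minimal sketch using the example thresholds 0.3 / 0.6 / 0.8 from the description (the function and the strategy labels are invented for illustration, not part of the claimed method):

```python
def choose_strategy(fatigue, first=0.3, second=0.6, third=0.8):
    """Map a degree of fatigue to the annotation strategy to execute."""
    if fatigue >= third:
        return "terminate_task"      # step S311: stop annotating entirely
    if fatigue >= second:
        return "fatigue_prompt"      # step S310: remind the annotator to rest
    if fatigue >= first:
        return "reduce_difficulty"   # step S308: assign easier objects
    return "continue"                # below all thresholds: no intervention
```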
In addition, in a possible embodiment, if the electronic device used for data annotation does not detect, from the images captured by the camera, any annotation personnel performing annotation within a preset duration such as 10 minutes, the camera is automatically turned off; when the annotation task is next opened, the camera is turned on again either automatically or manually.
With the annotation monitoring method based on expression recognition provided by the embodiments of the present application, expression analysis is performed on the annotation personnel, and the obtained degree of fatigue is used to quantify their annotation effectiveness, so that the annotation strategy to be executed can be determined in a targeted manner: for example, the difficulty of the annotation task is reduced when the personnel are fatigued, a reminder is issued, or the annotation task is terminated. Intelligent control of the annotation mechanism is thus realized, and human intervention is reduced. Useless labor by fatigued annotation personnel is avoided without repeated examination and inspection, which saves a large amount of human resources and time, effectively improves the annotation quality of the personnel, and guarantees a better annotation effect.
Corresponding to the above annotation monitoring method based on expression recognition, and referring to Fig. 4, an embodiment of the present application provides an annotation monitoring device based on expression recognition, which includes:
an image acquisition module 11, configured to obtain a facial image to be processed;
a fatigue determining module 12, configured to perform expression recognition on the facial image and determine the degree of fatigue corresponding to the facial image; and
a policy execution module 13, configured to execute a corresponding annotation strategy according to the degree of fatigue.
Further, the policy execution module 13 is also configured to:
calculate the average value of the degrees of fatigue corresponding to all facial images obtained within a preset duration; and
execute the corresponding annotation strategy according to the average value.
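The averaging step can be sketched as a time-based sliding window. The class name, the window length, and the explicit `now` timestamps below are illustrative assumptions, not details from the specification:

```python
import time
from collections import deque

class FatigueWindow:
    """Rolling average of the fatigue degrees observed within a preset duration."""

    def __init__(self, window_seconds=60.0):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, fatigue) pairs, oldest first

    def add(self, fatigue, now=None):
        now = time.monotonic() if now is None else now
        self.samples.append((now, fatigue))
        # Discard samples that fell out of the preset duration.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()

    def average(self):
        if not self.samples:
            return 0.0
        return sum(f for _, f in self.samples) / len(self.samples)
```

The strategy module would then act on `average()` rather than on each frame's fatigue value, smoothing out momentary expressions.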
Further, the policy execution module 13 is also configured to:
when the degree of fatigue is greater than or equal to the first preset threshold and less than the second preset threshold, reduce the difficulty of the annotation task according to the difficulty coefficients of the objects to be annotated; or
when the degree of fatigue is greater than or equal to the second preset threshold and less than the third preset threshold, generate fatigue prompt information to prompt the annotation personnel; or
when the degree of fatigue is greater than or equal to the third preset threshold, terminate the current annotation task.
Further, on the basis of Fig. 4, and referring to another annotation monitoring device based on expression recognition shown in Fig. 5, the device further includes a difficulty determining module 14, which is configured to:
obtain the marking type of the object to be annotated;
perform predictive recognition on the object to be annotated based on a pre-established neural network model, to obtain the confidence level corresponding to the prediction result; and
determine the difficulty coefficient of the object to be annotated according to the confidence level and the marking type.
Further, the fatigue prompt information includes one or more of a text prompt, a picture prompt, a voice prompt, and a vibration prompt.
Further, the fatigue determining module 12 further includes:
an emotion determining unit 121, configured to perform emotion recognition on the facial image and determine the emotion type corresponding to the facial image;
a closure degree determining unit 122, configured to perform key point extraction on the facial image and determine the eye closure degree in the facial image; and
a fatigue determining unit 123, configured to determine the degree of fatigue corresponding to the facial image according to the emotion type and the eye closure degree.
Further, the closure degree determining unit 122 is also configured to:
obtain the annotation rate and annotation accuracy corresponding to the annotation personnel;
determine the weights corresponding to the emotion type and the eye closure degree according to the annotation rate and the annotation accuracy; and
determine the degree of fatigue corresponding to the facial image according to the emotion type, the eye closure degree, and their corresponding weights.
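One way to read this unit is as a weighted blend of the two cues, with the weights driven by the annotator's work metrics. The specification does not give a concrete weighting rule, so everything below — the emotion-to-score table, the performance-based weight formula, and all names — is an assumption for illustration only:

```python
# Hypothetical mapping from a recognized emotion type to a fatigue-related score.
EMOTION_SCORE = {"tired": 1.0, "neutral": 0.4, "happy": 0.1}

def fatigue_degree(emotion, eye_closure, annotation_rate, annotation_accuracy):
    """Blend the emotion cue and the eye-closure cue into one fatigue degree.

    Assumed rule: the lower the annotation rate/accuracy, the more weight the
    physiological cue (eye closure) receives. Inputs are taken to lie in [0, 1].
    """
    performance = 0.5 * (annotation_rate + annotation_accuracy)
    eye_weight = 1.0 - 0.5 * performance        # ranges over [0.5, 1.0]
    emotion_weight = 1.0 - eye_weight
    score = EMOTION_SCORE.get(emotion, 0.5)     # unknown emotions score neutrally
    return emotion_weight * score + eye_weight * eye_closure
```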
With the annotation monitoring device based on expression recognition provided by the embodiments of the present application, expression analysis is performed on the annotation personnel, and the obtained degree of fatigue is used to quantify their annotation effectiveness, so that the annotation strategy to be executed can be determined in a targeted manner. Intelligent control of the annotation mechanism is realized, human intervention is reduced, a large amount of human resources and time are saved, the annotation quality of the personnel is effectively improved, and a better annotation effect is guaranteed.
Referring to Fig. 6, an embodiment of the present application further provides an electronic device 100, comprising a processor 40, a memory 41, a bus 42, and a communication interface 43; the processor 40, the communication interface 43, and the memory 41 are connected via the bus 42. The processor 40 is configured to execute executable modules, such as computer programs, stored in the memory 41.
The memory 41 may include a high-speed random access memory (RAM, Random Access Memory) and may further include a non-volatile memory (non-volatile memory), for example at least one magnetic disk storage. The communication connection between this system network element and at least one other network element is realized through at least one communication interface 43 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, or the like.
The bus 42 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one double-headed arrow is shown in Fig. 6, but this does not mean that there is only one bus or one type of bus.
The memory 41 is configured to store a program, and the processor 40 executes the program after receiving an execution instruction. The method performed by the device defined by the flow process disclosed in any of the foregoing embodiments of the present application may be applied to, or implemented by, the processor 40.
The processor 40 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit in hardware or by instructions in software form within the processor 40. The processor 40 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), or the like; it may also be a digital signal processor (Digital Signal Processing, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic diagrams disclosed in the embodiments of the present application may be implemented or executed by it. The general-purpose processor may be a microprocessor, or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 41, and the processor 40 reads the information in the memory 41 and completes the steps of the above method in combination with its hardware.
The annotation monitoring device and electronic device based on expression recognition provided by the embodiments of the present application have the same technical features as the annotation monitoring method based on expression recognition provided by the above embodiments, and can therefore solve the same technical problems and achieve the same technical effects.
The computer program product of the annotation monitoring method based on expression recognition provided by the embodiments of the present application includes a computer-readable storage medium storing non-volatile program code executable by a processor. The instructions included in the program code can be used to execute the methods described in the foregoing embodiments; for specific implementations, refer to the method embodiments, which are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the device and electronic device described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
The flowcharts and block diagrams in the drawings show the possible architectures, functions, and operations of the methods and computer program products according to multiple embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that shown in the drawings. For example, two consecutive boxes may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes therein, may be implemented by a dedicated hardware-based system that performs the specified function or action, or by a combination of dedicated hardware and computer instructions.
In addition, the terms "first", "second", and "third" are used for description purposes only and should not be understood as indicating or implying relative importance. Unless otherwise specified, the relative arrangement of the components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are realized in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present application in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
Finally, it should be noted that the embodiments described above are only specific embodiments of the present application, intended to illustrate rather than limit its technical solution, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field can still, within the technical scope disclosed in the present application, modify the technical solutions recorded in the foregoing embodiments, or readily conceive of variations, or make equivalent replacements of some of the technical features; such modifications, variations, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be covered within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. An annotation monitoring method based on expression recognition, characterized by comprising:
obtaining a facial image to be processed;
performing expression recognition on the facial image, and determining a degree of fatigue corresponding to the facial image; and
executing a corresponding annotation strategy according to the degree of fatigue.
2. The method according to claim 1, characterized in that executing the corresponding annotation strategy according to the degree of fatigue comprises:
calculating an average value of the degrees of fatigue corresponding to all facial images obtained within a preset duration; and
executing the corresponding annotation strategy according to the average value.
3. The method according to claim 1, characterized in that executing the corresponding annotation strategy according to the degree of fatigue comprises:
when the degree of fatigue is greater than or equal to a first preset threshold and less than a second preset threshold, reducing the difficulty of the annotation task according to difficulty coefficients of objects to be annotated; or
when the degree of fatigue is greater than or equal to the second preset threshold and less than a third preset threshold, generating fatigue prompt information; or
when the degree of fatigue is greater than or equal to the third preset threshold, terminating the current annotation task.
4. The method according to claim 3, characterized in that, before reducing the difficulty of the annotation task according to the difficulty coefficients of the objects to be annotated, the method further comprises:
obtaining a marking type of an object to be annotated;
performing predictive recognition on the object to be annotated based on a pre-established neural network model, to obtain a confidence level corresponding to a prediction result; and
determining the difficulty coefficient of the object to be annotated according to the confidence level and the marking type.
5. The method according to claim 3, characterized in that the fatigue prompt information comprises one or more of a text prompt, a picture prompt, a voice prompt, and a vibration prompt.
6. The method according to claim 1, characterized in that performing expression recognition on the facial image and determining the degree of fatigue corresponding to the facial image comprises:
performing emotion recognition on the facial image, and determining an emotion type corresponding to the facial image;
performing key point extraction on the facial image, and determining an eye closure degree in the facial image; and
determining the degree of fatigue corresponding to the facial image according to the emotion type and the eye closure degree.
7. The method according to claim 6, characterized in that determining the degree of fatigue corresponding to the facial image according to the emotion type and the eye closure degree comprises:
obtaining an annotation rate and an annotation accuracy corresponding to the annotation personnel;
determining weights corresponding to the emotion type and the eye closure degree according to the annotation rate and the annotation accuracy; and
determining the degree of fatigue corresponding to the facial image according to the emotion type, the eye closure degree, and the corresponding weights.
8. An annotation monitoring device based on expression recognition, characterized by comprising:
an image acquisition module, configured to obtain a facial image to be processed;
a fatigue determining module, configured to perform expression recognition on the facial image and determine a degree of fatigue corresponding to the facial image; and
a policy execution module, configured to execute a corresponding annotation strategy according to the degree of fatigue.
9. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the method according to any one of claims 1 to 7.
10. A computer-readable medium having processor-executable non-volatile program code, characterized in that the program code causes the processor to execute the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811021970.4A CN109298783B (en) | 2018-09-03 | 2018-09-03 | Mark monitoring method and device based on expression recognition and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109298783A true CN109298783A (en) | 2019-02-01 |
CN109298783B CN109298783B (en) | 2021-10-01 |
Family
ID=65166189
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811021970.4A Active CN109298783B (en) | 2018-09-03 | 2018-09-03 | Mark monitoring method and device based on expression recognition and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109298783B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102881285A (en) * | 2011-07-15 | 2013-01-16 | 富士通株式会社 | Method for marking rhythm and special marking equipment |
CN103318023A (en) * | 2013-06-21 | 2013-09-25 | 同济大学 | Vehicle-mounted real-time intelligent fatigue monitoring and auxiliary device |
CN105844252A (en) * | 2016-04-01 | 2016-08-10 | 南昌大学 | Face key part fatigue detection method |
CN106781282A (en) * | 2016-12-29 | 2017-05-31 | 天津中科智能识别产业技术研究院有限公司 | A kind of intelligent travelling crane driver fatigue early warning system |
CN107358646A (en) * | 2017-06-20 | 2017-11-17 | 安徽工程大学 | A kind of fatigue detecting system and method based on machine vision |
WO2018074371A1 (en) * | 2016-10-21 | 2018-04-26 | シチズン時計株式会社 | Detection device |
Non-Patent Citations (1)
Title |
---|
汪亭亭等 (WANG Tingting et al.): "基于面部表情识别的学习疲劳识别和干预方法" [Learning fatigue recognition and intervention method based on facial expression recognition], 《计算机工程与设计》 [Computer Engineering and Design] *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109965851A (en) * | 2019-04-18 | 2019-07-05 | 郑州工业应用技术学院 | A kind of human motion fatigue detection system based on multi-physiological-parameter |
CN111598002A (en) * | 2020-05-18 | 2020-08-28 | 北京乐元素文化发展有限公司 | Multi-facial expression capturing method and device, electronic equipment and computer storage medium |
CN111598002B (en) * | 2020-05-18 | 2023-04-07 | 北京星律动科技有限公司 | Multi-facial expression capturing method and device, electronic equipment and computer storage medium |
CN112101823A (en) * | 2020-11-03 | 2020-12-18 | 四川大汇大数据服务有限公司 | Multidimensional emotion recognition management method, system, processor, terminal and medium |
Also Published As
Publication number | Publication date |
---|---|
CN109298783B (en) | 2021-10-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI731297B (en) | Risk prediction method and apparatus, storage medium, and server | |
CN106682602B (en) | Driver behavior identification method and terminal | |
CN109035246B (en) | Face image selection method and device | |
US10991141B2 (en) | Automatic creation of a group shot image from a short video clip using intelligent select and merge | |
CN110659646A (en) | Automatic multitask certificate image processing method, device, equipment and readable storage medium | |
CN105095827B (en) | Facial expression recognition device and method | |
US9633044B2 (en) | Apparatus and method for recognizing image, and method for generating morphable face images from original image | |
US9613296B1 (en) | Selecting a set of exemplar images for use in an automated image object recognition system | |
CN109298783A (en) | Mark monitoring method, device and electronic equipment based on Expression Recognition | |
CN108205685A (en) | Video classification methods, visual classification device and electronic equipment | |
CN111126347B (en) | Human eye state identification method, device, terminal and readable storage medium | |
CN110532925B (en) | Driver fatigue detection method based on space-time graph convolutional network | |
CN107590460B (en) | Face classification method, apparatus and intelligent terminal | |
CN111008971B (en) | Aesthetic quality evaluation method of group photo image and real-time shooting guidance system | |
CN112200218B (en) | Model training method and device and electronic equipment | |
CN109670457A (en) | A kind of driver status recognition methods and device | |
CN112712068B (en) | Key point detection method and device, electronic equipment and storage medium | |
CN110288085A (en) | A kind of data processing method, device, system and storage medium | |
CN110096617A (en) | Video classification methods, device, electronic equipment and computer readable storage medium | |
CN107832721A (en) | Method and apparatus for output information | |
CN110427810A (en) | Video damage identification method, device, shooting end and machine readable storage medium | |
KR101939772B1 (en) | Method and apparatus for inferring facial emotion recognition, system for inferring facial emotion, and media for recording computer program | |
CN111401343A (en) | Method for identifying attributes of people in image and training method and device for identification model | |
CN114639152A (en) | Multi-modal voice interaction method, device, equipment and medium based on face recognition | |
CN110490056A (en) | The method and apparatus that image comprising formula is handled |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
PE01 | Entry into force of the registration of the contract for pledge of patent right | ||
Denomination of invention: Annotation monitoring methods, devices, and electronic devices based on facial expression recognition Effective date of registration: 20230404 Granted publication date: 20211001 Pledgee: Shanghai Yunxin Venture Capital Co.,Ltd. Pledgor: BEIJING KUANGSHI TECHNOLOGY Co.,Ltd. Registration number: Y2023990000193 |