CN109446880A - Intelligent subscriber participation evaluation method, device, intelligent elevated table and storage medium - Google Patents
- Publication number
- CN109446880A CN109446880A CN201811029580.1A CN201811029580A CN109446880A CN 109446880 A CN109446880 A CN 109446880A CN 201811029580 A CN201811029580 A CN 201811029580A CN 109446880 A CN109446880 A CN 109446880A
- Authority
- CN
- China
- Prior art keywords
- image
- human face
- participation
- facial image
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06393—Score-carding, benchmarking or key performance indicator [KPI] analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The embodiments of the present application disclose a user participation evaluation method and device, an intelligent lifting desk, and a storage medium. The method constructs a training image set; performs feature extraction and feature decomposition on the images in the set to obtain expression feature models for the brow region, mouth region, and eye region; acquires a face image through an image capture module mounted on the intelligent lifting desk; applies illumination compensation to the face image; detects the face image and locates the face region within it; obtains a gray-level skeleton model of the calibration points of the brow, mouth, and eye regions of the face region; matches this gray-level skeleton model against the expression feature models of the corresponding regions in the training image set to determine the user's facial expression state; and judges the user's participation from that expression state. The intelligent lifting desk can thus accurately evaluate user participation during teaching, training, and similar activities.
Description
Technical field
The embodiments of the present application relate to the field of office furniture, and in particular to a user participation evaluation method and device, an intelligent lifting desk, and a storage medium.
Background technique
Desks and classroom tables are important equipment widely used in schools and office spaces such as classrooms, reception areas, meeting rooms, and offices. Traditional desks and classroom tables provide only simple office or study functions, such as drawer storage and space for computers and other office equipment, and have a low degree of intelligence. For students, the observation and analysis of learning behavior in conventional teaching still rest largely on manual methods such as questionnaires and case studies; the resulting evaluations are subjective, require statistical analysis over long periods, and cannot provide the real-time feedback needed to improve teaching. If teachers could understand classroom participation in real time, including students' interaction, attention, and emotional state, they could grasp students' learning state promptly, adjust the teaching process, and thereby improve teaching effectiveness.
For teachers, traditional teaching quality assessment generally takes the form of student evaluation, peer review, or expert inspection. In student evaluation, marking based on personal feelings is very common: teachers who manage classes loosely and demand little of students, but whose examinations are easy to pass with good scores, tend to receive high marks, while teachers who are strict may, in order to obtain good evaluations, even curry favor with students and inflate their grades. In expert review, the evaluators often judge an entire course from the performance of a single class in order to lighten their own workload, and tend toward a mentality of offending no one. School leaders combine the roles of manager, instructor, and researcher; most of their time and energy goes to their own teaching, research, and administrative work, so they are unable to give teaching evaluation the attention it needs, and it often becomes a mere formality.
The above prior art cannot, in combination with an intelligent lifting desk, realize intelligent evaluation of user participation during teaching, training, and similar activities.
Summary of the invention
The present application provides a user participation evaluation method and device, an intelligent lifting desk, and a storage medium, to solve the problem described in the Background section above.
In a first aspect, an embodiment of the present application provides a user participation evaluation method, comprising:
constructing a training image set, in which, during a pre-processing stage, every facial expression image in the set is divided into a brow region, a mouth region, and an eye region;
performing feature extraction and feature decomposition on the images in the training image set to obtain expression feature models for the brow region, mouth region, and eye region;
acquiring a face image through an image capture module mounted on the intelligent lifting desk;
applying illumination compensation to the face image;
detecting the face image and locating the face region within it;
obtaining a gray-level skeleton model of the calibration points of the brow region, mouth region, and eye region of the face region;
matching the gray-level skeleton model against the expression feature models of the brow, mouth, and eye regions of the training image set to determine the user's facial expression state;
judging the user's participation from the facial expression state.
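The steps above can be sketched end to end as follows. This is a toy, hypothetical reconstruction: the helper names, the per-region "feature" (mean gray level), and the nearest-template matching rule are our own illustrative assumptions, not the patent's actual models.

```python
# Toy end-to-end sketch of the claimed pipeline. All helper names and the
# mean-gray "features" are illustrative assumptions, not the patent's models.

def illumination_compensate(frame):
    # Stretch gray values to the full 0..255 range (placeholder compensation).
    lo = min(min(row) for row in frame)
    hi = max(max(row) for row in frame)
    span = max(hi - lo, 1)
    return [[(v - lo) * 255 // span for v in row] for row in frame]

def region_feature(frame, rows, cols):
    # Toy per-region feature: mean gray level over a rectangular area.
    vals = [frame[r][c] for r in rows for c in cols]
    return sum(vals) / len(vals)

def match_expression(features, templates):
    # Nearest expression model by summed absolute distance over the
    # (brow, mouth, eye) feature tuple.
    def dist(name):
        return sum(abs(a - b) for a, b in zip(features, templates[name]))
    return min(templates, key=dist)
```

For example, with a feature tuple of (10, 40, 20) and templates {"happy": (10, 42, 20), "calm": (60, 60, 60)}, `match_expression` picks "happy" as the closest expression model.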
Optionally, performing feature extraction and feature decomposition on the images in the training image set to obtain the expression feature models for the brow, mouth, and eye regions specifically comprises:
performing feature extraction and feature decomposition on the images in the training image set, and obtaining the expression feature models for the brow region, mouth region, and eye region through a fitting operation based on a statistical shape model and a local texture skeleton model.
Optionally, acquiring a face image through the image capture module mounted on the intelligent lifting desk specifically comprises: when a meeting, training session, or class begins, an administrator remotely controls the camera hidden in the intelligent lifting desk to pop up automatically and start, acquiring face images in real time.
Optionally, applying illumination compensation to the face image specifically comprises: representing the gray value of each pixel of the face image as a floating-point number, dividing the dynamic range of the gray scale into several segments, and thereby completing the illumination compensation of the face image.
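One plausible reading of this step, sketched below: treat pixel grays as floats, split the observed dynamic range into equal-count segments at the quantiles, and map each segment onto an equal share of [0, 255]. The quantile-based choice of segment boundaries is our assumption; the patent does not spell out how the segments are chosen.

```python
def compensate_gray(pixels, segments=4):
    # Represent grays as floats, split the observed range into equal-count
    # (quantile) segments, and map each segment onto an equal share of
    # [0, 255]. Segment choice is an assumption, not the patent's spec.
    ordered = sorted(pixels)
    n = len(ordered)
    bounds = [ordered[min(i * n // segments, n - 1)] for i in range(segments)]
    bounds.append(ordered[-1])
    out = []
    for v in pixels:
        # Find the segment v falls in, then interpolate within it.
        s = max(i for i in range(segments) if v >= bounds[i])
        lo, hi = bounds[s], bounds[s + 1]
        frac = (v - lo) / (hi - lo) if hi > lo else 0.0
        out.append(255.0 * (s + frac) / segments)
    return out
```

A skewed input such as [0, 10, 20, 250] gets its crowded low end spread over half the output range, which is the kind of lighting-bias correction the step appears to aim at.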
Optionally, judging the user's participation from the facial expression state specifically comprises: determining in advance, through testing, the degree of user participation corresponding to each expression; dividing the degree of participation into several levels; determining the level of the user's participation from the recognized facial expression state; and feeding the result back to a server for storage.
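The grading described here might look like the sketch below. The expression-to-score table and the three level names are invented for illustration, and the server feedback is stubbed out as a returned record.

```python
# Hypothetical grading step: the scores and level names are assumed values,
# not taken from the patent.
PARTICIPATION_SCORE = {"happy": 0.9, "calm": 0.6, "sad": 0.3}

def participation_level(expression, levels=("low", "medium", "high")):
    score = PARTICIPATION_SCORE.get(expression, 0.0)
    # Bucket the score into len(levels) equal-width grades.
    idx = min(int(score * len(levels)), len(levels) - 1)
    return levels[idx]

def report(expression):
    # Stub for "feed the result back to the server for storage".
    return {"expression": expression, "level": participation_level(expression)}
```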
In a second aspect, an embodiment of the present application further provides a user participation evaluation device, comprising:
a training image set construction module, configured to construct a training image set in which, during a pre-processing stage, every facial expression image is divided into a brow region, a mouth region, and an eye region;
an expression feature model construction module, configured to perform feature extraction and feature decomposition on the images in the training image set and obtain expression feature models for the brow, mouth, and eye regions;
a face image acquisition module, configured to acquire a face image through an image capture module mounted on the intelligent lifting desk;
an illumination compensation module, configured to apply illumination compensation to the face image;
a face region locating module, configured to detect the face image and locate the face region within it;
a gray-level skeleton model acquisition module, configured to obtain a gray-level skeleton model of the calibration points of the brow, mouth, and eye regions of the face region;
a matching module, configured to match the gray-level skeleton model against the expression feature models of the brow, mouth, and eye regions of the training image set and determine the user's facial expression state;
a result feedback module, configured to judge the user's participation from the facial expression state.
Optionally, the expression feature model construction module is specifically configured to: perform feature extraction and feature decomposition on the images in the training image set, and obtain the expression feature models for the brow region, mouth region, and eye region through a fitting operation based on a statistical shape model and a local texture skeleton model.
Optionally, the face image acquisition module is specifically configured to: when a meeting, training session, or class begins, allow an administrator to remotely control the camera hidden in the intelligent lifting desk to pop up automatically and start, acquiring face images in real time.
In a third aspect, an embodiment of the present application further provides an intelligent lifting desk, comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the user participation evaluation method described in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application further provides a storage medium containing instructions executable by an intelligent lifting desk, the instructions, when executed by a processor of the intelligent lifting desk, performing the user participation evaluation method described in the embodiments of the present application.
In the present solution, a training image set is constructed, with every facial expression image divided during pre-processing into a brow region, a mouth region, and an eye region; feature extraction and feature decomposition are performed on the images in the set to obtain expression feature models for these regions; a face image is acquired by an image capture module mounted on the intelligent lifting desk; illumination compensation is applied to the face image; the face image is detected and the face region located; a gray-level skeleton model of the calibration points of the brow, mouth, and eye regions of the face region is obtained; this gray-level skeleton model is matched against the expression feature models of the corresponding regions of the training image set to determine the user's facial expression state; and the user's participation is judged from that state, so that the intelligent lifting desk accurately evaluates user participation during teaching, training, and similar activities. The solution can also apply illumination compensation according to ambient conditions, solving the problem of facial expression recognition being disturbed by lighting intensity and direction.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is a flowchart of a user participation evaluation method provided by an embodiment of the present application;
Fig. 2 is a flowchart of another user participation evaluation method provided by an embodiment of the present application;
Fig. 3 is a flowchart of another user participation evaluation method provided by an embodiment of the present application;
Fig. 4 is a flowchart of another user participation evaluation method provided by an embodiment of the present application;
Fig. 5 is a structural block diagram of a user participation evaluation device provided by an embodiment of the present application.
Specific embodiment
The application is described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described here serve to explain the application, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the application rather than the entire structure.
Fig. 1 is a flowchart of a user participation evaluation method provided by an embodiment of the present application. The method is applicable to monitoring human sitting posture through an intelligent lifting desk and can be executed by the intelligent lifting desk provided by the embodiments of the present application; the desk's user participation evaluation device may be implemented in software and/or hardware. As shown in Fig. 1, the specific scheme provided by this embodiment is as follows:
S101: Construct a training image set; during a pre-processing stage, every facial expression image in the set is divided into a brow region, a mouth region, and an eye region.
In this embodiment, a happy expression is characterized by upturned mouth corners, downward-curving brow corners, and a highlighted gradient component running from the mouth corners to the wings of the nose; a surprised expression by an open mouth with drawn-in corners; a sad expression by pulled-down mouth corners and a tightened chin; and a calm expression by mouth corners that barely move up or down and a relaxed chin. The mouth, eyes, brows, and cheekbones serve as feature points for tracking, and the contour of the model is obtained from the deformation of the underlying facial features. When the training image set is annotated, the first frame of a given image sequence is marked automatically, which is realized by obtaining potential facial feature points from the local extrema of the illumination gradients.
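The idea of taking feature-point candidates from local extrema of the illumination gradient can be illustrated in one dimension: along a scan line, candidates are the indices where the gray-level gradient is locally maximal or minimal, as at the edges of brows or lips. The 1-D simplification below is our own.

```python
def gradient_extrema(row):
    # Candidate feature points: local maxima/minima of the forward-difference
    # gray-level gradient along one image row. A 1-D illustrative sketch.
    grad = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    peaks = []
    for i in range(1, len(grad) - 1):
        if (grad[i] > grad[i - 1] and grad[i] > grad[i + 1]) or \
           (grad[i] < grad[i - 1] and grad[i] < grad[i + 1]):
            peaks.append(i)
    return peaks
```

A bright spike in an otherwise flat row yields two candidates, one at each edge of the spike; a smoothly ramping row yields none.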
S102: Perform feature extraction and feature decomposition on the images in the training image set to obtain expression feature models for the brow region, mouth region, and eye region.
S103: Acquire a face image through the image capture module mounted on the intelligent lifting desk.
S104: Apply illumination compensation to the face image.
S105: Detect the face image and locate the face region within it.
In this embodiment, the face image processing flow is as follows: face image normalization, face image gray-scaling, face image binarization, locating the positions of the eyes, mouth, and brows by integral projection, and locating the face region according to the triangulation principle. Face detection here means distinguishing face samples from non-face samples: a classifier is generated by learning from a face sample set and a non-face sample set, and fixed and deformable templates are established. A fixed template computes a measure between the test sample and a reference template, and a threshold decides whether the test sample is a face. A deformable template contains non-fixed elements, adds a penalty mechanism, and constitutes the face template with parameterized or adaptive curves and surfaces.
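Integral projection, as used above to locate the eyes, mouth, and brows, can be sketched on a binarized image: summing each row gives a vertical profile whose peaks mark the dark horizontal bands (brows, eyes, mouth). The helpers below are an illustrative simplification, not the patent's implementation.

```python
def row_projection(binary):
    # Horizontal integral projection: count of dark (1) pixels per row.
    return [sum(row) for row in binary]

def feature_rows(binary, k=3):
    # Rows with the k strongest projection peaks, in top-to-bottom order;
    # on a frontal face these tend to be the brow, eye, and mouth bands.
    proj = row_projection(binary)
    top = sorted(range(len(proj)), key=lambda i: -proj[i])[:k]
    return sorted(top)
```

The same column-wise projection would locate features horizontally, and intersecting the two gives the rough eye/mouth/brow coordinates that the triangulation step then refines.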
S106: Obtain a gray-level skeleton model of the calibration points of the brow region, mouth region, and eye region of the face region.
S107: Match the gray-level skeleton model against the expression feature models of the brow, mouth, and eye regions of the training image set, and determine the user's facial expression state.
S108: Judge the user's participation from the facial expression state.
In this embodiment, the degree of user participation corresponding to each expression is determined in advance through testing and divided into several levels; the level of the user's participation is then determined from the recognized facial expression state, and the result is fed back to a server for storage.
Fig. 2 is a flowchart of another user participation evaluation method provided by an embodiment of the present application. The method is applicable to monitoring human sitting posture through an intelligent lifting desk and can be executed by the intelligent lifting desk provided by the embodiments of the present application; the desk's user participation evaluation device may be implemented in software and/or hardware. As shown in Fig. 2, the specific scheme provided by this embodiment is as follows:
S201: Construct a training image set; during a pre-processing stage, every facial expression image in the set is divided into a brow region, a mouth region, and an eye region.
In this embodiment, a happy expression is characterized by upturned mouth corners, downward-curving brow corners, and a highlighted gradient component running from the mouth corners to the wings of the nose; a surprised expression by an open mouth with drawn-in corners; a sad expression by pulled-down mouth corners and a tightened chin; and a calm expression by mouth corners that barely move up or down and a relaxed chin. The mouth, eyes, brows, and cheekbones serve as feature points for tracking, and the contour of the model is obtained from the deformation of the underlying facial features. When the training image set is annotated, the first frame of a given image sequence is marked automatically, which is realized by obtaining potential facial feature points from the local extrema of the illumination gradients.
S202: Perform feature extraction and feature decomposition on the images in the training image set, and obtain expression feature models for the brow region, mouth region, and eye region through a fitting operation based on a statistical shape model and a local texture skeleton model.
S203: Acquire a face image through the image capture module mounted on the intelligent lifting desk.
S204: Apply illumination compensation to the face image.
S205: Detect the face image and locate the face region within it.
In this embodiment, the face image processing flow is as follows: face image normalization, face image gray-scaling, face image binarization, locating the positions of the eyes, mouth, and brows by integral projection, and locating the face region according to the triangulation principle. Face detection here means distinguishing face samples from non-face samples: a classifier is generated by learning from a face sample set and a non-face sample set, and fixed and deformable templates are established. A fixed template computes a measure between the test sample and a reference template, and a threshold decides whether the test sample is a face. A deformable template contains non-fixed elements, adds a penalty mechanism, and constitutes the face template with parameterized or adaptive curves and surfaces.
S206: Obtain a gray-level skeleton model of the calibration points of the brow region, mouth region, and eye region of the face region.
S207: Match the gray-level skeleton model against the expression feature models of the brow, mouth, and eye regions of the training image set, and determine the user's facial expression state.
S208: Judge the user's participation from the facial expression state.
In this embodiment, the degree of user participation corresponding to each expression is determined in advance through testing and divided into several levels; the level of the user's participation is then determined from the recognized facial expression state, and the result is fed back to a server for storage.
Fig. 3 is a flowchart of another user participation evaluation method provided by an embodiment of the present application. The method is applicable to monitoring human sitting posture through an intelligent lifting desk and can be executed by the intelligent lifting desk provided by the embodiments of the present application; the desk's user participation evaluation device may be implemented in software and/or hardware. As shown in Fig. 3, the specific scheme provided by this embodiment is as follows:
S301: Construct a training image set; during a pre-processing stage, every facial expression image in the set is divided into a brow region, a mouth region, and an eye region.
In this embodiment, a happy expression is characterized by upturned mouth corners, downward-curving brow corners, and a highlighted gradient component running from the mouth corners to the wings of the nose; a surprised expression by an open mouth with drawn-in corners; a sad expression by pulled-down mouth corners and a tightened chin; and a calm expression by mouth corners that barely move up or down and a relaxed chin. The mouth, eyes, brows, and cheekbones serve as feature points for tracking, and the contour of the model is obtained from the deformation of the underlying facial features. When the training image set is annotated, the first frame of a given image sequence is marked automatically, which is realized by obtaining potential facial feature points from the local extrema of the illumination gradients.
S302: Perform feature extraction and feature decomposition on the images in the training image set, and obtain expression feature models for the brow region, mouth region, and eye region through a fitting operation based on a statistical shape model and a local texture skeleton model.
S303: When a meeting, training session, or class begins, an administrator remotely controls the camera hidden in the intelligent lifting desk to pop up automatically and start, acquiring face images in real time.
S304: Apply illumination compensation to the face image.
S305: Detect the face image and locate the face region within it.
In this embodiment, the face image processing flow is as follows: face image normalization, face image gray-scaling, face image binarization, locating the positions of the eyes, mouth, and brows by integral projection, and locating the face region according to the triangulation principle. Face detection here means distinguishing face samples from non-face samples: a classifier is generated by learning from a face sample set and a non-face sample set, and fixed and deformable templates are established. A fixed template computes a measure between the test sample and a reference template, and a threshold decides whether the test sample is a face. A deformable template contains non-fixed elements, adds a penalty mechanism, and constitutes the face template with parameterized or adaptive curves and surfaces.
S306: Obtain a gray-level skeleton model of the calibration points of the brow region, mouth region, and eye region of the face region.
S307: Match the gray-level skeleton model against the expression feature models of the brow, mouth, and eye regions of the training image set, and determine the user's facial expression state.
S308: Judge the user's participation from the facial expression state.
In this embodiment, the degree of user participation corresponding to each expression is determined in advance through testing and divided into several levels; the level of the user's participation is then determined from the recognized facial expression state, and the result is fed back to a server for storage.
Fig. 4 is a flowchart of another user participation evaluation method provided by an embodiment of the present application. The method is applicable to monitoring human sitting posture through an intelligent lifting desk and can be executed by the intelligent lifting desk provided by the embodiments of the present application; the desk's user participation evaluation device may be implemented in software and/or hardware. As shown in Fig. 4, the specific scheme provided by this embodiment is as follows:
S401: Construct a training image set; during a pre-processing stage, every facial expression image in the set is divided into a brow region, a mouth region, and an eye region.
In this embodiment, a happy expression is characterized by upturned mouth corners, downward-curving brow corners, and a highlighted gradient component running from the mouth corners to the wings of the nose; a surprised expression by an open mouth with drawn-in corners; a sad expression by pulled-down mouth corners and a tightened chin; and a calm expression by mouth corners that barely move up or down and a relaxed chin. The mouth, eyes, brows, and cheekbones serve as feature points for tracking, and the contour of the model is obtained from the deformation of the underlying facial features. When the training image set is annotated, the first frame of a given image sequence is marked automatically, which is realized by obtaining potential facial feature points from the local extrema of the illumination gradients.
S402: Perform feature extraction and feature decomposition on the images in the training image set, and obtain expression feature models for the brow region, mouth region, and eye region through a fitting operation based on a statistical shape model and a local texture skeleton model.
S403: When a meeting, training session, or class begins, an administrator remotely controls the camera hidden in the intelligent lifting desk to pop up automatically and start, acquiring face images in real time.
S404: Represent the gray value of each pixel of the face image as a floating-point number, divide the dynamic range of the gray scale into several segments, and complete the illumination compensation of the face image.
S405: Detect the face image and locate the face region within it.
In this embodiment, the face image processing flow is as follows: face image normalization, face image gray-scaling, face image binarization, locating the positions of the eyes, mouth, and brows by integral projection, and locating the face region according to the triangulation principle. Face detection here means distinguishing face samples from non-face samples: a classifier is generated by learning from a face sample set and a non-face sample set, and fixed and deformable templates are established. A fixed template computes a measure between the test sample and a reference template, and a threshold decides whether the test sample is a face. A deformable template contains non-fixed elements, adds a penalty mechanism, and constitutes the face template with parameterized or adaptive curves and surfaces.
S406, obtain human face region mesophryon angular zone, mouth region and eye areas calibration point gray level skeleton model.
It is S407, the gray level skeleton model is corresponding with training image collection mesophryon angular zone, mouth region and eye areas
Expressive features model matched, determine the human face expression state of user.
S408: evaluate the user's participation according to the facial-expression state.
In this embodiment, the participation score corresponding to each expression is calibrated in advance, and user participation is divided into several grades; the grade of the user's participation is determined from the recognized facial-expression state, and the result is fed back to the server for storage.
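The grading in S408 — pre-calibrated participation scores per expression, divided into several grades — can be sketched as a lookup followed by thresholding. The score table and grade boundaries below are illustrative assumptions, not values from the patent:

```python
def participation_level(expression, table=None):
    """Map a recognized facial-expression state to a participation grade.

    The expression-to-score table and the grade boundaries are
    illustrative; the patent only states that scores per expression are
    calibrated in advance and participation is divided into several
    grades before the result is reported to the server.
    """
    table = table or {"focused": 0.9, "smiling": 0.8,
                      "neutral": 0.5, "yawning": 0.2}
    score = table.get(expression, 0.5)  # unknown states default to middle
    if score >= 0.75:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"
```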
Fig. 5 is a structural block diagram of an intelligent user-participation evaluation apparatus provided by an embodiment of the present application. The apparatus is configured to execute the intelligent user-participation evaluation method of the above embodiments and has the corresponding functional modules and beneficial effects. As shown in Fig. 5, the apparatus specifically includes:
a training image set construction module 501, configured to construct a training image set, each facial expression in the set being divided in a preprocessing stage into an eyebrow-corner region, a mouth region, and an eye region;
an expression feature model construction module 502, configured to perform feature extraction and feature decomposition on the images in the training image set and obtain the expression feature models corresponding to the eyebrow-corner region, the mouth region, and the eye region;
a facial image acquisition module 503, configured to acquire facial images through an image acquisition module arranged on the intelligent elevated table;
an illumination compensation module 504, configured to perform illumination compensation on the facial image;
a face region locating module 505, configured to detect the facial image and locate the face region within it;
a gray-level contour model acquisition module 506, configured to obtain the gray-level contour models of the calibration points of the eyebrow-corner region, the mouth region, and the eye region within the face region;
a matching module 507, configured to match the gray-level contour models against the expression feature models corresponding to the eyebrow-corner region, the mouth region, and the eye region of the training image set and determine the facial-expression state of the user; and
a result feedback module 508, configured to evaluate the user's participation according to the facial-expression state.
In this embodiment the expression feature model construction module 502 is specifically configured to: perform feature extraction and feature decomposition on the images in the training image set, and obtain the expression feature models corresponding to the eyebrow-corner region, the mouth region, and the eye region through fitting operations on a statistical shape model and a local texture contour model. The facial image acquisition module 503 is specifically configured to: when a meeting, training session, or class begins, let the administrator remotely trigger the camera concealed in the intelligent elevated table to pop up automatically, start the camera, and acquire facial images in real time. The illumination compensation module 504 is specifically configured to: represent the gray values of the facial-image pixels as floating-point numbers, divide the gray-scale dynamic range into several segments, and complete illumination compensation of the facial image.
On the basis of the above embodiments, this embodiment provides an intelligent elevated table. The intelligent elevated table may include a tabletop, table legs, a camera, a processor, a memory, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the intelligent user-participation evaluation method described in the embodiments of the present application. The table legs can extend and retract under the control of a controller to adjust the height of the table. It should be understood that the intelligent elevated table described above is merely one example: an intelligent elevated table may have more or fewer components, may combine two or more components, or may adopt a different component configuration. Its various components may be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing and/or application-specific integrated circuits.
An embodiment of the present application further provides a storage medium containing instructions executable by an intelligent elevated table. When executed by the processor of the intelligent elevated table, the instructions carry out the intelligent user-participation evaluation method described in the embodiments of the present application, the method comprising:
constructing a training image set, each facial expression in the set being divided in a preprocessing stage into an eyebrow-corner region, a mouth region, and an eye region;
performing feature extraction and feature decomposition on the images in the training image set to obtain the expression feature models corresponding to the eyebrow-corner region, the mouth region, and the eye region;
acquiring a facial image through an image acquisition module arranged on the intelligent elevated table;
performing illumination compensation on the facial image;
detecting the facial image and locating the face region within it;
obtaining the gray-level contour models of the calibration points of the eyebrow-corner region, the mouth region, and the eye region within the face region;
matching the gray-level contour models against the expression feature models corresponding to the eyebrow-corner region, the mouth region, and the eye region of the training image set, and determining the facial-expression state of the user; and
evaluating the user's participation according to the facial-expression state.
In this solution, a training image set is constructed, with each facial expression divided in a preprocessing stage into an eyebrow-corner region, a mouth region, and an eye region; feature extraction and feature decomposition are performed on the images in the training image set to obtain the corresponding expression feature models; a facial image is acquired by an image acquisition module arranged on the intelligent elevated table; illumination compensation is performed on the facial image; the facial image is detected and the face region is located within it; the gray-level contour models of the calibration points of the three regions are obtained and matched against the corresponding expression feature models to determine the facial-expression state of the user; and the user's participation is evaluated according to that state. The solution thereby achieves an accurate evaluation of user participation during teaching, training, and similar activities at an intelligent elevated table. It can also compensate for illumination according to ambient conditions, solving the problem of facial-expression recognition being disturbed by illumination intensity and direction.
Note that the above are only preferred embodiments and applied technical principles of the present application. Those skilled in the art will understand that the present application is not limited to the specific embodiments described here; various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present application. Therefore, although the present application has been described in further detail through the above embodiments, it is not limited to them and may include many other equivalent embodiments without departing from the concept of the application, its scope being determined by the scope of the appended claims.
Claims (10)
1. An intelligent user-participation evaluation method, characterized in that the method comprises:
constructing a training image set, each facial expression in the set being divided in a preprocessing stage into an eyebrow-corner region, a mouth region, and an eye region;
performing feature extraction and feature decomposition on the images in the training image set to obtain the expression feature models corresponding to the eyebrow-corner region, the mouth region, and the eye region;
acquiring a facial image through an image acquisition module arranged on an intelligent elevated table;
performing illumination compensation on the facial image;
detecting the facial image and locating the face region within it;
obtaining the gray-level contour models of the calibration points of the eyebrow-corner region, the mouth region, and the eye region within the face region;
matching the gray-level contour models against the expression feature models corresponding to the eyebrow-corner region, the mouth region, and the eye region of the training image set, and determining the facial-expression state of the user; and
evaluating the user's participation according to the facial-expression state.
2. The method according to claim 1, characterized in that performing feature extraction and feature decomposition on the images in the training image set to obtain the expression feature models corresponding to the eyebrow-corner region, the mouth region, and the eye region specifically comprises:
performing feature extraction and feature decomposition on the images in the training image set, and obtaining the expression feature models corresponding to the eyebrow-corner region, the mouth region, and the eye region through fitting operations on a statistical shape model and a local texture contour model.
3. The method according to claim 1, characterized in that acquiring a facial image through the image acquisition module arranged on the intelligent elevated table specifically comprises:
when a meeting, training session, or class begins, the administrator remotely triggering the camera concealed in the intelligent elevated table to pop up automatically, starting the camera, and acquiring facial images in real time.
4. The method according to claim 1, characterized in that performing illumination compensation on the facial image specifically comprises:
representing the gray values of the facial-image pixels as floating-point numbers, and dividing the gray-scale dynamic range into several segments to complete illumination compensation of the facial image.
5. The method according to any one of claims 1 to 4, characterized in that evaluating the user's participation according to the facial-expression state specifically comprises:
calibrating in advance the participation score corresponding to each expression and dividing user participation into several grades; determining the grade of the user's participation from the recognized facial-expression state; and feeding the result back to a server for storage.
6. An intelligent user-participation evaluation apparatus, characterized in that the apparatus comprises:
a training image set construction module, configured to construct a training image set, each facial expression in the set being divided in a preprocessing stage into an eyebrow-corner region, a mouth region, and an eye region;
an expression feature model construction module, configured to perform feature extraction and feature decomposition on the images in the training image set and obtain the expression feature models corresponding to the eyebrow-corner region, the mouth region, and the eye region;
a facial image acquisition module, configured to acquire facial images through an image acquisition module arranged on an intelligent elevated table;
an illumination compensation module, configured to perform illumination compensation on the facial image;
a face region locating module, configured to detect the facial image and locate the face region within it;
a gray-level contour model acquisition module, configured to obtain the gray-level contour models of the calibration points of the eyebrow-corner region, the mouth region, and the eye region within the face region;
a matching module, configured to match the gray-level contour models against the expression feature models corresponding to the eyebrow-corner region, the mouth region, and the eye region of the training image set and determine the facial-expression state of the user; and
a result feedback module, configured to evaluate the user's participation according to the facial-expression state.
7. The apparatus according to claim 6, characterized in that the expression feature model construction module is specifically configured to: perform feature extraction and feature decomposition on the images in the training image set, and obtain the expression feature models corresponding to the eyebrow-corner region, the mouth region, and the eye region through fitting operations on a statistical shape model and a local texture contour model.
8. The apparatus according to claim 6 or 7, characterized in that the facial image acquisition module is specifically configured to: when a meeting, training session, or class begins, let the administrator remotely trigger the camera concealed in the intelligent elevated table to pop up automatically, start the camera, and acquire facial images in real time.
9. An intelligent elevated table, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the intelligent user-participation evaluation method according to any one of claims 1 to 5.
10. A storage medium containing instructions executable by an intelligent elevated table, characterized in that the instructions, when executed by a processor of the intelligent elevated table, carry out the intelligent user-participation evaluation method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811029580.1A CN109446880A (en) | 2018-09-05 | 2018-09-05 | Intelligent subscriber participation evaluation method, device, intelligent elevated table and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109446880A true CN109446880A (en) | 2019-03-08 |
Family
ID=65533072
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811029580.1A Pending CN109446880A (en) | 2018-09-05 | 2018-09-05 | Intelligent subscriber participation evaluation method, device, intelligent elevated table and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109446880A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110889366A (en) * | 2019-11-22 | 2020-03-17 | 成都市映潮科技股份有限公司 | Method and system for judging user interest degree based on facial expression |
CN113033514A (en) * | 2021-05-24 | 2021-06-25 | 南京伯索网络科技有限公司 | Classroom student aggressiveness evaluation method based on network |
CN113052064A (en) * | 2021-03-23 | 2021-06-29 | 北京思图场景数据科技服务有限公司 | Attention detection method based on face orientation, facial expression and pupil tracking |
CN113221798A (en) * | 2021-05-24 | 2021-08-06 | 南京伯索网络科技有限公司 | Classroom student aggressiveness evaluation system based on network |
WO2021179706A1 (en) * | 2020-03-13 | 2021-09-16 | 平安科技(深圳)有限公司 | Meeting check-in method and system, computer device, and computer readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015038358A1 (en) * | 2013-09-11 | 2015-03-19 | Digitalglobe, Inc. | Classification of land based on analysis of remotely-sensed earth images |
CN105335691A (en) * | 2014-08-14 | 2016-02-17 | 南京普爱射线影像设备有限公司 | Smiling face identification and encouragement system |
CN106228293A (en) * | 2016-07-18 | 2016-12-14 | 重庆中科云丛科技有限公司 | teaching evaluation method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190308 |