CN110148075A - Learning evaluation method and device based on artificial intelligence - Google Patents
Learning evaluation method and device based on artificial intelligence
- Publication number: CN110148075A
- Application number: CN201910532663.0A
- Authority
- CN
- China
- Prior art keywords
- eye
- region
- iris
- eyeball image
- eye movement
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
Abstract
This application relates to a learning evaluation method and device based on artificial intelligence. The evaluation method includes: establishing a learning evaluation model; obtaining eye-movement recognition information; and matching the eye-movement recognition information against the eye-movement recognition information set values of the learning evaluation model to obtain a corresponding evaluation value. The application can perform eye-movement recognition quickly while also recognizing eye-movement amplitude, providing strong support for applying eye-movement recognition to learning evaluation.
Description
Technical field
This application relates to the field of artificial intelligence, and in particular to a learning evaluation method and device based on artificial intelligence.
Background technique
Learning evaluation is an important link in classroom teaching and learning: it is the feedback and regulation mechanism of the learning system, and it can even be said that the content and methods of learning evaluation largely shape the teaching process. In a traditional educational environment, the main ways of understanding students are classroom behavior observation, examinations, homework analysis, and the like; evaluation is usually exam-based and typically takes place only after a course ends. This approach over-weights outcome evaluation while neglecting process evaluation, and emphasizes group development while ignoring individual development. Although these problems have long been criticized, no effective way of fundamentally solving them has emerged.
At present, many emerging interaction modes are being tried in the field of human-computer interaction, such as motion sensing, eye tracking, voice interaction, and biometric recognition. However, most of these modes see little real use: they have not truly entered widespread commercial application, and no single interaction mode yet lets people communicate with devices effortlessly and at will. Eye movement is a principal means of exploring human attention and cognition: by recording real eyeball movements, it describes a person's visual behavior and reflects cognitive processing and mental activity. Current eye-movement recognition still needs breakthroughs in technical applicability and stability of use.
For learning evaluation, how to integrate eye-movement recognition effectively and efficiently is an urgent problem to be solved.
Summary of the invention
This summary is provided to give a basic understanding of some aspects of the disclosed embodiments. It is not an extensive overview, nor is it intended to identify key or critical elements or to delimit the scope of protection of these embodiments; it serves only as a prelude to the detailed description that follows.
Embodiments of the present disclosure provide a learning evaluation method based on artificial intelligence.
In some embodiments, the method comprises:
establishing a learning evaluation model, the learning evaluation model including eye-movement recognition information set values and corresponding evaluation values;
obtaining eye-movement recognition information;
matching the eye-movement recognition information against the eye-movement recognition information set values of the learning evaluation model to obtain a corresponding evaluation value.
The eye-movement recognition information is obtained as follows:
obtaining at least one pair of eyeball images, each eyeball image including a sclera region and an iris region;
taking the center point of the iris region as a reference point;
establishing a coordinate system in each eyeball image;
calculating the distance between the reference point in each eyeball image and the origin of the coordinate system, as a reference distance;
performing eye-movement recognition according to the reference distance of each eyeball image.
Embodiments of the present disclosure also provide a learning evaluation device based on artificial intelligence.
In some embodiments, the device includes:
an evaluation unit, configured to establish a learning evaluation model, the learning evaluation model including eye-movement recognition information set values and corresponding evaluation values; to obtain eye-movement recognition information through a recognition unit; and to match the eye-movement recognition information against the eye-movement recognition information set values of the learning evaluation model to obtain a corresponding evaluation value;
the recognition unit, configured to obtain at least one pair of eyeball images, each eyeball image including a sclera region and an iris region; to take the center point of the iris region as a reference point; to establish a coordinate system in each eyeball image; to calculate the distance between the reference point in each eyeball image and the origin of the coordinate system, as a reference distance; and to perform eye-movement recognition according to the reference distance of each eyeball image.
Some technical solutions provided by the embodiments of the present disclosure can achieve the following effects: eye-movement recognition can be performed quickly, and eye-movement amplitude can be recognized at the same time, providing strong support for applying eye-movement recognition to learning evaluation.
The above general description and the detailed description below are exemplary and explanatory only and do not limit the application.
Detailed description of the invention
One or more embodiments are illustrated by the corresponding drawings. These illustrations and drawings do not limit the embodiments; elements with the same reference numerals are shown as similar elements, and the drawings are not drawn to scale; wherein:
Fig. 1 is the flow diagram that the embodiment of the present disclosure provides;
Fig. 2 is the structural schematic diagram for the electronic equipment that the embodiment of the present disclosure provides.
Reference numerals:
100: processor;101: memory;102: communication interface;103: bus.
Specific embodiment
To enable a fuller understanding of the characteristics and technical content of the embodiments of the present disclosure, their implementation is described in detail below with reference to the accompanying drawings, which are provided for reference and illustration only and are not intended to limit the embodiments. In the following technical description, numerous details are provided for ease of explanation so that the disclosed embodiments may be fully understood; however, one or more embodiments may still be practiced without these details. In other cases, well-known structures and devices may be shown in simplified form to simplify the drawings.
As shown in Fig. 1, an embodiment of the present disclosure provides a learning evaluation method based on artificial intelligence, comprising:
establishing a learning evaluation model, the learning evaluation model including eye-movement recognition information set values and corresponding evaluation values;
obtaining eye-movement recognition information;
matching the eye-movement recognition information against the eye-movement recognition information set values of the learning evaluation model to obtain a corresponding evaluation value.
The eye-movement recognition information is obtained as follows:
obtaining at least one pair of eyeball images, each eyeball image including a sclera region and an iris region;
taking the center point of the iris region as a reference point;
establishing a coordinate system in each eyeball image;
calculating the distance between the reference point in each eyeball image and the origin of the coordinate system, as a reference distance;
performing eye-movement recognition according to the reference distance of each eyeball image.
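The per-frame computation above (iris-center reference point, per-image coordinate system, distance to the origin) can be sketched minimally as follows. The function names and the tuple representation of points are illustrative assumptions, not part of the disclosure:

```python
import math

def reference_distance(iris_center, origin=(0.0, 0.0)):
    # Distance between the iris-center reference point and the
    # origin of the eyeball image's coordinate system.
    return math.hypot(iris_center[0] - origin[0], iris_center[1] - origin[1])

def eye_movement_signal(iris_centers):
    # One reference distance per frame's eyeball image; eye-movement
    # recognition then operates on this sequence of distances.
    return [reference_distance(c) for c in iris_centers]

print(eye_movement_signal([(3, 4), (6, 8)]))  # [5.0, 10.0]
```

A growing or shrinking sequence of these distances is what the later consistency and threshold checks operate on.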
In a teaching scene, the time point at which a presentation slide switches is acquired, together with a period of time after the switch, for example t minutes after the switch, where t > 0. The picture may of course also come from other equipment or other course content, for example course content played in VR glasses. After the switch time point and the post-switch duration t are obtained, each pair of eyeball images is examined frame by frame to determine whether a corresponding eyeball movement has occurred. The acquired eyeball movement information is matched against the established learning evaluation model to obtain a corresponding evaluation value, which is each student's learning evaluation value for that time period. In this way, students' learning can be evaluated effectively and efficiently, enriching the means of teaching assessment.
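As a hedged sketch of the matching step in this scene: the patent does not specify the internal structure of the learning evaluation model, so the threshold-table form below, the frame representation, and all names are assumptions made purely for illustration:

```python
def evaluate_attention(switch_time, frames, model, window=5.0):
    # frames: (timestamp, eye_movement_detected) pairs, one per video frame.
    # Count detected eye movements within [switch_time, switch_time + window].
    moved = sum(1 for t, has_move in frames
                if switch_time <= t <= switch_time + window and has_move)
    # model: (minimum movement count, evaluation value) pairs sorted by
    # descending threshold -- one possible encoding of the "eye-movement
    # recognition information set values" and their evaluation values.
    for threshold, score in model:
        if moved >= threshold:
            return score
    return 0

model = [(8, 3), (4, 2), (1, 1)]  # hypothetical set values -> evaluation values
frames = [(0.5, True), (1.0, True), (2.0, False), (3.0, True), (9.0, True)]
print(evaluate_attention(0.0, frames, model))  # 3 movements in window -> 1
```

The same lookup could be run per student and per slide to accumulate the per-period evaluation values the description mentions.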
In some embodiments, the reference point is determined as follows: the two points farthest apart on the edge of the iris region are connected to form a straight line; an auxiliary line perpendicular to that straight line and passing through the chord between the two farthest points is found; and the intersection of the auxiliary line with the straight line is taken as the reference point.
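Reading this construction as yielding the midpoint of the longest chord of the iris boundary (the perpendicular auxiliary line meets the chord at its midpoint), a brute-force sketch might look like the following; this geometric interpretation and all names are assumptions:

```python
from itertools import combinations

def reference_point(contour):
    # Farthest pair of points on the iris-region boundary (brute force).
    a, b = max(combinations(contour, 2),
               key=lambda p: (p[0][0] - p[1][0]) ** 2 + (p[0][1] - p[1][1]) ** 2)
    # The perpendicular auxiliary line intersects the chord at its
    # midpoint, which serves as the reference point.
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

print(reference_point([(0, 0), (4, 0), (2, 2), (2, -2)]))  # (2.0, 0.0)
```

For a roughly circular iris boundary this midpoint approximates the iris center named earlier as the reference point.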
In some embodiments, performing eye-movement recognition according to the reference distance of each eyeball image comprises: judging whether the reference distance of each pair of eyeball images decreases or increases. With this scheme, the consistency of the two eyes' movements can be judged. Since many eye-tracking devices, especially VR glasses, mostly perform eye-movement recognition on each eye with a separate recognition device, checking the consistency of both eyes' movements makes it possible to judge more accurately whether the eyeballs have moved, avoiding misjudgments caused by jitter of a single recognition device or other causes.
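A minimal sketch of this two-eye consistency check, assuming each eye's recognition device produces its own sequence of reference distances (names are illustrative):

```python
def movement_consistent(left_dists, right_dists):
    # Frame-to-frame sign of change (-1 shrink, 0 unchanged, +1 grow)
    # must agree between the two eyes before a movement is accepted;
    # a disagreement suggests jitter in one recognition device.
    def signs(d):
        return [(b > a) - (b < a) for a, b in zip(d, d[1:])]
    return signs(left_dists) == signs(right_dists)

print(movement_consistent([5.0, 6.0, 6.5], [4.8, 5.9, 6.3]))  # True
print(movement_consistent([5.0, 6.0], [5.0, 4.0]))            # False
```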
In some embodiments, performing eye-movement recognition according to the reference distance of each eyeball image comprises: judging whether the change in the reference distance of each pair of eyeball images is within a set range. First the consistency of the two eyes is judged, i.e., whether the reference distance of each pair of eyeball images increases or decreases; then it is judged whether the absolute difference between the changes of the reference distances is within the set range. This avoids misjudgments caused by instability of the recognition device under vigorous motion and further improves the accuracy of eyeball-movement recognition.
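The second-stage range check could be sketched as follows, assuming per-eye distance pairs (before, after) and a hypothetical tolerance `max_diff`:

```python
def valid_change(left_d, right_d, max_diff=2.0):
    # Absolute difference between the two eyes' reference-distance
    # changes must stay within the set range, rejecting jitter from
    # a single sensor under vigorous motion.
    dl = left_d[1] - left_d[0]
    dr = right_d[1] - right_d[0]
    return abs(dl - dr) <= max_diff

print(valid_change([5.0, 7.0], [5.0, 6.5]))  # True:  |2.0 - 1.5| = 0.5
print(valid_change([5.0, 9.0], [5.0, 5.5]))  # False: |4.0 - 0.5| = 3.5
```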
In some embodiments, the method further includes: determining the moving direction of the iris region.
In some embodiments, determining the moving direction of the iris region comprises:
binarizing the eyeball image, the gray value of the iris region being set greater than the gray value of the sclera region; for example, the gray value of the sclera pixels is set to 0 and the gray value of the iris pixels is set to 255;
matching the iris regions at different times:
when the number of overlapping iris-region pixels is greater than or equal to a set value, the iris region has not moved;
when the number of overlapping iris-region pixels is less than the set value, the moving direction of the iris region is determined from the number of iris pixels gained in each quadrant of the coordinate system. Optionally, whether pixels overlap is judged by their gray values.
Optionally, when the first quadrant of the coordinate system gains the most iris-region pixels, the iris region has moved toward the first quadrant; when the second quadrant gains the most, toward the second quadrant; when the third quadrant gains the most, toward the third quadrant; and when the fourth quadrant gains the most, toward the fourth quadrant. When the four quadrants gain the same number of iris pixels, the moving direction of the iris region is determined from the change in the distance between the reference point and the origin of the coordinate system: when this distance increases along the positive x-axis of the coordinate system, the iris region moves in the positive x direction; when it increases along the negative x-axis, in the negative x direction; when it increases along the positive y-axis, in the positive y direction; and when it increases along the negative y-axis, in the negative y direction. Of course, the coordinate system may be divided into more sectors, such as 8 or 16, to improve the precision with which the moving direction of the iris region is judged.
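The quadrant-based direction test described above can be sketched with pixel sets; the mask representation and names are assumptions, and the equal-gain fallback via the reference-point distance is omitted from this sketch:

```python
def movement_direction(prev_mask, curr_mask, set_value=10):
    # prev_mask / curr_mask: sets of (x, y) iris pixels after
    # binarization, in a coordinate system centred in the image.
    if len(prev_mask & curr_mask) >= set_value:
        return "still"                      # enough overlap: no movement
    counts = {"Q1": 0, "Q2": 0, "Q3": 0, "Q4": 0}
    for x, y in curr_mask - prev_mask:      # pixels gained, per quadrant
        if x >= 0 and y >= 0:
            counts["Q1"] += 1
        elif x < 0 and y >= 0:
            counts["Q2"] += 1
        elif x < 0 and y < 0:
            counts["Q3"] += 1
        else:
            counts["Q4"] += 1
    return max(counts, key=counts.get)      # quadrant that gained most

prev = {(x, y) for x in range(-2, 3) for y in range(-2, 3)}  # 5x5 iris blob
curr = {(x + 3, y + 3) for x, y in prev}                     # moved up-right
print(movement_direction(prev, curr))  # "Q1"
print(movement_direction(prev, prev))  # "still"
```

Dividing the plane into 8 or 16 sectors instead of 4 would only change the bucketing step.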
An embodiment of the present disclosure provides a learning evaluation device based on artificial intelligence, comprising:
an evaluation unit, configured to establish a learning evaluation model, the learning evaluation model including eye-movement recognition information set values and corresponding evaluation values; to obtain eye-movement recognition information through a recognition unit; and to match the eye-movement recognition information against the eye-movement recognition information set values of the learning evaluation model to obtain a corresponding evaluation value;
the recognition unit, configured to obtain at least one pair of eyeball images, each eyeball image including a sclera region and an iris region; to take the center point of the iris region as a reference point; to establish a coordinate system in each eyeball image; to calculate the distance between the reference point in each eyeball image and the origin of the coordinate system, as a reference distance; and to perform eye-movement recognition according to the reference distance of each eyeball image.
Optionally, camera devices aimed at each eye are arranged in the VR glasses or other equipment, and eyeball images are recognized from the real-time images acquired by the cameras or from different frames of a video. An eyeball image includes a sclera region and an iris region. In a specific implementation, because the sclera region and the iris region differ in color, a correlation model of sclera color and iris color is built and stored in advance; the model contains the color information of the sclera and of the iris, and optionally may also include the contour of the eyeball image. Using the images obtained in real time by the camera, candidate eyeball images, sclera regions, and iris regions are identified according to the recorded sclera color information, iris color information, or eyeball contour: when a region matching the recorded sclera color information is detected, that region is taken as the sclera region; when a region matching the recorded iris color information is detected, that region is taken as the iris region. This technique is a conventional means in the field and is not an inventive point of this application, so it is not described in detail here.
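As the passage notes, this is conventional color matching; a minimal per-pixel sketch under an assumed mean-RGB color model (the stored colors, tolerance, and names below are all hypothetical) might look like:

```python
def classify_pixel(pixel, sclera_color, iris_color, tol=30):
    # Compare an RGB pixel against the stored colour models within a
    # per-channel tolerance; unmatched pixels belong to neither region.
    def close(p, m):
        return all(abs(a - b) <= tol for a, b in zip(p, m))
    if close(pixel, sclera_color):
        return "sclera"
    if close(pixel, iris_color):
        return "iris"
    return "other"

SCLERA = (235, 230, 225)   # hypothetical stored sclera colour
IRIS = (60, 40, 30)        # hypothetical stored iris colour
print(classify_pixel((240, 235, 230), SCLERA, IRIS))  # sclera
print(classify_pixel((55, 45, 35), SCLERA, IRIS))     # iris
print(classify_pixel((120, 120, 120), SCLERA, IRIS))  # other
```

A real pipeline would add the optional contour check and connected-region grouping the description mentions.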
In some embodiments, the recognition unit is further configured to judge whether the reference distance of each pair of eyeball images decreases or increases.
In some embodiments, the recognition unit is further configured to judge whether the change in the reference distance of each pair of eyeball images is within a set range.
In some embodiments, the recognition unit is further configured to determine the moving direction of the iris region.
In some embodiments, determining the moving direction of the iris region comprises:
binarizing the eyeball image, the gray value of the iris region being greater than the gray value of the sclera region;
matching the iris regions at different times:
when the number of overlapping iris-region pixels is greater than or equal to a set value, the iris region has not moved;
when the number of overlapping iris-region pixels is less than the set value, the moving direction of the iris region is determined from the number of iris pixels gained in each quadrant of the coordinate system.
An embodiment of the present disclosure provides VR glasses comprising the above learning evaluation device.
The above method and device are especially suitable for teaching in VR scenes: they can evaluate students' learning states in a targeted way and can also reveal the focus of students' attention under different pictures.
An embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being arranged to perform the above learning evaluation method.
An embodiment of the present disclosure provides a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program including program instructions which, when executed by a computer, cause the computer to perform the above learning evaluation method.
The computer-readable storage medium may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
An embodiment of the present disclosure provides an electronic device whose structure is shown in Fig. 2. The electronic device includes at least one processor 100 (one processor 100 is taken as an example in Fig. 2) and a memory 101, and may further include a communication interface 102 and a bus 103. The processor 100, the communication interface 102, and the memory 101 communicate with one another through the bus 103. The communication interface 102 may be used for information transmission. The processor 100 may call logic instructions in the memory 101 to perform the learning evaluation method of the above embodiments.
In addition, the logic instructions in the memory 101 may be implemented in the form of software functional units and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
As a computer-readable storage medium, the memory 101 may be used to store software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. By running the software programs, instructions, and modules stored in the memory 101, the processor 100 executes functional applications and data processing, i.e., implements the learning evaluation method of the above method embodiments.
The memory 101 may include a program storage area and a data storage area: the program storage area may store an operating system and at least one application required by a function; the data storage area may store data created according to the use of the terminal device, and so on. In addition, the memory 101 may include a high-speed random access memory and may also include a non-volatile memory.
The technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product stored in a storage medium and including one or more instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present disclosure. The storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium that can store program code, and may also be a transitory storage medium.
The above description and drawings sufficiently illustrate the embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may include structural, logical, electrical, process, and other changes; the embodiments represent only possible variations. Unless explicitly required, individual components and functions are optional, and the order of operations may vary. Parts and features of some embodiments may be included in or substituted for parts and features of other embodiments. The scope of the embodiments of the present disclosure includes the entire scope of the claims and all available equivalents of the claims. Although the terms "first", "second", etc. may be used in this application to describe elements, these elements are not limited by those terms, which serve only to distinguish one element from another. For example, without changing the meaning of the description, a first element could be called a second element and, likewise, a second element could be called a first element, as long as every occurrence of "first element" is renamed consistently and every occurrence of "second element" is renamed consistently; the first element and the second element are both elements but need not be the same element. Moreover, the words used herein describe the embodiments only and are not used to limit the claims. As used in the description of the embodiments and in the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this specification refers to and encompasses any and all possible combinations of one or more of the associated listed items. In addition, the terms "comprise", "comprises", and/or "comprising", when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. In the absence of further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, or device that includes that element. Herein, each embodiment may focus on its differences from other embodiments, and the same or similar parts of the embodiments may refer to one another. For the methods, products, and the like disclosed in the embodiments, where they correspond to the method parts disclosed in the embodiments, reference may be made to the description of the method parts.
Those skilled in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the embodiments of the present disclosure. The skilled artisan can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the embodiments disclosed herein, the disclosed methods and products (including but not limited to devices and equipment) may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of units may be only a logical functional division, and there may be other ways of dividing them in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the present embodiment. In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
The flowcharts and block diagrams in the drawings show the possible architectures, functions, and operations of systems, methods, and computer program products according to the embodiments of the present disclosure. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions noted in the boxes may occur out of the order noted in the drawings; for example, two consecutive boxes may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. In the descriptions corresponding to the flowcharts and block diagrams in the drawings, the operations or steps corresponding to different boxes may also occur in an order different from that disclosed in the description, and sometimes there is no specific order between different operations or steps; for example, two consecutive operations or steps may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. Each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
Claims (10)
1. A learning evaluation method based on artificial intelligence, characterized by comprising:
establishing a learning evaluation model, the learning evaluation model including eye-movement recognition information set values and corresponding evaluation values;
obtaining eye-movement recognition information;
matching the eye-movement recognition information against the eye-movement recognition information set values of the learning evaluation model to obtain a corresponding evaluation value;
wherein the eye-movement recognition information is obtained as follows:
obtaining at least one pair of eyeball images, each eyeball image including a sclera region and an iris region;
taking the center point of the iris region as a reference point;
establishing a coordinate system in each eyeball image;
calculating the distance between the reference point in each eyeball image and the origin of the coordinate system, as a reference distance;
performing eye-movement recognition according to the reference distance of each eyeball image.
2. The method according to claim 1, characterized in that performing eye movement identification according to the reference distance of each eyeball image comprises:
judging whether the reference distance of each pair of eyeball images decreases or increases.
3. The method according to claim 1, characterized in that performing eye movement identification according to the reference distance of each eyeball image comprises:
judging whether the change in the reference distance of each pair of eyeball images is within a set range.
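The checks in claims 2 and 3 reduce to simple comparisons of successive reference distances; a minimal sketch (the function names and the closed range [lo, hi] are illustrative choices, not fixed by the claims):

```python
def distance_decreased(d_prev: float, d_curr: float) -> bool:
    """Claim-2-style check: True if the reference distance decreased."""
    return d_curr < d_prev

def change_within_range(d_prev: float, d_curr: float,
                        lo: float, hi: float) -> bool:
    """Claim-3-style check: is the reference-distance change within [lo, hi]?"""
    return lo <= abs(d_curr - d_prev) <= hi
```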
4. The method according to any one of claims 1 to 3, characterized by further comprising: determining the moving direction of the pupil region.
5. The method according to claim 4, characterized in that determining the moving direction of the pupil region comprises:
performing binarization on the eyeball image, the gray value of the pupil region being greater than the gray value of the sclera region;
matching the pupil regions at different times:
when the number of overlapping pixels of the pupil regions is greater than or equal to a set value, the pupil region has not moved;
when the number of overlapping pixels of the pupil regions is less than the set value, determining the moving direction of the pupil region according to the number of pupil-region pixels gained in each quadrant of the coordinate system.
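The direction determination of claim 5 can be sketched as follows. This is one illustrative reading only: it assumes binarized pupil masks for two moments in time, an overlap threshold, and a particular quadrant numbering, none of which the claim fixes precisely:

```python
import numpy as np

def movement_direction(prev_mask: np.ndarray, curr_mask: np.ndarray,
                       origin: tuple, overlap_threshold: int):
    """Compare binarized pupil masks from two moments in time.

    prev_mask, curr_mask: 2-D boolean arrays, True for pupil pixels
    (e.g. after thresholding so the pupil's gray value exceeds the sclera's).
    origin: (x0, y0) of the coordinate system, in pixel coordinates.
    Returns None if the pupil region has not moved, otherwise the
    quadrant (1-4) that gained the most pupil pixels.
    """
    overlap = np.count_nonzero(prev_mask & curr_mask)
    if overlap >= overlap_threshold:
        return None                     # region deemed stationary

    gained = curr_mask & ~prev_mask     # pixels newly covered by the pupil
    ys, xs = np.nonzero(gained)
    x0, y0 = origin
    # Count gained pixels per quadrant (the numbering is a convention here).
    counts = [
        np.count_nonzero((xs >= x0) & (ys < y0)),   # 1: right-up
        np.count_nonzero((xs < x0) & (ys < y0)),    # 2: left-up
        np.count_nonzero((xs < x0) & (ys >= y0)),   # 3: left-down
        np.count_nonzero((xs >= x0) & (ys >= y0)),  # 4: right-down
    ]
    return int(np.argmax(counts)) + 1
```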
6. A learning evaluation device based on artificial intelligence, characterized by comprising:
an evaluation unit, configured to establish a learning evaluation model, the learning evaluation model comprising eye movement identification information set values and corresponding evaluation values; to obtain eye movement identification information through a recognition unit; and to match the eye movement identification information with the eye movement identification information set values of the learning evaluation model to obtain a corresponding evaluation value;
the recognition unit, configured to obtain at least one pair of eyeball images, each eyeball image comprising a sclera region and a pupil region; take the central point of the pupil region as a reference point; establish a coordinate system in each eyeball image; calculate the distance between the reference point in each eyeball image and the origin of the coordinate system, as a reference distance; and perform eye movement identification according to the reference distance of each eyeball image.
7. The device according to claim 6, characterized in that the recognition unit is further configured to judge whether the reference distance of each pair of eyeball images decreases or increases.
8. The device according to claim 6, characterized in that the recognition unit is further configured to judge whether the change in the reference distance of each pair of eyeball images is within a set range.
9. The device according to any one of claims 6 to 8, characterized in that the recognition unit is further configured to determine the moving direction of the pupil region.
10. The device according to claim 9, characterized in that determining the moving direction of the pupil region comprises:
performing binarization on the eyeball image, the gray value of the pupil region being greater than the gray value of the sclera region;
matching the pupil regions at different times:
when the number of overlapping pixels of the pupil regions is greater than or equal to a set value, the pupil region has not moved;
when the number of overlapping pixels of the pupil regions is less than the set value, determining the moving direction of the pupil region according to the number of pupil-region pixels gained in each quadrant of the coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910532663.0A CN110148075A (en) | 2019-06-19 | 2019-06-19 | A kind of learning evaluation method and device based on artificial intelligence |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110148075A true CN110148075A (en) | 2019-08-20 |
Family
ID=67595959
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910532663.0A Pending CN110148075A (en) | 2019-06-19 | 2019-06-19 | A kind of learning evaluation method and device based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110148075A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112329643A (en) * | 2020-11-06 | 2021-02-05 | 重庆第二师范学院 | Learning efficiency detection method, system, electronic device and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013165198A1 (en) * | 2012-05-02 | 2013-11-07 | Samsung Electronics Co., Ltd. | Apparatus and method of controlling mobile terminal based on analysis of user's face |
CN106952200A (en) * | 2017-03-28 | 2017-07-14 | 安徽味唯网络科技有限公司 | A method for supervising students in internet teaching |
CN107092872A (en) * | 2017-04-08 | 2017-08-25 | 闲客智能(深圳)科技有限公司 | An eye movement recognition method and device |
CN107562199A (en) * | 2017-08-31 | 2018-01-09 | 北京金山安全软件有限公司 | Page object setting method and device, electronic equipment and storage medium |
CN109522815A (en) * | 2018-10-26 | 2019-03-26 | 深圳博为教育科技有限公司 | An attention assessment method, device and electronic equipment |
2019-06-19: CN application CN201910532663.0A filed (published as CN110148075A), status Pending
Non-Patent Citations (1)
Title |
---|
LI RUOXUAN: "Research on Selective Attention Mechanisms Based on Eye Movement Characteristics", China Masters' Theses Full-text Database, Information Science and Technology Series *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110298569A (en) | Learning evaluation method and device based on eye movement identification | |
EP3992846A1 (en) | Action recognition method and apparatus, computer storage medium, and computer device | |
CN109389069A (en) | Gaze point judgment method and device, electronic equipment and computer storage medium | |
JP7309116B2 (en) | Gaze direction identification method, device, electronic device, and storage medium | |
CN109313490A (en) | Eye gaze tracking using neural networks | |
CN109241903A (en) | Sample data cleaning method, device, computer equipment and storage medium | |
US20200202226A1 (en) | System and method for context based deep knowledge tracing | |
CN110263681A (en) | Facial expression recognition method and device, storage medium, electronic device | |
CN110688874B (en) | Facial expression recognition method and device, readable storage medium and electronic equipment | |
CN108399383A (en) | Expression migration method, device, storage medium and program | |
US20200410338A1 (en) | Multimodal data learning method and device | |
CN110674664A (en) | Visual attention recognition method and system, storage medium and processor | |
CN110288614A (en) | Image processing method, device, equipment and storage medium | |
CN109584153A (en) | Method, device and system for modifying eyes | |
CN110310247A (en) | Image processing method, device, terminal and computer readable storage medium | |
CN112102185B (en) | Image deblurring method and device based on deep learning and electronic equipment | |
JP2022513858A (en) | Data processing methods, data processing equipment, computer programs, and computer equipment for facial image generation | |
CN108491823A (en) | Method and apparatus for generating eye recognition model | |
WO2022161234A1 (en) | Image processing method and apparatus, and electronic device and storage medium | |
CN112101123A (en) | Attention detection method and device | |
CN111680544B (en) | Face recognition method, device, system, equipment and medium | |
CN115546692A (en) | Remote education data acquisition and analysis method, equipment and computer storage medium | |
Ahmad et al. | Requirements engineering framework for human-centered artificial intelligence software systems | |
CN110148075A (en) | A kind of learning evaluation method and device based on artificial intelligence | |
CN114783017A (en) | Method and device for generating confrontation network optimization based on inverse mapping |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190820 |