CN110503024A - Interaction state analysis method, device and storage medium - Google Patents
Interaction state analysis method, device and storage medium

- Publication number: CN110503024A (application CN201910764619.2A)
- Authority: CN (China)
- Prior art keywords: target object, interaction state, state, image, information
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/148—Segmentation of character regions
- G06V30/153—Segmentation of character regions using recognition of characters or words
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
The invention discloses an interaction state analysis method, device and storage medium. The method includes: acquiring a scene image associated with a target object from the scene in which the target object is located; extracting state information of the target object from the scene image; and determining the interaction state of the target object according to the state information. By capturing a scene image, extracting the student's state information from it, and then determining the student's interaction state (learning state) from that information, the embodiments of the invention allow parents, teachers and other interested parties to understand a student's learning state after class.
Description
Technical field
The present invention relates to the technical field of information processing, and in particular to an interaction state analysis method, device and storage medium.
Background art
Existing classroom analysis methods for students all analyze the student's attentiveness in class, in order to determine the student's interaction state and degree of concentration.
However, the quality of completed homework is an important indicator for assessing a student's academic performance. In most cases, homework is completed by the student at home and handed to the teacher the next day. Teachers and parents can only see the result of the homework, and lack insight into the student's after-class homework process and interaction state.
Summary of the invention
The embodiments of the present invention aim to provide an interaction state analysis method, device and storage medium, to solve the prior-art problem that a student's after-class interaction state cannot be observed.
To solve the above technical problem, the embodiments of the present application adopt the following technical scheme: an interaction state analysis method, including the following steps:
acquiring a scene image associated with a target object from the scene in which the target object is located;
extracting state information of the target object from the scene image;
determining the interaction state of the target object according to the state information.
Optionally, the state information of the target object includes: environmental information around the target object and the gaze angle of the target object, where the environmental information around the target object includes information about interference sources near the target object.
Determining the interaction state of the target object according to the state information includes:
analyzing, based on the environmental information and the gaze angle, whether the interaction state of the target object is a focused state, an unfocused state or a disturbed state.
Optionally, the state information of the target object includes the gaze angle of the target object.
Determining the interaction state of the target object according to the state information includes:
analyzing, according to the gaze angle, whether the interaction state of the target object is a focused state or an unfocused state.
Optionally, obtaining the gaze angle of the target object from the scene image includes:
obtaining at least one of the head angle and the pupil angle of the target object from the scene image;
obtaining the gaze angle of the target object based on at least one of the head angle and the pupil angle.
Optionally, the method further includes:
obtaining the acoustic information at the moment corresponding to the scene image.
Determining the interaction state of the target object according to the state information further includes: determining, according to the acoustic information and the state information, whether the interaction state of the target object is a focused state, an unfocused state or a disturbed state.
Optionally, the method further includes:
obtaining at least one of a writing image and a book image from the scene image;
recognizing at least one of the writing image and the book image using optical character recognition, and determining the interaction content;
associating the interaction content with the interaction state of the target object.
Optionally, the interaction content includes a study subject obtained by recognizing at least one of the writing image and the book image; the method further includes:
determining the learning time of the study subject;
recognizing each exercise in the writing image, and determining the writing time of each exercise;
associating the interaction state of the target object with at least one of the learning time and the writing time of the exercises.
Optionally, the method further includes sending the interaction state of the target object to a specified remote end at predetermined time intervals using a preset sending method.
To solve the above problems, an embodiment of the present invention also discloses an interaction state analysis device, including:
a first acquisition module, configured to acquire a scene image associated with a target object from the scene in which the target object is located;
an extraction module, configured to extract state information of the target object from the scene image;
a determination module, configured to determine the interaction state of the target object according to the state information.
Optionally, the state information of the target object includes: environmental information around the target object and the gaze angle of the target object, where the environmental information around the target object includes information about interference sources near the target object.
The determination module is further configured to: analyze, based on the environmental information and the gaze angle, whether the interaction state of the target object is a focused state, an unfocused state or a disturbed state.
Optionally, the state information of the target object includes the gaze angle of the target object.
The determination module is further configured to: analyze, according to the gaze angle, whether the interaction state of the target object is a focused state or an unfocused state.
Optionally, the extraction module is specifically configured to:
obtain at least one of the head angle and the pupil angle of the target object from the scene image;
obtain the gaze angle of the target object based on at least one of the head angle and the pupil angle.
Optionally, the device further includes a second acquisition module, configured to:
obtain the acoustic information at the moment corresponding to the scene image.
The determination module is further configured to: determine, according to the acoustic information and the state information, whether the interaction state of the target object is a focused state, an unfocused state or a disturbed state.
Optionally, the device further includes a recognition module and an association module.
The recognition module is configured to: obtain at least one of a writing image and a book image from the scene image; recognize at least one of the writing image and the book image using optical character recognition; and determine the interaction content.
The association module is configured to associate the interaction content with the interaction state of the target object.
Optionally, the interaction content includes a study subject obtained by recognizing at least one of the writing image and the book image; the device further includes a first time determination module and a second time determination module.
The first time determination module is configured to determine the learning time of the study subject.
The second time determination module is configured to recognize each exercise in the writing image and determine the writing time of each exercise.
The association module is further configured to associate the interaction state of the target object with at least one of the learning time and the writing time of the exercises.
Optionally, the device further includes a sending module, configured to send the interaction state of the target object to a specified remote end at predetermined time intervals using a preset sending method.
To solve the above problems, an embodiment of the present invention also discloses a storage medium storing a computer program which, when executed by a processor, implements the steps of any of the interaction state analysis methods described above.
The beneficial effect of the embodiments of the present invention is as follows: by analyzing the student's state while studying or doing homework, and recording the time the student spends studying and doing homework, parents or teachers can see the complete process of each subject's homework and the time taken to complete each part of it, helping parents and teachers understand the student's after-class interaction state.
Brief description of the drawings
Fig. 1 is a flowchart of an interaction state analysis method according to an embodiment of the present invention;
Fig. 2 is a flowchart of an interaction state analysis method according to an embodiment of the present invention;
Fig. 3 is a flowchart of an interaction state analysis method according to an embodiment of the present invention;
Fig. 4 is a flowchart of an interaction state analysis method according to an embodiment of the present invention;
Fig. 5 is a flowchart of an interaction state analysis method according to an embodiment of the present invention;
Fig. 6 is a structural block diagram of an interaction state analysis device according to an embodiment of the present invention.
Detailed description of the embodiments
The various schemes and features of the present application are described herein with reference to the accompanying drawings.
It should be understood that various modifications can be made to the embodiments applied herein. Therefore, the above description should not be regarded as limiting, but merely as examples of embodiments. Those skilled in the art will envisage other modifications within the scope and spirit of the present application.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the present application and, together with the general description given above and the detailed description of the embodiments given below, serve to explain the principles of the present application.
These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the accompanying drawings.
It should also be understood that, although the present application is described with reference to some specific examples, those skilled in the art can certainly realize many other equivalents of the present application, which have the features set out in the claims and therefore all fall within the scope of protection defined thereby.
The above and other aspects, features and advantages of the present application will become more readily apparent in view of the following detailed description when read in conjunction with the accompanying drawings.
Specific embodiments of the present application are described below with reference to the accompanying drawings; it should be understood, however, that the applied embodiments are merely examples of the application, which can be implemented in various ways. Well-known and/or repeated functions and structures are not described in detail, to avoid obscuring the application with unnecessary or superfluous detail. Therefore, the specific structural and functional details applied herein are not intended to be limiting, but merely serve as a basis for the claims and as a representative basis for teaching those skilled in the art to variously employ the present application in virtually any appropriately detailed structure.
This specification may use the phrases "in one embodiment", "in another embodiment", "in yet another embodiment" or "in other embodiments", each of which may refer to one or more of the same or different embodiments according to the present application.
An embodiment of the present invention provides an interaction state analysis method, as shown in Fig. 1, including the following steps:
Step S101: acquire a scene image associated with a target object from the scene in which the target object is located.
In this step, the target object denotes a person, such as a student or pupil, whose interaction state actually needs to be analyzed, and the scene image is an image of that person's surroundings. When acquiring the scene image, a camera device can be used to obtain a scene image containing the target object, for example by installing a camera on the desk lamp used by the target object to obtain a scene image of the target object studying.
Step S102: extract state information of the target object from the scene image.
In this step, the state information indicates the learning state of the target object, the surrounding environmental conditions, and so on.
Step S103: determine the interaction state of the target object according to the state information.
In this step, the interaction state indicates the target object's state of mind with respect to books such as textbooks and exercise books, and specifically includes a focused state, an unfocused state, a disturbed state, and so on.
By capturing a scene image, extracting the student's state information from it, and then determining the student's interaction state (learning state) from that information, this embodiment of the invention allows parents, teachers and other interested parties to understand a student's learning state after class.
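As a rough sketch, the three steps of this embodiment can be chained into a small pipeline. The names below (`StateInfo`, `extract_state`, `determine_interaction_state`) are illustrative assumptions rather than anything named in the patent, and the stubbed extractor merely stands in for the detection models a real implementation would run on the scene image:

```python
from dataclasses import dataclass

@dataclass
class StateInfo:
    """State information extracted from one scene image (illustrative)."""
    gaze_on_book: bool   # is the gaze fixed on the textbook/exercise book?

def extract_state(scene_image) -> StateInfo:
    # Step S102 stand-in: a real system would run detection models here;
    # this stub only shows the shape of the data flowing through the pipeline.
    return StateInfo(gaze_on_book=True)

def determine_interaction_state(state: StateInfo) -> str:
    # Step S103 stand-in: gaze fixed on the book is taken as "focused".
    return "focused" if state.gaze_on_book else "unfocused"

def analyse(scene_image) -> str:
    # Steps S101-S103 chained for a single scene image.
    return determine_interaction_state(extract_state(scene_image))
```

Later embodiments refine `determine_interaction_state` with environmental and acoustic information.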
Another embodiment of the present invention provides an interaction state analysis method, as shown in Fig. 2, including the following steps:
Step S201: acquire a scene image associated with a target object from the scene in which the target object is located.
In this step, video recording of the target object and its surroundings can be started when the target object begins studying, yielding one or more video recordings. Then, based on a predetermined moment, the video frame at that moment is obtained from each recording, and the frames obtained are associated with each other to serve as the scene image associated with the target object. By treating each video frame of the target object as a scene image associated with it, the interaction state at the moment corresponding to each frame can be analyzed separately, so that the interaction state over the target object's entire study session can be analyzed.
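The frame-sampling step can be sketched as follows; the 30 frames-per-second figure is borrowed from the worked examples later in the description, and the index arithmetic is an assumption about how "the frame at a predetermined moment" would be located:

```python
def frame_index_at(moment_s: float, fps: int = 30) -> int:
    # Map a predetermined moment (in seconds) to the index of the video
    # frame recorded at that moment, for a recording at `fps` frames/second.
    return int(moment_s * fps)

# e.g. the frame one minute into a 30 fps recording:
idx = frame_index_at(60.0)   # frame 1800
```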
Step S202: extract, from the scene image, the environmental information around the target object and the gaze angle of the target object.
In this step, the environmental information around the target object includes information about interference sources near the target object, for example whether there are animals or other people around the target object.
In this step, obtaining the gaze angle of the target object from the scene image includes: obtaining at least one of the head angle and the pupil angle of the target object from the scene image; and obtaining the gaze angle of the target object based on at least one of the head angle and the pupil angle.
Step S203: analyze, based on the environmental information and the gaze angle, whether the interaction state of the target object is a focused state, an unfocused state or a disturbed state.
In this step, when the gaze angle is determined to deviate from the direction of the textbook or book, the target object can be determined to be in an unfocused state; when the gaze angle is determined to be fixed on the textbook or book, the interaction state can be determined to be a focused state; and when the gaze angle is determined to deviate from the textbook or book while there are animals or other people around the target object, the interaction state of the target object can be determined to be a disturbed state.
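Under the assumption that gaze direction and nearby interference sources have already been detected (the two booleans below are hypothetical outputs of such detectors), the decision rules of step S203 reduce to a small lookup:

```python
def classify(gaze_on_book: bool, interference_nearby: bool) -> str:
    """Decision rules of step S203 (sketch)."""
    if gaze_on_book:
        return "focused"       # gaze fixed on the textbook/book
    if interference_nearby:
        return "disturbed"     # gaze off the book, animal/person nearby
    return "unfocused"         # gaze off the book, nothing else in the scene

# One state per video frame yields a timeline of the whole study session.
timeline = [classify(g, i) for g, i in [(True, False), (False, True), (False, False)]]
```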
In this embodiment of the invention, the interaction state of the target object is determined from the target object's gaze angle and from whether there is an interference source nearby, making the analysis of the interaction state more reasonable and accurate, and guaranteeing that parents, teachers and other interested parties gain an accurate understanding of the student's after-class interaction state.
A further embodiment of the present invention provides an interaction state analysis method, as shown in Fig. 3, including the following steps:
Step S301: acquire a scene image associated with a target object from the scene in which the target object is located.
Step S302: obtain at least one of the head angle and the pupil angle of the target object from the scene image; obtain the gaze angle of the target object based on at least one of the head angle and the pupil angle.
Step S303: analyze, according to the gaze angle, whether the interaction state of the target object is a focused state or an unfocused state.
In this step, when the gaze angle is determined to deviate from the direction of the textbook or book, the interaction state of the target object can be determined to be an unfocused state; when the gaze angle is determined to be fixed on the textbook or book, the interaction state can be determined to be a focused state.
In a specific implementation, this embodiment further includes determining the interaction content and associating the interaction content with the interaction state, so that interested parties can understand the target object's interaction state in each subject in detail. The interaction content is determined as follows: obtain at least one of a writing image and a book image from the scene image; recognize at least one of the writing image and the book image using optical character recognition; and determine the interaction content, where the interaction content includes the study subject.
In this embodiment, determining the gaze angle of the target object from its head angle and pupil angle makes the determination of the gaze angle more accurate, which guarantees the subsequent accurate determination of the target object's interaction state.
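How the head angle and pupil angle combine into a gaze angle is not specified in the patent; a common assumption is that, along a given axis, the gaze direction is approximately the head rotation plus the eye-in-head rotation, with either measurement alone used as a fallback. A sketch under that assumption, with an invented threshold for "looking at the book":

```python
from typing import Optional

def gaze_angle(head_deg: Optional[float], pupil_deg: Optional[float]) -> float:
    """Estimate the gaze angle (degrees) from head and/or pupil angle.

    Assumption: gaze is approximately head rotation plus eye-in-head
    rotation; if only one measurement is available, it is used alone.
    """
    if head_deg is not None and pupil_deg is not None:
        return head_deg + pupil_deg
    if head_deg is not None:
        return head_deg
    if pupil_deg is not None:
        return pupil_deg
    raise ValueError("need at least one of head angle or pupil angle")

def looking_at_book(angle_deg: float, tolerance_deg: float = 25.0) -> bool:
    # Hypothetical threshold: gaze within 25 degrees of the desk direction.
    return abs(angle_deg) <= tolerance_deg
```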
An embodiment of the present invention provides an interaction state analysis method which, as shown in Fig. 4, includes the following steps:
Step S401: acquire a scene image associated with a target object from the scene in which the target object is located.
Step S402: extract state information of the target object from the scene image.
Step S403: obtain at least one of a writing image and a book image from the scene image; recognize at least one of the writing image and the book image using optical character recognition, and determine the study subject.
In this step, recognition is performed by first converting the image to text via OCR and then judging the target object's study subject using natural language processing.
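The patent specifies OCR followed by natural-language processing but no concrete model; as a stand-in, the sketch below judges the subject from keyword counts in the OCR output. The keyword lists are invented for illustration, and a real system would use a trained NLP classifier instead:

```python
# Hypothetical keyword lists; a real system would classify the OCR text
# with an NLP model rather than hand-picked keywords.
SUBJECT_KEYWORDS = {
    "mathematics": {"equation", "triangle", "integral", "fraction"},
    "physics": {"velocity", "force", "circuit", "energy"},
}

def judge_subject(ocr_text: str) -> str:
    words = set(ocr_text.lower().split())
    # Pick the subject whose keywords overlap the recognised text the most.
    scores = {s: len(words & kw) for s, kw in SUBJECT_KEYWORDS.items()}
    return max(scores, key=scores.get)
```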
Step S404: determine the learning time of the study subject.
In this step, the learning time is determined by performing interaction state analysis on each video frame of the recording, determining the study subject in each frame, and grouping frames of the same study subject together; each subject's learning time is then determined from the number of frames in its group. For example, recording the target object for 1 hour at 30 frames per second yields a video with 108,000 frames, on each of which interaction state analysis must be performed to determine the study subject. If the study subject in frames 1 to 36,000 is determined to be mathematics, then the learning time for mathematics is 1,200 seconds (20 minutes). The learning times of the other subjects can be determined similarly.
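The arithmetic of this example can be checked directly: counting the per-frame subject labels and dividing each count by the frame rate gives the per-subject learning time in seconds. `FPS = 30` and the frame counts match the worked example above; the labels themselves are illustrative:

```python
from collections import Counter

FPS = 30  # frames per second, as in the worked example

def learning_times(frame_subjects: list) -> dict:
    """Seconds spent on each subject, from one label per video frame."""
    counts = Counter(frame_subjects)
    return {subject: n / FPS for subject, n in counts.items()}

# 1 hour of video = 108,000 frames; frames 1-36,000 labelled mathematics:
labels = ["mathematics"] * 36_000 + ["other"] * 72_000
times = learning_times(labels)   # mathematics -> 1200.0 s (20 minutes)
```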
Step S405: recognize each exercise in the writing image and determine the writing time of each exercise.
In this step, exercise recognition is performed by first converting the image to text via OCR and then judging the exercise category using natural language processing.
When determining the writing time, interaction state analysis is performed on each video frame of the recording, the exercise category in each frame is determined, and frames of the same exercise category are grouped together; each exercise's writing time is then determined from the number of frames in its group. For example, recording the target object for 1 hour at 30 frames per second yields a video with 108,000 frames, on each of which interaction state analysis must be performed to determine the exercise category. If the exercise in frames 36,001 to 54,000 is determined to be physics exercise 11, then the writing time of this physics exercise is 600 seconds (18,000 frames at 30 frames per second). The writing times of the other exercises can be determined similarly.
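Because frames belonging to one exercise are consecutive, the grouping can be done with a run-length pass over the per-frame exercise labels. The helper below is an illustrative sketch, with the 18,000-frame physics run taken from the example above and the label strings invented:

```python
from itertools import groupby

FPS = 30

def writing_times(frame_exercises: list) -> list:
    """(exercise, seconds) for each contiguous run of identically
    labelled frames, in the order the runs occur."""
    return [(label, sum(1 for _ in run) / FPS)
            for label, run in groupby(frame_exercises)]

# Frames 36,001-54,000 belong to physics exercise 11:
labels = ["math ex. 3"] * 36_000 + ["physics ex. 11"] * 18_000
runs = writing_times(labels)   # physics ex. 11 -> 600.0 s (10 minutes)
```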
Step S406: determine the interaction state of the target object according to the state information.
In this step, the interaction state can be determined from the focus state of the target object in each video frame. For example, if the interaction state of the target object in frames 901 to 1,080 is determined to be unfocused, then the state during the period corresponding to these 180 frames (6 seconds at 30 frames per second) is unfocused.
Step S407: associate the interaction state of the target object with at least one of the learning time and the writing time of the exercises.
In this step, continuing the example above, frames 901 to 1,080 were determined to be unfocused and the study subject of the target object in those frames is mathematics, so the interaction state can be associated with the learning time, for example: 20 minutes were spent studying mathematics, of which 6 seconds were unfocused. The writing time of physics exercise 11 was 10 minutes.
In this step, by associating the interaction content with the interaction state, i.e. associating the study subject and its learning time with the interaction state, the learning state for each subject can be understood in detail.
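The association of step S407 amounts to counting, within each subject's frame range, how many frames were classified as unfocused. A sketch using the numbers of the example above (frames 901 to 1,080 unfocused during the mathematics block); note that 180 frames at 30 frames per second is 6 seconds:

```python
FPS = 30

def associate(subject_frames: range, unfocused_frames: set) -> tuple:
    """Return (total seconds, unfocused seconds) for one subject's block."""
    total = len(subject_frames) / FPS
    unfocused = len(unfocused_frames & set(subject_frames)) / FPS
    return total, unfocused

math_block = range(1, 36_001)        # frames 1-36,000: mathematics
unfocused = set(range(901, 1_081))   # frames 901-1,080: unfocused
total_s, unfocused_s = associate(math_block, unfocused)
# -> 1200.0 s of mathematics, of which 6.0 s unfocused
```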
In this embodiment, the method further includes sending the interaction state of the target object to a specified remote end at predetermined time intervals using a preset sending method. Specifically, each morning the previous evening's interaction state of the target object can be sent in the form of a report to the parent's or teacher's terminal. Parents and teachers can thus promptly see the complete process of each subject's homework and the time taken to complete each part of it, making it easier for them to understand the student's after-class interaction state.
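The report-sending step is not detailed beyond "at predetermined intervals with a preset method"; a minimal sketch might format the previous evening's results as text, leaving the actual transport (e-mail, app push notification, and so on) to a hypothetical `send` callable:

```python
def format_report(date: str, entries: list) -> str:
    """Render (subject, total_s, unfocused_s) tuples as a daily report."""
    lines = [f"Study report for {date}:"]
    for subject, total_s, unfocused_s in entries:
        lines.append(f"- {subject}: {total_s / 60:.0f} min "
                     f"({unfocused_s:.0f} s unfocused)")
    return "\n".join(lines)

report = format_report("2019-08-19", [("mathematics", 1200.0, 6.0)])
# The transport is application-specific, e.g.: send(parent_address, report)
```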
An embodiment of the present invention provides an interaction state analysis method, as shown in Fig. 5, including the following steps:
Step S501: acquire a scene image associated with a target object from the scene in which the target object is located.
Step S502: extract state information of the target object from the scene image.
Step S503: obtain the acoustic information at the moment corresponding to the scene image.
Step S504: determine, according to the acoustic information and the state information, whether the interaction state of the target object is a focused state, an unfocused state or a disturbed state.
In this step, when the gaze angle is determined to deviate from the direction of the textbook or book and there is no acoustic information, the interaction state of the target object can be determined to be an unfocused state; when the gaze angle deviates from the textbook or book and acoustic information is present, the interaction state can be determined to be a disturbed state; and when the gaze angle is determined to be fixed on the textbook or book, the interaction state can be determined to be a focused state.
By obtaining acoustic information corresponding to the state information, this embodiment of the invention determines the interaction state of the target object from both the state information and the acoustic information, making the confirmation of the interaction state more accurate and avoiding cases where, because of noise or because another person is talking with the target object, the target object's interaction state is wrongly confirmed as unfocused.
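Under the assumption that a sound-presence flag is available alongside the gaze result, the rules of step S504 can be sketched as:

```python
def classify_with_sound(gaze_on_book: bool, sound_present: bool) -> str:
    """Decision rules of step S504 (sketch)."""
    if gaze_on_book:
        return "focused"      # gaze on the book: focused regardless of sound
    if sound_present:
        return "disturbed"    # gaze off the book while sound is present
    return "unfocused"        # gaze off the book in silence
```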
An embodiment of the present invention provides an interaction state analysis device, as shown in Fig. 6, including:
a first acquisition module 1, configured to acquire a scene image associated with a target object from the scene in which the target object is located;
an extraction module 2, configured to extract state information of the target object from the scene image;
a determination module 3, configured to determine the interaction state of the target object according to the state information.
By capturing a scene image, extracting the student's state information from it, and then determining the student's interaction state (learning state) from that information, this embodiment of the invention allows parents, teachers and other interested parties to understand a student's learning state after class.
In an alternative embodiment of the invention, the state information of the target object includes: environmental information around the target object and the gaze angle of the target object, where the environmental information around the target object includes information about interference sources near the target object.
The determination module is further configured to: analyze, based on the environmental information and the gaze angle, whether the interaction state of the target object is a focused state, an unfocused state or a disturbed state.
In this embodiment of the invention, the interaction state of the target object is determined from the target object's gaze angle and from whether there is an interference source nearby, making the analysis of the interaction state more reasonable and accurate, and guaranteeing that parents, teachers and other interested parties gain an accurate understanding of the student's after-class interaction state.
In yet another embodiment, the state information of the target object includes the gaze angle of the target object.
The determination module is further configured to: analyze, according to the gaze angle, whether the interaction state of the target object is a focused state or an unfocused state.
In yet another embodiment, the extraction module is specifically configured to:
obtain at least one of the head angle and the pupil angle of the target object from the scene image;
obtain the gaze angle of the target object based on at least one of the head angle and the pupil angle.
In this embodiment, determining the gaze angle of the target object from its head angle and pupil angle makes the determination of the gaze angle more accurate, which guarantees the subsequent accurate determination of the target object's interaction state.
In an embodiment of the present invention, the device further includes a second acquisition module, configured to:
obtain the acoustic information at the moment corresponding to the scene image.
The determination module is further configured to: determine, according to the acoustic information and the state information, whether the interaction state of the target object is a focused state, an unfocused state or a disturbed state.
By obtaining acoustic information corresponding to the state information, this embodiment of the invention determines the interaction state of the target object from both the state information and the acoustic information, making the confirmation of the interaction state more accurate and avoiding cases where, because of noise or because another person is talking with the target object, the target object's interaction state is wrongly confirmed as unfocused.
In an alternative embodiment of the invention, the device further includes a recognition module and an association module.
The recognition module is configured to: obtain at least one of a writing image and a book image from the scene image; recognize at least one of the writing image and the book image using optical character recognition; and determine the interaction content.
The association module is configured to associate the interaction content with the interaction state of the target object.
Specifically, the interaction content includes a study subject obtained by recognizing at least one of the writing image and the book image; the device further includes a first time determination module and a second time determination module.
The first time determination module is configured to determine the learning time of the study subject.
The second time determination module is configured to recognize each exercise in the writing image and determine the writing time of each exercise.
The association module is further configured to associate the interaction state of the target object with at least one of the learning time and the writing time of the exercises.
By the way that interaction content to be associated with interaction mode in the present embodiment, i.e., when will learn subject and section's purpose study
Between it is associated with interaction mode, understand each section's purpose learning state this makes it possible to detailed.
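The identification step above can be sketched as follows. The OCR call is stubbed out (in practice a library such as Tesseract would supply the recognized text), and the subject-keyword mapping is an illustrative assumption, not part of the patent.

```python
SUBJECT_KEYWORDS = {            # illustrative mapping, not from the patent
    "math": ["equation", "algebra", "geometry"],
    "english": ["grammar", "vocabulary", "essay"],
    "physics": ["force", "velocity", "energy"],
}

def identify_subject(ocr_text: str):
    """Infer the study subject from text recognized in a written/books image."""
    text = ocr_text.lower()
    for subject, keywords in SUBJECT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return subject
    return None  # no known subject keyword found

def associate(interaction_state: str, ocr_text: str) -> dict:
    """Associate the identified interaction content with the interaction mode."""
    return {"subject": identify_subject(ocr_text), "state": interaction_state}
```

For example, `associate("absorbed", "Solve the equation 2x+3=7")` would record the absorbed state against the math subject, which is the association the relating module performs.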
In still another embodiment of the present invention, the device further includes a sending module, the sending module being used for sending the interaction mode of the target object to a specified remote end at preset time intervals using a preset sending method.
In the present embodiment, the method also includes sending the interaction mode of the target object to the specified remote end at preset time intervals using the preset sending method. Specifically, the interaction mode of the target object from the previous evening may be sent in the form of a report to a parent or teacher terminal every morning. In this way, parents and teachers can learn in a timely manner the completion process of the homework for each subject and the time at which each subject's homework was completed, which helps parents and teachers understand the after-class interaction mode of students.
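The daily report described above can be sketched as follows. The report layout and field names are illustrative assumptions; the transport to the parent/teacher terminal (the preset sending method) is left unspecified, as in the text.

```python
from datetime import datetime, timedelta

def build_daily_report(states, for_date: datetime) -> str:
    """Format the previous evening's interaction states as a plain-text report.

    `states` holds (time-of-day, state) pairs recorded the evening before;
    each morning this report would be sent to the parent/teacher terminal.
    """
    day = (for_date - timedelta(days=1)).strftime("%Y-%m-%d")
    lines = [f"Interaction report for {day}:"]
    lines += [f"  {t}  {state}" for t, state in states]
    return "\n".join(lines)

# Example: the morning of 2019-08-20 reports the evening of 2019-08-19.
report = build_daily_report(
    [("19:05", "absorbed"), ("19:40", "disturbed"), ("20:10", "absorbed")],
    datetime(2019, 8, 20),
)
```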
One embodiment of the invention provides a storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the following method:
Step 1: obtaining a scene image, associated with a target object, of the scene in which the target object is located;
Step 2: extracting the status information of the target object from the scene image;
Step 3: determining the interaction mode of the target object according to the status information.
For the specific implementation of the above method steps, reference may be made to any of the above embodiments of the interaction mode analysis method, which is not repeated here.
The embodiment of the present invention acquires a scene image, extracts the status information of the student from the scene image, and then determines the interaction mode (learning state) of the student according to the status information, so that related personnel such as parents or teachers can understand the after-class learning state of students.
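The three steps executed by the stored program can be sketched as a minimal pipeline. The detector is stubbed and the status fields (gaze-on-task, nearby interference) are assumptions standing in for the sight angle and environmental information described earlier.

```python
from dataclasses import dataclass

@dataclass
class StatusInfo:
    gaze_on_task: bool          # stand-in for the sight-angle analysis
    interference_nearby: bool   # stand-in for the environmental information

def extract_status(scene_image: bytes) -> StatusInfo:
    """Step 2 placeholder: a real system would run detectors on the image."""
    return StatusInfo(gaze_on_task=True, interference_nearby=False)

def determine_state(info: StatusInfo) -> str:
    """Step 3: map the status information to an interaction mode."""
    if info.gaze_on_task:
        return "absorbed"
    return "disturbed" if info.interference_nearby else "non-absorbed"

def analyze(scene_image: bytes) -> str:
    """Steps 1-3 chained: scene image in, interaction (learning) state out."""
    return determine_state(extract_status(scene_image))
```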
The above embodiments are only exemplary embodiments of the present invention and are not intended to limit the present invention; the protection scope of the present invention is defined by the claims. Those skilled in the art can make various modifications or equivalent replacements to the present invention within the spirit and scope of the present invention, and such modifications or equivalent replacements shall also be regarded as falling within the protection scope of the present invention.
Claims (10)
1. An interaction mode analysis method, characterized by comprising the following steps:
obtaining a scene image, associated with a target object, of the scene in which the target object is located;
extracting the status information of the target object from the scene image;
determining the interaction mode of the target object according to the status information.
2. The method as described in claim 1, characterized in that the status information of the target object comprises: environmental information of where the target object is located, and the sight angle of the target object; wherein the environmental information of where the target object is located comprises: interference source information around the target object;
the determining the interaction mode of the target object according to the status information comprises:
analyzing, based on the environmental information and the sight angle, whether the interaction mode of the target object is an absorbed state, a non-absorbed state, or a disturbed state.
3. The method as described in claim 1, characterized in that the status information of the target object comprises: the sight angle of the target object;
the determining the interaction mode of the target object according to the status information comprises:
analyzing, according to the sight angle, whether the interaction mode of the target object is an absorbed state or a non-absorbed state.
4. The method as claimed in claim 2 or claim 3, characterized in that obtaining the sight angle of the target object from the scene image comprises:
obtaining at least one of the head angle and the pupil angle of the target object from the scene image;
obtaining the sight angle of the target object based on at least one of the head angle and the pupil angle.
5. The method as described in claim 1, characterized in that the method further comprises:
obtaining the sound information at the moment corresponding to the scene image;
the determining the interaction mode of the target object according to the status information further comprises: determining, according to the sound information and the status information, that the interaction mode of the target object is an absorbed state, a non-absorbed state, or a disturbed state.
6. The method as described in claim 1, characterized in that the method further comprises:
obtaining at least one of a written image and a books image from the scene image;
identifying at least one of the written image and the books image using optical character recognition technology, to determine the interaction content;
associating the interaction content with the interaction mode of the target object.
7. The method as claimed in claim 6, characterized in that the interaction content comprises a study subject identified from at least one of the written image and the books image; the method further comprises:
determining the learning time of the study subject;
identifying each exercise in the written image, and determining the writing time of each exercise;
associating the interaction mode of the target object with at least one of the learning time and the writing time of the exercises.
8. The method as described in claim 1, characterized in that the method further comprises sending the interaction mode of the target object to a specified remote end at preset time intervals using a preset sending method.
9. An interaction mode analysis device, comprising:
a first acquisition module, used for obtaining a scene image, associated with a target object, of the scene in which the target object is located;
an extraction module, used for extracting the status information of the target object from the scene image;
a determining module, used for determining the interaction mode of the target object according to the status information.
10. A storage medium, characterized in that a computer program is stored on the storage medium, and the computer program, when executed by a processor, implements the steps of the interaction mode analysis method as described in any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910764619.2A CN110503024A (en) | 2019-08-19 | 2019-08-19 | A kind of interaction mode analysis method, device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910764619.2A CN110503024A (en) | 2019-08-19 | 2019-08-19 | A kind of interaction mode analysis method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110503024A true CN110503024A (en) | 2019-11-26 |
Family
ID=68588456
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910764619.2A Pending CN110503024A (en) | 2019-08-19 | 2019-08-19 | A kind of interaction mode analysis method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110503024A (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106228982A (en) * | 2016-07-27 | 2016-12-14 | 华南理工大学 | A kind of interactive learning system based on education services robot and exchange method |
CN106599881A (en) * | 2016-12-30 | 2017-04-26 | 首都师范大学 | Student state determination method, device and system |
WO2019035007A1 (en) * | 2017-08-15 | 2019-02-21 | American Well Corporation | Methods and apparatus for remote camera control with intention based controls and machine learning vision state management |
CN108281052A (en) * | 2018-02-09 | 2018-07-13 | 郑州市第十中学 | A kind of on-line teaching system and online teaching method |
CN108682189A (en) * | 2018-04-20 | 2018-10-19 | 南京脑桥智能科技有限公司 | A kind of learning state confirmation system and method |
CN109242736A (en) * | 2018-09-27 | 2019-01-18 | 广东小天才科技有限公司 | A kind of method and system for the study situation for assisting teacher to understand student |
CN109523852A (en) * | 2018-11-21 | 2019-03-26 | 合肥虹慧达科技有限公司 | The study interactive system and its exchange method of view-based access control model monitoring |
CN109819402A (en) * | 2019-01-08 | 2019-05-28 | 李超豪 | For supervising the method and its system that improve study habit |
CN110033400A (en) * | 2019-03-26 | 2019-07-19 | 深圳先进技术研究院 | A kind of classroom monitoring analysis system |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111935453A (en) * | 2020-07-27 | 2020-11-13 | 浙江大华技术股份有限公司 | Learning supervision method and device, electronic equipment and storage medium |
CN113298597A (en) * | 2020-08-06 | 2021-08-24 | 阿里巴巴集团控股有限公司 | Object heat analysis system, method and device |
CN112053224A (en) * | 2020-09-02 | 2020-12-08 | 中国银行股份有限公司 | Business processing monitoring implementation method, device and system |
CN112053224B (en) * | 2020-09-02 | 2023-08-18 | 中国银行股份有限公司 | Service processing monitoring realization method, device and system |
CN112613780A (en) * | 2020-12-29 | 2021-04-06 | 北京市商汤科技开发有限公司 | Learning report generation method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110503024A (en) | A kind of interaction mode analysis method, device and storage medium | |
Manuguerra et al. | Promoting student engagement by integrating new technology into tertiary education: The role of the iPad | |
US9520070B2 (en) | Interactive learning system and method | |
AU2016243058A1 (en) | System and method for adaptive assessment and training | |
CN110598770A (en) | Multi-space fusion learning environment construction method and device | |
US10056002B2 (en) | Technologies for students evaluating teachers | |
Ogata et al. | Ubiquitous Learning Log: What if we can log our ubiquitous learning | |
Adie et al. | Fidelity of summative performance assessment in initial teacher education: The intersection of standardisation and authenticity | |
CN110795917A (en) | Personalized handout generation method and system, electronic equipment and storage medium | |
Önger et al. | An investigation into digital literacy views of social studies preservice teachers in the context of authentic learning | |
Crompton | Using context-aware ubiquitous learning to support students' understanding of geometry | |
Ferster et al. | Automated formative assessment as a tool to scaffold student documentary writing | |
CN112396897A (en) | Teaching system | |
Parwata et al. | The Development of Digital Teaching to Improve the Quality of Student Learning in the Revolution 4.0 Era at Warmadewa University | |
Bernardo | Reading what’s beyond the textbooks: Documentary films as student projects in college reading courses | |
Woodbury Jr | The Effects of a Training Session on Teacher Knowledge, Perceptions, and Implementation of Assistive Technology in Secondary Schools. | |
Pourreau et al. | Perceptions of K-12 online teaching endorsement program effectiveness in Georgia: A case study | |
Hassan et al. | Artificial intelligence in educational examinations | |
US20220319152A1 (en) | Methods for generating cognitive building blocks | |
Krumova | Research on LMS and KPIs for Learning Analysis in Education | |
Salu et al. | Model Operating of Field Experience Program in Improving of Novice English Teachers | |
KR101023901B1 (en) | System and method for learning management | |
Mroz | " Off the Radar:" The Framing of Speech, Language and Communication in the Description of Children with Special Educational Needs in Literacy. | |
LOMBARDI | CHAPTER FOUR MAKING PROGRESS IN ENGLISH SPEAKING VISIBLE: A CLASS ROUTINE TO RAISE LANGUAGE LEARNING AWARENESS | |
Razak et al. | The development of M-LODGE for training instructional designers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |