CN111008914A - Object concentration analysis method and device, electronic terminal and storage medium - Google Patents

Object concentration analysis method and device, electronic terminal and storage medium

Info

Publication number
CN111008914A
CN111008914A (application CN201811166640.4A)
Authority
CN
China
Prior art keywords
window
learning
concentration
information
concentration degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811166640.4A
Other languages
Chinese (zh)
Inventor
郑文丞
张建华
姜远航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Wind Creation Information Consulting Co Ltd
Original Assignee
Shanghai Wind Creation Information Consulting Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Wind Creation Information Consulting Co Ltd filed Critical Shanghai Wind Creation Information Consulting Co Ltd
Priority to CN201811166640.4A priority Critical patent/CN111008914A/en
Publication of CN111008914A publication Critical patent/CN111008914A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/20 Education
    • G06Q 50/205 Education administration or guidance
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 Electrically-operated educational appliances
    • G09B 5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B 5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Abstract

The invention provides an object concentration analysis method and device, an electronic terminal and a storage medium. Window information of a learning window displayed on an object terminal is acquired; window size information, window occlusion information and/or media playing parameter information are extracted from the window information; and the concentration of the user of the learning window is obtained accordingly. The invention can determine the concentration of an audience object simply and flexibly, thereby improving learning efficiency.

Description

Object concentration analysis method and device, electronic terminal and storage medium
Technical Field
The invention relates to the technical field of application program window detection, and in particular to an object concentration analysis method and device, an electronic terminal and a storage medium.
Background
With the continuous progress of information technology and computer technology, internet distance education in forms such as online teaching and live classrooms has developed rapidly. However, because internet distance education does not put teachers and students in face-to-face contact and communication as conventional education does, a teacher cannot tell whether students learning over the internet are really paying attention, for example whether a student is chatting online while the teaching content merely plays in the background. Although some teachers lock the teaching content to prevent other application software from being used during learning, this also restricts the students' freedom, tends to provoke perfunctory or resistant behaviour, and does not achieve the desired effect.
Therefore, how to simply and flexibly judge the concentration of audience objects in internet distance teaching activities such as online teaching or live classrooms is a problem that urgently needs to be solved in this field.
Disclosure of Invention
In view of the above shortcomings of the prior art, an object concentration analysis method and device, an electronic terminal and a storage medium are provided to solve the problem that the concentration of an audience object cannot be simply and flexibly determined in existing internet distance teaching activities such as online teaching and live classrooms.
To achieve the above and other related objects, the present invention provides an object concentration analysis method, the method comprising: acquiring window information of a learning window displayed on an object terminal; extracting window size information, window occlusion information and/or media playing parameter information from the window information; and obtaining the concentration of the user of the learning window according to the window size information, the window occlusion information and/or the media playing parameter information.
In an embodiment of the present invention, the window size information is acquired by judging the size of the window from the current visual state of the learning window; the visual state includes one or a combination of maximized, normal and minimized.
In an embodiment of the present invention, the window occlusion information is acquired by judging whether the learning window is occluded from the current topmost information and/or window focus information of the learning window.
In an embodiment of the present invention, the window occlusion information is acquired by judging whether the learning window is occluded from the current topmost information and/or window focus information of the learning window, and the concentration of the user of the learning window is obtained from the window size information and the window occlusion information as follows: when the visual state is minimized, the concentration is judged to be low; when the visual state is maximized and the learning window is not occluded, the concentration is judged to be high; when the visual state is maximized and the learning window is occluded, the concentration is obtained from the occlusion information; when the visual state is normal (neither minimized nor maximized) and the learning window is not occluded, the concentration is judged to be high; and when the visual state is normal and the learning window is occluded, the concentration is obtained from the occlusion information.
In an embodiment of the present invention, the occlusion information includes one or a combination of the duration, frequency, position and extent of the learning window being occluded, and the concentration is obtained from the occlusion information in one of the following ways: A) judging the concentration according to the duration for which the learning window is occluded, the duration being inversely proportional to the concentration; B) judging the concentration according to the frequency with which the learning window is occluded within a certain time, the frequency being inversely proportional to the concentration; C) judging the concentration according to the extent to which the learning window is occluded, the extent being inversely proportional to the concentration; D) judging the concentration according to the number of occluded preset key learning positions in the learning window, a preset key learning position being a person or object identified by image recognition as directly related to the knowledge being delivered to the user, the number being inversely proportional to the concentration; E) judging the concentration by fusing two or more of A), B), C) and D) with set weights.
In an embodiment of the present invention, the media playing parameter information includes one or a combination of the volume, brightness and playing state of the media played in the learning window; the playing state includes playing, paused and stopped.
In an embodiment of the present invention, the concentration is obtained from the media playing parameter information in one of the following ways: F) judging the concentration according to the volume of the learning window, the volume being directly proportional to the concentration; G) judging the concentration according to the brightness of the learning window, the brightness being directly proportional to the concentration; H) judging the concentration according to the playing state of the learning window, the concentration being judged high when the playing state is playing and low when it is paused or stopped; I) judging the concentration by fusing two or more of F), G) and H) with set weights.
To achieve the above and other related objects, the present invention provides an object concentration analysis device, comprising: an acquisition module for acquiring the learning window information of the object terminal; and a processing module for extracting window size information and/or window occlusion information from the learning window information and obtaining the concentration, classified as high or low, according to the window size information and/or the window occlusion information.
To achieve the above and other related objects, the present invention provides an electronic terminal, comprising a processor and a memory; the memory is used for storing a computer program, and the processor is used for implementing the above object concentration analysis method when executing the computer program stored in the memory.
To achieve the above and other related objects, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the object concentration analysis method described above.
As described above, the object concentration analysis method and device, electronic terminal and storage medium of the present invention acquire the window information of the learning window displayed on the object terminal, extract window size information, window occlusion information and/or media playing parameter information from the window information, and obtain the concentration of the user of the learning window accordingly. This has the following beneficial effect: the concentration of an audience object can be determined simply and flexibly, thereby improving learning efficiency.
Drawings
Fig. 1 is a schematic flow chart illustrating an object concentration analysis method according to an embodiment of the present invention.
Fig. 2 is a schematic block diagram of an object concentration analysis apparatus according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of an electronic terminal according to an embodiment of the invention.
Description of the element reference numerals
Method steps S101 to S103
201 acquisition module
202 processing module
301 processor
302 memory
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments only illustrate the basic idea of the present invention; they show only the components related to the present invention rather than the number, shape and size of components in an actual implementation. The type, quantity and proportion of components in an actual implementation may vary freely, and the component layout may be more complicated.
The object concentration analysis method and device, electronic terminal and storage medium of the present invention are intended to solve the problem that the prior art cannot accurately and efficiently judge the concentration of an audience object during online education or training.
Fig. 1 is a schematic flow chart showing an object concentration analysis method according to an embodiment of the present invention. The method comprises the following steps:
step S101: window information of a learning window displayed on an object terminal is acquired.
In an embodiment of the present invention, the object terminal is any one of a desktop computer, a notebook computer, a tablet computer and a mobile terminal, and the learning window is a window or interface that displays the teaching content of common online teaching or live classrooms, delivered as a live broadcast or recorded video, through a media carrier such as an application program, a client, a software system, a web page or an applet.
In an embodiment of the present invention, the learning window may be an application software window, a client running window, a web page or a video/live window within a web page, or a window handed over to an independent player during video playback or live broadcast.
It should be noted that the learning window of the present invention is not limited to a video or live window; it may also include other display areas, such as a live window showing the lecturer or an information area about the lecturer; a communication area, such as the areas where teachers and students chat in text and exchange questions and answers; or areas of the media carrier such as an operation interface, a menu bar and a toolbar. These areas are all common parts of such carriers and will be appreciated by those skilled in the art.
In an embodiment of the present invention, the window information of the learning window displayed on the object terminal is acquired by adding detection code to the program that runs the learning window of the media carrier, or by adding a detection program to the files run by the learning window, so that window size information, window occlusion information, media playing parameter information and the like are obtained through the operation of the media carrier.
Alternatively, a window detection file or program for the learning window may be installed or added on the system of the object terminal to acquire the learning window information. For example, the volume of individual applications can already be viewed on a conventional desktop or tablet computer.
For example, a student of a learning platform learns through a live broadcast in a learning program opened on a computer, and a detection program for the learning window is preset or installed in the computer system. During the live broadcast, the computer system can detect information about the learning window of the learning application in real time or periodically, such as whether the learning window is maximized or minimized, whether it is occluded by another application such as a chat program, whether its volume has been turned off in order to use other applications, and whether the live playback has been paused or interrupted. Detecting this window information can truly and objectively reflect the concentration of the user of the learning window.
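By way of a purely illustrative sketch rather than the implementation of the present invention, such a detection program on a Windows desktop might poll the learning window periodically with standard Win32 calls; the window title used below to locate the learning window is a hypothetical placeholder.

```cpp
// Illustrative sketch only: periodically sample the state of a learning window
// located by a hypothetical title, using standard Win32 calls.
#include <windows.h>
#include <iostream>

int main() {
    // Hypothetical title; a real detector would identify the learning window
    // of the media carrier through some reliable handle or process.
    const wchar_t* kLearningWindowTitle = L"Live Classroom";

    for (;;) {
        HWND hwnd = FindWindowW(nullptr, kLearningWindowTitle);
        if (hwnd == nullptr) {
            std::wcout << L"learning window not found" << std::endl;
        } else {
            bool minimized  = IsIconic(hwnd) != FALSE;          // visual state: minimized
            bool maximized  = IsZoomed(hwnd) != FALSE;          // visual state: maximized
            bool foreground = (GetForegroundWindow() == hwnd);  // currently active window
            std::wcout << L"minimized=" << minimized
                       << L" maximized=" << maximized
                       << L" foreground=" << foreground << std::endl;
        }
        Sleep(5000);  // periodic detection, e.g. every 5 seconds
    }
}
```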
Step S102: window size information, window occlusion information and/or media playing parameter information are extracted from the window information.
In an embodiment of the present invention, existing internet distance teaching activities in the form of online teaching or live classrooms mostly use video or live broadcast, so the window information of the learning window of the present invention includes window size information, window occlusion information, media playing parameter information and the like.
In an embodiment of the present invention, the window size information is acquired by judging the size of the window from the current visual state of the learning window; the visual state includes one or a combination of maximized, normal and minimized.
In an embodiment of the present invention, the normal visual state is any visual state other than maximized and minimized; a window stretched to an arbitrary size is treated as normal.
It should be noted that when the object learns through an application program on a mobile phone, the phone system usually cannot display two or more programs at the same time, so the window size information only takes the maximized and minimized values. This does not affect the idea set forth in the present invention; it simply means that the normal visual state does not appear when the method is applied on a mobile phone. Of course, if learning on the mobile phone takes place in a web page, the playing or live window inside the web page still follows the method of judging the normal visual state described in the present invention.
In an embodiment of the present invention, the visual state of the learning window can be detected in a Windows system program by calling related window library functions such as MinimizeAppBtnClicked, MaximizeAppBtnClicked and NormalAppBtnClicked, or by judging the window's length and width.
For example, a procedure for determining the visible state of a window in a Windows system is given in the original filing as an image (Figure BDA0001821283280000051).
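Since that listing is available only as an image, the following is a minimal illustrative sketch, not the original listing, of how the visual state might be classified on a Windows system using the standard GetWindowPlacement call.

```cpp
// Illustrative sketch: classify the learning window's visual state as
// maximized, minimized, or normal via GetWindowPlacement.
#include <windows.h>

enum class VisualState { Maximized, Minimized, Normal };

VisualState GetVisualState(HWND learningWindow) {
    WINDOWPLACEMENT wp;
    wp.length = sizeof(wp);                 // must be set before the query
    if (GetWindowPlacement(learningWindow, &wp)) {
        if (wp.showCmd == SW_SHOWMAXIMIZED) return VisualState::Maximized;
        if (wp.showCmd == SW_SHOWMINIMIZED) return VisualState::Minimized;
    }
    return VisualState::Normal;             // any freely resized window counts as normal
}
```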
Of course, for other media carriers such as web pages and apps, the corresponding detection can be implemented by calling or writing library functions in the corresponding programming language, as those skilled in the art will understand.
In an embodiment of the present invention, the window occlusion information is acquired by judging whether the learning window is occluded from the current topmost information and/or window focus information of the learning window.
In an embodiment of the present invention, the topmost information can further distinguish the currently active window from windows that are not currently active.
For example, by default in a Windows system, the minimize/restore/close controls of a window that is not the current window are drawn gray, while those of the current window are drawn blue. The active window accepts keyboard input, and an inactive window does not. In short, the window the mouse is working in is the current window, i.e. the currently active window, i.e. the currently topmost window.
In an embodiment of the present invention, the current topmost information and the window focus information of the learning window are obtained through two different coding approaches; either one alone can determine whether the learning window is occluded, or the two can be used together to increase the accuracy of the determination.
In addition, under the idea set forth in the present invention, the method for determining whether the window is occluded is not limited to the above topmost information and/or window focus information; other methods capable of determining whether a window is occluded also fall within the scope of the present invention.
For example, a procedure for determining whether a window is occluded according to window focus information in a Windows system is given in the original filing as an image (Figure BDA0001821283280000061).
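Again, since that listing is available only as an image, the following is a minimal illustrative sketch, not the original listing, of one way the topmost (foreground) and focus cues might be combined to judge occlusion with standard Win32 calls.

```cpp
// Illustrative sketch: treat the learning window as likely occluded when it is
// neither the foreground (currently active) window nor the owner of the
// keyboard focus of the foreground thread.
#include <windows.h>

bool LikelyOccluded(HWND learningWindow) {
    // Topmost/active-window cue.
    bool isForeground = (GetForegroundWindow() == learningWindow);

    // Window-focus cue for the foreground thread (idThread 0 = foreground thread).
    GUITHREADINFO gti = {};
    gti.cbSize = sizeof(gti);
    bool hasFocus = false;
    if (GetGUIThreadInfo(0, &gti) && gti.hwndFocus != nullptr) {
        hasFocus = (gti.hwndFocus == learningWindow ||
                    IsChild(learningWindow, gti.hwndFocus));
    }
    return !isForeground && !hasFocus;
}
```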
Of course, for other media carriers such as web pages and apps, the corresponding detection can be implemented by calling or writing library functions in the corresponding programming language, as those skilled in the art will understand.
In an embodiment of the present invention, the media playing parameter information includes one or a combination of the volume, brightness and playing state of the media played in the learning window; the playing state includes playing, paused and stopped.
The volume or brightness of the media played in the learning window can be acquired by reading the corresponding background value at its offset, or the program can call a volume or brightness library function to obtain the corresponding value.
The playing state can be judged from the state of the relevant playback control, or from whether the video or live data stream is running normally, paused or stopped.
Step S103: the concentration of the user of the learning window is obtained according to the window size information, the window occlusion information and/or the media playing parameter information.
In an embodiment of the present invention, the window occlusion information is acquired by judging whether the learning window is occluded from the current topmost information and/or window focus information of the learning window, and the concentration of the user of the learning window is obtained from the window size information and the window occlusion information.
In an embodiment of the present invention, the window size information and the window occlusion information are considered together, scenario by scenario, so that the object's concentration can be judged reasonably.
In an embodiment of the present invention, when the visual state is minimized, the concentration is judged to be low.
For example, when the student minimizes the learning window, it can be assumed that the student wants to open or look for other application software and has little interest in, or insufficient attention for, the learning window, so it is reasonable to conclude that the student is not concentrating at this time.
In an embodiment of the present invention, when the visual state is maximized and the learning window is not occluded, the concentration is judged to be high.
For example, when the student maximizes the learning window, it can be assumed that the student wants to enlarge the window to see it more clearly. If, in addition, the maximized learning window is not covered by any other application, the student can be considered to have no interest in other applications for the moment, and it is reasonable to conclude that the student's concentration is high at this time.
In an embodiment of the present invention, when the visual state is normal (neither minimized nor maximized) and the learning window is not occluded, the concentration is judged to be high.
For example, when the student leaves the learning window in the normal state, i.e. neither minimized nor maximized, the concentration cannot be inferred from the window size alone. If the learning window is detected to be unoccluded, i.e. on top and the currently active window, it can be assumed that the software the student last interacted with is the learning window or learning software, and there is reason to believe that the student's concentration is high at this time.
In an embodiment of the present invention, when the visual state is maximized and the learning window is occluded, the concentration is obtained from the occlusion information, and/or when the visual state is normal and the learning window is occluded, the concentration is obtained from the occlusion information.
For example, when the student has maximized the learning window or left it in the normal state and the learning window is detected to be occluded, the occluding window may be one that the computer or mobile phone popped up without the student actively opening it, such as a notification from antivirus software on the computer or a text message or call received on the phone. In such cases it cannot be concluded that the object's concentration is low. Since the occluding window may not have been opened actively, the occlusion needs to be analysed further.
In an embodiment of the present invention, the occlusion information includes one or a combination of the duration, frequency, position and extent of the learning window being occluded.
In an embodiment of the present invention, the duration for which the learning window is occluded is the length of time it is continuously occluded; using the duration better handles occluding windows that were not opened actively. For example, notifications from antivirus software on a computer, or text messages and calls received on a mobile phone, pop up and are displayed for only a certain time; if that display time does not reach the duration threshold, the occlusion is not treated as low-concentration behaviour.
As another example, when the object chats through chat software while the learning window is open, the chat window occludes the learning window while the object types and is hidden again once the chat message is sent, so the object's true concentration cannot be detected from the duration alone. If the frequency with which the learning window is occluded is also detected, for example by counting the number of occlusions within one minute, the object's concentration can be known more accurately.
In addition, combining the position and extent of the occlusion makes the analysis of the learning window's occlusion more accurate and objective.
The concentration is obtained from the occlusion information in one of the following ways:
A) judging the concentration according to the duration for which the learning window is occluded, the duration being inversely proportional to the concentration.
In an embodiment of the present invention, the longer the duration, the lower the judged concentration, and vice versa. When a time threshold is preset, the concentration may be judged high if the duration is below the threshold and low if it is above the threshold.
For example, the preset time threshold may be 20 seconds: when the learning window has been occluded for more than 20 seconds, the object's concentration is judged to be low.
B) judging the concentration according to the frequency with which the learning window is occluded within a certain time, the frequency being inversely proportional to the concentration.
In an embodiment of the present invention, the more frequent the occlusion, the lower the judged concentration, and vice versa. When a frequency threshold is preset, the concentration of the user of the learning window may be judged high if the frequency is below the threshold and low if it is above the threshold.
For example, the preset frequency threshold may be 5 times per minute: if the learning window is occluded more than 5 times within one minute, the concentration of the user of the learning window is judged to be low.
C) judging the concentration according to the extent to which the learning window is occluded, the extent being inversely proportional to the concentration.
In an embodiment of the present invention, the larger the occluded extent, the lower the judged concentration, and vice versa. When an extent threshold is preset, the concentration may be judged high if the extent is below the threshold and low if it is above the threshold.
For example, the preset extent threshold may be 50% of the learning window's area: when more than half of the learning window is occluded, the concentration of the user of the learning window is judged to be low.
D) judging the concentration according to the number of occluded preset key learning positions in the learning window; a preset key learning position is a person or object identified by image recognition as directly related to the knowledge being delivered to the user; the number is inversely proportional to the concentration.
In an embodiment of the present invention, the more key learning positions are occluded, the lower the judged concentration, and vice versa. When a number threshold is preset, the concentration may be judged high if the number of occluded key learning positions is below the threshold and low if it is above the threshold.
For example, a key learning position may be a video window in the learning window dedicated to the teacher, such as the small video window showing the other party in a live broadcast or video chat, or a text communication area used for teacher-student interaction. With a preset number threshold of, for example, 2, when more than 2 key learning positions in the learning window are occluded, the concentration of the user of the learning window is judged to be low.
E) judging the concentration by fusing two or more of A), B), C) and D) with set weights.
In an embodiment of the present invention, any two or more of the duration, frequency, position and extent of occlusion of the learning window may be fused with weights for a comprehensive determination, so as to improve the accuracy of the judgment.
For example, A), B), C) and D) may each be weighted at 25% to judge the occlusion more objectively and obtain the concentration: when more than half of the weighted results indicate high concentration, the fused concentration is judged to be high; otherwise it is judged to be low.
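As a rough illustration of this fusion (the thresholds and the equal 25% weights below are the example values given above, not requirements of the method), a sketch could look as follows:

```cpp
// Illustrative sketch of E): each of A)-D) casts a "high concentration" vote
// using the example thresholds above, the votes are weighted at 25% each, and
// the fused result is high when the weighted share of high votes exceeds half.
#include <array>

struct OcclusionInfo {
    double durationSeconds;      // continuous occlusion time of the learning window
    int    occlusionsPerMinute;  // occlusion count within the last minute
    double occludedAreaRatio;    // occluded area / window area, in [0, 1]
    int    blockedKeyPositions;  // occluded preset key learning positions
};

bool FusedHighConcentration(const OcclusionInfo& o) {
    const std::array<bool, 4> highVotes = {
        o.durationSeconds     <= 20.0,  // A) example time threshold: 20 seconds
        o.occlusionsPerMinute <= 5,     // B) example frequency threshold: 5 times/minute
        o.occludedAreaRatio   <= 0.5,   // C) example extent threshold: 50% of the window
        o.blockedKeyPositions <= 2      // D) example number threshold: 2 key positions
    };
    const double weight = 0.25;         // equal 25% weights
    double highShare = 0.0;
    for (bool v : highVotes) {
        if (v) highShare += weight;
    }
    return highShare > 0.5;             // more than half => high concentration
}
```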
In an embodiment of the present invention, the concentration is obtained from the media playing parameter information in one of the following ways:
F) judging the concentration according to the volume of the learning window, the volume being directly proportional to the concentration.
In an embodiment of the present invention, the higher the volume, the higher the judged concentration, and vice versa. When a volume threshold is preset, the concentration may be judged high if the volume is above the threshold and low if it is below the threshold.
For example, learning through video or live broadcast requires sound, and the louder the sound, the more the student wants to hear it clearly. If the learning window is open for learning but other video or music software is open as well, the student may turn down the volume of the learning window, or even mute it, in order to hear the other software. In that case the learning window is still detected as unoccluded, yet the student is not actually listening. Detecting the volume as well therefore makes the judgment of the student's concentration more truthful and objective.
In an embodiment of the present invention, the media playing parameter information may be used together with the window size information and the window occlusion information to determine the concentration of the user of the learning window, or it may determine the concentration on its own.
For example, if the volume of the learning window is 0 or very low for whatever reason, then because learning takes the form of a live broadcast or video with few subtitles, it is highly probable that the user of the learning window is not paying attention. The volume in the media playing parameter information can therefore be detected on its own to determine the user's concentration.
G) judging the concentration according to the brightness of the learning window, the brightness being directly proportional to the concentration.
In an embodiment of the present invention, the higher the brightness, the higher the judged concentration, and vice versa. When a brightness threshold is preset, the concentration may be judged high if the brightness is above the threshold and low if it is below the threshold.
For example, when the learning window is too dark, its learning effect is greatly reduced; from this perspective, the concentration of the user of the learning window can be judged to be low.
H) judging the concentration according to the playing state of the learning window: when the playing state is playing, the concentration is judged to be high; when the playing state is paused or stopped, the concentration is judged to be low.
In an embodiment of the present invention, the playing state of the window is normally playing. When the object pauses or stops playback, for example because a temporary matter needs to be handled or the content being played is not what was expected, and leaves the learning window open while doing other things or not learning, the concentration of the user of the learning window can be effectively judged from the playing state of the learning window.
I) judging the concentration by fusing two or more of F), G) and H) with set weights.
In an embodiment of the present invention, any two or more of the volume, brightness and playing state of the learning window may be fused with weights for a comprehensive determination, so as to improve the accuracy of the judgment.
For example, F), G) and H) may be weighted at 40%, 30% and 30% respectively to judge the media playing parameters more objectively and obtain the concentration: when the weighted proportion judged as high concentration exceeds 50%, the fused concentration of the user of the learning window is judged to be high; otherwise it is judged to be low.
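Analogously, a sketch of this fusion with the example weights of 40%, 30% and 30% might look as follows; the volume and brightness thresholds in it are hypothetical placeholders, since the text does not fix them.

```cpp
// Illustrative sketch of I): F), G) and H) vote high or low and are fused with
// the example weights 40%, 30% and 30%; the result is high concentration when
// the weighted share of high votes exceeds 50%.
enum class PlayState { Playing, Paused, Stopped };

struct MediaParams {
    double    volume;      // normalized volume, e.g. 0.0 to 1.0
    double    brightness;  // normalized brightness, e.g. 0.0 to 1.0
    PlayState state;       // current playing state of the learning window
};

bool FusedHighConcentrationFromMedia(const MediaParams& m) {
    const double kVolumeThreshold     = 0.2;  // hypothetical placeholder threshold
    const double kBrightnessThreshold = 0.2;  // hypothetical placeholder threshold

    double highShare = 0.0;
    if (m.volume > kVolumeThreshold)         highShare += 0.4;  // F) weight 40%
    if (m.brightness > kBrightnessThreshold) highShare += 0.3;  // G) weight 30%
    if (m.state == PlayState::Playing)       highShare += 0.3;  // H) weight 30%
    return highShare > 0.5;                   // > 50% => high concentration
}
```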
In an embodiment of the present invention, by using the object concentration analysis method of the present invention, the instructor client or the platform client can obtain the concentration of the user of the learning window and can score or prompt the user in time according to the concentration, so as to improve the learning efficiency of internet distance education in the form of online teaching or live classrooms.
Alternatively, after the media carrier of the learning window, such as the application software or client, obtains the concentration of the user of the learning window, it can directly push a reminder message into the learning window or the application software as the situation requires.
For example, when the concentration of the user of the learning window is found to be low, the teaching content provider's backend, or the teacher who is teaching or training, is reminded in time that the user is not concentrating. The content provider can then send a reminder message to the media carrier of the user's learning window, or a reminder can pop up automatically on the audience object's client to indicate that the current concentration is low, so that the user can adjust their concentration and learning efficiency in time during internet distance education in the form of a live classroom or online teaching.
Fig. 2 is a schematic block diagram of the object concentration analysis apparatus of the present invention. As shown in the figure, the apparatus comprises: an acquisition module 201 for acquiring window information of a learning window displayed on an object terminal; and a processing module 202 for extracting window size information, window occlusion information and/or media playing parameter information from the window information and obtaining the concentration of the user of the learning window according to the window size information, the window occlusion information and/or the media playing parameter information.
It should be noted that the division of the modules of the above apparatus is only a logical division; in an actual implementation they may be wholly or partially integrated into one physical entity or kept physically separate. These modules may all be implemented as software invoked by a processing element, all be implemented in hardware, or partly as software invoked by a processing element and partly in hardware. For example, the processing module 202 may be a separately established processing element, may be integrated into a chip of the apparatus, or may be stored in the memory of the apparatus in the form of program code that a processing element of the apparatus calls in order to execute the module's function. The other modules are implemented similarly. In addition, all or some of these modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
Fig. 3 is a schematic structural diagram of an electronic terminal according to an embodiment of the invention. As shown in the figure, the electronic terminal comprises: a processor 301 and a memory 302; the memory 302 is used for storing a computer program, and the processor 301 is used for executing the computer program stored in the memory 302 so that the electronic terminal performs the object concentration analysis method shown in Fig. 1.
In an embodiment of the invention, the external device communicatively connected to the communicator 303 may be a device providing image information of a face of a subject, such as a camera.
The processor 301 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory 302 may include a random access memory (RAM) and may further include non-volatile memory, such as at least one disk memory.
To achieve the above and other related objects, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the object concentration analysis method.
It should be noted that, as those skilled in the art can understand, all or part of the steps for implementing the above method embodiments may be completed by hardware associated with a computer program. The aforementioned computer program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks or optical disks.
In summary, the object concentration analysis method and device, electronic terminal and storage medium of the present invention acquire the window information of the learning window displayed on the object terminal, extract window size information, window occlusion information and/or media playing parameter information from the window information, and obtain the concentration of the user of the learning window accordingly. The invention makes it possible to determine the concentration of an audience object simply and flexibly, thereby improving learning efficiency. The invention therefore effectively overcomes various defects of the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (10)

1. A method for concentration analysis of an object, the method comprising:
acquiring window information of a learning window displayed on an object terminal;
extracting window size information, window occlusion information and/or media playing parameter information from the window information;
and obtaining the concentration of the user of the learning window according to the window size information, the window occlusion information and/or the media playing parameter information.
2. The method of claim 1, wherein the window size information is acquired by judging the size of the window from the current visual state of the learning window; the visual state comprises one or a combination of maximized, normal and minimized.
3. The method of claim 1, wherein the window occlusion information is acquired by judging whether the learning window is occluded from the current topmost information and/or window focus information of the learning window.
4. The method of claim 2, wherein the window occlusion information is acquired by judging whether the learning window is occluded from the current topmost information and/or window focus information of the learning window, and the concentration of the user of the learning window is obtained from the window size information and the window occlusion information as follows:
when the visual state is minimized, the concentration is judged to be low;
when the visual state is maximized and the learning window is not occluded, the concentration is judged to be high;
when the visual state is maximized and the learning window is occluded, the concentration is obtained from the occlusion information;
when the visual state is normal (neither minimized nor maximized) and the learning window is not occluded, the concentration is judged to be high;
when the visual state is normal and the learning window is occluded, the concentration is obtained from the occlusion information.
5. The object concentration analysis method of claim 4, wherein the occlusion information comprises one or a combination of the duration, frequency, position and extent of the learning window being occluded, and the concentration is obtained from the occlusion information in one of the following ways:
A) judging the concentration according to the duration for which the learning window is occluded, the duration being inversely proportional to the concentration;
B) judging the concentration according to the frequency with which the learning window is occluded within a certain time, the frequency being inversely proportional to the concentration;
C) judging the concentration according to the extent to which the learning window is occluded, the extent being inversely proportional to the concentration;
D) judging the concentration according to the number of occluded preset key learning positions in the learning window, a preset key learning position being a person or object identified by image recognition as directly related to the knowledge being delivered to the user, the number being inversely proportional to the concentration;
E) judging the concentration by fusing two or more of A), B), C) and D) with set weights.
6. The method of claim 1, wherein the media playing parameter information comprises:
one or a combination of the volume, brightness and playing state of the media played in the learning window, the playing state including playing, paused and stopped.
7. The method of claim 6, wherein the concentration is obtained from the media playing parameter information in one of the following ways:
F) judging the concentration according to the volume of the learning window, the volume being directly proportional to the concentration;
G) judging the concentration according to the brightness of the learning window, the brightness being directly proportional to the concentration;
H) judging the concentration according to the playing state of the learning window: when the playing state is playing, the concentration is judged to be high; when the playing state is paused or stopped, the concentration is judged to be low;
I) judging the concentration by fusing two or more of F), G) and H) with set weights.
8. An object concentration analysis apparatus, comprising:
an acquisition module for acquiring window information of a learning window displayed on an object terminal;
and a processing module for extracting window size information, window occlusion information and/or media playing parameter information from the window information and obtaining the concentration of the user of the learning window according to the window size information, the window occlusion information and/or the media playing parameter information.
9. An electronic terminal, comprising: a processor, and a memory;
the memory is configured to store a computer program, and the processor is configured to implement the object concentration analysis method of any one of claims 1 to 7 when executing the computer program stored in the memory.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the object concentration analysis method of any one of claims 1 to 7.
CN201811166640.4A 2018-10-08 2018-10-08 Object concentration analysis method and device, electronic terminal and storage medium Pending CN111008914A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811166640.4A CN111008914A (en) 2018-10-08 2018-10-08 Object concentration analysis method and device, electronic terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811166640.4A CN111008914A (en) 2018-10-08 2018-10-08 Object concentration analysis method and device, electronic terminal and storage medium

Publications (1)

Publication Number Publication Date
CN111008914A true CN111008914A (en) 2020-04-14

Family

ID=70111275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811166640.4A Pending CN111008914A (en) 2018-10-08 2018-10-08 Object concentration analysis method and device, electronic terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111008914A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111708674A (en) * 2020-06-16 2020-09-25 百度在线网络技术(北京)有限公司 Method, device, equipment and storage medium for determining key learning content
CN111782029A (en) * 2020-07-09 2020-10-16 傅建玲 Electronic course learning supervising and urging method and system
CN112131977A (en) * 2020-09-09 2020-12-25 湖南新云网科技有限公司 Learning supervision method and device, intelligent equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination