WO2019218427A1 - Method and apparatus for detecting degree of attention based on comparison of behavior characteristics - Google Patents

Method and apparatus for detecting degree of attention based on comparison of behavior characteristics

Info

Publication number
WO2019218427A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
behavior
information
user behavior
standard
Prior art date
Application number
PCT/CN2018/092786
Other languages
French (fr)
Chinese (zh)
Inventor
陈鹏宇
卢炀
赵鹏祥
Original Assignee
深圳市鹰硕技术有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市鹰硕技术有限公司
Publication of WO2019218427A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition

Definitions

  • The present disclosure relates to the field of computer technology, and in particular to an attention detection method, apparatus, electronic device, and computer readable storage medium based on behavior feature comparison.
  • In teaching, quickly and accurately detecting students' attention to the teaching content can both remind teachers to focus their teaching on high-attention content and encourage students to concentrate on knowledge points that draw different levels of attention, achieving twice the teaching effect with half the effort.
  • The patent application with application number CN201110166693.8 discloses a method for quantifying the regional attention paid to an object, comprising: acquiring the line-of-sight direction of a human eye; recording the dwell time of the line-of-sight direction in each area of the object; and assigning high attention weights to areas with long dwell times and low attention weights to areas with short dwell times. Because that application evaluates attention mainly by analysing how long the human eye dwells on an area, it cannot comprehensively reflect the various attention indicators of teaching content, and therefore cannot evaluate attention to teaching content comprehensively.
  • The patent application with application number CN201110166693.8 discloses a method and device for evaluating user attention.
  • The method for evaluating user attention includes: detecting the user's line-of-sight direction; determining the area on the screen corresponding to the detected line-of-sight direction; obtaining a measure of the user's facial expression toward the determined area for each of several predetermined emotions; and generating the user's attention to the determined area based on the obtained measures. Because that application evaluates attention mainly through measures of the user's emotions toward the viewing direction, it cannot comprehensively reflect the various attention indicators of teaching content and cannot evaluate attention to teaching content comprehensively.
  • The patent application with application number CN201110166693.8 discloses a method and system for collecting display-screen attention statistics: a wireless access point device transmits a wireless access broadcast signal; a user terminal that enters the signal-strength range of the wireless access point device receives the broadcast signal and connects to the wireless access point device; the wireless access point device obtains information about the connected user terminal, such as a unique identifier and access time, and provides this information to a server; and the server counts the attention paid to the target display screen from the information about the connected user terminals.
  • Because that application measures display-screen attention mainly by counting the number of devices connected to the wireless access point, the method can only be used where users are physically concentrated and is not suitable for network teaching, where users are dispersed.
  • An object of the present disclosure is to provide an attention detection method, apparatus, electronic device, and computer readable storage medium based on behavior feature comparison, thereby at least partially overcoming one or more of the problems caused by the limitations and disadvantages of the related art.
  • According to one aspect, an attention detection method based on behavior feature comparison is provided, including:
  • a video information acquisition step of acquiring video information collected by a video capture device, the video information containing user behavior information and voice instruction information;
  • a behavior feature recognition step of recognizing the voice instruction information and analysing the user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized;
  • a behavior feature comparison step of searching a pre-established standard behavior feature model for the standard behavior feature corresponding to the current voice instruction information, and comparing the user behavior feature with the standard behavior feature;
  • an attention scoring step of scoring user behavior features that are inconsistent with the standard behavior feature according to pre-designed scoring criteria, aggregating the scoring results over all time periods of the video information, and generating the teaching attention degree of the video information from the scoring results.
  • The behavior feature recognition step includes:
  • after the user behavior features within the specified time period are analysed, taking user behavior features whose number of identical occurrences within the specified time period exceeds a preset number as reference standard behavior features;
  • the method further includes:
  • training the pre-established standard behavior feature model according to the reference standard behavior features.
  • Comparing the user behavior feature with the standard behavior feature includes:
  • comparing whether, within the specified time period after the voice instruction information is recognized, the user behavior information in the video information contains the standard behavior feature corresponding to the voice instruction information, and if so, determining whether the retention duration of the user behavior feature satisfies the preset time range of the standard behavior feature.
  • The method further includes a scoring criteria generation step, which includes:
  • if the retention duration of the user behavior feature satisfies the preset time range of the standard behavior feature, giving the user behavior feature of the corresponding user within that specified time period a full score.
  • The method further includes a scoring criteria generation step, which includes:
  • if the user behavior information does not contain the standard behavior feature corresponding to the voice instruction information within the specified time period, giving the user behavior feature of the corresponding user within that specified time period a score of zero.
  • The user behavior information includes the user's head rotation behavior and eye focus behavior, and the behavior feature recognition step includes:
  • extracting the head movement direction and movement angle from the user's head rotation behavior and the eye focus state from the user's eye focus behavior, and taking these as the user behavior features.
  • The eye focus behavior includes closed-eye information of the user's eyes,
  • and the behavior feature recognition step further includes:
  • counting the duration for which the user's eyes are closed according to the closed-eye information of the user's eyes.
  • The method further includes a scoring criteria generation step, which includes:
  • if, within the specified time period, it is determined from the user's eye focus behavior that the closed-eye duration exceeds a preset duration, giving the user behavior feature of the corresponding user within that specified time period a score of zero.
  • The user behavior acquisition step includes:
  • acquiring video information containing user behavior information of multiple users collected by the same video capture device, or by one or more video capture devices.
  • The method further includes:
  • ranking the users by their attention, and sending each user's attention ranking to a specified object.
  • An attention detection apparatus based on behavior feature comparison is also provided, including:
  • a video information acquisition module configured to acquire video information collected by a video capture device, the video information containing user behavior information and voice instruction information;
  • a behavior feature recognition module configured to recognize voice instruction information and analyse the user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized;
  • a behavior feature comparison module configured to search a pre-established standard behavior feature model for the standard behavior feature corresponding to the current voice instruction information, and compare the user behavior feature with the standard behavior feature;
  • an attention scoring module configured to score user behavior features that are inconsistent with the standard behavior feature according to pre-designed scoring criteria, aggregate the scoring results over all time periods of the video information, and generate the teaching attention degree of the video information from the scoring results.
  • An electronic device is also provided, comprising:
  • a processor; and a memory having stored thereon computer readable instructions which, when executed by the processor, implement the method according to any one of the above.
  • A computer readable storage medium is also provided, having stored thereon a computer program which, when executed by a processor, implements the method according to any one of the above.
  • The attention detection method based on behavior feature comparison in the exemplary embodiments of the present disclosure acquires video information collected by a video capture device, recognizes voice instruction information, and analyses the user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized;
  • it searches a pre-established standard behavior feature model for the standard behavior feature corresponding to the current voice instruction information and compares the user behavior feature with the standard behavior feature;
  • it then scores the user behavior features that are inconsistent with the standard behavior feature according to pre-designed scoring criteria, aggregates the scoring results over all time periods of the video information, and generates the teaching attention degree of the video information from the scoring results.
  • On the one hand, comparing the voice instruction information with the user behavior features makes it possible to accurately judge whether the user behavior is consistent with the voice instruction, and thus to infer attention accurately;
  • on the other hand, analysing the duration of user behavior features within a specified time period allows proportional scoring of those features, making the statistics of user attention more accurate.
  • FIG. 1 illustrates a flowchart of an attention detection method based on behavior feature comparison according to an exemplary embodiment of the present disclosure;
  • FIG. 2 is a schematic diagram of an application scenario of an attention detection method based on behavior feature comparison according to an exemplary embodiment of the present disclosure;
  • FIG. 3 is a schematic diagram of another application scenario of an attention detection method based on behavior feature comparison according to an exemplary embodiment of the present disclosure;
  • FIG. 4 is a schematic block diagram of an attention detection apparatus based on behavior feature comparison according to an exemplary embodiment of the present disclosure;
  • FIG. 5 schematically illustrates a block diagram of an electronic device according to an exemplary embodiment of the present disclosure;
  • FIG. 6 schematically illustrates a computer readable storage medium according to an exemplary embodiment of the present disclosure.
  • In this exemplary embodiment, an attention detection method based on behavior feature comparison is first provided, which can be applied to an electronic device such as a computer; as shown in FIG. 1, the method may include the following steps:
  • video information acquisition step S110: acquiring video information collected by a video capture device, the video information containing user behavior information and voice instruction information;
  • behavior feature recognition step S120: recognizing the voice instruction information, and analysing the user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized;
  • behavior feature comparison step S130: searching a pre-established standard behavior feature model for the standard behavior feature corresponding to the current voice instruction information, and comparing the user behavior feature with the standard behavior feature;
  • attention scoring step S140: scoring user behavior features that are inconsistent with the standard behavior feature according to pre-designed scoring criteria, aggregating the scoring results over all time periods of the video information, and generating the teaching attention degree of the video information from the scoring results.
  • According to the attention detection method based on behavior feature comparison in this exemplary embodiment, on the one hand, comparing the voice instruction information with the user behavior features makes it possible to accurately judge whether the user behavior is consistent with the voice instruction, and thus to infer attention accurately; on the other hand, analysing the duration of user behavior features within a specified time period allows proportional scoring of those features, making the statistics of user attention more accurate.
  • In step S110, video information collected by a video capture device may be acquired, the video information containing user behavior information and voice instruction information.
  • In common teaching scenarios, and especially in network teaching scenarios, a video capture device can collect the user's video; analysing the user behavior information in the video and the voice instruction information in the audio provides a basis for further judging user attention.
  • The user behavior acquisition step includes: acquiring video information containing user behavior information of multiple users collected by the same video capture device; or acquiring video information containing user behavior information of multiple users collected by one or more video capture devices.
  • For example, in a physical classroom, a single camera placed high at the front of the room can capture video of every user in the classroom; in a network teaching scenario, the camera of each user's viewing device, especially a mobile portable device, can capture video of each online user.
  • In step S120, the voice instruction information may be recognized, and the user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized may be analysed.
  • Recognizing the user's voice instruction information makes it possible to determine what the user's next action should be.
  • For example, when the teacher gives the voice instruction "Please open the third page of the textbook",
  • all users should turn from the blackboard at the front of the classroom to read the contents of the book;
  • this action carries the user behavior features of interest. Similarly, when the teaching content is on the first page of the textbook
  • and the teacher gives the voice instruction "Please see the exercises on the second page", the user should exhibit head rotation and a change of eye focus.
  • The user behavior information includes the user's head rotation behavior and eye focus behavior,
  • and the behavior feature recognition step includes: extracting the head movement direction and movement angle from the user's head rotation behavior, and the eye focus state from the user's eye focus behavior, as the user behavior features.
  • The behavior feature recognition step also includes: after the user behavior features within the specified time period are analysed, taking user behavior features whose number of identical occurrences within the specified time period exceeds a preset number
  • as reference standard behavior features.
  • For example, the teacher first explains the content of the first page of the text and, as the lesson progresses, moves on to the second page; the head rotation or eye focus state of all users should then show a consistent movement trend. Such a behavior trend, identified statistically, can serve as a reference standard behavior feature against which behavior is judged.
  • The method further includes: training the pre-established standard behavior feature model according to the reference standard behavior features. Storing the accumulated reference standard behavior features as a standard behavior feature model reduces the computational load of comparison at run time and speeds up the response.
  • In step S130, the standard behavior feature corresponding to the current voice instruction information may be looked up in the pre-established standard behavior feature model, and the user behavior feature may be compared with that standard behavior feature.
  • The correspondence between received voice instruction information and standard behavior features may be modelled, so that when voice instruction information is received it can be used directly as the input to search the model
  • for the corresponding standard behavior feature; the user's behavioral state can then be judged by comparing the standard behavior feature with the user behavior feature. For example, while the user is viewing the blackboard, the teacher's voice instruction "Please open the third page of the textbook" is received; the corresponding standard behavior feature is a sharp downward turn of the head followed by slight rotation, and users who do not perform this standard behavior feature are marked and passed on to the next stage of data processing.
  • Comparing the user behavior feature with the standard behavior feature includes: comparing whether, within the specified time period after the voice instruction information is recognized, the user behavior information in the video information contains the standard behavior feature corresponding to the voice instruction information, and if so, determining whether the retention duration of the user behavior feature satisfies the preset time range of the standard behavior feature. The preset time range within which the user must perform the standard behavior feature may cover all or part of the specified time period.
  • In step S140, the user behavior features that are inconsistent with the standard behavior feature may be scored according to the pre-designed scoring criteria, the scoring results over all time periods of the video information may be aggregated, and the teaching attention degree of the video information may be generated from the scoring results.
  • The comparison of the user behavior features with the standard behavior features, particularly the comparison within a specified time period, can thus serve as an important basis for the teaching attention score.
  • The method further includes a scoring criteria generation step, which includes: if the retention duration of the user behavior feature satisfies the preset time range of the standard behavior feature, giving the user behavior feature of the corresponding user within that specified time period a full score.
  • Completing the standard behavior feature within the preset time range of the specified time period indicates that the user's attention has reached the preset standard, and the user's attention in that specified time period is given full marks.
  • The method further includes a scoring criteria generation step, which includes: if, within the specified time period after the voice instruction information is recognized, the user behavior information in the video information does not contain the standard behavior feature corresponding to the voice instruction information, giving the user behavior feature of the corresponding user within that specified time period a score of zero.
  • A user behavior that never exhibits the standard behavior feature within the specified time period indicates that the user's attention has not reached the preset standard at all, and the user's attention in that specified time period is scored zero.
  • The eye focus behavior includes closed-eye information of the user's eyes,
  • and the behavior feature recognition step further includes: counting the duration for which the user's eyes are closed according to the closed-eye information.
  • Counting the closed-eye duration mainly serves to capture dozing in the teaching scene.
  • The method further includes a scoring criteria generation step, which includes: if, within the specified time period, it is determined from the user's eye focus behavior that the closed-eye duration exceeds a preset duration, giving the user behavior feature of the corresponding user within that specified time period a score of zero.
  • The user behavior information may also include the user's head and facial behavior; the behavior feature recognition step further includes: extracting the user's facial features; looking up the user's information in a preset user facial feature database; and establishing the correspondence between the user information and the user behavior features.
  • A face recognition algorithm running on the video capture device can thus be used to look up and match user information. By recognizing the user's face and finding the corresponding user information, attention scores can be computed and attributed to each user automatically, with no need for manual re-identification.
  • The method further includes: after the user behavior information of multiple users is collected, generating a user behavior feature score for each user and generating the teaching attention degree from those scores; the teaching attention degrees are then ranked, and the ranking is sent to a specified object.
  • FIG. 3 shows an application scenario in which the attention degrees are displayed.
  • The user attention ranking is obtained from the collected user behavior information of all users and is pushed to all users and/or the instructor for display.
  • The attention detection apparatus 400 based on behavior feature comparison may include a video information acquisition module 410, a behavior feature recognition module 420, a behavior feature comparison module 430, and an attention scoring module 440, wherein:
  • the video information acquisition module 410 is configured to acquire video information collected by a video capture device, the video information containing user behavior information and voice instruction information;
  • the behavior feature recognition module 420 is configured to recognize voice instruction information and analyse the user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized;
  • the behavior feature comparison module 430 is configured to search a pre-established standard behavior feature model for the standard behavior feature corresponding to the current voice instruction information, and compare the user behavior feature with the standard behavior feature;
  • the attention scoring module 440 is configured to score user behavior features that are inconsistent with the standard behavior feature according to pre-designed scoring criteria, aggregate the scoring results over all time periods of the video information, and generate the teaching attention degree of the video information from the scoring results.
  • Although several modules or units of the attention detection apparatus 400 based on behavior feature comparison are mentioned in the above detailed description, such a division is not mandatory. Indeed, in accordance with embodiments of the present disclosure, the features and functions of two or more of the modules or units described above may be embodied in a single module or unit; conversely, the features and functions of one of the modules or units described above may be further divided among multiple modules or units.
  • An electronic device capable of implementing the above method is also provided.
  • Aspects of the present invention may be implemented as a system, method, or program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may collectively be referred to herein as a "circuit," "module," or "system."
  • An electronic device 500 according to such an embodiment of the present invention is described below with reference to FIG. 5. The electronic device 500 shown in FIG. 5 is merely an example and should not impose any limitation on the function and scope of use of the embodiments of the present invention.
  • The electronic device 500 is embodied in the form of a general purpose computing device.
  • The components of the electronic device 500 may include, but are not limited to, at least one processing unit 510, at least one storage unit 520, a bus 530 connecting the different system components (including the storage unit 520 and the processing unit 510), and a display unit 540.
  • The storage unit stores program code which can be executed by the processing unit 510, such that the processing unit 510 performs the steps of the various exemplary embodiments of the present invention described in the "Exemplary Method" section of this specification.
  • For example, the processing unit 510 may perform steps S110 to S140 as shown in FIG. 1.
  • The storage unit 520 may include a readable medium in the form of a volatile storage unit, such as a random access memory (RAM) unit 5201 and/or a cache storage unit 5202, and may further include a read-only memory (ROM) unit 5203.
  • The storage unit 520 may also include a program/utility 5204 having a set (at least one) of program modules 5205, such as, but not limited to, an operating system, one or more applications, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
  • The bus 530 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
  • The electronic device 500 may also communicate with one or more external devices 570 (e.g., a keyboard, pointing device, Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 500, and/or with any device (e.g., a router, modem, etc.) that enables the electronic device 500 to communicate with one or more other computing devices. Such communication can take place via an input/output (I/O) interface 550. In addition, the electronic device 500 can communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 560. As shown, the network adapter 560 communicates with the other modules of the electronic device 500 via the bus 530.
  • The exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to an embodiment of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a mobile hard disk, etc.) or on a network,
  • and which includes a number of instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to perform the method according to an embodiment of the present disclosure.
  • A computer readable storage medium having stored thereon a program product capable of implementing the above method of this specification is also provided.
  • Aspects of the present invention may also be embodied in the form of a program product comprising program code which, when the program product is run on a terminal device, causes the terminal device to perform the steps according to the various exemplary embodiments of the present invention described in the "Exemplary Method" section of this specification.
  • FIG. 6 illustrates a program product 600 for implementing the above method according to an embodiment of the present invention, which may employ a portable compact disc read-only memory (CD-ROM), includes program code, and may be run on a terminal device.
  • However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium containing or storing a program that can be used by or in connection with an instruction execution system, apparatus, or device.
  • the program product can employ any combination of one or more readable media.
  • the readable medium can be a readable signal medium or a readable storage medium.
  • The readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal can take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • the readable signal medium can also be any readable medium other than a readable storage medium that can transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a readable medium can be transmitted using any suitable medium, including but not limited to wireless, wireline, optical cable, RF, etc., or any suitable combination of the foregoing.
  • Program code for performing the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
  • The remote computing device may be connected to the user's computing device via any kind of network, including a local area network (LAN) or wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Acoustics & Sound (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to a method and apparatus for detecting a degree of attention based on the comparison of behavior characteristics, an electronic device and a storage medium. The method comprises: acquiring video information collected by a video collection device; recognizing voice instruction information, and analyzing a user behavior characteristic in user behavior information within a specified time period after the recognition of the voice instruction information; searching a pre-established standard behavior characteristic model for a standard behavior characteristic corresponding to the current voice instruction information, and comparing the user behavior characteristic with the standard behavior characteristic; and scoring, according to a pre-set scoring standard, a user behavior characteristic inconsistent with the standard behavior characteristic, compiling statistics on scoring results in all the time periods of the video information, and generating the teaching degree of attention to the video information according to the scoring results. According to the present disclosure, a teaching degree of attention can be generated by means of consistency analysis of user behavior.

Description

Attention detection method and apparatus based on behavior feature comparison
Technical Field
The present disclosure relates to the field of computer technology, and in particular to an attention detection method, apparatus, electronic device, and computer readable storage medium based on behavior feature comparison.
Background Art
In teaching, quickly and accurately detecting students' attention to the teaching content can both remind teachers to focus their teaching on high-attention content and encourage students to concentrate on knowledge points that draw different levels of attention, achieving twice the teaching effect with half the effort.
However, in actual teaching, students' attention to the teaching content is usually assessed by the teacher personally observing each student's learning state on the basis of experience. Such an approach consumes the teacher's teaching energy and is difficult to apply in settings such as online teaching.
The patent application with application number CN201110166693.8 discloses a method for quantifying the regional attention paid to an object, comprising: acquiring the line-of-sight direction of a human eye; recording the dwell time of the line-of-sight direction in each area of the object; and assigning high attention weights to areas with long dwell times and low attention weights to areas with short dwell times. Because that application evaluates attention mainly by analysing how long the human eye dwells on an area, it cannot comprehensively reflect the various attention indicators of teaching content and therefore cannot evaluate attention to teaching content comprehensively.
The patent application with application number CN201110166693.8 discloses a method and device for evaluating user attention. The method includes: detecting the user's line-of-sight direction; determining the area on the screen corresponding to the detected line-of-sight direction; obtaining a measure of the user's facial expression toward the determined area for each of several predetermined emotions; and generating the user's attention to the determined area based on the obtained measures. Because that application evaluates attention mainly through measures of the user's emotions toward the viewing direction, it cannot comprehensively reflect the various attention indicators of teaching content and cannot evaluate attention to teaching content comprehensively.
The patent application with application number CN201110166693.8 discloses a method and system for collecting display-screen attention statistics: a wireless access point device transmits a wireless access broadcast signal; a user terminal that enters the signal-strength range of the wireless access point device receives the broadcast signal and connects to the wireless access point device; the wireless access point device obtains information about the connected user terminal, such as a unique identifier and access time, and provides this information to a server; and the server counts the attention paid to the target display screen from the information about the connected user terminals. Because that application measures display-screen attention mainly by counting the number of devices connected to the wireless access point, the method can only be used where users are physically concentrated and is not suitable for network teaching, where users are dispersed.
Therefore, it is desirable to provide one or more technical solutions that at least solve the above problems.
It should be noted that the information disclosed in the Background section above is only intended to enhance understanding of the background of the present disclosure, and may therefore include information that does not constitute prior art known to those of ordinary skill in the art.
Summary of the Invention
An object of the present disclosure is to provide an attention detection method, apparatus, electronic device, and computer readable storage medium based on behavior feature comparison, thereby at least partially overcoming one or more of the problems caused by the limitations and disadvantages of the related art.
According to an aspect of the present disclosure, an attention detection method based on behavior feature comparison is provided, including:
a video information acquisition step of acquiring video information collected by a video capture device, the video information containing user behavior information and voice instruction information;
a behavior feature recognition step of recognizing the voice instruction information and analysing the user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized;
a behavior feature comparison step of searching a pre-established standard behavior feature model for the standard behavior feature corresponding to the current voice instruction information, and comparing the user behavior feature with the standard behavior feature;
an attention scoring step of scoring user behavior features that are inconsistent with the standard behavior feature according to pre-designed scoring criteria, aggregating the scoring results over all time periods of the video information, and generating the teaching attention degree of the video information from the scoring results.
In an exemplary embodiment of the present disclosure, the behavior feature recognition step includes:
after analysing the user behavior features in the user behavior information within the specified time period, taking user behavior features whose number of identical occurrences within the specified time period exceeds a preset number as reference standard behavior features;
the method further includes:
training the pre-established standard behavior feature model according to the reference standard behavior features.
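As an illustration only (not part of the original disclosure), the following Python sketch shows one way the reference standard behavior features could be collected and turned into a simple lookup model; the feature labels, the `min_count` threshold, and the dictionary-based model are assumptions introduced here for clarity.

```python
from collections import Counter

def build_standard_model(observations, min_count=10):
    """observations: list of (voice_instruction, behavior_feature) pairs gathered
    within the specified time period for many users. Features occurring more than
    min_count times become the reference standard behavior features for that
    voice instruction."""
    per_instruction = {}
    for instruction, feature in observations:
        per_instruction.setdefault(instruction, Counter())[feature] += 1

    model = {}
    for instruction, counts in per_instruction.items():
        model[instruction] = {f for f, n in counts.items() if n > min_count}
    return model

# Hypothetical usage: most students turn their heads down after this command,
# so "head_down" becomes the standard behavior feature associated with it.
obs = [("open page 3", "head_down")] * 25 + [("open page 3", "look_away")] * 2
print(build_standard_model(obs))   # {'open page 3': {'head_down'}}
```

In a real system the promoted features would feed whatever model form the implementation actually adopts; the plain dictionary above simply stands in for that pre-established model.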
In an exemplary embodiment of the present disclosure, comparing the user behavior feature with the standard behavior feature includes:
comparing whether, within the specified time period after the voice instruction information is recognized, the user behavior information in the video information contains the standard behavior feature corresponding to the voice instruction information, and if so, determining whether the retention duration of the user behavior feature satisfies the preset time range of the standard behavior feature.
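A minimal sketch of this comparison logic, under assumed data structures (per-frame feature sets with timestamps and a preset time range per standard behavior feature; none of these names appear in the original text):

```python
def compare_with_standard(frames, standard_feature, required_range):
    """frames: list of (timestamp_seconds, set_of_features) observed in the
    specified time period after the voice instruction was recognized.
    required_range: (min_seconds, max_seconds) the standard feature must be held.
    Returns (contains_feature, duration_ok)."""
    held = [t for t, feats in frames if standard_feature in feats]
    if not held:
        return False, False                      # feature never appears
    duration = max(held) - min(held)             # retention duration of the feature
    lo, hi = required_range
    return True, lo <= duration <= hi

# Hypothetical example: the student holds "head_down" for 4 of the 10 seconds.
frames = [(t, {"head_down"} if 2 <= t <= 6 else {"head_up"}) for t in range(10)]
print(compare_with_standard(frames, "head_down", (3.0, 10.0)))   # (True, True)
```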
In an exemplary embodiment of the present disclosure, the method further includes a scoring criteria generation step, which includes:
if the retention duration of the user behavior feature satisfies the preset time range of the standard behavior feature, giving the user behavior feature of the corresponding user within that specified time period a full score.
In an exemplary embodiment of the present disclosure, the method further includes a scoring criteria generation step, which includes:
if, within the specified time period after the voice instruction information is recognized, the user behavior information in the video information does not contain the standard behavior feature corresponding to the voice instruction information, giving the user behavior feature of the corresponding user within that specified time period a score of zero.
In an exemplary embodiment of the present disclosure, the user behavior information includes the user's head rotation behavior and eye focus behavior, and the behavior feature recognition step includes:
extracting the head movement direction and movement angle from the user's head rotation behavior, and the eye focus state from the user's eye focus behavior;
taking the head movement direction and movement angle in the user's head rotation behavior, and the eye focus state in the user's eye focus behavior, as the user behavior features.
In an exemplary embodiment of the present disclosure, the eye focus behavior includes closed-eye information of the user's eyes, and the behavior feature recognition step further includes:
counting the duration for which the user's eyes are closed according to the closed-eye information of the user's eyes.
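The disclosure does not say how head direction, rotation angle, or eye state are measured. The sketch below assumes a hypothetical upstream head-pose and eye estimator has already produced per-frame yaw/pitch angles and an eyes-open flag, and only illustrates how such raw values might be reduced to the user behavior features and the closed-eye duration described here.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    t: float          # timestamp in seconds
    yaw: float        # head rotation left/right, degrees (assumed upstream estimate)
    pitch: float      # head rotation up/down, degrees
    eyes_open: bool   # eye focus state from an assumed eye detector

def extract_behavior_features(frames, angle_threshold=15.0):
    """Derive head movement direction/angle and eye focus state, plus the
    total closed-eye duration within the analysed window."""
    d_yaw = frames[-1].yaw - frames[0].yaw
    d_pitch = frames[-1].pitch - frames[0].pitch
    direction = ("down" if d_pitch < -angle_threshold else
                 "up" if d_pitch > angle_threshold else
                 "side" if abs(d_yaw) > angle_threshold else "steady")
    closed = sum(b.t - a.t for a, b in zip(frames, frames[1:]) if not a.eyes_open)
    return {
        "head_direction": direction,
        "head_angle": (d_yaw, d_pitch),
        "eye_focus": "open" if frames[-1].eyes_open else "closed",
        "closed_eye_seconds": closed,
    }

# Synthetic example: the head tilts steadily downward and the eyes close at t = 6.
frames = [Frame(t, 0.0, -2.0 * t, eyes_open=t < 6) for t in range(10)]
print(extract_behavior_features(frames))
```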
In an exemplary embodiment of the present disclosure, the method further includes a scoring criteria generation step, which includes:
if, within the specified time period, it is determined from the user's eye focus behavior that the closed-eye duration exceeds a preset duration, giving the user behavior feature of the corresponding user within that specified time period a score of zero.
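Taken together, the scoring rules of these embodiments (full score when the retention duration falls within the preset range, zero when the standard behavior feature is absent, zero when the closed-eye duration exceeds a preset limit) could be sketched as follows; the point values, thresholds, and the partial-credit branch are illustrative assumptions rather than values given in the disclosure.

```python
FULL_SCORE = 100   # assumed full mark for one specified time period

def score_time_period(contains_feature, duration_ok, closed_eye_seconds,
                      max_closed_eye=5.0):
    """Score one user's behavior for one specified time period."""
    if not contains_feature:
        return 0                        # standard behavior feature never shown
    if closed_eye_seconds > max_closed_eye:
        return 0                        # dozing: closed-eye duration too long
    if duration_ok:
        return FULL_SCORE               # feature held for the required time range
    return FULL_SCORE // 2              # assumed partial credit otherwise

def teaching_attention(period_scores):
    """Aggregate the scores over all time periods of the video information."""
    return 100 * sum(period_scores) / max(len(period_scores), 1) / FULL_SCORE

print(score_time_period(True, True, 1.0))      # 100
print(teaching_attention([100, 0, 100, 50]))   # 62.5
```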
In an exemplary embodiment of the present disclosure, the user behavior acquisition step includes:
acquiring video information containing user behavior information of multiple users collected by the same video capture device; or,
acquiring video information containing user behavior information of multiple users collected by one or more video capture devices.
In an exemplary embodiment of the present disclosure, the method further includes:
after the user behavior information of multiple users is collected, generating a user behavior feature score corresponding to each user and generating an attention degree for each user;
ranking the users by their attention and sending each user's attention ranking to a specified object.
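As an illustration of the ranking-and-push step, the sketch below (the per-user score dictionary and the `notify` callback are hypothetical placeholders for whatever transport an implementation actually uses) orders users by attention and delivers the ranking to a specified object such as the instructor's client.

```python
def rank_and_send(attention_by_user, notify):
    """attention_by_user: {user_id: attention score}; notify: callable that
    delivers each ranking line to the specified object (e.g. the instructor)."""
    ranking = sorted(attention_by_user.items(), key=lambda kv: kv[1], reverse=True)
    for rank, (user, score) in enumerate(ranking, start=1):
        notify(f"#{rank} {user}: attention {score:.1f}")
    return ranking

rank_and_send({"alice": 87.5, "bob": 62.5, "carol": 93.0}, notify=print)
```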
In an aspect of the present disclosure, an attention detection apparatus based on behavior feature comparison is provided, including:
a video information acquisition module configured to acquire video information collected by a video capture device, the video information containing user behavior information and voice instruction information;
a behavior feature recognition module configured to recognize voice instruction information and analyse the user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized;
a behavior feature comparison module configured to search a pre-established standard behavior feature model for the standard behavior feature corresponding to the current voice instruction information, and compare the user behavior feature with the standard behavior feature;
an attention scoring module configured to score user behavior features that are inconsistent with the standard behavior feature according to pre-designed scoring criteria, aggregate the scoring results over all time periods of the video information, and generate the teaching attention degree of the video information from the scoring results.
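Mirroring the four modules of the apparatus, a hedged object-oriented sketch might look as follows; the class name, constructor arguments, and method signatures are assumptions, since the disclosure only names the modules and their responsibilities.

```python
class AttentionDetectionApparatus:
    """Sketch of apparatus 400 with its four modules as methods; the backends
    passed to the constructor are assumed, not specified in the disclosure."""

    def __init__(self, capture, recognizer, standard_model, scorer):
        self.capture = capture                  # video information acquisition backend
        self.recognizer = recognizer            # speech + behavior recognition backend
        self.standard_model = standard_model    # pre-established standard behavior feature model
        self.scorer = scorer                    # pre-designed scoring criteria

    def acquire_video(self):                    # video information acquisition module 410
        return self.capture()

    def recognize_features(self, video_info):   # behavior feature recognition module 420
        return self.recognizer(video_info)      # -> (voice_instruction, user_behavior_features)

    def compare(self, instruction, features):   # behavior feature comparison module 430
        standard = self.standard_model.get(instruction, set())
        return [f for f in features if f not in standard]   # inconsistent features

    def score(self, inconsistent_features):     # attention scoring module 440
        return self.scorer(inconsistent_features)
```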
In an aspect of the present disclosure, an electronic device is provided, including:
a processor; and
a memory having stored thereon computer readable instructions which, when executed by the processor, implement the method according to any one of the above.
In an aspect of the present disclosure, a computer readable storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements the method according to any one of the above.
The attention detection method based on behavior feature comparison in the exemplary embodiments of the present disclosure acquires video information collected by a video capture device, recognizes voice instruction information, analyses the user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized, searches a pre-established standard behavior feature model for the standard behavior feature corresponding to the current voice instruction information, compares the user behavior feature with the standard behavior feature, scores user behavior features that are inconsistent with the standard behavior feature according to pre-designed scoring criteria, aggregates the scoring results over all time periods of the video information, and generates the teaching attention degree of the video information from the scoring results. On the one hand, comparing the voice instruction information with the user behavior features makes it possible to accurately judge whether the user behavior is consistent with the voice instruction, and thus to infer attention accurately; on the other hand, analysing the duration of user behavior features within a specified time period allows proportional scoring of those features, making the statistics of user attention more accurate.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure.
Brief Description of the Drawings
The above and other features and advantages of the present disclosure will become more apparent from the detailed description of its exemplary embodiments with reference to the accompanying drawings.
FIG. 1 is a flowchart of an attention detection method based on behavior feature comparison according to an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an application scenario of an attention detection method based on behavior feature comparison according to an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram of another application scenario of an attention detection method based on behavior feature comparison according to an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic block diagram of an attention detection apparatus based on behavior feature comparison according to an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a block diagram of an electronic device according to an exemplary embodiment of the present disclosure; and
FIG. 6 schematically illustrates a computer readable storage medium according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments can, however, be embodied in a variety of forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and their repeated description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of the embodiments of the present disclosure. However, those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced without one or more of those specific details, or that other methods, components, materials, devices, steps, and so on may be employed. In other instances, well-known structures, methods, devices, implementations, materials, or operations are not shown or described in detail to avoid obscuring aspects of the present disclosure.
The block diagrams shown in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software form, or these functional entities or parts of them may be implemented in one or more software-hardened modules, or implemented in different network and/or processor devices and/or microcontroller devices.
In this exemplary embodiment, an attention detection method based on behavior feature comparison is first provided, which may be applied to an electronic device such as a computer. As shown in FIG. 1, the attention detection method based on behavior feature comparison may include the following steps:
Video information acquisition step S110: acquiring video information collected by a video capture device, the video information including user behavior information and voice instruction information;
Behavior feature recognition step S120: recognizing the voice instruction information, and analyzing user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized;
Behavior feature comparison step S130: searching a pre-established standard behavior feature model for a standard behavior feature corresponding to the current voice instruction information, and comparing the user behavior features with the standard behavior feature;
Attention scoring step S140: scoring the user behavior features that are inconsistent with the standard behavior feature according to a preset scoring standard, counting the scoring results over all time periods of the video information, and generating a teaching attention degree of the video information according to the scoring results.
According to the attention detection method based on behavior feature comparison in this exemplary embodiment, on the one hand, comparing the voice instruction information with the user behavior features makes it possible to determine accurately whether the user behavior is consistent with the voice instruction information, so that the degree of attention can be inferred accurately; on the other hand, analyzing the duration of the user behavior features within the specified time period allows the user behavior features to be scored in proportion to that duration, which makes the statistics of user attention more precise.
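For readers who prefer pseudocode, the following Python sketch ties steps S110 to S140 together. It is an illustration only, not part of the disclosed embodiments: the `Segment` structure, the `standard_model` dictionary, and the `scorer` callback are hypothetical names introduced here for clarity.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Segment:
    """A slice of the captured video: behavior samples plus any recognized voice instruction."""
    user_features: List[str]           # e.g. ["head_down_large", "eyes_on_desk"]
    instruction: Optional[str] = None  # recognized voice instruction text, if any

def detect_attention(segments: List[Segment], standard_model: dict, scorer) -> float:
    """Walk the video segments and accumulate an attention score (S110-S140)."""
    scores = []
    for seg in segments:                                    # S110: acquired video information
        if seg.instruction is None:
            continue                                        # S120: only act on recognized instructions
        standard = standard_model.get(seg.instruction)      # S130: look up the expected behavior
        if standard is None:
            continue
        scores.append(scorer(seg.user_features, standard))  # S140: score the (mis)match
    return sum(scores) / len(scores) if scores else 0.0
```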
The attention detection method based on behavior feature comparison in this exemplary embodiment is further described below.
In the video information acquisition step S110, video information collected by a video capture device may be acquired, the video information including user behavior information and voice instruction information.
In this exemplary embodiment, common teaching scenarios, and online teaching scenarios in particular, are equipped with video capture devices that can record video of the users. Analyzing the user behavior information in the video and the voice instruction information in the audio provides a basis for further judging the users' attention.
In this exemplary embodiment, the user behavior acquisition step includes: acquiring video information that contains the user behavior information of multiple users and is collected by the same video capture device; or acquiring video information that contains the user behavior information of multiple users and is collected by one or more video capture devices. In a physical classroom, for example, a single camera mounted high at the front of the room can capture video images of every user in the classroom; in an online teaching scenario, the camera of each user's viewing device, in particular a mobile portable device, can capture a video image of that user.
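As a rough illustration of the acquisition step, the sketch below pulls frames from one classroom camera or from several per-user devices using OpenCV's `VideoCapture`. The device indices, the frame limit, and the choice of OpenCV itself are assumptions made for this example; the disclosure does not prescribe a particular capture API.

```python
import cv2  # OpenCV; any capture backend that yields frames would do

def capture_frames(device_indices, max_frames=300):
    """Collect frames from one or more capture devices (classroom camera or per-user webcams)."""
    frames_by_device = {}
    for idx in device_indices:
        cap = cv2.VideoCapture(idx)
        frames = []
        while cap.isOpened() and len(frames) < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(frame)
        cap.release()
        frames_by_device[idx] = frames
    return frames_by_device

# Single classroom camera:      capture_frames([0])
# Several per-user devices:     capture_frames([0, 1, 2])
```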
In the behavior feature recognition step S120, the voice instruction information may be recognized, and the user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized may be analyzed.
In this exemplary embodiment, recognizing the voice instruction information makes it possible to predict the user's next action. In a physical classroom, for instance, all users may be looking at the blackboard at the front of the room when the teacher gives the voice instruction "please open page 3 of the textbook"; at that moment every user should turn from the blackboard to the textbook, and this action contains the expected user behavior features. Similarly, when the teaching content is on the first page of the textbook and the teacher gives the voice instruction "please look at the exercises on page 2", every user should exhibit head rotation and a change of eye focus.
In this exemplary embodiment, the user behavior information includes the user's head rotation behavior and eye focus behavior. The behavior feature recognition step includes: extracting the head movement direction and movement angle from the user's head rotation behavior, and the eye focus state from the user's eye focus behavior; and taking the head movement direction, the movement angle, and the eye focus state as the user behavior features. All of these behaviors can serve as user behavior information on which changes in user attention are judged.
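One way to package the head movement direction, movement angle, and eye focus state as comparable features is sketched below. It assumes a hypothetical upstream estimator that already supplies head yaw/pitch angles and an eye-openness value per sample; the threshold values are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class HeadPose:
    yaw: float    # degrees, positive = turned right
    pitch: float  # degrees, positive = tilted down

def behavior_feature(prev: HeadPose, curr: HeadPose, eye_openness: float,
                     angle_step: float = 15.0, open_threshold: float = 0.2) -> dict:
    """Turn raw pose/eye readings into the coarse features the method compares."""
    d_yaw, d_pitch = curr.yaw - prev.yaw, curr.pitch - prev.pitch
    if abs(d_pitch) >= abs(d_yaw):
        direction = "down" if d_pitch > 0 else "up"
        angle = abs(d_pitch)
    else:
        direction = "right" if d_yaw > 0 else "left"
        angle = abs(d_yaw)
    return {
        "move_direction": direction if angle >= angle_step else "still",
        "move_angle": round(angle, 1),
        "eye_state": "closed" if eye_openness < open_threshold else "focused",
    }
```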
In this exemplary embodiment, the behavior feature recognition step includes: after analyzing the user behavior features in the user behavior information within a specified time period, taking any user behavior feature shared by more than a preset number of users within that time period as a to-be-referenced standard behavior feature. In some teaching scenarios there may be no voice instruction information. In the teaching scenario of FIG. 2, for example, the teacher explains the first page of the text and, as the lesson progresses, moves on to the second page; the head rotation or eye focus state in the behavior information of all users should then show a consistent trend. Statistically extracting such a behavior trend as a to-be-referenced standard behavior feature provides a basis for identifying irregular behavior by comparison. The method further includes: training the pre-established standard behavior feature model with the to-be-referenced standard behavior features. Storing the statistics of the to-be-referenced standard behavior features as the standard behavior feature model reduces the computational load and speeds up the response during the actual comparison.
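The selection of a to-be-referenced standard behavior feature from majority behavior can be sketched as a simple counting step; the `min_count` value below stands in for the "preset number" and is an assumed figure.

```python
from collections import Counter
from typing import Hashable, List, Optional

def reference_standard_feature(user_features: List[Hashable],
                               min_count: int = 10) -> Optional[Hashable]:
    """Pick the behavior shared by more than `min_count` users in the window, if any."""
    if not user_features:
        return None
    feature, count = Counter(user_features).most_common(1)[0]
    return feature if count > min_count else None

# The selected features can then be accumulated as training samples
# for the pre-established standard behavior feature model.
```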
In the behavior feature comparison step S130, a standard behavior feature corresponding to the current voice instruction information may be looked up in the pre-established standard behavior feature model, and the user behavior features may be compared with the standard behavior feature.
In this exemplary embodiment, the correspondence between received voice instruction information and standard behavior features can be built into a model. When voice instruction information is received, it is used directly as the input to look up the model, which returns the corresponding standard behavior feature; comparing that standard behavior feature with the user behavior features then reveals the user's behavioral state. For example, when the users are watching the blackboard and the teacher's voice instruction "please open page 3 of the textbook" is received, the corresponding standard behavior feature is a large downward head rotation followed by a slight left-right rotation; users who do not perform this standard behavior feature are marked and passed on to the next stage of data processing.
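A minimal sketch of the instruction-to-standard-feature lookup is shown below. Representing the pre-established model as a dictionary keyed by recognized instruction text is an assumption for illustration; a deployed system might instead key on an intent label produced by the speech recognizer.

```python
from typing import Optional

# Hypothetical mapping from a recognized instruction to the expected (standard) behavior.
STANDARD_MODEL = {
    "please open page 3 of the textbook": {
        "move_direction": "down", "min_angle": 30.0, "eye_state": "focused",
    },
    "please look at the exercises on page 2": {
        "move_direction": "down", "min_angle": 10.0, "eye_state": "focused",
    },
}

def lookup_standard(instruction_text: str) -> Optional[dict]:
    """S130: find the standard behavior feature for the current voice instruction."""
    return STANDARD_MODEL.get(instruction_text.strip().lower())
```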
In this exemplary embodiment, comparing the user behavior features with the standard behavior feature includes: checking whether, within the specified time period after the voice instruction information is recognized, the user behavior information in the video information contains the standard behavior feature corresponding to the voice instruction information, and if so, determining whether the duration for which the user behavior feature is held satisfies the preset time range of the standard behavior feature. In other words, within the specified time period of the standard behavior feature, the user is required to perform the standard behavior feature for the preset time period, which may be all or part of the specified time period.
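The presence-and-hold-duration check might look like the following sketch, which assumes behavior features are sampled once per video frame at a known frame rate; `fps` and `min_hold_s` are illustrative stand-ins for the preset time range.

```python
from typing import List

def matches(user_feature: dict, standard: dict) -> bool:
    """True when one sampled user feature is consistent with the standard feature."""
    return (user_feature["move_direction"] == standard["move_direction"]
            and user_feature["move_angle"] >= standard["min_angle"]
            and user_feature["eye_state"] == standard["eye_state"])

def hold_duration_ok(samples: List[dict], standard: dict,
                     fps: float = 25.0, min_hold_s: float = 2.0) -> bool:
    """Check that the standard behavior appears and is held long enough in the window."""
    matching = sum(1 for s in samples if matches(s, standard))
    return matching / fps >= min_hold_s
```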
In the attention scoring step S140, the user behavior features that are inconsistent with the standard behavior feature may be scored according to a preset scoring standard, the scoring results over all time periods of the video information may be counted, and the teaching attention degree of the video information may be generated from the scoring results.
In this exemplary embodiment, the comparison between the user behavior features and the standard behavior features, in particular the comparison within the specified time period, serves as an important basis for the teaching attention score.
In this exemplary embodiment, the method further includes a scoring standard generation step, which includes: if the duration for which the user behavior feature is held satisfies the preset time range of the standard behavior feature, awarding the corresponding user full marks for the user behavior feature in that specified time period. Completing the standard behavior feature within the preset time range of the specified time period indicates that the user's attention has reached the preset standard, so the user's attention for that specified time period is given full marks.
In this exemplary embodiment, the method further includes a scoring standard generation step, which includes: if, within the specified time period after the voice instruction information is recognized, the user behavior information in the video information does not contain the standard behavior feature corresponding to the voice instruction information, awarding the corresponding user zero marks for the user behavior feature in that specified time period. User behavior without the standard behavior feature in the specified time period indicates that the user's attention has not reached the preset standard at all, so the user's attention for that specified time period is given zero marks.
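A sketch of the resulting scoring standard, reusing the `matches` and `hold_duration_ok` helpers from the previous sketch, is given below. The full-score value and the intermediate score for a behavior that appears but is not held long enough are assumptions; the disclosure itself only specifies full marks and zero marks.

```python
def score_window(samples, standard, fps=25.0, min_hold_s=2.0, full_score=10.0):
    """Score one specified time period for one user against the standard behavior feature."""
    if not any(matches(s, standard) for s in samples):
        return 0.0                  # behavior absent: zero marks for this window
    if hold_duration_ok(samples, standard, fps, min_hold_s):
        return full_score           # behavior present and held long enough: full marks
    return full_score * 0.5         # assumed intermediate score for a partial hold
```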
In this exemplary embodiment, the eye focus behavior includes closed-eye information of the user's eyes, and the behavior feature recognition step further includes: counting the duration for which the user's eyes are closed according to the closed-eye information. The closed-eye duration statistics are mainly intended to capture dozing in the teaching scenario.
In this exemplary embodiment, the method further includes a scoring standard generation step, which includes: if, within the specified time period, the closed-eye duration determined from the user's eye focus behavior is greater than a preset duration, awarding the corresponding user zero marks for the user behavior feature in that specified time period.
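The closed-eye rule can be layered on top of the window score as follows; the frame rate and the preset closed-eye duration are assumed values.

```python
def closed_eye_seconds(samples, fps: float = 25.0) -> float:
    """Total time within the window during which the user's eyes are closed."""
    return sum(1 for s in samples if s["eye_state"] == "closed") / fps

def apply_doze_rule(base_score: float, samples, fps: float = 25.0,
                    max_closed_s: float = 5.0) -> float:
    """Override the window score with zero if the eyes stay closed longer than the preset duration."""
    return 0.0 if closed_eye_seconds(samples, fps) > max_closed_s else base_score
```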
In this exemplary embodiment, the user behavior information includes the user's head and face behavior, and the behavior feature recognition step further includes: extracting the user's facial features; looking up the user information in a preset user facial feature library; and establishing a correspondence between the user information and the user behavior features. A face recognition algorithm based on the video capture device can serve here as the means for looking up and matching user information: by recognizing the user's face and finding the corresponding user information, the automatic attention score statistics can be associated with that user without any manual re-identification.
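A generic sketch of matching an extracted facial feature against the preset user facial feature library is shown below, using cosine similarity over embeddings with NumPy. The embedding extractor itself and the similarity threshold are assumptions; the disclosure does not tie the method to a specific face recognition algorithm.

```python
import numpy as np

def identify_user(face_embedding: np.ndarray,
                  feature_library: dict,          # user_id -> stored embedding
                  min_similarity: float = 0.6):
    """Match a face embedding against the preset user facial feature library."""
    best_id, best_sim = None, min_similarity
    for user_id, stored in feature_library.items():
        sim = float(np.dot(face_embedding, stored) /
                    (np.linalg.norm(face_embedding) * np.linalg.norm(stored) + 1e-9))
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id  # None if no user matches; window scores can then be attached to this id
```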
In this exemplary embodiment, the method further includes: after the user behavior information of multiple users is collected, generating a user behavior feature score for each user and generating a teaching attention degree from each user behavior feature score; ranking the users by teaching attention degree and sending the ranking to a specified object. FIG. 3 shows an application scenario of an attention display: in this teaching scenario, the user attention ranking is obtained from the collected user behavior information of all users and is pushed to all users and/or the instructor for display.
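Aggregating per-window scores into a per-user teaching attention degree and ranking the users might be sketched as follows; averaging is an assumed aggregation rule.

```python
from statistics import mean

def rank_attention(window_scores_by_user: dict) -> list:
    """Aggregate per-window scores into an attention degree per user and rank the users."""
    attention = {user: mean(scores) if scores else 0.0
                 for user, scores in window_scores_by_user.items()}
    return sorted(attention.items(), key=lambda kv: kv[1], reverse=True)

# Example: rank_attention({"user_a": [10.0, 10.0, 0.0], "user_b": [10.0, 5.0, 10.0]})
# -> [("user_b", 8.33...), ("user_a", 6.66...)]
```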
It should be noted that although the steps of the method of the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that order, or that all of the illustrated steps must be performed, to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into a single step, and/or a single step may be decomposed into multiple steps.
Furthermore, in this exemplary embodiment, an attention detection apparatus based on behavior feature comparison is also provided. Referring to FIG. 4, the attention detection apparatus 400 based on behavior feature comparison may include a video information acquisition module 410, a behavior feature recognition module 420, a behavior feature comparison module 430, and an attention scoring module 440, wherein:
the video information acquisition module 410 is configured to acquire video information collected by a video capture device, the video information including user behavior information and voice instruction information;
the behavior feature recognition module 420 is configured to recognize the voice instruction information and analyze the user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized;
the behavior feature comparison module 430 is configured to search a pre-established standard behavior feature model for a standard behavior feature corresponding to the current voice instruction information, and to compare the user behavior features with the standard behavior feature;
the attention scoring module 440 is configured to score the user behavior features that are inconsistent with the standard behavior feature according to a preset scoring standard, to count the scoring results over all time periods of the video information, and to generate the teaching attention degree of the video information from the scoring results.
The specific details of each module of the attention detection apparatus based on behavior feature comparison have already been described in detail in the corresponding attention detection method and are therefore not repeated here.
It should be noted that although several modules or units of the attention detection apparatus 400 based on behavior feature comparison are mentioned in the detailed description above, this division is not mandatory. In fact, according to embodiments of the present disclosure, the features and functions of two or more of the modules or units described above may be embodied in a single module or unit; conversely, the features and functions of a single module or unit described above may be further divided among multiple modules or units.
Furthermore, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that various aspects of the present invention may be implemented as a system, a method, or a program product. Accordingly, various aspects of the present invention may take the form of a purely hardware embodiment, a purely software embodiment (including firmware, microcode, and the like), or an embodiment combining hardware and software aspects, which may be collectively referred to herein as a "circuit", "module", or "system".
An electronic device 500 according to such an embodiment of the present invention is described below with reference to FIG. 5. The electronic device 500 shown in FIG. 5 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in FIG. 5, the electronic device 500 takes the form of a general-purpose computing device. The components of the electronic device 500 may include, but are not limited to, at least one processing unit 510, at least one storage unit 520, a bus 530 connecting the different system components (including the storage unit 520 and the processing unit 510), and a display unit 540.
The storage unit stores program code that can be executed by the processing unit 510, so that the processing unit 510 performs the steps according to various exemplary embodiments of the present invention described in the "Exemplary Method" section of this specification. For example, the processing unit 510 may perform steps S110 to S140 shown in FIG. 1.
The storage unit 520 may include readable media in the form of volatile memory, such as a random access memory (RAM) 5201 and/or a cache memory 5202, and may further include a read-only memory (ROM) 5203.
The storage unit 520 may also include a program/utility 5204 having a set of (at least one) program modules 5205, such program modules 5205 including, but not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
The bus 530 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 500 may also communicate with one or more external devices 570 (such as a keyboard, a pointing device, or a Bluetooth device), with one or more devices that enable a user to interact with the electronic device 500, and/or with any device (such as a router or a modem) that enables the electronic device 500 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 550. In addition, the electronic device 500 may communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 560. As shown, the network adapter 560 communicates with the other modules of the electronic device 500 through the bus 530. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 500, including, but not limited to, microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
From the description of the above embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Accordingly, the technical solutions according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored on a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, or the like) or on a network, and which includes a number of instructions for causing a computing device (which may be a personal computer, a server, a terminal device, a network device, or the like) to perform the methods according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium is also provided, on which a program product capable of implementing the above method of this specification is stored. In some possible embodiments, various aspects of the present invention may also be implemented in the form of a program product including program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to various exemplary embodiments of the present invention described in the "Exemplary Method" section of this specification.
Referring to FIG. 6, a program product 600 for implementing the above method according to an embodiment of the present invention is described; it may take the form of a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that contains or stores a program which can be used by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which readable program code is carried. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code contained on a readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical cable, RF, or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computing device (for example, through the Internet using an Internet service provider).
Furthermore, the above drawings are merely schematic illustrations of the processing included in the methods according to the exemplary embodiments of the present invention and are not intended to be limiting. It is easy to understand that the processing shown in the above drawings does not indicate or limit the chronological order of these processes; it is also easy to understand that these processes may be performed, for example, synchronously or asynchronously in multiple modules.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include such departures from the present disclosure as come within common knowledge or customary technical means in the art. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the claims.
It should be understood that the present disclosure is not limited to the precise structures that have been described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
INDUSTRIAL APPLICABILITY
On the one hand, comparing the voice instruction information with the user behavior features makes it possible to determine accurately whether the user behavior is consistent with the voice instruction information, so that the degree of attention can be inferred accurately; on the other hand, analyzing the duration of the user behavior features within the specified time period allows the user behavior features to be scored in proportion to that duration, which makes the statistics of user attention more precise.

Claims (13)

  1. An attention detection method based on behavior feature comparison, characterized in that the method comprises:
    a video information acquisition step: acquiring video information collected by a video capture device, the video information comprising user behavior information and voice instruction information;
    a behavior feature recognition step: recognizing the voice instruction information, and analyzing user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized;
    a behavior feature comparison step: searching a pre-established standard behavior feature model for a standard behavior feature corresponding to the current voice instruction information, and comparing the user behavior features with the standard behavior feature;
    an attention scoring step: scoring user behavior features that are inconsistent with the standard behavior feature according to a preset scoring standard, counting the scoring results over all time periods of the video information, and generating a teaching attention degree of the video information according to the scoring results.
  2. The method according to claim 1, characterized in that the behavior feature recognition step comprises:
    after analyzing the user behavior features in the user behavior information within the specified time period, taking user behavior features that are identical for more than a preset number of users within the specified time period as to-be-referenced standard behavior features;
    the method further comprising:
    training the pre-established standard behavior feature model according to the to-be-referenced standard behavior features.
  3. The method according to claim 1, characterized in that comparing the user behavior features with the standard behavior feature comprises:
    checking whether, within the specified time period after the voice instruction information is recognized, the user behavior information in the video information contains the standard behavior feature corresponding to the voice instruction information, and if so, determining whether the duration for which the user behavior feature is held satisfies a preset time range of the standard behavior feature.
  4. The method according to claim 3, characterized in that the method further comprises a scoring standard generation step, the scoring standard generation step comprising:
    if the duration for which the user behavior feature is held satisfies the preset time range of the standard behavior feature, awarding full marks to the user behavior feature of the corresponding user for the specified time period.
  5. The method according to claim 3, characterized in that the method further comprises a scoring standard generation step, the scoring standard generation step comprising:
    if, within the specified time period after the voice instruction information is recognized, the user behavior information in the video information does not contain the standard behavior feature corresponding to the voice instruction information, awarding zero marks to the user behavior feature of the corresponding user for the specified time period.
  6. The method according to claim 1, characterized in that the user behavior information comprises a head rotation behavior and an eye focus behavior of the user, and the behavior feature recognition step comprises:
    extracting a head movement direction and a movement angle from the user's head rotation behavior, and an eye focus state from the user's eye focus behavior;
    taking the head movement direction, the movement angle, and the eye focus state as the user behavior features.
  7. The method according to claim 6, characterized in that the eye focus behavior comprises closed-eye information of the user's eyes, and the behavior feature recognition step further comprises:
    counting a closed-eye duration of the user according to the closed-eye information of the user's eyes.
  8. The method according to claim 7, characterized in that the method further comprises a scoring standard generation step, the scoring standard generation step comprising:
    if, within the specified time period, the closed-eye duration determined from the user's eye focus behavior is greater than a preset duration, awarding zero marks to the user behavior feature of the corresponding user for the specified time period.
  9. The method according to claim 1, characterized in that the user behavior acquisition step comprises:
    acquiring video information that contains the user behavior information of multiple users and is collected by the same video capture device; or
    acquiring video information that contains the user behavior information of multiple users and is collected by one or more video capture devices.
  10. The method according to claim 9, characterized in that the method further comprises:
    after the user behavior information of multiple users is collected, scoring the user behavior features corresponding to each user, and generating a teaching attention degree corresponding to each user according to each user behavior feature score;
    ranking the teaching attention degrees of the users, and sending the attention ranking to a specified object.
  11. An attention detection apparatus based on behavior feature comparison, characterized in that the apparatus comprises:
    a video information acquisition module, configured to acquire video information collected by a video capture device, the video information comprising user behavior information and voice instruction information;
    a behavior feature recognition module, configured to recognize the voice instruction information and analyze user behavior features in the user behavior information within a specified time period after the voice instruction information is recognized;
    a behavior feature comparison module, configured to search a pre-established standard behavior feature model for a standard behavior feature corresponding to the current voice instruction information, and to compare the user behavior features with the standard behavior feature;
    an attention scoring module, configured to score user behavior features that are inconsistent with the standard behavior feature according to a preset scoring standard, to count the scoring results over all time periods of the video information, and to generate a teaching attention degree of the video information according to the scoring results.
  12. An electronic device, characterized by comprising:
    a processor; and
    a memory storing computer-readable instructions which, when executed by the processor, implement the method according to any one of claims 1 to 9.
  13. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 9.
PCT/CN2018/092786 2018-05-17 2018-06-26 Method and apparatus for detecting degree of attention based on comparison of behavior characteristics WO2019218427A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810476073.6 2018-05-17
CN201810476073.6A CN108875785B (en) 2018-05-17 2018-05-17 Attention degree detection method and device based on behavior feature comparison

Publications (1)

Publication Number Publication Date
WO2019218427A1 true WO2019218427A1 (en) 2019-11-21

Family

ID=64334562

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/092786 WO2019218427A1 (en) 2018-05-17 2018-06-26 Method and apparatus for detecting degree of attention based on comparison of behavior characteristics

Country Status (2)

Country Link
CN (1) CN108875785B (en)
WO (1) WO2019218427A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414838A (en) * 2020-03-16 2020-07-14 北京文香信息技术有限公司 Attention detection method, device, system, terminal and storage medium
CN113033329A (en) * 2021-03-04 2021-06-25 深圳市鹰硕技术有限公司 Method and device for judging abnormal answer of question in online education
CN114913974A (en) * 2022-05-10 2022-08-16 上海市东方医院(同济大学附属东方医院) Delirium evaluation method, delirium evaluation device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025616A (en) * 2017-05-08 2017-08-08 湖南科乐坊教育科技股份有限公司 A kind of childhood teaching condition detection method and its system
CN107025614A (en) * 2017-03-20 2017-08-08 广东小天才科技有限公司 Teaching efficiency detection method, system and device in a kind of live video

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408781B (en) * 2014-12-04 2017-04-05 重庆晋才富熙科技有限公司 Focus attendance checking system
CN104835356B (en) * 2015-05-31 2016-07-13 深圳市采集科技有限公司 A kind of student pays attention to the class measuring method and the system of focus
CN106228293A (en) * 2016-07-18 2016-12-14 重庆中科云丛科技有限公司 teaching evaluation method and system
CN106250822A (en) * 2016-07-21 2016-12-21 苏州科大讯飞教育科技有限公司 Student's focus based on recognition of face monitoring system and method
CN106851216B (en) * 2017-03-10 2019-05-28 山东师范大学 A kind of classroom behavior monitoring system and method based on face and speech recognition
CN107369341A (en) * 2017-06-08 2017-11-21 深圳市科迈爱康科技有限公司 Educational robot
CN107609517B (en) * 2017-09-15 2020-10-30 华中科技大学 Classroom behavior detection system based on computer vision

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025614A (en) * 2017-03-20 2017-08-08 广东小天才科技有限公司 Teaching efficiency detection method, system and device in a kind of live video
CN107025616A (en) * 2017-05-08 2017-08-08 湖南科乐坊教育科技股份有限公司 A kind of childhood teaching condition detection method and its system

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104979A (en) * 2019-12-18 2020-05-05 北京思维造物信息科技股份有限公司 Method, device and equipment for generating user behavior value evaluation model
CN111104979B (en) * 2019-12-18 2023-08-01 北京思维造物信息科技股份有限公司 Method, device and equipment for generating user behavior value evaluation model
CN111144321A (en) * 2019-12-28 2020-05-12 北京儒博科技有限公司 Concentration degree detection method, device, equipment and storage medium
CN111144321B (en) * 2019-12-28 2023-06-09 北京如布科技有限公司 Concentration detection method, device, equipment and storage medium
CN111796752A (en) * 2020-05-15 2020-10-20 四川科华天府科技有限公司 Interactive teaching system based on PC
CN111796752B (en) * 2020-05-15 2022-11-15 四川科华天府科技有限公司 Interactive teaching system based on PC
CN112306832A (en) * 2020-10-27 2021-02-02 北京字节跳动网络技术有限公司 User state response method and device, electronic equipment and storage medium
CN113409822A (en) * 2021-05-31 2021-09-17 青岛海尔科技有限公司 Object state determination method and device, storage medium and electronic device
CN113409822B (en) * 2021-05-31 2023-06-20 青岛海尔科技有限公司 Object state determining method and device, storage medium and electronic device
CN113762803A (en) * 2021-09-18 2021-12-07 陕西师范大学 Attention validity evaluation method, system and device
CN114971425A (en) * 2022-07-27 2022-08-30 深圳市必提教育科技有限公司 Database information monitoring method, device, equipment and storage medium
CN114971425B (en) * 2022-07-27 2022-10-21 深圳市必提教育科技有限公司 Database information monitoring method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN108875785B (en) 2021-04-06
CN108875785A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
WO2019218427A1 (en) Method and apparatus for detecting degree of attention based on comparison of behavior characteristics
US9754503B2 (en) Systems and methods for automated scoring of a user's performance
WO2019196205A1 (en) Foreign language teaching evaluation information generating method and apparatus
WO2018161917A1 (en) Intelligent scoring method and apparatus, computer device, and computer-readable medium
US11372942B2 (en) Method, apparatus, computer device and storage medium for verifying community question answer data
US10546508B2 (en) System and method for automated literacy assessment
CN111027486A (en) Auxiliary analysis and evaluation system and method for big data of teaching effect of primary and secondary school classroom
CN108898115B (en) Data processing method, storage medium and electronic device
CN111475627B (en) Method and device for checking solution deduction questions, electronic equipment and storage medium
WO2020007097A1 (en) Data processing method, storage medium and electronic device
TW202117616A (en) Adaptability job vacancies matching system and method
WO2019174150A1 (en) Method and apparatus for detecting difficult points in network teaching contents
CN111428686A (en) Student interest preference evaluation method, device and system
CN110546678B (en) Computationally derived assessment in a child education system
CN113033329A (en) Method and device for judging abnormal answer of question in online education
KR20210134614A (en) Data processing methods and devices, electronic devices and storage media
WO2021174829A1 (en) Crowdsourced task inspection method, apparatus, computer device, and storage medium
US10915819B2 (en) Automatic real-time identification and presentation of analogies to clarify a concept
CN114021962A (en) Teaching evaluation method, evaluation device and related equipment and storage medium
Jiang et al. A classroom concentration model based on computer vision
CN111507555B (en) Human body state detection method, classroom teaching quality evaluation method and related device
CN111199378A (en) Student management method, student management device, electronic equipment and storage medium
CN113842113A (en) Developing reading disorder intelligent identification method, system, equipment and storage medium
CN112446360A (en) Target behavior detection method and device and electronic equipment
Rincon et al. A context-aware baby monitor for the automatic selective archiving of the language of infants

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18918551

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 16.04.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18918551

Country of ref document: EP

Kind code of ref document: A1