CN111179133B - Intelligent classroom interaction system - Google Patents

Intelligent classroom interaction system

Info

Publication number
CN111179133B
CN111179133B (application CN201911399319.5A)
Authority
CN
China
Prior art keywords
layer
state
recognition model
analysis module
module
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN201911399319.5A
Other languages
Chinese (zh)
Other versions
CN111179133A (en)
Inventor
陈海龙 (Chen Hailong)
Current Assignee (the listed assignees may be inaccurate)
Smart Campus Guangdong Education Technology Co Ltd
Original Assignee
Smart Campus Guangdong Education Technology Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Smart Campus Guangdong Education Technology Co Ltd filed Critical Smart Campus Guangdong Education Technology Co Ltd
Priority to CN201911399319.5A priority Critical patent/CN111179133B/en
Publication of CN111179133A publication Critical patent/CN111179133A/en
Application granted granted Critical
Publication of CN111179133B publication Critical patent/CN111179133B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70 Multimodal biometrics, e.g. combining information from different biometric modalities
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

The invention provides an intelligent classroom interaction system, characterized by comprising: an image acquisition module that uses a plurality of cameras to acquire images covering at least the classroom and including all students; a student identification module that detects and identifies students in the images acquired by the image acquisition module; an individual learning state analysis module that uses the identification results to analyze each student's individual learning effect on the current knowledge point, expressed as a score; a class overall learning state analysis module that comprehensively analyzes the results of the individual learning state analysis module to obtain the learning effect of the whole class, likewise expressed as a score; and an effect feedback display module that displays the results of the individual learning state analysis module and the class overall learning state analysis module on a display screen, so that teaching can be dynamically adjusted.

Description

Intelligent classroom interaction system
Technical Field
The invention provides an intelligent classroom interaction system, and relates to the technical fields of face recognition, state recognition, expression recognition, positioning detection, network interaction and the like.
Background
At present, in teaching practice, teachers usually check teaching effect by oral questioning or by examinations. However, oral questioning does not necessarily elicit accurate feedback from students, while examinations impose a large workload, involve processing steps such as marking and statistical analysis, suffer from lag, and make it difficult to assess classroom teaching effect in real time.
With the development of computer vision technology, cameras have been introduced into classrooms in many schools so that parents can check on their children at any time through a mobile phone or a personal PC, observing, for example, their learning state in the classroom or their activity outdoors. However, what parents see is a picture of the whole classroom or sports field, in which it is difficult to locate their own child accurately; on a sports field in particular, the students are numerous and in motion, making the child even harder to find. Since what parents care about is the learning or exercise state of their own child, current access systems cannot meet this requirement.
The invention can at least solve the following technical problems:
1. Cameras installed at multiple locations in a school identify each student in real time through face recognition; positioning detection and recognition technology then analyzes each student's classroom state; the student's grasp of the knowledge point being taught is analyzed from information such as expression and state and fed back to the teacher in real time, and the teacher uses the data analyzed by the system to adjust the teaching of that knowledge point.
2. An electronic terminal helps parents focus on their own child: it automatically detects and identifies the child, and switching and magnification instructions let parents directly observe the child's learning or exercise state.
The main innovations of the invention are as follows:
1. The invention is an original application of computer recognition technology to the continuous improvement of teaching effect: an image acquisition module acquires images; a student recognition module performs face recognition, expression recognition and state recognition of students; an individual learning state analysis module analyzes each student's individual learning effect on the current knowledge point; a class overall learning state analysis module derives the learning effect of the whole class; and an effect feedback display module enables dynamic adjustment of teaching.
2. A detection process automatically detects student positions: using the proposed detection function (its formula is rendered as an image in the original), the target object is detected automatically and the student objects to be identified are then segmented.
3. An individual learning state analysis model is provided: a MASK-RCNN face recognition model is cascaded with an expression recognition model and a state recognition model to analyze the individual learning state, and the proposed loss function (its formula is rendered as an image in the original) continuously improves recognition accuracy.
4. An SPSS neural network model is adopted as the expression recognition model, with the pooling layer defined by:
S_e = f(e^(log w) + φ(J_e))
(the accompanying definition of φ is rendered as an image in the original). This improves the operating efficiency of the system and the quality of real-time processing and display.
Disclosure of Invention
The invention aims to provide an intelligent classroom interaction system, which is characterized by comprising the following components:
the image acquisition module, which uses a plurality of cameras to acquire images covering at least the classroom and including all students;
the student identification module, which performs student positioning detection on the images acquired by the image acquisition module, marks each detected student with a rectangular frame, and performs face recognition, expression recognition and state recognition on the detected students;
the individual learning state analysis module, which obtains each student's student number and name from the face recognition result and then, based on the expression recognition and state recognition results, analyzes each student's individual learning effect on the current knowledge point, expressed as a score;
the class overall learning state analysis module, which comprehensively analyzes the results of the individual learning state analysis module to obtain the learning effect of the whole class, likewise expressed as a score;
and the effect feedback display module, which displays the results of the individual learning state analysis module and the class overall learning state analysis module on a display screen, guiding the teacher to speed up, slow down or re-explain the knowledge point currently being taught, or to explain it by means of augmented reality, so that teaching is dynamically adjusted.
The invention further provides an electronic terminal that can remotely access the intelligent classroom interaction system through an APP. The APP automatically detects the child according to the account registered by the parent; a magnification instruction enlarges the originally acquired low-resolution image to high definition through an image magnification algorithm, yielding an undistorted high-definition image, so that the child's performance in the classroom or elsewhere (such as a sports area) can be watched.
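The patent does not disclose its image magnification algorithm. As a minimal stand-in, the following sketch upscales a grayscale frame with bilinear interpolation; the function name and the choice of interpolation are assumptions, and a deployed system would more plausibly use a learned super-resolution model:

```python
import numpy as np

def upscale_bilinear(img: np.ndarray, factor: int) -> np.ndarray:
    """Upscale a 2-D grayscale image by `factor` using bilinear interpolation.

    Illustrative stand-in for the patent's unspecified magnification algorithm.
    """
    h, w = img.shape
    new_h, new_w = h * factor, w * factor
    # Map each output pixel back to fractional coordinates in the source image.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :]   # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Bilinear interpolation is distortion-free only in the sense of introducing no new artifacts; it cannot recover detail, which is why the hedged note above points to learned super-resolution for real use.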
The invention also proposes a program medium storing a computer program implementing the following functions:
the image acquisition module, which uses a plurality of cameras to acquire images covering at least the classroom and including all students;
the student identification module, which performs student positioning detection on the images acquired by the image acquisition module, marks each detected student with a rectangular frame, and performs face recognition, expression recognition and state recognition on the detected students;
the individual learning state analysis module, which obtains each student's student number and name from the face recognition result and then, based on the expression recognition and state recognition results, analyzes each student's individual learning effect on the current knowledge point, expressed as a score;
the class overall learning state analysis module, which comprehensively analyzes the results of the individual learning state analysis module to obtain the learning effect of the whole class, likewise expressed as a score;
and the effect feedback display module, which displays the results of the individual learning state analysis module and the class overall learning state analysis module on a display screen, guiding the teacher to speed up, slow down or re-explain the knowledge point currently being taught, or to explain it by means of augmented reality, so that teaching is dynamically adjusted.
The beneficial effects of the invention are as follows: based on the recognition technology adopted, the invention completes the positioning detection of students and analyzes the learning effect of individual students and of the class as a whole, so that the lecturing mode can be adjusted dynamically.
The invention achieves high-precision detection and identification of students, offers extremely high real-time performance, and is convenient for real-time APP access.
Drawings
FIG. 1 is a functional diagram of an intelligent classroom interaction system;
fig. 2 is a schematic structural diagram of an individual learning state analysis module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and embodiments.
As shown in fig. 1, the present invention provides an intelligent classroom interaction system, which includes:
the image acquisition module, which uses a plurality of cameras to acquire images covering at least the classroom and including all students;
the student identification module, which performs student positioning detection on the images acquired by the image acquisition module, marks each detected student with a rectangular frame, and performs face recognition, expression recognition and state recognition on the detected students;
the individual learning state analysis module, which obtains each student's student number and name from the face recognition result and then, based on the expression recognition and state recognition results, analyzes each student's individual learning effect on the current knowledge point, expressed as a score;
the class overall learning state analysis module, which comprehensively analyzes the results of the individual learning state analysis module to obtain the learning effect of the whole class, likewise expressed as a score;
and the effect feedback display module, which displays the results of the individual learning state analysis module and the class overall learning state analysis module on a display screen, guiding the teacher to speed up, slow down or re-explain the knowledge point currently being taught, or to explain it by means of augmented reality, so that teaching is dynamically adjusted.
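Beyond stating that results are "expressed in a score form", the patent does not publish its scoring rules. The sketch below is a hypothetical wiring of the individual and class overall learning state analysis modules: the score tables and the equal-weight average are invented placeholders for the undisclosed analysis, shown only to make the data flow concrete:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class StudentObservation:
    student_id: str
    expression: str   # e.g. "understanding", "confusion"
    head_state: str   # e.g. "nod", "head_down"

# Illustrative score tables; the patent does not disclose its actual mapping.
EXPRESSION_SCORE = {"understanding": 90, "calm": 70, "confusion": 40, "anxiety": 30}
STATE_SCORE = {"nod": 90, "head_up": 70, "head_down": 40, "shake": 30}

def individual_score(obs: StudentObservation) -> float:
    """Combine expression and state recognition into one per-student score."""
    return 0.5 * EXPRESSION_SCORE.get(obs.expression, 50) + \
           0.5 * STATE_SCORE.get(obs.head_state, 50)

def class_score(observations: list) -> float:
    """Class overall learning effect as the mean of the individual scores."""
    return mean(individual_score(o) for o in observations)
```

A real implementation would learn or tune these weights; the equal 0.5/0.5 split is purely a readability choice.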
Preferably, the face recognition is performed by a MASK-RCNN-based face recognition model comprising a convolutional layer, an RPN network layer, a RoIAlign layer and an output layer. The convolutional layer performs convolution on the input image using 5×5 convolution kernels; the RPN network layer screens candidate regions; the RoIAlign layer extracts a feature map of a specified size from the selected ROI; and the feature map is recognized and output, with the output result mapped to the corresponding student number and name. The face recognition model may alternatively be another form of neural network model or an SVM model.
Preferably, in the MASK-RCNN model, recognition accuracy is continuously improved by a loss function (its formula is rendered as an image in the original), where N is the number of training samples; θ_{y_i,i} is the angle between sample x_i and the weight of its label y_i; θ_{j,i} is the angle between sample x_i and the weight of output node j; m is a preset parameter with 2 ≤ m ≤ 5; and k = abs(sign(cos θ_{j,i})). Other prior-art loss functions may also be employed.
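Only fragments of the loss are legible in the text: the margin parameter m with 2 ≤ m ≤ 5 and the term k = abs(sign(cos θ_{j,i})). The sketch below implements just those legible pieces in the style of an angular-margin (A-Softmax-like) loss, which the symbols suggest but the patent, whose full formula is an image, does not confirm:

```python
import numpy as np

def k_term(theta):
    """k = abs(sign(cos(theta))): equals 1 unless cos(theta) is exactly zero."""
    return np.abs(np.sign(np.cos(theta)))

def margin_target_logit(theta_y: float, m: int) -> float:
    """Angular-margin target logit cos(m * theta_y) with the patent's 2 <= m <= 5.

    Assumption: the image-only formula resembles an A-Softmax-style loss;
    only m's range and the k term are actually legible in the text.
    """
    if not 2 <= m <= 5:
        raise ValueError("the patent requires 2 <= m <= 5")
    return float(np.cos(m * theta_y))
```

Larger m enforces a larger angular margin between classes, which is the usual mechanism for "continuously improving recognition accuracy" in this family of losses.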
As shown in fig. 2, preferably, the individual learning state analysis module is implemented by an individual learning state model in which a face recognition model, an expression recognition model and a state recognition model are cascaded to form a multi-level dense neural network. The expression recognition model is cascaded directly onto the face recognition model; the state recognition model is likewise cascaded directly onto the face recognition model and also receives input from the image acquisition module. The input of the expression recognition model comes from any one of the convolutional layer, the RPN network layer and the RoIAlign layer of the cascaded face recognition model; the input of the state recognition model comes from the convolutional layer of the face recognition model and also from the original image obtained by the image acquisition module.
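The cascade amounts to one shared convolutional feature computation feeding two branches, with the state branch additionally seeing the raw frame. The sketch below shows only that data flow; every function body is an invented placeholder, not the patent's actual networks:

```python
import numpy as np

def conv_features(raw: np.ndarray) -> np.ndarray:
    """Stand-in for the face recognition model's convolutional layer output."""
    return raw / 255.0  # placeholder normalisation, not an actual convolution

def expression_branch(shared: np.ndarray) -> str:
    """Expression model fed from a layer of the cascaded face model."""
    return "understanding" if shared.mean() > 0.5 else "calm"

def state_branch(shared: np.ndarray, raw: np.ndarray) -> str:
    """State model fed from both the shared conv features and the raw frame."""
    return "nod" if raw.std() > shared.std() else "head_up"

def individual_state(raw: np.ndarray) -> tuple:
    """Shared features are computed once, then reused by both branches."""
    shared = conv_features(raw)
    return expression_branch(shared), state_branch(shared, raw)
```

The design point the cascade buys is that the expensive convolutional features are computed once per frame and reused, rather than running three independent networks over the same pixels.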
Preferably, the expression recognition model is an SPSS neural network model comprising an input layer, 4 convolutional layers, a pooling layer, a fully connected layer and an output layer; the numbers of convolution kernels in the first, second, third and fourth convolutional layers are 32, 64 and 32, and the pooling layer is defined by:
S_e = f(e^(log w) + φ(J_e))
(a second formula is rendered as an image in the original), where S_e denotes the output of the current layer, J_e the input of the loss function, f(·) the activation function, w the weight of the current layer, φ the loss function, and S_{e-1} the output of the previous layer, with the remaining symbol denoting a constant.
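Read literally, the legible pooling equation is S_e = f(e^(log w) + φ(J_e)), and e^(log w) reduces to w for positive weights. The sketch below evaluates that equation under assumed choices: f as ReLU and φ as tanh, since φ's definition appears only as an image in the original:

```python
import numpy as np

def spss_pooling(J_e: np.ndarray, w: float,
                 f=lambda x: np.maximum(x, 0.0),   # assumed activation: ReLU
                 phi=np.tanh) -> np.ndarray:       # assumed transform: tanh
    """Literal evaluation of S_e = f(e^(log w) + phi(J_e)).

    Note e^(log w) simplifies to w for w > 0, so the term behaves as a
    per-layer bias added to the transformed pooling input.
    """
    if w <= 0:
        raise ValueError("log w requires w > 0")
    return f(np.exp(np.log(w)) + phi(J_e))
```

Because e^(log w) is algebraically just w, any efficiency claim must rest on the image-only part of the definition, which this sketch cannot reproduce.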
Preferably, the kinds of expression at least comprise understanding, confusion, anxiety, fidgeting, resistance, aversion, liking and calm.
Preferably, the types of state recognition at least comprise a head state, a hand state, a shoulder state and an eye state, the head state at least comprising nodding, head shaking, head raising and head lowering.
The invention further provides an electronic terminal that can remotely access the intelligent classroom interaction system through an APP. The APP automatically detects the child according to the account registered by the parent; a magnification instruction enlarges the originally acquired low-resolution image to high definition through an image magnification algorithm, yielding an undistorted high-definition image, so that the child's performance in the classroom or elsewhere (such as a sports area) can be watched.
Preferably, the specific process of the automatic detection is as follows:
Step 1: define the detection function (its formula is rendered as an image in the original), where K is a two-dimensional Gaussian kernel function, φ is the level-set function, g(|∇I|, m) is the image correction function, |∇I| is the image gradient modulus, m, λ, α and β are constants, I is the image to be processed, and W is the global energy correction term;
Step 2: apply level-set evolution and factorization to the resulting model using the Euler-Lagrange equation to obtain a new evolution equation;
Step 3: use a convergence criterion as the termination condition: if it is satisfied, stop iterating; otherwise, return to Step 2;
Step 4: obtain the segmented child detection image from the resulting evolution equation.
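Steps 1 to 4 describe a level-set segmentation loop whose exact energy appears only as an image. As an illustrative stand-in, the sketch below evolves a level-set function under a generic edge-stopped curvature flow with the Step 3 convergence test; the initialisation, the choice of g, and the update rule are all assumptions, not the patent's equation:

```python
import numpy as np

def detect_level_set(image: np.ndarray, n_iter: int = 200, dt: float = 0.1,
                     tol: float = 1e-4) -> np.ndarray:
    """Schematic of Steps 1-4: evolve a level-set function until convergence.

    Stand-in energy: edge-stopped mean-curvature flow. Returns a boolean
    mask (phi > 0) for the detected object region.
    """
    gy, gx = np.gradient(image.astype(float))
    g = 1.0 / (1.0 + gx**2 + gy**2)        # edge-stopping function g(|grad I|)
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Initialise phi as a signed distance to a circle around the image centre.
    phi = min(h, w) / 3.0 - np.hypot(yy - h / 2.0, xx - w / 2.0)
    for _ in range(n_iter):
        py, px = np.gradient(phi)
        norm = np.hypot(px, py) + 1e-8
        # Curvature kappa = div(grad phi / |grad phi|).
        kyy, _ = np.gradient(py / norm)
        _, kxx = np.gradient(px / norm)
        update = dt * g * (kxx + kyy) * norm
        phi += update
        if np.max(np.abs(update)) < tol:   # Step 3 convergence criterion
            break
    return phi > 0                          # Step 4: segmented region
```

With dt = 0.1 on a unit grid the explicit update stays within the usual stability bound for curvature flow; a production implementation would also periodically reinitialise phi as a signed distance function.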
Preferably, W is defined by a formula rendered as an image in the original, in which Ω represents the currently displayed area.
The present application also proposes a computer readable medium storing computer program instructions capable of performing the functions of the intelligent classroom interaction system proposed by the present invention.
In the description herein, references to the description of "one embodiment," "an example," "a specific example" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the C programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description presents only preferred embodiments of the invention and is not intended to limit its scope; all equivalent-structure and equivalent-process modifications made using the contents of this specification and the accompanying drawings, and all direct or indirect applications in other related fields, fall within the scope of the invention. The disclosed preferred embodiments are illustrative only and are not exhaustive; many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to make best use of it. The invention is limited only by the claims and their full scope and equivalents.

Claims (7)

1. An intelligent classroom interaction system, comprising:
an image acquisition module;
the student identification module, which marks each detected student with a rectangular frame and performs face recognition, expression recognition and state recognition on the detected students;
an individual learning state analysis module;
the class overall learning state analysis module, which comprehensively analyzes the results of the individual learning state analysis module to obtain the learning effect of the whole class, expressed as a score;
the effect feedback display module, which displays the results of the individual learning state analysis module and the class overall learning state analysis module on a display screen, guiding the teacher to speed up, slow down or re-explain the knowledge point currently being taught, or to explain it by means of augmented reality, so that teaching is dynamically adjusted;
the system further comprising an electronic terminal through which the intelligent classroom interaction system can be accessed remotely via an APP; the APP automatically detects the child according to the account registered by the parent, and a magnification instruction enlarges the originally acquired low-resolution image to high definition through an image magnification algorithm, yielding an undistorted high-definition image, so that the child's performance in the classroom or elsewhere on campus can be watched;
the specific process of the automatic detection being as follows:
step 1: define the detection function (its formula is rendered as an image in the original), where K is a two-dimensional Gaussian kernel function, φ is the level-set function, g(|∇I|, m) is the image correction function, |∇I| is the image gradient modulus, m, λ, α and β are constants, I is the image to be processed, and W is the global energy correction term;
step 2: apply level-set evolution and factorization to the resulting model using the Euler-Lagrange equation to obtain a new evolution equation;
step 3: use a convergence criterion as the termination condition: if it is satisfied, stop iterating; otherwise, return to step 2;
step 4: obtain the segmented child detection image from the resulting evolution equation;
wherein W is defined by a formula rendered as an image in the original, in which Ω represents the currently displayed area.
2. The intelligent classroom interaction system as claimed in claim 1, wherein the face recognition is performed by a MASK-RCNN-based face recognition model comprising a convolutional layer, an RPN network layer, a RoIAlign layer and an output layer; the convolutional layer performs convolution on the input image using 5×5 convolution kernels; the RPN network layer screens candidate regions; the RoIAlign layer extracts a feature map of a specified size from the selected ROI; and the feature map is recognized and output, with the output result mapped to the corresponding student number and name.
3. The intelligent classroom interaction system as claimed in claim 2, wherein recognition accuracy of the MASK-RCNN model is continuously improved by a loss function (its formula is rendered as an image in the original), where N is the number of training samples; θ_{y_i,i} is the angle between sample x_i and the weight of its label y_i; θ_{j,i} is the angle between sample x_i and the weight of output node j; m is a preset parameter with 2 ≤ m ≤ 5; and k = abs(sign(cos θ_{j,i})).
4. The intelligent classroom interaction system as claimed in claim 2, wherein the individual learning state analysis module is implemented by an individual learning state model in which a face recognition model, an expression recognition model and a state recognition model are cascaded to form a multi-level dense neural network; the expression recognition model is cascaded directly onto the face recognition model; the state recognition model is likewise cascaded directly onto the face recognition model and also receives input from the image acquisition module; the input of the expression recognition model comes from any one of the convolutional layer, the RPN network layer and the RoIAlign layer of the cascaded face recognition model; and the input of the state recognition model comes from the convolutional layer of the face recognition model and also from the original image obtained by the image acquisition module.
5. The intelligent classroom interaction system of claim 4, wherein the expression recognition model is an SPSS neural network model comprising an input layer, 4 convolutional layers, a pooling layer, a fully-connected layer and an output layer; the numbers of convolution kernels of the first, second, third and fourth convolutional layers are 32, 64 and 32, and the pooling method of the pooling layer is:

[pooling equation, rendered in the original as image FDA0002626126420000023]

wherein S_e denotes the output of the current layer, J_e the input of the loss function, f(·) an activation function, w the weight of the current layer, φ the loss function, and S_{e-1} the output of the previous layer; the remaining term denotes a constant.
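One plausible reading of claim 5's prose (the formula itself survives only as an image) is S_e = f(w · pool(S_{e-1}) + constant): an average pooling of the previous layer's output, scaled by a per-layer weight, shifted by a constant, and passed through an activation. A sketch under that assumption — not the patented formula:

```python
import numpy as np

def weighted_pooling(S_prev, w=1.0, b=0.0, k=2):
    """Assumed form of the claimed pooling: S_e = f(w * pool(S_{e-1}) + b),
    with f a ReLU activation, w a per-layer weight, b a constant, and
    pool a k x k average pooling over the previous layer's output."""
    H, W = S_prev.shape
    H, W = H - H % k, W - W % k                      # crop to a multiple of k
    blocks = S_prev[:H, :W].reshape(H // k, k, W // k, k)
    pooled = blocks.mean(axis=(1, 3))                # k x k average pooling
    return np.maximum(0.0, w * pooled + b)           # ReLU as the activation f
```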
6. The intelligent classroom interaction system as claimed in any one of claims 1-5, wherein the categories of expressions include at least understanding, confusion, anxiety, fidget, conflict, aversion, likes, and calm.
7. The intelligent classroom interaction system of claim 6 wherein the types of state identification include at least head state, hand state, shoulder state, and eye state, the head state including at least nod, shake, head up, and head down.
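The category sets enumerated in claims 6 and 7 amount to two small label taxonomies; they can be captured as plain enumerations (the Python names below are illustrative, not from the patent):

```python
from enum import Enum

class Expression(Enum):
    """The at-least-eight expression categories of claim 6."""
    UNDERSTANDING = "understanding"
    CONFUSION = "confusion"
    ANXIETY = "anxiety"
    FIDGET = "fidget"
    CONFLICT = "conflict"
    AVERSION = "aversion"
    LIKES = "likes"
    CALM = "calm"

class HeadState(Enum):
    """Head states of claim 7; hand, shoulder and eye states would be
    analogous enumerations."""
    NOD = "nod"
    SHAKE = "shake"
    HEAD_UP = "head_up"
    HEAD_DOWN = "head_down"
```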
CN201911399319.5A 2019-12-30 2019-12-30 Wisdom classroom interaction system Active CN111179133B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911399319.5A CN111179133B (en) 2019-12-30 2019-12-30 Wisdom classroom interaction system


Publications (2)

Publication Number Publication Date
CN111179133A CN111179133A (en) 2020-05-19
CN111179133B true CN111179133B (en) 2020-09-25

Family

ID=70658305


Country Status (1)

Country Link
CN (1) CN111179133B (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102426700B (en) * 2011-11-04 2013-10-16 西安电子科技大学 Level set SAR image segmentation method based on local and global area information
CN206773760U (en) * 2017-01-21 2017-12-19 杭州科技职业技术学院 A kind of classroom instruction management system
CN107424153B (en) * 2017-04-18 2020-08-14 辽宁科技大学 Face segmentation method based on deep learning and level set
CN109341763B (en) * 2018-10-10 2020-02-04 广东长盈科技股份有限公司 Transportation data acquisition system and method based on Internet of things
CN109522815B (en) * 2018-10-26 2021-01-15 深圳博为教育科技有限公司 Concentration degree evaluation method and device and electronic equipment
CN109977903B (en) * 2019-04-03 2020-03-17 珠海读书郎网络教育有限公司 Method and device for intelligent classroom student management and computer storage medium
CN110046581A (en) * 2019-04-18 2019-07-23 广东德融汇科技有限公司 A kind of campus wisdom classroom system and shooting classification method based on biological identification technology
CN110163145A (en) * 2019-05-20 2019-08-23 西安募格网络科技有限公司 A kind of video teaching emotion feedback system based on convolutional neural networks



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 510000 401, 4th floor, building 2, information hub building, University Town, No. 1, Mingzhi street, Xiaoguwei street, Panyu District, Guangzhou City, Guangdong Province

Patentee after: Smart campus (Guangdong) Education Technology Co.,Ltd.

Address before: 510000 room 1413, No.5, No.1, South Fifth Road, Huizhan, Haizhu District, Guangzhou City, Guangdong Province

Patentee before: Smart campus (Guangdong) Education Technology Co.,Ltd.