CN110837790B - Identification method - Google Patents

Identification method

Info

Publication number
CN110837790B
CN110837790B (application CN201911061797.5A)
Authority
CN
China
Prior art keywords
touch
user
processor
information
panel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911061797.5A
Other languages
Chinese (zh)
Other versions
CN110837790A (en)
Inventor
Tian Xuesong (田雪松)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Yundi Technology Co ltd
Original Assignee
Guangzhou Yundi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Yundi Technology Co ltd
Priority to CN201911061797.5A
Publication of CN110837790A
Application granted
Publication of CN110837790B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 - Services
    • G06Q 50/20 - Education
    • G06Q 50/205 - Education administration or guidance
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Educational Administration (AREA)
  • General Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to an identification method comprising the following steps: a touch unit identifies a touch operation and generates touch information from it; when the touch duration exceeds a first duration and the touch coordinates are in a first set area, a processor determines the panel mode; an image acquisition unit acquires gesture information and face information; the processor determines user information from the face information, determines a first class of users from the gesture information, and outputs the user information for display on a display unit; the touch unit identifies a selection touch operation and determines selection touch coordinates from it; the processor determines a first user from the selection touch coordinates and performs positioning processing on the first user's face information to generate positioning coordinates; the image acquisition unit captures video image data according to the positioning coordinates; the touch unit receives scoring data input by a second user on the basis of the video image data, and the processor generates classroom quality scoring data from the scoring data and the first user's user ID.

Description

Identification method
Technical Field
The invention relates to the technical field of information, in particular to an identification method.
Background
In the conventional teaching process, a teacher lectures to many students at once and cannot attend to the state of every student in the classroom. Even in schools that have reduced class sizes, student behavior in the classroom can still go unnoticed. Moreover, students' classroom quality is mostly evaluated subjectively, without big data as a basis, so their learning quality cannot be analyzed objectively.
In addition, the teaching board in a traditional classroom serves only as a writing surface; the writing on it can be neither recorded nor controlled, which does not help improve teaching quality.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an identification method that digitizes students' classroom state, scores students objectively on the basis of data, and recognizes gesture instructions from user operations, thereby enabling intelligent control of blackboard writing and making the classroom more engaging.
In order to achieve the above object, the present invention provides an identification method, including:
a touch unit of the panel identifies a touch operation, generates touch information according to the touch operation and sends the touch information to a processor; the touch information comprises a touch duration and touch coordinates;
when the touch duration exceeds a first duration and the touch coordinates are in a first set area, the processor determines a panel mode according to the touch duration and the touch coordinates;
the image acquisition unit of the panel acquires gesture information and face information of users in an image acquisition area according to a first image acquisition parameter and sends the gesture information and the face information to the processor;
the processor determines user information of a user according to the face information; the user information includes a user ID;
the processor determines a first class of users according to the gesture information, and outputs and displays the user IDs of the first class of users on a display unit of the panel;
the touch unit identifies a selection touch operation of a second user and determines selection touch coordinates according to the selection touch operation;
the processor determines a selected first user among the first class of users according to the selection touch coordinates;
the processor performs positioning processing according to the face information of the first user to generate positioning coordinates of the first user;
the image acquisition unit changes the first image acquisition parameter according to the positioning coordinates and acquires video image data of the first user according to the changed second image acquisition parameter;
and the touch unit receives scoring data input by the second user according to the video image data and sends the scoring data to the processor, and the processor generates classroom quality scoring data of the first user according to the scoring data and the user ID of the first user.
Preferably, after the processor determines the first class of users according to the gesture information, the method further includes:
the processor acquires the accumulated activity data of each user, updates the accumulated activity data of the first class of users according to the user information determined from the face information, and adds 1 to the accumulated activity data to obtain the updated accumulated activity data.
Further preferably, the processor acquires all classroom quality scoring data and accumulated activity data of the first user and performs weighted quantification processing to generate classroom scoring data of the first user.
Preferably, the voice recognition unit picks up voice information and converts the voice information into an electric signal;
the processor performs feature extraction processing according to the electric signal to obtain a voice recognition signal, and compares the voice recognition signal with a voice model signal in a database to obtain a voice recognition instruction;
the processor executes the speech recognition instructions.
Preferably, the panel mode includes a first panel mode and a second panel mode; the determining, by the processor, of the panel mode according to the touch duration and the touch coordinates specifically includes:
the processor acquires the current panel mode;
when the touch duration exceeds the first duration and the touch coordinates are in the first set area, the processor switches the current first panel mode to the second panel mode, or the processor switches the current second panel mode to the first panel mode.
Preferably, after the image acquisition unit changes the first image acquisition parameter according to the positioning coordinates and captures video image data of the first user according to the changed second image acquisition parameter, the method further includes:
the processor outputs and displays the video image data on the display unit;
and when the second user finishes scoring the video image data, the processor stops outputting and displaying the video image data of the first user.
Preferably, the touch coordinates include start point coordinates and end point coordinates; after the touch unit of the panel identifies a touch operation and generates touch information according to the touch operation to send to the processor, the method further comprises:
the processor judges whether the start point coordinates are in a second set area; the second set area does not overlap the first set area;
when the start point coordinates of the touch operation are in the second set area, the processor determines a first touch object according to the start point coordinates;
and the display position of the first touch object is migrated from the start point coordinates to the end point coordinates.
Further preferably, the displacement between the start point coordinates and the end point coordinates is greater than a preset displacement threshold.
Further preferably, when the start point coordinates of the touch operation are outside the first set area and the second set area, the method further includes:
the processor generates a writing track according to the touch operation and uploads the writing track to a database.
Preferably, when the touch interval time is less than a preset touch interval time threshold and the touch coordinates are in the first set area, the processor closes the panel according to the touch interval time and the touch coordinates.
The identification method provided by the embodiments of the invention digitizes students' classroom state and scores students objectively on the basis of data, thereby optimizing the teaching classroom.
Drawings
Fig. 1 is a flowchart of an identification method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a setting area of a panel according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
The identification method provided by the invention digitizes the classroom state of students, scores students objectively on the basis of data, and recognizes gesture instructions from user operations, thereby enabling intelligent control of blackboard writing and making the classroom more engaging.
Fig. 1 is a flowchart of an identification method according to an embodiment of the present invention, which shows the process by which a teaching panel identifies and statistically processes gesture information. Fig. 2 is a schematic diagram of the set areas of a panel according to an embodiment of the present invention. The technical solution of the present invention is described in detail below with reference to fig. 1 and 2.
To facilitate understanding of the technical solution, the first, second and third set areas are described first: all three are located on the panel and do not overlap one another. The first set area is used to switch the mode of the teaching panel, the second set area is used to manipulate objects within it based on touch recognition, and the third set area is used to recognize writing tracks within it.
Next, the users are explained. Taking a teaching class as an example: the first class of users are the students who raise their hands, the second user is the teacher, and the first user is the student selected by the teacher.
Next, the coordinate system is described. As shown in fig. 2, the panel is divided into three regions, with the bottom-left corner as the origin (0,0), the length along the X axis and the width along the Y axis. This embodiment takes a teaching board 400 cm long and 100 cm wide as an example, with 1 cm as 1 unit. With the length and area of the teaching board preset in this way, the points in fig. 2 have the following coordinates: A(0,100), F(400,0), B(0,10), C(10,0), D(135,0) and E(265,0).
The flow of identification and statistical processing of gesture information comprises the following steps 101 to 110:
Step 101: a touch unit of the panel identifies a touch operation, generates touch information according to the touch operation, and sends the touch information to a processor.
Specifically, the touch information includes a touch duration and touch coordinates. For example, the touch duration threshold, i.e. the first duration, is set to 1 second.
Preferably, the touch coordinates include start point coordinates and end point coordinates. After the touch unit of the panel identifies a touch operation, generates touch information according to the touch operation and sends it to the processor, the processor judges which set area the start point coordinates belong to.
When the touch coordinates are in the first set area, the operation is treated as a panel state switch: when the panel is switched to the wake-up state, the classroom record is opened, and when the panel is switched to the closed state, the classroom record is closed.
When the touch coordinates are in the second set area, the operation is treated as a manipulation, such as clicking, moving or page-turning, of a touch object displayed on the panel; the specific action can be determined by gesture recognition.
When the touch coordinates are outside the first and second set areas, i.e. in the third set area, the operation is treated as writing on the panel.
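For illustration only, this dispatch on the start point coordinates can be sketched as below. The rectangular extents of the first and second set areas are assumptions inferred from the example points B(0,10), C(10,0), D(135,0) and E(265,0) in fig. 2; the patent does not state them exactly.

```python
# A minimal sketch of the set-area dispatch, assuming the first set area
# is a small rectangle at the origin (bounded by B(0,10) and C(10,0)) and
# the second set area spans D(135,0) to E(265,0) with an assumed height
# of 40 units. These extents are illustrative, not from the patent.

def classify_touch(x: float, y: float) -> str:
    if 0 <= x <= 10 and 0 <= y <= 10:
        return "first"   # panel state / mode switching
    if 135 <= x <= 265 and 0 <= y <= 40:
        return "second"  # manipulation of displayed touch objects
    return "third"       # everything else: handwriting recognition

print(classify_touch(1, 1))      # -> "first"  (matches the example in step 102)
print(classify_touch(140, 30))   # -> "second" (matches the drag example below)
print(classify_touch(300, 50))   # -> "third"  (writing area)
```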
Step 102: when the touch duration exceeds the first duration and the touch coordinates are in the first set area, the processor starts the panel according to the touch duration and the touch coordinates.
In a specific embodiment, when the touch information is "touch duration 2 seconds, touch coordinates (1,1)", the touch duration of 2 seconds exceeds the first duration of 1 second and the touch coordinates are within the first set area, so the processor starts the panel according to the touch duration and the touch coordinates.
When the touch information is "touch interval time 0.1 second, touch coordinates (1,1)", the touch interval time is less than the preset touch interval time threshold of 0.4 second and the touch coordinates are in the first set area, so the processor closes the panel according to the touch interval time and the touch coordinates.
In a preferred scheme, the panel is a multi-purpose panel that can be switched between two states, a blackboard and an electronic screen, and the panel mode setting switches between them. The panel modes include a first panel mode and a second panel mode. The processor obtains the current panel mode. When the touch duration exceeds the second duration and the touch coordinates are in the first set area, the processor switches the current first panel mode to the second panel mode, or the current second panel mode to the first panel mode.
In a specific embodiment, the first panel mode is a blackboard mode and the second panel mode is an electronic screen mode. When the touch information is "touch duration 5 seconds, touch coordinates (1,1)", the processor first determines that the touch coordinates are in the first set area, then determines that the touch operation is a panel mode switching instruction because the touch duration reaches the preset second duration of 3 seconds, and switches the panel from the current blackboard mode to the electronic screen mode, or from the current electronic screen mode to the blackboard mode.
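A minimal sketch of this long-press mode toggle, assuming the 1-second start threshold and 3-second switch threshold of this embodiment (the names and structure are illustrative, not part of the patent):

```python
# Sketch of the long-press mode toggle described in step 102.
# Durations are in seconds; thresholds follow the embodiment above.

FIRST_DURATION = 1.0    # wake/start threshold
SECOND_DURATION = 3.0   # mode-switch threshold

MODES = ("blackboard", "electronic_screen")

def handle_first_area_touch(duration: float, current_mode: str) -> str:
    """Return the panel mode after a touch in the first set area."""
    if duration >= SECOND_DURATION:
        # Long press: toggle between the two panel modes.
        return MODES[1] if current_mode == MODES[0] else MODES[0]
    return current_mode  # below the switch threshold: mode unchanged

assert handle_first_area_touch(5.0, "blackboard") == "electronic_screen"
assert handle_first_area_touch(2.0, "blackboard") == "blackboard"
```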
Step 103: the image acquisition unit of the panel acquires gesture information and face information of users in the image acquisition area according to the first image acquisition parameter and sends the gesture information and the face information to the processor.
In a specific embodiment, the image acquisition unit of the panel captures the image acquisition area, for example the student seating area facing the panel in a first classroom, at 100% of the acquisition area, identifies the gesture information and face information within the acquisition range, and sends the acquired gesture information and face information to the processor.
All students in the first classroom are seated in the student seating area, so the image acquisition unit acquires the face information and gesture information of all students in the first classroom.
Step 104: the processor determines the user information of the user according to the face information.
Specifically, the processor performs feature extraction on the face information acquired by the image acquisition unit and compares the extracted parameterized feature information with the parameterized feature information of the face information prestored in the database to determine the user information of the user; the user information includes a user ID.
The face information prestored in the database is the face information of all enrolled students; each item of face information has parameterized feature information and corresponds to unique user information. The user information specifically includes the student's education ID, name, student number, class information, and the like. In this example, the student name and student number serve as the user ID.
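The lookup in step 104 might be sketched as follows. The 128-dimensional vectors, the cosine-similarity measure and the 0.9 threshold are placeholder assumptions; the patent only requires that extracted parameterized features be compared with the prestored ones.

```python
import numpy as np

# Hypothetical prestored database: user ID -> parameterized feature vector.
FACE_DB = {
    "Zhang San 001": np.random.rand(128),
    "Li Si 002": np.random.rand(128),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_user(features: np.ndarray, threshold: float = 0.9):
    """Return the best-matching user ID, or None if no match clears the threshold."""
    best_id, best_sim = None, threshold
    for user_id, stored in FACE_DB.items():
        sim = cosine_similarity(features, stored)
        if sim >= best_sim:
            best_id, best_sim = user_id, sim
    return best_id
```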
Step 105: the processor determines the first class of users according to the gesture information and outputs and displays the user IDs of the first class of users on the display unit of the panel.
In a specific embodiment, the processor identifies the gesture information, for example a raised hand. The processor automatically determines the users who raised their hands from the start and end coordinates of the gesture information, identifies them as the first class of users, and outputs and displays their user IDs, such as "Zhang San 001" and "Li Si 002", on the panel.
Preferably, after the processor determines the first class of users according to the gesture information, the processor obtains the accumulated activity data of each user, updates the accumulated activity data of the first class of users according to the user information determined from the face information, and adds 1 to the accumulated activity data to obtain the updated accumulated activity data. In a specific embodiment, after determining the user information of all users who raised their hands, the processor records the number of times each user has raised a hand and updates the corresponding count according to the face information of the user who raised a hand. Specifically, each time hand-raising information is identified, the processor adds 1 to that user's hand-raising count.
Preferably, the processor acquires all classroom quality scoring data and accumulated activity data of the first user and performs weighted quantification processing to generate the classroom scoring data of the first user.
In a preferred embodiment, based on several classroom quality scores "7, 8, 9" given by the second user to the first user, the processor calculates an average classroom quality score of 8 for the first user.
If the user's accumulated activity data is 30, it can be converted according to a preset rule in the system, and the corresponding quantitative score is determined from the ranking of the accumulated activity data among all first-class users in the class. For example, if the user has raised a hand 30 times and ranks 8th, the user's activity quantitative score is 10. The processor then weights the average classroom quality score of 8 and the activity quantitative score of 10 at 50% each to generate classroom score data of 9 for the first user.
The preset rule for converting activity into a quantitative score can be set according to actual needs.
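The 50/50 weighting of this embodiment can be reproduced directly; the rank-to-score table below is an assumption standing in for the configurable preset rule.

```python
# Worked example of the 50/50 weighted classroom score from this embodiment.

quality_scores = [7, 8, 9]                                # second user's scores
quality_avg = sum(quality_scores) / len(quality_scores)   # -> 8.0

# Assumed preset rule: map the user's activity rank within the class to a
# quantitative score (the patent leaves this rule configurable).
def activity_score_from_rank(rank: int) -> int:
    return 10 if rank <= 10 else 8 if rank <= 20 else 6

activity = activity_score_from_rank(8)    # rank 8 -> 10, as in the example

classroom_score = 0.5 * quality_avg + 0.5 * activity
print(classroom_score)                    # -> 9.0
```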
Step 106: the touch unit identifies the selection touch operation of the second user and determines selection touch coordinates according to the selection touch operation.
Specifically, the second user, i.e. the teacher, can perform the selection operation based on the user IDs of the first class of users output on the display unit of the panel.
In a specific embodiment, the touch unit determines the selection touch coordinates (150,30) according to the second user's selection touch operation on the panel for the first-class user "Zhang San 001".
Step 107: the processor determines the selected first user among the first class of users according to the selection touch coordinates.
In a specific embodiment, the selected first user "Zhang San 001" is determined according to the selection touch coordinates (150,30).
Step 108: the processor performs positioning processing according to the face information of the first user to generate the positioning coordinates of the first user.
In a specific embodiment, taking "Zhang San 001" as the ID of the selected first user, the processor looks up the corresponding face information among the users' face information previously acquired in the image acquisition area under the first image acquisition parameter, and generates the positioning coordinates from the face information found.
Step 109: the image acquisition unit changes the first image acquisition parameter according to the positioning coordinates and captures video image data of the first user according to the changed second image acquisition parameter.
In a specific embodiment, the image acquisition unit captures around the positioning coordinates at a proportion of 10%, i.e. the image acquisition area is reduced to 10% of the original acquisition area, producing a video close-up of the first user.
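One possible realization of the 10% close-up is to crop a window centred on the positioning coordinates; the full-frame size and the clamping to frame bounds below are assumptions, not taken from the patent.

```python
import math

# Sketch of the close-up in step 109: shrink the capture window to 10% of
# the full-frame area, centred on the positioning coordinates. The 4000x1000
# full-frame size is an assumed placeholder.

def closeup_window(cx, cy, frame_w=4000, frame_h=1000, area_ratio=0.10):
    """Return (x0, y0, w, h) of a window covering area_ratio of the full frame."""
    scale = math.sqrt(area_ratio)               # shrink each side by sqrt(0.10)
    w, h = frame_w * scale, frame_h * scale
    x0 = min(max(cx - w / 2, 0), frame_w - w)   # clamp to frame bounds
    y0 = min(max(cy - h / 2, 0), frame_h - h)
    return x0, y0, w, h
```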
Preferably, after the image acquisition unit changes the first image acquisition parameter according to the positioning coordinates and captures video image data of the first user according to the changed second image acquisition parameter, the method further includes: the processor outputs and displays the video image data on the display unit, and when the second user finishes scoring the video image data, the processor stops outputting and displaying the video image data of the first user.
In a specific embodiment, the processor outputs and displays on the panel the video image captured by the image acquisition unit with the changed parameters, and stops the display after the second user has scored the first user.
Step 110: the touch unit receives scoring data input by the second user according to the video image data and sends the scoring data to the processor, and the processor generates the classroom quality scoring data of the first user according to the scoring data and the user ID of the first user.
In a specific embodiment, the touch unit sends the second user's score of 8 for the first user to the processor, and the processor generates the first user's classroom quality scoring data according to the user ID "Zhang San 001" and the score of 8.
Steps 101 to 110 implement the identification and statistical processing of the user's gesture information, making it convenient to score students' classroom performance objectively.
The identification method provided by the embodiments of the invention further includes recognizing manipulation of touch objects displayed on the panel and recognizing and recording the second user's handwriting. These are described separately below:
Recognition of manipulation of a touch object displayed on the panel: specifically, the processor judges whether the start point coordinates are in the second set area, which does not overlap the first set area. When the start point coordinates of the touch operation are in the second set area, the processor determines the first touch object according to the start point coordinates and migrates its display position from the start point coordinates to the end point coordinates. When the start point coordinates of the touch operation are outside the first and second set areas, the processor generates a writing track according to the touch operation and uploads the writing track to the database.
In one embodiment, the touch information received by the processor is the start point coordinates (140,30) and the end point coordinates (200,60). The processor determines that the start point coordinates are in the second set area. Since a picture in the second set area occupies abscissa 120-150 and ordinate 20-40, the processor can judge that the start point (140,30) falls within the picture's coordinate range, determine that the touch object is the picture, and move the picture to the end point coordinates (200,60).
In a preferred embodiment, the processor receives touch information with start point coordinates (140,30) and end point coordinates (200,60), a displacement of about 67. Since the displacement between the start point coordinates and the end point coordinates is greater than the preset displacement threshold of 1, the processor moves the picture; if the displacement were less than or equal to the preset displacement threshold, the picture would not be moved.
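A sketch of this drag recognition follows; the picture's bounding box and the displacement threshold of 1 are taken from the example above, while the object model itself is an assumption.

```python
import math

# Sketch of the drag recognition in this embodiment. The picture's bounding
# box (x: 120-150, y: 20-40) and the displacement threshold of 1 follow the
# example above.

PICTURE_BBOX = (120, 150, 20, 40)   # x_min, x_max, y_min, y_max
DISPLACEMENT_THRESHOLD = 1.0

def try_drag(start, end):
    x_min, x_max, y_min, y_max = PICTURE_BBOX
    sx, sy = start
    if not (x_min <= sx <= x_max and y_min <= sy <= y_max):
        return None                              # start point misses the object
    if math.dist(start, end) <= DISPLACEMENT_THRESHOLD:
        return start                             # too small a move: stay put
    return end                                   # migrate the display position

print(try_drag((140, 30), (200, 60)))   # displacement ~67 > 1 -> moves to (200, 60)
```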
Recognizing manipulation of touch objects displayed on the panel makes it convenient to demonstrate teaching content and thus provides a better teaching environment.
Recognition and recording of the writing of the second user:
In another specific embodiment, when the start point coordinates of the touch operation are outside the first and second set areas, i.e. in the third set area, the processor generates a writing track for each touch operation; the writing tracks together form the writing content, which is uploaded to the database.
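A minimal sketch of this writing-track recording; the `upload_to_database` function is a hypothetical placeholder for the patent's "upload to a database" step.

```python
# Sketch of writing-track recording in the third set area. Each stroke is
# the ordered list of (x, y) points of one touch operation; strokes
# accumulate into the writing content.

def record_stroke(touch_points):
    return list(touch_points)

writing_content = []
writing_content.append(record_stroke([(300, 50), (305, 52), (312, 55)]))

def upload_to_database(content):
    # Hypothetical stand-in for persisting the writing content.
    print(f"uploading {len(content)} stroke(s)")

upload_to_database(writing_content)
```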
Recognizing and recording the second user's handwriting helps record the teaching content so that students can review it after class, which improves teaching quality.
In addition, the identification method provided by the embodiments of the invention further includes recognizing the user's voice commands. Specifically: the voice recognition unit picks up voice information and converts it into an electric signal; the processor performs feature extraction on the electric signal to obtain a voice recognition signal and compares it with the voice model signals in the database to obtain a voice recognition instruction; the processor then executes the voice recognition instruction.
In a specific embodiment, when the voice recognition unit of the panel picks up voice information, the ambient noise and the user's voice in the voice information are weighted to remove the noise and obtain a digital signal, and the digital signal is then subjected to spectral analysis to extract a parameterized representation of the voice signal, such as a feature matrix composed of feature vectors of the voice information. The voice signal is compared with the voice model signals in the database. When the similarity between the two signals reaches a similarity threshold, for example 90%, the voice information is judged to match the voice model signal, and the processor obtains and executes the instruction corresponding to that voice model signal.
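The matching step might be sketched as follows; the feature dimensionality, the command table and the use of cosine similarity are assumptions, with only the 90% threshold taken from the example above.

```python
import numpy as np

# Sketch of voice-command matching: compare the extracted feature
# representation against stored model signals and return the command whose
# similarity reaches the 90% threshold. The models here are placeholders.

VOICE_MODELS = {
    "open_panel": np.random.rand(64),
    "switch_mode": np.random.rand(64),
}

def match_command(features: np.ndarray, threshold: float = 0.90):
    for command, model in VOICE_MODELS.items():
        sim = float(np.dot(features, model) /
                    (np.linalg.norm(features) * np.linalg.norm(model)))
        if sim >= threshold:
            return command      # the processor then executes this instruction
    return None                 # no model reached the similarity threshold
```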
Recognition of the user's voice instructions can be carried out simultaneously with touch operations and supports remote control within a certain distance, which improves the efficiency of classroom demonstration.
The identification method digitizes students' classroom state, scores students objectively on the basis of data, and recognizes gesture instructions from user operations, thereby enabling intelligent control of blackboard writing and making the classroom more engaging.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. An identification method, characterized in that the method comprises:
a touch unit of the panel identifies a touch operation, generates touch information according to the touch operation and sends the touch information to a processor; the touch information comprises a touch duration and touch coordinates;
when the touch duration exceeds a first duration and the touch coordinates are in a first set area, the processor starts the panel according to the touch duration and the touch coordinates;
the image acquisition unit of the panel acquires gesture information and face information of users in an image acquisition area according to a first image acquisition parameter and sends the gesture information and the face information to the processor;
the processor determines user information of a user according to the face information; the user information includes a user ID;
the processor determines a first class of users according to the gesture information, and outputs and displays the user IDs of the first class of users on a display unit of the panel;
the touch unit identifies a selection touch operation of a second user and determines selection touch coordinates according to the selection touch operation;
the processor determines a selected first user among the first class of users according to the selection touch coordinates;
the processor performs positioning processing according to the face information of the first user to generate positioning coordinates of the first user;
the image acquisition unit changes the first image acquisition parameter according to the positioning coordinates and acquires video image data of the first user according to the changed second image acquisition parameter;
and the touch unit receives scoring data input by the second user according to the video image data and sends the scoring data to the processor, and the processor generates classroom quality scoring data of the first user according to the scoring data and the user ID of the first user.
2. The identification method of claim 1, wherein after the processor determines the first class of users according to the gesture information, the method further comprises:
the processor acquires the accumulated activity data of each user, updates the accumulated activity data of the first class of users according to the user information determined from the face information, and adds 1 to the accumulated activity data to obtain the updated accumulated activity data.
3. The identification method according to claim 2, wherein the processor acquires all classroom quality scoring data and accumulated activity data of the first user and performs weighted quantification processing to generate classroom scoring data of the first user.
4. The identification method according to claim 1, characterized in that the method further comprises:
the voice recognition unit picks up voice information and converts the voice information into an electric signal;
the processor performs feature extraction processing according to the electric signal to obtain a voice recognition signal, and compares the voice recognition signal with a voice model signal in a database to obtain a voice recognition instruction;
the processor executes the speech recognition instructions.
5. The identification method according to claim 1, wherein the panel mode includes a first panel mode and a second panel mode; after the processor starts the panel according to the touch duration and the touch coordinates, the method further comprises:
the processor acquires the current panel mode;
when the touch duration exceeds a second duration and the touch coordinates are in the first set area, the processor switches the current first panel mode to the second panel mode, or the processor switches the current second panel mode to the first panel mode.
6. The identification method according to claim 1, wherein after the image acquisition unit changes the first image acquisition parameter according to the positioning coordinates and captures video image data of the first user according to the changed second image acquisition parameter, the method further comprises:
the processor outputs and displays the video image data on the display unit;
and when the second user finishes scoring the video image data, the processor stops outputting and displaying the video image data of the first user.
7. The identification method according to claim 1, wherein the touch coordinates include start point coordinates and end point coordinates; after the touch unit of the panel identifies a touch operation and generates touch information according to the touch operation to send to the processor, the method further comprises:
the processor judges whether the start point coordinates are in a second set area; the second set area does not overlap the first set area;
when the start point coordinates of the touch operation are in the second set area, the processor determines a first touch object according to the start point coordinates;
and the display position of the first touch object is migrated from the start point coordinates to the end point coordinates.
8. The identification method according to claim 7, wherein the displacement between the start point coordinates and the end point coordinates is greater than a preset displacement threshold.
9. The identification method according to claim 7, wherein when the start point coordinates of the touch operation are outside the first set area and the second set area, the method further comprises:
the processor generates a writing track according to the touch operation and uploads the writing track to a database.
10. The identification method according to claim 1, wherein the touch information further includes a touch interval time, and the method further comprises:
when the touch interval time is less than a preset touch interval time threshold and the touch coordinates are in the first set area, the processor closes the panel according to the touch interval time and the touch coordinates.
CN201911061797.5A 2019-11-01 2019-11-01 Identification method Active CN110837790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911061797.5A CN110837790B (en) 2019-11-01 2019-11-01 Identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911061797.5A CN110837790B (en) 2019-11-01 2019-11-01 Identification method

Publications (2)

Publication Number Publication Date
CN110837790A CN110837790A (en) 2020-02-25
CN110837790B (en) 2022-03-18

Family

ID=69575964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911061797.5A Active CN110837790B (en) 2019-11-01 2019-11-01 Identification method

Country Status (1)

Country Link
CN (1) CN110837790B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104423717A (en) * 2013-08-26 2015-03-18 鸿合科技有限公司 Gesture input method and digital white board
JP2016157010A (en) * 2015-02-25 2016-09-01 ブラザー工業株式会社 Singing evaluation device and program for singing evaluation
CN109101132A (en) * 2018-08-07 2018-12-28 锐达互动科技股份有限公司 A kind of method that conventional teaching is switched fast with electronic white board projection teaching
CN110334620A (en) * 2019-06-24 2019-10-15 北京大米科技有限公司 Appraisal procedure, device, storage medium and the electronic equipment of quality of instruction

Also Published As

Publication number Publication date
CN110837790A (en) 2020-02-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant