CN108805035A - Interactive teaching method and device based on gesture recognition - Google Patents

Interactive teaching method and device based on gesture recognition

Info

Publication number
CN108805035A
CN108805035A (application CN201810495581.9A)
Authority
CN
China
Prior art keywords
gesture
information
facial features
sample
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810495581.9A
Other languages
Chinese (zh)
Inventor
陈鹏丞
卢鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Eaglesoul Technology Co Ltd
Original Assignee
Shenzhen Eaglesoul Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Eaglesoul Technology Co Ltd
Priority to CN201810495581.9A
Priority to PCT/CN2018/092787 (WO2019223056A1)
Publication of CN108805035A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10: Complex mathematical operations
    • G06F17/18: Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/20: Education
    • G06Q50/205: Education administration or guidance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Strategic Management (AREA)
  • Human Computer Interaction (AREA)
  • Tourism & Hospitality (AREA)
  • Mathematical Physics (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Multimedia (AREA)
  • Marketing (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Operations Research (AREA)
  • Probability & Statistics with Applications (AREA)
  • General Business, Economics & Management (AREA)
  • Algebra (AREA)
  • Primary Health Care (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a gesture-recognition-based interactive teaching method, apparatus, electronic device, and storage medium. The method includes: obtaining a video signal captured by a first video capture device; extracting the facial features in the video signal and the gesture behavior in a designated area adjacent to each facial feature; analyzing the gesture feature corresponding to the gesture behavior and matching it against predefined gesture samples to obtain a matching result; matching the facial feature to a student identifier to form an association record; compiling all association records formed from the facial features and gesture behaviors in the video signal to generate interactive teaching information; and invoking the display interface of a device terminal to display the interactive teaching information. By accurately recognizing gestures in a designated area adjacent to each student's face, the present disclosure can generate interactive teaching information.

Description

Interactive teaching method and device based on gesture recognition
Technical field
The present disclosure relates to the field of computer technology, and in particular to a gesture-recognition-based interactive teaching method, apparatus, electronic device, and computer-readable storage medium.
Background technology
In teaching scenarios there is often a need to quickly collect students' interactive responses to the current teaching content. Prompt and accurate statistics and analysis of this interaction information let the instructor grasp the teaching dynamics and adjust the teaching content accordingly, which has a positive effect on teaching.
However, in some teaching scenarios, when the number of students is large or the interactive teaching information to be fed back is complex, the teacher must spend a great deal of time compiling statistics; alternatively, having students submit the interaction information through electronic devices avoids this problem but significantly increases the cost of instruction.
In the prior art, CN201611164918 discloses a gesture-based interactive teaching method and interactive system in which gestures are collected and uploaded through smart devices for information interaction, rather than recognized as gesture features from the gesture behavior captured by a video acquisition device; CN201710230183 discloses a gesture interaction system and method for virtual teaching that achieves accurate gesture acquisition by building modules such as an inertial measurement unit, auxiliary units, and a data computation and control unit, but it requires a large amount of computation and cannot rapidly acquire a large number of gesture behaviors with a single video capture device.
Accordingly, it is desirable to provide one or more technical solutions that can at least solve the above problems.
It should be noted that the information disclosed in the Background section above is provided only to aid understanding of the background of the present disclosure, and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Summary of the invention
The purpose of the present disclosure is to provide a gesture-recognition-based interactive teaching method, apparatus, electronic device, and computer-readable storage medium, thereby overcoming, at least to some extent, one or more problems caused by the limitations and defects of the related art.
According to one aspect of the present disclosure, a gesture-recognition-based interactive teaching method is provided, including:
a video signal obtaining step: obtaining a video signal captured by a first video capture device, and extracting the facial features in the video signal and the gesture behavior in a designated area adjacent to each facial feature;
a gesture feature analysis step: analyzing the gesture feature corresponding to the gesture behavior, and matching the gesture feature against predefined gesture samples to obtain a matching result;
an association forming step: matching, in a student information database, the student identifier corresponding to the facial feature, and associating the matched student identifier with the matching result to form an association record;
an interaction display step: compiling all association records formed from all facial features and gesture behaviors in the video signal, generating interactive teaching information, and invoking the display interface of a device terminal to display the interactive teaching information.
In an exemplary embodiment of the present disclosure, the facial features include a face contour, and extracting the facial features in the video signal and the gesture behavior in the designated adjacent area in the video signal obtaining step includes:
determining the face contour in the video signal, and obtaining the gesture behavior within a preset area in a preset direction from the face contour.
In an exemplary embodiment of the present disclosure, the method further includes:
receiving temporary entry information, the temporary entry information including at least one collected gesture sample;
after the video signal obtaining step, matching the gesture feature against the temporary entry information to obtain a matching result.
In an exemplary embodiment of the present disclosure, the method further includes:
after obtaining the matching result, sending an inquiry instruction for handling the temporary entry information, the inquiry instruction offering to either delete the gesture sample in the temporary entry information or save it;
receiving a processing instruction replied to the inquiry instruction, and, according to the processing instruction, deleting the gesture sample in the temporary entry information or saving it into a pre-established gesture sample database.
In an exemplary embodiment of the present disclosure, receiving the temporary entry information includes:
detecting a gesture the user inputs on a touch device;
using that gesture as the gesture sample in the temporary entry information.
In an exemplary embodiment of the present disclosure, receiving the temporary entry information includes:
detecting a gesture captured by a second video capture device, and using that gesture as the gesture sample in the temporary entry information.
In an exemplary embodiment of the present disclosure, the method includes:
receiving a sample selection instruction, and displaying the gesture samples in the pre-established gesture sample database on a display device;
receiving a sample determination instruction, the sample determination instruction including the gesture sample to be matched that the user selects from the gesture sample database;
executing the gesture feature analysis step according to the selected gesture sample to be matched.
In an exemplary embodiment of the present disclosure, the gesture feature includes the number of mutually separated upright fingers within a preset duration.
In an exemplary embodiment of the present disclosure, the interaction display step includes:
after the facial features in the video signal are extracted, generating an abnormal matching result if no gesture behavior is detected in the designated area adjacent to a facial feature, or if the gesture behavior detected there does not match any predefined gesture sample.
In an exemplary embodiment of the present disclosure, the interaction display step includes:
after the gesture features are matched against the predefined gesture samples to obtain the matching results, generating corresponding statistical information from the various matching results and using that statistical information as the interactive teaching information.
In an exemplary embodiment of the present disclosure, the interaction display step includes:
after the gesture features are matched against the predefined gesture samples to obtain the matching results, if there are multiple kinds of matching result, generating count statistics for each kind according to the matching results;
forming chart information from those counts, and using the counts and the chart information together as the interactive teaching information.
According to one aspect of the present disclosure, a gesture-recognition-based interactive teaching apparatus is provided, including:
a video signal acquisition module, configured to obtain a video signal captured by a first video capture device, and to extract the facial features in the video signal and the gesture behavior in a designated area adjacent to each facial feature;
a gesture feature analysis module, configured to analyze the gesture feature corresponding to the gesture behavior and match it against predefined gesture samples to obtain a matching result;
an association forming module, configured to match, in a student information database, the student identifier corresponding to the facial feature, and to associate the matched student identifier with the matching result to form an association record;
an interaction display module, configured to compile all association records formed from all facial features and gesture behaviors in the video signal, generate interactive teaching information, and invoke the display interface of a device terminal to display the interactive teaching information.
According to one aspect of the present disclosure, an electronic device is provided, including:
a processor; and
a memory storing computer-readable instructions which, when executed by the processor, implement the method according to any of the above.
According to one aspect of the present disclosure, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the method according to any of the above.
In the gesture-recognition-based interactive teaching method of the exemplary embodiments of the present disclosure, a video signal captured by a first video capture device is obtained; the facial features in the video signal and the gesture behavior in a designated area adjacent to each facial feature are extracted; the gesture feature corresponding to the gesture behavior is analyzed and matched against predefined gesture samples to obtain a matching result; the facial feature is associated with a student identifier; all association records formed from the facial features and gesture behaviors in the video signal are compiled to generate interactive teaching information; and the display interface of a device terminal is invoked to display the interactive teaching information. On the one hand, restricting gesture recognition to the designated area of each student's facial features improves recognition accuracy and increases the feasibility of applying gesture behaviors; on the other hand, gesture recognition based on pre-entered gesture samples improves flexibility while ensuring accuracy, and can adapt to the needs of various teaching scenarios.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Description of the drawings
The above and other features and advantages of the present disclosure will become more apparent from the detailed description of its example embodiments with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a gesture-recognition-based interactive teaching method according to an exemplary embodiment of the present disclosure;
Figs. 2A-2C show schematic diagrams of application scenarios of the gesture-recognition-based interactive teaching method according to an exemplary embodiment of the present disclosure;
Figs. 3A-3B show schematic diagrams of application scenarios of the gesture-recognition-based interactive teaching method according to an exemplary embodiment of the present disclosure;
Fig. 4 shows a schematic block diagram of a gesture-recognition-based interactive teaching apparatus according to an exemplary embodiment of the present disclosure;
Fig. 5 schematically shows a block diagram of an electronic device according to an exemplary embodiment of the present disclosure; and
Fig. 6 schematically shows a diagram of a computer-readable storage medium according to an exemplary embodiment of the present disclosure.
Detailed description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments can be implemented in a variety of forms and should not be understood as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be comprehensive and complete and will fully convey the concept of the example embodiments to those skilled in the art. Identical reference numerals in the figures denote identical or similar parts, and repeated description of them is omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, many specific details are provided to give a full understanding of the embodiments of the present disclosure. However, those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced without one or more of these specific details, or with other methods, components, materials, devices, steps, and so on. In other cases, well-known structures, methods, devices, implementations, materials, or operations are not shown or described in detail, to avoid obscuring aspects of the present disclosure.
The block diagrams shown in the drawings are merely functional entities and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, as part of one or more hardware modules or integrated circuits, or in different network and/or processor devices and/or microcontroller devices.
In this example embodiment, a gesture-recognition-based interactive teaching method is first provided, which can be applied to electronic devices such as computers. Referring to Fig. 1, the gesture-recognition-based interactive teaching method may include the following steps:
video signal obtaining step S110: obtaining a video signal captured by a first video capture device, and extracting the facial features in the video signal and the gesture behavior in a designated area adjacent to each facial feature;
gesture feature analysis step S120: analyzing the gesture feature corresponding to the gesture behavior, and matching the gesture feature against predefined gesture samples to obtain a matching result;
association forming step S130: matching, in a student information database, the student identifier corresponding to the facial feature, and associating the matched student identifier with the matching result to form an association record;
interaction display step S140: compiling all association records formed from all facial features and gesture behaviors in the video signal, generating interactive teaching information, and invoking the display interface of a device terminal to display the interactive teaching information.
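As a minimal sketch of how steps S110-S140 fit together, not the patented implementation: face detection and gesture analysis are stubbed out as plain lookups, and all names (run_interaction_round, face identifiers, sample dictionaries) are placeholders.

```python
from collections import Counter

def run_interaction_round(detections, gesture_samples, student_db):
    """Toy pipeline for steps S110-S140.

    detections: list of (face_id, gesture_feature) pairs standing in for the
    output of steps S110/S120 (face found, adjacent gesture analyzed).
    gesture_samples: {gesture_feature: meaning} - the predefined samples.
    student_db: {face_id: student_id} - the student information database.
    """
    associations = []  # step S130: (student_id, matching result) records
    for face_id, feature in detections:
        result = gesture_samples.get(feature, "abnormal")  # unmatched gesture
        student = student_db.get(face_id, "unknown")
        associations.append((student, result))
    tally = Counter(result for _, result in associations)  # step S140
    return associations, tally

# Example: three students, finger counts 1 and 2 mapped to options A and B
assoc, tally = run_interaction_round(
    [("face-1", 1), ("face-2", 2), ("face-3", 7)],
    {1: "A", 2: "B"},
    {"face-1": "stu-001", "face-2": "stu-002", "face-3": "stu-003"},
)
```

The unmatched gesture falling through to "abnormal" mirrors the abnormal matching result the method generates when a gesture matches no predefined sample.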
According to the gesture-recognition-based interactive teaching method in this example embodiment, on the one hand, restricting gesture recognition to the designated area of each student's facial features improves recognition accuracy and increases the feasibility of applying gesture behaviors; on the other hand, gesture recognition based on pre-entered gesture samples improves flexibility while ensuring accuracy, and can adapt to the needs of various teaching scenarios.
In the following, the gesture-recognition-based interactive teaching method in this example embodiment is described in further detail.
In the video signal obtaining step S110, a video signal captured by a first video capture device can be obtained, and the facial features in the video signal and the gesture behavior in a designated area adjacent to each facial feature can be extracted.
In a common teaching scenario, a student may want to convey teaching feedback, for example the answer to a question, through a gesture, which is a quick and intuitive form of expression. In practice, however, manually tallying gestures is slow, and a video capture device cannot easily locate the gesture region directly, which makes gesture recognition inaccurate. The facial features in the video signal acquired by the video capture device can therefore be used as a positioning reference, from which the gesture behavior in the designated adjacent area is located, solving the above problem. In an actual teaching scenario, a single video capture device can recognize the faces of all students in the class and the gesture behaviors in the corresponding regions, saving hardware cost.
In this example embodiment, the facial features include a face contour, and extracting the facial features in the video signal and the gesture behavior in the designated adjacent area in the video signal obtaining step includes: determining the face contour in the video signal, and obtaining the gesture behavior within a preset area in a preset direction from the face contour. After the facial features in the video signal are recognized, a predetermined region in a preset direction from each facial feature is searched as the gesture behavior region, and the corresponding gesture behavior is looked for in that region. Fig. 2A shows, for a teaching scenario, the region of a designated size (the same size as the facial feature) on the left-ear side of the face corresponding to one facial feature in the video signal.
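A sketch of the region lookup, under the assumptions that the face contour is reduced to an axis-aligned bounding box and that the preset direction and area are "one face-width to the left, same size as the face", matching the Fig. 2A example; the exact direction and size remain design choices.

```python
def gesture_roi(face_box, frame_w, frame_h):
    """Return a same-sized search window one face-width to the left of a
    detected face bounding box (x, y, w, h), clamped to the frame bounds."""
    x, y, w, h = face_box
    rx = max(0, x - w)            # shift left by one face width
    ry = max(0, y)
    rw = min(w, frame_w - rx)     # clamp width to the frame
    rh = min(h, frame_h - ry)     # clamp height to the frame
    return rx, ry, rw, rh
```

Gesture detection then runs only inside this window for each face, which is what lets a single camera cover many students without confusing whose hand is whose.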
In this example embodiment, the method further includes: receiving temporary entry information, the temporary entry information including at least one collected gesture sample; and, after the video signal obtaining step, matching the gesture feature against the temporary entry information to obtain a matching result. Whenever gesture recognition is needed in a lesson, temporary entry information can be entered in advance as gesture samples. Fig. 2B shows, for a teaching scenario, the pre-entry of a gesture behavior and the interactive teaching information that gesture represents.
In this example embodiment, the method further includes: after obtaining the matching result, sending an inquiry instruction for handling the temporary entry information, the inquiry instruction offering to either delete the gesture sample in the temporary entry information or save it; and receiving a processing instruction replied to the inquiry instruction, and, according to the processing instruction, deleting the gesture sample or saving it into the pre-established gesture sample database. Prompting for deletion or preservation of the currently entered gesture, together with the saving and recall of historically entered gestures in the gesture sample database, makes it easier for users to recall and delete gesture behaviors and makes pre-entry more convenient and user-friendly.
In this example embodiment, receiving the temporary entry information includes: detecting a gesture the student inputs on a touch device, and using that gesture as the gesture sample in the temporary entry information. The pre-entered gesture can thus be provided by manually drawing it on a touch device, a way of determining gesture samples that is more controllable and precise.
In this example embodiment, receiving the temporary entry information includes: detecting a gesture captured by a second video capture device, and using that gesture as the gesture sample in the temporary entry information. The pre-entered gesture can thus also be acquired through a second video capture device, a way of determining gesture samples that is more intelligent and quick, and that makes feature comparison with the student gestures captured by the first video capture device more convenient.
In this example embodiment, the method includes: receiving a sample selection instruction, and displaying the gesture samples in the pre-established gesture sample database on a display device; receiving a sample determination instruction, the sample determination instruction including the gesture sample to be matched that the student selects from the gesture sample database; and executing the gesture feature analysis step according to the selected gesture sample to be matched. After all gesture samples are determined, the selected gesture sample can serve as the matching standard for the next operation.
In the gesture feature analysis step S120, the gesture feature corresponding to the gesture behavior can be analyzed and matched against the predefined gesture samples to obtain a matching result.
In this example embodiment, the gesture feature corresponding to the gesture behavior is matched against the predefined gesture samples to determine the interactive teaching information conveyed by the gesture behavior of the corresponding student.
In this example embodiment, the gesture feature includes the number of mutually separated upright fingers within a preset duration. Fig. 2C shows the gesture feature corresponding to a student's gesture behavior in a teaching scenario: two mutually separated upright fingers.
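One way to compute this gesture feature, assuming a hand-landmark detector already supplies per-finger tip and knuckle positions in image coordinates (y grows downward); the pixel thresholds are illustrative, not values from the text.

```python
def count_upright_fingers(fingers, min_rise=20, min_gap=20):
    """Count mutually separated upright fingers.

    fingers: iterable of (tip_x, tip_y, knuckle_x, knuckle_y) per finger.
    A finger counts as upright when its tip is at least min_rise pixels
    above its knuckle; upright fingers closer than min_gap pixels apart
    horizontally are merged and counted once ("mutually separated").
    """
    upright_xs = sorted(tx for tx, ty, kx, ky in fingers if ky - ty >= min_rise)
    count, last_x = 0, None
    for x in upright_xs:
        if last_x is None or x - last_x >= min_gap:
            count += 1
            last_x = x
    return count
```

Over the preset duration, this count would be taken per frame and, for example, the most frequent value kept, so a hand in motion does not produce a spurious reading.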
In the association forming step S130, the student identifier corresponding to the facial feature can be matched in a student information database, and the matched student identifier can be associated with the matching result to form an association record.
In this example embodiment, the recognition of a student's facial features serves not only to search for and locate the gesture behavior region, but also to match the student's identity. Using facial-feature-based face recognition, the student identifier corresponding to the facial feature is matched in the preset student information database to identify the student, and the association record between the matched student identifier and the matching result is then established.
In this example embodiment, the facial feature recognition includes: analyzing the facial feature points of each face in the video signal; generating facial features from the facial feature points of each face; and searching a preset facial features and student information library for the student information corresponding to the facial features. Fig. 2A shows a schematic diagram of a user's facial feature points; facial features are generated from these feature points, and the corresponding student information is then looked up in the preset facial features and student information library.
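A sketch of the identity lookup, under the assumption that the facial features generated from the feature points are summarized as a numeric feature vector and the student library stores one reference vector per student; the distance metric and the 0.6 threshold are assumptions for illustration, since the text only requires that facial features be matched to a student identifier.

```python
import math

def match_student(face_vec, student_library, max_dist=0.6):
    """Nearest-neighbour search of a face feature vector in a preset
    {student_id: reference_vector} library; returns None when even the
    closest reference is farther than max_dist (no matching student)."""
    best_id, best_dist = None, float("inf")
    for student_id, ref in student_library.items():
        d = math.dist(face_vec, ref)  # Euclidean distance
        if d < best_dist:
            best_id, best_dist = student_id, d
    return best_id if best_dist <= max_dist else None
```

Returning None rather than the nearest identifier keeps an unenrolled face from being silently credited to the wrong student.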
In the interaction display step S140, all association records formed from all facial features and gesture behaviors in the video signal can be compiled, interactive teaching information can be generated, and the display interface of a device terminal can be invoked to display the interactive teaching information.
In this example embodiment, uniting the interactive teaching information corresponding to a student's gesture behavior with that student's identity completes the statistics of that student's gesture. The statistics of the interactive teaching information for all students are gathered through the video capture device by automatically invoking preset methods, without manual operation, so the information can be recognized and tallied quickly. For example, in a teaching scenario where the teacher wants to collect all students' answers to a multiple-choice question, each student simply represents the chosen option with the corresponding gesture: one upright finger represents option "A", two mutually separated upright fingers represent "B", three represent "C", and four represent "D". The video capture device captures and analyzes the gesture behavior at the left-ear side of every student's face, that is, the number of mutually separated upright fingers, and the answers to the question can thereby be tallied. Fig. 3A shows the statistics of the answers to the question in this teaching scenario.
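The tally in this worked example can be sketched as follows; the 1-4 to A-D mapping comes from the paragraph above, and any other reading (no gesture, or an unrecognized count) is filed as an abnormal result.

```python
from collections import Counter

OPTIONS = {1: "A", 2: "B", 3: "C", 4: "D"}  # upright-finger count -> option

def tally_answers(finger_counts):
    """finger_counts: {student_id: detected upright-finger count, or None
    when no gesture was found next to that student's face}."""
    return Counter(OPTIONS.get(n, "abnormal") for n in finger_counts.values())
```

Because the tally is keyed by option rather than student, the per-student association records stay available separately for the teacher to drill into.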
In this example embodiment, the interaction information display step includes: after the facial features in the video signal are extracted, if no gesture behavior is detected in the specified area adjacent to a facial feature, or a gesture behavior is detected there but does not match any pre-defined gesture sample, generating an abnormal matching result. For a student who did not answer or whose gesture behavior is abnormal, a corresponding matching field flag is generated and counted.
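The abnormal-result logic described above can be sketched as follows. The equality test stands in for a real gesture-similarity comparison, and the sample dictionary is hypothetical.

```python
def match_gesture(gesture, samples):
    """Return the name of the matching pre-defined gesture sample, or
    the string 'abnormal' when either no gesture was detected in the
    area adjacent to the face, or the detected gesture matches no
    pre-defined sample (the two abnormal cases described above)."""
    if gesture is None:          # no gesture detected near the face
        return "abnormal"
    for name, sample in samples.items():
        if gesture == sample:    # placeholder for a similarity test
            return name
    return "abnormal"            # detected, but matches no sample
```

Both abnormal cases collapse to the same flagged result, which is then counted alongside the normal matches.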
In this example embodiment, the interaction information display step includes: matching the gesture feature against the pre-defined gesture samples to obtain a matching result, then generating corresponding statistical information according to the various matching results, and using that statistical information as the interactive teaching and learning information. Depending on the teaching scene, the same gesture feature may correspond to one or more items of interactive teaching and learning information. For example, the gesture features represented by the number of separated upright fingers within a preset time period may correspond to interactive teaching and learning information "A" and "B" respectively, or may instead correspond to "correct" or "wrong".
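The scene-dependent interpretation described above can be sketched with a per-scene mapping table; the scene names and mappings here are hypothetical examples, not a fixed part of the method.

```python
# Hypothetical per-scene interpretations of the same gesture feature
# (the number of separated upright fingers), as described above.
SCENE_MAPPINGS = {
    "multiple_choice": {1: "A", 2: "B", 3: "C", 4: "D"},
    "true_false": {1: "correct", 2: "wrong"},
}

def interpret(scene, finger_count):
    """Resolve a finger count into interactive teaching and learning
    information for the given teaching scene; the same count can mean
    different things in different scenes."""
    return SCENE_MAPPINGS[scene].get(finger_count)
```

Two upright fingers thus read as option "B" in a multiple-choice scene but as "wrong" in a true/false scene.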
In this example embodiment, the interaction information display step includes: matching the gesture feature against the pre-defined gesture samples to obtain matching results; if there are multiple kinds of matching results, generating statistical information with the corresponding counts according to the various matching results; forming chart information according to that statistical information; and using both the counted statistical information and the chart information as the interactive teaching and learning information. Fig. 3B is a schematic diagram of the chart-style statistical information generated for the answers to the multiple-choice question in this teaching scene; the multiple items of user information corresponding to each gesture feature can further be selected for viewing.
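The chart information derived from the counted statistics can be sketched as a simple text rendering; a real embodiment would presumably draw a graphical chart, so this textual form is an illustrative assumption.

```python
def chart_lines(stats):
    """Render answer statistics as horizontal text bars, one line per
    option, e.g. 'A | ### 3', sorted by option label."""
    return ["{} | {} {}".format(opt, "#" * n, n)
            for opt, n in sorted(stats.items())]
```

Feeding the per-option counts into the renderer produces one bar per matching result, mirroring the chart of Fig. 3B.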
It should be noted that although the steps of the method in this disclosure are depicted in a particular order in the accompanying drawings, this does not require or imply that the steps must be executed in that particular order, or that all of the steps shown must be executed to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be merged into one step for execution, and/or one step may be decomposed into multiple steps for execution, and so on.
In addition, this exemplary embodiment further provides an interactive teaching and learning device based on gesture identification. Referring to Fig. 4, the interactive teaching and learning device 400 based on gesture identification may include: a video signal acquisition module 410, a gesture feature analysis module 420, an association information forming module 430 and an interaction information display module 440. Specifically:
The video signal acquisition module 410 is configured to obtain the video signal collected by a first video capture device, and to extract the facial features in the video signal and the gesture behavior in the area adjacent to the facial features;
The gesture feature analysis module 420 is configured to analyze the gesture feature corresponding to the gesture behavior, and to match the gesture feature against pre-defined gesture samples to obtain a matching result;
The association information forming module 430 is configured to match, in a student information database according to the facial feature, the student identifier corresponding to the facial feature, and to associate the matched student identifier with the matching result to form association information;
The interaction information display module 440 is configured to count all of the association information formed from all facial features and gesture behaviors in the video signal, to generate the interactive teaching and learning information, and to call the display interface of the device terminal to show the interactive teaching and learning information.
The details of each module of the above interactive teaching and learning device based on gesture identification have already been described in detail in the corresponding method embodiments, and are therefore not repeated here.
It should be noted that although several modules or units of the interactive teaching and learning device 400 based on gesture identification are mentioned in the detailed description above, this division is not mandatory. In fact, according to embodiments of the present disclosure, the features and functions of two or more of the modules or units described above may be embodied in a single module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.
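The data flow through the four modules described above can be sketched as a simple pipeline; the function below only illustrates how modules 410 to 440 hand results to one another, with stand-in callables for the real acquisition, analysis, association and display logic.

```python
def run_pipeline(frame, extract, analyze, associate, display):
    """Chain the four modules described above: extract facial features
    and adjacent gestures from a frame, match each gesture, associate
    each match with a student identity, then produce the displayed
    interactive teaching and learning information."""
    faces_and_gestures = extract(frame)               # module 410
    results = [(face, analyze(gesture))               # module 420
               for face, gesture in faces_and_gestures]
    associations = [associate(face, match)            # module 430
                    for face, match in results]
    return display(associations)                      # module 440
```

Because each stage is passed in as a callable, any of the four modules can be replaced or merged without changing the pipeline, which mirrors the point above that the division into modules is not mandatory.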
In addition, an exemplary embodiment of the present disclosure further provides an electronic device capable of implementing the above method.
Those of ordinary skill in the art will understand that various aspects of the present invention may be implemented as a system, a method or a program product. Therefore, various aspects of the present invention may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may collectively be referred to herein as a "circuit", a "module" or a "system".
The electronic device 500 according to this embodiment of the present invention is described below with reference to Fig. 5. The electronic device 500 shown in Fig. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in Fig. 5, the electronic device 500 takes the form of a general-purpose computing device. The components of the electronic device 500 may include, but are not limited to: the above-mentioned at least one processing unit 510, the above-mentioned at least one storage unit 520, a bus 530 connecting different system components (including the storage unit 520 and the processing unit 510), and a display unit 540.
The storage unit stores program code, and the program code can be executed by the processing unit 510, so that the processing unit 510 performs the steps of the various exemplary embodiments of the present invention described in the "Exemplary methods" section of this specification. For example, the processing unit 510 may perform steps S110 to S140 as shown in Fig. 1.
The storage unit 520 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 5201 and/or a cache storage unit 5202, and may further include a read-only storage unit (ROM) 5203.
The storage unit 520 may also include a program/utility 5204 having a set of (at least one) program modules 5205. Such program modules 5205 include, but are not limited to: an operating system, one or more application programs, other program modules and program data; each or some combination of these examples may include an implementation of a network environment.
The bus 530 may represent one or more of several classes of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus structures.
The electronic device 500 may also communicate with one or more external devices 570 (such as a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 500, and/or with any device (such as a router, a modem, etc.) that enables the electronic device 500 to communicate with one or more other computing devices. Such communication can be carried out through an input/output (I/O) interface 550. Moreover, the electronic device 500 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 560. As shown, the network adapter 560 communicates with the other modules of the electronic device 500 through the bus 530. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 500, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives and data backup storage systems, etc.
Through the description of the above embodiments, those skilled in the art will readily understand that the example embodiments described here may be implemented by software, or by software combined with the necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash disk, a removable hard disk, etc.) or on a network, and which includes several instructions to make a computing device (which may be a personal computer, a server, a terminal apparatus or a network device, etc.) execute the method according to the embodiments of the present disclosure.
An exemplary embodiment of the present disclosure further provides a computer-readable storage medium on which a program product capable of implementing the above method of this specification is stored. In some possible implementations, various aspects of the present invention may also be implemented in the form of a program product that includes program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps of the various exemplary embodiments of the present invention described in the "Exemplary methods" section of this specification.
Referring to Fig. 6, a program product 600 for implementing the above method according to an embodiment of the present invention is described. It may adopt a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto. In this document, a readable storage medium may be any tangible medium that contains or stores a program, where the program can be used by, or in combination with, an instruction execution system, apparatus or device.
The program product may adopt any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium, and that readable medium can send, propagate or transmit a program to be used by, or in combination with, an instruction execution system, apparatus or device.
The program code contained on the readable medium may be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
Program code for carrying out the operations of the present invention may be written in any combination of one or more programming languages. These include object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. In situations involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
In addition, the above-mentioned drawings are only schematic illustrations of the processing included in the method according to the exemplary embodiments of the present invention, and are not intended to be limiting. It is easy to understand that the processing shown in the drawings neither indicates nor limits the temporal order of these processes. It is also easy to understand that these processes may be executed synchronously or asynchronously, for example, in multiple modules.
Those skilled in the art will readily think of other embodiments of the present disclosure after considering the specification and practicing the invention disclosed here. This application is intended to cover any variations, uses or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or conventional technical means in the art not disclosed by the present disclosure. The specification and examples are to be regarded as exemplary only, and the true scope and spirit of the present disclosure are pointed out by the claims.
It should be understood that the present disclosure is not limited to the precise structures that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. An interactive teaching and learning method based on gesture identification, wherein the method comprises:
a video signal obtaining step: obtaining the video signal collected by a first video capture device, and extracting the facial features in the video signal and the gesture behavior in the area adjacent to the facial features;
a gesture feature analysis step: analyzing the gesture feature corresponding to the gesture behavior, and matching the gesture feature against pre-defined gesture samples to obtain a matching result;
an association information forming step: matching, in a student information database according to the facial feature, the student identifier corresponding to the facial feature, and associating the matched student identifier with the matching result to form association information;
an interaction information display step: counting all of the association information formed from all facial features and gesture behaviors in the video signal, generating interactive teaching and learning information, and calling the display interface of a device terminal to show the interactive teaching and learning information.
2. The method according to claim 1, wherein the facial feature comprises a face contour, and
extracting, in the video signal obtaining step, the facial features in the video signal and the gesture behavior in the area adjacent to the facial features comprises:
determining the face contour in the video signal, and obtaining the gesture behavior in a preset area in a preset direction from the face contour.
3. The method according to claim 1, wherein the method further comprises:
receiving temporary entry information, the temporary entry information comprising at least one collected gesture sample;
after the video signal obtaining step, matching the gesture feature against the temporary entry information to obtain a matching result.
4. The method according to claim 3, wherein the method further comprises:
after the matching result is obtained, sending an inquiry instruction for handling the temporary entry information, the inquiry instruction comprising deleting the gesture sample in the temporary entry information, or saving the gesture sample in the temporary entry information;
receiving a processing instruction replied to the inquiry instruction, and, according to the processing instruction, deleting the gesture sample in the temporary entry information, or saving the gesture sample in the temporary entry information to a pre-established gesture sample database.
5. The method according to claim 3, wherein receiving the temporary entry information comprises:
detecting the gesture behavior entered by a user on a touch device;
using that gesture behavior as the gesture sample in the temporary entry information.
6. The method according to claim 3, wherein receiving the temporary entry information comprises:
detecting the gesture behavior collected by a second video capture device, and using the gesture behavior collected by the second video capture device as the gesture sample in the temporary entry information.
7. The method according to claim 1, comprising:
receiving a sample selection instruction, and showing the gesture samples in the pre-established gesture sample database on a display device;
receiving a sample determination instruction, the sample determination instruction comprising the gesture sample to be matched that the user has chosen in the gesture sample database;
executing the gesture feature analysis step according to the chosen gesture sample to be matched.
8. The method according to claim 1, wherein the gesture feature comprises the number of separated upright fingers within a preset duration.
9. The method according to claim 1, wherein the interaction information display step comprises:
after the facial features in the video signal are extracted, if no gesture behavior is detected in the specified area adjacent to a facial feature, or a gesture behavior detected in the specified area adjacent to the facial feature does not match any pre-defined gesture sample, generating an abnormal matching result.
10. The method according to claim 1 or 9, wherein the interaction information display step comprises:
matching the gesture feature against the pre-defined gesture samples and, after the matching result is obtained, generating corresponding statistical information according to the various matching results, and using the statistical information as the interactive teaching and learning information.
11. The method according to claim 1 or 9, wherein the interaction information display step comprises:
matching the gesture feature against the pre-defined gesture samples and, after the matching results are obtained, if there are multiple kinds of matching results, generating statistical information with the corresponding counts according to the various matching results;
forming chart information according to the statistical information with the corresponding counts, and using the statistical information with the corresponding counts and the chart information as the interactive teaching and learning information.
12. An interactive teaching and learning device based on gesture identification, wherein the device comprises:
a video signal acquisition module, configured to obtain the video signal collected by a first video capture device, and to extract the facial features in the video signal and the gesture behavior in the area adjacent to the facial features;
a gesture feature analysis module, configured to analyze the gesture feature corresponding to the gesture behavior, and to match the gesture feature against pre-defined gesture samples to obtain a matching result;
an association information forming module, configured to match, in a student information database according to the facial feature, the student identifier corresponding to the facial feature, and to associate the matched student identifier with the matching result to form association information;
an interaction information display module, configured to count all of the association information formed from all facial features and gesture behaviors in the video signal, to generate the interactive teaching and learning information, and to call the display interface of a device terminal to show the interactive teaching and learning information.
13. An electronic device, comprising:
a processor; and
a memory on which computer-readable instructions are stored, the computer-readable instructions, when executed by the processor, implementing the method according to any one of claims 1 to 11.
14. A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the method according to any one of claims 1 to 11.
CN201810495581.9A 2018-05-22 2018-05-22 Interactive teaching and learning method based on gesture identification and device Pending CN108805035A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810495581.9A CN108805035A (en) 2018-05-22 2018-05-22 Interactive teaching and learning method based on gesture identification and device
PCT/CN2018/092787 WO2019223056A1 (en) 2018-05-22 2018-06-26 Gesture recognition-based teaching and learning method and apparatus


Publications (1)

Publication Number Publication Date
CN108805035A true CN108805035A (en) 2018-11-13

Family

ID=64091397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810495581.9A Pending CN108805035A (en) 2018-05-22 2018-05-22 Interactive teaching and learning method based on gesture identification and device

Country Status (2)

Country Link
CN (1) CN108805035A (en)
WO (1) WO2019223056A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110660275A (en) * 2019-09-18 2020-01-07 武汉天喻教育科技有限公司 Teacher-student classroom instant interaction system and method based on video analysis
CN111681474A (en) * 2020-06-17 2020-09-18 中国银行股份有限公司 Online live broadcast teaching method and device, computer equipment and readable storage medium
CN113485619A (en) * 2021-07-13 2021-10-08 腾讯科技(深圳)有限公司 Information collection table processing method and device, electronic equipment and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111309153B (en) * 2020-03-25 2024-04-09 北京百度网讯科技有限公司 Man-machine interaction control method and device, electronic equipment and storage medium
CN112668476B (en) * 2020-12-28 2024-04-16 华中师范大学 Data processing method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699225A (en) * 2013-12-17 2014-04-02 深圳市威富多媒体有限公司 Method for interacting with mobile terminal through hand shape and device for implementing same
CN104656890A (en) * 2014-12-10 2015-05-27 杭州凌手科技有限公司 Virtual realistic intelligent projection gesture interaction all-in-one machine
CN106250822A (en) * 2016-07-21 2016-12-21 苏州科大讯飞教育科技有限公司 Student's focus based on recognition of face monitoring system and method
CN106774894A (en) * 2016-12-16 2017-05-31 重庆大学 Interactive teaching methods and interactive system based on gesture
CN107491755A (en) * 2017-08-16 2017-12-19 京东方科技集团股份有限公司 Method and device for gesture identification

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012139241A1 (en) * 2011-04-11 2012-10-18 Intel Corporation Hand gesture recognition system
CN103488299B (en) * 2013-10-15 2016-11-23 大连市恒芯科技有限公司 A kind of intelligent terminal man-machine interaction method merging face and gesture
US9465444B1 (en) * 2014-06-30 2016-10-11 Amazon Technologies, Inc. Object recognition for gesture tracking
CN104407694B (en) * 2014-10-29 2018-02-23 山东大学 The man-machine interaction method and device of a kind of combination face and gesture control
CN104484645B (en) * 2014-11-14 2017-06-16 华中科技大学 A kind of " 1 " gesture identification method and system towards man-machine interaction
CN105159444B (en) * 2015-08-07 2018-05-25 珠海格力电器股份有限公司 Method and device for determining capture object for gesture recognition
CN106648079A (en) * 2016-12-05 2017-05-10 华南理工大学 Human face identification and gesture interaction-based television entertainment system
CN107679860A (en) * 2017-08-09 2018-02-09 百度在线网络技术(北京)有限公司 A kind of method, apparatus of user authentication, equipment and computer-readable storage medium



Also Published As

Publication number Publication date
WO2019223056A1 (en) 2019-11-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181113