CN111258427A - Blackboard control method and control system based on binocular camera gesture interaction

Info

Publication number
CN111258427A
Authority
CN
China
Prior art keywords
gesture
binocular
control
blackboard
RGB image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010052958.0A
Other languages
Chinese (zh)
Inventor
杜国铭
张毅
李文越
冯大志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Tuobo Technology Co ltd
Original Assignee
Harbin Tuobo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Tuobo Technology Co ltd
Priority to CN202010052958.0A
Publication of CN111258427A
Current legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a blackboard control method and control system based on binocular camera gesture interaction. A motor-driven blackboard slideway is mounted on the wall surface, and a binocular RGB image capture module, which is a binocular camera, is mounted along the top edge of the blackboard, above the topmost point of the slideway. The control system comprises the binocular RGB image capture module, an image analysis module, and a motor drive module. Gesture control enables efficient blackboard movement, and writing and moving can take place at the same time. In particular, even a very large blackboard needs no manual pushing: a gesture alone operates the motor. Binocular RGB vision also has a large cost advantage over infrared depth vision.

Description

Blackboard control method and control system based on binocular camera gesture interaction
Technical Field
The invention belongs to the technical field of gesture control, and particularly relates to a blackboard control method and system based on binocular camera gesture interaction.
Background
Among contemporary teaching appliances, the blackboard has always been one of the irreplaceable teaching tools. It has evolved from a simple board hung on a wall to a push-pull blackboard running on a slideway, so that users can write at different heights in a comfortable posture; slideways have also enabled the push-pull composite blackboards used in university classrooms. Over the years, however, the problems of such blackboards have become apparent. For example, because of the blackboard's weight, considerable force is needed to move it along the slideway. Motor-driven blackboards operated by remote controller appeared later, but the writer must stop writing entirely in order to reposition the board, and the remote controller is inconvenient because it must be linked with the motor and is often far from the writing area.
An alternative is to control the blackboard by voice recognition or image recognition. Voice recognition, however, is poorly suited to this setting: the typical user is a teacher who must speak continuously while lecturing, which makes it difficult to isolate a specific voice command.
Disclosure of Invention
The invention aims to solve the above technical problems in the prior art and provides a blackboard control method and system based on binocular camera gesture interaction. The binocular camera recognizes the operator's gesture and locates it in space; a command is derived from a specific gesture and its spatial displacement and used to drive the motor. The user can write with one hand while the other hand controls the position of the blackboard through gestures, so the blackboard can be moved from anywhere in front of it without interrupting the pace of the lecture.
The invention is realized by the following technical scheme. The invention provides a blackboard control method based on binocular camera gesture interaction, in which a motor-driven blackboard slideway is mounted on the wall surface and a binocular RGB image capture module, which is a binocular camera, is mounted along the top edge of the blackboard, above the topmost point of the slideway. The method comprises the following steps:
Step 1: call the binocular RGB image capture module to capture binocular RGB image data of the user, and transmit each frame of binocular RGB image data to the image analysis module;
Step 2: call the image analysis module to process the binocular RGB image data and recognize the gesture type in the image. There are two gesture types: a start-control gesture and an end-control gesture. If the recognized gesture is the start-control gesture, record its spatial coordinates; for any other gesture, repeat this step;
Step 3: repeat step 2; if the gesture continues to be the start-control gesture, extract the displacement from the first start-control frame to the current frame, interpret the command this displacement represents, and convert it into an electrical signal sent to the motor drive module;
Step 4: repeat step 3 until the end-control gesture is recognized, which marks the end of one control cycle, and return to step 2.
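For concreteness, the per-frame logic of steps 1 to 4 can be viewed as a small state machine. The following Python sketch is illustrative only: the names (BlackboardController, MotorDriver, on_frame) are invented here, the hand position is reduced to a single coordinate along the slideway, and the motor interface is a stub, since the disclosure does not prescribe an implementation.

```python
from enum import Enum, auto

class Gesture(Enum):
    START_CONTROL = auto()
    END_CONTROL = auto()
    OTHER = auto()

class MotorDriver:
    """Stub for the motor drive module (hypothetical interface)."""
    def move_to_offset(self, dx_cm):
        print(f"motor: target offset {dx_cm:+.1f} cm from cycle start")
    def stop(self):
        print("motor: stop")

class BlackboardController:
    """Steps 1-4 as a per-frame state machine."""
    def __init__(self, motor):
        self.motor = motor
        self.anchor = None  # hand coordinate in the first start-control frame

    def on_frame(self, gesture, hand_cm):
        if gesture is Gesture.START_CONTROL:
            if self.anchor is None:
                self.anchor = hand_cm  # step 2: record the start coordinate
            else:
                # step 3: displacement since the first start-control frame
                self.motor.move_to_offset(hand_cm - self.anchor)
        elif gesture is Gesture.END_CONTROL and self.anchor is not None:
            self.motor.stop()          # step 4: one control cycle ends
            self.anchor = None         # back to step 2
        # any other gesture: keep waiting in step 2

# one simulated control cycle
ctrl = BlackboardController(MotorDriver())
for g, x in [(Gesture.OTHER, 0.0), (Gesture.START_CONTROL, 10.0),
             (Gesture.START_CONTROL, 25.0), (Gesture.END_CONTROL, 25.0)]:
    ctrl.on_frame(g, x)
```

Each frame either records the start coordinate, streams the accumulated offset to the motor, or ends the cycle, mirroring steps 2 to 4 directly.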
Further, the image analysis module operates according to the following steps:
Step 1.1: receive the two binocular RGB images transmitted by the binocular RGB image capture module;
Step 1.2: detect the human hand in each of the two binocular RGB images with a neural network, locating the hand's bounding-box coordinates and key-point coordinates;
Step 1.3: recognize the gesture type in the image with a neural network;
Step 1.4: if the start-control gesture is recognized, record the key-point coordinates and repeat step 1.3; if the gesture continues to be the start-control gesture and the key points undergo a spatial displacement, convert that displacement into a control command that drives the motor drive module;
Step 1.5: if the end-control gesture is recognized, send a stop command to the motor drive module.
Further, hand detection comprises the following steps:
Step 2.1: analyze the image with a convolutional-neural-network detection model to find and extract the hand information;
Step 2.2: once the hand position is found, start real-time hand tracking over the temporal information of successive frames, and determine the target displacement from the tracking result;
Step 2.3: in parallel with step 2.2, run the detection model in a separate thread at below frame rate, and use its result on the corresponding frame to correct the reference position of the tracker.
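A minimal sketch of this detect-then-track split is shown below. The names (HybridHandTracker, slow_detect) and the synthetic frame format are assumptions; a real system would use an actual CNN detector and tracker, and would guard the shared box with a lock.

```python
import queue
import threading
import time

def slow_detect(frame):
    """Stand-in for the CNN hand detector of step 2.1 (returns a box)."""
    time.sleep(0.05)                        # detection is slower than frame rate
    return frame["true_box"]                # pretend the detector is accurate

class HybridHandTracker:
    """Fast frame-to-frame tracking (step 2.2) plus a background thread that
    re-runs the slow detector and re-anchors the tracker (step 2.3)."""
    def __init__(self):
        self.box = None                     # (x, y, w, h), shared across threads
        self.pending = queue.Queue(maxsize=1)
        threading.Thread(target=self._detector_loop, daemon=True).start()

    def _detector_loop(self):
        while True:
            frame = self.pending.get()
            self.box = slow_detect(frame)   # non-real-time correction (step 2.3)

    def track(self, frame):
        if self.box is None:
            self.box = slow_detect(frame)   # initial detection (step 2.1)
        else:
            # cheap inter-frame update, placeholder for a real tracker (step 2.2)
            self.box = tuple(c + d for c, d in zip(self.box, frame["motion"]))
        if self.pending.empty():
            self.pending.put_nowait(frame)  # offer this frame to the detector
        return self.box

# usage with synthetic frames
tracker = HybridHandTracker()
for i in range(3):
    frame = {"true_box": (100 + 5 * i, 200, 40, 40), "motion": (5, 0, 0, 0)}
    print(tracker.track(frame))
```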
Further, gesture type recognition comprises the following steps:
Step 3.1: receive the binocular RGB image regions cropped from the detected bounding boxes;
Step 3.2: feed the cropped binocular RGB images into a neural-network model that regresses the hand key-point coordinates and simultaneously outputs the gesture type;
Step 3.3: correct the key-point coordinates with a Kalman filter based on the data of the previous frame;
Step 3.4: determine the distance between each key point and the binocular camera from the key point's position within each of the two binocular RGB images, thereby determining the distance between the hand and the camera.
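Step 3.4 is standard stereo triangulation: in a rectified binocular pair with focal length f (in pixels) and baseline B, a point with horizontal disparity d = x_left - x_right lies at depth Z = f * B / d. A minimal sketch, assuming rectified images and example calibration values, since the specification gives no calibration details:

```python
def keypoint_depth(x_left_px, x_right_px, focal_px, baseline_m):
    """Depth of one hand keypoint from its horizontal disparity in a
    rectified stereo pair: Z = f * B / d."""
    disparity = x_left_px - x_right_px        # d, in pixels
    if disparity <= 0:
        raise ValueError("expected positive disparity for a point in front of the rig")
    return focal_px * baseline_m / disparity  # Z, in metres

# e.g. f = 800 px, baseline = 12 cm, disparity = 40 px  ->  Z = 2.4 m
print(keypoint_depth(640.0, 600.0, focal_px=800.0, baseline_m=0.12))
```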
Further, the motor drive module receives the control command from the image analysis module and displaces the blackboard; the displacement distance and direction equal the displacement and direction of the hand between the start-control gesture and the end-control gesture.
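How the hand displacement becomes a drive signal is not specified further. One plausible sketch, assuming a stepper-driven slideway with a fixed steps-per-centimetre ratio (the ratio below is an invented example value):

```python
def displacement_to_motor_command(dx_cm, steps_per_cm=50):
    """Map the hand's displacement along the slideway to a direction flag and
    a step count; the 50 steps/cm drive ratio is an assumed example value."""
    direction = "reverse" if dx_cm < 0 else "forward"
    return direction, round(abs(dx_cm) * steps_per_cm)

print(displacement_to_motor_command(-15.0))  # ('reverse', 750)
```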
The invention also provides a blackboard control system based on binocular camera gesture interaction, in which a motor-driven blackboard slideway is mounted on the wall surface and a binocular RGB image capture module, which is a binocular camera, is mounted along the top edge of the blackboard, above the topmost point of the slideway; the control system comprises the binocular RGB image capture module, an image analysis module and a motor drive module;
the binocular RGB image capture module captures binocular RGB image data of the user and transmits each frame of binocular RGB image data to the image analysis module;
the image analysis module receives the two binocular RGB images transmitted by the capture module, detects the human hand in each image with a neural network, locating the hand's bounding-box coordinates and key-point coordinates, then recognizes the gesture type with a neural network, and converts the recognition result into a control command sent to the motor drive module;
the motor drive module receives the control command from the image analysis module and displaces the blackboard; the displacement distance and direction equal the displacement and direction of the hand between the start-control gesture and the end-control gesture.
This gesture-controlled blackboard enables efficient blackboard movement, and writing and moving can take place at the same time. In particular, even a very large blackboard needs no manual pushing: a gesture alone operates the motor. Binocular RGB vision also has a large cost advantage over infrared depth vision.
Drawings
FIG. 1 is a block diagram of the blackboard control system based on binocular camera gesture interaction according to the present invention;
FIG. 2 is a flowchart of the detailed operation of the image analysis module;
FIG. 3 is a structural diagram of the blackboard control system based on binocular camera gesture interaction according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
With reference to fig. 1 and fig. 2, the invention provides a blackboard control method based on binocular camera gesture interaction, in which a motor-driven blackboard slideway is mounted on the wall surface and a binocular RGB image capture module, which is a binocular camera, is mounted along the top edge of the blackboard, above the topmost point of the slideway. The method comprises the following steps:
Step 1: call the binocular RGB image capture module to capture binocular RGB image data of the user, and transmit each frame of binocular RGB image data to the image analysis module;
Step 2: call the image analysis module to process the binocular RGB image data and recognize the gesture type in the image. There are two gesture types: a start-control gesture and an end-control gesture. If the recognized gesture is the start-control gesture, record its spatial coordinates; for any other gesture, repeat this step;
Step 3: repeat step 2; if the gesture continues to be the start-control gesture, extract the displacement from the first start-control frame to the current frame, interpret the command this displacement represents, and convert it into an electrical signal sent to the motor drive module;
Step 4: repeat step 3 until the end-control gesture is recognized, which marks the end of one control cycle, and return to step 2.
The image analysis module operates according to the following steps:
Step 1.1: receive the two binocular RGB images transmitted by the binocular RGB image capture module;
Step 1.2: detect the human hand in each of the two binocular RGB images with a neural network, locating the hand's bounding-box coordinates and key-point coordinates;
Step 1.3: recognize the gesture type in the image with a neural network;
Step 1.4: if the start-control gesture is recognized, record the key-point coordinates and repeat step 1.3; if the gesture continues to be the start-control gesture and the key points undergo a spatial displacement, convert that displacement into a control command that drives the motor drive module;
Step 1.5: if the end-control gesture is recognized, send a stop command to the motor drive module.
Hand detection extracts the position information of the user's hand by locating the orientation and size of hand features, both by comparison against a model and by comparison of successive frames. It comprises the following steps:
Step 2.1: analyze the image with a convolutional-neural-network detection model to find and extract the hand information;
Step 2.2: once the hand position is found, start real-time hand tracking over the temporal information of successive frames, and determine the target displacement from the tracking result;
Step 2.3: in parallel with step 2.2, run the detection model in a separate thread at below frame rate, and use its result on the corresponding frame to correct the reference position of the tracker.
After a hand has been detected and the hand region has been cropped from the image, a neural-network model performs two operations: (1) regression of the key-point coordinates and (2) classification of the gesture type. The distance between the hand and the binocular camera is then determined from the corresponding positions of the key points in the two RGB images. Gesture type recognition comprises the following steps:
Step 3.1: receive the binocular RGB image regions cropped from the detected bounding boxes;
Step 3.2: feed the cropped binocular RGB images into a neural-network model that regresses the hand key-point coordinates and simultaneously outputs the gesture type;
Step 3.3: correct the key-point coordinates with a Kalman filter based on the data of the previous frame;
Step 3.4: determine the distance between each key point and the binocular camera from the key point's position within each of the two binocular RGB images, thereby determining the distance between the hand and the camera.
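The specification names Kalman filtering in step 3.3 but not its state model. A common choice is a constant-velocity filter run independently on each keypoint coordinate, as in the following sketch; the state model and noise parameters are assumptions.

```python
import numpy as np

class KeypointKalman1D:
    """Constant-velocity Kalman filter for one keypoint coordinate; the
    position+velocity state model and the noise values are assumptions,
    since the specification names the filter but not its parameters."""
    def __init__(self, dt=1.0 / 30.0, q=1e-2, r=4.0):
        self.x = np.zeros(2)                        # state: [position, velocity]
        self.P = np.eye(2) * 500.0                  # large initial uncertainty
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
        self.H = np.array([[1.0, 0.0]])             # only position is measured
        self.Q = np.eye(2) * q                      # process noise
        self.R = np.array([[r]])                    # measurement noise (px^2)

    def update(self, measured_px):
        # predict from the previous frame
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # correct with the current detection
        y = measured_px - (self.H @ self.x)[0]      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain, shape (2, 1)
        self.x = self.x + K[:, 0] * y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                            # smoothed position

kf = KeypointKalman1D()
for z in [100.0, 102.5, 101.0, 104.0]:              # noisy x-coordinates (px)
    print(round(kf.update(z), 2))
```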
The motor drive module receives the control command from the image analysis module and displaces the blackboard; the displacement distance and direction equal the displacement and direction of the hand between the start-control gesture and the end-control gesture.
The invention also provides a blackboard control system based on binocular camera gesture interaction, in which a motor-driven blackboard slideway is mounted on the wall surface and a binocular RGB image capture module, which is a binocular camera, is mounted along the top edge of the blackboard, above the topmost point of the slideway. The control system comprises the binocular RGB image capture module, an image analysis module, and a motor drive module.
the binocular RGB image capturing module captures binocular RGB image data of a user and transmits each frame of binocular RGB image data to the image analysis module; the image capturing module comprises a binocular RGB camera and is used for shooting images in front of a blackboard, and the image capturing module continuously captures the images and transmits the images to the image analysis module after being electrified. The camera contained in the device can capture two RGB images within a field range, the field range can be adjusted according to practical application, and the two RGB images are input into the image analysis module.
The image analysis module receives the two binocular RGB images transmitted by the capture module, detects the human hand in each image with a neural network, locating the hand's bounding-box coordinates and key-point coordinates, then recognizes the gesture type with a neural network, and converts the recognition result into a control command sent to the motor drive module.
The motor drive module receives the control command from the image analysis module and displaces the blackboard; the displacement distance and direction equal the displacement and direction of the hand between the start-control gesture and the end-control gesture.
Examples
In an ordinary classroom the blackboard is fixed to the wall, and some classrooms even have a manually operated slideway. The wall behind the blackboard can therefore be retrofitted with a motor-driven slideway. A binocular camera is then mounted along the top edge of the blackboard, above the topmost point of the slideway, as shown in fig. 3, so that the effective analysis field of the binocular camera covers the blackboard's width and extends 30 cm perpendicular to the board surface. The image analysis module is connected to both the motor and the binocular camera. When the system starts, the camera acquires images continuously. When users need to write on the blackboard, they stand close in front of it, inside the effective analysis field; one hand holds the chalk or another writing pen while the other makes the start-control gesture. Once the image analysis module recognizes this gesture, the motor begins to operate; the user holds the gesture, displaces the blackboard by moving that hand, and makes the end-control gesture when the board reaches a suitable position, then continues writing. Throughout the operation the user neither moves away nor performs any other action, so the train of thought is not interrupted and no great effort is spent moving the blackboard.
The blackboard control method and system based on binocular camera gesture interaction provided by the invention have been described in detail above. A specific example has been used to explain the principle and implementation of the invention, and the description of the embodiment is intended only to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, vary the specific embodiments and the scope of application; in summary, the content of this specification should not be construed as limiting the invention.

Claims (6)

1. A blackboard control method based on binocular camera gesture interaction, wherein a motor-driven blackboard slideway is mounted on a wall surface and a binocular RGB image capture module, which is a binocular camera, is mounted along the top edge of the blackboard above the topmost point of the slideway, characterized in that the method comprises the following steps:
Step 1: call the binocular RGB image capture module to capture binocular RGB image data of the user, and transmit each frame of binocular RGB image data to the image analysis module;
Step 2: call the image analysis module to process the binocular RGB image data and recognize the gesture type in the image. There are two gesture types: a start-control gesture and an end-control gesture. If the recognized gesture is the start-control gesture, record its spatial coordinates; for any other gesture, repeat this step;
Step 3: repeat step 2; if the gesture continues to be the start-control gesture, extract the displacement from the first start-control frame to the current frame, interpret the command this displacement represents, and convert it into an electrical signal sent to the motor drive module;
Step 4: repeat step 3 until the end-control gesture is recognized, which marks the end of one control cycle, and return to step 2.
2. The method of claim 1, wherein the image analysis module operates according to the following steps:
Step 1.1: receive the two binocular RGB images transmitted by the binocular RGB image capture module;
Step 1.2: detect the human hand in each of the two binocular RGB images with a neural network, locating the hand's bounding-box coordinates and key-point coordinates;
Step 1.3: recognize the gesture type in the image with a neural network;
Step 1.4: if the start-control gesture is recognized, record the key-point coordinates and repeat step 1.3; if the gesture continues to be the start-control gesture and the key points undergo a spatial displacement, convert that displacement into a control command that drives the motor drive module;
Step 1.5: if the end-control gesture is recognized, send a stop command to the motor drive module.
3. The method of claim 2, wherein hand detection comprises the following steps:
Step 2.1: analyze the image with a convolutional-neural-network detection model to find and extract the hand information;
Step 2.2: once the hand position is found, start real-time hand tracking over the temporal information of successive frames, and determine the target displacement from the tracking result;
Step 2.3: in parallel with step 2.2, run the detection model in a separate thread at below frame rate, and use its result on the corresponding frame to correct the reference position of the tracker.
4. The method of claim 3, wherein gesture type recognition comprises the following steps:
Step 3.1: receive the binocular RGB image regions cropped from the detected bounding boxes;
Step 3.2: feed the cropped binocular RGB images into a neural-network model that regresses the hand key-point coordinates and simultaneously outputs the gesture type;
Step 3.3: correct the key-point coordinates with a Kalman filter based on the data of the previous frame;
Step 3.4: determine the distance between each key point and the binocular camera from the key point's position within each of the two binocular RGB images, thereby determining the distance between the hand and the camera.
5. The method of claim 1, wherein the motor drive module receives the control command from the image analysis module and displaces the blackboard; the displacement distance and direction equal the displacement and direction of the hand between the start-control gesture and the end-control gesture.
6. A blackboard control system based on binocular camera gesture interaction, wherein a motor-driven blackboard slideway is mounted on a wall surface and a binocular RGB image capture module, which is a binocular camera, is mounted along the top edge of the blackboard above the topmost point of the slideway, characterized in that the control system comprises a binocular RGB image capture module, an image analysis module, and a motor drive module;
the binocular RGB image capture module captures binocular RGB image data of the user and transmits each frame of binocular RGB image data to the image analysis module;
the image analysis module receives the two binocular RGB images transmitted by the capture module, detects the human hand in each image with a neural network, locating the hand's bounding-box coordinates and key-point coordinates, then recognizes the gesture type with a neural network, and converts the recognition result into a control command sent to the motor drive module;
the motor drive module receives the control command from the image analysis module and displaces the blackboard; the displacement distance and direction equal the displacement and direction of the hand between the start-control gesture and the end-control gesture.
CN202010052958.0A 2020-01-17 2020-01-17 Blackboard control method and control system based on binocular camera gesture interaction Pending CN111258427A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010052958.0A CN111258427A (en) 2020-01-17 2020-01-17 Blackboard control method and control system based on binocular camera gesture interaction

Publications (1)

Publication Number Publication Date
CN111258427A 2020-06-09

Family

ID=70950776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010052958.0A Pending CN111258427A (en) 2020-01-17 2020-01-17 Blackboard control method and control system based on binocular camera gesture interaction

Country Status (1)

Country Link
CN (1) CN111258427A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102200830A (en) * 2010-03-25 2011-09-28 夏普株式会社 Non-contact control system and control method based on static gesture recognition
CN102385439A (en) * 2011-10-21 2012-03-21 华中师范大学 Man-machine gesture interactive system based on electronic whiteboard
US20140177909A1 (en) * 2012-12-24 2014-06-26 Industrial Technology Research Institute Three-dimensional interactive device and operation method thereof
CN103927016A (en) * 2014-04-24 2014-07-16 西北工业大学 Real-time three-dimensional double-hand gesture recognition method and system based on binocular vision
CN107357428A (en) * 2017-07-07 2017-11-17 京东方科技集团股份有限公司 Man-machine interaction method and device based on gesture identification, system
CN108227912A (en) * 2017-11-30 2018-06-29 北京市商汤科技开发有限公司 Apparatus control method and device, electronic equipment, computer storage media
CN108363482A (en) * 2018-01-11 2018-08-03 江苏四点灵机器人有限公司 A method of the three-dimension gesture based on binocular structure light controls smart television
CN109960406A (en) * 2019-03-01 2019-07-02 清华大学 Based on the intelligent electronic device gesture capture acted between both hands finger and identification technology
CN110347266A (en) * 2019-07-23 2019-10-18 哈尔滨拓博科技有限公司 A kind of space gesture control device based on machine vision
CN110442242A (en) * 2019-08-13 2019-11-12 哈尔滨拓博科技有限公司 A kind of smart mirror system and control method based on the interaction of binocular space gesture

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112306237A (en) * 2020-10-21 2021-02-02 广州朗国电子科技有限公司 Three-dimensional touch method based on electromagnetic wave reflection, touch equipment and storage medium
CN112306237B (en) * 2020-10-21 2024-10-18 广州朗国电子科技股份有限公司 Three-dimensional touch method based on electromagnetic wave reflection, touch equipment and storage medium

Similar Documents

Publication Publication Date Title
CN102693047B (en) The method of numerical information and image is shown in cooperative surroundings
US10059005B2 (en) Method for teaching a robotic arm to pick or place an object
CN103353935B (en) A kind of 3D dynamic gesture identification method for intelligent domestic system
US20160154996A1 (en) Robot cleaner and method for controlling a robot cleaner
CN105361429A (en) Intelligent studying platform based on multimodal interaction and interaction method of intelligent studying platform
CN106201172B (en) Canvas display method and device for touch screen terminal
CN102707817B (en) Laser inscription system
CN102609093A (en) Method and device for controlling video playing by using gestures
CN102819403A (en) Terminal equipment and man-machine interaction method thereof
CN111597969A (en) Elevator control method and system based on gesture recognition
CN109240494B (en) Control method, computer-readable storage medium and control system for electronic display panel
US20160342224A1 (en) Remote Control Method and Apparatus
CN106031163A (en) Method and apparatus for controlling projection display
CN103135746A (en) Non-touch control method and non-touch control system and non-touch control device based on static postures and dynamic postures
CN111656313A (en) Screen display switching method, display device and movable platform
CN111258427A (en) Blackboard control method and control system based on binocular camera gesture interaction
CN107506033A (en) Information display control method and device, intelligent teaching equipment and storage medium
CN104598138A (en) Method and device for controlling electronic map
CN203950270U (en) Body sense recognition device and by the man-machine interactive system of its mouse beacon keyboard operation
CN113434081A (en) Teaching system capable of implementing man-machine interaction and facilitating teaching information exchange and use method
CN107538485B (en) Robot guiding method and system
CN107479713A (en) The man-machine interaction method and mobile device of a kind of mobile device
CN103914186A (en) Image location recognition system
CN114860143B (en) Navigation control method and device, terminal equipment and storage medium
CN116149477A (en) Interaction method, interaction device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200609)