CN112383804A - Gesture recognition method based on air mouse trajectory - Google Patents

Gesture recognition method based on air mouse trajectory

Info

Publication number
CN112383804A
Authority
CN
China
Prior art keywords
data
mouse
empty
gesture
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011276538.7A
Other languages
Chinese (zh)
Inventor
杨柳
杨恩泽
唐海林
庞善斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Changhong Electric Co Ltd
Original Assignee
Sichuan Changhong Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Changhong Electric Co Ltd
Priority to CN202011276538.7A
Publication of CN112383804A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/42222Additional components integrated in the remote control device, e.g. timer, speaker, sensors for detecting position, direction or movement of the remote control, microphone or battery charging device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/42221Transmission circuitry, e.g. infrared [IR] or radio frequency [RF]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a gesture recognition method based on the air mouse trajectory. Because the method directly uses existing air mouse trajectory data, it is low in cost; and because it recognizes gestures with a machine learning algorithm, it supports a wider range of gesture definitions and usage scenarios. Television operation can thus be carried out conveniently and quickly with this method, in line with daily operation habits.

Description

Gesture recognition method based on air mouse trajectory
Technical Field
The invention relates to the technical field of smart televisions, and in particular to a gesture recognition method based on the air mouse trajectory.
Background
At present, smart televisions have increasingly powerful functions and richer content, with new forms such as live broadcasting and online shopping, so interaction between the smart television and the user is more and more important. The traditional television remote control is somewhat cumbersome and insufficiently user-friendly in scenes that depend heavily on interaction. To achieve quick and smooth interaction with the television, an air mouse remote control is therefore usually provided, so that the user can operate the television much as they operate a mobile phone.
However, existing air mouse recognition schemes must balance recognition accuracy against recognition cost, and motion recognition that fully conforms to human operating habits is difficult or impossible to realize. Camera-based gesture recognition adds high hardware cost, is strongly affected by environmental changes, and poses potential privacy risks for users. Touch-pad recognition also requires extra hardware, enlarges the remote control and reduces its portability, supports only a few gesture types, and makes the recognition operation cumbersome.
Considering how gesture recognition is currently implemented on smart televisions, and in order to reduce cost, widen the scope of use and improve portability, a gesture recognition algorithm based on the air mouse trajectory on the smart television is proposed: taking the air mouse data as the source, the user's gestures are recognized directly from the air mouse trajectory data. When a user completes a specific gesture, the corresponding gesture trajectory data is generated naturally, and deep learning on this trajectory data completes automatic recognition. No additional acquisition device is needed, since the existing air mouse remote control's trajectory data is processed directly, and various gestures can be customized. The scheme therefore achieves high performance and a wider application range while avoiding unnecessary cost overhead.
Disclosure of Invention
The invention aims to solve the above problems by providing a gesture recognition method based on the air mouse trajectory, applied to smart television devices equipped with an air mouse remote control. On the basis of the existing smart television air mouse device and its software, the invention obtains the raw air mouse data through the interface, collects air mouse trajectory data for predefined gestures, preprocesses the collected trajectory data and trains a neural network to obtain a final model with a high recognition rate. The model is deployed into the air mouse software, the air mouse operations performed by the user are recognized in real time, and when an operation is recognized as a predefined gesture the corresponding function control is completed by the function control module.
The invention can be applied to any smart television with an air mouse remote control, can easily be carried into existing products, and allows convenient gesture expansion later to support more functions.
The invention realizes the purpose through the following technical scheme:
a gesture recognition method based on a mouse-empty track comprises the following steps:
step one, preparing the software and hardware settings the air mouse remote control needs to acquire user action data;
the data acquisition module adopts an accelerometer and a gyroscope sensor and is integrated in the air mouse remote control; a Bluetooth module is also integrated in the air mouse remote control to handle data communication with the operated device; the smart television integrates a Bluetooth module to complete connection and communication with the remote control;
step two, acquiring user action data with the sensor hardware of the air mouse remote control and training a deep learning model with the collected gesture data to obtain a final model with a high recognition rate;
the sensor data are transmitted by the sending module to a data receiving unit, which receives the data and passes them to the air mouse coordinate transformation algorithm for processing;
the converted air mouse coordinates and the function control results are mapped onto the smart television to complete the drawing of the air mouse on the screen and the function control of the smart television;
the air mouse coordinate trajectory data are extracted and converted into data usable by the air mouse deep learning algorithm;
the transformed air mouse trajectory data are fed to the deep learning algorithm for training, and the model is evaluated and adjusted to finally obtain the optimal training model;
step three, deploying the trained model into the air mouse software; completing prediction and recognition of the user's gestures with the model deployment framework on the app side; and judging the recognition confidence from the recognition result: when the confidence exceeds 95%, the recognition is considered accurate and the corresponding gesture function is executed.
As a further scheme, in step one, the voltage change caused by the inertial force on the accelerometer is detected and quantized by an ADC; the quantized value reflects the current object's reading under the combined action of gravity and external force, and the current resultant acceleration can be obtained by conversion. At the same time, the gyroscope detects the mutually orthogonal vibration caused by external force and the alternating Coriolis force caused by rotation, converts the angular velocity of the rotating object into a DC voltage signal proportional to it, and finally outputs a quantized value through the ADC that reflects the current object's rotation angle.
As a further scheme, in step one, the sending module is implemented by the Bluetooth module and transmits the data over the standard HID protocol; the data receiving unit, namely the smart television, uses the user-space USB and Bluetooth human-interface node hidraw provided by the Linux kernel to communicate with the Bluetooth module.
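By way of illustration only (not part of the original disclosure), the following is a minimal Java sketch of reading raw HID reports from a Linux hidraw node; the node path /dev/hidraw0 and the 32-byte buffer are assumptions, and elevated permissions are required in practice. The patent's own HidRaw wrapper appears in the detailed description below.

import java.io.FileInputStream;
import java.io.IOException;

/** Sketch (assumption): reading raw HID reports from a Linux hidraw node.
 *  The node path and buffer size are assumed; root or proper permissions are required. */
public final class HidRawReader {
    public static void main(String[] args) throws IOException {
        byte[] report = new byte[32];  // assumed maximum report length
        try (FileInputStream in = new FileInputStream("/dev/hidraw0")) {
            int n;
            while ((n = in.read(report)) > 0) {
                // Each report carries the remote control's sensor payload for one sample.
                System.out.printf("report: %d bytes, first byte 0x%02x%n", n, report[0]);
            }
        }
    }
}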
As a further scheme, in step two, the data receiving unit receives the raw sensor data, which are converted into concrete coordinate values by the air mouse coordinate conversion algorithm; the receiving unit also receives return information related to air mouse control.
As a further scheme, in step two, after the air mouse coordinates are obtained, the smart television must actually be controlled and the air mouse displayed on the television screen; television control uses the AccessibilityService function and the system's Instrumentation simulation function, and the mouse display can use custom drawing or a created UInput device to implement input and display.
As a further scheme, in step two, the air mouse trajectory is recognized directly, since the user's gesture operation is reflected intuitively in the air mouse trajectory; before the data are used, the original air mouse trajectory must be converted into a data format the machine learning algorithm can recognize, that is, the air mouse trajectory data are converted into a picture format for the recognition model to learn from and predict on.
Further, in step two, air mouse gesture trajectory data acquisition, model training, model evaluation and model adjustment need to be carried out continuously;
a) before collection, the collection rules and conditions of the data must first be defined, starting with deciding when a user's operation counts as the start and end of a gesture; a large shake by the user is defined as the start of the gesture and a short pause as its end;
b) because only a small number of samples can be collected manually, model accuracy would otherwise be low; the data must therefore first be expanded with a data enhancement algorithm, applying region interception and trajectory enhancement operations to the original pictures.
As a further scheme, in step three, the trained model is deployed into the air mouse software, and the model deployment framework TensorFlow Lite is used to run the model.
The invention has the beneficial effects that:
the gesture recognition method based on the air mouse track directly uses the ready-made air mouse track data, so that the method has the advantage of low cost, and meanwhile, the gesture is recognized by adopting a machine learning algorithm, so that the method has wider gesture definition and use scenes, can conveniently and quickly perform television operation by the method, and accords with daily operation habits.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a core flow chart of the recognition algorithm of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
FIG. 1 is a flow chart of the present invention, covering the whole process from data acquisition through data processing and gesture recognition to gesture execution. Gesture collection takes place in the air mouse hardware sensor layer, while data processing through gesture execution takes place in the Android layer.
Fig. 2 is a core flow chart of the recognition algorithm of the present invention, detailing data extraction, data preprocessing, model input, model training, model deployment, model recognition, gesture execution and so on.
In one embodiment, as shown in figs. 1-2, the gesture recognition method based on the air mouse trajectory of the present invention includes the following steps:
step one, preparing the software and hardware settings the air mouse remote control needs to acquire user action data;
the data acquisition module adopts an accelerometer and a gyroscope sensor and is integrated in the air mouse remote control; a Bluetooth module is also integrated in the air mouse remote control to handle data communication with the operated device; the smart television integrates a Bluetooth module to complete connection and communication with the remote control;
step two, acquiring user action data with the sensor hardware of the air mouse remote control and training a deep learning model with the collected gesture data to obtain a final model with a high recognition rate;
the sensor data are transmitted by the sending module to a data receiving unit, which receives the data and passes them to the air mouse coordinate transformation algorithm for processing;
the converted air mouse coordinates and the function control results are mapped onto the smart television to complete the drawing of the air mouse on the screen and the function control of the smart television;
the air mouse coordinate trajectory data are extracted and converted into data usable by the air mouse deep learning algorithm;
the transformed air mouse trajectory data are fed to the deep learning algorithm for training, evaluation and adjustment to finally obtain the optimal model;
step three, deploying the trained model into the air mouse software; completing prediction and recognition of the user's gestures with the model deployment framework on the app side; and judging the recognition confidence from the recognition result: when the confidence exceeds 95%, the recognition is considered accurate and the corresponding gesture function is executed.
In a specific embodiment, as shown in figs. 1-2, the gesture recognition method based on the air mouse trajectory of the present invention specifically includes the following steps:
the method comprises the following steps:
1) Acquire user action data with the sensor hardware of the air mouse remote control. The voltage change caused by the inertial force on the accelerometer is detected and quantized by the ADC; the quantized value reflects the current object's reading under the combined action of gravity and external force, and the current resultant acceleration is obtained by conversion. At the same time, the gyroscope detects the mutually orthogonal vibration caused by external force and the alternating Coriolis force caused by rotation, converts the angular velocity of the rotating object into a DC voltage signal proportional to it, and finally outputs a quantized value through the ADC that reflects the current object's rotation angle.
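For illustration only (not code from the patent), the raw 16-bit ADC readings described above could be scaled into physical units as in the following minimal Java sketch; the ±2 g and ±250 °/s full-scale ranges and all names are assumptions.

/** Sketch (assumption): scale raw 16-bit ADC readings from the IMU into physical units.
 *  The +/-2 g and +/-250 deg/s full-scale ranges are assumed, not taken from the patent. */
public final class ImuScaler {
    private static final double ACCEL_SCALE = 2.0 * 9.81 / 32768.0;  // m/s^2 per LSB
    private static final double GYRO_SCALE  = 250.0 / 32768.0;       // deg/s per LSB

    /** Magnitude of the resultant acceleration (gravity plus external force). */
    public static double resultantAcceleration(short ax, short ay, short az) {
        double x = ax * ACCEL_SCALE, y = ay * ACCEL_SCALE, z = az * ACCEL_SCALE;
        return Math.sqrt(x * x + y * y + z * z);
    }

    /** Rotation angle (degrees) accumulated over one sampling interval of dtSeconds. */
    public static double rotationAngle(short gyroRaw, double dtSeconds) {
        return gyroRaw * GYRO_SCALE * dtSeconds;
    }
}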
2) Transmit the sensor data to the data receiving unit through the sending module. The sending module is implemented with a Bluetooth module, which has the advantages of low power consumption and low cost, offers a transmission range that fully meets practical needs, and transmits over the standard HID protocol. The data receiving unit, namely the smart television, uses the user-space USB and Bluetooth human-interface node hidraw provided by the Linux kernel to communicate with the Bluetooth module.
3) The data receiving unit receives the data and passes them to the air mouse algorithm for processing. The receiving unit receives raw sensor data, which must be converted into concrete coordinate values by the coordinate conversion algorithm; it also receives return information related to air mouse control.
mHidRaw = new HidRaw(BuildConfig.DEVICE_NAME, true) {
    @Override
    protected void onHidInput(byte[] buf, int length) {
        // Sensor wake and sleep control
        // Convert the sensor data into coordinates
        // Return results after the user-defined function completes
    }
};
The HidRaw class is a Java wrapper around the Linux human-interface node hidraw and is responsible for communicating with the Bluetooth device.
4) Map the converted air mouse coordinates and the function control results onto the smart television to complete the drawing of the air mouse on the screen and the function control of the smart television. After the air mouse coordinates are obtained, the smart television must actually be controlled and the air mouse displayed on the television screen. Television control uses the powerful AccessibilityService function together with the system's Instrumentation simulation function, while the mouse display can adopt various implementations, such as custom drawing or using a created UInput device for input and display.
AccessibilityNodeInfo node = getRootInActiveWindow();
// Automatically acquire focus
if (node != null && node.isFocusable()) {
    node.performAction(AccessibilityNodeInfo.ACTION_FOCUS);
}
// Simulate click, select, long press, etc.
...
// Send a key event
mInstrumentation.sendKeySync(new KeyEvent(0, 0, KeyEvent.ACTION_DOWN, keyCode, 0));
// Send key and coordinate events
mUInput.sendEvent(type, code, value);
5) Extract the air mouse coordinate trajectory data and convert them into data usable by the algorithm. The air mouse trajectory is recognized directly, because the user's gesture operation is reflected intuitively in it. Before the data are used, and in order to obtain higher recognition accuracy and reuse mature recognition schemes, the original air mouse trajectory must be converted into a data format the machine learning algorithm can recognize. The main work is converting the air mouse trajectory data into a picture format for the recognition model to learn from and predict on.
(The corresponding code listing appears only as images in the original publication.)
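Since the listing above survives only as images, the following is a sketch, under stated assumptions, of the kind of conversion step 5 describes: rasterizing the (x, y) coordinate trajectory onto a fixed-size grayscale picture for the recognition model. The class name, canvas size and normalization strategy are assumptions; it is written in plain Java for self-containedness (an Android build would use Bitmap and Canvas instead).

import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.List;

/** Sketch (assumption): rasterize an air mouse (x, y) trajectory into a small grayscale picture. */
public final class TrajectoryRasterizer {
    public static BufferedImage rasterize(List<int[]> points, int size) {
        BufferedImage img = new BufferedImage(size, size, BufferedImage.TYPE_BYTE_GRAY);
        if (points.size() < 2) return img;
        // Normalize the trajectory's bounding box to the canvas so position and scale do not matter.
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = Integer.MIN_VALUE, maxY = Integer.MIN_VALUE;
        for (int[] p : points) {
            minX = Math.min(minX, p[0]); maxX = Math.max(maxX, p[0]);
            minY = Math.min(minY, p[1]); maxY = Math.max(maxY, p[1]);
        }
        double scale = (size - 4) / (double) Math.max(1, Math.max(maxX - minX, maxY - minY));
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        g.setStroke(new BasicStroke(2f));
        int px = -1, py = -1;
        for (int[] p : points) {
            int x = 2 + (int) Math.round((p[0] - minX) * scale);
            int y = 2 + (int) Math.round((p[1] - minY) * scale);
            if (px >= 0) g.drawLine(px, py, x, y);  // connect consecutive samples
            px = x; py = y;
        }
        g.dispose();
        return img;
    }
}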
6) Train the deep learning model with the collected gesture data to obtain a final model with a high recognition rate. Air mouse gesture trajectory data acquisition, model training, model evaluation and model adjustment must be carried out continuously.
a) Before collection, the collection rules and conditions of the data must first be defined, starting with deciding when a user's operation counts as the start and end of a gesture. A large shake by the user may be defined as the start of the gesture and a short pause as its end (see the segmentation sketch after this list).
b) Because only a small number of samples can be collected manually, model accuracy would otherwise be low; the data must therefore first be expanded with a data enhancement algorithm, applying operations such as region interception and trajectory enhancement to the original pictures.
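The segmentation rule in a) can be sketched as a small state machine; this is an assumption-laden illustration, not the patent's implementation: a resultant-acceleration spike marks the gesture start, and a sustained near-rest reading marks its end. All thresholds below are assumed values.

/** Sketch (assumption): segment gestures with a large shake as the start marker
 *  and a short pause (near-rest readings for PAUSE_MS) as the end marker. */
public final class GestureSegmenter {
    private static final double SHAKE_THRESHOLD_MS2 = 15.0;  // assumed start threshold
    private static final double GRAVITY_MS2         = 9.81;
    private static final double REST_TOLERANCE_MS2  = 0.7;   // assumed "no motion" band
    private static final long   PAUSE_MS            = 300;   // assumed end-pause length

    private boolean recording = false;
    private long quietSince = -1;

    /** Feed one resultant-acceleration sample; returns true when a gesture just ended. */
    public boolean onSample(double resultantAccelMs2, long timestampMs) {
        if (!recording) {
            if (resultantAccelMs2 > SHAKE_THRESHOLD_MS2) {  // large shake: gesture starts
                recording = true;
                quietSince = -1;
            }
            return false;
        }
        if (Math.abs(resultantAccelMs2 - GRAVITY_MS2) < REST_TOLERANCE_MS2) {
            if (quietSince < 0) quietSince = timestampMs;
            if (timestampMs - quietSince >= PAUSE_MS) {     // short pause: gesture ends
                recording = false;
                return true;
            }
        } else {
            quietSince = -1;  // still moving; reset the pause timer
        }
        return false;
    }
}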
(The corresponding code listings appear only as images in the original publication.)
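Since these listings also survive only as images, here is a sketch, under stated assumptions, of the augmentation operations step 6 b) names: random region interception plus a slight geometric perturbation of the trajectory picture. The class name, crop fraction and rotation range are assumptions.

import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Sketch (assumption): expand a small gesture-picture dataset with random region
 *  interception (crops) and slight rotations of the trajectory image. */
public final class TrajectoryAugmenter {
    private final Random rng = new Random(42);

    public List<BufferedImage> augment(BufferedImage src, int copies) {
        List<BufferedImage> out = new ArrayList<>();
        int w = src.getWidth(), h = src.getHeight();
        for (int i = 0; i < copies; i++) {
            // Region interception: randomly crop up to ~10% off each border.
            int dx = rng.nextInt(w / 10 + 1), dy = rng.nextInt(h / 10 + 1);
            BufferedImage crop = src.getSubimage(dx, dy, w - 2 * dx, h - 2 * dy);
            // Redraw the crop at the original size with a small random rotation.
            BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
            Graphics2D g = dst.createGraphics();
            AffineTransform t = new AffineTransform();
            t.translate(w / 2.0, h / 2.0);
            t.rotate(Math.toRadians(rng.nextInt(21) - 10));  // -10..+10 degrees, assumed range
            t.scale(w / (double) crop.getWidth(), h / (double) crop.getHeight());
            t.translate(-crop.getWidth() / 2.0, -crop.getHeight() / 2.0);
            g.drawImage(crop, t, null);
            g.dispose();
            out.add(dst);
        }
        return out;
    }
}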
7) Deploy the trained model into the air mouse software. This part uses the model deployment framework TensorFlow Lite to run the model.
// Deployment framework dependency (Gradle)
implementation files('libs/libandroid_tensorflow_inference_java.jar')
// Save the trained model file into the app's assets directory
8) Complete prediction and recognition of the user's gestures with the model deployment framework on the app side, returning the gesture recognition result together with its confidence.
// The deployment framework uses the trained model
mClassifier = new Classifier(getAssets(), MODEL_FILE);
ArrayList<String> result = mClassifier.predict(bitmapForPredict);
9) Execute the corresponding function according to the recognition result. The recognition confidence is judged from the result: when the confidence exceeds 95%, the recognition is considered accurate and the corresponding gesture function is executed. For example, recognizing that the user has drawn a check mark triggers a confirmation operation on the current page, a clockwise circle increases the volume, and a counter-clockwise circle decreases the volume; these examples only illustrate the use of gestures and can be refined for the actual situation.
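A minimal sketch of this final dispatch step, assuming the classifier returns strings of the form "label:confidence"; the result format, gesture labels and action stubs below are all assumptions, not the patent's API:

/** Sketch (assumption): act on a recognition result only above the 95% threshold.
 *  Result strings are assumed to look like "circle_cw:0.97"; labels and stubs are hypothetical. */
public final class GestureDispatcher {
    private static final double THRESHOLD = 0.95;

    public void dispatch(String result) {
        String[] parts = result.split(":");
        if (parts.length != 2) return;
        double confidence = Double.parseDouble(parts[1]);
        if (confidence < THRESHOLD) return;  // below threshold: treat as ordinary mouse movement
        switch (parts[0]) {
            case "check_mark": confirmCurrentPage(); break;  // tick gesture confirms the page
            case "circle_cw":  adjustVolume(+1);     break;  // clockwise circle: volume up
            case "circle_ccw": adjustVolume(-1);     break;  // counter-clockwise: volume down
            default:           /* unknown gesture */ break;
        }
    }

    private void confirmCurrentPage() { /* e.g. send an ENTER key via Instrumentation */ }
    private void adjustVolume(int delta) { /* e.g. AudioManager.adjustStreamVolume */ }
}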
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited to them; any changes or substitutions that a person skilled in the art can easily conceive within the technical scope of the present invention shall be covered by it. Therefore, the protection scope of the present invention shall be subject to the appended claims. It should be noted that the technical features described in the above embodiments can be combined in any suitable manner without contradiction; to avoid unnecessary repetition, these possible combinations are not described separately. In addition, any combination of the various embodiments of the present invention is likewise regarded as part of the disclosure, as long as it does not depart from the spirit of the invention.

Claims (8)

1. A gesture recognition method based on the air mouse trajectory, characterized by comprising the following steps:
step one, preparing the software and hardware settings the air mouse remote control needs to acquire user action data;
the data acquisition module adopts an accelerometer and a gyroscope sensor and is integrated in the air mouse remote control; a Bluetooth module is integrated in the air mouse remote control to handle data communication with the operated device; the smart television integrates a Bluetooth module to complete connection and communication with the remote control;
step two, acquiring user action data with the sensor hardware of the air mouse remote control and training a deep learning model with the collected gesture data to obtain a final model with a high recognition rate;
the sensor data are transmitted by the sending module to a data receiving unit, which receives the data and passes them to the air mouse coordinate transformation algorithm for processing;
the converted air mouse coordinates and the function control results are mapped onto the smart television to complete the drawing of the air mouse on the screen and the function control of the smart television;
the air mouse coordinate trajectory data are extracted and converted into data usable by the air mouse deep learning algorithm;
the transformed air mouse trajectory data are fed to the deep learning algorithm for training, evaluation and adjustment to finally obtain the optimal model;
step three, deploying the trained model into the air mouse software; completing prediction and recognition of the user's gestures with the model deployment framework on the app side; and judging the recognition confidence from the recognition result: when the confidence exceeds 95%, the recognition is considered accurate and the corresponding gesture function is executed.
2. The gesture recognition method based on the air mouse trajectory according to claim 1, characterized in that in step one, the voltage change caused by the inertial force on the accelerometer is detected and quantized by an ADC; the quantized value reflects the current object's reading under the combined action of gravity and external force, and the current resultant acceleration can be obtained by conversion; at the same time, the gyroscope detects the mutually orthogonal vibration caused by external force and the alternating Coriolis force caused by rotation, converts the angular velocity of the rotating object into a DC voltage signal proportional to it, and finally outputs a quantized value through the ADC that reflects the current object's rotation angle.
3. The gesture recognition method based on the air mouse trajectory according to claim 1, characterized in that in step one, the sending module is implemented by the Bluetooth module and transmits over the standard HID protocol; the data receiving unit, namely the smart television, uses the user-space USB and Bluetooth human-interface node hidraw provided by the Linux kernel to communicate with the Bluetooth module.
4. The gesture recognition method based on the air mouse trajectory according to claim 1, characterized in that in step two, the data receiving unit receives raw sensor data, which are converted into concrete coordinate values by the air mouse coordinate conversion algorithm; the receiving unit also receives return information related to air mouse control.
5. The gesture recognition method based on the air mouse trajectory according to claim 1, characterized in that in step two, after the air mouse coordinates are obtained, the smart television must actually be controlled and the air mouse displayed on the television screen; television control uses the AccessibilityService function and the system's Instrumentation simulation function, and the mouse display can adopt custom drawing or use a created UInput device to implement input and display.
6. The gesture recognition method based on the air mouse trajectory according to claim 1, characterized in that in step two, the air mouse trajectory is recognized directly, since the user's gesture operation is reflected intuitively in it; before the data are used, the original air mouse trajectory must be converted into a data format the machine learning algorithm can recognize, that is, the air mouse trajectory data are converted into a picture format for the recognition model to learn from and predict on.
7. The gesture recognition method based on the air mouse trajectory according to claim 1, characterized in that in step two, air mouse gesture trajectory data acquisition, model training, model evaluation and model adjustment are carried out continuously;
a) before collection, the collection rules and conditions of the data are first defined, starting with deciding when a user's operation counts as the start and end of a gesture; a large shake is defined as the start of the gesture and a short pause as its end;
b) because only a small number of samples can be collected manually, model accuracy would otherwise be low; the data are therefore first expanded with a data enhancement algorithm, applying region interception and trajectory enhancement operations to the original pictures.
8. The gesture recognition method based on the air mouse trajectory according to claim 1, characterized in that in step three, the trained model is deployed into the air mouse software, and the model deployment framework TensorFlow Lite is used to run the model.
CN202011276538.7A 2020-11-13 2020-11-13 Gesture recognition method based on air mouse trajectory Pending CN112383804A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011276538.7A CN112383804A (en) 2020-11-13 2020-11-13 Gesture recognition method based on air mouse trajectory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011276538.7A CN112383804A (en) 2020-11-13 2020-11-13 Gesture recognition method based on air mouse trajectory

Publications (1)

Publication Number Publication Date
CN112383804A (en) 2021-02-19

Family

ID=74584179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011276538.7A Pending CN112383804A (en) Gesture recognition method based on air mouse trajectory

Country Status (1)

Country Link
CN (1) CN112383804A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113485597A (en) * 2021-07-09 2021-10-08 上海明我信息技术有限公司 Video conference interaction method and controller

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763515A (en) * 2009-09-23 2010-06-30 中国科学院自动化研究所 Real-time gesture interaction method based on computer vision
EP2446341A1 (en) * 2008-12-24 2012-05-02 Ioannis Tarnanas Virtual reality interface system
CN103974113A (en) * 2014-05-26 2014-08-06 中国科学院上海高等研究院 Remote control device with gesture recognition function
CN104182035A (en) * 2013-05-28 2014-12-03 中国电信股份有限公司 Method and system for controlling television application program
EP2901246A1 (en) * 2012-09-28 2015-08-05 Movea Remote control with 3d pointing and gesture recognition capabilities
CN108089693A (en) * 2016-11-22 2018-05-29 比亚迪股份有限公司 Gesture identification method and device, intelligence wearing terminal and server
CN111885406A (en) * 2020-07-30 2020-11-03 深圳创维-Rgb电子有限公司 Smart television control method and device, rotatable television and readable storage medium


Similar Documents

Publication Publication Date Title
AU2022271496B2 (en) Controlling a device based on processing of image data that captures the device and/or an installation environment of the device
KR102624327B1 (en) Method for location inference of IoT device, server and electronic device supporting the same
CN103021410A (en) Information processing apparatus, information processing method, and computer readable medium
EP3671549B1 (en) Electronic device for assisting a user during exercise
US20160334880A1 (en) Gesture recognition method, computing device, and control device
WO2022142830A1 (en) Application device and air gesture recognition method thereof
CN103248874A (en) Front-end device for portable wireless data acquisition and transmission system on construction site
CN106598422B (en) hybrid control method, control system and electronic equipment
CN112383804A (en) Gesture recognition method based on air mouse trajectory
CN106648040B (en) Terminal control method and device
CN115453903A (en) Intelligent home control method and device, wearable device and storage medium
CN110852217B (en) Face recognition method and electronic equipment
CN109542229B (en) Gesture recognition method, user equipment, storage medium and device
CN112882577B (en) Gesture control method, device and system
WO2020124389A1 (en) Apparatus in mobile terminal for identifying application program, and terminal
CN117170495A (en) Multi-functional gesture interaction terminal control method based on sEMG electromyographic signal detection
CN115798054A (en) Gesture recognition method based on AR/MR technology and electronic device
CN114647300A (en) System control method, device, wearable device and storage medium
CN117133051A (en) Vehicle gesture control method, device, equipment, system and storage medium
CN116627253A (en) Intelligent home control system based on gesture recognition
CN112102831A (en) Cross-data, information and knowledge modal content encoding and decoding method and component
CN114299949A (en) User fuzzy instruction receiving system
CN116382458A (en) Man-machine interaction method, device, terminal equipment and storage medium
CN117931344A (en) Equipment control method, device, medium and intelligent wearable equipment
CN115686328A (en) Contactless interaction method and device in free space, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20210219)