CN115620397A - Vehicle-mounted gesture recognition system based on Leapmotion sensor - Google Patents
- Publication number
- CN115620397A (application CN202211386000.0A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- module
- data
- information
- data processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/145—Illumination specially adapted for pattern recognition, e.g. using gratings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/72—Data preparation, e.g. statistical preprocessing of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/766—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/84—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using probabilistic graphical models from image or video features, e.g. Markov models or Bayesian networks
- G06V10/85—Markov-related models; Markov random fields
Abstract
The invention discloses a vehicle-mounted gesture recognition system based on a Leapmotion sensor. The system comprises a data acquisition module, a data processing module, a control module, a response module and an emergency handling module which are connected in sequence. The data acquisition module acquires a gesture image of the user with a binocular camera, and the Leapmotion sensor generates a corresponding three-dimensional gesture model from the gesture image and extracts the gesture data of the acquired object. The data processing module comprises a complex-background gesture training library and a data processing unit, and matches the extracted gesture information against the gesture training library. The control module outputs the matched control signal. The response module comprises a plurality of display screens and other interaction units and performs the system response for the corresponding signal. In an emergency, a designated gesture signal is passed to the emergency handling module to perform emergency braking. The system provides a better human-computer interaction experience and improves the user's operating experience and driving safety.
Description
Technical Field
The invention relates to the field of vehicle-mounted human-computer interaction systems and vehicle-mounted gesture recognition, in particular to a vehicle-mounted gesture recognition system based on a Leapmotion sensor.
Background
With the development of artificial intelligence, machine vision technology has gradually entered daily life, enriching people's cultural life and bringing enjoyable experiences. Gesture recognition, as an artificial intelligence interaction mode, allows a user to control or interact with equipment using gestures and lets a computer understand human behaviour, promoting the development of human-computer interaction.
Current vehicle-mounted gesture recognition mainly relies on wearable sensing equipment or simple static gesture recognition. Wearable sensing equipment offers high accuracy and good robustness, but its purchase and use cost is relatively high, making it unsuitable for mass production. Vehicle-mounted static gesture recognition has a high recognition rate and is easy to implement, but it falls short of current production and living standards. The Leapmotion sensor can generate and dynamically track a three-dimensional gesture in real time, estimate the depth and three-dimensional spatial information of the image from the pose of the three-dimensional gesture, and extract hand feature information; under suitable conditions its recognition precision is high, and compared with a Kinect sensor it has a clear precision advantage and a higher tracking frame rate. It also has certain shortcomings, however: it is susceptible to interference from the external environment, and its recognition rate in the vertical direction is relatively low.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provide a vehicle-mounted gesture recognition system based on the Leapmotion sensor that has higher recognition accuracy and efficiency, is not easily disturbed by external noise, and offers the user a good human-computer interaction experience.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a vehicle-mounted gesture recognition system based on a Leapmotion sensor is characterized by comprising a data acquisition module, a data processing module, a control module, a response module and an emergency handling module which are sequentially connected;
the data acquisition module acquires image information with a binocular camera, models a space rectangular coordinate system through the Leapmotion sensor (the X and Z axes horizontal, the Y axis vertical), captures the user's gestures with the aid of an added infrared LED, establishes a three-dimensional hand model and transmits the gesture signals to the data processing module;
the data processing module comprises a complex-background gesture training library, an HMM-based dynamic gesture training unit and a data processing unit, and extracts and processes the captured gesture information;
the control module matches the information extracted by the data processing module against the vehicle-mounted gesture library, outputs the matched result as a control signal and transmits it to the response module;
the response module comprises a plurality of screen units and a human-computer interaction unit in the cabin;
and the emergency handling module performs forced braking via a designated gesture when an emergency is encountered during driving.
Further, the data acquisition module acquires gesture information within the capture range in front of the vehicle-mounted screen, extracts the gesture features and transmits the gesture signal to the next module; the acquisition of the gesture information comprises the following steps:
(1) Extract three-dimensional hand position information with the binocular camera according to the binocular stereoscopic vision imaging principle, and establish a three-dimensional hand model;
(2) Capture the gesture trajectory with the Leapmotion sensor using optical gesture tracking;
(3) Use a grayscale camera to reduce the amount of computation and speed up the algorithm;
(4) Add an infrared LED to reduce external interference, strengthen the infrared illumination, improve recognition precision and effectively remove noise.
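The binocular step in (1) relies on stereo triangulation: the depth of a matched point is recovered from the disparity between the two camera views as Z = f·B/d. A minimal sketch; the camera parameters below are illustrative and not taken from the patent:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo triangulation: depth Z = f * B / d, where f is the focal
    length in pixels, B the baseline between the two cameras in metres,
    and d the horizontal disparity of the matched point in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative values (not from the patent): 700 px focal length,
# 40 mm baseline, 28 px disparity -> the point lies 1.0 m away.
print(depth_from_disparity(700.0, 0.040, 28.0))  # -> 1.0
```

Applied per matched hand keypoint, this yields the three-dimensional position information from which the hand model is built.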
Further, the data processing module performs feature extraction on the input frames containing gesture information to obtain gesture feature parameters, and transmits the processed parameter information to the control module; the specific steps comprise:
(1) Initialize the collected gesture data through Leapmotion;
(2) Test the data before the vehicle-mounted gesture data are output;
(3) Normalize the data to prepare for subsequent processing, so that convergence is faster at run time; the min-max normalization formula is:
x̂_i = (x_i − min(x)) / (max(x) − min(x))
where x̂_i denotes the normalized data, x_i the original data, max(x) the maximum value in the data, and min(x) the minimum value.
(4) Adopt the NUS-II complex background database and train the HMM dynamic gesture model; match against the vehicle-mounted gesture database and transmit the processing signal through the data processing unit.
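The normalization in step (3) is standard min-max scaling. A minimal sketch in pure Python (the function name is ours, not the patent's):

```python
def min_max_normalize(xs):
    """Min-max normalization: x_hat_i = (x_i - min(x)) / (max(x) - min(x)),
    mapping every sample into [0, 1] so later training converges faster."""
    lo, hi = min(xs), max(xs)
    if hi == lo:
        return [0.0 for _ in xs]  # degenerate case: all samples identical
    return [(x - lo) / (hi - lo) for x in xs]

print(min_max_normalize([2.0, 4.0, 6.0]))  # -> [0.0, 0.5, 1.0]
```

Each gesture feature channel would be scaled independently before being fed to the HMM training in step (4).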
Furthermore, the data processing module also comprises a three-dimensional pose regressor: the three-dimensional hand mesh is processed by the pose regressor, the mesh renderer renders the hand mesh to obtain a hand depth map and a pose map, and the data are tested and matched before the processing signal is output.
The invention has the beneficial effects that:
1. Compared with a traditional Leap motion deployment, an infrared LED is added to enhance performance and reduce noise interference, and a grayscale camera is used to reduce the amount of computation and speed up the algorithm.
2. The gesture training library adopts the NUS-II complex data set and adds training of a dynamic gesture HMM model, improving recognition accuracy and, compared with traditional training methods, the robustness of gesture recognition.
3. Unlike traditional training methods, the data are normalized before training, preparing them for subsequent processing and making convergence faster at run time.
4. Compared with the three-dimensional gesture map in the space rectangular coordinate system of a traditional Leapmotion sensor, the improved pipeline adds hand pose, joint and depth maps, improving gesture recognition in all directions and especially the recognition rate in the vertical direction.
5. An emergency handling module is added: when an emergency arises during road driving, such as brake failure or a delayed reaction, the designated gesture signal makes the vehicle brake immediately, improving driving safety.
6. Compared with a traditional single screen, each of the vehicle-mounted screens can perform gesture recognition separately, improving the multi-user experience.
Drawings
FIG. 1 is a block diagram of the present invention;
FIG. 2 is a block diagram of a data processing module architecture of the present invention;
FIG. 3 is a block diagram of the structure of a human-computer interaction unit of the present invention;
FIG. 4 is a block diagram of the structure of the emergency handling module of the present invention;
FIG. 5 is a schematic diagram of the space coordinate system of the Leapmotion sensor of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Examples
As shown in FIG. 1, the vehicle-mounted gesture recognition system based on the Leapmotion sensor comprises a data acquisition module, a data processing module, a control module, a response module and an emergency handling module which are connected in sequence. Wherein:
the data acquisition module comprises a binocular camera, a Leapmotion sensor and an infrared LED light source connected in sequence. In actual operation, the binocular camera and its built-in grayscale camera acquire the user's gesture information; the added infrared LED light source stabilizes the background light and improves the accuracy of gesture recognition; the Leapmotion sensor generates a three-dimensional hand model from the gesture information in a space rectangular coordinate system (shown in FIG. 5); the gesture signal is then output to the data processing module.
The data processing module consists of a complex-background gesture training library, an HMM dynamic gesture training model and a data processing unit, and transmits the processed gesture information to the control module. The control module matches the gesture information against the gesture library and outputs an instruction signal to the response module. The response module is connected in sequence to a plurality of vehicle-mounted screens and the human-computer interaction unit for real-time response. When the designated gesture signal is issued, the signal is transmitted to the emergency handling module and the vehicle brakes immediately.
The response module comprises the several in-vehicle screens and a human-computer interaction unit, each screen being a response unit of the gesture recognition system. The human-computer interaction unit (shown in FIG. 3) covers the various functions attached to the screen response and controls them through gesture information.
The working method of the vehicle-mounted gesture recognition system based on the Leapmotion sensor of the invention specifically comprises:
S1, a gesture information image of the user is acquired and input into the data processing module;
S2, the data processing module processes the gesture information image from the data acquisition module to obtain a three-dimensional hand pose depth map, and outputs the gesture recognition information after training and matching to the control module;
S3, the control module judges the information, responds, and transmits it to the response module; if it is classified as the designated signal, the information is transmitted to the emergency handling module;
S4, the response module operates the plurality of visual display screens through the matched gesture information, the human-computer interaction unit comprising sub-units such as entertainment, multimedia and Beidou navigation;
S5, the gesture signal response ends and the user completes one human-computer interaction operation.
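The control flow of S1–S5 can be sketched as a small pipeline. Everything below — the frame layout, the gesture names and the toy height-based matcher standing in for the trained recognizer — is our assumption for illustration only:

```python
from dataclasses import dataclass

@dataclass
class GestureFrame:
    # Hypothetical per-frame hand sample: palm position in the sensor's
    # coordinate system (X and Z horizontal, Y vertical, as in the patent).
    x: float
    y: float
    z: float

def recognize(frames, library):
    """Toy stand-in for the trained matcher: choose the library entry
    whose reference palm height is closest to the input's mean height."""
    mean_y = sum(f.y for f in frames) / len(frames)
    return min(library, key=lambda entry: abs(entry[1] - mean_y))[0]

def handle(frames, library, emergency_gesture="fist_hold"):
    """S1-S5: acquire -> recognize -> route to response or emergency module."""
    label = recognize(frames, library)
    if label == emergency_gesture:
        return "emergency_brake"   # routed to the emergency handling module
    return "respond:" + label      # routed to the response module

library = [("swipe_up", 200.0), ("fist_hold", 50.0)]  # invented reference data
print(handle([GestureFrame(0, 60, 0)], library))      # -> emergency_brake
```

The point of the sketch is the routing decision in S3: ordinary matches go to the response module, while the designated gesture is diverted to the emergency path.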
In this embodiment, the flowchart of the specific working method of the gesture recognition process in the data processing module is shown in FIG. 2 and comprises:
S1, the gesture information from the data acquisition module is transmitted to the data processing module;
S2, graph convolution is performed on the hand feature information to obtain a three-dimensional hand mesh;
S3, a three-dimensional hand pose depth map is obtained through the mesh renderer and three-dimensional pose regression, strengthening the gesture feature information;
S4, data testing is performed on the preliminarily processed gesture information in preparation for training;
S5, before training, the data are normalized to prepare for subsequent data processing;
S6, the obtained data are trained with the complex-background gesture library and the dynamic gesture HMM model;
S7, the trained data model is matched against the gesture library;
S8, after recognition, the processing signal is transmitted to the control module;
S9, if the dynamic gesture target disappears, the gesture signal is acquired and processed again and the above steps are repeated.
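Steps S6–S8 amount to scoring an observation sequence against one trained HMM per gesture and keeping the best match. A minimal sketch of that matching step using the scaled forward algorithm for discrete-emission HMMs — the one-state model parameters below are invented for illustration and are not the patent's trained models:

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled HMM forward algorithm: returns log P(obs | model) for a
    discrete-emission HMM with initial probs pi (N,), transition matrix
    A (N, N) and emission matrix B (N, M); obs is a list of symbol ids."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()        # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
        c = alpha.sum()
        log_p += np.log(c)
        alpha = alpha / c
    return log_p

def match_gesture(obs, models):
    """S7: pick the library gesture whose HMM best explains the sequence."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Two toy one-state models: "press" mostly emits symbol 0, "swipe" symbol 1.
models = {
    "press": (np.array([1.0]), np.array([[1.0]]), np.array([[0.9, 0.1]])),
    "swipe": (np.array([1.0]), np.array([[1.0]]), np.array([[0.1, 0.9]])),
}
print(match_gesture([0, 0, 0, 1], models))  # -> press
```

In a real pipeline the symbols would be quantized gesture features, and the per-gesture models would be fitted on the NUS-II-based training library via Baum-Welch.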
in this embodiment, a specific work flow diagram of the handling module is shown in fig. 4, and includes:
s1, when a user meets an emergency situation in the driving process, emergency braking is required but cannot be implemented;
s2, a gesture signal is designated to be input to the response unit;
s3, gesture matching recognition, signal output and system response;
and S4, emergency braking of the vehicle.
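A practical concern in S2–S4 is that a single misrecognized frame must not brake the car. One plausible safeguard — our assumption, since the patent does not specify any debouncing — is to require the designated gesture across several consecutive recognizer outputs before issuing the brake command:

```python
def emergency_monitor(labels, trigger="fist_hold", hold_frames=5):
    """Issue the brake command only after the designated gesture has been
    recognized in `hold_frames` consecutive frames; returns the frame
    index at which braking is triggered, or -1 if it never is.
    Gesture name and threshold are illustrative, not from the patent."""
    run = 0
    for i, label in enumerate(labels):
        run = run + 1 if label == trigger else 0  # count consecutive hits
        if run >= hold_frames:
            return i
    return -1

stream = ["swipe", "press", "press"] + ["fist_hold"] * 5
print(emergency_monitor(stream))  # -> 7 (fifth consecutive trigger frame)
```

The threshold trades reaction latency against false triggers; at a typical tracking frame rate a hold of a few frames adds only tens of milliseconds.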
In conclusion, the vehicle-mounted gesture recognition method based on the Leapmotion sensor effectively improves the accuracy of vehicle-mounted gesture recognition, reduces external noise interference and improves the robustness of the whole recognition process. The three-dimensional hand pose depth map alleviates the low recognition rate in the vertical direction, human-computer interaction across multiple visual display screens improves the user experience and the gesture recognition rate, and the safety of the user while driving is improved to a certain extent.
Although the present invention has been described with reference to the preferred embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (4)
1. A vehicle-mounted gesture recognition system based on a Leapmotion sensor is characterized by comprising a data acquisition module, a data processing module, a control module, a response module and an emergency handling module which are sequentially connected;
the data acquisition module acquires image information with a binocular camera, models a space rectangular coordinate system through the Leapmotion sensor (the X and Z axes horizontal, the Y axis vertical), captures the user's gestures with the aid of an added infrared LED, establishes a three-dimensional hand model and transmits the gesture signals to the data processing module;
the data processing module comprises a complex-background gesture training library, an HMM-based dynamic gesture training unit and a data processing unit, and extracts and processes the captured gesture information;
the control module matches the information extracted by the data processing module against the vehicle-mounted gesture library, outputs the matched result as a control signal and transmits it to the response module;
the response module comprises a plurality of screen units and a human-computer interaction unit in the cabin;
and the emergency handling module performs forced braking via a designated gesture when an emergency is encountered during driving.
2. The Leapmotion sensor-based vehicle-mounted gesture recognition system as recited in claim 1, wherein the data acquisition module is configured to acquire gesture information within an acquirable range in front of a vehicle-mounted screen, extract gesture features, and transmit a gesture signal to a next module, wherein the acquisition of the gesture information includes:
(1) Extracting three-dimensional hand position information by using a binocular camera according to a binocular stereoscopic vision imaging principle, and establishing a three-dimensional hand model;
(2) A Leapmotion sensor is adopted, and optical gesture tracking is used to capture the gesture trajectory;
(3) A grayscale camera is adopted to reduce the amount of computation and speed up the algorithm;
(4) An infrared LED is added to reduce external interference, strengthen the infrared illumination, improve recognition precision and effectively remove noise.
3. The system of claim 2, wherein the data processing module performs feature extraction on the input frames containing gesture information to obtain gesture feature parameters, processes the parameter information and transmits it to the control module, with the following specific steps:
(1) Initializing acquired gesture data information through Leapmotion;
(2) Before vehicle-mounted gesture data are output, testing the data;
(3) Normalizing the data to prepare for subsequent data processing, so that convergence is faster at run time, the normalization formula being:
x̂_i = (x_i − min(x)) / (max(x) − min(x))
in the formula, x̂_i denotes the normalized data, x_i the original data, max(x) the maximum value in the data, and min(x) the minimum value.
(4) An NUS-II complex background database is adopted, and an HMM dynamic gesture model is used for training; and matching by using a vehicle-mounted gesture database, and transmitting a processing signal through a data processing unit.
4. The Leapmotion sensor-based vehicle-mounted gesture recognition system of claim 3, wherein the data processing module further comprises a three-dimensional pose regressor, the three-dimensional hand mesh is processed by the pose regressor, the mesh renderer renders the hand mesh to obtain a hand depth map and a pose map, and the data are tested and matched before a processing signal is output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211386000.0A CN115620397A (en) | 2022-11-07 | 2022-11-07 | Vehicle-mounted gesture recognition system based on Leapmotion sensor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211386000.0A CN115620397A (en) | 2022-11-07 | 2022-11-07 | Vehicle-mounted gesture recognition system based on Leapmotion sensor |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115620397A true CN115620397A (en) | 2023-01-17 |
Family
ID=84878471
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211386000.0A Pending CN115620397A (en) | 2022-11-07 | 2022-11-07 | Vehicle-mounted gesture recognition system based on Leapmotion sensor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115620397A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105138125A (en) * | 2015-08-25 | 2015-12-09 | 华南理工大学 | Intelligent vehicle-mounted system based on Leapmotion gesture recognition |
KR101643690B1 (en) * | 2015-04-21 | 2016-08-10 | 한국과학기술원 | Apparatus and method for reconstruction of human locomotion by using motion sensor embedding a portable device |
CN107688390A (en) * | 2017-08-28 | 2018-02-13 | 武汉大学 | A kind of gesture recognition controller based on body feeling interaction equipment |
CN109766822A (en) * | 2019-01-07 | 2019-05-17 | 山东大学 | Gesture identification method neural network based and system |
CN110795990A (en) * | 2019-09-11 | 2020-02-14 | 中国海洋大学 | Gesture recognition method for underwater equipment |
CN110837792A (en) * | 2019-11-04 | 2020-02-25 | 东南大学 | Three-dimensional gesture recognition method and device |
CN112163447A (en) * | 2020-08-18 | 2021-01-01 | 桂林电子科技大学 | Multi-task real-time gesture detection and recognition method based on Attention and Squeezenet |
CN112513787A (en) * | 2020-07-03 | 2021-03-16 | 华为技术有限公司 | Interaction method, electronic device and system for in-vehicle isolation gesture |
CN113033398A (en) * | 2021-03-25 | 2021-06-25 | 深圳市康冠商用科技有限公司 | Gesture recognition method and device, computer equipment and storage medium |
CN115063849A (en) * | 2022-05-23 | 2022-09-16 | 中国第一汽车股份有限公司 | Dynamic gesture vehicle control system and method based on deep learning |
- 2022-11-07: CN application CN202211386000.0A filed; patent CN115620397A pending
Non-Patent Citations (1)
Title |
---|
3D Vision Workshop (3D视觉工坊): "Leap Motion 3D motion controller: real-time hand capture using an infrared LED and grayscale cameras", pages 1, Retrieved from the Internet <URL:https://www.zhihu.com/zvideo/1349092493598928896?utm_id=0> *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200126250A1 (en) | Automated gesture identification using neural networks | |
JP7016522B2 (en) | Machine vision with dimensional data reduction | |
CN107545302B (en) | Eye direction calculation method for combination of left eye image and right eye image of human eye | |
CN111837144A (en) | Enhanced image depth sensing using machine learning | |
CN106598226A (en) | UAV (Unmanned Aerial Vehicle) man-machine interaction method based on binocular vision and deep learning | |
CN104598915A (en) | Gesture recognition method and gesture recognition device | |
EP3811337A1 (en) | System for predicting articulated object feature location | |
CN105144196A (en) | Method and device for calculating a camera or object pose | |
Wang et al. | Using human body gestures as inputs for gaming via depth analysis | |
US20230085384A1 (en) | Characterizing and improving of image processing | |
CN110135277B (en) | Human behavior recognition method based on convolutional neural network | |
CN109389035A (en) | Low latency video actions detection method based on multiple features and frame confidence score | |
Liu et al. | RGB‐D human action recognition of deep feature enhancement and fusion using two‐stream convnet | |
US11106899B2 (en) | Electronic device, avatar facial expression system and controlling method thereof | |
CN112400148A (en) | Method and system for performing eye tracking using off-axis cameras | |
CN116935203B (en) | Diver intelligent monitoring method and system based on acousto-optic fusion | |
CN116449947B (en) | Automobile cabin domain gesture recognition system and method based on TOF camera | |
KR101480816B1 (en) | Visual speech recognition system using multiple lip movement features extracted from lip image | |
JPH04260979A (en) | Detecting and tracking system for mobile objection | |
CN110490165B (en) | Dynamic gesture tracking method based on convolutional neural network | |
CN115620397A (en) | Vehicle-mounted gesture recognition system based on Leapmotion sensor | |
Monica et al. | Recognition of medicine using cnn for visually impaired | |
KR20130081126A (en) | Method for hand-gesture recognition and apparatus thereof | |
CN112667088B (en) | Gesture application identification method and system based on VR walking platform | |
CN112764531A (en) | Augmented reality ammunition identification method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||