CN110942040B - Gesture recognition system and method based on ambient light - Google Patents

Gesture recognition system and method based on ambient light

Info

Publication number
CN110942040B
CN110942040B CN201911203896.2A
Authority
CN
China
Prior art keywords
signal
gesture recognition
data
photoelectric
photoelectric receivers
Prior art date
Legal status
Active
Application number
CN201911203896.2A
Other languages
Chinese (zh)
Other versions
CN110942040A (en)
Inventor
黄苗
段海涵
杨彦兵
陈良银
陈彦如
郭敏
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201911203896.2A priority Critical patent/CN110942040B/en
Publication of CN110942040A publication Critical patent/CN110942040A/en
Application granted granted Critical
Publication of CN110942040B publication Critical patent/CN110942040B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of gesture recognition, and aims to provide a low-cost, high-accuracy gesture recognition system and method based on ambient light that removes similar systems' dependence on a dedicated light source and therefore supports a wider range of usage scenarios. The technical scheme is as follows: the system comprises a data acquisition terminal, a gesture recognition server and an application terminal. The data acquisition terminal comprises a plurality of photoelectric receivers, a signal amplification module, an analog-to-digital conversion module and a signal processing module. The photoelectric receivers capture the optical-signal changes produced by gesture motions; their output ends are each connected to the input end of the signal amplification module, whose output feeds the analog-to-digital conversion module, whose output in turn feeds the signal processing module. The data processed by the signal processing module are transmitted to the gesture recognition server for recognition; the server outputs the recognition information to the application end, which displays it in real time.

Description

Gesture recognition system and method based on ambient light
Technical Field
The invention relates to the technical field of gesture recognition, in particular to a gesture recognition system and method based on ambient light.
Background
The popularization of intelligent devices goes hand in hand with continuous innovation in human-computer interaction. A traditional smart home or smart building typically networks its smart devices using Internet-of-Things technology, with smart terminals sending instructions to interact with the devices. This traditional interaction style is gradually evolving toward natural interaction with a lower learning cost. Smart speakers such as the Tmall Genie and device voice assistants such as Siri are now common, and speech recognition, one of the natural interaction modes, has matured and is widely applied. Although products such as Kinect and Leap Motion exist on the market, gesture recognition is still rarely applied in daily life.
Common methods of implementing gesture recognition use image, acoustic, radio-frequency, or visible-light technology. Image and acoustic techniques, however, require capturing images or audio, which raises security and privacy concerns. Radio-frequency technology not only requires complex hardware at both the transmitting and receiving ends, but is also easily disturbed by electromagnetic interference. Ambient light, by contrast, has a wide spectrum, is almost ubiquitous, is safe, raises no privacy concerns, and is easy to capture.
In "Reconstructing Hand Poses Using Visible Light", Tianxing Li et al. propose and build Aili, a hand-pose reconstruction system based on LED light. The system uses a modulated LED array as the light source and photodiodes as visible-light sensors to achieve 3D hand-pose reconstruction. However, this system, like many similar systems in the field of visible light communication, has the following disadvantages:
1) Additional modulation equipment must be installed at the light source, which complicates installation and increases system cost;
2) Because of the modulation requirement, the light source can only be an LED, whose market penetration is currently limited;
3) The system targets 3D reconstruction; additional work is required before it can be used in gesture recognition scenarios.
Disclosure of Invention
The invention aims to provide a low-cost, high-accuracy gesture recognition system and method based on ambient light that removes similar systems' dependence on a dedicated light source and supports a wider range of usage scenarios.
In order to achieve the purpose of the invention, the technical scheme adopted is as follows: a gesture recognition system based on ambient light comprises a data acquisition terminal, a gesture recognition server and an application end, wherein the data acquisition terminal comprises a plurality of photoelectric receivers, a signal amplification module, an analog-to-digital conversion module and a signal processing module. The photoelectric receivers, arranged in a combined layout, capture the optical-signal changes generated by gesture motions. The output ends of the photoelectric receivers are connected to the input end of the signal amplification module, the output end of the signal amplification module is connected to the input end of the analog-to-digital conversion module, and the output end of the analog-to-digital conversion module is connected to the input end of the signal processing module. The data processed by the signal processing module are transmitted to the gesture recognition server for recognition; the server outputs the recognition information to the application end, which displays it in real time.
Furthermore, the gesture recognition server comprises a data preprocessing unit and a deep learning network model unit: the data preprocessing unit decodes the data packet output by the signal processing module and restores it into multi-channel data, and the deep learning network model unit recognizes and classifies the restored multi-channel data; both the multi-channel data and the classification result of the deep learning network model unit serve as the recognition information output by the gesture recognition server.
Further, the combined arrangement is a rectangular array, a trapezoid, or a dispersed layout.
Further, the deep learning network model unit is a gated recurrent unit.
Further, the application end is a web front end.
A gesture recognition method based on ambient light comprises the following recognition steps:
s1: the method comprises the following steps that a plurality of photoelectric receivers capture optical signal changes generated by gesture actions at different hand positions in real time, and convert a plurality of collected optical signals into current signals respectively;
s2: the signal amplification module converts the current signal into a voltage signal and amplifies the voltage signal;
s3: the amplified voltage signal is converted into a digital signal by the analog-to-digital conversion module;
s4: the multi-channel digital signals are merged and coded by the signal processing module and then transmitted to the gesture recognition server;
s5: the data preprocessing unit decodes the received original data and restores the original data into multi-channel data;
s6: inputting the restored multi-channel data into a deep learning network model unit to finish the recognition and classification of gestures;
s7: and meanwhile, the restored multi-channel data and the recognition and classification result are used as the input of the application terminal, and the application terminal displays the multi-channel data and the recognized gesture in real time.
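Steps s1 to s6 can be sketched end to end in a few lines of Python. Everything below is an illustrative assumption rather than the patent's implementation: the 12-bit ADC, the 3.3 V reference, the 50-sample window, and the stub scoring model are all placeholders.

```python
import numpy as np

NUM_CHANNELS = 8        # assumed number of photoelectric receivers
WINDOW = 50             # assumed samples per gesture window
# seven gesture classes: one per drooping finger, open palm, fist
GESTURES = ["finger1", "finger2", "finger3", "finger4", "finger5", "palm", "fist"]

def adc_convert(voltage, v_ref=3.3, bits=12):
    """s3: quantize amplified voltages into digital codes (assumed 12-bit ADC)."""
    code = np.clip(voltage / v_ref, 0.0, 1.0) * (2 ** bits - 1)
    return code.astype(np.int32)

def window_channels(digital_stream):
    """s5: restore a flat decoded stream into a (WINDOW, NUM_CHANNELS) array."""
    arr = np.asarray(digital_stream, dtype=np.float32)
    return arr.reshape(-1, NUM_CHANNELS)[:WINDOW]

def classify(window, model):
    """s6: feed the multi-channel window to a model and pick a gesture class."""
    scores = model(window)
    return GESTURES[int(np.argmax(scores))]
```

In the full system the stub `model` would be the trained gated recurrent unit, and windows would be streamed continuously rather than processed one at a time.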
Further, the gesture recognition method further comprises a training step before recognition, wherein the training step is as follows: and the gesture recognition server trains through a back propagation algorithm according to different gesture actions to establish a deep network model.
The beneficial effects of the invention are as follows:
1. the photoelectric receiver can perform gesture recognition only by using ambient light, so that the light source is not limited, and the application scene is wider;
2. extra light source modulation equipment is not required to be installed, so that the system cost is reduced, and the use and installation are simpler and more convenient;
3. the photoelectric receiver is very sensitive and can detect tiny light intensity changes, so that subtle actions can be recognized;
4. the photoelectric receivers are combined and arranged to form multichannel photosensitive data, so that the gesture recognition accuracy can be improved, and the recognition delay is reduced.
Drawings
FIG. 1 is a block diagram of the system architecture of the present invention;
FIG. 2 is a block diagram of a recurrent neural network model of the present invention;
FIG. 3 is a schematic diagram of the layout of the photoelectric receivers of the present invention;
FIG. 4 is a schematic diagram of a deep learning gesture training framework of the present invention;
FIG. 5 is a schematic diagram of a gesture training action.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, a gesture recognition system based on ambient light includes a data acquisition terminal, a gesture recognition server, and an application terminal, where the data acquisition terminal includes a plurality of photoelectric receivers, a signal amplification module, an analog-to-digital conversion module, and a signal processing module;
the photoelectric receivers are used for receiving ambient light in any free space and capturing optical signal changes generated by gesture motions, and the photoelectric receivers are combined and arranged; the photoelectric receiver converts the captured optical signals into current signals, and the output ends of the photoelectric receiver are connected with the input end of the signal amplification module;
the signal amplifier converts the tiny current signal into a voltage signal and then amplifies it; the output end of the signal amplification module is connected with the input end of the analog-to-digital conversion module;
the analog-to-digital conversion module converts the analog voltage signal output by the signal amplifier into a digital signal; its output end is connected with the input end of the signal processing module, which merges and encodes the digital signals output by the analog-to-digital conversion module. After the gesture recognition server sends a request to the signal processing module, the signal processing module sends a data packet to the gesture recognition server;
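The merge-and-encode step can be illustrated with a simple framing scheme. The patent does not specify a packet format, so the conventions below (a 2-byte row-count header followed by little-endian unsigned 16-bit samples, row by row) are assumptions for illustration only.

```python
import struct

NUM_CHANNELS = 8  # eight photoelectric receivers (assumed)

def encode_packet(samples):
    """Merge multi-channel ADC readings into one byte packet.

    `samples` is a list of rows, each row holding one 16-bit ADC value
    per channel. Assumed layout: 2-byte little-endian row count, then
    all values flattened row by row as unsigned 16-bit integers.
    """
    flat = [v for row in samples for v in row]
    return struct.pack("<H", len(samples)) + struct.pack(f"<{len(flat)}H", *flat)

def decode_packet(packet):
    """Restore a byte packet back into per-channel (multi-channel) data."""
    (rows,) = struct.unpack_from("<H", packet, 0)
    flat = struct.unpack_from(f"<{rows * NUM_CHANNELS}H", packet, 2)
    return [list(flat[i * NUM_CHANNELS:(i + 1) * NUM_CHANNELS])
            for i in range(rows)]
```

The decode function corresponds to the data preprocessing unit's job on the server side: a round trip through `encode_packet` and `decode_packet` returns the original multi-channel readings unchanged.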
the gesture recognition server comprises a data preprocessing unit and a deep learning network model unit; in this embodiment the deep learning network model unit is a gated recurrent unit. As shown in fig. 2, h represents the iteration (hidden) vector of the neural network, t is a time step, v represents the input vector, U and W are the gated-recurrent-unit parameter matrices, σ and tanh in the boxes represent the sigmoid and tanh activation functions respectively, g is the output vector of each gate, and ⊙ denotes element-wise multiplication of vectors. The gated recurrent unit contains two gates in total: the subscript u marks the update gate and the subscript r marks the reset gate; finally, the subscript c marks the candidate state from which the update operation computes the neural network iteration vector at the next time step.
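The gate computations of a gated recurrent unit can be written as a minimal NumPy sketch. The subscripts u, r, c follow the naming above (update gate, reset gate, candidate state); parameter shapes and initialization are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(v, h, params):
    """One GRU time step: input v_t and previous hidden vector h_{t-1}."""
    U_u, W_u, U_r, W_r, U_c, W_c = params
    g_u = sigmoid(U_u @ v + W_u @ h)           # update gate
    g_r = sigmoid(U_r @ v + W_r @ h)           # reset gate
    g_c = np.tanh(U_c @ v + W_c @ (g_r * h))   # candidate state (* is elementwise, i.e. the paper's circled dot)
    return g_u * h + (1.0 - g_u) * g_c         # next hidden vector h_t

def init_params(input_dim, hidden_dim, seed=0):
    """Small random parameter matrices: three (U, W) pairs for u, r, c."""
    rng = np.random.default_rng(seed)
    shapes = [(hidden_dim, input_dim), (hidden_dim, hidden_dim)] * 3
    return [rng.standard_normal(s) * 0.1 for s in shapes]
```

Note that conventions differ on whether g_u weights the old state or the candidate; either way h_t is a convex combination of h_{t-1} and g_c, which is the essential property of the update gate.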
the data preprocessing unit decodes the data packet output by the signal processing module and restores the data packet into multi-channel data, the restored multi-channel data are used as the input of the deep learning network model unit, and the restored multi-channel data are identified and classified, wherein the specific gesture classification is shown in fig. 5 and comprises seven gestures, namely five fingers respectively naturally droop, palm naturally spreads and fist making.
Both the multi-channel data and the classification result of the deep learning network model unit serve as the recognition information output by the gesture recognition server, which sends it to the application end. In this embodiment the application end is a web front end that displays the voltage value of each channel to the user; the application end can also display the final recognized gesture and host related interactive applications.
In this embodiment, the photoelectric receivers are photodiodes or phototransistors, preferably eight in number, arranged beneath the user's hand in one of three combination layouts, as shown in fig. 3. For hand a of fig. 3, the receivers form a 2×4 rectangular array with 5 cm between adjacent receivers; this layout is bilaterally symmetric and simple to deploy;
for hand b of fig. 3, the receivers are arranged in a trapezoid: three on the upper side, placed under the index, middle and ring fingers, and five on the lower side, evenly spaced horizontally, with 5 cm between the upper and lower rows. This layout is bilaterally symmetric, so left and right hands can be swapped without re-arrangement, and it adapts to different hand shapes; the aim is to use the limited number of photoelectric receivers to capture as much of each gesture's small motion change as possible;
for hand c of fig. 3, the receivers are dispersed: one near the fingertip of each finger and one at the palm. This layout maximally captures the optical-signal change of the gestures recognized in this embodiment, but because it is asymmetric it can only be used to recognize gestures of a single hand.
A gesture recognition method based on ambient light comprises the following recognition steps:
s1: the photoelectric receivers capture optical signal changes generated by gesture actions at different hand positions in real time, and convert the eight collected optical signals into current signals respectively;
s2: the signal amplification module converts the current signal into a voltage signal and amplifies the voltage signal;
s3: the amplified voltage signal is converted into a digital signal by the analog-to-digital conversion module;
s4: the eight-channel digital signals are merged and coded by the signal processing module and then transmitted to the gesture recognition server;
s5: the data preprocessing unit decodes the received original data and restores the original data into eight-channel data;
s6: inputting the restored eight-channel data into a deep learning network model unit to finish the recognition and classification of gestures;
s7: meanwhile, the restored eight-channel data and the recognition and classification results are used as input of the application end, and the application end displays the eight-channel data and recognized gestures in real time.
The gesture recognition method further comprises a training step before recognition: the gesture recognition server trains through a back-propagation algorithm on different gesture actions to establish the deep network model. The training principle is as follows: the parameters of the parameter matrices are updated iteratively by back propagation, using multi-class cross entropy as the loss function. As shown in fig. 4, each V represents the photoelectric-receiver data input to the neural network at one time step; after the network's forward pass, its output (RNN Output) is passed through a fully connected layer and the softmax function to obtain the gesture corresponding to the input data. The gesture output by the algorithm is compared with the correct gesture: if it is correct, the weight matrices of the neural network are left unchanged; if it is wrong, they are updated by the back-propagation algorithm.
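The output stage of this training loop, a fully connected layer, softmax, multi-class cross entropy, and one gradient update, can be sketched as follows. The learning rate and layer shapes are assumptions, and for simplicity the sketch applies a standard cross-entropy update on every sample rather than the update-only-on-error rule described above; in the full system the gradient would also flow back through the gated recurrent unit (backpropagation through time).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

def cross_entropy(probs, label):
    """Multi-class cross-entropy loss for one sample."""
    return -np.log(probs[label] + 1e-12)

def train_step(rnn_out, label, W, b, lr=0.01):
    """One gradient step on the fully connected output layer.

    `rnn_out` is the RNN output vector for one gesture window; the
    softmax over W @ rnn_out + b gives class probabilities. The
    gradient of cross-entropy w.r.t. the logits is (probs - onehot).
    """
    probs = softmax(W @ rnn_out + b)
    grad_logits = probs.copy()
    grad_logits[label] -= 1.0                  # d(loss)/d(logits)
    W -= lr * np.outer(grad_logits, rnn_out)   # back-propagate to weights
    b -= lr * grad_logits
    return cross_entropy(probs, label)
```

Repeated calls of `train_step` on the same sample drive its loss toward zero, which is the iterative parameter-matrix update the training principle describes.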
It should be noted that, for simplicity of description, the above-mentioned embodiments of the method are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the order of acts described, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and elements referred to are not necessarily required in this application.

Claims (7)

1. A non-contact gesture recognition device based on ambient light, comprising:
a data acquisition terminal: the system comprises 8 photoelectric receivers, a controller and a display, wherein the 8 photoelectric receivers are used for receiving an ambient light signal and generating a current signal, and the ambient light signal is generated in one or more non-contact gestures under an ambient light scene; the 8 photoelectric receivers are arranged in a trapezoidal shape and located below the hand, the upper side of the 8 photoelectric receivers is provided with 3 photoelectric receivers which are arranged at positions corresponding to the index finger, the middle finger and the ring finger of the hand, and the lower side of the 8 photoelectric receivers is provided with 5 photoelectric receivers which are transversely and uniformly distributed;
the signal amplification module: the photoelectric receiver is used for converting the current signal into a voltage signal and amplifying the voltage signal, and the input end of the signal amplification module is connected with the output end of the photoelectric receiver;
an analog-to-digital conversion module: the analog-to-digital conversion module is used for converting the voltage signal into a digital signal, and the input end of the analog-to-digital conversion module is connected with the output end of the signal amplification module;
the signal processing module: the photoelectric receivers are used for receiving the 8 photoelectric signals generated by the photoelectric receivers;
a gesture recognition server: comprising a data preprocessing unit used for decoding the data output by the signal processing module and restoring it into a plurality of groups of data;
deep learning network model unit: the data processing system is used for identifying and classifying the restored groups of data in real time;
an application end: the gesture recognition server is used for displaying the one or more non-contact gestures output by the gesture recognition server in real time.
2. The device according to claim 1, wherein the deep learning network model unit is a gated recurrent unit trained in advance on a non-contact gesture data set collected under historical ambient-light scenes, the training algorithm being a back-propagation algorithm.
3. The device according to claim 1, wherein the application terminal is a front end of a web page.
4. The ambient light-based non-contact gesture recognition device according to claim 1, wherein the 8 photo receivers are arranged in a 2 x 4 rectangular array and located below the hand, and the distance between every two adjacent photo receivers is 5cm.
5. The device according to claim 1, wherein the 8 photoelectric receivers are distributed and located under the hand, and the photoelectric receivers are respectively located near the fingertip of each finger and at the palm position.
6. A non-contact gesture recognition method based on ambient light is characterized by comprising the following steps: the method comprises the following identification steps:
s1: 8 photoelectric receivers capture, in real time, the optical-signal changes generated by gesture actions at different hand positions, and convert the plurality of collected optical signals into current signals respectively; the 8 photoelectric receivers are arranged in a trapezoid and located below the hand, the upper side having 3 receivers arranged at positions corresponding to the index finger, the middle finger and the ring finger of the hand, and the lower side having 5 receivers evenly distributed horizontally;
s2: the signal amplification module converts the current signal into a voltage signal and amplifies the voltage signal;
s3: the amplified voltage signal is converted into a digital signal by the analog-to-digital conversion module;
s4: the multi-channel digital signals are merged and coded by the signal processing module and then transmitted to the gesture recognition server;
s5: the data preprocessing unit decodes the received original data and restores the original data into multi-channel data;
s6: inputting the restored multi-channel data into a deep learning network model unit to finish the recognition and classification of gestures;
s7: and meanwhile, the restored multi-channel data and the recognition and classification results are used as the input of the application terminal, and the application terminal displays the data of the multiple channels and recognized gestures in real time.
7. The ambient light-based non-contact gesture recognition method of claim 6, wherein: the deep learning network model unit is a gated recurrent unit, and the method further comprises a training step before recognition, the training step being as follows: the gesture recognition server trains through a back-propagation algorithm on different gesture actions to establish a deep network model.
CN201911203896.2A 2019-11-29 2019-11-29 Gesture recognition system and method based on ambient light Active CN110942040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911203896.2A CN110942040B (en) 2019-11-29 2019-11-29 Gesture recognition system and method based on ambient light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911203896.2A CN110942040B (en) 2019-11-29 2019-11-29 Gesture recognition system and method based on ambient light

Publications (2)

Publication Number Publication Date
CN110942040A CN110942040A (en) 2020-03-31
CN110942040B true CN110942040B (en) 2023-04-18

Family

ID=69908994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911203896.2A Active CN110942040B (en) 2019-11-29 2019-11-29 Gesture recognition system and method based on ambient light

Country Status (1)

Country Link
CN (1) CN110942040B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111459283A (en) * 2020-04-07 2020-07-28 电子科技大学 Man-machine interaction implementation method integrating artificial intelligence and Web3D
CN111781758A (en) * 2020-07-03 2020-10-16 武汉华星光电技术有限公司 Display screen and electronic equipment
CN113625882B (en) * 2021-10-12 2022-06-14 四川大学 Myoelectric gesture recognition method based on sparse multichannel correlation characteristics

Citations (2)

Publication number Priority date Publication date Assignee Title
CN105205436A (en) * 2014-06-03 2015-12-30 北京创思博德科技有限公司 Gesture identification system based on multiple forearm bioelectric sensors
CN105786177A (en) * 2016-01-27 2016-07-20 中国人民解放军信息工程大学 Gesture recognition device and method

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
US8432372B2 (en) * 2007-11-30 2013-04-30 Microsoft Corporation User input using proximity sensing
US8947353B2 (en) * 2012-06-12 2015-02-03 Microsoft Corporation Photosensor array gesture detection
EP3155560B1 (en) * 2014-06-14 2020-05-20 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US10261685B2 (en) * 2016-12-29 2019-04-16 Google Llc Multi-task machine learning for predicted touch interpretations
CN109923583A (en) * 2017-07-07 2019-06-21 深圳市大疆创新科技有限公司 A kind of recognition methods of posture, equipment and moveable platform
CN107589832A (en) * 2017-08-01 2018-01-16 深圳市汇春科技股份有限公司 It is a kind of based on optoelectronic induction every empty gesture identification method and its control device
CN108088371B (en) * 2017-12-19 2020-12-01 厦门大学 Photoelectric detector position layout for large displacement monitoring
EP3767433A4 (en) * 2018-03-12 2021-07-28 Sony Group Corporation Information processing device, information processing method, and program
CN109099942A (en) * 2018-07-11 2018-12-28 厦门中莘光电科技有限公司 A kind of photoelectricity modulus conversion chip of integrated silicon-based photodetector
CN208903277U (en) * 2018-11-21 2019-05-24 Oppo广东移动通信有限公司 Display screen and electronic equipment
CN110046585A (en) * 2019-04-19 2019-07-23 西北工业大学 A kind of gesture identification method based on environment light

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN105205436A (en) * 2014-06-03 2015-12-30 北京创思博德科技有限公司 Gesture identification system based on multiple forearm bioelectric sensors
CN105786177A (en) * 2016-01-27 2016-07-20 中国人民解放军信息工程大学 Gesture recognition device and method

Also Published As

Publication number Publication date
CN110942040A (en) 2020-03-31

Similar Documents

Publication Publication Date Title
CN110942040B (en) Gesture recognition system and method based on ambient light
US11640208B2 (en) Gesture feedback in distributed neural network system
US20180186452A1 (en) Unmanned Aerial Vehicle Interactive Apparatus and Method Based on Deep Learning Posture Estimation
CN105389148B (en) Data transmission method, data receiving method, related equipment and system
CN102752574A (en) Video monitoring system and method
CN103942559A (en) Image sensing device and decoding circuit thereof
CN110536078A (en) Handle the method and dynamic visual sensor of the data of dynamic visual sensor
Weiyao et al. Fusion of skeleton and RGB features for RGB-D human action recognition
CN104122987A (en) Light sensing module and system
CN110020626B (en) Attention mechanism-based multi-source heterogeneous data identity recognition method
CN114677185A (en) Intelligent large-screen advertisement intelligent recommendation system and recommendation method thereof
Teli et al. Performance evaluation of neural network assisted motion detection schemes implemented within indoor optical camera based communications
CN112102738A (en) Interactive COB display module and LED display screen
Li et al. 3d human skeleton data compression for action recognition
CN101819476A (en) Alternating current pointing light pen, multi-light-pen identification device and method of multi-light-pen identification
Wang et al. Feature representation and compression methods for event-based data
CN105786177A (en) Gesture recognition device and method
CN115240120A (en) Behavior identification method based on countermeasure network and electronic equipment
CN104731324A (en) Gesture inner plane rotating detecting model generating method based on HOG+SVM framework
CN105491336B (en) A kind of low power image identification module
US20200295675A1 (en) Self-Powered Wireless Optical Communication Systems and Methods
CN109951866B (en) People flow monitoring method based on hidden Markov model
CN113526279A (en) Intelligent elevator interaction control system based on machine vision
CN106778537B (en) Animal social network structure acquisition and analysis system and method based on image processing
CN111525958B (en) Optical camera communication system with data communication and gesture action recognition functions

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant