CN105681859A - Man-machine interaction method for controlling smart TV based on human skeletal tracking - Google Patents
Man-machine interaction method for controlling smart TV based on human skeletal tracking
- Publication number
- CN105681859A (publication number); application number CN201610017797.5A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42201—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42204—User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
- H04N21/42206—User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
- H04N21/4221—Dedicated function buttons, e.g. for the control of an EPG, subtitles, aspect ratio, picture-in-picture or teletext
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- General Health & Medical Sciences (AREA)
- Neurosurgery (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a man-machine interaction method for controlling a smart TV based on human skeletal tracking. Compared with the prior art, the method has the following advantages: it establishes a novel human-computer interaction system for controlling a smart TV, using a Kinect sensor to perform voice recognition, hand-gesture recognition and posture recognition. All operations of the smart TV are controlled with the user's voice and two hands, freeing the user from dependence on a physical controller, and providing a brand-new way of controlling a smart TV.
Description
Technical field
The present invention relates to a human-computer interaction method applied to smart TVs.
Background technology
Smart TVs have an application-platform advantage that traditional television manufacturers lack. Once connected to a network, a smart TV can provide a web browser, full-HD 3D motion-sensing games, video calls, home karaoke, online education and many other entertainment, information and learning resources, and these can be extended without limit. It can also support software developed independently by organizations and individuals, professionals and amateurs alike, yielding tens of thousands of shareable utility applications. A smart TV can offer services such as web search, IPTV, video on demand (VOD) and Internet news; users can search television channels and websites, record programs, and play satellite, cable and Internet video. The smart TV thus becomes an open platform that can load unlimited content and unlimited applications, which users can install and personalize as needed, so the television never becomes obsolete.
Smart TVs, however, have continued to be controlled with the same infrared remote controls used by conventional televisions, which handle only basic operations such as power on/off, channel selection and volume control. For controlling a smart TV, such operation is increasingly inadequate, and some operations cannot be performed at all. For example, when browsing a web page, a remote control cannot position the cursor quickly, wasting time and effort; when text or symbols must be entered, even an on-screen keyboard remains cumbersome; worse still, as more and more games are added, many of them have simply abandoned the traditional remote control.
To address these problems, smart-TV manufacturers have each proposed solutions, but none so far can claim to be ideal. The common control devices for current smart TVs, with their respective strengths and weaknesses, are summarized below.
Traditional remote control: still the mainstream controller at present, able to operate the TV's functions. Its advantages are simple operation and conformity with users' established habits. Its drawbacks are difficult pointing, the lack of a smooth pointer, and a cumbersome input method.
Full-keyboard touch remote: given the input difficulty of most remote controls, a key weak link for smart-TV applications, some manufacturers offer a touch remote whose front is like an ordinary remote while the back carries a full phone-style keyboard plus a touch area, combining a conventional remote and a smart remote in one device so that the user need not switch devices back and forth. Its advantages are convenient input and a single unified controller; its drawbacks are higher cost and a dense keyboard unsuitable for elderly users.
Keyboard and mouse: the standard PC peripherals have also been transplanted to the television. Many smart TVs now support USB keyboards and mice, and some models support wireless ones, each complementing the ordinary remote; this is currently the simplest and most practical control scheme, although its response never quite matches the experience on a computer. Its advantages are an easy learning curve, generic hardware and simple operation; its drawbacks are that wired keyboards and mice require the user to sit too close, while wireless ones respond somewhat sluggishly.
Mobile phone and tablet: multi-screen interaction is a new application offered by Android-based smart TVs. After a multi-screen-interaction app is installed on the TV and on a phone or tablet, the phone or tablet becomes a virtual remote control, providing indirect touch control of the television. Its advantages are touch operation and ease of use, with no additional remote hardware needed; its drawback is that, at present, it is limited to Android phones and tablets.
Summary of the invention
The object of the present invention is to provide a convenient human-computer interaction method applied to smart TVs.
To achieve the above object, the technical scheme of the present invention is to provide a man-machine interaction method for controlling a smart TV based on human skeletal tracking, characterized by comprising the following steps:
Step 1: the smart TV is connected to a Kinect sensor, and the hand gesture for entering posture-recognition control is defined in the TV's built-in system, with entering and not entering posture-recognition control defined as different hand gestures;
Step 2: from actually measured human palm dimensions, the size of the palm region is computed; after the operator makes a gesture, the depth sensor in the Kinect sensor obtains a depth image, from which a real-time palm image is cropped according to the computed palm-region size and the palm coordinates; the real-time palm image is binarized to obtain a real-time gesture map;
Step 3: the real-time gesture map is compared with the defined gestures; if it matches the gesture for entering posture-recognition control, the method proceeds to the next step; otherwise the Kinect sensor continues waiting to capture images;
Step 4: after posture-recognition control is entered, skeletal tracking is performed with the Kinect sensor, mapping the real person onto a virtual avatar; the operator moves the cursor with the motion of one hand and performs function operations with the motion of the other hand, wherein:
Moving the cursor with the motion of one hand comprises the following steps:
The elbow point and palm point of the avatar are extracted; with the elbow point as center, a circle of preset radius is drawn, forming an elbow circle. When the palm point lies inside or on the elbow circle, no operation is performed; when the palm point lies outside the elbow circle, the direction from the center of the elbow circle to the palm point controls the direction of cursor movement;
Performing function operations with the motion of the other hand comprises the following steps:
The elbow point and palm point of the avatar are extracted; with the elbow point as center, a circle of preset radius is drawn, forming an elbow circle. When the palm point lies inside or on the elbow circle, no operation is performed; when the palm point lies outside the elbow circle, the region outside the elbow circle is divided into sub-regions, each corresponding to a different function operation, and the sub-region in which the palm point lies determines which function operation is performed.
Preferably, where the smart TV has multiple application scenes, Step 1 further includes defining, in the TV's built-in system, the voice commands corresponding to the different application scenes;
and between Step 1 and Step 2 the method further includes:
receiving through the Kinect sensor a real-time voice command issued by the operator; the system parses the real-time voice command, compares it with the defined voice commands, and enters the application scene corresponding to the matched command.
Preferably, in Step 4, controlling the direction of cursor movement according to the direction from the center of the elbow circle to the palm point comprises the following steps:
obtaining the distance m from the palm point to the center of the elbow circle; with elbow-circle radius r, the distance the cursor moves is d = m - r; the larger d is, the faster the cursor moves, and the smaller d is, the slower.
Preferably, in Step 4, the step of moving the cursor to a small icon according to the direction from the center of the elbow circle to the palm point includes:
mapping the position of the palm point inside the elbow circle to the corresponding position on the smart TV, so that the cursor's position within the small-icon region on the smart TV corresponds, at the same proportion, to the palm point's position within the elbow circle.
Preferably, in Step 4, the method of judging which sub-region the palm point lies in comprises the following steps:
after the coordinates of the palm point and of the center of the elbow circle are obtained, the center of the elbow circle is taken as origin and a polar coordinate system is established; the angle from the center of the elbow circle to the palm point ranges from -180° to 180°; this angular range is divided into segments, each segment corresponding to one sub-region; the angle from the center of the elbow circle to the palm point is computed in real time, the segment into which it falls is determined, and thus the sub-region in which the palm point lies is obtained.
Preferably, in Step 2, after the real-time palm image is binarized, edge detection is performed on the binarized image; the edges are extracted and smoothed; a polygonal-approximation algorithm is then used to compute the convex hull of the whole palm and its convexity-defect points; the real-time gesture map is recognized by judging the pattern of convex and concave features.
With the above technical scheme, the present invention has the following advantage over the prior art: it constructs a novel human-computer interaction system for controlling a smart TV, using a Kinect sensor to perform voice recognition, hand-gesture recognition and posture recognition. All operations of the smart TV are controlled with the user's voice and two hands, freeing the user from dependence on a physical controller, a brand-new way of controlling a smart TV.
Brief description of the drawings
Fig. 1 shows the main system flow chart of the invention;
Fig. 2 shows the button-layout diagram of the invention.
Detailed description of the invention
To make the purpose, technical scheme and advantages of the present invention clearer, embodiments of the present invention are further described below with reference to the accompanying drawings.
In the man-machine interaction method for controlling a smart TV based on human skeletal tracking provided by the invention, the Kinect sensor first recognizes the operator's voice command, and the smart-TV system judges from it which application scene the television should enter.
Considering the characteristics of the Kinect and the requirements of the system, free-form speech recognition with the Kinect would be of little use. The present invention therefore adopts command-style recognition, implemented with the offline SDK library provided by the Baidu open cloud platform. A vocabulary of specific words required by the system is predefined, each word corresponding to a different application scene, and the recognized command selects the predefined scene. Because the Kinect sensor's speech recognition does not yet support Chinese, in this embodiment the vocabulary is defined as "TURN ON", "TURN OFF", "TV", "GAME", "INTERNET", "MOVIE" and "NEWS", corresponding respectively to power on, power off, IPTV, games, web search, video on demand and Internet news. Using the confidence value returned by the speech recognition, the command is matched and executed, starting the corresponding application scene.
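The command-to-scene dispatch described above can be sketched as a confidence-gated lookup. This is an illustrative sketch only, not the patent's implementation: the Baidu SDK's actual API is not shown, and the 0.7 confidence threshold and the scene identifiers are assumptions.

```python
from typing import Optional

# Vocabulary follows the embodiment; scene names and the threshold are assumptions.
SCENES = {
    "TURN ON": "power_on",
    "TURN OFF": "power_off",
    "TV": "iptv",
    "GAME": "game",
    "INTERNET": "web_search",
    "MOVIE": "video_on_demand",
    "NEWS": "internet_news",
}

def dispatch_command(recognized_text: str, confidence: float,
                     threshold: float = 0.7) -> Optional[str]:
    """Return the application scene for a recognized voice command,
    or None if the confidence is too low or the word is unknown."""
    if confidence < threshold:
        return None
    return SCENES.get(recognized_text.strip().upper())
```

A real system would feed `dispatch_command` with the text and confidence produced by the recognition library and ignore low-confidence results rather than guessing.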
In the present invention, the application scenes of the smart TV are IPTV, web search, video on demand, Internet news and games. The control functions each of these five scenes must provide are described in detail below.
IPTV
As the most important function of a smart TV, watching IPTV requires all the controls a remote control can perform, accomplished here with posture recognition and hand-gesture recognition. A predefined hand gesture determines whether posture-recognition control is entered. Once posture recognition starts, the motions of both hands are captured; in this embodiment the left hand moves the cursor, and the right hand performs control operations such as channel selection and volume adjustment.
Web search
The functions web search must provide are as follows: posture recognition is started by a hand gesture; after it starts, the motions of both hands are captured; in this embodiment the left hand moves the cursor and the right hand performs confirm and return operations. An input method is likewise started by a hand gesture, with the left hand moving the cursor and the right hand clicking.
Video on demand
The control scheme for video on demand is the same as for IPTV: a predefined hand gesture determines whether posture-recognition control is entered. After posture recognition starts, the motions of both hands are captured, and together the hands perform control operations such as selecting a video and adjusting the volume.
Internet news
The functions required for Internet news are cursor movement and page turning. Posture recognition is started by a hand gesture; after it starts, the motions of both hands are captured; the left hand moves the cursor, and the right hand performs confirm, return, page-forward and page-backward operations.
Games
The games implemented here are all motion-sensing games; after a game is entered, the Kinect sensor controls it directly.
Each application scene uses hand-gesture recognition and posture recognition. Each function is elaborated in detail below.
Posture recognition
The depth sensor of the Kinect is in fact an infrared projector and receiver: the projector continuously emits structured infrared light, whose pattern differs in intensity with the distance of the surface it strikes, and from this the user's distance is determined. After obtaining the depth information, the Kinect tracks the human silhouette at that depth, identifies the hands, legs and head, then scans the user from top to bottom to perform skeletal tracking, mapping the real person onto a virtual avatar.
In the present invention, only the elbow point and the palm point need be extracted; with the elbow point as center, a circle of a certain radius is drawn, forming the elbow circle.
In practice, posture control of the smart TV amounts to moving the cursor and performing the remote control's key operations. In the present invention, the left elbow circle and palm point move the cursor, while the right elbow circle and palm point control the key functions, achieving complete independence from the physical remote control.
Moving the cursor
The palm point can lie in three positions relative to the elbow circle: inside the circle, on the circle, or outside it. The system judges whether to move the cursor or trigger a key by whether the point is outside the circle. When the palm point is inside or on the elbow circle, no operation is performed. When it is outside, the direction from the center of the elbow circle to the palm point controls the direction of cursor movement, giving full 360° movement of the cursor.
To give the user a better movement experience, the invention applies a motion-vector principle that links the cursor's speed to the distance by which the palm point leaves the center of the elbow circle. If the distance from the palm point to the center of the elbow circle is m and the radius of the elbow circle is r, the distance the cursor moves is d = m - r. The larger d is, the faster the cursor moves; the smaller, the slower. The user can thus control the cursor's speed at will.
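The motion-vector rule above amounts to a small geometric computation. The following is a minimal sketch under the assumption of pixel coordinates; the function name and tuple interfaces are illustrative, not from the patent:

```python
import math

def cursor_step(palm, elbow, r):
    """Given palm and elbow-circle-center coordinates (x, y) in pixels
    and the elbow-circle radius r, return the (dx, dy) cursor step.
    Inside or on the circle nothing moves; outside, the step points from
    the circle center toward the palm and its length is d = m - r."""
    vx, vy = palm[0] - elbow[0], palm[1] - elbow[1]
    m = math.hypot(vx, vy)          # distance from palm to circle center
    if m <= r:                      # inside or on the elbow circle
        return (0.0, 0.0)
    d = m - r                       # speed grows with distance past the rim
    return (vx / m * d, vy / m * d)
```

Called once per tracking frame, this yields a cursor that accelerates smoothly the farther the palm is pushed beyond the circle and stops the moment the palm returns inside it.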
Fine-grained operations expose a shortcoming of the motion-vector scheme: when operating on a small icon on the screen, parking the cursor at the desired position is difficult. To solve this problem, the invention proposes a local-coordinate method. Within a small region, namely inside the elbow circle, the palm point's position is mapped to the corresponding position on the screen: the cursor's position within the icon region on the smart-TV screen corresponds, at the same proportion, to the palm point's position within the elbow circle. This resolves the difficulty of precise manipulation and yields a more convenient, user-friendly control.
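The local-coordinate method can be sketched as a proportional mapping from the elbow circle onto the icon rectangle. All names and the rectangle format (x, y, w, h) are assumptions for illustration:

```python
def map_local(palm, elbow, r, icon_rect):
    """Map the palm point's position inside the elbow circle onto an
    icon region (x, y, w, h) at the same proportion.  Returns None when
    the palm is outside the circle, where the coarse motion-vector mode
    would apply instead."""
    ox, oy = palm[0] - elbow[0], palm[1] - elbow[1]
    if ox * ox + oy * oy > r * r:
        return None
    x, y, w, h = icon_rect
    # normalize offsets from [-r, r] to [0, 1], then scale to the region
    return (x + (ox + r) / (2 * r) * w, y + (oy + r) / (2 * r) * h)
```

With a 50-pixel elbow circle and a 200x200 icon region, centering the palm on the elbow point places the cursor at the center of the region, and moving the palm to the circle's rim places it at the region's edge.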
Remote-control key functions:
As shown in Fig. 2, the right elbow circle is divided into four directions: up, down, left and right. When the palm point falls into one of these four regions outside the circle, it is mapped onto a commonly used remote-control button, realizing different operating functions in different modes. In the IPTV and video-on-demand scenes, the upper and lower regions correspond respectively to volume up and down, and the left and right regions to channel up and down. In the web-search and Internet-news scenes, the left and right regions correspond to confirm and return, while the upper and lower regions perform no key operation. When the palm point falls into the left region outside the circle, a press-and-hold of the key is performed; when the palm point returns inside the circle, the key is released, completing one click. The layout is illustrated in Fig. 2.
Concrete implementation: after the coordinates of the palm point and of the center of the elbow circle are obtained, the center of the elbow circle is taken as origin and a polar coordinate system is established. The angle from the center of the elbow circle to the palm point ranges from -180° to 180°; this angular range is divided into segments, each segment corresponding to one region. The angle from the center of the elbow circle to the palm point is computed in real time, the segment into which it falls is determined, and thus the region in which the palm point lies is obtained.
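A minimal sketch of this polar-angle classification, assuming 90°-wide sectors centered on the four axes; the patent states only that the -180° to 180° range is divided into segments, so the sector boundaries and the y-axis orientation here are assumptions:

```python
import math

def palm_region(palm, elbow):
    """Classify the palm point into the four key regions of Fig. 2
    ("right", "up", "left", "down") using the polar angle, in degrees,
    of the vector from the elbow-circle center to the palm point.
    Assumes a mathematical y-up convention; with image coordinates
    (y pointing down) the up/down labels would be swapped."""
    angle = math.degrees(math.atan2(palm[1] - elbow[1], palm[0] - elbow[0]))
    if -45 <= angle < 45:
        return "right"
    if 45 <= angle < 135:
        return "up"
    if -135 <= angle < -45:
        return "down"
    return "left"   # remaining segments: 135..180 and -180..-135
```

The caller would first confirm the palm is outside the elbow circle, then map the returned region to the scene's key binding (volume, channel, confirm, return).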
Hand-gesture recognition
While the user is watching TV, posture recognition cannot remain active constantly; the accidental operations this would cause would severely affect the viewing experience. The invention therefore uses hand-gesture recognition with a predefined gesture to tell the smart TV whether to start posture-recognition control. The gestures are predefined in the system.
Because the Kinect sensor does not provide finger joints, hand-shape information cannot be obtained directly, and image processing and recognition are needed to extract and identify the gesture. To solve this, the invention exploits the Kinect's dedicated depth sensor to obtain depth images, which makes accurate gesture recognition possible even in complete darkness.
First the depth camera of the Kinect sensor obtains a depth image, and the position of the hand is found from the palm coordinates. To crop a suitable palm region, we measure the palm and compute the size of the palm region. The cropped image is binarized to obtain a clean gesture map. To improve the real-time extraction of the gesture, a convexity-defect detection method is adopted: edge detection is performed on the binarized image, the edges are extracted and smoothed, and a polygonal-approximation algorithm computes the convex hull of the whole palm and its convexity-defect points. The gesture is recognized by judging the pattern of convex and concave features, which improves the speed of image recognition.
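In practice this pipeline would typically use OpenCV (e.g. `cv2.findContours`, `cv2.convexHull`, `cv2.convexityDefects`). As a dependency-free illustration, the crop-and-binarize step alone can be sketched as follows; the window size and the near/far depth band are assumed parameters, whereas the patent derives the crop size from a measured palm:

```python
def crop_and_binarize(depth, palm_xy, half, near, far):
    """From a depth image (list of rows of millimetre values), crop a
    window of up to (2*half) x (2*half) pixels around the palm
    coordinates and binarize it: 1 where the depth lies in [near, far]
    (assumed to bracket the hand), 0 elsewhere."""
    px, py = palm_xy
    h, w = len(depth), len(depth[0])
    out = []
    for y in range(max(0, py - half), min(h, py + half)):
        row = [1 if near <= depth[y][x] <= far else 0
               for x in range(max(0, px - half), min(w, px + half))]
        out.append(row)
    return out
```

The resulting binary mask is what the subsequent edge-detection and convex-hull stages would consume.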
The technical scheme of the present invention has been described in detail above with reference to the accompanying drawings. All operations of the smart TV are controlled with the user's voice and two hands, freeing the user from dependence on a physical controller. It is a brand-new way of controlling a smart TV.
Claims (6)
1. A man-machine interaction method for controlling a smart TV based on human skeletal tracking, characterized by comprising the following steps:
Step 1: the smart TV is connected to a Kinect sensor, and the hand gesture for entering posture-recognition control is defined in the TV's built-in system, with entering and not entering posture-recognition control defined as different hand gestures;
Step 2: from actually measured human palm dimensions, the size of the palm region is computed; after the operator makes a gesture, the depth sensor in the Kinect sensor obtains a depth image, from which a real-time palm image is cropped according to the computed palm-region size and the palm coordinates; the real-time palm image is binarized to obtain a real-time gesture map;
Step 3: the real-time gesture map is compared with the defined gestures; if it matches the gesture for entering posture-recognition control, the method proceeds to the next step; otherwise the Kinect sensor continues waiting to capture images;
Step 4: after posture-recognition control is entered, skeletal tracking is performed with the Kinect sensor, mapping the real person onto a virtual avatar; the operator moves the cursor with the motion of one hand and performs function operations with the motion of the other hand, wherein:
moving the cursor with the motion of one hand comprises the following steps:
the elbow point and palm point of the avatar are extracted; with the elbow point as center, a circle of preset radius is drawn, forming an elbow circle; when the palm point lies inside or on the elbow circle, no operation is performed; when the palm point lies outside the elbow circle, the direction from the center of the elbow circle to the palm point controls the direction of cursor movement;
performing function operations with the motion of the other hand comprises the following steps:
the elbow point and palm point of the avatar are extracted; with the elbow point as center, a circle of preset radius is drawn, forming an elbow circle; when the palm point lies inside or on the elbow circle, no operation is performed; when the palm point lies outside the elbow circle, the region outside the elbow circle is divided into sub-regions, each corresponding to a different function operation, and the sub-region in which the palm point lies determines which function operation is performed.
2. The man-machine interaction method for controlling a smart TV based on human skeletal tracking as claimed in claim 1, characterized in that, where the smart TV has multiple application scenes, Step 1 further includes defining, in the TV's built-in system, the voice commands corresponding to the different application scenes;
and between Step 1 and Step 2 the method further includes:
receiving through the Kinect sensor a real-time voice command issued by the operator; the system parses the real-time voice command, compares it with the defined voice commands, and enters the application scene corresponding to the matched command.
3. The man-machine interaction method for controlling a smart TV based on human skeletal tracking as claimed in claim 1, characterized in that, in Step 4, controlling the direction of cursor movement according to the direction from the center of the elbow circle to the palm point comprises the following steps:
obtaining the distance m from the palm point to the center of the elbow circle; with elbow-circle radius r, the distance the cursor moves is d = m - r; the larger d is, the faster the cursor moves, and the smaller d is, the slower.
4. The man-machine interaction method for controlling a smart TV based on human skeletal tracking as claimed in claim 1, characterized in that, in Step 4, the step of moving the cursor to a small icon according to the direction from the center of the elbow circle to the palm point includes:
mapping the position of the palm point inside the elbow circle to the corresponding position on the smart TV, so that the cursor's position within the small-icon region on the smart TV corresponds, at the same proportion, to the palm point's position within the elbow circle.
5. The man-machine interaction method for controlling a smart TV based on human skeletal tracking as claimed in claim 1, characterized in that, in Step 4, the method of judging which sub-region the palm point lies in comprises the following steps:
after the coordinates of the palm point and of the center of the elbow circle are obtained, the center of the elbow circle is taken as origin and a polar coordinate system is established; the angle from the center of the elbow circle to the palm point ranges from -180° to 180°; this angular range is divided into segments, each segment corresponding to one sub-region; the angle from the center of the elbow circle to the palm point is computed in real time, the segment into which it falls is determined, and thus the sub-region in which the palm point lies is obtained.
6. The man-machine interaction method for controlling a smart TV based on human skeletal tracking as claimed in claim 1, characterized in that, in Step 2, after the real-time palm image is binarized, edge detection is performed on the binarized image, the edges are extracted and smoothed, a polygonal-approximation algorithm is then used to compute the convex hull of the whole palm and its convexity-defect points, and the real-time gesture map is recognized by judging the pattern of convex and concave features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610017797.5A CN105681859A (en) | 2016-01-12 | 2016-01-12 | Man-machine interaction method for controlling smart TV based on human skeletal tracking |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105681859A true CN105681859A (en) | 2016-06-15 |
Family
ID=56300120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610017797.5A Pending CN105681859A (en) | 2016-01-12 | 2016-01-12 | Man-machine interaction method for controlling smart TV based on human skeletal tracking |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105681859A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101710948A (en) * | 2009-09-01 | 2010-05-19 | 俞吉 | Gesture motion remote control device |
CN102685581A (en) * | 2012-05-24 | 2012-09-19 | 尹国鑫 | Multi-hand control system for intelligent television |
CN102929547A (en) * | 2012-10-22 | 2013-02-13 | 四川长虹电器股份有限公司 | Intelligent terminal contactless interaction method |
CN103472916A (en) * | 2013-09-06 | 2013-12-25 | 东华大学 | Man-machine interaction method based on human body gesture recognition |
WO2014169566A1 (en) * | 2013-04-15 | 2014-10-23 | 中兴通讯股份有限公司 | Gesture control method, apparatus and system |
CN104656877A (en) * | 2013-11-18 | 2015-05-27 | 李君� | Human-machine interaction method based on gesture and speech recognition control as well as apparatus and application of human-machine interaction method |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106293083A (en) * | 2016-08-07 | 2017-01-04 | 苏州苍龙电子科技有限公司 | A kind of large-screen interactive system and exchange method thereof |
CN108096833A (en) * | 2017-12-20 | 2018-06-01 | 北京奇虎科技有限公司 | Somatic sensation television game control method and device based on cascade neural network, computing device |
CN108096833B (en) * | 2017-12-20 | 2021-10-01 | 北京奇虎科技有限公司 | Motion sensing game control method and device based on cascade neural network and computing equipment |
CN109143875A (en) * | 2018-06-29 | 2019-01-04 | 广州市得腾技术服务有限责任公司 | A kind of gesture control smart home method and its system |
CN109143875B (en) * | 2018-06-29 | 2021-06-15 | 广州市得腾技术服务有限责任公司 | Gesture control smart home method and system |
CN110823515A (en) * | 2018-08-14 | 2020-02-21 | 宁波舜宇光电信息有限公司 | Structured light projection module multi-station detection device and detection method thereof |
CN113190109A (en) * | 2021-03-30 | 2021-07-30 | 青岛小鸟看看科技有限公司 | Input control method and device of head-mounted display equipment and head-mounted display equipment |
CN113625878A (en) * | 2021-08-16 | 2021-11-09 | 百度在线网络技术(北京)有限公司 | Gesture information processing method, device, equipment, storage medium and program product |
CN113625878B (en) * | 2021-08-16 | 2024-03-26 | 百度在线网络技术(北京)有限公司 | Gesture information processing method, device, equipment, storage medium and program product |
CN116328276A (en) * | 2021-12-22 | 2023-06-27 | 成都拟合未来科技有限公司 | Gesture interaction method, system, device and medium based on body building device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105681859A (en) | Man-machine interaction method for controlling smart TV based on human skeletal tracking | |
JP6721713B2 (en) | OPTIMAL CONTROL METHOD BASED ON OPERATION-VOICE MULTI-MODE INSTRUCTION AND ELECTRONIC DEVICE APPLYING THE SAME | |
US10564799B2 (en) | Dynamic user interactions for display control and identifying dominant gestures | |
US9658695B2 (en) | Systems and methods for alternative control of touch-based devices | |
CN108132744B (en) | Method and equipment for remotely controlling intelligent equipment | |
CN103139627A (en) | Intelligent television and gesture control method thereof | |
WO2018000519A1 (en) | Projection-based interaction control method and system for user interaction icon | |
Jeong et al. | Single-camera dedicated television control system using gesture drawing | |
JP2013533541A (en) | Select character | |
CN108616712A (en) | A kind of interface operation method, device, equipment and storage medium based on camera | |
CN103778549A (en) | Mobile application popularizing system and method | |
WO2022017421A1 (en) | Interaction method, display device, emission device, interaction system, and storage medium | |
CN105094344B (en) | Fixed terminal control method and device | |
CN113191184A (en) | Real-time video processing method and device, electronic equipment and storage medium | |
US9880733B2 (en) | Multi-touch remote control method | |
CN102685581B (en) | Multi-hand control system for intelligent television | |
CN105446468A (en) | Manipulation mode switching method and device | |
CN106507201A (en) | A kind of video playing control method and device | |
KR20160063075A (en) | Apparatus and method for recognizing a motion in spatial interactions | |
Goto et al. | Development of an Information Projection Interface Using a Projector–Camera System | |
Lin et al. | Projection-based user interface for smart home environments | |
JP5396332B2 (en) | Information input device, method and program using gesture | |
CN205353937U (en) | Laser response interactive installation | |
CN111093030B (en) | Equipment control method and electronic equipment | |
JP5449074B2 (en) | Information input device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | | Application publication date: 20160615