CN104461524A - Song requesting method based on Kinect - Google Patents

Song requesting method based on Kinect

Info

Publication number
CN104461524A
CN104461524A (Application CN201410705518.5A)
Authority
CN
China
Prior art keywords
kinect
song
user
palm
computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410705518.5A
Other languages
Chinese (zh)
Inventor
关沫
梁梦雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang University of Technology
Original Assignee
Shenyang University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang University of Technology filed Critical Shenyang University of Technology
Priority to CN201410705518.5A
Publication of CN104461524A
Legal status: Pending

Abstract

The invention provides a song requesting method based on Kinect. According to the method, non-contact song requesting is achieved through Kinect: the speech control and skeletal tracking functions of Kinect are used to implement intelligent song requesting, and a user can request a song at a distance from the equipment, request a song through a detachable notebook computer, or request a song by using the detachable notebook computer's screen as a tablet. Song requesting thus becomes more intelligent and more convenient.

Description

Song requesting method based on Kinect
Technical field: the invention provides a song requesting method based on Kinect. It belongs to methods for the intelligent control of song requesting software, and specifically realizes the control of song requesting in combination with non-contact operation.
Background technology: in order to meet people's growing entertainment requirements, the degree of intelligence of entertainment equipment has been continuously improving. Song requesting systems were originally an indispensable part of KTV entertainment equipment; today they are no longer exclusive to public entertainment venues, and more and more recreational facilities once found only in public places are being brought into family life. Song requesting systems have gone through round after round of updates, from the early laser-disc (LD) players to computer-based song requesting and today's intelligent VOD song requesting. However, song requesting is still mostly done by clicking in front of a computer or with a remote control; the process is tedious, not intelligent, and not convenient, and the result is far from ideal.
Summary of the invention:
Goal of the invention: the invention provides a song requesting method based on Kinect, whose purpose is to solve the problem that existing song requesting methods give unsatisfactory results.
Technical scheme: the present invention is achieved through the following technical solution:
A song requesting method based on Kinect, characterized in that: the method is realized by a display screen, a Kinect device and a computer, where the computer is a detachable notebook computer or an ordinary computer. The Kinect device captures voice information and gesture instructions to complete non-contact song requesting: the image of the user is obtained by the Kinect camera, the voice information of the user is obtained through the speech recognition function of Kinect, the image and voice information of the user are sent over a USB data line to the computer, which has the information processing function, and the received image and voice information are finally processed to obtain the corresponding gesture control instructions and voice control instructions that realize the song requesting operation. The computer can also be used to request songs manually in the conventional manner; when a detachable notebook computer is chosen, its display screen can be detached and used as a tablet, which users can pass to one another in their seats to request songs.
The method comprises the following steps:
1) Connect the above hardware devices, connect the Kinect device to the computer system, and, after the equipment is started, complete the recognition and tracking of the user's gestures and the speech recognition;
2) The non-contact song requesting part of the invention is mainly realized by gesture recognition and speech recognition. In the gesture recognition part, according to the user image obtained by the Kinect camera, the user's gesture instructions are processed by the information processing device and host computer, and the processed information is reflected on the display screen. In the speech recognition part, the voice information sent by a single designated user is first captured, the obtained voice information is then processed in the computer's information processing device, and the voice instruction sent by the user is finally reflected on the display screen;
3) The invention also includes the traditional manual song requesting mode: songs can be requested directly through the computer keyboard, or the touch screen portion of the detachable notebook computer can be detached and used directly to request songs.
In step 2), the gesture recognition part collects a depth information image of the user's palm through the Kinect camera, then extracts the palm portion and removes the other, useless depth information, so as to effectively locate the palm center and track the palm. Taking the palm center as the center of a circle, a maximum inscribed circle of a certain radius is expanded to obtain the palm area; because the depth coordinates within the palm are all identical, the point coordinates of the palm can be indicated by plane coordinates. The distance between two points Q1(x1, y1) and Q2(x2, y2) is computed as d(Q1, Q2) = √((x2 − x1)² + (y2 − y1)²);
In step 2), the obtained image is filtered, recognized and judged against the target trajectory, so as to quickly determine the position of the palm center. In the formula Cr = (Rq, Pq), Cr represents the circle containing the palm area, Rq represents the circle center, i.e. the palm center, and Pq is the radius of the circle. The corresponding gesture recognition operation is completed by tracking the palm.
Stretching out the palm in the frontal plane and quickly waving it several times left and right at the same horizontal height indicates starting the Kinect device and preparing to enter the song requesting system; stretching out the palm in the frontal plane at the same horizontal height and closing it into a fist indicates shutdown; stretching out the palm in the frontal plane at the same horizontal height and translating it a certain distance from left to right indicates turning to the previous page, and likewise from right to left indicates turning to the next page; alternatively, a parallel movement from top to bottom can indicate the next page and from bottom to top the previous page. The series of related gestures can all be set according to everyday habits. In addition, the user must operate within the visual range of the Kinect device: the horizontal viewing angle is less than 57 degrees, the vertical viewing angle is less than 43 degrees, and the sensing depth range is between 1.2 meters and 3.5 meters, so the user must remain within this specified range during operation in order for complete information to be captured.
In step 2), the speech recognition part obtains the audio data stream through the Kinect microphone array. First, feature extraction is performed on the user's voice instructions; in order to be able to anticipate these audio instructions, an audio database must also be established for them, and the voice information is sampled to generate the corresponding feature vectors. Then the extracted voice instruction is matched against the existing speech patterns, and the result with the highest matching degree is taken as the final result. Finally, the matched result is converted into the specified instruction and fed back on the display screen. The Kinect microphone array shields environmental noise through audio enhancement algorithms, so that even in a very large space, or when the user is at a considerable distance from the microphone, the voice can still be recognized well. The array technology adopted by the Kinect device includes effective noise cancellation and echo suppression algorithms, and at the same time uses beam forming: the sound source position is determined from the response time of each individual device, and the influence of ambient noise is avoided as far as possible.
In step 3), a computer of the detachable notebook class can have its touch screen portion detached at any time according to personal preference; the detached touch screen is equivalent to an ordinary tablet and can serve as a manual touch-screen song requesting device in a KTV song requesting environment, allowing songs to be requested more freely. The user does not have to walk to the manual song requesting screen and queue for every request, but can complete the process while sitting on a sofa or bed at home, or request songs at a certain distance from the equipment.
In order to always track the commands sent by a single designated user in each non-contact song requesting situation, when multiple users are present within the Kinect visual range, this method distinguishes in the crowd the user who is performing the song requesting operation. The Kinect SDK has the function of analyzing depth data and detecting human bodies or user profiles; it can identify at most 6 users at a time. The SDK numbers each tracked user as an index, the user index is stored in the first three bits of the depth data, and the value of the user index ranges from 0 to 6. The system can therefore set the user who first sends a voice or gesture instruction as the operator, and before the song requesting flow ends it only tracks that user's voice and gesture instructions, until that user confirms that the complete operation instruction has been sent.
This method obtains the depth image and skeleton information of the human body through the corresponding API (application programming interface) in the Kinect SDK development kit; these are not subject to external influences such as illumination and environmental change, and the depth image of the human body and the corresponding skeleton information can be captured even when illumination is very low during depth image acquisition.
Advantages and effects: the invention provides a song requesting method based on Kinect, which relies on Kinect to achieve non-contact song requesting. The method adopts the voice control function and skeleton tracking function of Kinect to realize intelligent song requesting, and the user can carry out song requesting while keeping a certain distance from the equipment. Songs can also be requested through a detachable notebook computer, or by using the detachable notebook as a tablet. Song requesting thus becomes more intelligent, portable and free.
On the basis of the Kinect hardware device, the invention realizes a more intelligent and more user-friendly song requesting method. This method can effectively reduce the time needed for a song requesting operation and save the user's valuable time. Non-contact operation also brings a brand-new experience to the user: the user can complete the intended operation without even being in close contact with the equipment, making the operation more intelligent, humanized and personalized. The emerging Kinect device has a broad application space; it lets users realize that exchanges with machines can be achieved through conversation and expressive gestures, shortening the distance between man and machine. Adopting a detachable notebook computer makes song requesting more convenient for the user, and the fusion of traditional and modern technology makes the birth of this technique all the more noteworthy in today's era of rapidly advancing science and technology.
Brief description of the drawings:
Fig. 1 is a schematic diagram of the Kinect device of the present invention;
Fig. 2 is a schematic structural diagram of the equipment used in the song requesting method based on Kinect;
Fig. 3 is a schematic diagram of the operation of the speech recognition part of the song requesting method based on Kinect.
Embodiment: the present invention is described further below in conjunction with the accompanying drawings:
As shown in Figure 2, the invention provides a song requesting method based on Kinect. The method is realized by a display screen 1, a Kinect device 2 and a computer 3, where the computer is a detachable notebook computer or an ordinary computer. The Kinect device captures voice information and gesture instructions to complete non-contact song requesting: the image of the user is obtained by the Kinect camera, the voice information of the user is obtained through the speech recognition function of Kinect, the image and voice information of the user are sent over a USB data line to the computer, which has the information processing function, and the received image and voice information are finally processed into the corresponding instructions to realize the song requesting operation. The computer can also be used to request songs manually in the conventional manner; when a detachable notebook computer is chosen, its display screen can be detached and used as a tablet, which users can pass to one another in their seats to request songs. If applied in a KTV song requesting environment, this can effectively save costs.
The method comprises the following steps:
1) Connect the above hardware devices, connect the Kinect device to the computer system, and, after the equipment is started, complete the recognition and tracking of the user's gestures and the speech recognition;
2) The non-contact song requesting part of the invention is mainly realized by gesture recognition and speech recognition. In the gesture recognition part, according to the user image obtained by the Kinect camera, the user's gesture instructions are processed by the information processing device and host computer, and the processed information is reflected on the display screen. In the speech recognition part, the voice information sent by a single designated user is first captured, the obtained voice information is then processed in the computer's information processing device, and the voice instruction sent by the user is finally reflected on the display screen;
3) The invention also includes the traditional manual song requesting mode: songs can be requested directly through the computer keyboard, or the touch screen portion of the detachable notebook computer can be detached and used directly to request songs.
In step 2), the gesture recognition part collects a depth information image of the user's palm through the Kinect camera, then extracts the palm portion and removes the other, useless depth information, so as to effectively locate the palm center and track the palm; this not only effectively reduces the interference that external objects bring to gesture recognition, but also improves computational efficiency. Taking the palm center as the center of a circle, a maximum inscribed circle of a certain radius is expanded to obtain the palm area; because the depth coordinates within the palm are all identical, the point coordinates of the palm can be indicated by plane coordinates. The distance between two points Q1(x1, y1) and Q2(x2, y2) is computed as d(Q1, Q2) = √((x2 − x1)² + (y2 − y1)²);
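A minimal sketch of the two geometric steps just described: the plane distance formula, and locating the palm center as the center of the maximum inscribed circle of a binary palm mask. The function names and the brute-force search are illustrative assumptions, not part of the patent; the patent only specifies the formula and the inscribed-circle idea.

```python
import numpy as np

def point_distance(q1, q2):
    """Euclidean distance between two plane points, as in the formula
    d(Q1, Q2) = sqrt((x2 - x1)^2 + (y2 - y1)^2)."""
    (x1, y1), (x2, y2) = q1, q2
    return np.hypot(x2 - x1, y2 - y1)

def max_inscribed_circle(palm_mask):
    """Approximate the palm center and the maximum inscribed circle of a
    binary palm mask (1 = palm pixel, 0 = background): the center is the
    palm pixel farthest from any background pixel, and that distance is
    the radius."""
    ys, xs = np.nonzero(palm_mask)
    bg = np.column_stack(np.nonzero(palm_mask == 0))   # (row, col) background points
    best_center, best_radius = None, -1.0
    for x, y in zip(xs, ys):                           # candidate palm pixels
        r = np.min(np.hypot(bg[:, 1] - x, bg[:, 0] - y))
        if r > best_radius:
            best_center, best_radius = (x, y), r
    return best_center, best_radius

# Toy example: a 7x7 mask with a square "palm" in the middle
mask = np.zeros((7, 7), dtype=np.uint8)
mask[2:5, 2:5] = 1
center, radius = max_inscribed_circle(mask)
print(center, radius, point_distance((0, 0), (3, 4)))   # the distance prints 5.0
```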
In step 2), filtering methods such as Kalman filtering can be used to recognize the obtained image and judge the target trajectory, so as to quickly determine the position of the palm center. In Cr = (Rq, Pq), Cr represents the circle containing the palm area, Rq represents the circle center, i.e. the palm center, and Pq is the radius of the circle. The corresponding gesture recognition operation is completed by tracking the palm, for example: stretching out the palm in the frontal plane and quickly waving it several times left and right at the same horizontal height indicates starting the Kinect device and preparing to enter the song requesting system; stretching out the palm in the frontal plane at the same horizontal height and closing it into a fist indicates shutdown; stretching out the palm in the frontal plane at the same horizontal height and translating it a certain distance from left to right indicates turning to the previous page, and likewise from right to left indicates turning to the next page; alternatively, a parallel movement from top to bottom can indicate the next page and from bottom to top the previous page. The series of related gestures can all be set according to everyday habits. In addition, it should be noted that the user must operate within the visual range of the Kinect device. The Kinect device is a motion sensing system with an infrared positioning function and a 3D motion sensing camera. The visual range of Kinect is a horizontal viewing angle of less than 57 degrees, a vertical viewing angle of less than 43 degrees, and a sensing depth range between 1.2 meters and 3.5 meters, so the user must remain within this specified range during operation in order for complete information to be captured.
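A minimal sketch of the Kalman-filtered palm tracking and the horizontal-swipe gestures mentioned above, assuming a constant-velocity motion model and pixel-space palm-center measurements; the thresholds, the 30 fps frame rate, and the class and function names are assumptions for illustration only.

```python
import numpy as np

# Constant-velocity Kalman filter over the palm-center position (x, y).
# State: [x, y, vx, vy]; measurement: the palm center located in each depth frame.
class PalmTracker:
    def __init__(self, dt=1 / 30):                        # assume ~30 fps depth stream
        self.x = np.zeros(4)                              # state estimate
        self.P = np.eye(4) * 1e3                          # state covariance
        self.F = np.array([[1, 0, dt, 0],                 # motion model
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)    # only position is measured
        self.Q = np.eye(4) * 1e-2                         # process noise
        self.R = np.eye(2) * 4.0                          # measurement noise (pixels^2)

    def update(self, measured_center):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the measured palm center
        z = np.asarray(measured_center, dtype=float)
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                 # smoothed palm center

def classify_swipe(track, min_dx=150, max_dy=60):
    """Very simple swipe detector over a list of smoothed palm centers:
    a mostly horizontal translation maps to previous/next page."""
    dx = track[-1][0] - track[0][0]
    dy = abs(track[-1][1] - track[0][1])
    if dy < max_dy and dx > min_dx:
        return "previous_page"    # left-to-right
    if dy < max_dy and dx < -min_dx:
        return "next_page"        # right-to-left
    return None
```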
In step 2), the speech recognition part obtains the audio data stream through the Kinect microphone array. First, feature extraction is performed on the user's voice instructions; in order to be able to anticipate these audio instructions, an audio database must also be established for them, and the voice information is sampled to generate the corresponding feature vectors. Then the extracted voice instruction is matched against the existing speech patterns, and the result with the highest matching degree is taken as the final result. Finally, the matched result is converted into the specified instruction and fed back on the display screen. The Kinect microphone array shields environmental noise through audio enhancement algorithms, so that even in a very large space, or when the user is at a considerable distance from the microphone, the voice can still be recognized well. The array technology adopted by the Kinect device includes effective noise cancellation and echo suppression algorithms, and at the same time uses beam forming: the sound source position is determined from the response time of each individual device, and the influence of ambient noise is avoided as far as possible.
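A minimal sketch of the sample-extract-match pipeline above. The toy spectral feature, the cosine-similarity matching, and the command names are illustrative assumptions; a real system would use proper speech features (e.g. MFCCs) and a database built from the anticipated voice instructions.

```python
import numpy as np

def extract_features(waveform, n_bins=32):
    """Toy feature extraction: magnitude spectrum averaged into n_bins and
    normalized to unit length (stand-in for real speech features)."""
    spectrum = np.abs(np.fft.rfft(waveform))
    bins = np.array_split(spectrum, n_bins)
    feat = np.array([b.mean() for b in bins])
    norm = np.linalg.norm(feat)
    return feat / norm if norm > 0 else feat

def best_match(features, database):
    """Return the command whose stored pattern matches the input best
    (highest similarity), as in the matching step described above."""
    scores = {cmd: float(np.dot(features, ref)) for cmd, ref in database.items()}
    return max(scores, key=scores.get), scores

# Example with synthetic signals standing in for recorded instructions
rng = np.random.default_rng(0)
db = {"play": extract_features(rng.normal(size=4096)),
      "next_page": extract_features(rng.normal(size=4096))}
query = extract_features(rng.normal(size=4096))
command, all_scores = best_match(query, db)
print(command, all_scores)
```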
In step 3), a computer of the detachable notebook class can have its touch screen portion detached at any time according to personal preference; the detached touch screen is equivalent to an ordinary tablet and can also serve as a manual touch-screen song requesting device in a KTV song requesting environment, allowing songs to be requested more freely. The user can queue at the manual song requesting screen for each request, can sit at home on a sofa or bed, or can request songs at a certain distance from the equipment.
In order to always track the commands sent by a single designated user in each non-contact song requesting situation, when multiple users are present within the Kinect visual range, this method distinguishes in the crowd the user who is performing the song requesting operation. The Kinect SDK has the function of analyzing depth data and detecting human bodies or user profiles; it can identify at most 6 users at a time. The SDK numbers each tracked user as an index, the user index is stored in the first three bits of the depth data, and the value of the user index ranges from 0 to 6. The system can therefore set the user who first sends a voice or gesture instruction as the operator, and before the song requesting flow ends it only tracks that user's voice and gesture instructions, until that user confirms that the complete operation instruction has been sent.
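A minimal sketch of reading the per-pixel user index out of the depth data. It assumes the Kinect v1 SDK layout in which each 16-bit depth pixel packs a 3-bit player index (0 = no tracked user, 1-6 = tracked users) in its low bits with the depth value above it; the helper names and the "lowest non-zero index" operator rule are illustrative assumptions.

```python
import numpy as np

PLAYER_INDEX_BITS = 3
PLAYER_INDEX_MASK = (1 << PLAYER_INDEX_BITS) - 1      # 0b111

def split_depth_frame(raw_frame):
    """Separate a raw depth frame into (depth_mm, player_index) arrays."""
    raw = np.asarray(raw_frame, dtype=np.uint16)
    player_index = raw & PLAYER_INDEX_MASK             # 3-bit user index, 0..6
    depth_mm = raw >> PLAYER_INDEX_BITS                # remaining bits: depth value
    return depth_mm, player_index

def first_tracked_user(player_index):
    """Pick an operator: the lowest non-zero user index present in the frame,
    standing in for 'the user who first issued an instruction'."""
    users = np.unique(player_index)
    users = users[users != 0]
    return int(users.min()) if users.size else None

# Toy frame: two pixels belong to user 2, the first pixel to nobody
frame = np.array([(800 << 3) | 0, (812 << 3) | 2, (815 << 3) | 2], dtype=np.uint16)
depth, idx = split_depth_frame(frame)
print(depth, idx, first_tracked_user(idx))
```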
Considering the surrounding environment during song requesting, for example environments with poor lighting such as a KTV private room where the light is dim, the present invention obtains the depth image and skeleton information of the human body through the corresponding API (application programming interface) in the Kinect SDK development kit; these are not subject to external influences such as illumination and environmental change, and the depth image of the human body and the corresponding skeleton information can be captured even when illumination is very low during depth image acquisition. There is therefore no need to worry that the position of the user's hand would be difficult to capture in a dim environment.
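A minimal control-flow sketch tying the pieces above together: lock onto one operator, track that user's hand from skeleton frames, and turn swipes into page commands. KinectSensor is a hypothetical stand-in for whatever SDK binding supplies skeleton frames (only the one method used here is assumed), and the sketch reuses the PalmTracker and classify_swipe functions from the earlier sketches.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SkeletonFrame:
    user_index: int
    right_hand: Tuple[float, float]        # hand joint projected to screen coordinates

class KinectSensor:                        # hypothetical binding, not a real API
    def next_skeleton(self) -> Optional[SkeletonFrame]:
        raise NotImplementedError("wire this to the actual Kinect SDK binding")

def song_request_loop(sensor: KinectSensor, tracker, classify_swipe, window=15):
    """Track one designated user's hand and yield page commands from swipes."""
    operator, track = None, []
    while True:
        frame = sensor.next_skeleton()
        if frame is None:
            continue
        if operator is None:               # lock onto the first user who acts
            operator = frame.user_index
        if frame.user_index != operator:   # ignore everyone else until done
            continue
        track.append(tuple(tracker.update(frame.right_hand)))
        if len(track) >= window:
            command = classify_swipe(track)
            track.clear()
            if command:
                yield command              # e.g. "previous_page" / "next_page"
```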
Describing further in conjunction with the drawings, a schematic diagram of the key Kinect device used in the present invention is shown in Figure 1. The Kinect is connected to the corresponding hardware device, the song-requesting machine touch screen; this hardware device is the decisive equipment for realizing voice/gesture song requesting operations. In appearance it has three "eyes", which from left to right are the infrared projector, the color camera and the infrared depth camera. The Kinect is relatively easy to disassemble, but its internal structure is complex, containing many sensor components and processing chips. Apart from voice instructions and motion sensing operation instructions, Kinect accepts no other form of user input; the key to the input system is the sensing system composed of the microphones and cameras.
Figure 2 shows a schematic structural diagram of the song requesting method based on Kinect; the figure depicts the user and the hardware equipment, where the Kinect hardware device is used for obtaining voice instructions and carrying out the gesture recognition task. The user only needs to be within the Kinect visual range, face the song-requesting machine and send the corresponding commands to complete the song requesting operation.
Fig. 3 is a schematic diagram of the operation of the speech recognition part of the song requesting method based on Kinect: the voice information sent by the user is first sampled, the corresponding feature values are then generated and pattern matching is performed against the system's own voice instruction database, and after the voice instruction is completed the result is fed back to the user.
The present invention is achieved in a combination of hardware and software, and it can be included in an article having a computer usable medium. This medium has, for example, computer readable program code means or logic therein to provide and use the capabilities of the present invention. These articles of manufacture can be sold as part of a computer system or separately. All the above variations are considered a part of the claimed invention.
The present invention solves the problems of existing song requesting methods, and this song requesting method can be used at home or applied in future KTV song requesting environments. The user can request songs from a seat with remote hand-held input, pass the touch screen of the detachable notebook around in the seats and use it like a tablet to request songs, or open the song requesting mode with gesture movements and voice instructions at some distance from the equipment, making modern life more intelligent.
In summary, the purpose of the present invention is to make the song requesting operation more intelligent and more convenient, bringing a completely new experience to people's life and entertainment and simplifying the operation.

Claims (9)

1. A song requesting method based on Kinect, characterized in that: the method is realized by a display screen, a Kinect device and a computer, where the computer is a detachable notebook computer or an ordinary computer; the Kinect device captures voice information and gesture instructions to complete non-contact song requesting: the image of the user is obtained by the Kinect camera, the voice information of the user is obtained through the speech recognition function of Kinect, the image and voice information of the user are sent over a USB data line to the computer, which has the information processing function, and the received image and voice information are finally processed to obtain the corresponding gesture control instructions and voice control instructions that realize the song requesting operation; the computer can also be used to request songs manually in the conventional manner, and when a detachable notebook computer is chosen, its display screen can be detached and used as a tablet, which users can pass to one another in their seats to request songs.
2. The song requesting method based on Kinect according to claim 1, characterized in that the method comprises the following steps:
1) Connect the above hardware devices, connect the Kinect device to the computer system, and, after the equipment is started, complete the recognition and tracking of the user's gestures and the speech recognition;
2) The non-contact song requesting part of the invention is mainly realized by gesture recognition and speech recognition. In the gesture recognition part, according to the user image obtained by the Kinect camera, the user's gesture instructions are processed by the information processing device and host computer, and the processed information is reflected on the display screen. In the speech recognition part, the voice information sent by a single designated user is first captured, the obtained voice information is then processed in the computer's information processing device, and the voice instruction sent by the user is finally reflected on the display screen;
3) The invention also includes the traditional manual song requesting mode: songs can be requested directly through the computer keyboard, or the touch screen portion of the detachable notebook computer can be detached and used directly to request songs.
3. The song requesting method based on Kinect according to claim 2, characterized in that:
In step 2), the gesture recognition part collects a depth information image of the user's palm through the Kinect camera, then extracts the palm portion and removes the other, useless depth information, so as to effectively locate the palm center and track the palm. Taking the palm center as the center of a circle, a maximum inscribed circle of a certain radius is expanded to obtain the palm area; because the depth coordinates within the palm are all identical, the point coordinates of the palm can be indicated by plane coordinates. The distance between two points Q1(x1, y1) and Q2(x2, y2) is computed as d(Q1, Q2) = √((x2 − x1)² + (y2 − y1)²).
4. The song requesting method based on Kinect according to claim 2, characterized in that: in step 2), the obtained image is filtered, recognized and judged against the target trajectory, so as to quickly determine the position of the palm center; in the formula Cr = (Rq, Pq), Cr represents the circle containing the palm area, Rq represents the circle center, i.e. the palm center, and Pq is the radius of the circle; the corresponding gesture recognition operation is completed by tracking the palm.
5. The song requesting method based on Kinect according to claim 4, characterized in that: stretching out the palm in the frontal plane and quickly waving it several times left and right at the same horizontal height indicates starting the Kinect device and preparing to enter the song requesting system; stretching out the palm in the frontal plane at the same horizontal height and closing it into a fist indicates shutdown; stretching out the palm in the frontal plane at the same horizontal height and translating it a certain distance from left to right indicates turning to the previous page, and likewise from right to left indicates turning to the next page; alternatively, a parallel movement from top to bottom can indicate the next page and from bottom to top the previous page; the series of related gestures can all be set according to everyday habits; in addition, the user must operate within the visual range of the Kinect device, which is a horizontal viewing angle of less than 57 degrees, a vertical viewing angle of less than 43 degrees, and a sensing depth range between 1.2 meters and 3.5 meters, so the user must remain within this specified range during operation in order for complete information to be captured.
6. The song requesting method based on Kinect according to claim 2, characterized in that: in step 2), the speech recognition part obtains the audio data stream through the Kinect microphone array; first, feature extraction is performed on the user's voice instructions, and in order to be able to anticipate these audio instructions, an audio database must also be established for them, the voice information being sampled to generate the corresponding feature vectors; then the extracted voice instruction is matched against the existing speech patterns, and the result with the highest matching degree is taken as the final result; finally, the matched result is converted into the specified instruction and fed back on the display screen; the Kinect microphone array shields environmental noise through audio enhancement algorithms, so that even in a very large space, or when the user is at a considerable distance from the microphone, the voice can still be recognized well; the array technology adopted by the Kinect device includes effective noise cancellation and echo suppression algorithms, and at the same time uses beam forming: the sound source position is determined from the response time of each individual device, and the influence of ambient noise is avoided as far as possible.
7. The song requesting method based on Kinect according to claim 2, characterized in that: in step 3), a computer of the detachable notebook class can have its touch screen portion detached at any time according to personal preference; the detached touch screen is equivalent to an ordinary tablet and can serve as a manual touch-screen song requesting device in a KTV song requesting environment, allowing songs to be requested more freely; the user does not have to walk to the manual song requesting screen and queue for every request, but can complete the process while sitting on a sofa or bed at home, or request songs at a certain distance from the equipment.
8. The song requesting method based on Kinect according to claim 2, characterized in that: in order to always track the commands sent by a single designated user in each non-contact song requesting situation, when multiple users are present within the Kinect visual range, the user performing the song requesting operation is distinguished in the crowd; the Kinect SDK has the function of analyzing depth data and detecting human bodies or user profiles, and it can identify at most 6 users at a time; the SDK numbers each tracked user as an index, the user index is stored in the first three bits of the depth data, and the value of the user index ranges from 0 to 6; the system can therefore set the user who first sends a voice or gesture instruction as the operator, and before the song requesting flow ends it only tracks that user's voice and gesture instructions, until that user confirms that the complete operation instruction has been sent.
9. The song requesting method based on Kinect according to claim 2, characterized in that:
This method obtains the depth image and skeleton information of the human body through the corresponding API (application programming interface) in the Kinect SDK development kit; these are not subject to external influences such as illumination and environmental change, and the depth image of the human body and the corresponding skeleton information can be captured even when illumination is very low during depth image acquisition.
CN201410705518.5A 2014-11-27 2014-11-27 Song requesting method based on Kinect Pending CN104461524A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410705518.5A CN104461524A (en) 2014-11-27 2014-11-27 Song requesting method based on Kinect

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410705518.5A CN104461524A (en) 2014-11-27 2014-11-27 Song requesting method based on Kinect

Publications (1)

Publication Number Publication Date
CN104461524A true CN104461524A (en) 2015-03-25

Family

ID=52907635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410705518.5A Pending CN104461524A (en) 2014-11-27 2014-11-27 Song requesting method based on Kinect

Country Status (1)

Country Link
CN (1) CN104461524A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105353959A (en) * 2015-10-30 2016-02-24 小米科技有限责任公司 Method and apparatus for controlling list to slide
CN108733200A (en) * 2017-04-18 2018-11-02 芦伟杰 A kind of AR screens motion sensing control power distribution method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120019643A1 (en) * 2010-07-26 2012-01-26 Atlas Advisory Partners, Llc Passive Demographic Measurement Apparatus
CN102769802A (en) * 2012-06-11 2012-11-07 西安交通大学 Man-machine interactive system and man-machine interactive method of smart television
CN103118227A (en) * 2012-11-16 2013-05-22 佳都新太科技股份有限公司 Method, device and system of pan tilt zoom (PTZ) control of video camera based on kinect
CN103268153A (en) * 2013-05-31 2013-08-28 南京大学 Human-computer interactive system and man-machine interactive method based on computer vision in demonstration environment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120019643A1 (en) * 2010-07-26 2012-01-26 Atlas Advisory Partners, Llc Passive Demographic Measurement Apparatus
CN102769802A (en) * 2012-06-11 2012-11-07 西安交通大学 Man-machine interactive system and man-machine interactive method of smart television
CN103118227A (en) * 2012-11-16 2013-05-22 佳都新太科技股份有限公司 Method, device and system of pan tilt zoom (PTZ) control of video camera based on kinect
CN103268153A (en) * 2013-05-31 2013-08-28 南京大学 Human-computer interactive system and man-machine interactive method based on computer vision in demonstration environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄康泉 et al., "Kinect在视频会议系统中的应用" ("Application of Kinect in video conference systems"), 《广西大学学报: 自然科学版》 (Journal of Guangxi University: Natural Science Edition) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105353959A (en) * 2015-10-30 2016-02-24 小米科技有限责任公司 Method and apparatus for controlling list to slide
CN108733200A (en) * 2017-04-18 2018-11-02 芦伟杰 A kind of AR screens motion sensing control power distribution method

Similar Documents

Publication Publication Date Title
US11914792B2 (en) Systems and methods of tracking moving hands and recognizing gestural interactions
US11132065B2 (en) Radar-enabled sensor fusion
JP6968154B2 (en) Control systems and control processing methods and equipment
US9921660B2 (en) Radar-based gesture recognition
US9658695B2 (en) Systems and methods for alternative control of touch-based devices
KR102162373B1 (en) Associating an object with a subject
US10782657B2 (en) Systems and methods of gestural interaction in a pervasive computing environment
US8823642B2 (en) Methods and systems for controlling devices using gestures and related 3D sensor
US20190317594A1 (en) System and method for detecting human gaze and gesture in unconstrained environments
WO2018000200A1 (en) Terminal for controlling electronic device and processing method therefor
CN103890696A (en) Authenticated gesture recognition
CN106354253A (en) Cursor control method and AR glasses and intelligent ring based on same
CN106468917B (en) A kind of long-range presentation exchange method and system of tangible live real-time video image
CN104871227B (en) Use the remote control of depth cameras
CN102789312A (en) User interaction system and method
CN108681399A (en) A kind of apparatus control method, device, control device and storage medium
CA2882005A1 (en) Input device, apparatus, input method, and recording medium
CN104461524A (en) Song requesting method based on Kinect
US11223729B2 (en) Information processing apparatus and non-transitory computer readable medium for instructing an object to perform a specific function
JP4053903B2 (en) Pointing method, apparatus, and program
TWI587175B (en) Dimensional pointing control and interaction system
Yeo et al. OmniSense: Exploring Novel Input Sensing and Interaction Techniques on Mobile Device with an Omni-Directional Camera
WO2023035177A1 (en) Target confirmation method and apparatus, and handheld system
CN109542654A (en) Vending machine and the method for determining the selected commodity of user

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150325