CN102375542A - Method for remotely controlling television by limbs and television remote control device - Google Patents

Method for remotely controlling television by limbs and television remote control device Download PDF

Info

Publication number
CN102375542A
CN102375542A (application CN201110332552.9A)
Authority
CN
China
Prior art keywords
user
limb action
control interface
limbs
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011103325529A
Other languages
Chinese (zh)
Other versions
CN102375542B (en)
Inventor
杨劼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Corp
Original Assignee
TCL Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Corp filed Critical TCL Corp
Priority to CN201110332552.9A priority Critical patent/CN102375542B/en
Publication of CN102375542A publication Critical patent/CN102375542A/en
Application granted granted Critical
Publication of CN102375542B publication Critical patent/CN102375542B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The invention is applicable to the technical field of human-computer interaction, and provides a method and device for remotely controlling a television by limbs. The method comprises the steps of: monitoring a limb action of a user in front of the television and, after a specific limb action is detected, recognizing the user and determining the user's identity; judging, according to the user's identity, whether a control interface preset by the user exists; if so, displaying the control interface; obtaining a current scene image and calculating a scene depth map; matting the user image out of the current scene image according to the scene depth map; superimposing the matted user image onto the control interface to generate a virtual control interface; and detecting the user's limb action according to the depth map, and sending a control instruction to the television according to the user's limb action and the virtual control interface. The invention effectively improves the accuracy and practicality of remotely controlling a television by limbs.

Description

Method for remotely controlling a television by limbs and television remote control device
Technical field
The invention belongs to the technical field of human-computer interaction, and in particular relates to a method for remotely controlling a television by limbs and a television remote control device.
Background art
With the development of television technology, the digital processing capability of television sets has become increasingly powerful, and television terminal products with strong computing capability have appeared, such as Internet televisions that can connect directly to the Internet and video game all-in-one machines. These televisions are no longer limited to watching traditional television programs: users can browse Internet information directly on the television, share audio and video content, play interactive games, and so on. The traditional push-button remote control can no longer satisfy the television's growing range of content choices and operation modes, and users are eager to replace the traditional remote control with an intuitive interaction mode such as gesture remote control instead of simple button-based input and control.
Existing gesture remote control mainly captures a specific gesture form of the user through a camera built into the television, compares the captured gesture form with gesture forms preset in the television and, when the comparison results match, converts the captured gesture form into a corresponding television control instruction to control the television. However, because existing gesture remote control obtains only two-dimensional image information, the accuracy of recognizing the position coordinates and form of the user's hand is not high; as a result, existing gesture remote control can only recognize a few fixed and simple gestures, and its practicality is limited.
Summary of the invention
The purpose of the embodiments of the invention is to provide a method for remotely controlling a television by limbs, aiming to solve the problems that existing gesture remote control has low gesture recognition accuracy and limited practicality.
The embodiments of the invention are achieved as follows: a method for remotely controlling a television by limbs, the method comprising the following steps:
A. monitoring a limb action of a user in front of the television; after a specific limb action is detected, recognizing the user and determining the user's identity;
B. judging, according to the user's identity, whether a control interface preset by the user exists;
C. when it is judged that a control interface preset by the user exists, displaying the control interface;
D. obtaining a current scene image and calculating a scene depth map;
E. matting the user image out of the current scene image according to the scene depth map;
F. superimposing the matted user image onto the control interface to generate a virtual control interface;
G. detecting the user's limb action according to the depth map, and sending a control instruction to the television according to the user's limb action and the virtual control interface.
Another purpose of the embodiments of the invention is to provide a television remote control device, the device comprising:
an identity determination unit, configured to monitor a limb action of a user in front of the television and, after a specific limb action is detected, recognize the user and determine the user's identity;
a judging unit, configured to judge, according to the user's identity, whether a control interface preset by the user exists;
a control interface display unit, configured to display the control interface preset by the user when the judgment result of the judging unit is yes;
a depth map calculation unit, configured to obtain a current scene image and calculate a scene depth map;
an image matting unit, configured to mat the user image out of the current scene image according to the scene depth map;
an image superposition unit, configured to superimpose the matted user image onto the control interface to generate a virtual control interface;
an instruction sending unit, configured to detect the user's limb action according to the depth map and send a control instruction to the television according to the user's limb action and the virtual control interface.
It can be seen from the above technical solution that the invention enables the user's hands and body parts to remotely control the television wirelessly without wearing any equipment. Moreover, because a depth map and a three-dimensional virtual control interface are used, the spatial coordinates of the user's limbs and the specific limb forms can be obtained accurately, which improves the accuracy and practicality of limb-based remote control and enhances the interactivity between the user and the television.
Description of drawings
Fig. 1 is an implementation flowchart of the method for remotely controlling a television by limbs provided by Embodiment 1 of the invention;
Fig. 2 is a diagram of the epipolar geometry constraint provided by Embodiment 1 of the invention;
Fig. 3 is an example of the depth map provided by Embodiment 1 of the invention;
Fig. 4a and Fig. 4b are an example of the current scene and an example of user matting provided by Embodiment 1 of the invention;
Fig. 5 is an example of the virtual control interface provided by Embodiment 1 of the invention;
Fig. 6 is an implementation flowchart of the method for remotely controlling a television by limbs provided by Embodiment 2 of the invention;
Fig. 7 is a schematic diagram of camera imaging provided by Embodiment 2 of the invention;
Fig. 8 is an example of the virtual background of the virtual control interface provided by Embodiment 2 of the invention;
Fig. 9 is a structural diagram of the television remote control device provided by Embodiment 3 of the invention;
Fig. 10 is a structural diagram of the television remote control device provided by Embodiment 4 of the invention.
Embodiments
In order to make the purpose, technical solution and advantages of the invention clearer, the invention is further described below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the invention, not to limit it.
In order to explain the technical solution of the invention, specific embodiments are described below.
Embodiment 1:
Fig. 1 shows the implementation flow of the method for remotely controlling a television by limbs provided by Embodiment 1 of the invention, which is detailed as follows:
In step S101, a limb action of a user in front of the television is monitored; after a specific limb action is detected, the user is recognized to determine the user's identity.
In the present embodiment, the television remote control device includes a binocular 3D camera (or an infrared depth camera comprising an infrared emitter and a receiver). The television remote control device monitors the limb action of the user in front of the television through the camera or the infrared depth camera; after a specific limb action of the user is detected (for example, the user pushes a palm or foot forward and withdraws it, or the user nods forward or tilts the head backward), face recognition or fingerprint recognition is performed on the user in front of the television to determine the user's identity. In the present embodiment, the user's limb is preferably a hand, and the following embodiments take the hand as the recognition target.
In this process, because the start limb action of the television remote control device is relatively simple, in order to improve efficiency and save the storage space of the television, only the user's limb form needs to be obtained and the spatial coordinates of the user's limbs do not. When the limb form of the user is judged to be identical to the preset start limb action of the television remote control device, the television remote control device is started.
In the present embodiment, after the television remote control device is started, the identity of the current user can be determined by performing face recognition on the face closest to the limb action, by requiring the user to perform fingerprint recognition, and so on. Face recognition and fingerprint recognition can be implemented with existing techniques and are not described further here.
In step S102, whether a control interface preset by the user exists is judged according to the user's identity; when the judgment result is yes, step S104 is executed, and when the judgment result is no, step S103 is executed.
In the present embodiment, the control interface corresponding to the user's identity comprises a plurality of television function control menus, for example a television channel selection menu, a television volume adjustment menu, and so on. If the user is using limb-based remote control for the first time, a personalized control interface can be set up and saved; the next time this user uses the system, it is judged that a preset control interface exists for this user.
In step S103, when no control interface preset by the user exists, a default control interface is displayed.
In the present embodiment, the user can set up a personalized control interface of his or her own in the default interface and save the configured interface.
In step S104, when a control interface preset by the user exists, the control interface is displayed.
In the present embodiment, when a control interface preset by the user exists, the television remote control device retrieves the data of this control interface stored in the television and displays it on the television screen through the television.
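Purely for illustration, the flow of steps S101 to S104 can be sketched in Python as follows; detect_start_gesture, identify_user and the stored-profile format are hypothetical helper names standing in for the gesture monitor and the face or fingerprint recognizer described above, not part of the claimed method:

```python
# Hedged sketch of the start-gesture / identity / interface-lookup flow (steps S101-S104).
PRESET_INTERFACES = {}                                   # user_id -> saved control-interface layout
DEFAULT_INTERFACE = {"menus": ["channel", "volume"]}     # illustrative default layout

def remote_control_session(camera, detect_start_gesture, identify_user, display):
    while True:
        frame = camera.read()                  # hypothetical camera wrapper returning the latest frame
        if not detect_start_gesture(frame):    # e.g. palm pushed forward and withdrawn
            continue
        user_id = identify_user(frame)         # face or fingerprint recognition (S101)
        # S102-S104: show the user's preset interface if one exists, otherwise the default
        interface = PRESET_INTERFACES.get(user_id, DEFAULT_INTERFACE)
        display(interface)
        return user_id, interface
```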
In step S105, a current scene image is obtained, and a scene depth map is calculated.
In the present embodiment, after the control interface is displayed, the scene image in front of the television can be obtained through the binocular 3D camera or the infrared camera, and the scene depth map is calculated.
Specifically, the present embodiment obtains the current scene image through the binocular 3D camera, and the process of calculating the scene depth map is as follows:
As shown in Fig. 2, C_1 and C_2 are the optical centres of the two cameras of the binocular 3D camera, and the line C_1C_2 joining the two optical centres is called the baseline. I_1 and I_2 are the image planes corresponding to the respective viewpoints, and the intersections e_1 and e_2 of the baseline with the two image planes are the epipoles of the two cameras. M is a point in space, and m_1 and m_2 are the image points of M on the two image planes. For an image point m_2 on plane I_2, the corresponding image point m_1 on plane I_1 must lie on a line L_1, called the epipolar line of m_2. If the projection matrices of the two cameras are P_1 and P_2, the projection equations of the two cameras can be written as:

$$\lambda_1 m_1 = K_1 [R_1, t_1] M = P_1 M$$
$$\lambda_2 m_2 = K_2 [R_2, t_2] M = P_2 M$$

Eliminating M from the above formulas gives:

$$\lambda_2 m_2 - \lambda_1 R_2 R_1^{-1} m_1 = K_2 t_2 - R_2 R_1^{-1} K_1 t_1 \qquad (1\text{-}1)$$

Define the antisymmetric matrix $[t]_\times$ formed from a three-dimensional vector $t = (t_x, t_y, t_z)^T$:

$$[t]_\times = \begin{pmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{pmatrix}$$

Let p be the right-hand side of formula (1-1); left-multiplying formula (1-1) by $[p]_\times$ gives:

$$[p]_\times (\lambda_2 m_2 - \lambda_1 R_2 R_1^{-1} m_1) = 0$$

Dividing both sides by $\lambda_2$ and letting $\lambda = \lambda_1 / \lambda_2$, then

$$[p]_\times \lambda R_2 R_1^{-1} m_1 = [p]_\times m_2$$

Left-multiplying the above formula by $m_2^T$ gives:

$$m_2^T [p]_\times R_2 R_1^{-1} m_1 = 0$$

Let $F = [p]_\times R_2 R_1^{-1}$; the above formula can then be written as:

$$m_2^T F m_1 = 0$$

The matrix F in this formula is the fundamental matrix, which is determined by the positions of the two cameras and the camera parameters; since the positions of the two cameras in the 3D camera are fixed, F is known. It can therefore be deduced from the above formula that the epipolar line L_2 on the other image corresponding to a point m_1 is L_2 = F m_1.
That is to say, for a point on one viewpoint image, the corresponding point on the other viewpoint image is constrained to lie on an epipolar line that can be computed.
Then, the two images photographed by the 3D camera are subjected to feature point matching to obtain the corresponding points of the two viewpoint images, and the distance between the photographed object and the camera is calculated by simple trigonometric computation, thereby obtaining the scene depth map. The feature point matching can be implemented with algorithms such as SIFT.
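As an illustration only, a depth map of this kind can be sketched in Python with a dense stereo matcher standing in for the sparse SIFT matching named above; the focal length fx (in pixels) and the baseline (in metres) are assumed to come from calibration of the binocular camera, and the input images are assumed to be rectified:

```python
import cv2
import numpy as np

def depth_from_stereo(left_bgr, right_bgr, fx, baseline_m):
    """Dense depth map from a rectified stereo pair (a sketch, not the patented pipeline)."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # pixels
    disparity[disparity <= 0] = np.nan         # unmatched / invalid pixels
    return fx * baseline_m / disparity         # Z = f * B / d (triangulation)
```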
In addition, the present embodiment can also use an infrared camera to obtain the current scene image and calculate the scene depth map: an infrared transmitter emits infrared light in a specific pattern, and after receiving the infrared image reflected back by the object, the receiving end calculates the scene depth map from the parallax between the two images (the obtained scene depth map can be as shown in Fig. 3).
In step S106, the user image in the current scene image is matted out according to the scene depth map.
In the present embodiment, after the scene depth map is obtained, a simple three-dimensional reconstruction is performed on the photographed current scene image (the effect is shown in Fig. 4a); that is, the user image at the locked position in the current scene image is separated from the other objects (referred to as matting), and the three-dimensional coordinates of the region where the user is located are determined according to the scene depth map.
Specifically, the present embodiment can separate the user image at the locked position in the current scene image from the other objects by matting: because the present embodiment has obtained the depth map of the current scene, the user contour can easily be detected and extracted from the current scene image (the effect is shown in Fig. 4b).
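A minimal sketch of such depth-based matting, assuming the user's approximate depth has already been located by the tracking step, could look as follows (the 0.4 m band width is an illustrative parameter, not a value from the patent):

```python
import numpy as np

def matte_user(scene_bgr, depth, user_depth_m, band_m=0.4):
    """Keep only pixels whose depth lies in a band around the locked user position."""
    mask = np.abs(depth - user_depth_m) < band_m   # True where the user stands
    mask &= ~np.isnan(depth)                       # drop invalid depth pixels
    cutout = np.zeros_like(scene_bgr)
    cutout[mask] = scene_bgr[mask]                 # user pixels only (Fig. 4b effect)
    return cutout, mask
```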
In step S107, the matted user image is superimposed onto the control interface to generate a virtual control interface.
In the present embodiment, the matted user image can be superimposed onto the control interface through virtual reality technology and displayed on the television screen, so that the picture the user sees on the screen is a combination of the user's own real-time video image and the control interface (as shown in Fig. 5). The menus on the virtual control interface include, but are not limited to, at least one of the following forms: two-dimensional images, three-dimensional models, the current program video of different television channels, and so on.
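Continuing the sketch above, the superposition itself can be a simple composite of the matted user pixels over the rendered interface frame; both frames are assumed to share the same resolution, and the alpha value is illustrative:

```python
import numpy as np

def compose_virtual_interface(interface_rgb, user_cutout, user_mask, alpha=0.9):
    """Overlay the matted user image onto the control interface (virtual control interface)."""
    frame = interface_rgb.astype(np.float32)
    frame[user_mask] = (alpha * user_cutout[user_mask]
                        + (1.0 - alpha) * frame[user_mask])
    return frame.astype(np.uint8)                  # frame shown on the TV screen
```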
In step S108, the user's limb action is detected according to the depth map, and a control instruction is sent to the television according to the user's limb action and the virtual control interface.
In the present embodiment, because the scene depth map obtained through the binocular 3D camera or the infrared camera is a three-dimensional image, the accuracy of limb recognition is higher, and the present embodiment can detect the user's limb action more accurately according to the scene depth map. The operation determined by the detected limb action at its position on the virtual control interface is converted into a corresponding control instruction and sent to the television. For example, if the gesture action is a thumbs-up at the position of the television volume menu on the virtual control interface, it is converted into a corresponding volume-up instruction and sent to the television.
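Purely as an illustration of this mapping, a table-driven sketch might pair a recognised gesture with the menu region under it; the region coordinates and the gesture-to-command table below are assumptions made for the example, not values from the patent:

```python
MENU_REGIONS = {                        # (x0, y0, x1, y1) in interface pixels
    "volume": (50, 400, 200, 500),
    "channel": (250, 400, 400, 500),
}
GESTURE_COMMANDS = {
    ("thumb_up", "volume"): "VOLUME_UP",
    ("thumb_down", "volume"): "VOLUME_DOWN",
    ("fist", "channel"): "HIDE_MENU",
}

def to_instruction(gesture, point_uv):
    """Map (gesture, position on the virtual control interface) to a TV instruction."""
    u, v = point_uv
    for menu, (x0, y0, x1, y1) in MENU_REGIONS.items():
        if x0 <= u <= x1 and y0 <= v <= y1:
            return GESTURE_COMMANDS.get((gesture, menu))
    return None                         # outside every menu region: no response
```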
The invention detects the user's limb action according to the depth map and controls the television according to the user's limb action and the three-dimensional virtual control interface, so that the user's hands and body parts can remotely control the television wirelessly without wearing any equipment. Moreover, because a depth map and a three-dimensional virtual control interface are used, the spatial coordinates of the user's limbs and the specific limb forms can be obtained accurately, which improves the accuracy and practicality of limb-based remote control and enhances the interactivity between the user and the television.
Embodiment 2:
Fig. 6 shows the implementation flow of the method for remotely controlling a television by limbs provided by Embodiment 2 of the invention, which is detailed as follows:
In step S201, a limb action of a user in front of the television is monitored; after a specific limb action is detected, the user is recognized to determine the user's identity;
In step S202, whether a control interface preset by the user exists is judged according to the user's identity; when the judgment result is yes, step S204 is executed, and when the judgment result is no, step S203 is executed;
In step S203, when no control interface preset by the user exists, a default control interface is displayed;
In step S204, when a control interface preset by the user exists, the control interface is displayed;
In step S205, a current scene image is obtained, and a scene depth map is calculated;
In step S206, the user image in the current scene image is matted out according to the scene depth map;
In step S207, the matted user image is superimposed onto the control interface to generate a virtual control interface.
Steps S201 to S207 in the present embodiment are identical to steps S101 to S107 in Embodiment 1; for their implementation, refer to the related description of steps S101 to S107 in Embodiment 1, which is not repeated here.
In step S208, the user's limb action is detected according to the depth map; the limb action includes the limb form and its spatial coordinates.
In the present embodiment, a two-dimensional image of the user's limb action is collected through the 3D camera or the infrared depth camera, and a three-dimensional image of the limb action is obtained according to the depth map, so as to obtain the accurate limb form of the limb action (for example, the form of a finger or of a foot); the spatial coordinates of the limb action are obtained with the optical centre of the 3D camera or infrared camera as the origin. The television remote control device compares the limb form of the limb action with limb forms that have been preset and stored.
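A sketch of recovering these spatial coordinates from a hand pixel (u, v) and the depth map, by inverting the pinhole model with the optical centre as the origin, might look as follows; the intrinsics fx, fy, cx, cy are assumed to be known from calibration:

```python
import numpy as np

def limb_position_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project an image point plus its depth to camera coordinates (origin = optical centre)."""
    Z = float(depth[v, u])              # metres at the hand pixel
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])          # spatial coordinates of the limb
```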
In the present embodiment, because the depth map is used when obtaining the limb form and the spatial coordinates of the limb motion image, the spatial coordinates of the user's limbs and their limb forms can be known accurately, which effectively improves the accuracy and practicality of limb-based remote control.
In step S209, whether the spatial coordinates correspond to a menu position region of the virtual control interface is judged; when the judgment result is yes, step S210 is executed; when the judgment result is no, the operation is not responded to and the flow returns to continue executing step S208.
In the present embodiment, the imaging model of any point M in space on the camera image plane is shown in Fig. 7, where point O is the optical centre of the 3D camera or infrared camera, XYZ is the coordinate system of the camera, uv is the coordinate system of the imaging plane, and m is the imaging point of point M on the imaging plane; m and M satisfy the following formula:

$$\lambda m = K[R, t]M = PM$$

where λ is a constant. Since the spatial coordinates of M are determined and the camera is fixed, the camera matrix P is also determined; therefore, the imaging point m of the target can be obtained from the above formula, that is, the position of the target on the virtual control interface can be determined, so that whether the spatial coordinates of the user's limbs correspond to a menu position region of the virtual control interface can be judged. When they correspond, step S210 is executed; otherwise, the operation is not responded to and the flow returns to continue executing step S208.
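A sketch of this projection and menu-region test, following λm = K[R, t]M = PM directly, could be written as follows; K, R, t and the menu rectangle are illustrative placeholders for the calibrated camera parameters and the interface layout:

```python
import numpy as np

def project_point(K, R, t, M_world):
    """Project a 3D limb point onto the image/interface plane: lambda * m = K [R | t] M."""
    P = K @ np.hstack([R, t.reshape(3, 1)])      # 3x4 camera matrix P
    m_h = P @ np.append(M_world, 1.0)            # homogeneous image point
    return m_h[:2] / m_h[2]                      # divide by lambda (= m_h[2])

def hits_menu(K, R, t, M_world, rect):
    """True if the projected limb point falls inside the menu rectangle (x0, y0, x1, y1)."""
    u, v = project_point(K, R, t, M_world)
    x0, y0, x1, y1 = rect
    return x0 <= u <= x1 and y0 <= v <= y1
```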
In step S210, when the spatial coordinates correspond to a menu position region of the virtual control interface, the limb form is converted into a corresponding control instruction according to preset rules and sent to the television.
In the present embodiment, the television remote control device presets the correspondence between the user's limb forms and control instructions (for example, a tightly clenched fist corresponds to a hide instruction, and an index finger pressed down at a vertical angle greater than or equal to 45 degrees corresponds to a select instruction); when the spatial coordinates of the user's limbs correspond to a menu position region of the virtual control interface, the obtained limb form is converted into the corresponding control instruction to control the television.
For example, when the spatial coordinates of the user's limbs correspond to a certain menu position region of the virtual control interface, the limb form of the user is detected; if it is a clenched fist, it is converted into the preset hide instruction and the menu is hidden.
In the present embodiment, the control instruction includes the setting of the virtual background of the virtual control interface, the display or hiding of each menu on the virtual control interface, and the setting of the size and position of the displayed menus. For the virtual background of the virtual control interface, the user can choose whether or not to use a virtual background; when a virtual background is used, the user can also choose a pre-stored picture or the current scene picture as the background. In the present embodiment, when the user selects the current scene picture as the virtual background of the virtual control interface (as shown in Fig. 8), the sense of presence of limb-based remote control and the fun of the interaction between the user and the television can be effectively enhanced, improving user satisfaction.
As an embodiment of the invention, when the user exits the virtual control interface (the user leaves, or the user performs a preset limb action), the television remote control device can prompt the user as to whether the modified virtual control interface needs to be saved.
In the present embodiment, the user's hands and body parts do not need to wear any equipment to remotely control the television wirelessly. Moreover, because a depth map and a three-dimensional virtual control interface are used, the spatial coordinates of the user's limbs and the specific limb forms can be obtained accurately, which improves the accuracy and practicality of limb-based remote control and enhances the interactivity between the user and the television.
Embodiment 3:
Fig. 9 shows the structure of the television remote control device provided by Embodiment 3 of the invention; for convenience of explanation, only the parts related to the embodiments of the invention are shown.
The television remote control device may be a hardware unit running in the television, or a unit combining software and hardware.
The television remote control device 9 comprises an identity determination unit 91, a judging unit 92, a control interface display unit 93, a depth map calculation unit 94, an image matting unit 95, an image superposition unit 96 and an instruction sending unit 97. The specific functions of the units are as follows:
the identity determination unit 91 is configured to monitor a limb action of a user in front of the television and, after a specific limb action is detected, recognize the user and determine the user's identity;
the judging unit 92 is configured to judge, according to the user's identity, whether a control interface preset by the user exists;
the control interface display unit 93 is configured to display the control interface preset by the user when the judgment result of the judging unit 92 is yes;
the depth map calculation unit 94 is configured to obtain a current scene image and calculate a scene depth map;
the image matting unit 95 is configured to mat the user image out of the current scene image according to the scene depth map;
the image superposition unit 96 is configured to superimpose the matted user image onto the control interface to generate a virtual control interface;
the instruction sending unit 97 is configured to detect the user's limb action according to the depth map and send a control instruction to the television according to the user's limb action and the virtual control interface.
Further, the control interface display unit 93 is configured to display a default control interface when the judgment result of the judging unit 92 is no.
In the present embodiment, the specific limb action includes pushing a palm forward and withdrawing it, and the control instruction includes the setting of the virtual background of the virtual control interface and the display or hiding of each menu on the virtual control interface.
The television remote control device 9 provided by the present embodiment can use the method for remotely controlling a television by limbs in the corresponding Embodiment 1 above; for details, refer to the related description of Embodiment 1 of the method, which is not repeated here.
Embodiment 4:
Fig. 10 shows the structure of the television remote control device provided by Embodiment 4 of the invention; for convenience of explanation, only the parts related to the embodiments of the invention are shown.
The television remote control device may be a hardware unit running in the television, or a unit combining software and hardware.
The television remote control device 10 comprises an identity determination unit 101, a judging unit 102, a control interface display unit 103, a depth map calculation unit 104, an image matting unit 105, an image superposition unit 106 and an instruction sending unit 107. The specific functions of the units are as follows:
the identity determination unit 101 is configured to monitor a limb action of a user in front of the television and, after a specific limb action is detected, recognize the user and determine the user's identity;
the judging unit 102 is configured to judge, according to the user's identity, whether a control interface preset by the user exists;
the control interface display unit 103 is configured to display the control interface preset by the user when the judgment result of the judging unit 102 is yes;
the depth map calculation unit 104 is configured to obtain a current scene image and calculate a scene depth map;
the image matting unit 105 is configured to mat the user image out of the current scene image according to the scene depth map;
the image superposition unit 106 is configured to superimpose the matted user image onto the control interface to generate a virtual control interface;
the instruction sending unit 107 is configured to detect the user's limb action according to the depth map and send a control instruction to the television according to the user's limb action and the virtual control interface.
Further, the control interface display unit 103 is also configured to display a default control interface when the judgment result of the judging unit 102 is no.
Further, the instruction sending unit 107 comprises a detection module 1071, a judging module 1072 and a control module 1073:
the detection module 1071 is configured to detect the user's limb action according to the depth map; the limb action includes the limb form and its spatial coordinates;
the judging module 1072 is configured to judge whether the spatial coordinates correspond to a menu position region of the virtual control interface;
the control module 1073 is configured to convert, when the spatial coordinates correspond to a menu position region of the virtual control interface, the limb form into a corresponding control instruction according to preset rules and send it to the television.
In the present embodiment, the specific limb action includes pushing a palm forward and withdrawing it, and the control instruction includes the setting of the virtual background of the virtual control interface and the display or hiding of each menu on the virtual control interface.
The television remote control device 10 provided by the present embodiment can use the method for remotely controlling a television by limbs in the corresponding Embodiment 2 above; for details, refer to the related description of Embodiment 2 of the method, which is not repeated here.
In the embodiments of the invention, the television remote control device 10 monitors the user's limb action, obtains the limb form and spatial coordinates of the limb action, judges whether the spatial coordinates correspond to a menu position region of the preset virtual control interface, and, when they correspond, converts the limb form into a corresponding control instruction to control the television. The invention enables the user's hands and body parts to remotely control the television wirelessly without wearing any equipment. Moreover, because a depth map and a three-dimensional virtual control interface are used, the spatial coordinates of the user's limbs and the specific limb forms can be obtained accurately, which improves the accuracy and practicality of limb-based remote control. Through display technology based on virtual reality, the user's real-time video image and the control interface are superimposed, which facilitates the user's limb-based remote operation and enhances the interactivity between the user and the television. In addition, the current scene picture can also be used as the virtual background of the virtual control interface, which enhances the sense of presence of limb-based remote control and the fun of the interaction between the user and the television, improving the user's satisfaction with the television remote control device.
The above are only preferred embodiments of the invention and are not intended to limit the invention; any modifications, equivalent replacements, improvements and the like made within the spirit and principles of the invention shall be included within the protection scope of the invention.

Claims (10)

1. A method for remotely controlling a television by limbs, characterized in that the method comprises the following steps:
A. monitoring a limb action of a user in front of the television; after a specific limb action is detected, recognizing the user and determining the user's identity;
B. judging, according to the user's identity, whether a control interface preset by the user exists;
C. when it is judged that a control interface preset by the user exists, displaying the control interface;
D. obtaining a current scene image and calculating a scene depth map;
E. matting the user image out of the current scene image according to the scene depth map;
F. superimposing the matted user image onto the control interface to generate a virtual control interface;
G. detecting the user's limb action according to the depth map, and sending a control instruction to the television according to the user's limb action and the virtual control interface.
2. The method of claim 1, characterized in that the specific limb action comprises pushing a palm forward and withdrawing it.
3. The method of claim 1, characterized in that the method further comprises:
when it is judged that no control interface preset by the user exists, displaying a default control interface.
4. The method of claim 1, characterized in that step G comprises:
G1. detecting the user's limb action according to the depth map, the limb action comprising a limb form and its spatial coordinates;
G2. judging whether the spatial coordinates correspond to a menu position region of the virtual control interface;
G3. when the spatial coordinates correspond to a menu position region of the virtual control interface, converting the limb form into a corresponding control instruction according to preset rules and sending it to the television.
5. The method of claim 1 or 4, characterized in that the control instruction comprises the setting of a virtual background of the virtual control interface and the display or hiding of each menu on the virtual control interface.
6. A television remote control device, characterized in that the device comprises:
an identity determination unit, configured to monitor a limb action of a user in front of the television and, after a specific limb action is detected, recognize the user and determine the user's identity;
a judging unit, configured to judge, according to the user's identity, whether a control interface preset by the user exists;
a control interface display unit, configured to display the control interface preset by the user when the judgment result of the judging unit is yes;
a depth map calculation unit, configured to obtain a current scene image and calculate a scene depth map;
an image matting unit, configured to mat the user image out of the current scene image according to the scene depth map;
an image superposition unit, configured to superimpose the matted user image onto the control interface to generate a virtual control interface;
an instruction sending unit, configured to detect the user's limb action according to the depth map and send a control instruction to the television according to the user's limb action and the virtual control interface.
7. The device of claim 6, characterized in that the specific limb action comprises pushing a palm forward and withdrawing it.
8. The device of claim 6, characterized in that the control interface display unit is also configured to display a default control interface when the judgment result of the judging unit is no.
9. The device of claim 6, characterized in that the instruction sending unit comprises:
a detection module, configured to detect the user's limb action according to the depth map, the limb action comprising a limb form and its spatial coordinates;
a judging module, configured to judge whether the spatial coordinates correspond to a menu position region of the virtual control interface;
a control module, configured to convert, when the spatial coordinates correspond to a menu position region of the virtual control interface, the limb form into a corresponding control instruction according to preset rules and send it to the television.
10. The device of claim 6 or 9, characterized in that the control instruction comprises the setting of a virtual background of the virtual control interface and the display or hiding of each menu on the virtual control interface.
CN201110332552.9A 2011-10-27 2011-10-27 Method for remotely controlling television by limbs and television remote control device Active CN102375542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110332552.9A CN102375542B (en) 2011-10-27 2011-10-27 Method for remotely controlling television by limbs and television remote control device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110332552.9A CN102375542B (en) 2011-10-27 2011-10-27 Method for remotely controlling television by limbs and television remote control device

Publications (2)

Publication Number Publication Date
CN102375542A true CN102375542A (en) 2012-03-14
CN102375542B CN102375542B (en) 2015-02-11

Family

ID=45794248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110332552.9A Active CN102375542B (en) 2011-10-27 2011-10-27 Method for remotely controlling television by limbs and television remote control device

Country Status (1)

Country Link
CN (1) CN102375542B (en)

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801924A (en) * 2012-07-20 2012-11-28 合肥工业大学 Television program host interaction system based on Kinect
CN103019376A (en) * 2012-12-04 2013-04-03 深圳Tcl新技术有限公司 Identity-recognition-based remote control function configuration method and system
CN103023654A (en) * 2012-12-10 2013-04-03 深圳Tcl新技术有限公司 Dither removing method and device in recognition process of intelligent remote control system
CN103366159A (en) * 2013-06-28 2013-10-23 京东方科技集团股份有限公司 Hand gesture recognition method and device
CN103514387A (en) * 2012-11-29 2014-01-15 Tcl集团股份有限公司 Method for improving user identification precision of electronic device and electronic device
CN103529762A (en) * 2013-02-22 2014-01-22 Tcl集团股份有限公司 Intelligent household control method and system based on sensor technology
CN103530060A (en) * 2013-10-31 2014-01-22 京东方科技集团股份有限公司 Display device and control method thereof and gesture recognition method
CN103902192A (en) * 2012-12-28 2014-07-02 腾讯科技(北京)有限公司 Trigger control method and trigger control device for man-machine interactive operation
CN104063041A (en) * 2013-03-21 2014-09-24 联想(北京)有限公司 Information processing method and electronic equipment
WO2014174523A1 (en) * 2013-04-26 2014-10-30 Hewlett-Packard Development Company, L.P. Detecting an attentive user for providing personalized content on a display
WO2014172815A1 (en) * 2013-04-24 2014-10-30 Li Min Gesture television remote controller
CN104183011A (en) * 2013-05-27 2014-12-03 万克林 Three-dimensional interactive virtual reality (3D IVR) restoring system
CN104219587A (en) * 2014-08-20 2014-12-17 深圳智意能电子科技有限公司 Method and device used for controlling application
CN104363494A (en) * 2013-12-21 2015-02-18 滁州惠智科技服务有限公司 Gesture recognition system for smart television
CN104571530A (en) * 2015-01-30 2015-04-29 联想(北京)有限公司 Information processing method and information processing unit
CN104618819A (en) * 2015-03-05 2015-05-13 广州新节奏智能科技有限公司 Television terminal-based 3D somatosensory shopping system and method
CN104616190A (en) * 2015-03-05 2015-05-13 广州新节奏智能科技有限公司 Multi-terminal 3D somatosensory shopping method and system based on internet and mobile internet
CN104765459A (en) * 2015-04-23 2015-07-08 无锡天脉聚源传媒科技有限公司 Virtual operation implementation method and device
CN104871227A (en) * 2012-11-12 2015-08-26 微软技术许可有限责任公司 Remote control using depth camera
CN105391964A (en) * 2015-11-04 2016-03-09 广东欧珀移动通信有限公司 Video data processing method and apparatus
CN105517248A (en) * 2016-01-19 2016-04-20 浙江大学 Flashing watching and moving type LED (light emitting diode) lamp controller and control method thereof
CN105528060A (en) * 2014-09-30 2016-04-27 联想(北京)有限公司 Terminal device and control method
CN106204751A (en) * 2016-07-13 2016-12-07 广州大西洲科技有限公司 The real-time integration method of real-world object and virtual scene and integration system
CN106331855A (en) * 2015-06-18 2017-01-11 冠捷投资有限公司 Display for passively sensing and identifying remote control behavior
CN106875465A (en) * 2017-01-20 2017-06-20 深圳奥比中光科技有限公司 The method for building up and equipment in the three-dimensional manipulation space based on RGBD images
CN106997457A (en) * 2017-03-09 2017-08-01 广东欧珀移动通信有限公司 Human limbs recognition methods, human limbs identifying device and electronic installation
CN107092347A (en) * 2017-03-10 2017-08-25 深圳市博乐信息技术有限公司 A kind of augmented reality interaction systems and image processing method
CN107255928A (en) * 2017-06-05 2017-10-17 珠海格力电器股份有限公司 A kind of apparatus control method, device and home appliance
CN107272888A (en) * 2013-07-26 2017-10-20 株式会社东芝 Message processing device
CN107864390A (en) * 2017-10-24 2018-03-30 深圳前海茂佳软件科技有限公司 Control method, television set and the computer-readable storage medium of television set
CN107888961A (en) * 2017-11-27 2018-04-06 信利光电股份有限公司 A kind of camera function management method and relevant apparatus based on intelligent television
CN108256497A (en) * 2018-02-01 2018-07-06 北京中税网控股股份有限公司 A kind of method of video image processing and device
WO2018201334A1 (en) * 2017-05-03 2018-11-08 深圳市智晟达科技有限公司 Digital television system
CN112363667A (en) * 2020-11-12 2021-02-12 四川长虹电器股份有限公司 Touch remote control method and system
WO2021097600A1 (en) * 2019-11-18 2021-05-27 华为技术有限公司 Inter-air interaction method and apparatus, and device
WO2021190336A1 (en) * 2020-03-24 2021-09-30 华为技术有限公司 Device control method, apparatus and system
WO2022012602A1 (en) * 2020-07-17 2022-01-20 华为技术有限公司 Screen interaction method and apparatus for electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080019589A1 (en) * 2006-07-19 2008-01-24 Ho Sub Yoon Method and apparatus for recognizing gesture in image processing system
CN201294582Y (en) * 2008-11-11 2009-08-19 天津三星电子有限公司 Television set controlled through user gesture motion
CN101998161A (en) * 2009-08-14 2011-03-30 Tcl集团股份有限公司 Face recognition-based television program watching method
CN102221887A (en) * 2011-06-23 2011-10-19 康佳集团股份有限公司 Interactive projection system and method


Cited By (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801924B (en) * 2012-07-20 2014-12-03 合肥工业大学 Television program host interaction system based on Kinect
CN102801924A (en) * 2012-07-20 2012-11-28 合肥工业大学 Television program host interaction system based on Kinect
CN104871227A (en) * 2012-11-12 2015-08-26 微软技术许可有限责任公司 Remote control using depth camera
CN103514387B (en) * 2012-11-29 2016-06-22 Tcl集团股份有限公司 A kind of method improving electronic device user identification precision and electronic equipment
CN103514387A (en) * 2012-11-29 2014-01-15 Tcl集团股份有限公司 Method for improving user identification precision of electronic device and electronic device
CN103019376B (en) * 2012-12-04 2016-08-10 深圳Tcl新技术有限公司 The distant control function collocation method of identity-based identification and system
CN103019376A (en) * 2012-12-04 2013-04-03 深圳Tcl新技术有限公司 Identity-recognition-based remote control function configuration method and system
CN103023654B (en) * 2012-12-10 2016-06-29 深圳Tcl新技术有限公司 Removal dither method in intelligent remote control system identification process and device
CN103023654A (en) * 2012-12-10 2013-04-03 深圳Tcl新技术有限公司 Dither removing method and device in recognition process of intelligent remote control system
CN103902192A (en) * 2012-12-28 2014-07-02 腾讯科技(北京)有限公司 Trigger control method and trigger control device for man-machine interactive operation
US9829974B2 (en) 2012-12-28 2017-11-28 Tencent Technology (Shenzhen) Company Limited Method for controlling triggering of human-computer interaction operation and apparatus thereof
CN103529762A (en) * 2013-02-22 2014-01-22 Tcl集团股份有限公司 Intelligent household control method and system based on sensor technology
CN103529762B (en) * 2013-02-22 2016-08-31 Tcl集团股份有限公司 A kind of intelligent home furnishing control method based on sensor technology and system
CN104063041A (en) * 2013-03-21 2014-09-24 联想(北京)有限公司 Information processing method and electronic equipment
CN104063041B (en) * 2013-03-21 2018-02-27 联想(北京)有限公司 A kind of information processing method and electronic equipment
WO2014172815A1 (en) * 2013-04-24 2014-10-30 Li Min Gesture television remote controller
WO2014174523A1 (en) * 2013-04-26 2014-10-30 Hewlett-Packard Development Company, L.P. Detecting an attentive user for providing personalized content on a display
US9767346B2 (en) 2013-04-26 2017-09-19 Hewlett-Packard Development Company, L.P. Detecting an attentive user for providing personalized content on a display
CN104183011A (en) * 2013-05-27 2014-12-03 万克林 Three-dimensional interactive virtual reality (3D IVR) restoring system
CN103366159A (en) * 2013-06-28 2013-10-23 京东方科技集团股份有限公司 Hand gesture recognition method and device
CN107272888A (en) * 2013-07-26 2017-10-20 株式会社东芝 Message processing device
CN107272888B (en) * 2013-07-26 2019-12-27 株式会社东芝 Information processing apparatus
CN103530060B (en) * 2013-10-31 2016-06-22 京东方科技集团股份有限公司 Display device and control method, gesture identification method
CN103530060A (en) * 2013-10-31 2014-01-22 京东方科技集团股份有限公司 Display device and control method thereof and gesture recognition method
CN104363494A (en) * 2013-12-21 2015-02-18 滁州惠智科技服务有限公司 Gesture recognition system for smart television
CN104219587A (en) * 2014-08-20 2014-12-17 深圳智意能电子科技有限公司 Method and device used for controlling application
CN105528060A (en) * 2014-09-30 2016-04-27 联想(北京)有限公司 Terminal device and control method
CN105528060B (en) * 2014-09-30 2018-11-09 联想(北京)有限公司 terminal device and control method
CN104571530B (en) * 2015-01-30 2019-03-29 联想(北京)有限公司 Information processing method and information processing unit
CN104571530A (en) * 2015-01-30 2015-04-29 联想(北京)有限公司 Information processing method and information processing unit
CN104618819A (en) * 2015-03-05 2015-05-13 广州新节奏智能科技有限公司 Television terminal-based 3D somatosensory shopping system and method
CN104616190A (en) * 2015-03-05 2015-05-13 广州新节奏智能科技有限公司 Multi-terminal 3D somatosensory shopping method and system based on internet and mobile internet
CN104765459B (en) * 2015-04-23 2018-02-06 无锡天脉聚源传媒科技有限公司 The implementation method and device of pseudo operation
CN104765459A (en) * 2015-04-23 2015-07-08 无锡天脉聚源传媒科技有限公司 Virtual operation implementation method and device
CN106331855A (en) * 2015-06-18 2017-01-11 冠捷投资有限公司 Display for passively sensing and identifying remote control behavior
CN105391964A (en) * 2015-11-04 2016-03-09 广东欧珀移动通信有限公司 Video data processing method and apparatus
CN105517248B (en) * 2016-01-19 2018-06-19 浙江大学 One kind, which is seen, flashes LED light controller and its control method
CN105517248A (en) * 2016-01-19 2016-04-20 浙江大学 Flashing watching and moving type LED (light emitting diode) lamp controller and control method thereof
CN106204751A (en) * 2016-07-13 2016-12-07 广州大西洲科技有限公司 The real-time integration method of real-world object and virtual scene and integration system
CN106875465B (en) * 2017-01-20 2021-06-11 奥比中光科技集团股份有限公司 RGBD image-based three-dimensional control space establishment method and device
CN106875465A (en) * 2017-01-20 2017-06-20 深圳奥比中光科技有限公司 The method for building up and equipment in the three-dimensional manipulation space based on RGBD images
CN106997457A (en) * 2017-03-09 2017-08-01 广东欧珀移动通信有限公司 Human limbs recognition methods, human limbs identifying device and electronic installation
CN106997457B (en) * 2017-03-09 2020-09-11 Oppo广东移动通信有限公司 Figure limb identification method, figure limb identification device and electronic device
CN107092347A (en) * 2017-03-10 2017-08-25 深圳市博乐信息技术有限公司 A kind of augmented reality interaction systems and image processing method
CN107092347B (en) * 2017-03-10 2020-06-09 深圳市博乐信息技术有限公司 Augmented reality interaction system and image processing method
WO2018201334A1 (en) * 2017-05-03 2018-11-08 深圳市智晟达科技有限公司 Digital television system
CN107255928A (en) * 2017-06-05 2017-10-17 珠海格力电器股份有限公司 A kind of apparatus control method, device and home appliance
CN107864390A (en) * 2017-10-24 2018-03-30 深圳前海茂佳软件科技有限公司 Control method, television set and the computer-readable storage medium of television set
CN107888961A (en) * 2017-11-27 2018-04-06 信利光电股份有限公司 A kind of camera function management method and relevant apparatus based on intelligent television
CN108256497A (en) * 2018-02-01 2018-07-06 北京中税网控股股份有限公司 A kind of method of video image processing and device
WO2021097600A1 (en) * 2019-11-18 2021-05-27 华为技术有限公司 Inter-air interaction method and apparatus, and device
WO2021190336A1 (en) * 2020-03-24 2021-09-30 华为技术有限公司 Device control method, apparatus and system
US11880220B2 (en) 2020-03-24 2024-01-23 Huawei Technologies Co., Ltd. Device control method, apparatus, and system
WO2022012602A1 (en) * 2020-07-17 2022-01-20 华为技术有限公司 Screen interaction method and apparatus for electronic device
CN112363667A (en) * 2020-11-12 2021-02-12 四川长虹电器股份有限公司 Touch remote control method and system

Also Published As

Publication number Publication date
CN102375542B (en) 2015-02-11

Similar Documents

Publication Publication Date Title
CN102375542B (en) Method for remotely controlling television by limbs and television remote control device
KR101381928B1 (en) virtual touch apparatus and method without pointer on the screen
CN105229582B (en) Gesture detection based on proximity sensor and image sensor
KR101151962B1 (en) Virtual touch apparatus and method without pointer on the screen
US20170038850A1 (en) System and method for combining three-dimensional tracking with a three-dimensional display for a user interface
EP3395066B1 (en) Depth map generation apparatus, method and non-transitory computer-readable medium therefor
KR101815020B1 (en) Apparatus and Method for Controlling Interface
JP6165485B2 (en) AR gesture user interface system for mobile terminals
CN105763917B (en) A kind of control method and system of terminal booting
US20120249429A1 (en) Continued virtual links between gestures and user interface elements
US20120319949A1 (en) Pointing device of augmented reality
US20140139429A1 (en) System and method for computer vision based hand gesture identification
US8416189B2 (en) Manual human machine interface operation system and method thereof
US20140267004A1 (en) User Adjustable Gesture Space
Chu et al. Hand gesture for taking self portrait
KR101441882B1 (en) method for controlling electronic devices by using virtural surface adjacent to display in virtual touch apparatus without pointer
KR20120126508A (en) method for recognizing touch input in virtual touch apparatus without pointer
Igorevich et al. Hand gesture recognition algorithm based on grayscale histogram of the image
KR101321274B1 (en) Virtual touch apparatus without pointer on the screen using two cameras and light source
KR101539087B1 (en) Augmented reality device using mobile device and method of implementing augmented reality
Lee et al. Tunnelslice: Freehand subspace acquisition using an egocentric tunnel for wearable augmented reality
CN111901518B (en) Display method and device and electronic equipment
Goto et al. Development of an Information Projection Interface Using a Projector–Camera System
JP5558899B2 (en) Information processing apparatus, processing method thereof, and program
WO2016102948A1 (en) Coherent touchless interaction with stereoscopic 3d images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant