CN102375542B - Method for remotely controlling television by limbs and television remote control device - Google Patents

Method for remotely controlling television by limbs and television remote control device

Info

Publication number
CN102375542B
CN102375542B (application CN201110332552.9A; also published as CN102375542A)
Authority
CN
China
Prior art keywords
user
limbs
control
television
control interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110332552.9A
Other languages
Chinese (zh)
Other versions
CN102375542A (en)
Inventor
杨劼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Corp
Original Assignee
TCL Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Corp filed Critical TCL Corp
Priority to CN201110332552.9A priority Critical patent/CN102375542B/en
Publication of CN102375542A publication Critical patent/CN102375542A/en
Application granted granted Critical
Publication of CN102375542B publication Critical patent/CN102375542B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention belongs to the technical field of human-machine interaction and provides a method and device for remotely controlling a television by limb movements. The method comprises the steps of: monitoring the limb movements of a user in front of the television and, after a specific limb movement is detected, recognizing the user and determining the user's identity; judging, according to the user identity, whether a control interface preset by the user exists; if so, displaying the control interface; acquiring a current scene image and computing a scene depth map; matting the user image out of the current scene image according to the scene depth map; superimposing the matted user image onto the control interface to generate a virtual control interface; and detecting the user's limb movements according to the depth map and sending control instructions to the television according to the limb movements and the virtual control interface. The invention effectively improves the accuracy and practicability of limb-based television remote control.

Description

Method for remotely controlling a television by limbs, and television remote control device
Technical field
The invention belongs to the technical field of human-computer interaction, and in particular relates to a method for remotely controlling a television by limbs and to a television remote control device.
Background art
With the development of television technology, the digital processing capability of televisions has grown ever stronger. Televisions that connect directly to the Internet, such as Internet TVs, have appeared, as have television terminal products with powerful computing ability such as video-game all-in-one sets. These televisions are no longer limited to watching traditional TV programmes; users can browse Internet information directly on the TV, share audio and video content, play interactive games, and so on. A traditional push-button remote control cannot keep up with the ever-growing range of content and operating modes of such televisions, and users increasingly wish to replace the simple button-based input and control of a traditional remote with gesture control, a natural and intuitive interaction style.
Existing gesture remote control mainly captures a specific gesture form of the user through a camera built into the television, compares the captured gesture form with gesture forms preset in the television, and, when they match, converts the captured gesture into the corresponding TV control instruction to control the television. However, because existing gesture remote control obtains only two-dimensional image information, the accuracy with which the position coordinates and form of the user's hand are recognized is low; as a result, existing gesture remote control can only recognize a few fixed and simple gestures, and its practicability is limited.
Summary of the invention
The object of the embodiments of the present invention is to provide a method for remotely controlling a television by limbs, aiming to solve the problems that existing gesture remote control has low gesture-recognition accuracy and limited practicability.
The embodiments of the present invention are achieved as follows: a method for remotely controlling a television by limbs, the method comprising the following steps:
A. monitoring the limb form of a user in front of the television and, after a specific limb form is detected, recognizing the user and determining the user identity;
B. judging, according to the user identity, whether a control interface preset by the user exists;
C. when it is judged that a control interface preset by the user exists, displaying this control interface;
D. acquiring a current scene image and computing a scene depth map;
E. matting the user image out of the current scene image according to the scene depth map;
F. superimposing the matted user image onto the control interface to generate a virtual control interface;
G1. detecting the limb action of the user according to the depth map, the limb action comprising a limb form and its spatial position coordinates;
G2. judging whether the spatial position coordinates correspond to a menu region of the virtual control interface;
G3. when the spatial position coordinates correspond to a menu region of the virtual control interface, converting the limb form into a corresponding control instruction according to preset rules and sending it to the television.
Another object of the embodiments of the present invention is to provide a television remote control device, the device comprising:
an identity determination unit for monitoring the limb form of a user in front of the television and, after a specific limb form is detected, recognizing the user and determining the user identity;
a judging unit for judging, according to the user identity, whether a control interface preset by the user exists;
a control interface display unit for displaying the control interface preset by the user when the judging unit's result is yes;
a depth map computing unit for acquiring a current scene image and computing a scene depth map;
an image matting unit for matting the user image out of the current scene image according to the scene depth map;
an image superimposing unit for superimposing the matted user image onto the control interface to generate a virtual control interface;
an instruction sending unit, the instruction sending unit comprising:
a detection module for detecting the limb action of the user according to the depth map, the limb action comprising a limb form and its spatial position coordinates;
a judging module for judging whether the spatial position coordinates correspond to a menu region of the virtual control interface;
a control module for, when the spatial position coordinates correspond to a menu region of the virtual control interface, converting the limb form into a corresponding control instruction according to preset rules and sending it to the television.
As can be seen from the above technical solutions, the invention enables the user's hands and other parts of the body to carry out wireless remote control of the television without wearing or holding any equipment. Moreover, because a depth map and a three-dimensional virtual control interface are used, the spatial position coordinates of the user's limbs and the specific limb form can be obtained accurately, which improves the accuracy and practicability of limb-based remote control and enhances the interactivity between the user and the television.
Brief description of the drawings
Fig. 1 is a flow chart of the limb-based television remote control method provided by embodiment one of the present invention;
Fig. 2 is a diagram of the epipolar geometry constraint provided by embodiment one of the present invention;
Fig. 3 is an example of the depth map provided by embodiment one of the present invention;
Figs. 4a and 4b are an example of the current scene and an example of the matted user image provided by embodiment one of the present invention;
Fig. 5 is an example of the virtual control interface provided by embodiment one of the present invention;
Fig. 6 is a flow chart of the limb-based television remote control method provided by embodiment two of the present invention;
Fig. 7 is a schematic diagram of camera imaging provided by embodiment two of the present invention;
Fig. 8 is an example of the virtual background of the virtual control interface provided by embodiment two of the present invention;
Fig. 9 is a structural diagram of the television remote control device provided by embodiment three of the present invention;
Fig. 10 is a structural diagram of the television remote control device provided by embodiment four of the present invention.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.
The technical solutions of the invention are illustrated by the following specific embodiments.
Embodiment one:
Fig. 1 shows the flow of the limb-based television remote control method provided by embodiment one of the present invention, the details of which are as follows:
In step S101, the limb action of the user in front of the television is monitored; after a specific limb action is detected, the user is recognized and the user identity is determined.
In the present embodiment, the television remote control device comprises a binocular 3D camera (or an infrared depth camera comprising an infrared emitter and receiver). The television remote control device monitors the limb actions of the user in front of the television through the camera or infrared depth camera; after a specific limb action of the user is detected (for example, the user pushes a palm or foot forward and withdraws it, or nods forwards or tilts the head backwards), face or fingerprint recognition is performed on the user in front of the television to determine the user's identity. In the present embodiment, the limb used is preferably the human hand, and the following embodiments take the hand as the recognition target.
In this process, because the limb gesture that starts the television remote control is fairly simple, in order to improve efficiency and save storage space in the television, only the limb form of the user needs to be obtained; the spatial coordinates of the user's limb are not needed. When the detected limb form is judged to be identical to the preset start gesture of the television remote control, the television remote control is started.
In the present embodiment, after the television remote control is started, face recognition can be performed on the face nearest to the detected limb action, or the user can be asked to perform fingerprint recognition, etc., and the current user identity is determined by this recognition. Face recognition and fingerprint recognition can be implemented with existing techniques and are not described further here.
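Purely as an illustration (the embodiment only states that existing recognition techniques may be used, so this is not part of the original disclosure), the following Python sketch detects the largest face with an OpenCV Haar cascade and queries a previously trained recogniser such as an LBPH model from opencv-contrib; the distance threshold, the known_faces mapping and the helper name identify_user are illustrative assumptions.

    import cv2

    def identify_user(frame_bgr, recogniser, known_faces, max_distance=80.0):
        # Detect faces in the frame captured right after the start gesture.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        # Take the largest face, assumed to belong to the user who made the start gesture.
        x, y, w, h = max(faces, key=lambda f: int(f[2]) * int(f[3]))
        face = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
        # recogniser is a pre-trained model (e.g. cv2.face.LBPHFaceRecognizer from opencv-contrib).
        label, distance = recogniser.predict(face)
        return known_faces.get(label) if distance < max_distance else None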
In step S102, whether a control interface preset by the user exists is judged according to the user identity; when the result is yes, step S104 is performed, and when the result is no, step S103 is performed.
In the present embodiment, the control interface corresponding to the user identity comprises multiple television function control menus, for example a television channel selection menu, a television volume adjustment menu, and so on. If the user is using limb remote control for the first time, a personalized control interface can be set up and saved, so that the next time this user operates the television it is judged that a preset control interface exists for this user.
In step S103, when no control interface preset by the user exists, a default control interface is displayed.
In the present embodiment, the user can configure a personal control interface on the default interface and save the configured interface.
In step S104, when a control interface preset by the user exists, this control interface is displayed.
In the present embodiment, when a control interface preset by the user exists, the television remote control device retrieves the data of this control interface stored in the television, and the television displays it on the television screen.
In step S105, a current scene image is acquired and a scene depth map is computed.
In the present embodiment, after the control interface is displayed, the scene image in front of the television is acquired by the binocular 3D camera or the infrared camera, and the scene depth map is computed.
Specifically, in the present embodiment the current scene image is acquired by the binocular 3D camera, and the scene depth map is computed as follows:
As shown in Fig. 2, C1 and C2 are the optical centres of the two cameras of the binocular 3D camera, and the line C1C2 joining the two optical centres is called the baseline. I1 and I2 are the image planes corresponding to the respective viewpoints, and the intersections e1, e2 of the baseline with the two image planes are the epipoles of the two cameras. M is a point in space, and m1 and m2 are the image points of M on the two image planes. The image point m1 on plane I1 that corresponds to the image point m2 on plane I2 must lie on a straight line L1, which is called the epipolar line of m2. If the projection matrices of the two cameras are P1 and P2 respectively, the projection equations of the two cameras can be written as:
λ1·m1 = K1[R1, t1]·M = P1·M
λ2·m2 = K2[R2, t2]·M = P2·M
Eliminating M from the above equations gives:
λ2·m2 − λ1·R2R1⁻¹·m1 = K2t2 − R2R1⁻¹K1t1    (1-1)
Define the antisymmetric matrix [t]x formed from a three-dimensional vector t = (tx, ty, tz)ᵀ:
[t]x = [ 0, −tz, ty; tz, 0, −tx; −ty, tx, 0 ]
Let p be the right-hand side of formula (1-1); pre-multiplying formula (1-1) by [p]x gives:
[p]x·(λ2·m2 − λ1·R2R1⁻¹·m1) = 0
Dividing both sides by λ2 and letting λ = λ1/λ2, then
λ·[p]x·R2R1⁻¹·m1 = [p]x·m2
Pre-multiplying by m2ᵀ gives:
m2ᵀ·[p]x·R2R1⁻¹·m1 = 0
Letting F = [p]x·R2R1⁻¹, the above formula can be written as:
m2ᵀ·F·m1 = 0
The matrix F in this formula (the fundamental matrix) is determined by the positions of the two cameras and the camera parameters; because the positions of the two cameras in the 3D camera are fixed, F is known. It follows from the above formula that the epipolar line of the point m1 on the other image is: l2 = F·m1.
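As a numerical illustration of the derivation above (not part of the original disclosure), the following Python/NumPy sketch builds the antisymmetric matrix [p]x, the matrix F and the epipolar line l2 = F·m1; the intrinsic and extrinsic parameters are arbitrary placeholder values, not calibration data from the patent.

    import numpy as np

    def skew(t):
        # Antisymmetric (cross-product) matrix [t]x of a 3-vector t.
        tx, ty, tz = t
        return np.array([[0.0, -tz,  ty],
                         [ tz, 0.0, -tx],
                         [-ty,  tx, 0.0]])

    # Placeholder intrinsics and extrinsics for the two cameras of the binocular 3D camera.
    K1 = K2 = np.array([[800.0, 0.0, 320.0],
                        [0.0, 800.0, 240.0],
                        [0.0,   0.0,   1.0]])
    R1, t1 = np.eye(3), np.zeros(3)                    # first camera at the origin
    R2, t2 = np.eye(3), np.array([-60.0, 0.0, 0.0])    # second camera shifted along the baseline (mm)

    # p is the right-hand side of formula (1-1); F = [p]x * R2 * R1^-1.
    p = K2 @ t2 - R2 @ np.linalg.inv(R1) @ (K1 @ t1)
    F = skew(p) @ R2 @ np.linalg.inv(R1)

    # Epipolar line of an image point m1 (homogeneous pixel coordinates) on the other image.
    m1 = np.array([400.0, 250.0, 1.0])
    l2 = F @ m1          # any corresponding point m2 satisfies m2^T @ l2 = 0
    print(l2)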
That is, the point on one viewpoint image corresponding to a point on the other viewpoint image is confined to an epipolar line that can be computed.
Then feature point matching is performed on the two images captured by the 3D camera to obtain corresponding points between the two viewpoint images, and the distance between the camera and each object it photographs is calculated by simple trigonometry, thereby obtaining the scene depth map. Feature point matching can be implemented with algorithms such as SIFT.
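The embodiment leaves the matching algorithm open (SIFT is given only as one option). As an alternative illustration of the same principle, corresponding points plus triangulation along the baseline, the sketch below computes a dense disparity map with OpenCV block matching and converts it to depth with depth = focal length × baseline / disparity; the file names and calibration values are placeholders, not data from the patent.

    import cv2
    import numpy as np

    # Placeholder rectified stereo pair and calibration values.
    left = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_view.png", cv2.IMREAD_GRAYSCALE)
    focal_px = 800.0        # focal length in pixels
    baseline_mm = 60.0      # distance between the two optical centres

    # Dense correspondence by semi-global block matching (an alternative to sparse SIFT matching).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0   # SGBM returns fixed-point values

    # Triangulation: depth is inversely proportional to disparity.
    depth_mm = np.zeros_like(disparity)
    valid = disparity > 0
    depth_mm[valid] = focal_px * baseline_mm / disparity[valid]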
In addition, the present embodiment can also use an infrared camera to acquire the current scene image and compute the scene depth map: an infrared emitter projects infrared light in a specific pattern, and after the receiver captures the infrared image reflected back from objects, the scene depth map is calculated from the parallax between the two images (the resulting scene depth map may be as shown in Fig. 3).
In step S106, the user image in the current scene image is matted out according to the scene depth map.
In the present embodiment, after the scene depth map is obtained, a simple three-dimensional reconstruction is performed on the captured current scene image (the effect is as shown in Fig. 4a); that is, according to the scene depth map, the user image at the locked position in the current scene image is separated from the other objects (referred to as matting), and the three-dimensional coordinates of the region where the user is located are determined.
Specifically, the present embodiment mattes the user image at the locked position in the current scene image away from the other objects; because the present embodiment obtains the depth map of the current scene, the user's contour can easily be detected and plucked out of the current scene image (the effect is as shown in Fig. 4b).
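A minimal NumPy sketch of the depth-based matting step described above: pixels whose depth lies near the locked user position are kept and everything else is discarded. The tolerance value, array sizes and synthetic inputs are illustrative assumptions, not parameters given in the patent.

    import numpy as np

    def matte_user(scene_bgr, depth_mm, user_depth_mm, tolerance_mm=300):
        # Keep only pixels whose depth is close to the locked user position.
        mask = np.abs(depth_mm.astype(np.int32) - int(user_depth_mm)) < tolerance_mm
        user_only = np.zeros_like(scene_bgr)
        user_only[mask] = scene_bgr[mask]
        return user_only, mask

    # Synthetic example: a "user" 1.5 m away in front of a background 3 m away.
    h, w = 240, 320
    depth = np.full((h, w), 3000, dtype=np.uint16)
    depth[60:200, 120:220] = 1500
    scene = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
    user_img, user_mask = matte_user(scene, depth, user_depth_mm=1500)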
In step S107, the matted user image is superimposed onto the control interface to generate a virtual control interface.
In the present embodiment, the matted user image is superimposed onto the control interface by virtual reality technology and displayed on the television screen, so that the picture the user sees on the television screen is a combination of their own real-time video image and the menus of the control interface (as shown in Fig. 5). The style of the menus on the virtual control interface includes, but is not limited to, at least one of the following: two-dimensional images, three-dimensional models, and the live programme video of different television channels.
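A minimal sketch of the superimposition step: the matted user video is blended over the control-interface picture to produce the virtual control interface shown on the screen. The blending factor and the toy inputs are illustrative assumptions.

    import numpy as np

    def compose_virtual_interface(interface_bgr, user_bgr, user_mask, alpha=0.85):
        # Where the user mask is set, blend the user's video over the menu picture.
        out = interface_bgr.copy()
        mixed = (alpha * user_bgr + (1.0 - alpha) * interface_bgr).astype(np.uint8)
        out[user_mask] = mixed[user_mask]
        return out

    # Toy inputs: a flat grey menu picture plus a matted user image and its mask.
    h, w = 240, 320
    interface = np.full((h, w, 3), 60, dtype=np.uint8)
    user = np.zeros((h, w, 3), dtype=np.uint8)
    mask = np.zeros((h, w), dtype=bool)
    mask[60:200, 120:220] = True
    user[mask] = 200
    frame = compose_virtual_interface(interface, user, mask)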
In step S108, the limb action of the user is detected according to the depth map, and control instructions are sent to the television according to the limb action of the user and the virtual control interface.
In the present embodiment, because the scene depth map obtained by the binocular 3D camera or infrared camera is a kind of three-dimensional image, the accuracy of limb recognition is high, and the present embodiment can detect the user's limb action quite accurately from this scene depth map. The position on the virtual control interface at which the detected limb action operates is determined, and the operation is converted into the corresponding control instruction, which is sent to the television. For example, if the gesture is a thumbs-up and it occurs at the position of the volume-up menu on the virtual control interface, the gesture is converted into the corresponding volume-up instruction, which is sent to the television.
The present invention detects the limb action of the user according to the depth map and controls the television according to the user's limb action and the three-dimensional virtual control interface, so that the user's hands and other parts of the body can carry out wireless remote control of the television without wearing or holding any equipment. Moreover, because a depth map and a three-dimensional virtual control interface are used, the spatial position coordinates of the user's limbs and the specific limb form can be obtained accurately, which improves the accuracy and practicability of limb-based remote control and enhances the interactivity between the user and the television.
Embodiment two:
Fig. 6 shows the flow of the limb-based television remote control method provided by embodiment two of the present invention, the details of which are as follows:
In step S201, the limb action of the user in front of the television is monitored; after a specific limb action is detected, the user is recognized and the user identity is determined;
In step S202, whether a control interface preset by the user exists is judged according to the user identity; when the result is yes, step S204 is performed, and when the result is no, step S203 is performed;
In step S203, when no control interface preset by the user exists, a default control interface is displayed;
In step S204, when a control interface preset by the user exists, this control interface is displayed;
In step S205, a current scene image is acquired and a scene depth map is computed;
In step S206, the user image in the current scene image is matted out according to the scene depth map;
In step S207, the matted user image is superimposed onto the control interface to generate a virtual control interface.
Steps S201 to S207 in the present embodiment are identical to steps S101 to S107 in embodiment one; for their implementation see the related description of steps S101 to S107 in embodiment one, which is not repeated here.
In step S208, the limb action of the user is detected according to the depth map, the limb action comprising a limb form and its spatial position coordinates.
In the present embodiment, a two-dimensional image of the user's limb action is collected by the 3D camera or infrared depth camera, and a three-dimensional view of this limb action is then obtained according to the depth map, so as to obtain the exact limb form of this action (for example the form of the fingers or the form of the foot) and, taking the optical centre of the 3D camera or infrared camera as the origin, the spatial position coordinates of this limb action. The television remote control device compares the limb form of this action with the preset and stored limb forms.
In the present embodiment, because the depth map is used when obtaining the limb form and spatial position coordinates of the moving limb, the spatial position coordinates and limb form of the user's limbs can be known accurately, which effectively improves the accuracy and practicability of limb-based remote control.
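A minimal sketch of how a detected limb pixel plus its depth value can be lifted to spatial coordinates with the camera's optical centre as the origin, using the standard pinhole model; the intrinsic parameters and pixel values are placeholders, not calibration data from the patent.

    import numpy as np

    def pixel_to_camera_coords(u, v, depth_mm, fx, fy, cx, cy):
        # Back-project pixel (u, v) with measured depth into camera coordinates.
        z = float(depth_mm)
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.array([x, y, z])

    # Illustrative intrinsics; real values would come from calibrating the 3D or infrared camera.
    hand_xyz = pixel_to_camera_coords(u=412, v=230, depth_mm=1450,
                                      fx=800.0, fy=800.0, cx=320.0, cy=240.0)
    print(hand_xyz)   # spatial position coordinates of the limb, origin at the optical centre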
In step S209, whether the spatial position coordinates correspond to a menu region of the virtual control interface is judged; when the result is yes, step S210 is performed, and when the result is no, the operation is not responded to and the flow returns to step S208.
In the present embodiment, the imaging model of an arbitrary spatial point M on the camera image plane is as shown in Fig. 7. In the figure, O is the optical centre of the 3D camera or infrared camera, XYZ is the coordinate system of the camera, uv is the coordinate system of the imaging plane, and m is the image point formed by M on the imaging plane; m and M satisfy the following formula:
λm = K[R, t]M = PM
where λ is a constant. Since the spatial coordinates of M are determined and the camera is fixed, the camera matrix P is also determined, so the imaging point m of the target can be obtained from the above formula; that is, the position of the target on the virtual control interface can be determined, and hence whether the spatial position coordinates of the user's limb correspond to a menu region of the virtual control interface can be judged. When they correspond, step S210 is performed; otherwise the operation is not responded to and the flow returns to step S208.
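A short sketch of step S209 under the formula above: a 3-D limb position is projected with λm = K[R, t]M = PM and the resulting pixel is tested against a menu rectangle of the virtual control interface. The matrices, the hand position and the menu rectangle are illustrative assumptions.

    import numpy as np

    def project(K, R, t, M):
        # lambda * m = K[R, t] * M = P * M; return the pixel coordinates of m.
        m = K @ (R @ M + t)
        return m[:2] / m[2]

    def hits_menu(pixel, menu_rect):
        # menu_rect = (u_min, v_min, u_max, v_max) of one menu on the virtual control interface.
        u, v = pixel
        u0, v0, u1, v1 = menu_rect
        return u0 <= u <= u1 and v0 <= v <= v1

    # Illustrative values: identity extrinsics, a hand 1.4 m in front of the camera, one menu box.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0,   0.0,   1.0]])
    hand = np.array([0.12, -0.05, 1.40])
    pix = project(K, np.eye(3), np.zeros(3), hand)
    print(hits_menu(pix, menu_rect=(350, 150, 450, 250)))   # True for these values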
In step S210, when the spatial position coordinates correspond to a menu region of the virtual control interface, the limb form is converted into the corresponding control instruction according to preset rules and sent to the television.
In the present embodiment, the television remote control device has preset the correspondence between the user's limb forms and control instructions (for example: a clenched fist corresponds to a hide instruction, and an index finger pressed down at a vertical angle of 45 degrees or more corresponds to a select instruction), and when the spatial position coordinates of the user's limb correspond to a menu region of the virtual control interface, the acquired limb form is converted into the corresponding control instruction to control the TV.
For example, when the spatial position coordinates of the user's limb correspond to the region of a certain menu on the virtual control interface, the limb form of the user is detected; if the hand is clenched, it is converted into the preset hide instruction and this menu is hidden.
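A minimal sketch of the preset correspondence between limb forms and control instructions described above; the gesture names, instruction names and the helper instruction_for are illustrative placeholders, not identifiers defined by the patent.

    # Hypothetical rule table; applied only after the limb position has been matched
    # to a menu region of the virtual control interface (step S210).
    GESTURE_RULES = {
        "clenched_fist": "HIDE",        # clenched fist -> hide instruction, as in the example above
        "index_down_45": "SELECT",      # index finger pressed down at >= 45 degrees -> select
        "thumb_up":      "VOLUME_UP",
    }

    def instruction_for(limb_form, menu_id):
        action = GESTURE_RULES.get(limb_form)
        return None if action is None else (action, menu_id)

    print(instruction_for("clenched_fist", "channel_menu"))   # ('HIDE', 'channel_menu')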
In the present embodiment, the control instructions include, but are not limited to, setting the virtual background of the virtual control interface, showing or hiding each menu on the virtual control interface, and setting the size and position of the displayed menus. For the virtual background of the virtual control interface, the user can choose whether or not to use a virtual background and, when a virtual background is used, can select a pre-stored picture or the current scene picture as the background. In the present embodiment, when the user selects the current scene picture as the virtual background of the virtual control interface (as shown in Fig. 8), the sense of presence of limb-based remote control and the fun of the interaction between the user and the television are effectively enhanced, improving user satisfaction.
As one embodiment of the present invention, when the user exits the virtual control interface (the user leaves, or the user performs a preset limb action), the television remote control device can prompt the user as to whether the modified virtual control interface needs to be saved.
In the present embodiment, the user's hands and other parts of the body do not need to wear or hold any equipment to carry out wireless remote control of the television. Moreover, because a depth map and a three-dimensional virtual control interface are used, the spatial position coordinates of the user's limbs and the specific limb form can be obtained accurately, which improves the accuracy and practicability of limb-based remote control and enhances the interactivity between the user and the television.
Embodiment three:
Fig. 9 shows the structure of the television remote control device provided by embodiment three of the present invention; for convenience of explanation, only the parts relevant to the embodiment of the present invention are shown.
This television remote control device can be a hardware unit running inside the television, or a unit combining software and hardware.
This television remote control device 9 comprises an identity determination unit 91, a judging unit 92, a control interface display unit 93, a depth map computing unit 94, an image matting unit 95, an image superimposing unit 96 and an instruction sending unit 97. The specific functions of the units are as follows:
the identity determination unit 91 is used to monitor the limb action of the user in front of the television and, after a specific limb action is detected, to recognize the user and determine the user identity;
the judging unit 92 is used to judge, according to the user identity, whether a control interface preset by the user exists;
the control interface display unit 93 is used to display the control interface preset by the user when the result of the judging unit 92 is yes;
the depth map computing unit 94 is used to acquire a current scene image and compute a scene depth map;
the image matting unit 95 is used to matte the user image out of the current scene image according to the scene depth map;
the image superimposing unit 96 is used to superimpose the matted user image onto the control interface to generate a virtual control interface;
the instruction sending unit 97 is used to detect the limb action of the user according to the depth map and to send control instructions to the television according to the limb action of the user and the virtual control interface.
Further, the control interface display unit 93 is also used to display a default control interface when the result of the judging unit 92 is no.
In the present embodiment, the specific limb action comprises pushing a palm forward and withdrawing it, and the control instructions include, but are not limited to, setting the virtual background of the virtual control interface and showing or hiding each menu on the virtual control interface.
The television remote control device 9 provided by the present embodiment can be used in the corresponding limb-based television remote control method described above; for details, see the related description of embodiment one of the limb-based television remote control method, which is not repeated here.
Embodiment four:
Fig. 10 shows the structure of the television remote control device provided by embodiment four of the present invention; for convenience of explanation, only the parts relevant to the embodiment of the present invention are shown.
This television remote control device can be a hardware unit running inside the television, or a unit combining software and hardware.
This television remote control device 10 comprises an identity determination unit 101, a judging unit 102, a control interface display unit 103, a depth map computing unit 104, an image matting unit 105, an image superimposing unit 106 and an instruction sending unit 107. The specific functions of the units are as follows:
the identity determination unit 101 is used to monitor the limb action of the user in front of the television and, after a specific limb action is detected, to recognize the user and determine the user identity;
the judging unit 102 is used to judge, according to the user identity, whether a control interface preset by the user exists;
the control interface display unit 103 is used to display the control interface preset by the user when the result of the judging unit 102 is yes;
the depth map computing unit 104 is used to acquire a current scene image and compute a scene depth map;
the image matting unit 105 is used to matte the user image out of the current scene image according to the scene depth map;
the image superimposing unit 106 is used to superimpose the matted user image onto the control interface to generate a virtual control interface;
the instruction sending unit 107 is used to detect the limb action of the user according to the depth map and to send control instructions to the television according to the limb action of the user and the virtual control interface.
Further, the control interface display unit 103 is also used to display a default control interface when the result of the judging unit 102 is no.
Further, the instruction sending unit 107 also comprises a detection module 1071, a judging module 1072 and a control module 1073:
the detection module 1071 is used to detect the limb action of the user according to the depth map, the limb action comprising a limb form and its spatial position coordinates;
the judging module 1072 is used to judge whether the spatial position coordinates correspond to a menu region of the virtual control interface;
the control module 1073 is used to, when the spatial position coordinates correspond to a menu region of the virtual control interface, convert the limb form into the corresponding control instruction according to preset rules and send it to the television.
In the present embodiment, the specific limb action comprises pushing a palm forward and withdrawing it, and the control instructions include, but are not limited to, setting the virtual background of the virtual control interface and showing or hiding each menu on the virtual control interface.
The television remote control device 10 provided by the present embodiment can be used in the corresponding limb-based television remote control method described above; for details, see the related description of embodiment two of the limb-based television remote control method, which is not repeated here.
In the embodiments of the present invention, the television remote control device 10 monitors the user's limb actions, obtains the limb form and spatial position coordinates of each action, judges whether the spatial position coordinates correspond to a menu region of the preset virtual control interface, and, when they correspond, converts the limb form into the corresponding control instruction to control the TV. The invention enables the user's hands and other parts of the body to carry out wireless remote control of the television without wearing or holding any equipment. Moreover, because a depth map and a three-dimensional virtual control interface are used, the spatial position coordinates of the user's limbs and the specific limb form can be obtained accurately, which improves the accuracy and practicability of limb-based remote control. By superimposing the user's real-time video image onto the control interface with virtual-reality display technology, the user's limb-based remote operation is made easier and the interactivity between the user and the television is enhanced. In addition, the current scene picture can be used as the virtual background of the virtual control interface, which strengthens the sense of presence of limb-based remote control and the fun of the interaction between the user and the television, improving the user's satisfaction with the television remote control device.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (8)

1. A method for remotely controlling a television by limbs, characterized in that the method comprises the following steps:
A. monitoring the limb form of a user in front of the television and, after a specific limb form is detected, recognizing the user and determining the user identity;
B. judging, according to the user identity, whether a control interface preset by the user exists;
C. when it is judged that a control interface preset by the user exists, displaying this control interface;
D. acquiring a current scene image and computing a scene depth map;
E. matting the user image out of the current scene image according to the scene depth map;
F. superimposing the matted user image onto the control interface to generate a virtual control interface, and displaying it on the television screen, the picture displayed on the television screen being a combination of the user's real-time video image and the menus of the control interface;
G1. detecting the limb action of the user according to the depth map, the limb action comprising a limb form and its spatial position coordinates;
G2. judging whether the spatial position coordinates correspond to a menu region of the virtual control interface;
G3. when the spatial position coordinates correspond to a menu region of the virtual control interface, converting the limb form into a corresponding control instruction according to preset rules and sending it to the television.
2. The method as claimed in claim 1, characterized in that the specific limb action comprises pushing a palm forward and withdrawing it.
3. The method as claimed in claim 1, characterized in that the method further comprises: when it is judged that no control interface preset by the user exists, displaying a default control interface.
4. The method as claimed in claim 1, characterized in that the control instructions comprise setting the virtual background of the virtual control interface and showing or hiding each menu on the virtual control interface.
5. A television remote control device, characterized in that the device comprises:
an identity determination unit for monitoring the limb form of a user in front of the television and, after a specific limb form is detected, recognizing the user and determining the user identity;
a judging unit for judging, according to the user identity, whether a control interface preset by the user exists;
a control interface display unit for displaying the control interface preset by the user when the judging unit's result is yes;
a depth map computing unit for acquiring a current scene image and computing a scene depth map;
an image matting unit for matting the user image out of the current scene image according to the scene depth map;
an image superimposing unit for superimposing the matted user image onto the control interface to generate a virtual control interface and displaying it on the television screen, the picture displayed on the television screen being a combination of the user's real-time video image and the menus of the control interface;
an instruction sending unit, the instruction sending unit comprising:
a detection module for detecting the limb action of the user according to the depth map, the limb action comprising a limb form and its spatial position coordinates;
a judging module for judging whether the spatial position coordinates correspond to a menu region of the virtual control interface;
a control module for, when the spatial position coordinates correspond to a menu region of the virtual control interface, converting the limb form into a corresponding control instruction according to preset rules and sending it to the television.
6. The device as claimed in claim 5, characterized in that the specific limb action comprises pushing a palm forward and withdrawing it.
7. The device as claimed in claim 5, characterized in that the control interface display unit is also used to display a default control interface when the result of the judging unit is no.
8. The device as claimed in claim 5, characterized in that the control instructions comprise setting the virtual background of the virtual control interface and showing or hiding each menu on the virtual control interface.
CN201110332552.9A 2011-10-27 2011-10-27 Method for remotely controlling television by limbs and television remote control device Active CN102375542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110332552.9A CN102375542B (en) 2011-10-27 2011-10-27 Method for remotely controlling television by limbs and television remote control device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110332552.9A CN102375542B (en) 2011-10-27 2011-10-27 Method for remotely controlling television by limbs and television remote control device

Publications (2)

Publication Number Publication Date
CN102375542A CN102375542A (en) 2012-03-14
CN102375542B true CN102375542B (en) 2015-02-11

Family

ID=45794248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110332552.9A Active CN102375542B (en) 2011-10-27 2011-10-27 Method for remotely controlling television by limbs and television remote control device

Country Status (1)

Country Link
CN (1) CN102375542B (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801924B (en) * 2012-07-20 2014-12-03 合肥工业大学 Television program host interaction system based on Kinect
US9304603B2 (en) * 2012-11-12 2016-04-05 Microsoft Technology Licensing, Llc Remote control using depth camera
CN103514387B (en) * 2012-11-29 2016-06-22 Tcl集团股份有限公司 A kind of method improving electronic device user identification precision and electronic equipment
CN103019376B (en) * 2012-12-04 2016-08-10 深圳Tcl新技术有限公司 The distant control function collocation method of identity-based identification and system
CN103023654B (en) * 2012-12-10 2016-06-29 深圳Tcl新技术有限公司 Removal dither method in intelligent remote control system identification process and device
CN103902192A (en) * 2012-12-28 2014-07-02 腾讯科技(北京)有限公司 Trigger control method and trigger control device for man-machine interactive operation
CN103529762B (en) * 2013-02-22 2016-08-31 Tcl集团股份有限公司 A kind of intelligent home furnishing control method based on sensor technology and system
CN104063041B (en) * 2013-03-21 2018-02-27 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN203151688U (en) * 2013-04-24 2013-08-21 滨州学院 Gesture television remote control
EP2989528A4 (en) 2013-04-26 2016-11-23 Hewlett Packard Development Co Detecting an attentive user for providing personalized content on a display
CN104183011A (en) * 2013-05-27 2014-12-03 万克林 Three-dimensional interactive virtual reality (3D IVR) restoring system
CN103366159A (en) * 2013-06-28 2013-10-23 京东方科技集团股份有限公司 Hand gesture recognition method and device
CN104345871B (en) * 2013-07-26 2017-06-23 株式会社东芝 device selection system
CN103530060B (en) * 2013-10-31 2016-06-22 京东方科技集团股份有限公司 Display device and control method, gesture identification method
CN104363494A (en) * 2013-12-21 2015-02-18 滁州惠智科技服务有限公司 Gesture recognition system for smart television
CN104219587A (en) * 2014-08-20 2014-12-17 深圳智意能电子科技有限公司 Method and device used for controlling application
CN105528060B (en) * 2014-09-30 2018-11-09 联想(北京)有限公司 terminal device and control method
CN104571530B (en) * 2015-01-30 2019-03-29 联想(北京)有限公司 Information processing method and information processing unit
CN104616190A (en) * 2015-03-05 2015-05-13 广州新节奏智能科技有限公司 Multi-terminal 3D somatosensory shopping method and system based on internet and mobile internet
CN104618819A (en) * 2015-03-05 2015-05-13 广州新节奏智能科技有限公司 Television terminal-based 3D somatosensory shopping system and method
CN104765459B (en) * 2015-04-23 2018-02-06 无锡天脉聚源传媒科技有限公司 The implementation method and device of pseudo operation
CN106331855A (en) * 2015-06-18 2017-01-11 冠捷投资有限公司 Display for passively sensing and identifying remote control behavior
CN105391964B (en) * 2015-11-04 2019-02-12 Oppo广东移动通信有限公司 A kind of video data handling procedure and device
CN105517248B (en) * 2016-01-19 2018-06-19 浙江大学 One kind, which is seen, flashes LED light controller and its control method
CN106204751B (en) * 2016-07-13 2020-07-17 广州大西洲科技有限公司 Real object and virtual scene real-time integration method and integration system
CN106875465B (en) * 2017-01-20 2021-06-11 奥比中光科技集团股份有限公司 RGBD image-based three-dimensional control space establishment method and device
CN106997457B (en) * 2017-03-09 2020-09-11 Oppo广东移动通信有限公司 Figure limb identification method, figure limb identification device and electronic device
CN107092347B (en) * 2017-03-10 2020-06-09 深圳市博乐信息技术有限公司 Augmented reality interaction system and image processing method
WO2018201334A1 (en) * 2017-05-03 2018-11-08 深圳市智晟达科技有限公司 Digital television system
CN107255928A (en) * 2017-06-05 2017-10-17 珠海格力电器股份有限公司 Equipment control method and device and household appliance
CN107864390A (en) * 2017-10-24 2018-03-30 深圳前海茂佳软件科技有限公司 Control method, television set and the computer-readable storage medium of television set
CN107888961A (en) * 2017-11-27 2018-04-06 信利光电股份有限公司 A kind of camera function management method and relevant apparatus based on intelligent television
CN108256497A (en) * 2018-02-01 2018-07-06 北京中税网控股股份有限公司 A kind of method of video image processing and device
CN111527468A (en) * 2019-11-18 2020-08-11 华为技术有限公司 Air-to-air interaction method, device and equipment
CN113448427B (en) 2020-03-24 2023-09-12 华为技术有限公司 Equipment control method, device and system
CN113949936A (en) * 2020-07-17 2022-01-18 华为技术有限公司 Screen interaction method and device of electronic equipment
CN112363667A (en) * 2020-11-12 2021-02-12 四川长虹电器股份有限公司 Touch remote control method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201294582Y (en) * 2008-11-11 2009-08-19 天津三星电子有限公司 Television set controlled through user gesture motion
CN102221887A (en) * 2011-06-23 2011-10-19 康佳集团股份有限公司 Interactive projection system and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100776801B1 (en) * 2006-07-19 2007-11-19 한국전자통신연구원 Gesture recognition method and system in picture process system
CN101998161A (en) * 2009-08-14 2011-03-30 Tcl集团股份有限公司 Face recognition-based television program watching method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201294582Y (en) * 2008-11-11 2009-08-19 天津三星电子有限公司 Television set controlled through user gesture motion
CN102221887A (en) * 2011-06-23 2011-10-19 康佳集团股份有限公司 Interactive projection system and method

Also Published As

Publication number Publication date
CN102375542A (en) 2012-03-14

Similar Documents

Publication Publication Date Title
CN102375542B (en) Method for remotely controlling television by limbs and television remote control device
KR101381928B1 (en) virtual touch apparatus and method without pointer on the screen
US9329691B2 (en) Operation input apparatus and method using distinct determination and control areas
CN103135759B (en) Control method for playing multimedia and system
US10313657B2 (en) Depth map generation apparatus, method and non-transitory computer-readable medium therefor
CN105763917B (en) A kind of control method and system of terminal booting
US9286722B2 (en) Information processing apparatus, display control method, and program
US20120293544A1 (en) Image display apparatus and method of selecting image region using the same
US20140257532A1 (en) Apparatus for constructing device information for control of smart appliances and method thereof
US20140300542A1 (en) Portable device and method for providing non-contact interface
US20130249786A1 (en) Gesture-based control system
US8416189B2 (en) Manual human machine interface operation system and method thereof
JP7026825B2 (en) Image processing methods and devices, electronic devices and storage media
KR101797260B1 (en) Information processing apparatus, information processing system and information processing method
CN104205083B (en) A kind of method and apparatus for data processing based on cloud
JP2013521544A (en) Augmented reality pointing device
CN105425964A (en) Gesture identification method and system
KR101441882B1 (en) method for controlling electronic devices by using virtural surface adjacent to display in virtual touch apparatus without pointer
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
KR101654311B1 (en) User motion perception method and apparatus
CN107145822A (en) Deviate the method and system of user's body feeling interaction demarcation of depth camera
KR101321274B1 (en) Virtual touch apparatus without pointer on the screen using two cameras and light source
CN111901518B (en) Display method and device and electronic equipment
Goto et al. Development of an Information Projection Interface Using a Projector–Camera System
CN110688018B (en) Virtual picture control method and device, terminal equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant