CN112637655A - Control method and system of smart television and mobile terminal - Google Patents
- Publication number
- CN112637655A (application number CN202110026191.9A)
- Authority
- CN
- China
- Prior art keywords
- information
- mobile terminal
- user
- camera device
- operation instruction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/441—Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
- H04N21/4415—Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
Abstract
Embodiments of the invention disclose a control method for a smart television, applied to a mobile terminal connected to the smart television, the mobile terminal comprising a camera device. The method comprises the following steps: the mobile terminal collects posture information of a user through the camera device, the posture information comprising at least one of gesture information and action information; an operation instruction corresponding to the posture information is determined; and the operation instruction is sent to the smart television so that the smart television executes it, realizing interactive control of the smart television. A control system for a smart television and a mobile terminal are also disclosed. The embodiments of the invention improve the interactivity and convenience of smart-television control.
Description
Technical Field
The invention relates to the technical field of the Internet of Things, and in particular to a control method and system for a smart television, and a mobile terminal.
Background
With the development of smart televisions, the traditional single and tedious mode of controlling a television through a remote controller can no longer meet users' multidimensional needs. In related solutions, different gestures are input through the remote controller and matched to its corresponding operation instructions (up, down, left, right, and so on), but this mode of operation still delivers a mechanical and unnatural experience.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a control method and system for a smart television, and a mobile terminal.
In a first aspect of the invention, a control method for a smart television is provided. The method is applied to a mobile terminal connected to the smart television, and the mobile terminal comprises a camera device;
the method comprises the following steps:
the mobile terminal collects posture information of a user through the camera device, wherein the posture information comprises at least one of gesture information and action information;
determining an operation instruction corresponding to the posture information;
and sending the operation instruction to the smart television so that the smart television executes the operation instruction, realizing interactive control of the smart television.
Optionally, before the step in which the mobile terminal collects the posture information of the user through the camera device, the method further includes:
collecting an image through the camera device, and identifying user information in the collected image;
and verifying the user information, and executing the step in which the mobile terminal collects the posture information of the user through the camera device only when the user information passes verification.
Optionally, the step of verifying the user information further includes:
performing living-body recognition on the user information, and executing the step in which the mobile terminal collects the posture information of the user through the camera device only when the living-body recognition passes.
Optionally, the step in which the mobile terminal collects the posture information of the user through the camera device further includes:
acquiring an image collected by the camera device, performing feature recognition, and extracting gesture feature information to determine the gesture information;
and the step of determining the operation instruction corresponding to the posture information further includes:
determining the operation instruction corresponding to the extracted gesture information according to a preset correspondence between gesture information and operation instructions.
Optionally, the step in which the mobile terminal collects the posture information of the user through the camera device further includes:
acquiring an image collected by the camera device, performing feature recognition, and extracting action feature information; and determining action information according to the action feature information;
and the step of determining the operation instruction corresponding to the posture information further includes:
determining the operation instruction corresponding to the determined action information according to a preset correspondence between action information and operation instructions.
Optionally, the step of acquiring the image collected by the camera device, performing feature recognition, and extracting the action feature information further includes:
performing feature recognition on the collected image, recognizing a user in the image, and determining a plurality of action feature points corresponding to the user, wherein the action feature information comprises the position information of the plurality of action feature points;
and the step of determining the action information according to the action feature information further includes:
determining the action information according to the position information of the plurality of action feature points.
Optionally, the method further includes:
receiving a target locking instruction;
determining, through the camera device, the input target locking instruction and the position information corresponding to the target locking instruction;
and determining a locked target according to the position information and the picture displayed by the smart television, so that the smart television executes an operation instruction directed at the locked target.
In a second aspect of the present invention, a control system for a smart television is provided. The system includes a smart television and a mobile terminal connected to the smart television, the mobile terminal being provided with a camera device for collecting images;
the mobile terminal is configured to collect posture information of a user through the camera device, determine an operation instruction corresponding to the posture information, and send the operation instruction to the smart television, wherein the posture information comprises at least one of gesture information and action information;
and the smart television is configured to execute the operation instruction, realizing interactive control of the smart television.
Optionally, the system further includes a base on which the mobile terminal is placed and to which the mobile terminal is connected;
the mobile terminal generates a rotation instruction based on the posture information of the user collected by the camera device and sends the rotation instruction to the base, so that the base rotates by means of its built-in rotation device and thereby turns the mobile terminal.
In a third aspect of the present invention, a mobile terminal is provided, comprising a memory and a processor, the memory storing executable code which, when run on the processor, implements the control method of the smart television described above.
Embodiments of the invention have the following beneficial effects:
With the control method and system for a smart television and the mobile terminal described above, a mobile terminal such as a smartphone is connected to the smart television; the user's gestures and actions are collected and recognized through the camera of the mobile terminal, and the preset operation instructions corresponding to those gestures and actions are determined to control the smart television. This realizes interactive control of the smart television, increases the diversity and enjoyment of smart-television control, and improves the user experience.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Wherein:
fig. 1 is a schematic composition diagram of a control system of a smart television in one embodiment;
fig. 2 is a schematic flowchart of a control method of a smart television according to an embodiment;
fig. 3 is a schematic composition diagram of a control system of a smart television in one embodiment;
fig. 4 is a schematic structural diagram of a mobile terminal operating the control method of the smart television in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments derived by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
This embodiment provides a control system for a smart television and a control method for a smart television based on that system.
Specifically, referring to fig. 1, fig. 1 shows a schematic composition diagram of the control system of the smart television. The control system 100 of the smart television comprises the smart television 200 and the mobile terminal 300 connected to the smart television 200. The smart television 200 and the mobile terminal 300 can establish a communication connection via Bluetooth, Wi-Fi, or NFC and exchange data over that connection. In this embodiment, the mobile terminal 300 is used to control the smart television 200, improving the applicability and convenience of smart-television control.
In this embodiment, to make control of the smart television 200 convenient, the mobile terminal 300 controls the smart television 200 based on information, such as the user's actions, collected by its camera.
The mobile terminal is a smartphone, tablet computer, or the like; it is provided with a camera device and can collect images through that device to control the smart television 200. In practice, an idle smartphone can be used, providing convenient control at no extra cost. Information such as the user's gestures, actions, and face is collected through the camera device of the mobile terminal; the collected images and videos are analyzed to obtain the corresponding control instructions, which are then sent to the smart television over the communication connection between the two devices, thereby controlling the smart television.
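The capture-analyze-send loop described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `recognize_posture`, `GESTURE_COMMANDS`, and `TvLink` are invented names, and the "frame" is pre-labelled so the sketch stays self-contained.

```python
# Hypothetical sketch of the pipeline: camera frame -> recognized posture
# -> preset operation instruction -> sent to the smart television.

GESTURE_COMMANDS = {
    "swipe_left": "PREV_CHANNEL",
    "swipe_right": "NEXT_CHANNEL",
    "fist": "CONFIRM",
    "open_hand": "BACK",
}

class TvLink:
    """Stand-in for the Bluetooth/Wi-Fi/NFC connection to the TV."""
    def __init__(self):
        self.sent = []

    def send(self, instruction):
        self.sent.append(instruction)

def recognize_posture(frame):
    # A real system would run image analysis here; for illustration the
    # "frame" is already a labelled gesture string.
    return frame

def control_step(frame, link):
    gesture = recognize_posture(frame)
    instruction = GESTURE_COMMANDS.get(gesture)
    if instruction is not None:       # unrecognized postures send nothing
        link.send(instruction)
    return instruction

link = TvLink()
control_step("fist", link)            # sends "CONFIRM"
control_step("shrug", link)           # not in the table: nothing sent
print(link.sent)
```

The table-lookup design mirrors the patent's "preset correspondence" between posture information and operation instructions.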
Further, referring to fig. 2, fig. 2 shows a schematic flowchart of the control method of the smart television based on the above control system.
Specifically, the method for controlling the smart television includes the steps shown in fig. 2:
step S102: the mobile terminal collects gesture information of a user through the camera device, wherein the gesture information comprises at least one of gesture information and action information;
step S104: the mobile terminal determines an operation instruction corresponding to the attitude information;
step S106: the mobile terminal sends the operation instruction to the intelligent television;
step S108: and the intelligent television executes the operation instruction to realize interactive control on the intelligent television.
After the communication connection between the mobile terminal and the smart television is established, information about the user, including posture information, is collected through the camera device on the mobile terminal. The user here is a user identified by face recognition in an image collected by the camera device, and at least one of the action information and gesture information corresponding to that user is collected. The collected action and gesture information is then recognized to determine the preset operation instruction corresponding to it, and the determined operation instruction is sent directly to the smart television to realize interactive control of the smart television.
Before the posture information of the user is collected through the camera device of the mobile terminal, the user needs to be identified to determine whether he or she may control the smart television. Specifically, an image is collected through the camera device; face-region recognition determines whether a face region exists in the collected image, and further face recognition is then performed on the recognized face region to determine whether the corresponding user is one whose face was enrolled in advance. In other words, the user information in the collected image must be verified: the subsequent smart-television control steps are executed only when the verification passes; otherwise the user is considered to have no control right over the smart television, and no subsequent control step is executed.
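The verification gate above can be illustrated with a toy sketch. The enrolment store and the exact-match comparison are assumptions for illustration only; a real system would compare face-embedding vectors under a similarity threshold.

```python
# Illustrative gate: only enrolled users may control the TV.
# ENROLLED_FACES and the exact string match are invented for the sketch.

ENROLLED_FACES = {"alice": "feature-a", "bob": "feature-b"}

def verify_user(face_feature):
    """Return the matching enrolled user name, or None if verification fails."""
    for name, feature in ENROLLED_FACES.items():
        if feature == face_feature:
            return name
    return None

def accept_posture(face_feature, posture):
    user = verify_user(face_feature)
    if user is None:
        return None          # no control right: skip all subsequent steps
    return (user, posture)   # posture proceeds to instruction lookup
```

An unenrolled face stops the pipeline before any posture is interpreted, matching the "no control right" branch in the text.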
Furthermore, to improve the security of smart-television control, living-body recognition is performed on the recognized user information. Specifically, after the image is captured by the camera device and the user information is recognized, if the face recognition passes, living-body recognition is performed based on the user information. The living-body recognition result may be obtained by analyzing a video captured by the camera device, or from a sensor such as an infrared sensor provided on the mobile terminal. Subsequent control operations on the smart television are executed only when the living-body recognition passes; otherwise they are prohibited.
Further, in this embodiment, face recognition and user identification not only gate the control right over the smart television but also enable face login and the user-management functions based on it, including intelligent recommendation and personalized customization for the user corresponding to the recognized face. Moreover, an intelligent child lock and face-based authorization for parents can be realized, providing intelligent multi-user management and improving the user experience.
Recognizing the image captured by the camera device to determine the posture information of the user is a process of extracting image features. Specifically, feature recognition is performed on the collected image: its features are extracted according to a preset feature-recognition algorithm to obtain the gesture feature information and/or action feature information contained in the image, from which the corresponding action or gesture information is determined. For example, gesture information includes, but is not limited to, gestures such as hand movement, making a fist, and opening the hand; this embodiment does not limit the specific types of gesture and action information. In a specific implementation, feature recognition may determine a plurality of action feature points of the user, i.e., the position information of those points, from which the corresponding action information is determined. The action feature points may be skeletal feature points corresponding to the recognized hand or human body, for example feature points of the hand joints or of the body's joints and bones; the user's gesture or action is then recognized from these skeletal feature points according to a preset algorithm to determine the corresponding action or gesture information.
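A minimal keypoint-based classifier gives the flavor of this step. The landmark layout (one palm point, five fingertip points in normalized image coordinates) and the distance threshold are invented for illustration; real systems use far richer skeletal models.

```python
# Toy classifier over hand feature points: an open hand has fingertips far
# from the palm, a fist has them close. Layout and threshold are assumptions.
import math

def classify_hand(landmarks):
    """landmarks: {'palm': (x, y), 'tips': [(x, y), ...]}, coords in 0..1."""
    palm = landmarks["palm"]
    dists = [math.dist(palm, tip) for tip in landmarks["tips"]]
    mean = sum(dists) / len(dists)
    return "open_hand" if mean > 0.15 else "fist"

open_pose = {"palm": (0.5, 0.5),
             "tips": [(0.5, 0.25), (0.45, 0.2), (0.55, 0.2),
                      (0.6, 0.25), (0.4, 0.3)]}
fist_pose = {"palm": (0.5, 0.5),
             "tips": [(0.5, 0.45), (0.48, 0.44), (0.52, 0.44),
                      (0.54, 0.46), (0.46, 0.46)]}
print(classify_hand(open_pose), classify_hand(fist_pose))
```

Distances from the palm average roughly 0.27 for the open pose and 0.06 for the fist, so a single threshold separates them here; more gestures would need more features.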
After posture information such as the action information and gesture information of the user has been obtained, the corresponding operation instruction can be determined according to the preset correspondence between gesture information and operation instructions, between action information and operation instructions, or between other posture information and operation instructions. Each such correspondence is preset, either by the system or customized by the user according to his or her own needs.
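System presets with per-user overrides can be sketched as two layered lookup tables. The command names and override mechanism are illustrative assumptions, not the patent's scheme.

```python
# Preset correspondence between posture information and operation
# instructions, with optional user customization layered on top.

SYSTEM_DEFAULTS = {
    "swipe_up": "VOLUME_UP",
    "swipe_down": "VOLUME_DOWN",
    "fist": "CONFIRM",
}

def resolve_instruction(posture, user_overrides=None):
    """User customizations take priority over the system presets."""
    table = dict(SYSTEM_DEFAULTS)
    if user_overrides:
        table.update(user_overrides)
    return table.get(posture)
```

For example, a user could remap `"fist"` to `"MUTE"` without touching the system defaults for other postures.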
For example, when a user performs a preset action or gesture within the range of the camera device of the mobile terminal, the camera device collects and recognizes the corresponding image, the corresponding control instruction is determined and sent to the smart television, and the smart television is thereby controlled.
It should be noted that in this embodiment, control of the smart television based on the action and gesture information collected by the camera device covers not only basic operations such as up, down, left, right, confirm, and back, but also three-dimensional interactive operations such as gesture movement, tapping, thumbs-up, and pulling.
Further, the interactive control between the smart television and the mobile terminal can also take place during a game, in which case the user's control target in the game or in another television application needs to be selected. Specifically, a target locking instruction input by the user is obtained through the camera device, the position information corresponding to the instruction is determined, and the locked target on the smart television is then determined according to that position information and the picture displayed by the smart television, so that the target can be locked in time and subsequently controlled.
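One way to realize this target selection is to map a pointing position in the camera frame onto the TV picture and hit-test it against on-screen targets. The mirroring, resolution, and rectangle hit-test below are assumptions for the sketch; the patent does not specify the mapping.

```python
# Map a normalized pointing position (0..1 in the camera frame) to TV-picture
# coordinates and find which on-screen target it locks. All details assumed.

def to_screen(norm_x, norm_y, width, height):
    # Mirror horizontally: the camera faces the user, the TV faces the same way.
    return (round((1.0 - norm_x) * width), round(norm_y * height))

def lock_target(norm_x, norm_y, targets, width=1920, height=1080):
    """targets: {name: (x0, y0, x1, y1)} rectangles on the TV picture."""
    x, y = to_screen(norm_x, norm_y, width, height)
    for name, (x0, y0, x1, y1) in targets.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```

A pointing position that falls outside every target rectangle simply locks nothing, leaving the previous state unchanged.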
Further, in this embodiment, when recognizing gesture or action information, it is also necessary to identify which hand produced it, for example whether the current gesture or action information was input by the left hand or the right hand; the corresponding control instruction is then determined per hand. In a specific example, the left and right hands correspond to different types of control instructions: the left hand corresponds to menu-related control instructions and the right hand to control instructions for specific operations, so the same gesture yields different control instructions depending on which hand gives it. The relation between each hand and its control instructions may be preset by the system or set by the user according to personal habits, further increasing the fun of interactive control with the smart television and improving the user experience.
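Keying the lookup table on (hand, gesture) pairs realizes this handedness-dependent mapping directly. The concrete command names are invented; only the left-hand-for-menus / right-hand-for-operations split comes from the text.

```python
# Same gesture, different instruction depending on the hand that gave it:
# left hand -> menu commands, right hand -> operation commands (names assumed).

HAND_COMMANDS = {
    ("left", "swipe_up"):  "MENU_PREV",
    ("right", "swipe_up"): "SCROLL_UP",
    ("left", "fist"):      "MENU_SELECT",
    ("right", "fist"):     "CONFIRM",
}

def resolve_by_hand(hand, gesture):
    return HAND_COMMANDS.get((hand, gesture))
```

Because the key is the pair, adding a user-customized binding for one hand never disturbs the other hand's bindings.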
In this embodiment, to further improve the user experience of controlling the smart television, the control system 100 additionally includes a base 400, as shown in fig. 3, on which the mobile terminal 300 is placed. The camera of the mobile terminal can track the user's movement, but if the user's position changes he or she may move out of the camera's shooting range; the base 400 can then be used to keep tracking the user's position. Specifically, the base is provided with a rotation device, a power device, and a control device, and is connected to the mobile terminal. The mobile terminal sends instructions to the control device according to the captured images, and the control device drives the power device to rotate the base, turning the mobile terminal so that the camera device keeps tracking the user's position, further diversifying smart-television control.
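A simple proportional rule with a dead zone shows how such rotation instructions might be derived from the captured images. The command names, dead-zone width, and face-centre input are assumptions for the sketch; the patent only says the terminal instructs the base based on the captured images.

```python
# Toy base controller: keep the detected face horizontally centred in the
# camera frame by issuing left/right rotation commands (names assumed).

def rotation_command(face_x, frame_width, dead_zone=0.1):
    """face_x: horizontal face-centre position in pixels within the frame."""
    offset = face_x / frame_width - 0.5    # -0.5 (far left) .. +0.5 (far right)
    if abs(offset) <= dead_zone:
        return "HOLD"                      # face near centre: don't jitter
    return "ROTATE_RIGHT" if offset > 0 else "ROTATE_LEFT"
```

The dead zone prevents the base from oscillating when the user stands still near the centre of the frame.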
With the above control method and system, the camera and processor of a mobile terminal such as a smartphone compensate for a smart television that has no camera device, or one of too low performance to realize various intelligent control functions, without modifying the smart television or increasing its manufacturing cost.
Fig. 4 shows an internal structure diagram of a mobile terminal (computer device) implementing the control method of the smart television in one embodiment. The computer device may specifically be a terminal, or a server. As shown in fig. 4, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to carry out the above method; the internal memory may likewise store such a computer program. Those skilled in the art will appreciate that the architecture shown in fig. 4 is merely a block diagram of some of the structures relevant to the present solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange them differently.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations are described, but any combination of these technical features that contains no contradiction should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A control method of a smart television, characterized in that a mobile terminal is connected with the smart television, and the mobile terminal comprises a camera device;
the method comprises the following steps:
the mobile terminal collects posture information of a user through the camera device, wherein the posture information comprises at least one of gesture information and action information;
determining an operation instruction corresponding to the posture information;
and sending the operation instruction to the smart television so that the smart television executes the operation instruction to realize interactive control of the smart television.
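The control flow of claim 1 can be sketched as a minimal loop on the mobile terminal: recognize a posture, look up the corresponding operation instruction, and forward it to the television. The posture names, instruction names, and the mapping table below are illustrative assumptions, not part of the claim.

```python
# Hypothetical posture -> operation-instruction mapping; in practice the
# correspondence would be preset on the mobile terminal.
POSTURE_TO_INSTRUCTION = {
    "palm_open": "pause",
    "swipe_left": "previous_channel",
    "swipe_right": "next_channel",
}

def determine_instruction(posture):
    """Determine the operation instruction corresponding to the posture."""
    return POSTURE_TO_INSTRUCTION.get(posture)

def control_tv(posture, send):
    """Send the matching instruction to the smart TV via `send`, if any."""
    instruction = determine_instruction(posture)
    if instruction is not None:
        send(instruction)
    return instruction

sent = []
control_tv("swipe_left", sent.append)
print(sent)  # ['previous_channel']
```

An unrecognized posture simply produces no instruction, so the television is never sent spurious commands.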
2. The control method of the smart television according to claim 1, wherein before the step of the mobile terminal collecting the posture information of the user through the camera device, the method further comprises:
acquiring an image through the camera device, and identifying user information in the acquired image;
and verifying the user information, and executing the step that the mobile terminal acquires the posture information of the user through the camera device under the condition that the user information passes the verification.
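Claim 2 gates posture capture behind user verification. A minimal sketch, assuming a simple whitelist of recognized user identities; the identity strings and the capture callback are invented for illustration:

```python
# Hypothetical set of user identities that pass verification.
AUTHORIZED_USERS = {"user_alice", "user_bob"}

def verify_user(user_info):
    """Verify the user information recognized in the captured image."""
    return user_info in AUTHORIZED_USERS

def capture_if_verified(user_info, capture_posture):
    """Collect posture information only when verification passes."""
    if verify_user(user_info):
        return capture_posture()
    return None

print(capture_if_verified("user_alice", lambda: "palm_open"))  # palm_open
print(capture_if_verified("stranger", lambda: "palm_open"))    # None
```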
3. The control method of the smart television according to claim 2, wherein the step of verifying the user information further comprises:
performing living body recognition on the user information, and executing the step of the mobile terminal collecting the posture information of the user through the camera device under the condition that the living body recognition passes.
4. The control method of the smart television according to claim 1, wherein the step of the mobile terminal collecting the posture information of the user through the camera device further comprises:
acquiring an image acquired by the camera device, performing feature recognition, and extracting gesture feature information to determine gesture information;
the step of determining the operation instruction corresponding to the posture information further comprises:
and determining an operation instruction corresponding to the extracted gesture information according to the corresponding relation between the preset gesture information and the operation instruction.
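The preset correspondence in claim 4 can be sketched as matching the extracted gesture feature information against stored gesture templates; the two-dimensional feature vectors and template values below are invented purely for illustration.

```python
# Hypothetical preset table: gesture name -> (feature template, instruction).
PRESET_GESTURES = {
    "fist":      ((0.0, 0.0), "mute"),
    "open_hand": ((1.0, 1.0), "volume_up"),
}

def match_gesture(features):
    """Return (gesture name, instruction) whose template is nearest
    to the extracted feature vector (squared Euclidean distance)."""
    def sq_dist(template):
        return sum((a - b) ** 2 for a, b in zip(features, template))
    name, (template, instruction) = min(
        PRESET_GESTURES.items(), key=lambda item: sq_dist(item[1][0])
    )
    return name, instruction

print(match_gesture((0.9, 1.1)))  # ('open_hand', 'volume_up')
```

A nearest-template match tolerates small recognition noise in the extracted features, which an exact-equality lookup would not.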
5. The control method of the smart television according to claim 1, wherein the step of the mobile terminal collecting the posture information of the user through the camera device further comprises:
acquiring an image acquired by the camera device, performing feature recognition, and extracting action feature information; determining action information according to the action characteristic information;
the step of determining the operation instruction corresponding to the posture information further comprises:
and determining an operation instruction corresponding to the determined action information according to the corresponding relation between the preset action information and the operation instruction.
6. The control method of the smart television according to claim 5, wherein the step of acquiring the image captured by the camera device, performing feature recognition, and extracting action feature information further comprises:
performing feature recognition on the acquired image, recognizing a user in the image, and determining a plurality of action feature points corresponding to the user, wherein the action feature information comprises position information of the plurality of action feature points;
the step of determining the action information according to the action feature information further comprises:
and determining the action information according to the position information of the plurality of action feature points.
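Claims 5-6 determine action information from the positions of multiple action feature points (the description mentions bone and hand joint points as examples). The point names, image-coordinate convention, and the single raised-hand rule below are illustrative assumptions, not the claimed recognition model:

```python
def classify_action(points):
    """Classify a simple action from (x, y) positions of feature points.

    `points` maps feature-point names to normalized image coordinates,
    where y grows downward (image convention).
    """
    wrist = points["right_wrist"]
    shoulder = points["right_shoulder"]
    if wrist[1] < shoulder[1]:  # wrist above shoulder in the image
        return "hand_raised"
    return "idle"

frame = {"right_wrist": (0.6, 0.2), "right_shoulder": (0.55, 0.4)}
print(classify_action(frame))  # hand_raised
```

A real implementation would track the feature points across several frames to distinguish dynamic actions, whereas this rule inspects a single frame.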
7. The control method of the smart television according to claim 1, wherein the method further comprises:
receiving a target locking instruction;
identifying the input target locking instruction through the camera device, and determining position information corresponding to the target locking instruction;
and determining a locked target according to the position information and the picture displayed by the smart television, so that the smart television executes an operation instruction directed at the locked target.
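The target-locking step of claim 7 can be sketched as projecting a position detected by the camera (here assumed to be normalized 0-1 coordinates) onto the television's displayed picture and picking the on-screen element it falls in. The screen size and element layout are invented for illustration:

```python
def lock_target(norm_pos, screen_w, screen_h, elements):
    """Map a normalized camera position to the screen element it falls in.

    `elements` is a list of (name, x, y, w, h) rectangles in screen pixels;
    returns the first containing element's name, or None.
    """
    px = norm_pos[0] * screen_w
    py = norm_pos[1] * screen_h
    for name, x, y, w, h in elements:
        if x <= px < x + w and y <= py < y + h:
            return name
    return None

ui = [("play_button", 800, 400, 320, 160), ("menu", 0, 0, 200, 1080)]
print(lock_target((0.5, 0.45), 1920, 1080, ui))  # play_button
```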
8. A control system of a smart television, characterized by comprising the smart television and a mobile terminal connected with the smart television, wherein the mobile terminal is provided with a camera device for collecting images;
the mobile terminal is used for collecting posture information of a user through the camera device, determining an operation instruction corresponding to the posture information, and sending the operation instruction to the smart television, wherein the posture information comprises at least one of gesture information and action information;
the smart television is used for executing the operation instruction so as to realize interactive control of the smart television.
9. The control system of the smart television according to claim 8, further comprising a base for placing the mobile terminal and connecting with the mobile terminal;
the mobile terminal generates a rotation instruction based on the posture information of the user collected by the camera device and sends the rotation instruction to the base, so that the base rotates by means of a rotation device included in the base, thereby driving the rotation of the mobile terminal.
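One plausible way the terminal could derive claim 9's rotation instruction is to keep the user centered in the camera frame; the normalized coordinate, dead band, and instruction names below are assumptions for illustration, not the claimed mechanism:

```python
def rotation_instruction(user_center_x, deadband=0.1):
    """Derive a rotation instruction from the user's horizontal position.

    `user_center_x` is the user's center in normalized frame coordinates
    (0.0 = left edge, 1.0 = right edge); within the dead band the base
    holds still to avoid jitter.
    """
    offset = user_center_x - 0.5
    if abs(offset) <= deadband:
        return "hold"  # user roughly centered, no rotation needed
    return "rotate_right" if offset > 0 else "rotate_left"

print(rotation_instruction(0.8))   # rotate_right
print(rotation_instruction(0.5))   # hold
```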
10. A mobile terminal, characterized in that the mobile terminal comprises a memory and a processor, wherein the memory stores executable code which, when run on the processor, causes the mobile terminal to implement the control method of the smart television according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110026191.9A CN112637655A (en) | 2021-01-08 | 2021-01-08 | Control method and system of smart television and mobile terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110026191.9A CN112637655A (en) | 2021-01-08 | 2021-01-08 | Control method and system of smart television and mobile terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112637655A true CN112637655A (en) | 2021-04-09 |
Family
ID=75293858
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110026191.9A Pending CN112637655A (en) | 2021-01-08 | 2021-01-08 | Control method and system of smart television and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112637655A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113342170A (en) * | 2021-06-11 | 2021-09-03 | 北京字节跳动网络技术有限公司 | Gesture control method, device, terminal and storage medium |
CN113849065A (en) * | 2021-09-17 | 2021-12-28 | 支付宝(杭州)信息技术有限公司 | Method and device for triggering client operation instruction by using body-building action |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110299774A1 (en) * | 2008-04-22 | 2011-12-08 | Corey Mason Manders | Method and system for detecting and tracking hands in an image |
CN102520786A (en) * | 2011-09-13 | 2012-06-27 | 深圳市比维视创科技有限公司 | Method and system for utilizing user action to realize control for electronic equipment by intelligent terminal |
CN105302434A (en) * | 2015-06-16 | 2016-02-03 | 深圳市腾讯计算机系统有限公司 | Method and device for locking targets in game scene |
CN105338390A (en) * | 2015-12-09 | 2016-02-17 | 陈国铭 | Intelligent television control system |
CN111901681A (en) * | 2020-05-04 | 2020-11-06 | 东南大学 | Intelligent television control device and method based on face recognition and gesture recognition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109992237B (en) | Intelligent voice equipment control method and device, computer equipment and storage medium | |
CN107770452B (en) | Photographing method, terminal and related medium product | |
CN112637655A (en) | Control method and system of smart television and mobile terminal | |
AU2018324282B2 (en) | Method and apparatus for recognizing user to provide personalized guide, content and services, and targeted advertisement without intentional user registration | |
CN109230927B (en) | Elevator control method, elevator control device, computer equipment and storage medium | |
CN111131735B (en) | Video recording method, video playing method, video recording device, video playing device and computer storage medium | |
CN112905074B (en) | Interactive interface display method, interactive interface generation method and device and electronic equipment | |
US11361588B2 (en) | Information processing apparatus, information processing method, and program | |
CN104778416A (en) | Information hiding method and terminal | |
CN110705356B (en) | Function control method and related equipment | |
KR20160135155A (en) | Method and device for sharing image | |
CN108551587B (en) | Method, device, computer equipment and medium for automatically collecting data of television | |
JP2023063314A (en) | Information processing device, information processing method, and recording medium | |
CN111131702A (en) | Method and device for acquiring image, storage medium and electronic equipment | |
CN113219851A (en) | Control device of intelligent household equipment, control method thereof and storage medium | |
CN113076007A (en) | Display screen visual angle adjusting method and device and storage medium | |
CN108647633B (en) | Identification tracking method, identification tracking device and robot | |
CN111160251B (en) | Living body identification method and device | |
CN110154016B (en) | Robot control method, device, storage medium and computer equipment | |
CN111147766A (en) | Special effect video synthesis method and device, computer equipment and storage medium | |
CN112887601B (en) | Shooting method and device and electronic equipment | |
CN108037829B (en) | Multi-mode interaction method and system based on holographic equipment | |
CN111880660A (en) | Display screen control method and device, computer equipment and storage medium | |
CN111274925A (en) | Method and device for generating recommended video, electronic equipment and computer storage medium | |
CN115292011A (en) | Method, system, device and medium for recognizing human face and two-dimensional code |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210409 ||