CN104978029A - Screen manipulation method and apparatus - Google Patents
Abstract
The invention discloses a screen manipulation method and apparatus. The method comprises: photographing a first region in which all users are located; identifying a primary user and secondary users in the captured image according to a preset rule; and performing corresponding operations on a screen according to the operation modes corresponding to the primary user or the secondary users. The method identifies and confirms the primary and secondary users with different schemes depending on the number of people present, and switches the identities of the primary and secondary users automatically, so that the screen can be operated more efficiently and both the level of intelligence and the user experience are improved.
Description
Technical field
The present invention relates to the field of electronic devices, and in particular to a screen control method and apparatus.
Background art
With the development of electronic technology in recent years, the ways of controlling electronic devices have diversified and their level of intelligence has gradually improved. Electronic devices controlled by sensing limb motion in particular, because they offer a more immersive, close-to-real experience, are increasingly widely used in teaching and entertainment. In the current related art, however, when several people use such a device interactively, a primary user and secondary users usually have to be set manually, and when the primary user drops out midway, the manual setting has to be repeated; the level of intelligence and the experience are therefore still low.
Summary of the invention
The invention provides a screen control method and apparatus, in order to identify and determine the primary and secondary users more quickly and conveniently, and to switch the identities of the primary and secondary users automatically.
According to a first aspect of the embodiments of the present invention, a screen control method is provided, which may comprise:
photographing a first area in which all users are located;
identifying a primary user and secondary users respectively from the captured image according to a preset rule; and
performing corresponding manipulation on the screen according to the control mode corresponding to the primary user or to each secondary user.
Preferably, identifying a primary user and secondary users respectively from the captured image according to the preset rule may comprise:
judging, according to the captured image, the total number of primary and secondary users;
when the total number is judged to be greater than a threshold, identifying the primary user and the secondary users according to a first strategy;
otherwise, identifying the primary user and the secondary users according to a second strategy;
wherein the first strategy determines the primary and secondary users by recognizing the order in which the users perform a specific action, and the second strategy determines the primary and secondary users by recognizing the order in which the users appear in a second area, the second area being located within the first area.
Preferably, identifying the primary user and the secondary users according to the first strategy may comprise:
outputting information indicating that the primary and secondary users will be determined according to the first strategy;
recognizing, from the captured image, the specific action performed by the users in the first area; and
determining the user who first performs the specific action and sustains it for a preset duration as the primary user, and the other users as secondary users.
Preferably, identifying the primary user and the secondary users according to the second strategy may comprise:
outputting information indicating that the primary and secondary users will be determined according to the second strategy;
recognizing, from the captured image, the users appearing in the second preset area; and
determining the user who first appears in the second area and stays there for a preset duration as the primary user, and the other users as secondary users.
Preferably, after the primary and secondary users are determined according to the first or second strategy, the method may further comprise:
storing characteristic information of the primary user and the secondary users;
identifying and tracking the primary and secondary users according to the stored characteristic information; and
when identification and tracking of the primary user fails, deleting the characteristic information of the primary user, determining the user who subsequently first performs the specific action in the first area and sustains it for the preset duration as the new primary user, and treating the other users as secondary users.
Preferably, performing corresponding manipulation on the screen according to the control mode corresponding to the primary user or to each secondary user may comprise:
when the identity of the primary user or of a secondary user changes, refreshing the control modes corresponding to the primary user and the secondary users; and performing corresponding manipulation on the screen according to the refreshed control modes of the primary user and the secondary users.
According to a second aspect of the embodiments of the present invention, a screen manipulation apparatus is provided, which may comprise:
a photographing module, configured to photograph a first area in which all users are located;
an identification module, configured to identify a primary user and secondary users respectively from the captured image according to a preset rule; and
an operation module, configured to perform corresponding manipulation on the screen according to the control mode corresponding to the primary user or to each secondary user.
Preferably, the identification module may comprise:
a judging submodule, configured to judge the total number of primary and secondary users according to the captured image;
a first recognition submodule, configured to identify the primary user and the secondary users according to a first strategy when the total number is judged to be greater than a threshold; and
a second recognition submodule, configured to identify the primary user and the secondary users according to a second strategy when the total number is judged not to be greater than the threshold.
Preferably, the first recognition submodule may comprise:
a first output unit, configured to output information indicating that the primary and secondary users will be determined according to the first strategy;
a first recognition unit, configured to recognize, from the captured image, the specific action performed by the users in the first area; and
a first determining unit, configured to determine the user who first performs the specific action and sustains it for a preset duration as the primary user, and the other users as secondary users.
Preferably, the second recognition submodule may comprise:
a second output unit, configured to output information indicating that the primary and secondary users will be determined according to the second strategy;
a second recognition unit, configured to recognize, from the captured image, the users appearing in the second preset area; and
a second determining unit, configured to determine the user who first appears in the second area and stays there for a preset duration as the primary user, and the other users as secondary users.
Preferably, the identification module may further comprise:
a storage submodule, configured to store characteristic information of the primary user and the secondary users;
a tracking submodule, configured to identify and track the primary and secondary users according to the stored characteristic information; and
a determining submodule, configured to, when identification and tracking of the primary user fails, delete the characteristic information of the primary user, determine the user who subsequently first performs the specific action in the first area and sustains it for the preset duration as the new primary user, and treat the other users as secondary users.
Preferably, the operation module may comprise:
a refresh submodule, configured to refresh the control modes corresponding to the primary user and the secondary users when the identity of the primary user or of a secondary user changes; and
a manipulation submodule, configured to perform corresponding manipulation on the screen according to the refreshed control modes of the primary user and the secondary users.
The technical solutions provided by the embodiments of the present invention can produce the following beneficial effects: a first area in which all users are located is photographed; a primary user and secondary users are identified respectively from the captured image according to a preset rule; and corresponding manipulation is performed on the screen according to the control mode corresponding to the primary user or to each secondary user. The above method identifies and confirms the primary and secondary users with different schemes depending on the number of people present, and switches the identities of the primary and secondary users automatically, so that the primary and secondary users can be identified and determined more quickly and conveniently.
Other features and advantages of the present invention will be set forth in the following description, will partly become apparent from the description, or may be understood by practicing the present invention. The objects and other advantages of the present invention may be realized and obtained by the structures particularly pointed out in the written description, the claims and the accompanying drawings.
The technical solutions of the present invention are described in further detail below by means of the drawings and embodiments.
Brief description of the drawings
The drawings are provided for a further understanding of the present invention and form a part of the specification; together with the embodiments, they serve to explain the present invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flowchart of a screen control method according to an exemplary embodiment of the present invention;
Fig. 2 is a flowchart of another screen control method according to an exemplary embodiment of the present invention;
Fig. 3 is a flowchart of another screen control method according to an exemplary embodiment of the present invention;
Fig. 4 is a flowchart of another screen control method according to an exemplary embodiment of the present invention;
Fig. 5 is a flowchart of another screen control method according to an exemplary embodiment of the present invention;
Fig. 6 is a block diagram of a screen manipulation apparatus according to an exemplary embodiment of the present invention;
Fig. 7 is a block diagram of another screen manipulation apparatus according to an exemplary embodiment of the present invention;
Fig. 8 is a block diagram of another screen manipulation apparatus according to an exemplary embodiment of the present invention;
Fig. 9 is a block diagram of another screen manipulation apparatus according to an exemplary embodiment of the present invention;
Fig. 10 is a block diagram of another screen manipulation apparatus according to an exemplary embodiment of the present invention;
Fig. 11 is a block diagram of another screen manipulation apparatus according to an exemplary embodiment of the present invention.
Detailed description of the embodiments
The preferred embodiments of the present invention are described below with reference to the drawings. It should be understood that the preferred embodiments described here are intended only to illustrate and explain the present invention, not to limit it.
According to the first aspect of the embodiments of the present invention, a screen control method is provided. The method can be used in a program or device that manipulates a screen based on body-motion (somatosensory) sensing. As shown in Fig. 1, the method comprises steps S101-S103:
In step S101, a first area in which all users are located is photographed.
The photographing may use visible light, non-visible light, or a combination of the two. The first area lies within the range that can be photographed and recognized, and may be limited by user settings.
In step S102, a primary user and secondary users are identified respectively from the captured image according to a preset rule.
Specifically, there are two implementations of the preset rule; which one is used is decided according to the number of users judged from the captured image, and the primary and secondary users are then identified according to the chosen implementation. There is only one primary user, while the secondary users may be one person or several.
In step S103, corresponding manipulation is performed on the screen according to the control mode corresponding to the primary user or to each secondary user.
The screen can be manipulated by several people at the same time. To keep the manipulation orderly, a primary-user control mode and a secondary-user control mode are provided, and the primary and secondary users determined above are placed in one-to-one correspondence with the primary-user control mode and the secondary-user control mode respectively, so that the manipulation proceeds in a regulated way.
The above method can identify and determine the primary and secondary users more quickly and conveniently, improving the user experience.
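Steps S101-S103 can be sketched as follows. This is a minimal illustrative sketch, not from the patent: the names `User` and `elect_users` are hypothetical, and image capture and recognition are abstracted away into a list of detected user ids.

```python
from dataclasses import dataclass

@dataclass
class User:
    uid: int
    role: str = "secondary"  # either "primary" or "secondary"

def elect_users(detected_ids, primary_id):
    """Assign roles after recognition: exactly one detected user becomes
    the primary user, all others become secondary users."""
    users = [User(uid) for uid in detected_ids]
    for u in users:
        if u.uid == primary_id:
            u.role = "primary"
    return users

# Example: three users detected in the first area, user 8 elected primary.
users = elect_users([7, 8, 9], primary_id=8)
roles = {u.uid: u.role for u in users}
# roles == {7: "secondary", 8: "primary", 9: "secondary"}
```

Each role can then be looked up when dispatching manipulation commands, mirroring the one-to-one correspondence between users and control modes described above.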
In one embodiment, as shown in Fig. 2, step S102 can be implemented as steps S1021-S1023:
In step S1021, the total number of primary and secondary users is judged according to the captured image; when the total number is judged to be greater than a threshold, step S1022 is performed; otherwise, step S1023 is performed.
The threshold is set by the user according to the usage conditions. For example, if the threshold is set to 3, then when the number of users in the first area is judged to be 4, the primary and secondary users are identified according to the implementation of step S1022; when it is judged to be 3, they are identified according to the implementation of step S1023.
In step S1022, the primary user and the secondary users are identified according to a first strategy.
In step S1023, the primary user and the secondary users are identified according to a second strategy.
The first strategy determines the primary and secondary users by recognizing the order in which the users perform a specific action; the second strategy determines them by recognizing the order in which the users appear in a second area, the second area being located within the first area and optionally limited by user settings. This makes the identification of primary and secondary users more flexible and improves its efficiency.
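The dispatch in step S1021, with the example threshold of 3 from the description above, can be sketched as follows; the function name `choose_strategy` is an illustrative assumption.

```python
def choose_strategy(num_users, threshold=3):
    """Step S1021: more users than the threshold selects the first
    strategy (order of performing a specific action); otherwise the
    second strategy (order of appearance in the second area) is used."""
    return "first" if num_users > threshold else "second"

# With the example threshold of 3: 4 people -> first strategy,
# 3 people -> second strategy.
assert choose_strategy(4) == "first"
assert choose_strategy(3) == "second"
```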
In one embodiment, as shown in Fig. 3, step S1022 can be implemented as steps S201-S203:
In step S201, information indicating that the primary and secondary users will be determined according to the first strategy is output.
In step S202, the specific action performed by the users in the first area is recognized from the captured image.
In step S203, the user who first performs the specific action and sustains it for a preset duration is determined as the primary user, and the other users as secondary users.
Specifically, after it is judged that the first strategy is to be used, information is output to the screen indicating that the first strategy will identify the primary and secondary users, prompting the users to perform the specific action in the first area within a specified time, for example raising a hand above the shoulder and holding it for more than 2 seconds. The actions performed within the following specified time are then recognized: the user who first raises a hand above the shoulder and holds it for more than 2 seconds is determined as the primary user, and the other users who do so are determined as secondary users. This implementation is suitable when there are many users and can complete the identification of primary and secondary users efficiently.
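The first strategy's election rule can be sketched as below, assuming a gesture recognizer already yields (user id, start, end) intervals for the specific action; `elect_by_gesture` is a hypothetical name.

```python
def elect_by_gesture(events, hold=2.0):
    """events: (user_id, start_time, end_time) intervals during which the
    specific action (e.g. a hand raised above the shoulder) was held.
    The user who first completes `hold` seconds of the action becomes the
    primary user; everyone else is secondary.  Returns the primary's id,
    or None if nobody held the action long enough."""
    qualified = [(start + hold, uid)          # moment the hold completes
                 for uid, start, end in events
                 if end - start >= hold]
    return min(qualified)[1] if qualified else None

# User 2 starts at t=1.0 and holds for 3 s (qualifies at t=3.0); user 5
# holds only 1 s; user 9 qualifies later, at t=4.0 -> user 2 is primary.
primary = elect_by_gesture([(2, 1.0, 4.0), (5, 0.5, 1.5), (9, 2.0, 4.5)])
# primary == 2
```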
In one embodiment, as shown in Fig. 4, step S1023 can be implemented as steps S301-S303:
In step S301, information indicating that the primary and secondary users will be determined according to the second strategy is output.
In step S302, the users appearing in the second preset area are recognized from the captured image.
In step S303, the user who first appears in the second area and stays there for a preset duration is determined as the primary user, and the other users as secondary users.
Specifically, after it is judged that the second strategy is to be used, information is output to the screen indicating that the second strategy will identify the primary and secondary users, prompting the users to stand in the second area within a specified time and remain there for more than 2 seconds. The users appearing in the second area within the specified time are then recognized: the user who first stands in the second area for more than 2 seconds is determined as the primary user, and the other users who do so are determined as secondary users. This implementation is suitable when there are few users and identifies the primary and secondary users efficiently and accurately.
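The second strategy can be sketched the same way, assuming per-frame detection of who is inside the second area; `elect_by_presence` and the frame rate are illustrative assumptions, not from the patent.

```python
def elect_by_presence(frames, fps=10, hold=2.0):
    """frames: one set of user ids per captured frame, listing who is
    inside the second area.  The first user continuously present for
    `hold` seconds becomes the primary user; returns None if nobody
    stays long enough."""
    need = int(hold * fps)            # consecutive frames required
    streak = {}                       # uid -> current consecutive frames
    for present in frames:
        for uid in list(streak):      # leaving the area resets the streak
            if uid not in present:
                del streak[uid]
        for uid in present:
            streak[uid] = streak.get(uid, 0) + 1
            if streak[uid] >= need:
                return uid
    return None

# At 2 frames/s, 2 s of presence needs 4 consecutive frames: user 1 is
# present from the start, user 2 arrives one frame later.
primary = elect_by_presence([{1}, {1, 2}, {1, 2}, {1, 2}], fps=2)
# primary == 1
```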
In one embodiment, after the primary and secondary users are identified, a corresponding program such as a game or teaching software is run. In this case, as shown in Fig. 5, the method can also comprise steps S401-S403:
In step S401, characteristic information of the primary user and the secondary users is stored.
In step S402, the primary and secondary users are identified and tracked according to the stored characteristic information.
In step S403, when identification and tracking of the primary user fails, the characteristic information of the primary user is deleted, the user who subsequently first performs the specific action in the first area and sustains it for the preset duration is determined as the new primary user, and the other users are treated as secondary users.
Specifically, after the program starts running, if tracking of the primary user fails, for example the primary user keeps his back turned for more than 5 seconds or leaves the first area, the characteristic information of the primary user is deleted and his manipulation authority is revoked. After that, the user who first performs the specific action (for example raising a hand) in the first area and sustains it for the preset duration (for example 2 seconds) is determined as the new primary user, while the other users continue to manipulate the running program as secondary users.
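The re-election trigger of step S403 might look like the following sketch; `handle_tracking` and the feature registry are hypothetical names, and the 5-second patience mirrors the example in the description.

```python
def handle_tracking(features, primary_id, lost_seconds, patience=5.0):
    """features: uid -> stored characteristic information.  If the primary
    user has been untrackable (back turned, out of the first area) for
    longer than `patience`, delete the primary's stored features and
    revoke the role; a returned primary id of None signals that a new
    election by specific action should run."""
    if lost_seconds <= patience:
        return features, primary_id           # still tracked: no change
    features = {uid: f for uid, f in features.items() if uid != primary_id}
    return features, None

# Lost for 6 s (> 5 s): the primary's features are dropped, role revoked.
feats, primary = handle_tracking({1: "f1", 2: "f2"}, 1, lost_seconds=6.0)
# feats == {2: "f2"}, primary is None
```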
In one embodiment, step S103 can be implemented as:
when the identity of the primary user or of a secondary user changes, refreshing the control modes corresponding to the primary user and the secondary users; and performing corresponding manipulation on the screen according to the refreshed control modes of the primary user and the secondary users.
When the former primary user is deleted in the above steps and a former secondary user or another user is switched to be the new primary user, the new primary user is identified and tracked, and at the same time the correspondence between the new primary user and the primary-user control mode must be refreshed so that the running application can continue to be manipulated. This implementation copes with unexpected identity changes of the primary and secondary users while the program is running, improving the level of intelligence and the user experience.
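Refreshing the user-to-control-mode correspondence after an identity change can be sketched as follows; `refresh_modes` and the mode names are illustrative assumptions.

```python
def refresh_modes(user_ids, primary_id):
    """Rebuild the one-to-one correspondence between users and control
    modes so that the (possibly new) primary user gets the primary-user
    control mode and everyone else the secondary-user control mode."""
    return {uid: ("primary_mode" if uid == primary_id else "secondary_mode")
            for uid in user_ids}

# User 2 just became the new primary: the mapping is refreshed.
modes = refresh_modes([1, 2, 3], primary_id=2)
# modes == {1: "secondary_mode", 2: "primary_mode", 3: "secondary_mode"}
```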
According to the second aspect of the embodiments of the present invention, as shown in Fig. 6, a screen manipulation apparatus is provided, which can comprise:
a photographing module 61, configured to photograph a first area in which all users are located;
an identification module 62, configured to identify a primary user and secondary users respectively from the captured image according to a preset rule; and
an operation module 63, configured to perform corresponding manipulation on the screen according to the control mode corresponding to the primary user or to each secondary user.
In one embodiment, as shown in Fig. 7, the identification module 62 can comprise:
a judging submodule 621, configured to judge the total number of primary and secondary users according to the captured image;
a first recognition submodule 622, configured to identify the primary user and the secondary users according to a first strategy when the total number is judged to be greater than a threshold; and
a second recognition submodule 623, configured to identify the primary user and the secondary users according to a second strategy when the total number is judged not to be greater than the threshold.
In one embodiment, as shown in Fig. 8, the first recognition submodule 622 can comprise:
a first output unit 6221, configured to output information indicating that the primary and secondary users will be determined according to the first strategy;
a first recognition unit 6222, configured to recognize, from the captured image, the specific action performed by the users in the first area; and
a first determining unit 6223, configured to determine the user who first performs the specific action and sustains it for a preset duration as the primary user, and the other users as secondary users.
In one embodiment, as shown in Fig. 9, the second recognition submodule 623 can comprise:
a second output unit 6231, configured to output information indicating that the primary and secondary users will be determined according to the second strategy;
a second recognition unit 6232, configured to recognize, from the captured image, the users appearing in the second preset area; and
a second determining unit 6233, configured to determine the user who first appears in the second area and stays there for a preset duration as the primary user, and the other users as secondary users.
In one embodiment, as shown in Fig. 10, the identification module 62 can further comprise:
a storage submodule 624, configured to store characteristic information of the primary user and the secondary users;
a tracking submodule 625, configured to identify and track the primary and secondary users according to the stored characteristic information; and
a determining submodule 626, configured to, when identification and tracking of the primary user fails, delete the characteristic information of the primary user, determine the user who subsequently first performs the specific action in the first area and sustains it for the preset duration as the new primary user, and treat the other users as secondary users.
In one embodiment, as shown in Fig. 11, the operation module 63 can comprise:
a refresh submodule 631, configured to refresh the control modes corresponding to the primary user and the secondary users when the identity of the primary user or of a secondary user changes; and
a manipulation submodule 632, configured to perform corresponding manipulation on the screen according to the refreshed control modes of the primary user and the secondary users.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (12)
1. A screen control method, characterized by comprising:
photographing a first area in which all users are located;
identifying a primary user and secondary users respectively from the captured image according to a preset rule; and
performing corresponding manipulation on the screen according to the control mode corresponding to the primary user or to each secondary user.
2. The method of claim 1, characterized in that identifying a primary user and secondary users respectively from the captured image according to the preset rule comprises:
judging, according to the captured image, the total number of primary and secondary users;
when the total number is judged to be greater than a threshold, identifying the primary user and the secondary users according to a first strategy;
otherwise, identifying the primary user and the secondary users according to a second strategy;
wherein the first strategy determines the primary and secondary users by recognizing the order in which the users perform a specific action, and the second strategy determines the primary and secondary users by recognizing the order in which the users appear in a second area, the second area being located within the first area.
3. The method of claim 2, characterized in that identifying the primary user and the secondary users according to the first strategy comprises:
outputting information indicating that the primary and secondary users will be determined according to the first strategy;
recognizing, from the captured image, the specific action performed by the users in the first area; and
determining the user who first performs the specific action and sustains it for a preset duration as the primary user, and the other users as secondary users.
4. The method of claim 2, characterized in that identifying the primary user and the secondary users according to the second strategy comprises:
outputting information indicating that the primary and secondary users will be determined according to the second strategy;
recognizing, from the captured image, the users appearing in the second preset area; and
determining the user who first appears in the second area and stays there for a preset duration as the primary user, and the other users as secondary users.
5. The method of claim 3 or 4, characterized in that, after the primary and secondary users are determined according to the first or second strategy, the method further comprises:
storing characteristic information of the primary user and the secondary users;
identifying and tracking the primary and secondary users according to the stored characteristic information; and
when identification and tracking of the primary user fails, deleting the characteristic information of the primary user, determining the user who subsequently first performs the specific action in the first area and sustains it for the preset duration as the new primary user, and treating the other users as secondary users.
6. The method of claim 1, characterized in that performing corresponding manipulation on the screen according to the control mode corresponding to the primary user or to each secondary user comprises:
when the identity of the primary user or of a secondary user changes, refreshing the control modes corresponding to the primary user and the secondary users; and performing corresponding manipulation on the screen according to the refreshed control modes of the primary user and the secondary users.
7. A screen manipulation apparatus, characterized by comprising:
a photographing module, configured to photograph a first area in which all users are located;
an identification module, configured to identify a primary user and secondary users respectively from the captured image according to a preset rule; and
an operation module, configured to perform corresponding manipulation on the screen according to the control mode corresponding to the primary user or to each secondary user.
8. The apparatus of claim 7, characterized in that the identification module comprises:
a judging submodule, configured to judge the total number of primary and secondary users according to the captured image;
a first recognition submodule, configured to identify the primary user and the secondary users according to a first strategy when the total number is judged to be greater than a threshold; and
a second recognition submodule, configured to identify the primary user and the secondary users according to a second strategy when the total number is judged not to be greater than the threshold.
9. The apparatus of claim 8, characterized in that the first recognition submodule comprises:
a first output unit, configured to output information indicating that the primary and secondary users will be determined according to the first strategy;
a first recognition unit, configured to recognize, from the captured image, the specific action performed by the users in the first area; and
a first determining unit, configured to determine the user who first performs the specific action and sustains it for a preset duration as the primary user, and the other users as secondary users.
10. The apparatus of claim 8, characterized in that the second recognition submodule comprises:
a second output unit, configured to output information indicating that the primary and secondary users will be determined according to the second strategy;
a second recognition unit, configured to recognize, from the captured image, the users appearing in the second preset area; and
a second determining unit, configured to determine the user who first appears in the second area and stays there for a preset duration as the primary user, and the other users as secondary users.
11. The apparatus of claim 9 or 10, characterized in that the identification module further comprises:
a storage submodule, configured to store characteristic information of the primary user and the secondary users;
a tracking submodule, configured to identify and track the primary and secondary users according to the stored characteristic information; and
a determining submodule, configured to, when identification and tracking of the primary user fails, delete the characteristic information of the primary user, determine the user who subsequently first performs the specific action in the first area and sustains it for the preset duration as the new primary user, and treat the other users as secondary users.
12. The apparatus of claim 7, characterized in that the operation module comprises:
a refresh submodule, configured to refresh the control modes corresponding to the primary user and the secondary users when the identity of the primary user or of a secondary user changes; and
a manipulation submodule, configured to perform corresponding manipulation on the screen according to the refreshed control modes of the primary user and the secondary users.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201510373952.2A (CN104978029B) | 2015-06-30 | 2015-06-30 | A kind of screen control method and device |
Publications (2)
| Publication Number | Publication Date |
| --- | --- |
| CN104978029A | 2015-10-14 |
| CN104978029B | 2018-11-23 |
Family
ID=54274602
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN201510373952.2A (CN104978029B, Expired - Fee Related) | A kind of screen control method and device | 2015-06-30 | 2015-06-30 |
Country Status (1)
| Country | Link |
| --- | --- |
| CN | CN104978029B (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN108829233A | 2018-04-26 | 2018-11-16 | 深圳市深晓科技有限公司 | A kind of exchange method and device |
| CN110716634A | 2019-08-28 | 2020-01-21 | 北京市商汤科技开发有限公司 | Interaction method, device, equipment and display equipment |
| CN113269124A | 2021-06-09 | 2021-08-17 | 重庆中科云从科技有限公司 | Object identification method, system, equipment and computer readable medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101930282A (en) * | 2009-06-27 | 2010-12-29 | 英华达(上海)电子有限公司 | Mobile terminal and mobile terminal-based input method |
WO2014124065A1 (en) * | 2013-02-11 | 2014-08-14 | Microsoft Corporation | Detecting natural user-input engagement |
WO2015037310A1 (en) * | 2013-09-13 | 2015-03-19 | ソニー株式会社 | Information processing device and information processing method |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108829233A (en) * | 2018-04-26 | 2018-11-16 | 深圳市深晓科技有限公司 | Interaction method and device |
CN108829233B (en) * | 2018-04-26 | 2021-06-15 | 深圳市同维通信技术有限公司 | Interaction method and device |
CN110716634A (en) * | 2019-08-28 | 2020-01-21 | 北京市商汤科技开发有限公司 | Interaction method, device, equipment and display equipment |
CN113269124A (en) * | 2021-06-09 | 2021-08-17 | 重庆中科云从科技有限公司 | Object identification method, system, equipment and computer readable medium |
CN113269124B (en) * | 2021-06-09 | 2023-05-09 | 重庆中科云从科技有限公司 | Object recognition method, system, equipment and computer readable medium |
Also Published As
Publication number | Publication date |
---|---|
CN104978029B (en) | 2018-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104978957A | | Voice control method and system based on voiceprint identification |
CN105929948B | | Self-learning intelligent helmet based on augmented reality and operation method thereof |
US20200193989A1 | | Audio device and control method therefor |
CN104978029A | | Screen manipulation method and apparatus |
CN105577721A | | Remote terminal control method and system |
CN103796075A | | Method and device for switching television channels |
CN103901863A | | Intelligent home control system and method based on a live-action control interface |
CN104714632A | | Automatic turn-on and turn-off control method, device and system for a display screen |
CN104407892A | | System switching method, system switching device and terminal |
CN105511273B | | Client operation management method and client |
CN107689903A | | Smart home device control method, system, storage medium and computer device |
CN109364479A | | Application widget interface interaction method and apparatus, electronic device, and storage medium |
CN104881235B | | Method and device for closing an application program |
CN104484038A | | Method and device for controlling intelligent equipment |
CN109725578A | | Hotel management method, system and device, and computer storage medium |
CN104793911B | | Split-screen presentation processing method, device and terminal |
CN106658138B | | Smart television and signal source switching method and device therefor |
CN104615553B | | Data acquisition method, data acquisition device and terminal |
CN106796666A | | Robot control apparatus, method, system and computer program product |
Pageaud et al. | | Multiagent Learning and Coordination with Clustered Deep Q-Network |
RU2704538C1 | | Network architecture of an anthropoid network and an implementation method |
CN106057197B | | Timed voice operation method, apparatus and system |
CN105894795A | | Infrared remote control method and device, and mobile terminal |
CN108724203A | | Interaction method and device |
KR20190075299A | | System for control and monitoring of IoT devices using VR |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20181123; Termination date: 20190630 |