WO2014021547A1 - Method for controlling device, and device using the same - Google Patents
- Publication number
- WO2014021547A1 (PCT/KR2013/004654)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- state information
- controlling
- sound output
- device based
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/4104—Peripherals receiving signals from specially adapted client devices
- H04N21/4112—Peripherals receiving signals from specially adapted client devices having fewer capabilities than the client, e.g. thin client having less processing power or no tuning capabilities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3206—Monitoring of events, devices or parameters that trigger a change in power modality
- G06F1/3231—Monitoring the presence, absence or movement of users
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42201—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] biosensors, e.g. heat sensor for presence detection, EEG sensors or any limb activity sensors worn by the user
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/42203—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
- H04N21/4396—Processing of audio elementary streams by muting the audio signal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/443—OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
- H04N21/4436—Power management, e.g. shutting down unused components of the receiver
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present disclosure relates to a method of controlling a device based on state information of at least one of a user and an object, and a device using the method.
- When a user uses a device having a display and a sound output, such as a TV or a PC, the device may need to be controlled automatically according to various situations. For example, when the user stops using the device, the device may automatically control its power or sound output without a user input.
- a method of controlling a device based on state information of at least one of a user and an object is provided.
- the device may store conditions on state information of the user or object to perceive typical situations including sleeping and making or receiving a call and may control the sound output unit, the power supply, and/or the display in the device.
- the device may acquire state information of the user or object by detecting or tracking the user or object in real-time and control the sound output unit, the power supply, and/or the display in the device based on the acquired information.
- FIG. 1 is a flowchart illustrating a method of controlling a device based on state information of a user or state information of an object, according to an embodiment of the present disclosure
- FIG. 2 is a flowchart illustrating a method of controlling a device by determining whether a user is sleeping based on state information of a user or state information of an object, according to an embodiment of the present disclosure
- FIG. 3 is a flowchart illustrating a method of controlling a device by determining whether a user is making or receiving a call based on state information of a user or state information of an object, according to an embodiment of the present disclosure
- FIG. 4 is a flowchart illustrating a method of controlling a device by determining whether a user is using an object based on state information of a user or state information of the object, according to an embodiment of the present disclosure
- FIG. 5 is a flowchart illustrating a method of controlling a sound volume of a device or brightness of a display based on state information of a user or state information of an object, according to an embodiment of the present disclosure
- FIG. 6 is a flowchart illustrating a method of transmitting state information of a user to another user based on the state information of the user or state information of an object, according to an embodiment of the present disclosure.
- FIG. 7 is a block diagram illustrating a device structure for controlling a device based on state information of a user or state information of an object, according to an embodiment of the present disclosure.
- a method of controlling a device including: detecting a user located at a predetermined location and acquiring state information of the detected user; detecting at least one object located in an area around the user and acquiring state information of the detected object; and controlling the device based on the state information of the user and the state information of the object.
- the state information of the user may include at least one of a posture, a gesture, a body temperature, and an appearance of the user.
- the state information of the object may include at least one of a location and a moving path of the object.
- the controlling may include controlling at least one of a sound output, a display, a camera, and a power supply based on the state information of the user and the state information of the object.
- the controlling may include: determining whether the user is sleeping, based on the state information of the user and the state information of the object; and controlling a sound output of the device based on a result of the determination.
- the controlling may include: determining whether the user has moved for a predetermined time, based on the state information of the user; and controlling a power supply of the device based on a result of the determination.
- the controlling may include: determining whether the user is making or receiving a call, based on the state information of the user and the state information of the object; and controlling a sound output of the device based on a result of the determination.
- the controlling may include: measuring illumination of an area around the device based on the state information of the object; and controlling a display of the device based on the illumination.
- the controlling may include transmitting the state information of the user to another user when it is detected based on the state information of the user that the user having a height less than a reference height is alone within a predetermined radius of the device.
- the controlling may include: determining whether the user is using the device, based on the state information of the user and the state information of the object; and controlling a sound output or a power supply of the device based on a result of the determination.
- the term "and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list.
- a device may control the device itself based on state information of a user and state information of an object.
- an external device capable of controlling the device based on state information of a user and state information of an object may control the device according to an embodiment of the present disclosure by being connected to the device.
- the device capable of being controlled may include a sound output unit, a display unit, and a power supply. The description below assumes that the device acquires state information of a user and state information of an object and controls itself based on the acquired information.
- FIG. 1 is a flowchart illustrating a method of controlling a device based on state information of a user or state information of an object, according to an embodiment of the present disclosure.
- if the device including a sensor to detect the user detects the user in operation S101, the device acquires the state information of the user by using the sensor in operation S103.
- if the device including a sensor to detect the object detects the object in operation S105, the device acquires the state information of the object by using the sensor in operation S107.
- the sensor may be at least one of a motion sensor such as a motion detector, a visual sensor such as a camera, a sound sensor, a thermal sensor, a heat sensor, a temperature sensor, or another type of sensor that one of ordinary skill in the art would find suitable to detect the user or the object.
- the state information of the user may include a posture, a gesture, a body temperature, and an appearance of the user, and the state information of the object may include a location and a moving path of the object.
- the state information may include any information received from the above mentioned sensors.
- information about the user and the object that may be detected may be stored in a memory in advance, and the user and object may be detected based on the information stored in advance.
- the information about the object that may be stored may include location and shape information of the object or any information that one of ordinary skill in the art would find suitable to detect the object, and the information about the object may be acquired in response to a user input and stored in the memory.
- the device is controlled based on the acquired state information of the user or the acquired state information of the object.
- since not only the state information of the user but also the state information of the object may be considered, the device may control the various units included therein, such as the sound output, the display, and the power supply, by correctly perceiving various possible situations.
- the device controls itself by considering only the state information of the user in operation S111.
- FIG. 2 is a flowchart illustrating a method of controlling a device by determining whether a user is sleeping based on state information of a user or state information of an object, according to an embodiment of the present disclosure.
- the device including a sensor, such as a camera or a motion detection sensor, to detect the user or the object, detects the user or the object in operation S201, and the device acquires state information of the detected user or object in operation S203.
- the device determines whether the user is sleeping, by analyzing the acquired state information of the user or object. For example, the device may photograph the user or object with the camera and acquire the state information of the user or object by analyzing the captured image. Alternatively, when a motion of the user or object is detected by the motion detection sensor, the device may photograph the motion-detected user or object with the camera.
- the device may determine whether the user is sleeping, for example, by analyzing a posture of the user, whether the user’s eyes are closed, a location of the user, and the like using the captured image. That is, the device may determine whether the user is sleeping, by determining whether the user lies down, whether the user’s eyes are closed, and whether the user is on a couch or a bed. In this case, the location of the user may be determined using location information of the couch or bed. When the camera cannot capture the eyes of the user because the user has rolled over, the device may determine whether the user is sleeping, based on a posture or location of the user.
- the device determines using the motion detection sensor or the camera whether the user is moving, and if a time the user does not move is greater than a reference time in operation S207, the device decreases a volume of a sound output thereof by a predetermined magnitude in operation S209. After the adjustment of the volume, the device recounts a time the user does not move, and if the recounted time the user does not move is greater than the reference time in operation S207, the device decreases the volume of the sound output thereof by the predetermined magnitude in operation S209.
- the device may control power thereof by determining whether there is a gesture of the user.
- the user may control the device manually unless the user is sleeping.
- if the user does not move his or her hand during a gesture input time in operation S213, the device turns off power in operation S215. For example, if the gesture input time is set to 3 seconds, it is determined whether the user moves his or her hand during those 3 seconds.
- if the device determines that the user is awake, the device increases the volume of the sound output to the volume before it was decreased in operation S209, or to a predetermined volume, in operation S217. Thereafter, the device determines in operation S205 whether the user is sleeping.
- FIG. 3 is a flowchart illustrating a method of controlling a device by determining whether a user is making or receiving a call, based on state information of a user or state information of an object, according to an embodiment of the present disclosure.
- the device including a sensor, such as a camera or a motion detection sensor, to detect the user or the object detects the user or the object in operation S301, and the device acquires state information of the detected user or object in operation S303.
- the device determines whether the user is making or receiving a call, by analyzing the acquired state information of the user or object. For example, the device may photograph the user or object with the camera and determine the state information of the user or object by analyzing the captured image. Alternatively, when a motion of the user or object is detected by the motion detection sensor, the device may photograph the motion-detected user or object with the camera.
- the device may acquire information about a location of a communication device, whether the communication device is moving, and behavior, a posture, and a location of the user and determine based on the acquired information whether the user is using the communication device (operation S305). That is, the device may determine whether the user is making or receiving a call, by determining whether the communication device has moved closely to an ear or the face of the user, whether the user holds the communication device, and whether the user is viewing the communication device.
- the device mutes a volume of a sound output thereof in operation S307 so that the user can make or receive a call without noise. If it is determined in operation S309 that the user has ended the call, the device increases the volume of the sound output to the volume before being muted, in operation S311. For example, the device may determine whether the user has ended the call by determining, based on the state information of the user or object, whether the user has released the communication device from his or her hand or whether the communication device has been moved away from the ear.
- FIG. 4 is a flowchart illustrating a method of controlling a device by determining whether a user is using an object based on state information of a user or state information of the object, according to an embodiment of the present disclosure.
- the device including a sensor, such as a camera or a motion detection sensor, to detect the user or the object, detects the user or the object in operation S401, and the device acquires state information of the detected user or object in operation S403.
- the device determines whether the user is using the object, by analyzing the acquired state information of the user or object. For example, the device may photograph the user or object with the camera and acquire the state information of the user or object by analyzing the captured image. Alternatively, when a motion of the user or object is detected by the motion detection sensor, the device may photograph the motion-detected user or object with the camera.
- the device may determine whether the user is using the object, by analyzing a posture of the user, the orientation of the eyes of the user, a state and location of the object, and the like using the captured image.
- if the user holds tableware, and the eyes of the user are oriented towards the tableware, it may be determined that the user is eating. If the user holds a book, and the eyes of the user are oriented towards the book, it may be determined that the user is reading the book. If the user holds a writing instrument, and the eyes of the user are oriented towards the writing instrument, it may be determined that the user is writing. If the user holds a portable electronic device, such as a mobile phone, a smart phone, a laptop computer, or a tablet computer, and the eyes of the user are oriented towards the portable electronic device, it may be determined that the user is using the portable electronic device.
- the device may determine whether the user is using the object, by tracking the eyes or hands of the user.
- the device decreases a volume of a sound output by a predetermined magnitude in operation S407 so that the user can concentrate on the object being used. After the adjustment of the volume, the device recounts a time the user uses the object, and if the recounted time is greater than the reference time in operation S405, the device again decreases the volume of the sound output by the predetermined magnitude in operation S407.
- the device may query the user about whether to turn off its power, in operation S411. In this case, the device sets an input limit time, and if there is no input within the input limit time, or if the user confirms that the power should be turned off, the device turns off its power in operation S413. Thus, the device may decrease the volume of the sound output or turn off its power by determining that the user is not using the device and is using the object instead.
- FIG. 5 is a flowchart illustrating a method of controlling a sound volume of a device or brightness of a display based on state information of a user or state information of an object, according to an embodiment of the present disclosure.
- the device including a sensor, such as a camera or a motion detection sensor, to detect the user detects the user in operation S501, and the device acquires state information of the detected user in operation S503.
- the device determines whether the user is using the device, by acquiring the state information of the user, such as behavior, a location, or a posture of the user. In this case, the device may control the sound volume of the device or the brightness of the display without determining whether the user uses the device.
- the device measures noise or illumination of an area around the device by using a microphone or an illumination measurement sensor, respectively, in operation S507.
- the device adjusts the sound volume or the brightness of the display based on the measured noise or illumination. For example, the device may adjust the sound volume to be higher than a magnitude of the noise, or may adjust the brightness of the display to be brighter than the illumination.
- whether the device is controlled based on the noise or illumination in the area around the device may be set by the user.
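In effect, the adjustment described above keeps the output above the measured ambient levels. The following is a sketch under that reading; the sensor helpers, the user-setting flag, and the margin values are hypothetical assumptions, not part of the disclosure:

```python
def adjust_to_surroundings(device, volume_margin=5, brightness_margin=50):
    # Sketch of FIG. 5 (operation S507 onward): measure ambient noise and
    # illumination, then keep the output above the measured background levels.
    noise = device.microphone.measure_noise()          # ambient noise level (S507)
    lux = device.light_sensor.measure_illumination()   # ambient illumination (S507)
    if device.ambient_control_enabled:                 # the user may enable or disable this
        device.volume = max(device.volume, noise + volume_margin)
        device.display.brightness = max(device.display.brightness, lux + brightness_margin)
```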
- FIG. 6 is a flowchart illustrating a method of transmitting state information of a user to another user based on the state information of the user or state information of an object, according to an embodiment of the present disclosure.
- the device including a sensor to detect the user or the object, detects the user or the object in operation S601, and the device acquires state information of the detected user or object in operation S603.
- the sensor may be a camera or a motion detection sensor, but not limited thereto as discussed previously.
- the device may determine a situation of the user by analyzing the acquired state information of the user or object. For example, the device may photograph the user or object with the camera and acquire the state information of the user or object by analyzing the captured image. Alternatively, when a motion of the user or object is detected by the motion detection sensor, the device may photograph the motion-detected user or object with the camera.
- the device transmits the state information of the user to another user in operation S609.
- the device detects a child and transmits state information of the child to a parent or guardian since it is dangerous for the child to be alone without the parent or guardian for a predetermined time.
- Whether the detected user is a child may be determined based on whether the user has a height less than the reference height as in operation S605 and/or based on another criterion.
- the state information of the child transmitted to the parent or guardian may include at least one of a posture, a gesture, a body temperature, and an appearance of the child and may further include an image captured by photographing the child or a result of analyzing the captured image.
- a user whose state information is transmitted may be determined based on whether the user has a height less than the reference height
- a user whose state information is transmitted may be determined based on other information to identify the user as a child.
- a condition in which state information of a user is transmitted may be set to another state instead of the state in which the user is alone for the predetermined time.
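A sketch of this monitoring condition follows; the height attribute on detected users, the threshold values, and the notification helper are illustrative assumptions, and the transport to the parent or guardian is left abstract:

```python
def monitor_child(device, reference_height=1.2, alone_limit_seconds=300.0):
    # Sketch of FIG. 6 (operations S605-S609); thresholds are assumptions.
    users = device.detect_users()
    children = [u for u in users if u.height is not None and u.height < reference_height]
    adults = [u for u in users if u not in children]
    if children and not adults and device.seconds_alone() > alone_limit_seconds:  # S605
        # S609: forward posture, gesture, body temperature, appearance and,
        # optionally, a captured image to the parent or guardian.
        device.send_state(recipient="guardian", state=children[0])
```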
- FIG. 7 is a block diagram illustrating a device structure to control a device 100 based on state information of a user or state information of an object, according to an embodiment of the present disclosure.
- the device 100 may be an electronic device, for example, a PC, a laptop computer, a mobile phone, a tablet computer, a navigation terminal, a smart phone, a Personal Digital Assistant (PDA), a TV, a smart TV, a Portable Multimedia Player (PMP), or a digital broadcasting receiver.
- the device 100 may include a processor 110, a display 120, a power supply 130, a sound output 140, a sensor 150, a camera 160, an input 170, and a memory 180.
- the processor 110 may acquire state information of the user or state information of the object and control the device 100 based on the acquired information, according to an embodiment of the present disclosure, and the acquiring state information may be performed by executing programs stored in the memory 180.
- the display 120 may display an image stored in the memory 180 or an image captured by the camera 160.
- the power supply 130 may turn on or off power of the device 100.
- the sound output 140 may output sound associated with images or sound of a media file stored in the memory 180.
- the sensor 150 may include a motion detection sensor 151 to detect a motion of the user or object and an illumination measurement sensor 152 for measuring illumination.
- although the motion detection sensor 151 and the illumination measurement sensor 152 are shown in this embodiment, the sensor 150 is not limited thereto.
- the sensor may be at least one of a motion sensor such as a motion detector, a visual sensor such as a camera, a sound sensor, a thermal sensor, a heat sensor, a temperature sensor, or another type of sensor that one of ordinary skill in the art would find suitable to detect the user or the object.
- the camera 160 may photograph an area around the device 100 that includes the user or object.
- the input 170 may include a remote control 171 to provide an interface to receive a user input and a microphone 172 to measure noise in a surrounding area or receive a sound input from the user.
- the programs stored in the memory 180 may be classified into a plurality of modules according to their functions, such as an application module 181, a communication module 182, a camera module 183, a media play module 184, a power module 185, a gesture recognition module 186, and so forth.
- the communication module 182 may transmit and receive data to and from another external device via a network.
- the camera module 183 may control the camera 160 to photograph an area around the device 100 that includes the user or the object.
- the media play module 184 may reproduce a moving picture or a sound file stored in the memory 180 and output the reproduced moving picture or sound file through the display 120 or the sound output 140, respectively.
- the power module 185 may control the power supply 130 to control power of the device 100.
- the gesture recognition module 186 may recognize a gesture of the user from an image captured by the camera 160.
- the application module 181 may acquire state information of the user or state information of the object and control at least one of the sound output 140, the display 120, the sensor 150, and the camera 160 that are included in the device 100 based on the acquired information.
- the state information of the user may include at least one of a posture, a gesture, a body temperature, and an appearance of the user
- the state information of the object may include at least one of a location and a moving path of the object.
- the application module 181 may determine whether the user is sleeping, by analyzing a posture of the user, whether the user’s eyes are closed, a location of the user, and the like using a captured image. If it is determined that the user is sleeping, the application module 181 may determine whether the user moves, by using the motion detection sensor 151 or the camera 160, and may control a volume of the sound output 140 based on whether a time the user does not move is greater than a reference time.
- the application module 181 may control a volume of the sound output 140 or the power supply 130 in the device 100 by determining whether the user is making or receiving a call.
- the application module 181 may acquire information about a location of a communication device, whether the communication device is moving, and a posture and a location of the user using a captured image and determine based on the acquired information whether the user is using the communication device. If it is determined that the user is using the communication device, the application module 181 mutes a volume of the sound output 140 or decreases a volume of the sound output to a predetermined volume, and if it is determined that the user has ended the call, the application module 181 may increase the volume of the sound output 140 to the volume before being muted or a predetermined volume.
- the application module 181 may control a volume of the sound output 140 or the power supply 130 in the device 100 by determining whether the user is using the object.
- the application module 181 may determine whether the user is using the object, by analyzing a posture of the user, the orientation of the eyes of the user, a state and location of the object, and the like using a captured image.
- the application module 181 may control a volume of the sound output 140 or brightness of the display 120 based on noise or illumination of a surrounding area.
- the application module 181 may transmit state information of the user based on the state information of the user or state information of the object. For example, as a result of analyzing the state information of the user or the state information of the object, if it is determined that the user, for example, a child, having a height less than a reference height, is alone without another user, the application module 181 may transmit the state information of the user to a predetermined user.
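For illustration, the units and modules of FIG. 7 described above can be sketched as a simple composition; the attribute names mirror the reference numerals, and everything else in this Python rendering is an assumption:

```python
class Component:
    """Placeholder for a hardware unit or software module of FIG. 7."""
    def __init__(self, name):
        self.name = name

class Device:
    """Sketch of the device 100; attribute names mirror the reference numerals."""
    def __init__(self):
        self.processor = Component("processor 110")      # executes programs in memory 180
        self.display = Component("display 120")
        self.power_supply = Component("power supply 130")
        self.sound_output = Component("sound output 140")
        self.sensor = Component("sensor 150")            # motion detection 151, illumination 152
        self.camera = Component("camera 160")
        self.input = Component("input 170")              # remote control 171, microphone 172
        # Program modules stored in the memory 180, classified by function:
        self.modules = {name: Component(name) for name in (
            "application 181", "communication 182", "camera 183",
            "media play 184", "power 185", "gesture recognition 186")}
```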
- Table 1 shows an example of a method of controlling a device based on state information of a user and state information of an object according to an embodiment of the present disclosure.
Table 1:

| Situation | State information of user | State information of object | Device control |
| --- | --- | --- | --- |
| Sleeping | Orientation of the eyes, a posture, and/or a location | User is located on a couch or bed | Control a sound volume or power |
| Calling | A posture, a location, and/or behavior | A moving path and a location of a communication device | Control a sound volume or power |
| Using another device | A posture, orientation of the eyes, and/or holding another device | A location | Control a sound volume or power |
| Detecting surrounding environment | User is using the device | Information about noise or brightness in an environment around the device | Control a sound volume or a display |
| Transmitting state information of user | State information of the user (a state of the user may be determined by considering state information of the object) | | Transmit state information of the user |
| No user | The user is not detected | | Control a sound volume or power |
- cases of sleeping, making or receiving a call, using another device, detecting a surrounding area, and transmitting state information of the user may correspond to the embodiments shown in FIGS. 2 to 6, respectively.
- when the device cannot detect the user around the device, since the user is not using the device, the device may control the sound output or the power supply to prevent unnecessary power waste, based on how long the device has been unable to detect the user.
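Table 1 reads naturally as a dispatch from a perceived situation to a control action. A hypothetical sketch, with all situation labels and device helpers invented for illustration:

```python
# Hypothetical mapping of the perceived situations of Table 1 to control actions.
CONTROL_RULES = {
    "sleeping": "volume_or_power",
    "calling": "volume_or_power",
    "using_another_device": "volume_or_power",
    "surrounding_environment": "volume_or_display",
    "child_alone": "transmit_state",
    "no_user": "volume_or_power",
}

def apply_rule(device, situation):
    action = CONTROL_RULES.get(situation)
    if action == "volume_or_power":
        device.lower_volume_or_power_off()
    elif action == "volume_or_display":
        device.adjust_volume_and_brightness()
    elif action == "transmit_state":
        device.send_state(recipient="guardian", state=device.current_user_state())
```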
- the device may store conditions on state information of the user or object to perceive typical situations including sleeping and making or receiving a call and may control the sound output unit, the power supply, and/or the display in the device.
- the device may acquire state information of the user or object by detecting or tracking the user or object in real-time and control the sound output unit, the power supply, and/or the display in the device based on the acquired information.
- the embodiments may be recorded or stored in computer-readable media including program instructions to implement various operations embodied by a computer.
- the media may also include, alone or in combination with the program instructions, data files, data structures, and the like.
- the program instructions recorded on the media may be those specially designed and constructed for the purposes of embodiments, or they may be of the kind well-known and available to those having skill in the computer software arts.
- Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD ROM disks, DVDs and Blu-rays; magneto-optical media such as optical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, a universal serial bus (USB), a memory card, and the like.
- the computer-readable media may also be a distributed network, so that the program instructions are stored and executed in a distributed fashion.
- the program instructions may be executed by one or more processors.
- the computer-readable media may also be embodied in at least one application specific integrated circuit (ASIC) or Field Programmable Gate Array (FPGA), which executes (processes like a processor) program instructions.
- Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter.
- the above-described devices may be configured to act as one or more software modules in order to perform the operations of the above-described embodiments, or vice versa.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Databases & Information Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Neurosurgery (AREA)
- Psychiatry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A method of controlling a device, the method including: detecting a user located at a predetermined location and acquiring state information of the detected user; detecting at least one object located in an environment around the user and acquiring state information of the detected object; and controlling the device based on the state information of the user and the state information of the object.
Description
The present disclosure relates to a method of controlling a device based on state information of at least one of a user and an object, and a device using the method.
When a user uses a device having a display and a sound output, such as a TV or a PC, the device may need to be controlled automatically according to various situations. For example, when the user stops using the device, the device may automatically control its power or sound output without a user input.
According to the related art, since a device is controlled by considering only a state of a user, without considering the surroundings of the user, it is difficult for the device to perceive various possible situations and control itself accordingly.
According to an aspect of the present disclosure, there is provided a method of controlling a device based on state information of at least one of a user and an object.
According to an embodiment of the present disclosure, the device may store conditions on state information of the user or object to perceive typical situations including sleeping and making or receiving a call and may control the sound output unit, the power supply, and/or the display in the device.
In addition, the device may acquire state information of the user or object by detecting or tracking the user or object in real-time and control the sound output unit, the power supply, and/or the display in the device based on the acquired information.
FIG. 1 is a flowchart illustrating a method of controlling a device based on state information of a user or state information of an object, according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a method of controlling a device by determining whether a user is sleeping based on state information of a user or state information of an object, according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a method of controlling a device by determining whether a user is making or receiving a call based on state information of a user or state information of an object, according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a method of controlling a device by determining whether a user is using an object based on state information of a user or state information of the object, according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating a method of controlling a sound volume of a device or brightness of a display based on state information of a user or state information of an object, according to an embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating a method of transmitting state information of a user to another user based on the state information of the user or state information of an object, according to an embodiment of the present disclosure; and
FIG. 7 is a block diagram illustrating a device structure for controlling a device based on state information of a user or state information of an object, according to an embodiment of the present disclosure.
According to an aspect of the present disclosure, there is provided a method of controlling a device, the method including: detecting a user located at a predetermined location and acquiring state information of the detected user; detecting at least one object located in an area around the user and acquiring state information of the detected object; and controlling the device based on the state information of the user and the state information of the object.
The state information of the user may include at least one of a posture, a gesture, a body temperature, and an appearance of the user.
The state information of the object may include at least one of a location and a moving path of the object.
The controlling may include controlling at least one of a sound output, a display, a camera, and a power supply based on the state information of the user and the state information of the object.
The controlling may include: determining whether the user is sleeping, based on the state information of the user and the state information of the object; and controlling a sound output of the device based on a result of the determination.
The controlling may include: determining whether the user has moved for a predetermined time, based on the state information of the user; and controlling a power supply of the device based on a result of the determination.
The controlling may include: determining whether the user is making or receiving a call, based on the state information of the user and the state information of the object; and controlling a sound output of the device based on a result of the determination.
The controlling may include: measuring illumination of an area around the device based on the state information of the object; and controlling a display of the device based on the illumination.
The controlling may include transmitting the state information of the user to another user when it is detected based on the state information of the user that the user having a height less than a reference height is alone within a predetermined radius of the device.
The controlling may include: determining whether the user is using the device, based on the state information of the user and the state information of the object; and controlling a sound output or a power supply of the device based on a result of the determination.
This application claims the priority benefit of Korean Patent Application No. 10-2012-0084988, filed on August 2, 2012, in the Korean Intellectual Property Office, and U.S. Patent Application No. 13/572,221, filed on August 11, 2012, in the U.S. Patent Office, the disclosures of which are incorporated herein in their entirety by reference.
The present disclosure will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the disclosure are shown. In the following description and the accompanying drawings, well-known functions or constructions are not described in detail, since they would obscure the disclosure with unnecessary detail. In addition, like reference numbers are used to refer to like elements throughout the drawings.
The terminology used in the specification and the claims below must not be interpreted according to its common or dictionary meanings, but must be interpreted according to the meanings and concepts conforming to the technical spirit of the present disclosure, on the principle that an inventor may properly define terms in order to describe the disclosure in the best way. Therefore, since the embodiments disclosed in the description and the configurations shown in the drawings are merely exemplary embodiments of the present disclosure and do not represent all of its technical spirit, it should be understood that various equivalents and modifications replacing the embodiments may exist at the time of filing.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list.
According to an embodiment of the present disclosure, a device may control the device itself based on state information of a user and state information of an object. Alternatively, an external device capable of controlling the device based on state information of a user and state information of an object may control the device according to an embodiment of the present disclosure by being connected to the device. In this case, the device capable of being controlled may include a sound output unit, a display unit, and a power supply. The description below assumes that the device acquires state information of a user and state information of an object and controls itself based on the acquired information.
FIG. 1 is a flowchart illustrating a method of controlling a device based on state information of a user or state information of an object, according to an embodiment of the present disclosure.
Referring to FIG. 1, if the device including a sensor to detect the user detects the user in operation S101, the device acquires the state information of the user by using the sensor in operation S103. If the device including a sensor to detect the object detects the object in operation S105, the device acquires the state information of the object by using the sensor in operation S107. The sensor may be at least one of a motion sensor such as a motion detector, a visual sensor such as a camera, a sound sensor, a thermal sensor, a heat sensor, a temperature sensor, or another type of sensor that one of ordinary skill in the art would find suitable to detect the user or the object. The state information of the user may include a posture, a gesture, a body temperature, and an appearance of the user, and the state information of the object may include a location and a moving path of the object. However, the state information may include any information received from the above-mentioned sensors. For example, information about the user and the object that may be detected may be stored in a memory in advance, and the user and object may be detected based on the information stored in advance. The information about the object that may be stored may include location and shape information of the object, or any information that one of ordinary skill in the art would find suitable to detect the object, and the information about the object may be acquired in response to a user input and stored in the memory.
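For illustration only, the state information described above can be modeled as plain data structures. The sketch below is a hypothetical Python rendering, not part of the disclosure; the field names (holds, gaze_target, height, and so on) are assumptions chosen to support the determinations discussed in the following paragraphs.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class UserState:
    # Posture, gesture, body temperature, and appearance of the user.
    posture: Optional[str] = None             # e.g. "lying", "sitting"
    gesture: Optional[str] = None             # e.g. "hand_wave"
    body_temperature: Optional[float] = None  # degrees Celsius
    eyes_closed: Optional[bool] = None        # None when the eyes are not visible
    holds: Optional[str] = None               # name of an object the user is holding
    gaze_target: Optional[str] = None         # object the eyes are oriented towards
    height: Optional[float] = None            # metres, used for the child check of FIG. 6
    location: Optional[Point] = None

@dataclass
class ObjectState:
    # Location and moving path of a detected object.
    name: str                                 # e.g. "couch", "bed", "phone"
    location: Point
    moving_path: List[Point] = field(default_factory=list)
```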
In operation S109, the device is controlled based on the acquired state information of the user or the acquired state information of the object. Thus, according to an embodiment of the present disclosure, since not only the state information of the user but also the state information of the object may be considered, the device may control the various units included therein, such as the sound output, the display, and the power supply, by correctly perceiving various possible situations.
Otherwise, if the device does not detect the object in operation S105, the device controls itself by considering only the state information of the user in operation S111.
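The flow of FIG. 1 (operations S101 through S111) can then be sketched as follows; the device object and its detect/acquire/apply_control helpers are hypothetical stand-ins for the sensor and control units of the device, not an API from the disclosure.

```python
def control_device(device):
    # Sketch of the FIG. 1 flow (S101-S111) over a hypothetical device object.
    user = device.detect_user()                           # S101: sensor-based detection
    if user is None:
        return
    user_state = device.acquire_user_state(user)          # S103
    obj = device.detect_object()                          # S105
    if obj is not None:
        object_state = device.acquire_object_state(obj)   # S107
        device.apply_control(user_state, object_state)    # S109: both kinds of state
    else:
        device.apply_control(user_state, None)            # S111: user state only
```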
Methods of controlling a device based on state information of a user and state information of an object, according to embodiments of the present disclosure, will now be described.
FIG. 2 is a flowchart illustrating a method of controlling a device by determining whether a user is sleeping based on state information of a user or state information of an object, according to an embodiment of the present disclosure.
Referring to FIG. 2, the device including a sensor, such as a camera or a motion detection sensor, to detect the user or the object, detects the user or the object in operation S201, and the device acquires state information of the detected user or object in operation S203. In operation S205, the device determines whether the user is sleeping, by analyzing the acquired state information of the user or object. For example, the device may photograph the user or object with the camera and acquire the state information of the user or object by analyzing the captured image. Alternatively, when a motion of the user or object is detected by the motion detection sensor, the device may photograph the motion-detected user or object with the camera.
In detail, the device may determine whether the user is sleeping, for example, by analyzing a posture of the user, whether the user’s eyes are closed, a location of the user, and the like using the captured image. That is, the device may determine whether the user is sleeping, by determining whether the user lies down, whether the user’s eyes are closed, and whether the user is on a couch or a bed. In this case, the location of the user may be determined using location information of the couch or bed. When the camera cannot capture the eyes of the user because the user has rolled over, the device may determine whether the user is sleeping, based on a posture or location of the user.
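Under the heuristics just described (lying posture, closed eyes, location on a couch or bed, with a posture/location fallback when the eyes are not visible), one plausible sketch of the sleep determination is the following; the predicates, field names, and distance threshold are all assumptions:

```python
def near(a, b, threshold=1.0):
    # Hypothetical proximity test using the stored location of the couch or bed.
    return a is not None and abs(a[0] - b[0]) <= threshold and abs(a[1] - b[1]) <= threshold

def is_sleeping(user_state, object_states):
    resting_places = [o for o in object_states if o.name in ("couch", "bed")]
    on_resting_place = any(near(user_state.location, o.location) for o in resting_places)
    if user_state.eyes_closed is None:
        # Eyes not visible (e.g. the user rolled over): fall back to posture/location.
        return user_state.posture == "lying" and on_resting_place
    return user_state.posture == "lying" and user_state.eyes_closed and on_resting_place
```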
If it is determined that the user is sleeping, the device determines, using the motion detection sensor or the camera, whether the user is moving, and if a time the user does not move is greater than a reference time in operation S207, the device decreases a volume of a sound output thereof by a predetermined magnitude in operation S209. After the adjustment of the volume, the device recounts a time the user does not move, and if the recounted time is greater than the reference time in operation S207, the device again decreases the volume of the sound output by the predetermined magnitude in operation S209. As a result of decreasing the volume of the sound output due to the user's lack of movement, if the volume of the sound output in the device is 0 in operation S211, the device may control power thereof by determining whether there is a gesture of the user. Thus, if the volume of the sound output is 0, the user may control the device manually unless the user is sleeping.
If the user does not move his or her hand during a gesture input time in operation S213, the device turns off power in operation S215. For example, if the gesture input time is set to 3 seconds, it is determined whether the user moves his or her hand during those 3 seconds.
Otherwise, if it is detected in operation S213 that the user moves his or her hand during the gesture input time, the device determines that the user is awake and, in operation S217, increases the volume of the sound output to the volume before it was decreased in operation S209, or to a predetermined volume. Thereafter, the device again determines in operation S205 whether the user is sleeping.
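Combining operations S205 through S217, the volume ramp-down and the gesture-guarded power-off might be sketched as below; the reference time, volume step, and gesture window values are illustrative assumptions, as are the timing helpers on the hypothetical device object.

```python
import time

def handle_sleeping_user(device, reference_time=60.0, volume_step=2, gesture_window=3.0):
    # Sketch of FIG. 2 (S205-S217); all numeric values are assumptions.
    while device.is_sleeping():                                  # S205
        if device.seconds_without_movement() > reference_time:   # S207
            device.volume = max(0, device.volume - volume_step)  # S209
            device.reset_movement_timer()                        # recount no-movement time
        if device.volume == 0:                                   # S211
            if device.hand_moved_within(gesture_window):         # S213: gesture detected
                device.restore_volume()                          # S217: user is awake
            else:
                device.power_off()                               # S215
                return
        time.sleep(1.0)                                          # poll the sensors again
```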
FIG. 3 is a flowchart illustrating a method of controlling a device by determining whether a user is making or receiving a call, based on state information of a user or state information of an object, according to an embodiment of the present disclosure.
Referring to FIG. 3, the device including a sensor, such as a camera or a motion detection sensor, to detect the user or the object detects the user or the object in operation S301, and the device acquires state information of the detected user or object in operation S303. In operation S305, the device determines whether the user is making or receiving a call, by analyzing the acquired state information of the user or object. For example, the device may photograph the user or object with the camera and determine the state information of the user or object by analyzing the captured image. Alternatively, when a motion of the user or object is detected by the motion detection sensor, the device may photograph the motion-detected user or object with the camera.
In detail, the device may acquire information about a location of a communication device, whether the communication device is moving, and behavior, a posture, and a location of the user and determine based on the acquired information whether the user is using the communication device (operation S305). That is, the device may determine whether the user is making or receiving a call, by determining whether the communication device has moved closely to an ear or the face of the user, whether the user holds the communication device, and whether the user is viewing the communication device.
If it is determined that the user is using the communication device, the device mutes a volume of a sound output thereof in operation S307 so that the user can make or receive a call without noise. If it is determined in operation S309 that the user has ended the call, the device increases the volume of the sound output to the volume before being muted, in operation S311. For example, the device may determine whether the user has ended the call by determining, based on the state information of the user or object, whether the user has released the communication device from his or her hand or whether the communication device has been moved away from the ear.
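A minimal sketch of the mute-and-restore behavior of operations S305 through S311 follows, assuming an on_call flag derived from the communication-device analysis described above; the device attributes are hypothetical:

```python
def handle_call(device, on_call):
    # Sketch of FIG. 3: mute while the user makes or receives a call (S307),
    # restore the previous volume once the call has ended (S311).
    if on_call and not device.muted:
        device.previous_volume = device.volume
        device.volume, device.muted = 0, True
    elif not on_call and device.muted:
        device.volume, device.muted = device.previous_volume, False
```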
FIG. 4 is a flowchart illustrating a method of controlling a device by determining whether a user is using an object based on state information of a user or state information of the object, according to an embodiment of the present disclosure.
Referring to FIG. 4, the device including a sensor, such as a camera or a motion detection sensor, to detect the user or the object, detects the user or the object in operation S401, and the device acquires state information of the detected user or object in operation S403. In operation S405, the device determines whether the user is using the object, by analyzing the acquired state information of the user or object. For example, the device may photograph the user or object with the camera and acquire the state information of the user or object by analyzing the captured image. Alternatively, when a motion of the user or object is detected by the motion detection sensor, the device may photograph the motion-detected user or object with the camera.
In detail, the device may determine whether the user is using the object, by analyzing a posture of the user, the orientation of the eyes of the user, a state and location of the object, and the like using the captured image.
For example, if the user holds tableware, and the eyes of the user are oriented towards the tableware, it may be determined that the user is eating. If the user holds a book, and the eyes of the user are oriented towards the book, it may be determined that the user is reading the book. If the user holds a writing instrument, and the eyes of the user are oriented towards the writing instrument, it may be determined that the user is writing. If the user holds a portable electronic device, such as a mobile phone, a smart phone, a laptop computer, or a tablet computer, and the eyes of the user are oriented towards the portable electronic device, it may be determined that the user is using the portable electronic device.
When the user uses the object, the eyes of the user are oriented towards the object in most cases, and since the user holds the object, the device may determine whether the user is using the object, by tracking the eyes or hands of the user.
If it is determined in operation S405 that the user is using the object and has been using it for a time greater than a reference time, the device decreases the volume of the sound output by a predetermined magnitude in operation S407 so that the user can concentrate on the object being used. After adjusting the volume, the device recounts the time during which the user uses the object, and if the recounted time again exceeds the reference time in operation S405, the device decreases the volume of the sound output by the predetermined magnitude in operation S407. If, as a result of these decreases due to the use of the object, the volume of the sound output reaches 0 in operation S409, the device may query the user in operation S411 about whether to turn off its power. In this case, the device sets an input limit time, and if there is no input within the input limit time, or if the user confirms that the device should turn off its power, the device turns off its power in operation S413. Thus, the device may decrease the volume of the sound output or turn off its power upon determining that the user is not using the device but is using the object instead.
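For illustration, operations S405 to S413 may be sketched as follows, reusing the illustrative constants of the FIG. 2 sketch; the object/user methods, the query interface, and the input limit time are assumptions, not disclosed particulars:

```python
REFERENCE_TIME = 60.0     # same illustrative values as in the FIG. 2 sketch
VOLUME_STEP = 5
INPUT_LIMIT_TIME = 10.0   # illustrative input limit time for the query (S411)

def is_using_object(user, obj):
    """Eye/hand heuristic of operation S405: the user holds the object
    and the user's eyes are oriented towards it."""
    return obj.held_by(user) and user.looking_at(obj)

def monitor_object_use(device, user, obj):
    while device.volume > 0:                                     # S405-S409
        if is_using_object(user, obj) and \
                device.seconds_using(obj) >= REFERENCE_TIME:     # S405
            device.volume = max(0, device.volume - VOLUME_STEP)  # S407
            device.reset_use_timer(obj)      # recount after each decrease
        device.refresh_state()               # re-capture image / sensor data
    # Volume has reached 0 (S409): ask whether to power off (S411).
    answer = device.query_user("Turn off power?", timeout=INPUT_LIMIT_TIME)
    if answer is None or answer == "yes":    # no input in time, or confirmed
        device.power_off()                   # S413
```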
FIG. 5 is a flowchart illustrating a method of controlling a sound volume of a device or brightness of a display based on state information of a user or state information of an object, according to an embodiment of the present disclosure.
Referring to FIG. 5, the device, including a sensor such as a camera or a motion detection sensor to detect the user, detects the user in operation S501, and the device acquires state information of the detected user in operation S503. In operation S505, the device determines whether the user is using the device, by acquiring state information of the user such as the behavior, location, or posture of the user. Alternatively, the device may control the sound volume of the device or the brightness of the display without first determining whether the user is using the device.
If it is determined in operation S505 that the user is using the device, the device measures noise or illumination of an area around the device by using a microphone or an illumination measurement sensor, respectively, in operation S507. In operation S509, the device adjusts the sound volume or the brightness of the display based on the measured noise or illumination. For example, the device may adjust the sound volume to be higher than a magnitude of the noise, or may adjust the brightness of the display to be brighter than the illumination. In addition, whether the device is controlled based on the noise or illumination in the area around the device may be set by the user.
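A schematic sketch of operations S505 to S509 follows; the measurement calls, the margin values, and the unit handling are assumptions for illustration (a real implementation would map decibels and lux onto the device's own volume and brightness scales):

```python
VOLUME_MARGIN = 3        # keep the volume this many steps above the noise
BRIGHTNESS_MARGIN = 50   # keep the display this much brighter than ambient

def adapt_to_surroundings(device):
    if not device.user_is_using_device():        # S505 (may be skipped)
        return
    noise = device.microphone.measure_noise()    # S507: ambient noise
    lux = device.illumination_sensor.measure()   # S507: ambient illumination
    # S509: schematic only; real units would be converted to device scales.
    device.volume = noise + VOLUME_MARGIN
    device.display.brightness = lux + BRIGHTNESS_MARGIN
```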
FIG. 6 is a flowchart illustrating a method of transmitting state information of a user to another user based on the state information of the user or state information of an object, according to an embodiment of the present disclosure.
Referring to FIG. 6, the device, including a sensor to detect the user or the object, detects the user or the object in operation S601, and the device acquires state information of the detected user or object in operation S603. As a non-limiting example, the sensor may be a camera or a motion detection sensor, but the sensor is not limited thereto, as discussed previously. The device may determine a situation of the user by analyzing the acquired state information of the user or object. For example, the device may photograph the user or object with the camera and acquire the state information of the user or object by analyzing the captured image. Alternatively, when a motion of the user or object is detected by the motion detection sensor, the device may photograph the motion-detected user or object with the camera.
As a result of analyzing the state information of the user or object, if it is determined in operations S605 and S607 that a user having a height less than a reference height, such as a child, has been alone without any other user for a predetermined time, the device transmits the state information of the user to another user in operation S609.
For example, the device detects a child and transmits state information of the child to a parent or guardian, since it may be dangerous for the child to be alone without the parent or guardian for a predetermined time. Whether the detected user is a child may be determined based on whether the user's height is less than the reference height, as in operation S605, and/or based on another criterion. The state information of the child transmitted to the parent or guardian may include at least one of a posture, a gesture, a body temperature, and an appearance of the child, and may further include an image captured by photographing the child or a result of analyzing the captured image.
Although, as a non-limiting example, the user whose state information is transmitted is identified by determining whether the user's height is less than the reference height, such a user may also be identified based on other information indicating that the user is a child. Likewise, the condition under which state information of a user is transmitted may be set to a state other than the user being alone for the predetermined time.
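For illustration, operations S605 to S609 may be sketched as follows; the reference height, the time limit, the user attributes, and the notification call are all hypothetical placeholders, not disclosed values:

```python
REFERENCE_HEIGHT_CM = 120   # illustrative reference height (S605)
ALONE_TIME_LIMIT = 300.0    # illustrative "predetermined time" in seconds (S607)

def check_child_alone(device, detected_users):
    """Notify a guardian when a short user appears to be alone (S605-S609)."""
    short = [u for u in detected_users
             if u.estimated_height_cm < REFERENCE_HEIGHT_CM]    # S605
    others = [u for u in detected_users
              if u.estimated_height_cm >= REFERENCE_HEIGHT_CM]
    for child in short:
        if not others and device.seconds_alone(child) >= ALONE_TIME_LIMIT:  # S607
            device.send_to_guardian({                           # S609
                "posture": child.posture,
                "gesture": child.gesture,
                "body_temperature": child.body_temperature,
                "appearance": child.appearance,
                "image": device.camera.capture(),
            })
```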
FIG. 7 is a block diagram illustrating a device structure to control a device 100 based on state information of a user or state information of an object, according to an embodiment of the present disclosure.
The device 100 may be an electronic device, for example, a PC, a laptop computer, a mobile phone, a tablet computer, a navigation terminal, a smart phone, a Personal Digital Assistant (PDA), a TV, a smart TV, a Portable Multimedia Player (PMP), or a digital broadcasting receiver.
Referring to FIG. 7, the device 100 may include a processor 110, a display 120, a power supply 130, a sound output 140, a sensor 150, a camera 160, an input 170, and a memory 180.
The processor 110 may acquire state information of the user or state information of the object and control the device 100 based on the acquired information, according to an embodiment of the present disclosure; the acquiring of the state information may be performed by executing programs stored in the memory 180.
The display 120 may display an image stored in the memory 180 or an image captured by the camera 160.
The power supply 130 may turn on or off power of the device 100.
The sound output 140 may output sound associated with images or sound of a media file stored in the memory 180.
The sensor 150 may include a motion detection sensor 151 to detect a motion of the user or object and an illumination measurement sensor 152 to measure illumination. Although the motion detection sensor 151 and the illumination measurement sensor 152 are shown in this embodiment, the sensor 150 is not limited thereto. The sensor may be at least one of a motion sensor such as a motion detector, a visual sensor such as a camera, a sound sensor, a thermal or temperature sensor, or any other type of sensor that one of ordinary skill in the art would find suitable to detect the user or the object.
The camera 160 may photograph an area around the device 100 that includes the user or object.
The input 170 may include a remote control 171 to provide an interface to receive a user input and a microphone 172 to measure noise in a surrounding area or receive a sound input from the user.
The programs stored in the memory 180 may be classified into a plurality of modules according to their functions, such as an application module 181, a communication module 182, a camera module 183, a media play module 184, a power module 185, a gesture recognition module 186, and so forth.
The communication module 182 may transmit and receive data to and from another external device via a network.
The camera module 183 may control the camera 160 to photograph an area around the device 100 that includes the user or the object.
The media play module 184 may reproduce a moving picture or a sound file stored in the memory 180 and output the reproduced moving picture or sound file through the display 120 or the sound output 140, respectively.
The power module 185 may control the power supply 130 to control power of the device 100.
The gesture recognition module 186 may recognize a gesture of the user from an image captured by the camera 160.
The application module 181 may acquire state information of the user or state information of the object and control at least one of the sound output 140, the display 120, the sensor 150, and the camera 160 that are included in the device 100 based on the acquired information. In this case, the state information of the user may include at least one of a posture, a gesture, a body temperature, and an appearance of the user, and the state information of the object may include at least one of a location and a moving path of the object.
The application module 181 may determine whether the user is sleeping, by analyzing a posture of the user, whether the user’s eyes are closed, a location of the user, and the like using a captured image. If it is determined that the user is sleeping, the application module 181 may determine whether the user moves, by using the motion detection sensor 151 or the camera 160, and may control a volume of the sound output 140 based on whether a time the user does not move is greater than a reference time.
In addition, the application module 181 may control a volume of the sound output 140 or the power supply 130 in the device 100 by determining whether the user is making or receiving a call. In detail, the application module 181 may acquire information about a location of a communication device, whether the communication device is moving, and a posture and a location of the user using a captured image and determine based on the acquired information whether the user is using the communication device. If it is determined that the user is using the communication device, the application module 181 mutes a volume of the sound output 140 or decreases a volume of the sound output to a predetermined volume, and if it is determined that the user has ended the call, the application module 181 may increase the volume of the sound output 140 to the volume before being muted or a predetermined volume.
In addition, the application module 181 may control a volume of the sound output 140 or the power supply 130 in the device 100 by determining whether the user is using the object. To this end, the application module 181 may determine whether the user is using the object by analyzing a posture of the user, the orientation of the eyes of the user, a state and location of the object, and the like using a captured image.
In addition, the application module 181 may control a volume of the sound output 140 or brightness of the display 120 based on noise or illumination of a surrounding area.
In addition, the application module 181 may transmit state information of the user based on the state information of the user or state information of the object. For example, as a result of analyzing the state information of the user or the state information of the object, if it is determined that the user, for example, a child, having a height less than a reference height, is alone without another user, the application module 181 may transmit the state information of the user to a predetermined user.
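Taken together, the behaviors of the application module 181 described above can be summarized in a single polling routine. The following is a hypothetical sketch only: the `state` fields, the `is_sleeping` test, and the dispatch structure are assumptions, and the handlers reuse the earlier sketches rather than any actual module implementation:

```python
def application_module_tick(device):
    """One polling pass over the situations of Table 1 (all assumptions)."""
    state = device.acquire_state()     # user + object state information
    if state.user is None:
        handle_no_user(device)         # sketched after Table 1 below
    elif is_sleeping(state.user):
        monitor_sleeping_user(device)                          # cf. FIG. 2
    elif is_on_call(state.user, state.phone):
        handle_call(device, state.user, state.phone)           # cf. FIG. 3
    elif state.object is not None and is_using_object(state.user, state.object):
        monitor_object_use(device, state.user, state.object)   # cf. FIG. 4
    else:
        adapt_to_surroundings(device)                          # cf. FIG. 5
        check_child_alone(device, state.all_users)             # cf. FIG. 6
```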
Table 1 shows an example of a method of controlling a device based on state information of a user and state information of an object according to an embodiment of the present disclosure.
Table 1
Situation | State information of user | State information of object | Device control |
---|---|---|---|
Sleeping | Orientation of the eyes, a posture, and/or a location | User is located on a couch or bed | Control a sound volume or power |
Calling | A posture, a location, and/or behavior | A moving path and a location of a communication device | Control a sound volume or power |
Using another device | A posture, orientation of the eyes, and/or holding another device | A location | Control a sound volume or power |
Detecting surrounding environment | User is using the device | Information about noise or brightness in an environment around the device | Control a sound volume or a display |
Transmitting state information of user | State information of the user | (A state of the user may be determined by considering state information of the object) | Transmit state information of the user |
No user | The user is not detected | | Control a sound volume or power |
In Table 1, cases of sleeping, making or receiving a call, using another device, detecting a surrounding area, and transmitting state information of the user may correspond to the embodiments shown in FIGS. 2 to 6, respectively.
In addition, referring to Table 1, when the device cannot detect any user around it, the device may determine that it is not being used and, based on the time during which no user is detected, control its sound output or power supply to prevent unnecessary power consumption.
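A minimal sketch of this "no user" case follows; the timeout value and device methods are assumptions for illustration:

```python
NO_USER_TIMEOUT = 600.0   # illustrative time without any detected user

def handle_no_user(device):
    """'No user' row of Table 1: after no user has been detected for a
    while, cut the volume and power to avoid unnecessary consumption."""
    if device.seconds_without_user() >= NO_USER_TIMEOUT:
        device.volume = 0
        device.power_off()
```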
Thus, according to an embodiment of the present disclosure, the device may store conditions on the state information of the user or object for recognizing typical situations, such as sleeping and making or receiving a call, and may control the sound output, the power supply, and/or the display in the device accordingly.
In addition, the device may acquire state information of the user or object by detecting or tracking the user or object in real time and may control the sound output, the power supply, and/or the display in the device based on the acquired information.
The above-described embodiments may be recorded or stored in computer-readable media including program instructions to implement various operations embodied by a computer. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like. The program instructions recorded on the media may be those specially designed and constructed for the purposes of embodiments, or they may be of the kind well-known and available to those having skill in the computer software arts. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD ROM disks, DVDs and Blu-rays; magneto-optical media such as optical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, a universal serial bus (USB), a memory card, and the like. The computer-readable media may also be a distributed network, so that the program instructions are stored and executed in a distributed fashion. The program instructions may be executed by one or more processors. The computer-readable media may also be embodied in at least one application specific integrated circuit (ASIC) or Field Programmable Gate Array (FPGA), which executes (processes like a processor) program instructions. Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter. The above-described devices may be configured to act as one or more software modules in order to perform the operations of the above-described embodiments, or vice versa.
While the present disclosure has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various deletions, replacements, and changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the following claims. All modifications within the equivalent scope of claims are involved in the scope of the present disclosure.
Claims (15)
- A method of controlling a device, the method comprising: detecting a user located at a predetermined location and acquiring state information of the detected user; detecting at least one object located around the user and acquiring state information of the detected object; and controlling the device based on the state information of the user and the state information of the object.
- The method of claim 1, wherein the controlling comprises: determining whether the user is sleeping, based on the state information of the user and the state information of the object; and controlling a sound output of the device based on a result of the determination.
- The method of claim 1, wherein the controlling comprises: determining whether the user has moved for a predetermined time, based on the state information of the user; and controlling a power supply of the device based on a result of the determination.
- The method of claim 1, wherein the controlling comprises: determining whether the user is making or receiving a call, based on the state information of the user and the state information of the object; and controlling a sound output of the device based on a result of the determination.
- The method of claim 1, wherein the controlling comprises: measuring illumination of an area around the device based on the state information of the object; and controlling a display of the device based on the illumination.
- The method of claim 1, wherein the controlling comprises transmitting the state information of the user to another user when it is detected based on the state information of the user that the user having a height less than a reference height is alone within a predetermined radius of the device.
- The method of claim 1, wherein the controlling comprises: determining whether the user is using the device, based on the state information of the user and the state information of the object; and controlling a sound output or a power supply of the device based on a result of the determination.
- A device comprising: a sensor to detect a user located at a predetermined location and at least one object located in an area around the user; and a processor to acquire state information of the user and state information of the at least one object and to control the device based on the state information of the user and the state information of the object.
- The device of claim 8, wherein the processor determines whether the user is sleeping, based on the state information of the user and the state information of the object, and controls a sound output of the device based on a result of the determination.
- The device of claim 8, wherein the sensor includes a motion detection sensor to detect a motion of the user or object, and the processor determines whether the user has moved for a predetermined time, based on the state information of the user, and controls a power supply of the device based on a result of the determination.
- The device of claim 8, wherein the processor determines whether the user is making or receiving a call, based on the state information of the user and the state information of the object, and controls a sound output of the device based on a result of the determination.
- The device of claim 8, wherein the sensor comprises an illumination measurement sensor to measure illumination of an area around the device based on the state information of the object, and the processor controls a display of the device based on the illumination.
- The device of claim 8, wherein the processor transmits the state information of the user to another user when it is detected based on the state information of the user that the user having a height less than a reference height is alone within a predetermined radius of the device.
- The device of claim 8, wherein the processor determines whether the user is using the device, based on the state information of the user and the state information of the object, and controls a sound output or a power supply of the device based on a result of the determination.
- A computer-readable recording medium storing a computer-readable program to execute a method of controlling a device, the method comprising: detecting, by a processor, a user located at a predetermined location and acquiring state information of the detected user; detecting at least one object located in an area around the user and acquiring state information of the detected object; and controlling the device based on the state information of the user and the state information of the object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP13826432.0A EP2834721A4 (en) | 2012-08-02 | 2013-05-28 | Method for controlling device, and device using the same |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020120084988A KR20140018630A (en) | 2012-08-02 | 2012-08-02 | Method and apparatus for controlling a device |
KR10-2012-0084988 | 2012-08-02 | ||
US13/572,221 | 2012-08-10 | ||
US13/572,221 US20140047464A1 (en) | 2012-08-10 | 2012-08-10 | Method and apparatus for measuring tv or other media delivery device viewer's attention |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014021547A1 true WO2014021547A1 (en) | 2014-02-06 |
Family
ID=50028187
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2013/004654 WO2014021547A1 (en) | 2012-08-02 | 2013-05-28 | Method for controlling device, and device using the same |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP2834721A4 (en) |
WO (1) | WO2014021547A1 (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2267674A1 (en) * | 2009-06-11 | 2010-12-29 | Koninklijke Philips Electronics N.V. | Subject detection |
- 2013-05-28: WO PCT/KR2013/004654 patent/WO2014021547A1/en active Application Filing
- 2013-05-28: EP EP13826432.0A patent/EP2834721A4/en not_active Withdrawn
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0919906A2 (en) | 1997-11-27 | 1999-06-02 | Matsushita Electric Industrial Co., Ltd. | Control method |
US20060139314A1 (en) * | 2002-05-28 | 2006-06-29 | Matthew Bell | Interactive video display system |
KR20110010906A (en) * | 2009-07-27 | 2011-02-08 | 삼성전자주식회사 | Apparatus and method for controlling of electronic machine using user interaction |
US20110080529A1 (en) * | 2009-10-05 | 2011-04-07 | Sony Corporation | Multi-point television motion sensor system and method |
US20110134251A1 (en) * | 2009-12-03 | 2011-06-09 | Sungun Kim | Power control method of gesture recognition device by detecting presence of user |
KR20120080070A (en) * | 2011-01-06 | 2012-07-16 | 삼성전자주식회사 | Electronic device controled by a motion, and control method thereof |
Non-Patent Citations (1)
Title |
---|
See also references of EP2834721A4 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10531230B2 (en) | 2015-09-16 | 2020-01-07 | Ivani, LLC | Blockchain systems and methods for confirming presence |
US11533584B2 (en) | 2015-09-16 | 2022-12-20 | Ivani, LLC | Blockchain systems and methods for confirming presence |
US9992407B2 (en) | 2015-10-01 | 2018-06-05 | International Business Machines Corporation | Image context based camera configuration |
WO2017108143A1 (en) * | 2015-12-24 | 2017-06-29 | Intel Corporation | Nonlinguistic input for natural language generation |
US20170330561A1 (en) * | 2015-12-24 | 2017-11-16 | Intel Corporation | Nonlinguistic input for natural language generation |
Also Published As
Publication number | Publication date |
---|---|
EP2834721A4 (en) | 2015-12-09 |
EP2834721A1 (en) | 2015-02-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13826432; Country of ref document: EP; Kind code of ref document: A1 |
| REEP | Request for entry into the european phase | Ref document number: 2013826432; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 2013826432; Country of ref document: EP |
| NENP | Non-entry into the national phase | Ref country code: DE |