KR101774236B1 - Apparatus and method for context-awareness of user - Google Patents
- Publication number
- KR101774236B1
- Authority
- KR
- South Korea
- Prior art keywords
- user
- situation
- sound
- predetermined size
- sound intensity
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
Abstract
A user situation awareness apparatus is disclosed. The apparatus may include a data collection unit for collecting environmental sounds around a user, a data analysis unit for analyzing whether the collected environmental sounds include speech, and a situation recognition unit for recognizing the user's situation as either a personal situation or a plurality of situations (i.e., a situation in which multiple people are present) depending on whether speech is included. The present invention can provide a user context awareness apparatus with a higher recognition rate by recognizing the user's situation based on ambient environmental sound information.
Description
The present invention relates to a device and method for recognizing a user's situation by using surrounding environmental sound information.
In general, context awareness refers to a technique of detecting a context, i.e., information capable of defining and characterizing an entity in a ubiquitous environment, and making an appropriate situational judgment. Context recognition technology, which recognizes the changing state of an entity such as a user, environment, or device and determines an intelligent service in an environment where computing objects are embedded, is one of the most important technologies of ubiquitous computing.
BACKGROUND ART
Recently, with the development and spread of terminal-related technologies, small mobile terminals such as smart phones have become personal necessities. In addition, as smartphones have become commonplace, operators are offering various services to satisfy users' needs.
In particular, recent smartphone related companies are aware of the situation of users based on context recognition technology and are making efforts to provide various services according to the perceived situation.
As part of this effort, a music recommendation service that recommends optimal music to the user based on the current weather, temperature, and time has been developed and is being provided. However, this music recommendation service has limitations because it recommends music based on external factors rather than the user's emotion.
The background of the present invention is disclosed in Korean Patent Publication No. 2010-0018221 (published on Feb. 22, 2010).
The present invention provides a device and method for recognizing a user's situation based on information on surrounding environment sounds, which can more clearly recognize a user's situation.
According to a first aspect of the present invention, there is provided a user situation awareness apparatus comprising: a data collection unit for collecting environmental sounds around a user; a data analysis unit for analyzing whether the collected environmental sounds include speech; and a situation recognition unit for recognizing the user's situation as either a personal situation or a plurality of situations according to whether the speech is included.
According to an embodiment of the present invention, the data analysis unit may calculate a frequency standard deviation through frequency analysis of the environmental sounds collected by the data collection unit, and determine whether speech is included based on the frequency standard deviation.
According to an embodiment of the present invention, the frequency standard deviation may be calculated by extracting a representative frequency spectrum of the environmental sound collected by the data collection unit, and analyzing the extracted frequency spectrum.
According to an embodiment of the present invention, the data analyzing unit analyzes the sound intensity of the environmental sound collected by the data collecting unit, and the situation recognizing unit recognizes the dynamic level of the user's situation based on the sound intensity.
According to an embodiment of the present invention, the situation recognition unit can recognize the dynamic level of the user's situation as either a static situation or a dynamic situation.
According to an embodiment of the present invention, the situation recognition unit can recognize the situation of a user by combining the determination based on whether the voice is included and the determination based on the sound intensity.
According to an embodiment of the present invention, the situation recognition unit may consider the time point at which the environmental sound was collected by the data collection unit when recognizing the situation of the user.
According to an embodiment of the present invention, the context awareness unit may consider user activity information when recognizing the context of a user.
According to one embodiment of the present invention, the user activity information may be information on acceleration of the user context aware device.
According to one embodiment of the present application, the user context awareness apparatus according to the first aspect of the present invention may further comprise a sensor for measuring the acceleration.
According to one embodiment of the present invention, the user context awareness apparatus according to the first aspect of the present invention may further include a service provision unit for providing a service corresponding to a status of the user recognized by the context awareness unit.
According to an embodiment of the present invention, the service providing unit may provide a recommendation music corresponding to a situation of a user recognized by the context recognition unit.
According to an embodiment of the present invention, the service providing unit may provide the recommendation music corresponding to the emotional state of the user, selected from a music file stored in the service providing apparatus or received from an external server connected through a communication network.
According to a second aspect of the present invention, there is provided a method for recognizing a user situation by a user context aware apparatus, comprising the steps of: (a) collecting environmental sounds around the user; (b) analyzing whether or not the collected environmental sound includes speech; And (c) recognizing the user's situation as either a personal situation or a plurality of situations according to whether or not the voice is included.
According to an embodiment of the present invention, the step (b) may include calculating a frequency standard deviation through frequency analysis of the environmental sound collected in the step (a), and determining whether speech is included based on the frequency standard deviation.
According to an embodiment of the present invention, the step (b) may analyze the sound intensity of the environmental sound collected in the step (a), and the step (c) may recognize the dynamic level of the user's situation based on the sound intensity.
According to an embodiment of the present invention, the step (c) may take into consideration at least one of information related to the time when the environmental sound is collected in the step (a) and the user activity information.
According to an embodiment of the present invention, the user activity information is information on the acceleration of the user context aware device, and the user context awareness method according to the second aspect may further include, before the step (c), measuring the acceleration.
According to an embodiment of the present invention, a method of recognizing a user situation according to a second aspect of the present invention may further include the step of (d) providing a service corresponding to a situation of a user recognized in the step (c).
According to a third aspect of the present invention, there is provided a computer-readable recording medium storing a program for causing a computer to execute the method for recognizing a user situation according to the second aspect of the present invention.
The above-described task solution is merely exemplary and should not be construed as limiting the present disclosure. In addition to the exemplary embodiments described above, there may be additional embodiments described in the drawings and the detailed description of the invention.
According to any of the above-described task solutions, it is possible to determine whether the user's situation is a personal situation or a plurality of situations by judging whether a voice is included in the surrounding environment sound. Whether speech is included can be determined by analyzing the frequency standard deviation of the ambient sound.
Also, based on the sound intensity of the surrounding environment sound, it is possible to recognize whether the user's situation is a static situation or a dynamic situation.
In addition, the user's situation can be more clearly and specifically perceived by comprehensively considering the presence or absence of speech in the ambient environment sound and the sound intensity of ambient environment sound.
In addition, the user's situation can be determined more precisely and accurately by considering not only the information on the surrounding environment sounds, but also information related to the time at which the surrounding environment sounds were collected and the user activity information.
In addition, the present invention can provide a music service suited to a user's emotional state by recommending music suitable for a recognized user's situation.
It is to be understood, however, that the technical scope of the present invention is not limited to the above-described technical problems, and other technical problems may exist.
1 is a block diagram of a user context aware device according to one embodiment of the present invention.
2 is a detailed block diagram of a user context aware device according to one embodiment of the present invention.
FIG. 3 is a conceptual diagram illustrating an example of a process in which a user context aware apparatus according to an exemplary embodiment of the present invention classifies and infers individual situations and a plurality of situations.
FIG. 4 is a conceptual diagram illustrating an example of a process in which a user context aware apparatus according to an embodiment of the present invention classifies and infers a dynamic situation and a static situation (the dynamic level of a user's situation).
FIG. 5 is a conceptual diagram illustrating an example of a process of recognizing a user's situation by comprehensively considering a personal situation, a plurality of situations, a dynamic situation, and a static situation according to an embodiment of the present invention.
6 is a flow diagram of a method for recognizing a user situation according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. It should be understood, however, that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In the drawings, the same reference numbers are used throughout the specification to refer to the same or like parts.
Throughout this specification, when a part is referred to as being "connected" to another part, this includes not only the case where it is "directly connected" but also the case where it is "electrically connected" with another element interposed therebetween.
Throughout this specification, when a member is "on" another member, this includes not only the case where the member is in contact with the other member, but also the case where another member is present between the two members.
Throughout this specification, when an element is described as "including" another element, it may further include other elements, rather than excluding them, unless specifically stated otherwise.
The terms "about," "substantially," and the like used throughout this specification mean at or close to the stated value when manufacturing and material tolerances inherent in the stated meaning are presented, and are used to prevent an unscrupulous infringer from unfairly exploiting a disclosure in which exact or absolute figures are stated. The term "step of (doing something)" or "step of" used throughout this specification does not mean a "step for".
In this specification, the term "part" includes a unit realized by hardware, a unit realized by software, and a unit realized using both. Further, one unit may be implemented using two or more pieces of hardware, or two or more units may be implemented by one piece of hardware. Some of the operations or functions described in this specification as being performed by a terminal or device may instead be performed by a server connected to that terminal or device. Likewise, some of the operations or functions described as being performed by the server may be performed by a terminal or device connected to the server. Hereinafter, an embodiment of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a block diagram of a user context aware device according to one embodiment of the present invention, and FIG. 2 is a detailed block diagram of a user context aware device according to an embodiment of the present invention.
Referring to FIG. 1, the user situation recognition apparatus may include a data collection unit 110, a data analysis unit 120, and a situation recognition unit 130.
The
In addition, the
The
In the embodiment of the present invention, the soundscape may refer to noise, voices, and the like generated in the vicinity of the user.
The
The three-axis acceleration sensor measures the dynamic force of the user such as acceleration, vibration, and shock, and can output a sensing signal corresponding to the motion state of the user. Specifically, the three-axis acceleration sensor can sense the movement of the user in three axial directions, that is, the acceleration and deceleration states, and output a sensing signal corresponding thereto. Also, the three-axis gyro sensor senses the user's height, rotation, tilt, and the like, detects the movement direction and acceleration of the user, and outputs a corresponding sensing signal.
The
The
For reference, a module may mean a functional or structural combination of hardware for carrying out the technical idea of the present invention, or of software for driving that hardware. For example, the module may mean a logical unit of predetermined code and a hardware resource for executing that code; it does not necessarily mean physically connected code or a single kind of hardware, as can be easily deduced by an average expert in the technical field.
The
Meanwhile, the
The
Since the frequency variation is large in the case of speech, whether speech is included can be inferred by analyzing the frequency standard deviation of the environmental sound. That is, the data analysis unit 120 may determine that speech is included when the frequency standard deviation exceeds a predetermined size.
Specifically, since speech occupies a frequency range of about 200 Hz to 3,500 Hz, the frequency standard deviation of the collected environmental sound increases when speech is present. Accordingly, whether the voice is included can be determined according to whether the frequency standard deviation of the environmental sound exceeds a predetermined size.
For example, when the frequency standard deviation of the environmental sound exceeds 150, the data analysis unit 120 may determine that speech is included; otherwise, it may determine that speech is not included.
The frequency standard deviation of the environmental sound may be calculated by extracting a representative frequency spectrum of the environmental sound collected by the data collection unit 110 and analyzing the extracted frequency spectrum.
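As a concrete illustration of this analysis, the frequency standard deviation can be computed from the representative spectrum as the magnitude-weighted spread of frequency. The patent does not pin down the exact statistic, so the Python sketch below (using the illustrative threshold of 150 from the text) is one plausible reading, not the claimed implementation; the function names are ours.

```python
import numpy as np

def frequency_std(samples, sample_rate):
    """Magnitude-weighted standard deviation of frequency (Hz).

    One plausible reading of the text's "frequency standard deviation";
    the exact statistic is not specified in the patent.
    """
    spectrum = np.abs(np.fft.rfft(samples))          # representative frequency spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    weights = spectrum / (spectrum.sum() + 1e-12)    # normalize to a distribution
    mean_f = np.sum(freqs * weights)                 # spectral centroid
    return float(np.sqrt(np.sum(weights * (freqs - mean_f) ** 2)))

def contains_speech(samples, sample_rate, threshold=150.0):
    # Speech spans roughly 200-3,500 Hz, which widens the spectral spread.
    return frequency_std(samples, sample_rate) > threshold
```

A pure tone concentrates its energy at a single frequency and yields a spread near zero, while broadband speech or conversation spreads energy over hundreds of hertz, which is what the threshold of 150 separates.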
The expected value of the decibel (sound intensity) of the environmental sound collected according to the user's activity status, the actually measured decibel, the expected value of the frequency, the actually calculated frequency and the frequency standard deviation are shown in Table 1 below.
In Table 1, it can be seen that in the case of drive (plural) and rest (plural), which correspond to a plurality of situations, the actually calculated frequency standard deviation exceeds 150, whereas in the case of work, exercise, sleeping, studying, and resting (personal), which correspond to personal situations, the actually calculated frequency standard deviation is 150 or less.
In addition, the data analysis unit 120 may include a frequency analysis module 122 that performs the frequency analysis described above.
The data analysis unit 120 may measure or analyze the sound intensity of the environmental sound collected by the data collection unit 110.
The
The data analysis unit 120 may analyze the acceleration signal collected by the inertial sensor module 114.
Specifically, the activity information generation module 126 may generate the activity information of the user using the sensing signal. More specifically, the activity information generation module 126 may generate the activity information of the user using the acceleration signal among the sensing signals sensed by the inertial sensor module 114.
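The patent does not specify how the activity information generation module 126 turns acceleration samples into activity information. The sketch below is a hypothetical proxy, not the claimed method: it flags motion when the variance of the 3-axis acceleration magnitude exceeds an assumed threshold (the function name and the 1.5 threshold are our assumptions).

```python
import math

def activity_level(accel_samples, var_threshold=1.5):
    """Classify accelerometer data [(ax, ay, az), ...] (m/s^2) as moving/still.

    Variance of the acceleration magnitude is a hypothetical motion proxy;
    the threshold of 1.5 (m/s^2)^2 is an assumption, not from the patent.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in accel_samples]
    mean = sum(mags) / len(mags)
    variance = sum((m - mean) ** 2 for m in mags) / len(mags)
    return "moving" if variance > var_threshold else "still"
```

A device resting on a table sees a nearly constant gravity vector (low variance), while walking or vehicle motion produces fluctuating magnitudes (high variance).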
Meanwhile, the
The
As described above, the personal situation and the plurality of situations are classified according to whether the environmental sounds (for example, sound data input to the audio input module 112) include speech.
FIG. 3 is a conceptual diagram illustrating an example of a process in which a user context aware apparatus according to an exemplary embodiment of the present invention classifies and infer individual situations and a plurality of situations.
Referring to FIG. 3 together with Table 1, in situations where speech is unlikely to be included in the ambient sound, such as work, drive (personal), exercise, sleep, study, and rest (personal), the user's situation can be recognized as a personal situation. Conversely, in situations where people usually converse with each other, such as drive (plural) and rest (plural), speech is highly likely to be included, and the user's situation can be recognized as a plurality of situations.
In addition, the situation recognition unit 130 may recognize the dynamic level of the user's situation, that is, whether the situation is a static situation or a dynamic situation.
Whether the user's situation is static or dynamic can be inferred from the sound intensity (dB). This is based on the fact that when the user is in a static situation such as sleeping or studying, the intensity of the ambient sounds is usually weak, whereas when the user is in a dynamic situation such as work, the intensity of the ambient sounds is usually strong.
For example, when the sound intensity (dB) around the user is equal to or greater than a predetermined size, the situation can be recognized as a dynamic situation, and when the sound intensity is less than the predetermined size, it can be recognized as a static situation. Illustratively, the sound intensity serving as the reference for distinguishing dynamic and static situations may be 60 dB.
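A minimal sketch of this dynamic-level rule follows. Mapping raw microphone samples to absolute dB SPL would require a calibration the text does not describe, so `dynamic_level` takes an already-measured decibel value, and `sound_intensity_db` only reports a level relative to full scale; both names are ours, not the patent's.

```python
import math

def sound_intensity_db(samples, reference=1.0):
    """RMS level of a sample buffer in dB relative to `reference` (dBFS-style)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12) / reference)

def dynamic_level(db, threshold_db=60.0):
    """Classify a measured sound level: >= 60 dB -> dynamic, otherwise static."""
    return "dynamic" if db >= threshold_db else "static"
```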
FIG. 4 is a conceptual diagram illustrating an example of a process in which a user context aware apparatus according to an embodiment of the present invention classifies and infer a dynamic situation and a static situation (a dynamic level of a user's situation).
Referring to FIG. 4 together with Table 1 above, it can be confirmed that the actually measured decibel (sound intensity) exceeds 60 dB in dynamic situations such as work, drive (personal), drive (plural), and exercise, whereas in static situations such as sleeping, studying, resting (personal), and resting (plural), the actually measured decibel is less than 60 dB.
The
FIG. 5 is a conceptual diagram illustrating an example of a process of recognizing a user's situation by considering a personal situation, a plurality of situations, a dynamic situation, and a static situation according to an embodiment of the present invention .
For example, referring to FIG. 4 and FIG. 5, the situation recognition unit 130 may recognize the user's situation by combining whether speech is included with the dynamic level based on the sound intensity. Referring to FIG. 5 together with Table 1, when the frequency standard deviation of the surrounding environment sound is 150 or less and the sound intensity is 60 dB or less, the situation recognition unit 130 can recognize the user's situation as a personal static situation.
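Combining the two tests yields the four-way classification of FIG. 5. It can be sketched as a pure function of the two measurements, using the illustrative 150 / 60 dB thresholds from the text (function and label names are ours):

```python
def recognize_situation(freq_std, sound_db,
                        std_threshold=150.0, db_threshold=60.0):
    """Map (frequency standard deviation, sound level) to one of four situations."""
    social = "plural" if freq_std > std_threshold else "personal"   # speech present?
    level = "dynamic" if sound_db >= db_threshold else "static"     # loud enough?
    return (social, level)
```

For example, a quiet environment with no speech (spread 100, 40 dB) maps to a personal static situation such as sleeping or studying.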
On the other hand, the situation recognition unit 130 may further consider the time point at which the environmental sound was collected by the data collection unit 110.
Specifically, the
For example, referring to FIG. 3, when the
In addition, the situation recognition unit 130 may consider the user activity information when recognizing the user's situation.
For example, referring to FIG. 4, when the
In this manner, the
In addition, in one embodiment of the present invention, the user situation awareness apparatus may further include a service providing unit 140 that provides a service corresponding to the situation of the user recognized by the situation recognition unit 130.
In addition, the service providing unit 140 may provide recommended music corresponding to the recognized situation of the user.
Hereinafter, a method of recognizing a user's situation according to an embodiment of the present invention will be described. Since this method is performed by the user situation recognition apparatus described above, the same reference numerals are used for components that are the same as or similar to those described above, and redundant description will be simplified or omitted.
6 is a flow diagram of a method for recognizing a user situation according to an embodiment of the present invention.
According to an embodiment of the present invention, a method for recognizing a user's situation may include collecting environmental sounds (S210), analyzing whether the collected environmental sounds include speech (S230), and recognizing the user's situation as either a personal situation or a plurality of situations (S240). In addition, the method may include providing a service corresponding to the situation of the user recognized in step S240 (S250).
The step S210, i.e., the step of collecting the surrounding environment sounds, may be repeated until a preset time elapses, according to a check of whether the predetermined time has elapsed (S220). For example, the data collection unit 110 may keep collecting the environmental sounds until the preset time has elapsed.
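The S210/S220 loop (collect until a preset time elapses) can be sketched as a simple timed accumulation; `read_chunk` is a hypothetical stand-in for one microphone buffer read, and the 5-second default is an assumed value, not from the patent.

```python
import time

def collect_ambient_sound(read_chunk, duration_s=5.0):
    """Accumulate audio chunks until `duration_s` seconds have elapsed (S210/S220)."""
    samples = []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        samples.extend(read_chunk())   # one buffer of ambient-sound samples
    return samples
```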
In step S230, a frequency standard deviation is calculated through frequency analysis of the environmental sounds collected in step S210, and whether speech is included is determined based on the frequency standard deviation. Step S210 may be performed by the data collection unit 110, and step S230 may be performed by the data analysis unit 120.
In step S230, the sound intensity of the environmental sound collected in step S210 may be analyzed. In addition, the step S240 can recognize the dynamic level of the user's situation based on the analyzed sound intensity in step S230. For example, the dynamic level may be either a dynamic situation or a static situation. As another example, the above dynamic levels can be further subdivided.
In step S240, at least one of information related to the time at which the environmental sound was collected in step S210 and the user activity information may be considered. Here, the user activity information may be information on the acceleration of the user context aware apparatus. In addition, the method according to an embodiment of the present invention may further include measuring the acceleration of the user situation recognition apparatus before step S240; for example, step S210 may include measuring the acceleration.
In addition, the step S250 may be performed by the service providing unit 140 described above. The service provided in step S250 may be, for example, a music recommendation service.
The method for recognizing the user's situation as described above may be implemented to operate in an apparatus that is carried by a user and provided with various services, for example, a portable terminal.
In addition, the method of recognizing the user's situation according to an embodiment of the present invention may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable recording medium. In other words, the method may be implemented in the form of a recording medium including computer-executable instructions, such as program modules, executed by a computer. The computer-readable recording medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and configured for the embodiments, or those known and available to those skilled in computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code, such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter. The computer-readable medium may include both volatile and nonvolatile, removable and non-removable media implemented by any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
The present disclosure may also be embodied in the form of a computer program or an application stored on a recording medium for executing the user context awareness method described above. For example, the present application can be implemented in the form of a computer program (application) stored in a recording medium included in a user terminal. In the embodiment of the present invention, the terminal is preferably understood as a broad concept including all kinds of terminals such as a smart phone, a smart pad, and a tablet PC.
It will be understood by those of ordinary skill in the art that the foregoing description of the embodiments is for illustrative purposes and that those skilled in the art can easily modify the invention without departing from the spirit or essential characteristics thereof. It is therefore to be understood that the above-described embodiments are illustrative in all aspects and not restrictive. For example, each component described as a single entity may be distributed and implemented, and components described as being distributed may also be implemented in a combined form.
It is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.
110: Data collection unit 120: Data analysis unit
130: situation recognition unit 140: service providing unit
112: audio input module 114: inertial sensor module
116: information extraction module 122: frequency analysis module
124: sound intensity analysis module 126: activity information generation module
Claims (20)
A data collection unit for collecting environmental sounds around a user;
A data analysis unit for calculating a frequency standard deviation through frequency analysis of the collected environmental sounds, determining that speech is included when the frequency standard deviation exceeds a predetermined size, determining that speech is not included when the frequency standard deviation is equal to or less than the predetermined size, and analyzing whether the sound intensity of the collected environmental sound is equal to or greater than a predetermined size; And
A situation recognition unit that recognizes the user's situation as a plurality of situations in which conversation exists when it is determined that the voice is included, recognizes the user's situation as a personal situation in which the voice or conversation does not exist when the voice is not included, recognizes the dynamic level of the user's situation as a dynamic situation when the sound intensity is equal to or greater than the predetermined size, and recognizes the dynamic level of the user's situation as a static situation when the sound intensity is less than the predetermined size,
Wherein the situation recognition unit recognizes the situation of the user by combining the determination according to whether the voice is included and the determination according to the sound intensity,
If it is determined that the voice is not included and the sound intensity is less than the predetermined size, it is recognized that the user's situation is a personal static situation,
If it is determined that the voice is included and the sound intensity is less than a predetermined size, it is recognized that the user's situation is a plurality of static situations,
If it is determined that the voice is not included and the sound intensity is equal to or larger than the predetermined size, it is recognized that the user's situation is a personal dynamic situation,
And recognizes that the user's situation is a plurality of dynamic situations when the voice is determined to be included and the sound intensity is equal to or greater than the predetermined size.
Wherein the frequency standard deviation is calculated by extracting a representative frequency spectrum of the environmental sound collected by the data collection unit and analyzing the extracted frequency spectrum.
Wherein the situation recognition unit considers a time point at which the environmental sound is collected in the data collection unit in the user's context.
Wherein the context awareness unit considers user activity information in a context of the user.
Wherein the user activity information is information about an acceleration of the user context aware device.
Further comprising a sensor for measuring the acceleration.
Further comprising a service providing unit for providing a service corresponding to a situation of a user recognized by the situation recognition unit.
Wherein the service providing unit provides recommended music corresponding to the situation of the user recognized by the situation recognition unit.
Wherein the service providing unit provides the recommended music corresponding to the emotional state of the user, selected from music files stored in the service providing unit or received from an external server connected through a communication network.
(a) collecting surrounding environmental sounds;
(b) calculating a frequency standard deviation through frequency analysis of the collected environmental sounds, determining that speech is included when the frequency standard deviation exceeds a predetermined size, determining that speech is not included when the frequency standard deviation is equal to or less than the predetermined size, and analyzing whether the sound intensity of the collected environmental sound is equal to or greater than a predetermined size; And
(c) recognizing the user's situation as a plurality of situations in which conversation exists when it is determined that the voice is included, recognizing the user's situation as a personal situation when the voice is not included, recognizing the dynamic level of the user's situation as a dynamic situation when the sound intensity is equal to or greater than the predetermined size, and recognizing the dynamic level of the user's situation as a static situation when the sound intensity is less than the predetermined size,
Wherein the recognizing step recognizes the user's situation by combining the determination according to whether the voice is included and the determination according to the sound intensity,
When it is determined that the voice is not included and the sound intensity is less than the predetermined size, it is recognized that the user's situation is a personal static situation,
If it is determined that the voice is included and the sound intensity is less than a predetermined size, it is recognized that the user's situation is a plurality of static situations,
If it is determined that the voice is not included and the sound intensity is equal to or larger than a predetermined size,
Wherein if the sound is determined to be included and the sound intensity is greater than or equal to a predetermined magnitude, then the user is perceived as being in a plurality of dynamic situations.
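The four-way classification in the claims above can be sketched as a small routine. The numeric thresholds and the label strings are assumed values for illustration only; the claims say only "a predetermined size" and do not fix units or cut-offs.

```python
import statistics

# Assumed thresholds; the patent specifies only "a predetermined size".
FREQ_STD_THRESHOLD = 50.0   # Hz, hypothetical
INTENSITY_THRESHOLD = 60.0  # dB, hypothetical

def classify_context(frequencies, intensity_db):
    """Classify the user's situation from ambient-sound features.

    frequencies: dominant-frequency samples from the collected sound
    intensity_db: measured sound intensity of the environment
    """
    freq_std = statistics.pstdev(frequencies)
    voice_present = freq_std > FREQ_STD_THRESHOLD   # speech -> wide frequency spread
    dynamic = intensity_db >= INTENSITY_THRESHOLD   # loud environment -> dynamic

    plurality = "plural" if voice_present else "personal"
    activity = "dynamic" if dynamic else "static"
    return f"{plurality}-{activity}"
```

A narrow frequency spread in a quiet room yields `"personal-static"`; a wide spread in a loud room yields `"plural-dynamic"`, matching the 2x2 grid of the claim.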
Wherein step (c) further considers at least one of time information on when the environmental sound was collected in step (a) and user activity information.
Wherein the user activity information is information on acceleration of the user context-awareness apparatus,
the method further comprising measuring the acceleration prior to step (c).
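One way the acceleration measurement could refine the sound-based activity label is sketched below. The threshold, the gravity-baseline heuristic, and the `"static"`/`"dynamic"` label strings are assumptions for illustration; the claims state only that acceleration information is considered in step (c).

```python
import math

ACCEL_THRESHOLD = 1.5  # m/s^2 deviation from gravity; hypothetical value

def is_user_moving(ax, ay, az, gravity=9.81):
    """Treat the user as moving when the acceleration magnitude
    deviates noticeably from the resting (gravity-only) baseline."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return abs(magnitude - gravity) > ACCEL_THRESHOLD

def refine_activity(sound_activity, ax, ay, az):
    """Combine the sound-based activity label with accelerometer evidence:
    a moving user in a quiet environment is still in a dynamic situation."""
    if sound_activity == "static" and is_user_moving(ax, ay, az):
        return "dynamic"  # accelerometer overrides a quiet environment
    return sound_activity
```

So a user jogging with earphones in a quiet park would be classified as dynamic even though the sound intensity alone would suggest a static situation.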
(d) providing a service corresponding to the user's situation recognized in step (c).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150071475A KR101774236B1 (en) | 2015-05-22 | 2015-05-22 | Apparatus and method for context-awareness of user |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20160137008A KR20160137008A (en) | 2016-11-30 |
KR101774236B1 true KR101774236B1 (en) | 2017-09-12 |
Family
ID=57707630
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150071475A KR101774236B1 (en) | 2015-05-22 | 2015-05-22 | Apparatus and method for context-awareness of user |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101774236B1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102421487B1 (en) * | 2017-04-24 | 2022-07-15 | 엘지전자 주식회사 | Artificial intelligent device |
KR102104559B1 (en) * | 2017-12-19 | 2020-04-24 | 김양수 | Gateway Platform |
US20190200154A1 (en) * | 2017-12-21 | 2019-06-27 | Facebook, Inc. | Systems and methods for audio-based augmented reality |
KR102635811B1 (en) * | 2018-03-19 | 2024-02-13 | 삼성전자 주식회사 | System and control method of system for processing sound data |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101360215B1 (en) * | 2011-10-11 | 2014-02-10 | 엘지전자 주식회사 | Mobile terminal and operation control method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101165537B1 (en) | User Equipment and method for cogniting user state thereof | |
US20190391999A1 (en) | Methods And Systems For Searching Utilizing Acoustical Context | |
US9159324B2 (en) | Identifying people that are proximate to a mobile device user via social graphs, speech models, and user context | |
US20160379105A1 (en) | Behavior recognition and automation using a mobile device | |
Rossi et al. | AmbientSense: A real-time ambient sound recognition system for smartphones | |
KR101774236B1 (en) | Apparatus and method for context-awareness of user | |
KR102005049B1 (en) | Apparatus and method for providing safety management service based on context aware | |
CN105320726A (en) | Reducing the need for manual start/end-pointing and trigger phrases | |
WO2009062176A2 (en) | Activating applications based on accelerometer data | |
CN103631375B (en) | According to the method and apparatus of the Situation Awareness control oscillation intensity in electronic equipment | |
US11516336B2 (en) | Surface detection for mobile devices | |
CN111081275B (en) | Terminal processing method and device based on sound analysis, storage medium and terminal | |
CN103040477A (en) | Method and system for lie-detection through mobile phone | |
KR101564347B1 (en) | Terminal and method for requesting emergency relief | |
CN109271480B (en) | Voice question searching method and electronic equipment | |
Qin et al. | A context-aware do-not-disturb service for mobile devices | |
KR102396147B1 (en) | Electronic device for performing an operation using voice commands and the method of the same | |
García-Navas et al. | A new system to detect coronavirus social distance violation | |
KR101768692B1 (en) | Electronic display apparatus, method, and computer readable recoding medium | |
Li et al. | UserIntent: Detection of user intent for triggering smartphone sensing applications | |
KR20170094527A (en) | Electronic display apparatus, method, and computer readable recoding medium | |
Solak et al. | Remote control of mechanical rat traps based on vibration and audio sensors | |
CN116416980A (en) | Voice instruction processing method, control system and intelligent sofa | |
KR101882789B1 (en) | Method for calculating activity accuracy of conntextness service | |
CN113973149A (en) | Electronic apparatus, device failure detection method and medium thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
AMND | Amendment | ||
E601 | Decision to refuse application | ||
AMND | Amendment | ||
E902 | Notification of reason for refusal | ||
AMND | Amendment | ||
X701 | Decision to grant (after re-examination) | ||
GRNT | Written decision to grant |