KR101774236B1 - Apparatus and method for context-awareness of user - Google Patents


Info

Publication number
KR101774236B1
Authority
KR
South Korea
Prior art keywords
user
situation
sound
predetermined size
sound intensity
Application number
KR1020150071475A
Other languages
Korean (ko)
Other versions
KR20160137008A (en)
Inventor
권용진
임호성
이준
Original Assignee
한국항공대학교산학협력단
Application filed by 한국항공대학교산학협력단 filed Critical 한국항공대학교산학협력단
Priority to KR1020150071475A
Publication of KR20160137008A
Application granted
Publication of KR101774236B1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services

Abstract

A user situation awareness apparatus is disclosed. The apparatus may include a data collection unit for collecting environmental sounds around the user, a data analysis unit for analyzing whether or not the collected environmental sounds include speech, and a situation recognition unit that, depending on whether speech is present, recognizes the user's situation as either a personal situation or a plurality of situations (that is, a multi-person situation). By recognizing the user's situation based on ambient sound information, the apparatus can achieve a higher recognition rate.

Description

APPARATUS AND METHOD FOR CONTEXT-AWARENESS OF USER

The present invention relates to a device and method for recognizing a user's situation by using surrounding environmental sound information.

In general, context awareness refers to a technique of detecting context, that is, information capable of defining and characterizing an entity in a ubiquitous environment, and making an appropriate judgment about the situation. Context recognition technology, which recognizes the changing state of entities such as users, environments, and devices and selects intelligent services in environments where computing objects are embedded, is one of the most important technologies of ubiquitous computing.

Recently, with the development and spread of terminal technologies such as smartphones, small mobile terminals have become personal necessities. In addition, as smartphones have become commonplace, operators are offering various services to satisfy users' needs.

In particular, smartphone-related companies have recently been trying to recognize users' situations based on context recognition technology and to provide various services tailored to the recognized situation.

As part of this effort, music recommendation services that recommend music to the user based on the current weather, temperature, and time have been developed and deployed. Such services are limited, however, because they recommend music based on external factors rather than on the user's emotional state.

The background of the present invention is disclosed in Korean Patent Publication No. 2010-0018221 (published on Feb. 22, 2010).

The present invention provides a device and method that can more clearly recognize a user's situation based on information about surrounding environmental sounds.

According to a first aspect of the present invention, there is provided a user situation awareness apparatus comprising: a data collection unit for collecting environmental sounds around a user; a data analyzer for analyzing whether or not the collected environmental sounds include speech; and a situation recognition unit for recognizing the user's situation as either a personal situation or a plurality of situations according to the presence or absence of speech.

According to an embodiment of the present invention, the data analyzing unit may calculate a frequency standard deviation through frequency analysis of the environmental sounds collected by the data collecting unit, and may determine whether speech is included based on the frequency standard deviation.

According to an embodiment of the present invention, the frequency standard deviation may be calculated by extracting a representative frequency spectrum of the environmental sound collected by the data collection unit, and analyzing the extracted frequency spectrum.

According to an embodiment of the present invention, the data analyzing unit analyzes the sound intensity of the environmental sound collected by the data collecting unit, and the situation recognizing unit recognizes the dynamic level of the user's situation based on the sound intensity.

According to an embodiment of the present invention, the situation recognition unit can recognize the dynamic level of the user's situation as either a static situation or a dynamic situation.

According to an embodiment of the present invention, the situation recognition unit can recognize the user's situation by combining the situation inferred from the presence or absence of speech with the situation inferred from the sound intensity.

According to an embodiment of the present invention, the situation recognition unit may additionally consider the time at which the environmental sound was collected by the data collection unit when recognizing the user's situation.

According to an embodiment of the present invention, the situation recognition unit may additionally consider user activity information when recognizing the user's situation.

According to one embodiment of the present invention, the user activity information may be information on acceleration of the user context aware device.

According to one embodiment of the present application, the user context awareness apparatus according to the first aspect of the present invention may further comprise a sensor for measuring the acceleration.

According to one embodiment of the present invention, the user context awareness apparatus according to the first aspect of the present invention may further include a service provision unit for providing a service corresponding to a status of the user recognized by the context awareness unit.

According to an embodiment of the present invention, the service providing unit may provide recommended music corresponding to the situation of the user recognized by the situation recognition unit.

According to an embodiment of the present invention, the service providing unit may provide the recommended music corresponding to the emotional state of the user, selected from music files stored in the service providing apparatus or received from an external server connected through a communication network.

According to a second aspect of the present invention, there is provided a method for recognizing a user situation by a user context aware apparatus, comprising the steps of: (a) collecting environmental sounds around the user; (b) analyzing whether or not the collected environmental sound includes speech; and (c) recognizing the user's situation as either a personal situation or a plurality of situations according to whether or not speech is included.

According to an embodiment of the present invention, step (b) may include calculating a frequency standard deviation through frequency analysis of the environmental sound collected in step (a), and determining whether speech is included based on the frequency standard deviation.

According to an embodiment of the present invention, step (b) may analyze the sound intensity of the environmental sound collected in step (a), and step (c) may recognize the dynamic level of the user's situation based on the analyzed sound intensity.

According to an embodiment of the present invention, step (c) may take into consideration at least one of the time at which the environmental sound was collected in step (a) and the user activity information when recognizing the user's situation.

According to an embodiment of the present invention, the user activity information is information on the acceleration of the user context aware device, and the user context awareness method according to the second aspect may further include measuring the acceleration before step (c).

According to an embodiment of the present invention, a method of recognizing a user situation according to a second aspect of the present invention may further include the step of (d) providing a service corresponding to a situation of a user recognized in the step (c).

According to a third aspect of the present invention, there is provided a computer-readable recording medium storing a program for causing a computer to execute the method for recognizing a user situation according to the second aspect of the present invention.

The above-described solutions are merely exemplary and should not be construed as limiting the present disclosure. In addition to the exemplary embodiments described above, further embodiments may be found in the drawings and the detailed description of the invention.

According to the above-described solutions, whether the user's situation is a personal situation or a plurality of situations can be determined by judging whether speech is included in the surrounding environmental sound. This judgment can be made by analyzing the frequency standard deviation of the ambient sound.

Also, based on the sound intensity of the surrounding environmental sound, it is possible to recognize whether the user's situation is a static situation or a dynamic situation.

In addition, the user's situation can be perceived more clearly and specifically by comprehensively considering both the presence or absence of speech in the ambient sound and the intensity of the ambient sound.

In addition, it is possible to determine the user's situation more precisely and accurately by considering not only the information about the surrounding environmental sounds but also the time at which those sounds were collected and the user's activity information.

In addition, the present invention can provide a music service suited to a user's emotional state by recommending music suitable for a recognized user's situation.

It is to be understood, however, that the technical problems addressed by the present disclosure are not limited to those described above, and other technical problems may exist.

FIG. 1 is a block diagram of a user context aware device according to an embodiment of the present invention.
FIG. 2 is a detailed block diagram of a user context aware device according to an embodiment of the present invention.
FIG. 3 is a conceptual diagram illustrating an example of a process in which a user context aware apparatus according to an embodiment of the present invention classifies and infers a personal situation and a plurality of situations.
FIG. 4 is a conceptual diagram illustrating an example of a process in which a user context aware apparatus according to an embodiment of the present invention classifies and infers a dynamic situation and a static situation (the dynamic level of a user's situation).
FIG. 5 is a conceptual diagram illustrating an example of a process of recognizing a user's situation by jointly considering a personal situation, a plurality of situations, a dynamic situation, and a static situation according to an embodiment of the present invention.
FIG. 6 is a flow diagram of a method for recognizing a user situation according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. It should be understood, however, that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In the drawings, the same reference numbers are used throughout the specification to refer to the same or like parts.

Throughout this specification, when a part is referred to as being "connected" to another part, this includes not only the case where it is "directly connected" but also the case where it is "electrically connected" with another element interposed in between.

Throughout this specification, when a member is "on" another member, this includes not only the case where the member is in contact with the other member but also the case where another member exists between the two members.

Throughout this specification, when a part is described as "including" an element, this means that the part may further include other elements rather than excluding them, unless specifically stated otherwise.

The terms "about "," substantially ", etc. used to the extent that they are used throughout the specification are intended to be taken to mean the approximation of the manufacturing and material tolerances inherent in the stated sense, Accurate or absolute numbers are used to help prevent unauthorized exploitation by unauthorized intruders of the referenced disclosure. The word " step (or step) "or" step "used to the extent that it is used throughout the specification does not mean" step for.

In this specification, the term "unit" includes a unit realized by hardware, a unit realized by software, and a unit realized by both. Further, one unit may be implemented using two or more pieces of hardware, and two or more units may be implemented by one piece of hardware. Some of the operations or functions described herein as being performed by a terminal, apparatus, or device may instead be performed by a server connected to that terminal, apparatus, or device. Likewise, some of the operations or functions described as being performed by the server may be performed by a terminal, apparatus, or device connected to the server. Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a block diagram of a user context aware device according to one embodiment of the present invention, and FIG. 2 is a detailed block diagram of a user context aware device according to an embodiment of the present invention.

Referring to FIG. 1, the user context awareness apparatus may include a data collection unit 110, a data analysis unit 120, and a situation recognition unit 130. In addition, the user context awareness apparatus may include a service providing unit 140.

The data collecting unit 110 collects surrounding environmental sounds. For example, the data collection unit 110 may collect external environment sounds for a predetermined time, for example, between 15 seconds and 25 seconds. For this, the data collection unit 110 may include an audio input module 112.

In addition, the data collection unit 110 may collect at least one of time related information and various sensing signals. For this purpose, the data collection unit 110 may include at least one of the inertial sensor module 114 and the information extraction module 116.

The audio input module 112 is for capturing environmental sounds and may be, for example, a microphone. As a specific example, the audio input module 112 can buffer and collect ambient sound for a predetermined time using a buffer unit (not shown). The data collection unit 110 may provide the data collected by the audio input module 112 to the data analysis unit 120.
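
By way of illustration, the buffered collection described above might look like the following sketch on a desktop prototype. The `sounddevice` library, the 16 kHz sample rate, and the 20-second window are assumptions standing in for the audio input module 112, which the patent does not specify at this level of detail.

```python
# A minimal sketch, assuming the third-party `sounddevice` library is
# available; it buffers ambient sound the way module 112 is described to.
import sounddevice as sd

SAMPLE_RATE = 16_000   # Hz; assumed value, not specified in the patent
DURATION_S = 20        # within the 15 to 25 second window the text mentions

def collect_environmental_sound():
    """Record ambient sound for a predetermined time and return the samples."""
    samples = sd.rec(int(DURATION_S * SAMPLE_RATE),
                     samplerate=SAMPLE_RATE, channels=1, dtype="float32")
    sd.wait()                # block until the buffer is full
    return samples.ravel()   # 1-D array of mono samples
```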

In embodiments of the present invention, the environmental sound (soundscape) may refer to noise, speech, and the like generated in the vicinity of the user.

The inertial sensor module 114 may use a micro electro mechanical system (MEMS) type sensor. Illustratively, inertial sensor module 114 may be a three-axis acceleration sensor or a three-axis gyro sensor.

The three-axis acceleration sensor measures the dynamic force of the user such as acceleration, vibration, and shock, and can output a sensing signal corresponding to the motion state of the user. Specifically, the three-axis acceleration sensor can sense the movement of the user in three axial directions, that is, the acceleration and deceleration states, and output a sensing signal corresponding thereto. Also, the three-axis gyro sensor senses the user's height, rotation, tilt, and the like, detects the movement direction and acceleration of the user, and outputs a corresponding sensing signal.

The data collecting unit 110 may collect a sensing signal, that is, an acceleration signal, output from the inertial sensor module 114 such as a three-axis gyro sensor or a three-axis acceleration sensor, and provide the sensing signal to the data analyzer 120.

The information extraction module 116 extracts time-related information, such as the season, date, day of the week, morning or afternoon, hour, minute, and second, from the device in which the user context awareness apparatus of the present invention is included. The data collection unit 110 may collect the time-related information extracted by the information extraction module 116 and provide it to the data analysis unit 120.

For reference, a module may mean a functional or structural combination of hardware for carrying out the technical idea of the present invention, or software for driving that hardware. For example, a module may mean a logical unit consisting of predetermined code and the hardware resources for executing that code; it does not necessarily mean physically connected code or a single kind of hardware, as can readily be inferred by an average expert in the art.

While the user is receiving a specific service, the data collecting unit 110 may use the audio input module 112, the inertial sensor module 114, the information extraction module 116, and the like to collect environmental sounds, sensing information, time-related information, and so on, and may provide them to the data analyzing unit 120.

Meanwhile, the data analysis unit 120 may generate information for determining the user's situation based on the data collected by the data collection unit 110.

The data analysis unit 120 may analyze whether or not the environmental sound collected by the data collection unit 110 includes speech.

Since the frequency variation of speech is large, it is possible to infer whether speech is included by analyzing the frequency standard deviation of the environmental sound. That is, the data analyzer 120 may calculate the frequency standard deviation through frequency analysis of the environmental sound collected by the data collector 110, and may determine whether speech is included based on the calculated frequency standard deviation.

Specifically, since speech spans a wide frequency range, roughly 200 Hz to 3,500 Hz, the frequency standard deviation of environmental sounds that contain speech tends to be large. Accordingly, whether speech is included can be determined according to whether the frequency standard deviation of the environmental sound exceeds a predetermined size.

For example, when the frequency standard deviation of the environmental sound exceeds 150, the data analysis unit 120 may determine that the environmental sound is highly likely to include speech, and the situation recognition unit 130, described later, may recognize that the user's situation corresponds to a plurality of situations in which conversation between people exists. Conversely, when the frequency standard deviation of the environmental sound is 150 or less, the data analysis unit 120 may determine that the environmental sound is highly likely not to include speech, and the situation recognition unit 130 may recognize that the user's situation corresponds to a personal situation in which no speech or conversation exists.

The frequency standard deviation of the environmental sound may be calculated by extracting a representative frequency spectrum of the environmental sound collected by the data collection unit 110 and analyzing the extracted frequency spectrum.
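
As a concrete illustration of this analysis, the sketch below computes one plausible reading of the frequency standard deviation: the magnitude-weighted standard deviation (spectral spread) of the FFT spectrum. The patent does not define the exact statistic, so the formula and the helper names are assumptions; only the threshold of 150 comes from the text.

```python
# A minimal sketch, assuming NumPy; "frequency standard deviation" is read
# here as the spectral spread of the representative frequency spectrum.
import numpy as np

SPEECH_STD_THRESHOLD_HZ = 150.0   # threshold used in the text and Table 1

def frequency_standard_deviation(samples, sample_rate):
    """Magnitude-weighted standard deviation of the frequency spectrum, in Hz."""
    spectrum = np.abs(np.fft.rfft(samples))               # representative spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0.0:                                      # silent buffer guard
        return 0.0
    weights = spectrum / total                            # normalized magnitudes
    centroid = np.sum(freqs * weights)                    # mean frequency
    spread = np.sqrt(np.sum(weights * (freqs - centroid) ** 2))
    return float(spread)

def contains_speech(samples, sample_rate):
    """Speech is inferred when the spread exceeds the 150 threshold."""
    return frequency_standard_deviation(samples, sample_rate) > SPEECH_STD_THRESHOLD_HZ
```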

The expected value of the decibel (sound intensity) of the environmental sound collected according to the user's activity status, the actually measured decibel, the expected value of the frequency, the actually calculated frequency and the frequency standard deviation are shown in Table 1 below.

[Table 1 appears as an image in the original publication. For each user activity state it lists the expected decibel level (sound intensity), the actually measured decibel level, the expected frequency, the actually calculated frequency, and the calculated frequency standard deviation.]

Table 1 shows that for drive (plural) and rest (plural), which correspond to a plurality of situations, the actually calculated frequency standard deviation exceeds 150, whereas for exercise, sleeping, studying, and rest (individual), the actually calculated frequency standard deviation does not exceed 150.

In addition, the data analysis unit 120 may include a frequency analysis module 122, as shown in FIG. The frequency analysis module 122 may calculate the frequency and frequency standard deviation through analyzing the surrounding environment sounds, and then provide the frequency and frequency standard deviation to the situation recognition unit 130.

The data analyzer 120 may measure or analyze the sound intensity of the environmental sound collected by the data collector 110. For this, the data analysis unit 120 may include a sound intensity analysis module 124, as shown in FIG. 2. The sound intensity analysis module 124 may measure the sound intensity (e.g., in decibels) of the ambient sound collected by the data collection unit 110 over the predetermined period and then provide it to the situation recognition unit 130.

The frequency analysis module 122 and the sound intensity analysis module 124 associated with the environmental sound analysis can perform analysis on the collected ambient environment sounds (audio data) using Fast Fourier Transform (FFT).
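
The decibel measurement performed by the sound intensity analysis module 124 might be sketched as follows. Absolute dB SPL would require a calibrated microphone, so the reference constant below is an assumed calibration value rather than anything specified in the patent.

```python
# A minimal sketch, assuming NumPy and float samples from the capture helper.
import numpy as np

def sound_intensity_db(samples, reference_rms=2e-5):
    """Sound intensity of the buffered samples in decibels.

    reference_rms is an assumed calibration constant mapping sample RMS to
    the 0 dB point; without calibration this yields relative dB only.
    """
    rms = np.sqrt(np.mean(np.square(samples)))
    return float(20.0 * np.log10(max(rms, 1e-12) / reference_rms))
```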

The data analyzer 120 may analyze the acceleration signal collected by the data collector 110. For this, the data analysis unit 120 may include an activity information generation module 126, as shown in FIG.

Specifically, the activity information generation module 126 may generate the user's activity information from the acceleration component of the sensing signal sensed by the inertial sensor module 114, and then provide the activity information to the situation recognition unit 130.
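
A rough sketch of what the activity information generation module 126 could compute from the three-axis acceleration signal; the gravity subtraction and the numeric threshold are illustrative assumptions, since the patent only says the acceleration is compared with a predetermined value.

```python
# A minimal sketch, assuming NumPy and (N, 3) accelerometer samples in m/s^2.
import numpy as np

def user_is_moving(accel_xyz, gravity=9.81, threshold=1.5):
    """Crude activity indicator from three-axis accelerometer samples."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)   # per-sample magnitude
    dynamic = np.abs(magnitude - gravity)           # remove the gravity bias
    return bool(dynamic.mean() > threshold)         # True => user is active
```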

Meanwhile, the situation recognition unit 130 recognizes the user's situation based on the data analysis unit 120's analysis of the data collected by the data collection unit 110.

The situation recognition unit 130 can recognize the user's situation as either a personal situation or a plurality of situations depending on whether the environmental sound analyzed by the data analysis unit 120 is determined to include speech.

As described above, the personal situation and the plurality of situations are distinguished according to whether the environmental sounds collected through the data collection unit 110 (for example, sound data input to the audio input module 112, such as a microphone) include speech. Specifically, in a personal situation there is a relatively high possibility that speech is not included in the collected environmental sounds, whereas in a plurality of situations, in which two or more persons are present together, the possibility of conversation is relatively high. The inventors of the present invention paid attention to this point and made it possible to infer the personal situation and the plurality of situations by discriminating whether speech is included. As described above, this discrimination can be made through analysis of the frequency standard deviation.

FIG. 3 is a conceptual diagram illustrating an example of a process in which a user context aware apparatus according to an exemplary embodiment of the present invention classifies and infers a personal situation and a plurality of situations.

Referring to FIG. 3 together with Table 1, in situations such as commuting, work, drive (individual), exercise, sleeping, studying, and rest (individual), speech is unlikely to be included in the collected environmental sound, so these can be recognized as personal situations. Conversely, in situations where people usually converse with one another, such as drive (plural) and rest (plural), speech is highly likely to be included, so these can be recognized as a plurality of situations.

In addition, the situation recognition unit 130 can recognize the dynamic level of the user's situation based on the sound intensity of the environmental sound analyzed (measured) by the data analysis unit 120. Here, the dynamic level of the user's situation can be either a static situation or a dynamic situation. Alternatively, the dynamic level may be more granular and may be any of a static situation, a dynamic situation, and a mixed situation in which static and dynamic elements are combined.

Whether the user's situation is static or dynamic can be inferred from the sound intensity (dB). This is based on the fact that when the user is in a static situation such as sleeping or studying, the intensity of the ambient sounds generated around the user is usually weak, whereas when the user is in a dynamic situation such as commuting or work, the ambient sound intensity is usually strong.

For example, when the sound intensity (dB) around the user is equal to or greater than a predetermined size, a dynamic situation can be recognized, and when it is less than the predetermined size, a static situation can be recognized. Illustratively, the sound intensity serving as the reference for distinguishing dynamic from static situations may be 60 dB.

FIG. 4 is a conceptual diagram illustrating an example of a process in which a user context aware apparatus according to an embodiment of the present invention classifies and infers a dynamic situation and a static situation (the dynamic level of a user's situation).

Referring to FIG. 4 together with Table 1 above, it can be confirmed that the actually measured decibel level (sound intensity) exceeds 60 dB for dynamic situations such as commuting, work, drive (individual), drive (plural), and exercise, while for sleeping, studying, rest (individual), and rest (plural) in the static situation, the actually measured decibel level is less than 60 dB.

The situation recognition unit 130 can recognize the user's situation by combining the situation inferred from the presence or absence of speech with the situation inferred from the sound intensity.

FIG. 5 is a conceptual diagram illustrating an example of a process of recognizing a user's situation by jointly considering a personal situation, a plurality of situations, a dynamic situation, and a static situation according to an embodiment of the present invention.

For example, referring to FIGS. 4 and 5, the situation recognition unit 130 may classify situations such as drive, moving, work, and exercise as dynamic situations based on the sound intensity, which is one piece of environmental sound information, and classify sleeping, studying, rest (individual), and rest (plural) as static situations; within the dynamic and static groups, it may then recognize whether each situation corresponds to a personal situation or a plurality of situations according to the frequency standard deviation.

In other words, the situation recognition unit 130 infers whether the user's situation is a personal situation or a plurality of situations from whether speech is included in the surrounding environmental sounds (the frequency standard deviation), and infers the dynamic level of the user's situation from the sound intensity; by combining the two, it can comprehensively recognize the user's situation from multiple aspects.

Referring to FIG. 5 and Table 1, when the frequency standard deviation of the surrounding environmental sound is 150 or less and the sound intensity is 60 dB or less, the situation recognition unit 130 may recognize that the user's situation is a personal static situation (for example, sleeping, studying, rest (individual), etc.). When the frequency standard deviation exceeds 150 and the sound intensity is 60 dB or less, the situation recognition unit 130 may recognize that the user's situation is a plural static situation (for example, rest (plural), etc.). When the frequency standard deviation is 150 or less and the sound intensity exceeds 60 dB, the situation recognition unit 130 may recognize that the user's situation is a personal dynamic situation (for example, drive (individual), exercise, etc.). And when the frequency standard deviation exceeds 150 and the sound intensity exceeds 60 dB, the situation recognition unit 130 may recognize that the user's situation is a plural dynamic situation (for example, drive (plural), etc.).
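
The four-way classification just described reduces to two threshold tests. A minimal sketch follows, using the 150 and 60 dB figures from the text; the label strings are illustrative, not terminology from the patent.

```python
def recognize_situation(freq_std_hz, intensity_db):
    """Combine speech presence and sound intensity into the four cases of FIG. 5."""
    speech = freq_std_hz > 150.0     # plurality of situations vs. personal
    dynamic = intensity_db > 60.0    # dynamic vs. static
    if speech:
        return "plural-dynamic" if dynamic else "plural-static"
        # e.g., drive (plural) / rest (plural)
    return "personal-dynamic" if dynamic else "personal-static"
    # e.g., commuting, work, exercise / sleeping, studying, rest (individual)
```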

On the other hand, the situation recognition unit 130 can further refine the user's situation based on time-related information, user activity information, and the like.

Specifically, the situation recognition unit 130 may consider the time at which the environmental sound was collected by the data collection unit 110 when recognizing the user's situation. Information related to the collection time of the ambient sound may be extracted by the information extraction module 116 described above.

For example, referring to FIG. 3, when the situation recognition unit 130 has recognized that the user's situation corresponds to a personal situation according to whether speech is included in the ambient sound, and the time at which the sound was collected is before 10:00 am, the situation recognition unit 130 may determine that, among the various personal situations, the user's situation is highly likely to correspond to commuting. Likewise, referring to FIG. 4, when the situation recognition unit 130 has recognized that the user's situation corresponds to a personal static situation according to whether speech is included in the ambient sound and the magnitude of the sound intensity, and the time at which the sound was collected is after 12:00 at night, the situation recognition unit 130 may determine that, among sleeping, studying, and rest (individual), the user's situation is highly likely to correspond to sleeping.
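
This time-based refinement can be expressed as a small rule table. The cut-off hours follow the examples in the text, but reading the second cut-off as the hours after midnight and picking a single most-likely activity per rule are illustrative assumptions:

```python
from datetime import datetime

def refine_with_time(situation, collected_at: datetime):
    """Narrow a recognized situation using the sound-collection time."""
    hour = collected_at.hour
    if situation == "personal-dynamic" and hour < 10:
        return "commuting"   # text: collection before 10:00 am suggests commuting
    if situation == "personal-static" and hour < 6:
        return "sleeping"    # text: late-night collection suggests sleeping
    return situation
```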

In addition, the situation recognition unit 130 may consider user activity information when recognizing the user's situation. Such user activity information may be information on the acceleration of the user context awareness apparatus. As described above, the inertial sensor module 114 of the data collection unit 110 may include a sensor for measuring the acceleration.

For example, referring to FIG. 4, when the situation recognition unit 130 has recognized that the user's situation corresponds to a personal dynamic situation according to whether speech is included in the ambient sound and the magnitude of the sound intensity, and the acceleration measured by the user context awareness apparatus is greater than a predetermined value, the situation recognition unit 130 may determine that, among commuting, work, drive (individual), and exercise, the user's situation is highly likely to correspond to exercise.

In this manner, the situation recognition unit 130 determines the user's situation based on information about the surrounding environmental sounds and, if necessary, additionally considers one or more of the time-related information of the collected sounds and the acceleration information of the user context awareness apparatus, so that the user's situation can be recognized more clearly.

In addition, in one embodiment of the present invention, the situation recognition unit 130 may store and manage, in a memory (not shown), statistically determined ranges of sound intensity and frequency standard deviation for each user situation, and may determine the user's situation by comparing these ranges with the values analyzed (measured) by the data analysis unit 120.

The service providing unit 140 provides a service corresponding to the user's situation recognized by the situation recognition unit 130. For example, the service providing unit 140 may provide the user with recommended music matching the emotional state associated with the recognized situation. As a specific example, the service providing unit 140 may search one or more music files stored in a memory (not shown) and provide music suited to the user's situation, or it may transmit the recognized situation to an external music service server (not shown) through a communication network, receive a music file from that server by streaming, and provide it to the user.
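
A toy sketch of the selection logic in the service providing unit 140 follows; the mood tags and the library format are entirely hypothetical, since the patent only says that music corresponding to the recognized situation is chosen from stored files or streamed from an external server.

```python
# Hypothetical situation-to-mood mapping; none of these tags come from the patent.
MOOD_BY_SITUATION = {
    "personal-static": "calm",       # sleeping, studying, rest (individual)
    "personal-dynamic": "upbeat",    # commuting, work, exercise
    "plural-static": "mellow",       # rest (plural)
    "plural-dynamic": "energetic",   # drive (plural)
}

def recommend_music(situation, local_library):
    """Pick a locally stored track whose mood tag matches the situation."""
    mood = MOOD_BY_SITUATION.get(situation, "neutral")
    for title, tags in local_library:   # local_library: list of (title, tag-set)
        if mood in tags:
            return title
    return None  # nothing suitable locally: fall back to the external server
```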

Hereinafter, a method of recognizing a user's situation according to an embodiment of the present invention will be described. Since this method is performed by the user context awareness apparatus according to an embodiment of the present invention, which has the same or similar configuration as described above, the same reference numerals are used and redundant description is simplified or omitted.

FIG. 6 is a flow diagram of a method for recognizing a user situation according to an embodiment of the present invention.

According to an embodiment of the present invention, a method for recognizing a user's situation includes collecting surrounding environmental sounds (S210), analyzing whether the collected environmental sounds include speech (S230), and recognizing the user's situation as a personal situation or a plurality of situations (S240). In addition, the method may include providing a service corresponding to the situation recognized in step S240 (S250).

Step S210, collecting the surrounding environmental sounds, may be repeated until a preset time has elapsed (S220). For example, the data collection unit 110 may collect external environmental sounds for a predetermined time, for example, between 15 and 25 seconds.

In step S230, a frequency standard deviation is calculated through frequency analysis of the environmental sounds collected in step S210, and whether speech is included is determined based on the frequency standard deviation. Step S210 may be performed by the data collecting unit 110, and step S230 may be performed by the data analyzing unit 120.

In step S230, the sound intensity of the environmental sound collected in step S210 may be analyzed. In addition, the step S240 can recognize the dynamic level of the user's situation based on the analyzed sound intensity in step S230. For example, the dynamic level may be either a dynamic situation or a static situation. As another example, the above dynamic levels can be further subdivided.

In step S240, at least one of the time at which the environmental sound was collected in step S210 and the user activity information may additionally be considered. Here, the user activity information may be information on the acceleration of the user context awareness apparatus. In addition, the method may further include measuring the acceleration of the user context awareness apparatus before step S240. For example, step S210 may include measuring the acceleration.

In addition, step S250 may be performed by the service providing unit 140 described above. The service provided in step S250 may be, for example, a music recommendation service.
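
Putting the pieces together, one pass of the method of FIG. 6 might be wired up as below, composing the earlier sketches (all of which rest on the assumptions already noted); the comments map each call to steps S210 through S250.

```python
from datetime import datetime

def context_awareness_cycle():
    """One pass of S210 to S250 using the sketch functions defined earlier."""
    samples = collect_environmental_sound()                          # S210/S220
    freq_std = frequency_standard_deviation(samples, SAMPLE_RATE)    # S230
    intensity = sound_intensity_db(samples)                          # S230
    situation = recognize_situation(freq_std, intensity)             # S240
    situation = refine_with_time(situation, datetime.now())          # S240 (time)
    library = [("Nocturne No. 2", {"calm"}), ("Power Run", {"upbeat"})]
    return situation, recommend_music(situation, library)            # S250
```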

The method for recognizing the user's situation as described above may be implemented to operate in an apparatus that is carried by a user and provided with various services, for example, a portable terminal.

In addition, the method of recognizing the user's situation according to an embodiment of the present invention may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable recording medium. In other words, the method may be implemented in the form of a recording medium including computer-executable instructions, such as program modules, executed by a computer. The computer-readable recording medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and configured for the embodiments, or may be known and available to those skilled in computer software. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine-language code, such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like. In addition, the computer-readable medium may include both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

The present disclosure may also be embodied in the form of a computer program or an application stored on a recording medium for executing the user context awareness method described above. For example, the present application can be implemented in the form of a computer program (application) stored in a recording medium included in a user terminal. In the embodiment of the present invention, the terminal is preferably understood as a broad concept including all kinds of terminals such as a smart phone, a smart pad, and a tablet PC.

It will be understood by those of ordinary skill in the art that the foregoing description of the embodiments is for illustrative purposes and that those skilled in the art can easily modify the invention without departing from the spirit or essential characteristics thereof. It is therefore to be understood that the above-described embodiments are illustrative in all aspects and not restrictive. For example, each component described as a single entity may be distributed and implemented, and components described as being distributed may also be implemented in a combined form.

It is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

110: Data collection unit 120: Data analysis unit
130: situation recognition unit 140: service providing unit
112: audio input module 114: inertial sensor module
116: information extraction module 122: frequency analysis module
124: sound intensity analysis module 126: activity information generation module

Claims (20)

1. A user context awareness apparatus, comprising:
a data collection unit for collecting environmental sounds around a user;
a data analysis unit for calculating a frequency standard deviation through frequency analysis of the collected environmental sounds, analyzing whether speech is included based on the frequency standard deviation (speech is determined to be included when the frequency standard deviation exceeds a predetermined size, and not included when it is equal to or less than the predetermined size), and analyzing whether the sound intensity of the collected environmental sound is equal to or greater than a predetermined size; and
a situation recognition unit for recognizing the user's situation as a plurality of situations in which conversation exists when it is determined that speech is included, recognizing the user's situation as a personal situation in which no speech or conversation exists when speech is not included, recognizing the dynamic level of the user's situation as a dynamic situation when the sound intensity is equal to or greater than the predetermined size, and recognizing the dynamic level as a static situation when the sound intensity is less than the predetermined size,
wherein the situation recognition unit recognizes the user's situation by combining the situation according to whether speech is included with the situation according to the sound intensity, such that:
when it is determined that speech is not included and the sound intensity is less than the predetermined size, the user's situation is recognized as a personal static situation;
when it is determined that speech is included and the sound intensity is less than the predetermined size, the user's situation is recognized as a plural static situation;
when it is determined that speech is not included and the sound intensity is equal to or greater than the predetermined size, the user's situation is recognized as a personal dynamic situation; and
when it is determined that speech is included and the sound intensity is equal to or greater than the predetermined size, the user's situation is recognized as a plural dynamic situation.
2. (Deleted)

3. The apparatus according to claim 1,
wherein the frequency standard deviation is calculated by extracting a representative frequency spectrum of the environmental sound collected by the data collection unit and analyzing the extracted frequency spectrum.
4. (Deleted)

5. (Deleted)

6. (Deleted)

7. The apparatus according to claim 1,
wherein the situation recognition unit considers the time at which the environmental sound was collected by the data collection unit when recognizing the user's situation.
8. The apparatus according to claim 1,
wherein the situation recognition unit considers user activity information when recognizing the user's situation.
9. The apparatus of claim 8,
wherein the user activity information is information about an acceleration of the user context awareness apparatus.
10. The apparatus of claim 9,
further comprising a sensor for measuring the acceleration.
11. The apparatus according to claim 1,
further comprising a service providing unit for providing a service corresponding to the situation of the user recognized by the situation recognition unit.
12. The apparatus of claim 11,
wherein the service providing unit provides recommended music corresponding to the situation of the user recognized by the situation recognition unit.
13. The apparatus of claim 12,
wherein the service providing unit provides the recommended music corresponding to the emotional state of the user, selected from music files stored in the service providing unit or received from an external server connected through a communication network.
14. A method of recognizing a user situation by a user context awareness apparatus, comprising:
(a) collecting surrounding environmental sounds;
(b) calculating a frequency standard deviation through frequency analysis of the collected environmental sounds, analyzing whether speech is included based on the frequency standard deviation (speech is determined to be included when the frequency standard deviation exceeds a predetermined size, and not included when it is equal to or less than the predetermined size), and analyzing whether the sound intensity of the collected environmental sound is equal to or greater than a predetermined size; and
(c) recognizing the user's situation as a plurality of situations in which conversation exists when it is determined that speech is included, recognizing the user's situation as a personal situation in which no speech or conversation exists when speech is not included, recognizing the dynamic level of the user's situation as a dynamic situation when the sound intensity is equal to or greater than the predetermined size, and recognizing the dynamic level as a static situation when the sound intensity is less than the predetermined size,
wherein the recognizing step recognizes the user's situation by combining the situation according to whether speech is included with the situation according to the sound intensity, such that:
when it is determined that speech is not included and the sound intensity is less than the predetermined size, the user's situation is recognized as a personal static situation;
when it is determined that speech is included and the sound intensity is less than the predetermined size, the user's situation is recognized as a plural static situation;
when it is determined that speech is not included and the sound intensity is equal to or greater than the predetermined size, the user's situation is recognized as a personal dynamic situation; and
when it is determined that speech is included and the sound intensity is equal to or greater than the predetermined size, the user's situation is recognized as a plural dynamic situation.
15. (Deleted)

16. (Deleted)

17. The method of claim 14,
wherein step (c) considers at least one of the time at which the environmental sound was collected in step (a) and the user activity information.
18. The method of claim 17,
wherein the user activity information is information on the acceleration of the user context awareness apparatus,
further comprising, prior to step (c), measuring the acceleration.
19. The method of claim 14, further comprising:
(d) providing a service corresponding to the situation of the user recognized in step (c).
20. A computer-readable recording medium on which a program for causing a computer to execute the method of claim 14 is recorded.
KR1020150071475A 2015-05-22 2015-05-22 Apparatus and method for context-awareness of user KR101774236B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150071475A KR101774236B1 (en) 2015-05-22 2015-05-22 Apparatus and method for context-awareness of user

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150071475A KR101774236B1 (en) 2015-05-22 2015-05-22 Apparatus and method for context-awareness of user

Publications (2)

Publication Number Publication Date
KR20160137008A KR20160137008A (en) 2016-11-30
KR101774236B1 (en) 2017-09-12

Family

ID=57707630

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150071475A KR101774236B1 (en) 2015-05-22 2015-05-22 Apparatus and method for context-awareness of user

Country Status (1)

Country Link
KR (1) KR101774236B1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102421487B1 (en) * 2017-04-24 2022-07-15 엘지전자 주식회사 Artificial intelligent device
KR102104559B1 (en) * 2017-12-19 2020-04-24 김양수 Gateway Platform
US20190200154A1 (en) * 2017-12-21 2019-06-27 Facebook, Inc. Systems and methods for audio-based augmented reality
KR102635811B1 (en) * 2018-03-19 2024-02-13 삼성전자 주식회사 System and control method of system for processing sound data

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101360215B1 (en) * 2011-10-11 2014-02-10 엘지전자 주식회사 Mobile terminal and operation control method thereof

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101360215B1 (en) * 2011-10-11 2014-02-10 엘지전자 주식회사 Mobile terminal and operation control method thereof

Also Published As

Publication number Publication date
KR20160137008A (en) 2016-11-30

Similar Documents

Publication Publication Date Title
KR101165537B1 (en) User Equipment and method for cogniting user state thereof
US20190391999A1 (en) Methods And Systems For Searching Utilizing Acoustical Context
US9159324B2 (en) Identifying people that are proximate to a mobile device user via social graphs, speech models, and user context
US20160379105A1 (en) Behavior recognition and automation using a mobile device
Rossi et al. AmbientSense: A real-time ambient sound recognition system for smartphones
KR101774236B1 (en) Apparatus and method for context-awareness of user
KR102005049B1 (en) Apparatus and method for providing safety management service based on context aware
CN105320726A (en) Reducing the need for manual start/end-pointing and trigger phrases
WO2009062176A2 (en) Activating applications based on accelerometer data
CN103631375B (en) According to the method and apparatus of the Situation Awareness control oscillation intensity in electronic equipment
US11516336B2 (en) Surface detection for mobile devices
CN111081275B (en) Terminal processing method and device based on sound analysis, storage medium and terminal
CN103040477A (en) Method and system for lie-detection through mobile phone
KR101564347B1 (en) Terminal and method for requesting emergency relief
CN109271480B (en) Voice question searching method and electronic equipment
Qin et al. A context-aware do-not-disturb service for mobile devices
KR102396147B1 (en) Electronic device for performing an operation using voice commands and the method of the same
García-Navas et al. A new system to detect coronavirus social distance violation
KR101768692B1 (en) Electronic display apparatus, method, and computer readable recoding medium
Li et al. UserIntent: Detection of user intent for triggering smartphone sensing applications
KR20170094527A (en) Electronic display apparatus, method, and computer readable recoding medium
Solak et al. Remote control of mechanical rat traps based on vibration and audio sensors
CN116416980A (en) Voice instruction processing method, control system and intelligent sofa
KR101882789B1 (en) Method for calculating activity accuracy of conntextness service
CN113973149A (en) Electronic apparatus, device failure detection method and medium thereof

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
AMND Amendment
E601 Decision to refuse application
AMND Amendment
E902 Notification of reason for refusal
AMND Amendment
X701 Decision to grant (after re-examination)
GRNT Written decision to grant