KR101799874B1 - Situation judgment system and method based on voice/sound analysis - Google Patents
- Publication number
- KR101799874B1 (application KR1020160020348A)
- Authority
- KR
- South Korea
- Prior art keywords
- voice
- speaker
- module
- ambient sound
- sound
- Prior art date
Classifications
- G—PHYSICS; G10—MUSICAL INSTRUMENTS; ACOUSTICS; G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING; G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/04—Segmentation; Word boundary detection
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/20—Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
Abstract
A voice/sound analysis based situation determination system and method are disclosed. The system includes a speaker mobile terminal that transmits the speaker's voice and the surrounding sound; a call receiving module that receives the voice and ambient sound from the speaker mobile terminal; an age information inference module that infers the speaker's age information by analyzing the received voice and ambient sound; a gender information inference module that infers the speaker's gender by analyzing the received voice and ambient sound; a psychological state inference module that infers the speaker's psychological state by analyzing the received voice and ambient sound; and a truth/false inference module that infers the truth or falsehood of the speaker by analyzing the received voice and ambient sound. A user terminal displays the age information, gender information, psychological state, and truth/false information inferred by the respective modules.
Description
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a situation determination system and method, and more particularly, to a system and method for determining a situation based on voice / sound analysis.
A large number of emergency calls and crime reports are received every year.

However, false reports account for a significant proportion of these calls.

Because calls arrive every few seconds, the call taker must judge within a very short time whether a report is false and reach a quick decision early on.

The call taker must not miss a genuine call while being held up for a long time by a false one.

On the other hand, when a report is genuine, the caller is often flustered and fails to convey the situation properly. In this case, it is necessary to judge quickly and accurately not only the authenticity of the report but also its urgency from the caller's voice and the surrounding sound, while grasping and inferring as much information as possible about the accident scene or the caller within a short time.

However, it is not easy to grasp a large amount of information accurately and quickly within a short time, and the accuracy and objectivity of such judgments are not consistent.
It is an object of the present invention to provide a voice / sound analysis based situation determination system.
It is another object of the present invention to provide a method for determining a situation based on voice / sound analysis.
According to an aspect of the present invention, a voice/sound analysis based situation determination system includes: a speaker mobile terminal for transmitting a speaker's voice and the surrounding sound; a call receiving module for receiving the voice and ambient sound from the speaker mobile terminal; an age information inference module for inferring the speaker's age information by analyzing the received voice and ambient sound; a gender information inference module for inferring the speaker's gender by analyzing the received voice and ambient sound; a psychological state inference module for inferring the speaker's psychological state by analyzing the received voice and ambient sound; and a truth/false inference module for inferring the truth or falsehood of the speaker by analyzing the received voice and ambient sound. The system may further include a user terminal that displays the age information, gender information, psychological state, and truth/false information inferred by the respective modules.
In this case, the system may further include a global positioning system (GPS) remote control module that, when the call receiving module receives the speaker's voice and ambient sound from the speaker mobile terminal, turns on the GPS function of the speaker mobile terminal through the corresponding carrier server.
According to another aspect of the present invention, a voice/sound analysis based situation determination method comprises the steps of: transmitting a speaker's voice and the surrounding sound; receiving the voice and ambient sound from the speaker terminal; analyzing the received voice and ambient sound to infer the speaker's age information; analyzing the received voice and ambient sound to infer the speaker's gender information; analyzing the received voice and ambient sound to infer the speaker's psychological state; analyzing the received voice and ambient sound to infer the truth or falsehood of the speaker; and displaying, on a user terminal, the age information, gender information, psychological state, and truth/falsehood inferred by the situation determination server.
Here, when the situation determination server receives the speaker's voice and the ambient sound from the speaker terminal, the situation determination server turns on the global positioning system (GPS) function of the speaker terminal through the corresponding carrier server, and receives and displays GPS coordinates from the speaker terminal in real time.
According to the voice/sound analysis based situation determination system and method described above, the age, gender, psychological state, truthfulness, and surrounding situation of a speaker can be inferred from the speaker's voice and the surrounding sound, which is effective for judging the exact situation behind an accident report.
FIG. 1 is a block diagram of a voice/sound analysis based situation determination system according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a method of determining a context based on speech / sound analysis according to an exemplary embodiment of the present invention.
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the invention is not limited to the particular embodiments disclosed, but covers all modifications, equivalents, and alternatives falling within its spirit and scope. Like reference numerals are used for like elements throughout the drawings.

The terms first, second, A, B, etc. may be used to describe various elements, but the elements are not limited by these terms; the terms serve only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly a second component may be referred to as a first component. The term "and/or" includes any combination of a plurality of related listed items, or any one of them.

When an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to that element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, no intervening elements are present.

The terminology used in this application is for describing specific embodiments only and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this application, terms such as "comprises" or "having" specify the presence of stated features, numbers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meanings as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries are to be interpreted as having meanings consistent with their contextual meaning in the related art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Hereinafter, preferred embodiments according to the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a block diagram of a voice/sound analysis based situation determination system according to an embodiment of the present invention.
Referring to FIG. 1, a voice/sound analysis based situation determination system may include a speaker mobile terminal 110, a situation determination server 120, and a user terminal 130.
Hereinafter, the detailed configuration will be described.
The speaker mobile terminal 110 transmits the speaker's voice and the surrounding sound to the situation determination server 120.
The situation determination server 120 may include a call receiving module 121, an age information inference module 122, a gender information inference module 123, a psychological state inference module 124, a truth/false inference module 125, a peripheral acoustic inference module 126, an ambient acoustic database 127, and a GPS remote control module 128.
Hereinafter, the detailed configuration will be described.
The call receiving module 121 receives the speaker's voice and the ambient sound transmitted from the speaker mobile terminal 110 and provides them to the inference modules.
The age information inference module 122 infers the speaker's age information by analyzing the voice and ambient sound received from the call receiving module 121.
Specifically, the age information can be inferred according to the following inference criteria.
Generally, several factors cause significant differences in the speaking behavior of elderly people compared with young people. In the elderly, the speech rate is generally slower than in young people, and the syllable-by-syllable speaking rate is not constant. In addition, silences are inserted at inappropriate positions, and there is a tendency toward abnormalities in pronunciation and phonation.
On the other hand, younger adults show a longer maximum phonation time (MPT) than older adults, which means that vowel prolongation performance tends to decrease with age. The alternating motion rate (AMR) and sequential motion rate (SMR), which measure the repetition rate and regularity of syllables, are also faster in younger people than in older people.
In addition, the elderly have reduced cognitive, sensory, and motor functions that contribute to speech output, so the overall speech rate and articulation rate are slowed.
The elderly also exhibit a high incidence of voice disorders by both subjective and objective measures, and elderly women show a significantly higher voice handicap index than younger adult women.
For men, vocal pitch falls until around ages 40 to 50 and then rises again, whereas women's pitch tends to fall as they age.
Measurements of jitter and shimmer show that in elderly males both the rate of change of vocal fold vibration and the irregularity of the voice waveform increase, while in elderly females only the rate of change of vibration tends to increase. Here, jitter denotes the cycle-to-cycle variation of vocal fold vibration, and shimmer denotes the cycle-to-cycle variation of voice waveform amplitude. This tendency indicates a decrease in laryngeal function or a degenerative change in laryngeal tissue. The noise-to-harmonics ratio, another indicator of phonation stability, increases significantly in elderly women, which supports the instability of phonation with increasing age.
Changes in voice indices due to degenerative changes of the larynx tend to appear most strongly in the jitter of vocal fold vibration.
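The patent does not specify how jitter and shimmer are computed; a common textbook formulation, local jitter over extracted pitch periods and local shimmer over cycle peak amplitudes, can be sketched as follows (function names and the period-extraction step are illustrative assumptions, not part of the patent):

```python
def local_jitter(periods):
    # Mean absolute difference between consecutive pitch periods,
    # normalized by the mean period: cycle-to-cycle frequency variation.
    diffs = [abs(periods[i] - periods[i - 1]) for i in range(1, len(periods))]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))


def local_shimmer(amplitudes):
    # The same measure applied to cycle peak amplitudes:
    # cycle-to-cycle amplitude variation of the voice waveform.
    diffs = [abs(amplitudes[i] - amplitudes[i - 1]) for i in range(1, len(amplitudes))]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))
```

Under the criteria above, larger values of either measure would push the inferred age upward; a perfectly steady voice yields zero for both.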
The gender information inference module 123 infers the speaker's gender by analyzing the voice and ambient sound received from the call receiving module 121. Gender can be inferred according to the following criteria.
There are significant gender differences in fundamental frequency, frequency perturbation rate, amplitude perturbation rate, and maximum fundamental frequency. By contrast, the noise-to-harmonics ratio, average fundamental frequency, and minimum fundamental frequency show no significant gender difference. The fundamental frequency also differs significantly between connected speech and sustained vowel prolongation.
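As a minimal illustration of the fundamental-frequency criterion: adult male F0 typically lies near 100 to 150 Hz and adult female F0 near 180 to 250 Hz, so a single threshold separates most speakers. The 165 Hz boundary and the function names below are illustrative assumptions; the patent itself gives no numeric thresholds:

```python
def estimate_f0(periods):
    # Average fundamental frequency in Hz from extracted pitch periods (seconds).
    mean_period = sum(periods) / len(periods)
    return 1.0 / mean_period


def infer_gender(f0_hz, threshold_hz=165.0):
    # Crude single-feature threshold; a fuller system would also weigh the
    # perturbation and amplitude-variation measures listed above.
    return "male" if f0_hz < threshold_hz else "female"
```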
The psychological state inference module 124 infers the speaker's psychological state by analyzing the voice and ambient sound received from the call receiving module 121.
The psychological state and intention can be inferred by the following criteria.
First, the speaker's personality can be deduced from speaking behavior: the speaker's extroversion or introversion can be judged from the speaking rate, silence length, silence frequency, and relative variation of pitch.
In addition, an emotion inference engine that judges an emotional state such as pleasant, unpleasant, or stable from the speaker's EEG/pulse wave sensing information can grasp the speaker's emotion, personality, psychological state, and intention from various aspects.
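The extroversion/introversion criterion above can be operationalised as a weighted score over the four cited cues. The normalisation constants and the 0.5 cutoff below are illustrative assumptions only; the patent does not specify them:

```python
def extroversion_score(speech_rate_sps, mean_silence_s, silences_per_min, pitch_rel_var):
    # Faster speech and larger pitch variation raise the score;
    # longer and more frequent silences lower it. Constants are illustrative.
    score = 0.0
    score += min(speech_rate_sps / 6.0, 1.0)    # syllables/s, ~6 treated as fast
    score += min(pitch_rel_var / 0.3, 1.0)      # relative pitch variation
    score -= min(mean_silence_s / 2.0, 1.0)     # long pauses
    score -= min(silences_per_min / 30.0, 1.0)  # frequent pauses
    return score


def classify_personality(score, cutoff=0.5):
    return "extrovert" if score >= cutoff else "introvert"
```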
The truth/false inference module 125 infers the truth or falsehood of the speaker's report by analyzing the voice and ambient sound received from the call receiving module 121.
The truth / falsehood of a speaker can be inferred by the following criteria.
First, the speaker's answer to the call taker's question can be stored for 5 seconds and analyzed to judge its truth or falsehood.
Here, the call taker can be configured to ask questions of the same pattern, with some expected answers to these questions set in advance, and truth or falsehood judged by comparing against them.
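The comparison of a stored 5-second answer against pre-set expected answers can be sketched with a simple token-overlap (Jaccard) score; the example answers and the 0.5 threshold are illustrative assumptions, and a real system would operate on a speech-recognition transcript:

```python
def token_overlap(answer, expected):
    # Jaccard similarity over lowercase word sets.
    a, b = set(answer.lower().split()), set(expected.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 0.0


def judge_answer(transcribed_answer, expected_answers, threshold=0.5):
    # True if the stored answer matches any pre-set expected answer
    # for the same question pattern closely enough.
    best = max(token_overlap(transcribed_answer, e) for e in expected_answers)
    return best >= threshold
```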
The peripheral acoustic inference module 126 infers the surrounding situation by comparing the ambient sound received from the call receiving module 121 with sounds stored in advance in the ambient acoustic database 127.
For example, sounds such as car sounds, human sounds, and rain sounds can be stored in advance in the ambient acoustic database 127, and the received ambient sound can be compared against them to infer the surroundings.
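The database comparison can be sketched as a nearest-template match over a feature vector. The toy band-energy feature and the template labels below are illustrative; a deployed system would likely use spectral features such as MFCCs:

```python
def band_energies(samples, n_bands=4):
    # Toy feature: signal energy in equal-length time bands.
    n = len(samples) // n_bands
    return [sum(s * s for s in samples[i * n:(i + 1) * n]) for i in range(n_bands)]


def classify_ambient(samples, templates):
    # Pick the stored template (e.g. "car", "rain") with the smallest
    # squared distance to the recorded sound's feature vector.
    feat = band_energies(samples)
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(feat, templates[label]))
    return min(templates, key=dist)
```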
The GPS remote control module 128 turns on the GPS function of the speaker mobile terminal 110 through the corresponding carrier server when the call receiving module 121 receives the speaker's voice and ambient sound.

It is preferable that the GPS remote control module 128 receives GPS coordinates from the speaker mobile terminal 110 in real time.
The user terminal 130 displays the age information, gender information, psychological state, and truth/false information inferred by the situation determination server 120.

Also, the user terminal 130 can display the GPS coordinates received in real time.
FIG. 2 is a flowchart illustrating a method of determining a context based on speech / sound analysis according to an exemplary embodiment of the present invention.
Referring to FIG. 2, the speaker mobile terminal 110 first transmits the speaker's voice and the surrounding sound.

Next, the situation determination server 120 receives the speaker's voice and the ambient sound from the speaker mobile terminal 110.

Next, the situation determination server 120 analyzes the received voice and ambient sound to infer the speaker's age information.

Next, the situation determination server 120 analyzes the received voice and ambient sound to infer the speaker's gender information.

Next, the situation determination server 120 analyzes the received voice and ambient sound to infer the speaker's psychological state.

Next, the situation determination server 120 infers the surrounding sound by comparing the received ambient sound with the ambient sounds stored in advance in the ambient acoustic database 127.

Next, the situation determination server 120 analyzes the received voice and ambient sound to infer the truth or falsehood of the speaker, and the user terminal 130 displays the inferred age information, gender information, psychological state, and truth/falsehood.

Next, when the situation determination server 120 receives the speaker's voice and ambient sound from the speaker mobile terminal 110, it turns on the GPS function of the speaker mobile terminal 110 through the corresponding carrier server and receives and displays GPS coordinates in real time.
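The end-to-end flow of FIG. 2 amounts to running each inference over the received audio and collecting the results for display. A minimal orchestration sketch, with the inference functions passed in as pluggable callables (all names are illustrative):

```python
def determine_situation(voice, ambient, infer_age, infer_gender, infer_psych, infer_truth):
    # Each callable takes (voice, ambient) and returns its inference;
    # the dict mirrors what the user terminal would display.
    return {
        "age": infer_age(voice, ambient),
        "gender": infer_gender(voice, ambient),
        "psychological_state": infer_psych(voice, ambient),
        "truthful": infer_truth(voice, ambient),
    }
```

In practice each callable would wrap one server module (age, gender, psychological state, truth/false), so modules can be developed and swapped independently.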
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the invention as defined in the following claims.
110: Speaker mobile terminal
120: Situation determination server
121: Call receiving module
122: Age information inference module
123: Gender information inference module
124: Psychological state inference module
125: Truth/false inference module
126: Peripheral acoustic inference module
127: Ambient acoustic database
128: GPS remote control module
130: User terminal
Claims (4)
A situation determination server including: a call receiving module for receiving a speaker's voice and ambient sound from a speaker mobile terminal; an age information inference module for inferring age information of the speaker by analyzing the voice and the ambient sound received from the call receiving module; a gender information inference module for inferring the gender of the speaker by analyzing the received voice and ambient sound; a psychological state inference module for inferring the psychological state of the speaker by analyzing the received voice and ambient sound; a truth/false inference module for inferring the truth or falsehood of the speaker by analyzing the received voice and ambient sound; an acoustic database storing predetermined ambient sounds in advance; a peripheral acoustic inference module for inferring the surrounding sound by comparing the received ambient sound with the ambient sounds previously stored in the acoustic database; and a global positioning system (GPS) remote control module that turns on the GPS function of the speaker mobile terminal through the corresponding carrier server when the call receiving module receives the speaker's voice and ambient sound from the speaker mobile terminal; and

a user terminal that displays the age information inferred by the age information inference module, the gender information inferred by the gender information inference module, the psychological state inferred by the psychological state inference module, and the truth/false information inferred by the truth/false inference module,
Wherein the age information inference module judges the speaker to be older as the speech rate is slower, the syllable-by-syllable speaking rate is less constant, silences are inserted at inappropriate positions, initial and final pronunciations are abnormal, vowel prolongation performance is lower, the overall speech rate and articulation rate are slower, the maximum phonation time (MPT) is shorter, the alternating motion rate (AMR) and sequential motion rate (SMR), which measure the repetition rate and regularity of syllables, are slower, the voice handicap index is higher, the vocal pitch is lower, the jitter (rate of change of vocal fold vibration) and shimmer (irregularity of the voice waveform) are higher, and the noise-to-harmonics ratio, an indicator of phonation stability, is higher; and, for a male speaker, judges the age to be in the range of 40 to 50 when the jitter and shimmer increase while the vocal pitch falls and then rises again,
Wherein the gender information inferring module comprises:
Deduces gender by analyzing the differences in maximum phonation time, fundamental frequency, frequency perturbation rate, amplitude perturbation rate, noise-to-harmonics ratio, average fundamental frequency, and maximum fundamental frequency,
Wherein the psychological state inference module comprises:
Includes an inference engine that judges the extroversion or introversion of the speaker on the basis of the speaker's speaking rate, silence length, silence frequency, and relative variation of pitch, and an emotion inference engine that judges an emotional state such as pleasant, unpleasant, or stable from the speaker's EEG/pulse wave sensing information, thereby grasping the emotion, personality, psychological state, or intention of the speaker,
Wherein the peripheral acoustic reasoning module comprises:
Is configured to infer the surrounding sound by comparing the car sounds, human sounds, and rain sounds stored in advance in the acoustic database with the received ambient sound,
The truth / false inference module includes:
Stores the speaker's answer, for 5 seconds, to a predetermined question asked by the call taker, and judges whether the speaker's answer is true or false by comparing the stored answer against a predetermined set of expected answers to questions of the same pattern.
A voice/sound analysis based situation determination method comprising: receiving a voice of a speaker and ambient sound from a speaker mobile terminal;
Analyzing the received voice and the ambient sound to infer the age information of the speaker;
Analyzing the received voice and ambient sounds to infer the gender information of the speaker;
Analyzing the received voice and the ambient sound to infer the psychological state of the speaker;
Inferring the surrounding sound by comparing the ambient sound received by the call receiving module with the ambient sounds stored in advance in the sound database;
Analyzing the received voice and ambient sound to infer the truth / falsehood of the speaker;
Displaying, on a user terminal, the age information, gender information, psychological state, and truth/falsehood inferred by the situation determination server;
Wherein, when the situation determination server receives the voice of the speaker and the ambient sound from the speaker mobile terminal, the situation determination server turns on the global positioning system (GPS) function of the speaker mobile terminal through the corresponding carrier server, and receives and displays GPS coordinates from the speaker mobile terminal in real time,
Wherein the situation determination server analyzes the received voice and ambient sound to infer the age information of the speaker,
Judges the speaker to be older as the speech rate is slower, the syllable-by-syllable speaking rate is less constant, silences are inserted at inappropriate positions, initial and final pronunciations are abnormal, vowel prolongation performance is lower, the overall speech rate and articulation rate are slower, the maximum phonation time (MPT) is shorter, the alternating motion rate (AMR) and sequential motion rate (SMR), which measure the repetition rate and regularity of syllables, are slower, the voice handicap index is higher, the vocal pitch is lower, the jitter (rate of change of vocal fold vibration) and shimmer (irregularity of the voice waveform) are higher, and the noise-to-harmonics ratio, an indicator of phonation stability, is higher; and, for a male speaker, judges the age to be in the range of 40 to 50 when the jitter and shimmer increase while the vocal pitch falls and then rises again,
Wherein the situation determination server analyzes the received voice and ambient sound to infer the gender information of the speaker,
Deduces gender by analyzing the differences in maximum phonation time, fundamental frequency, frequency perturbation rate, amplitude perturbation rate, noise-to-harmonics ratio, average fundamental frequency, and maximum fundamental frequency,
Wherein the situation determination server analyzes the received voice and ambient sound to infer the psychological state of the speaker,
Uses an inference engine that judges the extroversion or introversion of the speaker on the basis of the speaker's speaking rate, silence length, silence frequency, and relative variation of pitch, and an emotion inference engine that judges an emotional state such as pleasant, unpleasant, or stable from the speaker's EEG/pulse wave sensing information, thereby grasping the emotion, personality, psychological state, or intention of the speaker,
Wherein the situation determination server analyzes the received voice and ambient sound to infer the truth / false of the speaker,
Stores the speaker's answer, for 5 seconds, to a predetermined question asked by the call taker, and judges whether the speaker's answer is true or false by comparing the stored answer against a predetermined set of expected answers to questions of the same pattern.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020160020348A KR101799874B1 (en) | 2016-02-22 | 2016-02-22 | Situation judgment system and method based on voice/sound analysis |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20170099004A KR20170099004A (en) | 2017-08-31 |
KR101799874B1 (en) | 2017-12-21 |
Family
ID=59761369
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020160020348A KR101799874B1 (en) | 2016-02-22 | 2016-02-22 | Situation judgment system and method based on voice/sound analysis |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101799874B1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101998650B1 (en) * | 2019-02-12 | 2019-07-10 | 한방유비스 주식회사 | Collecting information management system of report of disaster |
- 2016-02-22: application KR1020160020348A filed in KR; granted as KR101799874B1 (active, IP Right Grant)
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20200120457A (en) * | 2019-04-12 | 2020-10-21 | 쿠팡 주식회사 | Computerized systems and methods for determining authenticity using micro expressions |
US11030294B2 (en) | 2019-04-12 | 2021-06-08 | Coupang Corp. | Computerized systems and methods for determining authenticity using micro expressions |
KR102343777B1 (en) * | 2019-04-12 | 2021-12-28 | 쿠팡 주식회사 | Computerized systems and methods for determining authenticity using micro expressions |
KR20210158376A (en) * | 2019-04-12 | 2021-12-30 | 쿠팡 주식회사 | Computerized systems and methods for determining authenticity using micro expressions |
KR102457498B1 (en) * | 2019-04-12 | 2022-10-21 | 쿠팡 주식회사 | Computerized systems and methods for determining authenticity using micro expressions |
US11494477B2 (en) | 2019-04-12 | 2022-11-08 | Coupang Corp. | Computerized systems and methods for determining authenticity using micro expressions |
Also Published As
Publication number | Publication date |
---|---|
KR20170099004A (en) | 2017-08-31 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right |