KR101655397B1 - Terminal apparatus and system for reporting circumstances - Google Patents

Terminal apparatus and system for reporting circumstances

Info

Publication number
KR101655397B1
Authority
KR
South Korea
Prior art keywords
severity
speech
voice
unit
detected
Prior art date
Application number
KR1020150061907A
Other languages
Korean (ko)
Inventor
강지민
Original Assignee
주식회사 리트빅
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 리트빅
Priority to KR1020150061907A
Application granted
Publication of KR101655397B1

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G08B21/0202 - Child monitoring systems using a transmitter-receiver system carried by the parent and the child
    • G08B21/0205 - Specific application combined with child monitoring using a transmitter-receiver system
    • G08B21/0208 - Combination with audio or video communication, e.g. combination with "baby phone" function
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00 - Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/005 - Language recognition
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/04 - Segmentation; Word boundary detection
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 - Speaker identification or verification

Abstract

The present invention relates to a terminal apparatus and a system for reporting a circumstance which determine a severity according to a voice analysis result, calculate a severity class, extract a specific section image before and after the time point at which the severity class is judged to be at or above a danger class, and transmit that section image to a designated control terminal. A circumstance can therefore be reported effectively.

Description

[0001] The present invention relates to a terminal apparatus and a system for reporting circumstances.

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a situation reporting technique and, more particularly, to a situation reporting terminal apparatus and system.

Korean Patent Publication No. 10-2012-0090615 (Aug. 17, 2012) proposes a technique that organizes the information collected at a disaster site into knowledge about the disaster situation and processes situation orders according to the notified disaster situation.

Against this background, the present inventors have researched a technique that determines a severity according to the result of speech analysis, calculates a severity class, extracts a specific section image before and after the time point at which the severity class is judged to be at or above a danger class, and transmits it to a designated control terminal.

Korean Patent Publication No. 10-2012-0090615 (2012. 08. 17)

The present invention has been made under the above-mentioned circumstances, and it is an object of the present invention to provide a situation reporting terminal apparatus and system that determine a severity according to a speech analysis result, calculate a severity class, extract a specific section image before and after the time point at which the severity class is judged to be at or above a danger class, transmit the extracted image to a designated control terminal, and thereby report the situation efficiently.

According to an aspect of the present invention, there is provided a situation reporting terminal apparatus including: a voice analysis unit for analyzing a voice sensed by a voice sensing device; a severity determination unit for determining a severity according to the result of the voice analysis by the voice analysis unit and calculating a severity class; a section image extraction unit for extracting, from an image captured by an image capturing apparatus, a specific section image before and after the time point at which the severity class is judged to be at or above a danger class, when the severity class judged by the severity determination unit is at or above the danger class; and a status report unit for reporting the situation by transmitting the specific section image extracted by the section image extraction unit to a designated control terminal.

According to a further aspect of the present invention, the voice analysis unit includes: a voice signal strength detection unit for detecting the strength of the voice signal sensed by the voice sensing device; a voice duration detection unit for detecting the duration of voice above a specific signal strength when a voice of the specific strength or more is detected by the voice signal strength detection unit; a profanity detection unit for detecting profanity from the voice sensed by the voice sensing device; a profanity frequency detection unit for detecting the frequency of the profanity detected by the profanity detection unit; and a profanity speaker count detection unit for detecting the number of speakers who uttered the profanity detected by the profanity detection unit.

According to a further aspect of the present invention, the severity determination unit calculates: a first class factor according to the strength of the voice signal detected by the voice signal strength detection unit; a second class factor according to the duration of voice above a specific signal strength detected by the voice duration detection unit; a third class factor according to the profanity detected by the profanity detection unit; a fourth class factor according to the frequency of the profanity detected by the profanity frequency detection unit; and a fifth class factor according to the number of speakers who uttered the profanity, detected by the profanity speaker count detection unit.

According to a further aspect of the present invention, the severity determination unit determines the severity class by reflecting the first class factor, the second class factor, the third class factor, the fourth class factor, and the fifth class factor.

According to a further aspect of the present invention, the severity determination unit determines the severity class by further reflecting a weight for each factor.

According to a further aspect of the present invention, the status report unit performs a different status report according to the severity class.

According to another aspect of the present invention, there is provided a situation reporting system including: a voice sensing device for sensing voice; an image capturing device for capturing an image; and a situation reporting terminal apparatus that analyzes the voice sensed by the voice sensing device, determines a severity according to the result of the voice analysis, calculates a severity class, and, when the severity class is at or above a danger class, extracts from the image captured by the image capturing device a specific section image before and after the time point at which the severity class is judged to be at or above the danger class and transmits the extracted section image to a designated control terminal.

According to the present invention, a severity is determined according to the result of the voice analysis, a severity class is calculated, a specific section image before and after the time point at which the severity class is judged to be at or above the danger class is extracted and transmitted to a designated control terminal, and the situation can thereby be reported efficiently.

FIG. 1 is a block diagram showing the configuration of an embodiment of a situation reporting system according to the present invention.
FIG. 2 is a block diagram showing the configuration of an embodiment of a situation reporting terminal apparatus according to the present invention.
FIG. 3 is a flowchart showing an example of the situation reporting operation of the situation reporting system according to the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout.

In the following description of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present invention rather unclear.

The terms used throughout this specification have been defined in consideration of the functions of the embodiments of the present invention and may vary according to the intentions or customs of users or operators. Therefore, the definitions of these terms should be based on the contents throughout this specification.

FIG. 1 is a block diagram showing the configuration of an embodiment of a situation reporting system according to the present invention. As shown in FIG. 1, the situation reporting system according to the present invention includes a voice sensing apparatus 100, an image capturing apparatus 200, and a situation reporting terminal apparatus 300.

The voice sensing apparatus 100 senses voice. For example, the voice sensing apparatus 100 may include a microphone 110 that receives a voice signal of a sensing area, an AD converter 120 that converts the analog voice signal input through the microphone 110 into a digital voice signal, a voice codec 130 that compresses the digital voice signal converted by the AD converter 120, and a voice signal transmitter 140 that transmits the digital voice signal compressed by the voice codec 130 to the situation reporting terminal apparatus 300.
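Purely as an illustration of that capture chain, the sketch below simulates the roles of components 110 to 140 in Python; the sample rate, the frame length, the zlib stand-in for the voice codec 130, and the send_to_terminal stub are assumptions, not details given in the patent.

```python
import zlib           # stands in for the voice codec 130; a real device would use a speech codec
import numpy as np

SAMPLE_RATE = 16000   # assumed sampling rate of the AD converter 120
FRAME_SECONDS = 1.0   # assumed length of each transmitted frame

def ad_convert(analog_frame: np.ndarray) -> np.ndarray:
    """Quantize a float signal in [-1, 1] to 16-bit PCM (role of the AD converter 120)."""
    return np.clip(analog_frame * 32767.0, -32768, 32767).astype(np.int16)

def encode(pcm_frame: np.ndarray) -> bytes:
    """Compress the digital voice signal (role of the voice codec 130)."""
    return zlib.compress(pcm_frame.tobytes())

def send_to_terminal(payload: bytes) -> None:
    """Hand the compressed frame to the situation reporting terminal apparatus 300 (role of transmitter 140)."""
    print(f"sending {len(payload)} bytes to terminal 300")

if __name__ == "__main__":
    t = np.linspace(0.0, FRAME_SECONDS, int(SAMPLE_RATE * FRAME_SECONDS), endpoint=False)
    analog = 0.3 * np.sin(2 * np.pi * 220.0 * t)   # stand-in for the microphone 110 input
    send_to_terminal(encode(ad_convert(analog)))
```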

The image capturing apparatus 200 captures an image. In this case, the image capturing apparatus 200 may be a CCD camera apparatus or an IP camera apparatus, and the image capturing apparatus 200 and the voice sensing apparatus 100 may be integrated into a single apparatus.

For example, the image capturing apparatus 200 may include an image sensor 210 that photoelectrically converts an image of the sensing region, an image codec 220 that compresses the image signal photoelectrically converted by the image sensor 210, and an image signal transmitter 230 that transmits the digital image signal compressed by the image codec 220 to the situation reporting terminal apparatus 300.

The situation reporting terminal apparatus 300 analyzes the voice sensed by the voice sensing apparatus 100, determines a severity according to the result of the voice analysis, and calculates a severity class. If the severity class is at or above the danger class, it extracts from the image captured by the image capturing apparatus 200 a specific section image before and after the time point at which the severity class is judged to be at or above the danger class, and transmits the extracted image to the designated control terminal 400.

At this time, the designated control terminal 400 may be a mobile communication terminal owned by a designated person such as a teacher, a parent, or a police officer, or a control server of a designated related organization such as a police station or a fire station.

Therefore, according to the present invention, the situation reporting terminal apparatus 300 determines a severity according to the analysis result of the voice obtained by the voice sensing apparatus 100, calculates a severity class, extracts from the image captured by the image capturing apparatus 200 a specific section image before and after the time point at which the severity class is judged to be at or above the danger class, and transmits it to the designated control terminal 400, so that the situation can be reported efficiently.

FIG. 2 is a block diagram showing the configuration of an embodiment of a situation reporting terminal apparatus according to the present invention. As shown in FIG. 2, the situation reporting terminal apparatus 300 includes a voice analysis unit 310, a severity determination unit 320, a section image extraction unit 330, and a status report unit 340.

The voice analysis unit 310 analyzes the voice sensed by the voice sensing apparatus 100. For example, the voice analysis unit 310 may include a voice signal strength detection unit 311, a voice duration detection unit 312, a profanity detection unit 313, a profanity frequency detection unit 314, and a profanity speaker count detection unit 315.

The voice signal strength detection unit 311 detects the strength of the voice signal sensed by the voice sensing apparatus 100. For example, the voice signal strength detection unit 311 can detect the voice signal strength by measuring the decibel (dB) level, which indicates the relative magnitude of the voice signal.

The voice duration detection unit 312 detects the duration of voice above a specific signal strength when the voice signal strength detection unit 311 detects a voice of the specific strength or more. For example, the voice duration detection unit 312 can detect the duration of voice above the specific signal strength by measuring how long the voice signal stays above a certain decibel (dB) level.
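A minimal sketch of how units 311 and 312 might operate on a block of 16-bit PCM samples follows; the 25 ms analysis frame, the dB-relative-to-full-scale measure, and the -30 dB threshold are assumptions chosen only for illustration.

```python
import numpy as np

FRAME_LEN = 400        # assumed analysis frame: 25 ms at 16 kHz
THRESHOLD_DB = -30.0   # assumed "specific signal strength", in dB relative to full scale

def frame_levels_db(pcm: np.ndarray, frame_len: int = FRAME_LEN) -> np.ndarray:
    """Per-frame level in dB relative to 16-bit full scale (role of the strength detection unit 311)."""
    n_frames = len(pcm) // frame_len
    frames = pcm[: n_frames * frame_len].reshape(n_frames, frame_len).astype(np.float64)
    rms = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12
    return 20.0 * np.log10(rms / 32768.0)

def loud_duration_seconds(pcm: np.ndarray, sample_rate: int = 16000,
                          threshold_db: float = THRESHOLD_DB) -> float:
    """Total time the signal stays above the threshold (role of the duration detection unit 312)."""
    levels = frame_levels_db(pcm)
    return float(np.sum(levels >= threshold_db)) * FRAME_LEN / sample_rate
```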

The profanity detection unit 313 detects profanity from the voice sensed by the voice sensing apparatus 100. For example, voice signal patterns corresponding to profanity are stored in a database in advance, and the profanity detection unit 313 can detect profanity by comparing the voice signal sensed by the voice sensing apparatus 100 with the stored voice signal patterns corresponding to profanity.

The profanity frequency detection unit 314 detects the frequency of the profanity detected by the profanity detection unit 313. For example, the profanity frequency detection unit 314 can detect the frequency of profanity by counting the voice signal patterns corresponding to profanity that are contained in the voice signal sensed by the voice sensing apparatus 100.

The profanity speaker count detection unit 315 detects the number of speakers who uttered the profanity detected by the profanity detection unit 313. For example, the profanity speaker count detection unit 315 may detect the number of speakers who uttered profanity through frequency component analysis of the voice signal patterns corresponding to profanity contained in the voice signal sensed by the voice sensing apparatus 100.
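One way units 314 and 315 could be approximated once unit 313 has located profane utterances as audio segments is sketched below; the autocorrelation pitch estimate and the 15 Hz grouping tolerance are illustrative assumptions, not the analysis prescribed by the patent.

```python
import numpy as np

def fundamental_hz(segment: np.ndarray, sample_rate: int = 16000) -> float:
    """Rough pitch estimate from the autocorrelation peak in the 60-400 Hz speech range.
    Segments are assumed to be at least a few hundred samples long."""
    seg = segment.astype(np.float64) - segment.mean()
    ac = np.correlate(seg, seg, mode="full")[len(seg) - 1:]
    lo, hi = sample_rate // 400, sample_rate // 60
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sample_rate / lag

def profanity_stats(profanity_segments, sample_rate: int = 16000, pitch_tol_hz: float = 15.0):
    """Given audio segments already matched by the profanity detection unit 313, return
    (frequency, estimated speaker count), i.e. the outputs of units 314 and 315."""
    frequency = len(profanity_segments)                 # unit 314: how many profane utterances occurred
    pitches = sorted(fundamental_hz(s, sample_rate) for s in profanity_segments)
    speakers, last = 0, None
    for p in pitches:                                   # unit 315: group utterances with similar pitch
        if last is None or p - last > pitch_tol_hz:
            speakers += 1
        last = p
    return frequency, speakers
```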

The severity determination unit 320 determines a severity according to the voice analysis result of the voice analysis unit 310 and calculates a severity class. For example, the severity determination unit 320 calculates a first class factor according to the strength of the voice signal detected by the voice signal strength detection unit 311, a second class factor according to the duration of voice above a specific signal strength detected by the voice duration detection unit 312, a third class factor according to the profanity detected by the profanity detection unit 313, a fourth class factor according to the frequency of profanity detected by the profanity frequency detection unit 314, and a fifth class factor according to the number of speakers who uttered profanity detected by the profanity speaker count detection unit 315, and determines the severity class by reflecting the calculated first, second, third, fourth, and fifth class factors.

In this case, the severity determination unit 320 may be configured to determine the severity class by further reflecting a weight for each factor. For example, if the first class factor is I, the second class factor is T, the third class factor is P, the fourth class factor is C, the fifth class factor is N, and the weights of the respective factors are ir, tr, pr, cr, and nr, the severity class may be determined according to the value calculated by the severity class determination function F = P * pr + C * cr + I * ir + N * nr + T * tr.
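The class determination function could be realized as in the sketch below; the factor values, the unit weights, and the thresholds that map F onto classes 5 through 1 (a smaller class number meaning a more severe situation, consistent with the reporting behaviour described later) are all assumptions made only to make the formula concrete.

```python
def severity_value(I: float, T: float, P: float, C: float, N: float,
                   ir: float = 1.0, tr: float = 1.0, pr: float = 1.0,
                   cr: float = 1.0, nr: float = 1.0) -> float:
    """Severity class determination function F = P*pr + C*cr + I*ir + N*nr + T*tr."""
    return P * pr + C * cr + I * ir + N * nr + T * tr

def severity_class(f_value: float) -> int:
    """Map F onto classes 5..1, where a smaller number is assumed to be more severe
    and class 4 is assumed to be the danger class; the thresholds are illustrative."""
    if f_value >= 12.0:
        return 1
    if f_value >= 9.0:
        return 2
    if f_value >= 6.0:
        return 3
    if f_value >= 3.0:
        return 4
    return 5   # below the danger class: no section image would be extracted

# Example with assumed factor values and unit weights:
# severity_class(severity_value(I=3, T=2, P=4, C=2, N=1))  ->  1
```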

When the severity class judged by the severity determination unit 320 is at or above the danger class, the section image extraction unit 330 extracts, from the image captured by the image capturing apparatus 200, a specific section image before and after the time point at which the severity class is judged to be at or above the danger class.
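A sketch of how the section image extraction unit 330 might be built, assuming the terminal keeps a rolling buffer of timestamped frames received from the image capturing apparatus 200; the 10-second margins, the buffer length, and the frame rate are assumptions, since the patent leaves the exact length of the "specific section" open.

```python
from collections import deque

class SectionExtractor:
    """Rolling buffer of timestamped frames with a cut-out around a trigger time (role of unit 330)."""

    def __init__(self, pre_seconds: float = 10.0, post_seconds: float = 10.0,
                 buffer_seconds: float = 120.0, fps: float = 15.0):
        self.pre = pre_seconds
        self.post = post_seconds
        self.frames = deque(maxlen=int(buffer_seconds * fps))   # (timestamp, frame) pairs

    def add_frame(self, timestamp: float, frame: bytes) -> None:
        """Called for every frame received from the image capturing apparatus 200."""
        self.frames.append((timestamp, frame))

    def extract(self, trigger_time: float):
        """Return the frames from pre seconds before to post seconds after the trigger time,
        i.e. the specific section image around the moment the danger class was reached."""
        start, end = trigger_time - self.pre, trigger_time + self.post
        return [(t, f) for (t, f) in self.frames if start <= t <= end]
```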

The status report unit 340 reports the situation by transmitting the specific section image before and after the time point at which the severity class is judged to be at or above the danger class, extracted by the section image extraction unit 330, to the designated control terminal 400.

At this time, the status report unit 340 may be configured to perform a different status report according to the severity class. For example, suppose the danger class is class 4. If the severity class is class 4, the specific section image is transmitted to the designated control terminal 400 and only a warning about the situation is issued. If the severity class is class 3, the specific section image is transmitted to the designated control terminal 400, a warning about the situation is issued, and whether or not to report is then decided.

If the severity class is class 2, the specific section image is transmitted to the designated control terminal 400, and if the decision to report is delayed after the warning about the situation, the report is made automatically. If the severity class is class 1, the situation is reported automatically while the warning is transmitted to the designated control terminal 400. In this way, different status reports can be made according to the severity class.
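The four behaviours above could be dispatched as in the following sketch; the print statements stand in for the actual warning, transmission, and reporting channels, and the operator_reported flag is an assumed way of modelling the delayed decision for class 2.

```python
def send_section_image(section_image) -> None:
    """Placeholder for transmitting the section image to the designated control terminal 400."""
    print("section image sent to control terminal 400")

def report_status(severity_class: int, section_image, operator_reported: bool = False) -> None:
    """Role of the status report unit 340, assuming the danger class is class 4 and that a
    smaller class number means a more severe situation."""
    send_section_image(section_image)
    if severity_class == 4:
        print("warning only: the situation is notified")
    elif severity_class == 3:
        print("warning issued; the operator decides whether to report")
    elif severity_class == 2:
        print("warning issued; a report will be made automatically if the decision is delayed")
        if not operator_reported:
            print("no decision received in time: reporting automatically")
    elif severity_class == 1:
        print("reporting automatically while the warning is transmitted")
```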

Therefore, according to the present invention, the situation reporting terminal apparatus 300 determines a severity according to the analysis result of the voice obtained by the voice sensing apparatus 100, calculates a severity class, extracts from the image captured by the image capturing apparatus 200 a specific section image before and after the time point at which the severity class is judged to be at or above the danger class, and transmits it to the designated control terminal 400, so that the situation can be reported efficiently.

The situation reporting operation of the situation reporting system according to the present invention as described above will now be described with reference to FIG. 3. FIG. 3 is a flowchart showing an example of the situation reporting operation of the situation reporting system according to the present invention.

First, in step 510, the voice signal sensed by the voice sensing apparatus and the video signal captured by the image capturing apparatus are transmitted to the situation reporting terminal apparatus.

Then, in step 520, the situation reporting terminal apparatus analyzes the voice sensed by the voice sensing apparatus.

Then, in step 530, the situation reporting terminal apparatus determines a severity according to the voice analysis result of step 520 and calculates a severity class.

If the severity class determined in step 530 is at or above the danger class, then in step 540 a specific section image before and after the time point at which the severity class is judged to be at or above the danger class is extracted from the image captured by the image capturing apparatus.

Next, in step 550, the situation reporting terminal apparatus reports the situation by transmitting the specific section image extracted in step 540, covering the time before and after the point at which the severity class is judged to be at or above the danger class, to the designated control terminal.
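Steps 520 through 550 amount to the control flow sketched below on the terminal side; every helper here is a stub standing in for the corresponding unit described above, with assumed factor values, weights, and thresholds, so the sketch shows the sequencing rather than a reference implementation.

```python
DANGER_CLASS = 4   # assumed danger class; classes 4..1 trigger extraction, smaller = more severe

def analyze_voice(voice_pcm) -> dict:
    """Stub for the voice analysis unit 310 (step 520); returns assumed factor values."""
    return {"I": 3.0, "T": 2.0, "P": 1.0, "C": 2.0, "N": 1.0}

def determine_severity_class(factors: dict) -> int:
    """Stub for the severity determination unit 320 (step 530); unit weights, assumed thresholds."""
    f = sum(factors.values())
    return 1 if f >= 12 else 2 if f >= 9 else 3 if f >= 6 else 4 if f >= 3 else 5

def extract_section(video_frames, trigger_time: float):
    """Stub for the section image extraction unit 330 (step 540); assumed 10 s margins."""
    return [frame for (t, frame) in video_frames if abs(t - trigger_time) <= 10.0]

def report_situation(grade: int, section) -> None:
    """Stub for the status report unit 340 (step 550)."""
    print(f"class {grade}: sending {len(section)} frames to the designated control terminal")

def on_signals_received(voice_pcm, video_frames, trigger_time: float) -> None:
    """Steps 520-550, run after the step 510 transmission from the sensing devices."""
    grade = determine_severity_class(analyze_voice(voice_pcm))
    if grade <= DANGER_CLASS:
        report_situation(grade, extract_section(video_frames, trigger_time))
```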

According to the present invention, the situation reporting terminal apparatus determines a severity according to the analysis result of the voice acquired by the voice sensing apparatus, calculates a severity class, extracts from the image captured by the image capturing apparatus a specific section image before and after the time point at which the severity class is judged to be at or above the danger class, and transmits the extracted image to the designated control terminal, so that the situation can be reported efficiently and the object of the present invention can be achieved.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

INDUSTRIAL APPLICABILITY The present invention is industrially applicable in the field of information technology and its application technology.

100: Voice sensing apparatus
200: Image capturing apparatus
300: Situation reporting terminal apparatus
310: Voice analysis unit
311: Voice signal strength detection unit
312: Voice duration detection unit
313: Profanity detection unit
314: Profanity frequency detection unit
315: Profanity speaker count detection unit
320: Severity determination unit
330: Section image extraction unit
340: Status report unit

Claims (12)

1. A situation reporting terminal apparatus comprising:
a voice analysis unit for analyzing a voice sensed by a voice sensing device, the voice analysis unit including a voice signal strength detection unit for detecting the strength of the voice signal sensed by the voice sensing device, a voice duration detection unit for detecting the duration of voice above a specific signal strength when a voice of the specific strength or more is detected by the voice signal strength detection unit, a profanity detection unit for detecting profanity from the voice sensed by the voice sensing device, a profanity frequency detection unit for detecting the frequency of the profanity detected by the profanity detection unit, and a profanity speaker count detection unit for detecting the number of speakers who uttered the profanity detected by the profanity detection unit;
a severity determination unit for determining a severity according to the result of the voice analysis by the voice analysis unit and calculating a severity class;
a section image extraction unit for extracting, from an image captured by an image capturing apparatus, a specific section image before and after the time point at which the severity class is judged to be at or above a danger class, when the severity class judged by the severity determination unit is at or above the danger class; and
a status report unit for reporting the situation by transmitting the specific section image extracted by the section image extraction unit to a designated control terminal.
2. (Deleted)
3. The situation reporting terminal apparatus according to claim 1, wherein the severity determination unit calculates:
a first class factor according to the strength of the voice signal detected by the voice signal strength detection unit;
a second class factor according to the duration of voice above a specific signal strength detected by the voice duration detection unit;
a third class factor according to the profanity detected by the profanity detection unit;
a fourth class factor according to the frequency of the profanity detected by the profanity frequency detection unit; and
a fifth class factor according to the number of speakers who uttered the profanity, detected by the profanity speaker count detection unit.
4. The situation reporting terminal apparatus according to claim 3, wherein the severity determination unit determines the severity class by reflecting the first class factor, the second class factor, the third class factor, the fourth class factor, and the fifth class factor.
5. The situation reporting terminal apparatus according to claim 4, wherein the severity determination unit determines the severity class by further reflecting a weight for each factor.
6. The situation reporting terminal apparatus according to any one of claims 1, 3, 4, and 5, wherein the status report unit performs a different status report according to the severity class.
7. A situation reporting system comprising: a voice sensing device for sensing voice; an image capturing device for capturing an image; and a situation reporting terminal apparatus that analyzes the voice sensed by the voice sensing device, determines a severity according to the result of the voice analysis, calculates a severity class, and, when the severity class is at or above a danger class, extracts from the image captured by the image capturing device a specific section image before and after the time point at which the severity class is judged to be at or above the danger class and transmits the extracted section image to a designated control terminal,
wherein the situation reporting terminal apparatus comprises:
a voice analysis unit for analyzing the voice sensed by the voice sensing device, the voice analysis unit including a voice signal strength detection unit for detecting the strength of the voice signal sensed by the voice sensing device, a voice duration detection unit for detecting the duration of voice above a specific signal strength when a voice of the specific strength or more is detected by the voice signal strength detection unit, a profanity detection unit for detecting profanity from the voice sensed by the voice sensing device, a profanity frequency detection unit for detecting the frequency of the profanity detected by the profanity detection unit, and a profanity speaker count detection unit for detecting the number of speakers who uttered the profanity detected by the profanity detection unit;
a severity determination unit for determining a severity according to the result of the voice analysis by the voice analysis unit and calculating a severity class;
a section image extraction unit for extracting, from the image captured by the image capturing device, a specific section image before and after the time point at which the severity class is judged to be at or above the danger class, when the severity class judged by the severity determination unit is at or above the danger class; and
a status report unit for reporting the situation by transmitting the specific section image extracted by the section image extraction unit to the designated control terminal.
8. (Deleted)
9. (Deleted)
10. The situation reporting system according to claim 7, wherein the severity determination unit calculates:
a first class factor according to the strength of the voice signal detected by the voice signal strength detection unit;
a second class factor according to the duration of voice above a specific signal strength detected by the voice duration detection unit;
a third class factor according to the profanity detected by the profanity detection unit;
a fourth class factor according to the frequency of the profanity detected by the profanity frequency detection unit; and
a fifth class factor according to the number of speakers who uttered the profanity, detected by the profanity speaker count detection unit.
11. The situation reporting system according to claim 10, wherein the severity determination unit determines the severity class by reflecting the first class factor, the second class factor, the third class factor, the fourth class factor, and the fifth class factor.
12. The situation reporting system according to claim 11, wherein the severity determination unit determines the severity class by further reflecting a weight for each factor.
KR1020150061907A 2015-04-30 2015-04-30 Terminal apparatus and system for reporting circumstances KR101655397B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150061907A KR101655397B1 (en) 2015-04-30 2015-04-30 Terminal apparatus and system for reporting circumstances

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150061907A KR101655397B1 (en) 2015-04-30 2015-04-30 Terminal apparatus and system for reporting circumstances

Publications (1)

Publication Number Publication Date
KR101655397B1 true KR101655397B1 (en) 2016-09-07

Family

ID=56950046

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150061907A KR101655397B1 (en) 2015-04-30 2015-04-30 Terminal apparatus and system for reporting circumstances

Country Status (1)

Country Link
KR (1) KR101655397B1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100584481B1 (en) * 2004-02-09 2006-05-29 (주) 세이프텔 Alarm system and method for surveillance of armed robber's invasion
KR20120090615A (en) 2011-02-08 2012-08-17 한국전자통신연구원 Apparatus and method for processing disaster reporting
KR20140143069A (en) * 2013-06-05 2014-12-15 삼성전자주식회사 Apparatus for dectecting aucoustic event and method and operating method thereof

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108242236B (en) * 2016-12-26 2023-12-15 现代自动车株式会社 Dialogue processing device, vehicle and dialogue processing method
CN114463928A (en) * 2021-12-29 2022-05-10 上海瑞琨计算机系统集成有限公司 Intelligent alarm method and system
CN114463928B (en) * 2021-12-29 2022-11-25 上海瑞琨计算机系统集成有限公司 Intelligent alarm method and system

Similar Documents

Publication Publication Date Title
CN109300471B (en) Intelligent video monitoring method, device and system for field area integrating sound collection and identification
CN108734055B (en) Abnormal person detection method, device and system
KR101445367B1 (en) Intelligent cctv system to recognize emergency using unusual sound source detection and emergency recognition method
KR101609914B1 (en) the emergency situation sensing device responding to physical and mental shock and the emergency situation sensing method using the same
EP2747431B1 (en) Device and method for detecting whether camera is interfered with, and video monitoring system
KR101309366B1 (en) System and Method for Monitoring Emergency Motion based Image
KR20130085315A (en) Method for video surveillance system based on human identification
KR101485022B1 (en) Object tracking system for behavioral pattern analysis and method thereof
CN111935319B (en) Monitoring processing method and system based on vehicle terminal system and related equipment
KR101602753B1 (en) emergency call system using voice
KR101899436B1 (en) Safety Sensor Based on Scream Detection
KR101475177B1 (en) Emergency call-closed circuit television system and method thereof
CN102521945A (en) Calling detection alarming method and device
KR102069270B1 (en) CCTV system with fire detection
KR101384781B1 (en) Apparatus and method for detecting unusual sound
KR101111493B1 (en) Automatic Fire Recognition and Alarm Systems using Intelligent IP Camera and Method Thereof
KR101655397B1 (en) Terminal apparatus and system for reporting circumstances
KR101444843B1 (en) System for monitoring image and thereof method
KR100887942B1 (en) System for sensing abnormal phenomenon on realtime and method for controlling the same
CN111652128B (en) High-altitude power operation safety monitoring method, system and storage device
CN112767647A (en) Danger early warning method, device, equipment and computer readable storage medium
KR101752066B1 (en) Development of emergency detection system using environment information in elevator passenger and method thereof
CN207149028U (en) A kind of fire monitoring device
JP2018087838A (en) Voice recognition device
US20220319288A1 (en) Systems and methods for broadcasting an audio or visual alert that includes a description of features of an ambient object extracted from an image captured by a camera of a doorbell device

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant