CN114463928A - Intelligent alarm method and system - Google Patents

Intelligent alarm method and system

Info

Publication number
CN114463928A
CN114463928A
Authority
CN
China
Prior art keywords
risk
information
sound
alarm
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111638363.4A
Other languages
Chinese (zh)
Other versions
CN114463928B (en)
Inventor
杨闰辉
杨一帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Ruikun Computer System Integration Co ltd
Original Assignee
Shanghai Ruikun Computer System Integration Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Ruikun Computer System Integration Co ltd filed Critical Shanghai Ruikun Computer System Integration Co ltd
Priority to CN202111638363.4A
Publication of CN114463928A
Application granted
Publication of CN114463928B
Active legal-status
Anticipated expiration legal-status

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B19/00 - Alarms responsive to two or more different undesired or abnormal conditions, e.g. burglary and fire, abnormal temperature and abnormal rate of flow
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 - Burglar, theft or intruder alarms
    • G08B13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 - Actuation by interference with heat, light, or radiation of shorter wavelength, using passive radiation detection systems
    • G08B13/194 - Actuation using passive radiation detection systems, using image scanning and comparing systems
    • G08B13/196 - Actuation using passive radiation detection systems, using image scanning and comparing systems with television cameras
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00 - Fire alarms; Alarms responsive to explosion
    • G08B17/10 - Actuation by presence of smoke or gases, e.g. automatic alarm devices for analysing flowing fluid materials by the use of optical means
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00 - Fire alarms; Alarms responsive to explosion
    • G08B17/12 - Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B17/125 - Actuation by presence of radiation or particles, by using a video camera to detect fire or smoke
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G08B21/04 - Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B7/00 - Signalling systems according to more than one of groups G08B3/00 - G08B6/00; Personal calling systems according to more than one of groups G08B3/00 - G08B6/00
    • G08B7/06 - Signalling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Gerontology & Geriatric Medicine (AREA)
  • Alarm Systems (AREA)

Abstract

Embodiments of this specification provide an intelligent alarm method and an intelligent alarm system. The method comprises the following steps: obtaining collected information of a monitored space, the monitored space including at least one of an elevator and a car; performing risk identification based on the collected information to obtain a risk identification result; determining risk information based on the risk identification result; and issuing an alarm signal based on the risk information.

Description

Intelligent alarm method and system
Technical Field
This specification relates to the computer field, and in particular to an intelligent alarm method and system.
Background
In the confined spaces of vehicles such as elevators and subways, when sudden incidents such as equipment failure, personal injury, or the introduction of contraband occur, the people inside often cannot notify maintenance or management staff immediately. In addition, sudden incidents of different risk levels call for different corresponding measures.
Therefore, there is a need for an intelligent alarm system and method that can raise alarms promptly, reduce false alarms, and respond to different emergencies with different alarm levels.
Disclosure of Invention
One embodiment of this specification provides an intelligent alarm method. The method comprises the following steps: obtaining collected information of a monitored space, the monitored space including at least one of an elevator and a car; performing risk identification based on the collected information to obtain a risk identification result; determining risk information based on the risk identification result; and issuing an alarm signal based on the risk information.
One embodiment of this specification provides an intelligent alarm system, including: an acquisition module configured to obtain collected information of a monitored space, the monitored space including at least one of an elevator and a car; an identification module configured to perform risk identification based on the collected information to obtain a risk identification result; a determination module configured to determine risk information based on the risk identification result; and an alarm module configured to issue an alarm signal based on the risk information.
One embodiment of this specification provides an intelligent alarm device, which includes a processor and a memory. The memory is configured to store instructions that, when executed by the processor, cause the device to implement the intelligent alarm method.
One embodiment of this specification provides a computer-readable storage medium storing computer instructions; when a computer reads the instructions in the storage medium, the computer executes the intelligent alarm method.
Drawings
This specification is further explained by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in these embodiments, like numerals indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of an intelligent alarm system according to some embodiments of the present specification;
FIG. 2 is a block diagram of the modules of an intelligent alarm system according to some embodiments of the present specification;
FIG. 3 is an exemplary flowchart of an intelligent alarm method according to some embodiments of the present specification;
FIG. 4 is an exemplary flowchart of risk identification and risk information determination according to some embodiments of the present specification;
FIG. 5 is an exemplary model structure diagram for risk identification according to some embodiments of the present specification;
FIG. 6 is an exemplary model structure diagram for person type determination according to some embodiments of the present specification;
FIG. 7 is an exemplary model structure diagram for risky-vehicle determination according to some embodiments of the present specification;
FIG. 8 is an exemplary diagram of alarm levels according to some embodiments of the present specification.
Detailed Description
To more clearly illustrate the technical solutions of the embodiments of this specification, the drawings used in describing the embodiments are briefly introduced below. Obviously, the drawings described below are only examples or embodiments of this specification, and a person of ordinary skill in the art can also apply this specification to other similar scenarios based on these drawings without inventive effort. Unless apparent from the context or otherwise indicated, like reference numerals in the figures denote the same structure or operation.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" may include plural referents unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flowcharts are used in this specification to illustrate the operations performed by a system according to embodiments of this specification. It should be understood that the preceding or following operations are not necessarily performed in the exact order shown. Instead, the steps may be processed in reverse order or concurrently. Moreover, other operations may be added to these processes, or one or more steps may be removed from them.
The embodiments of this application relate to an intelligent alarm method, system, and storage medium. They can be used in sealed or unsealed vehicle spaces, such as train/subway carriages, aircraft cabins, automobile interiors, and ship cabins, as well as indoors, in elevators, or in other relatively enclosed spaces. In some embodiments, the intelligent alarm method, system, and storage medium can be used in fields such as intelligent security, space management, and vehicle supervision.
FIG. 1 is a schematic diagram of an application scenario of an intelligent alarm system according to some embodiments of the present specification.
The intelligent alarm system can acquire the collected information of the monitored space; performing risk identification based on the acquired information to obtain a risk identification result; determining risk information based on the risk identification result; and sending out an alarm signal based on the risk information.
The scenario 100 involved in the intelligent warning system may be as shown in fig. 1. In some embodiments, the scene 100 involved in the intelligent alarm system may include the acquisition apparatus 110, the network 120, the terminal device 130, the processing device 140, and the storage device 150. The components in the scenario 100 may be interconnected via a network 120.
The acquisition device 110 may be used to collect various physical information about collection objects in the monitored space. In some embodiments, a collection object may include a biological object and/or a non-biological object. For example, a collection object may be organic or inorganic, living or non-living: biological objects such as humans, animals, and plants, or non-biological objects such as battery cars, firearms, smoke, and flames.
In some embodiments, the collection device 110 may include an image collection device 111, a sound collection device 112, a pressure collection device 113, and other data collection or sensing devices (e.g., temperature sensing devices, smoke sensing devices, etc.). In some embodiments, the image capture device 111 may include a camera, a scanner, or the like. In some embodiments, the sound collection apparatus 112 may include a microphone, a sound pickup, or the like. In some embodiments, the pressure acquisition device 113 may include a pressure sensor or the like.
Network 120 may include any suitable network capable of facilitating information and/or data exchange. In some embodiments, at least one component of the scene 100 (e.g., the acquisition apparatus 110, the terminal device 130, the processing device 140, the storage device 150) may exchange information and/or data with at least one other component in the scene 100 via the network 120. For example, the processing device 140 may acquire image information and sound information from the acquisition apparatus 110 through the network 120.
The terminal device 130 may serve both as a terminal for requesting information from the acquisition apparatus and as a terminal for receiving and processing alarm information. Through the terminal device 130, a user may obtain the image information, sound information, pressure information, and the like collected by the acquisition apparatus 110 and perform corresponding operations based on that information. In some embodiments, the terminal device 130 may perform corresponding operations based on an alarm issued by the processing device 140. In some embodiments, the user may enter alarm parameters, cancel an alarm, or change the alarm level via the terminal device 130. In this specification, "user" and "user terminal" may be used interchangeably. In some embodiments, the terminal device 130 may include a mobile device 131, a tablet computer 132, a notebook computer 133, or the like, or any combination thereof.
The processing device 140 may process data and/or information obtained from the acquisition apparatus 110, the terminal device 130, the storage device 150, or other components of the scene 100. For example, the processing device 140 may acquire image information, sound information, pressure information, and the like from the acquisition apparatus 110 and perform analysis processing thereon. As another example, the processing device 140 may select whether to alarm and confirm the alarm level based on the above analysis process. Also for example, the processing device 140 may transmit the collected information and the result of the analysis processing to the terminal device 130. In some embodiments, the processing device 140 may be a single server or a group of servers. The server groups may be centralized or distributed. In some embodiments, the processing device 140 may be local or remote.
In some embodiments, the processing device 140 may be part of the acquisition apparatus 110 or the terminal device 130. For example, the processing device 140 may be integrated within the terminal device 130 for analyzing the collected information, feeding back, and the like.
Storage device 150 may store data, instructions, and/or any other information. For example, the storage device 150 may store image information, sound information, pressure information, and the like collected by the collection apparatus 110. Also for example, the storage device 150 may store information related to alarms such as alarm records, alarm times, alarm levels, etc. In some embodiments, storage device 150 may store data and/or instructions that processing device 140 uses to perform or use to perform the exemplary methods described in this specification.
In some embodiments, the storage device 150 may be connected to the network 120 to communicate with at least one other component in the scene 100 (e.g., the acquisition apparatus 110, the terminal device 130, the processing device 140). At least one component in the scene 100 may access data (e.g., image information, sound information, pressure information, etc.) stored in the storage device 150 through the network 120. In some embodiments, the storage device 150 may be part of the processing device 140.
It should be noted that the scenario 100 of the intelligent alarm system is provided for illustrative purposes only and is not intended to limit the scope of the present application. It will be apparent to those skilled in the art that various modifications and variations can be made in light of the description herein. For example, the scenario 100 may also include information sources. As another example, the scenario 100 may implement similar or different functionality on other devices. However, such changes and modifications do not depart from the scope of the present application.
FIG. 2 is a block diagram of the modules of an intelligent alarm system according to some embodiments of the present specification. As shown in FIG. 2, the system 200 includes an acquisition module 210, an identification module 220, a determination module 230, and an alarm module 240. In some embodiments, the corresponding functions of the system 200 may be performed by the processing device 140; e.g., the acquisition module 210, identification module 220, determination module 230, and alarm module 240 may be modules in the processing device 140.
The acquisition module 210 may be used to obtain collected information of a monitored space, the monitored space including at least one of an elevator and a car.
The identification module 220 may be configured to perform risk identification based on the collected information, and obtain a risk identification result.
The determination module 230 may be configured to determine risk information based on the risk identification result.
The alarm module 240 may be configured to issue an alarm signal based on the risk information.
It should be appreciated that the system 200 and its modules shown in FIG. 2 may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware.
It should be noted that the above description of the system 200 and its modules is merely for convenience of description and does not limit this specification to the illustrated embodiments. It will be appreciated by those skilled in the art that, after understanding the principle of the system, the modules may be combined arbitrarily or connected to other modules as a sub-system without departing from this principle, and the functions of the acquisition module 210, the identification module 220, the determination module 230, and the alarm module 240 may be implemented in a single module or distributed across multiple modules.
FIG. 3 is an exemplary flowchart of an intelligent alarm method according to some embodiments of the present specification. As shown in FIG. 3, the process 300 includes the following steps. In some embodiments, the process 300 may be performed by the processing device 140, e.g., by the respective modules within the processing device 140.
Step 310, obtaining collected information of a monitored space, the monitored space including at least one of an elevator and a car. In some embodiments, step 310 may be performed by the acquisition module 210.
The monitored space may be a closed or non-closed space monitored by the intelligent alarm system, for example an elevator or a car.
The collected information may be various risk-related information gathered within the monitored space, for example image information, sound information, pressure information, temperature information, and smoke density information. In some embodiments, the collected information may be gathered by the collecting apparatus 110 shown in FIG. 1, or assembled by retrieving various information from a network or a storage device.
In some embodiments, the collected information includes at least one of image information within the monitored space and sound information within the monitored space.
The image information may be information obtained by shooting through a camera or a monitoring device. For example, the image information may be information such as a photograph, video, scan data, and the like. In some embodiments, the content of the image information may include information related to a type of person, information related to a risk phenomenon, information related to a dangerous good, information related to a dangerous vehicle, and the like. The above information or other information may be further obtained by analyzing the image information.
The sound information may be information recorded by a microphone or a recording device, for example a segment of recorded sound/audio. In some embodiments, the sound information may include information related to the type of person (e.g., the timbre of the elderly, children, etc.), information related to a person's actions (e.g., loud noise, yelling, etc.), information related to a hazardous item (e.g., the sound of an explosion or an impact), information related to a hazardous vehicle (e.g., whistling, engine noise), and the like. Such information, or other information, may be further obtained by analyzing and processing the sound information. For more description of image information and sound information, see FIG. 4.
Step 320, performing risk identification based on the collected information to obtain a risk identification result. In some embodiments, step 320 may be performed by the identification module 220.
The risk identification result may be the analysis result obtained by performing risk identification analysis on the identification objects in the collected information. An identification object may be any substance, human or non-human. In some embodiments, the risk identification result may include a person-type identification result, such as whether the identified person is a staff member or an unauthorized person; it may further include a risk-phenomenon identification result, a risky-vehicle identification result, a dangerous-object identification result, and the like. In some embodiments, the risk identification result may include a confidence level for some of these results, such as the confidence that the identified person is an unauthorized person.
In some embodiments, the risk identification result may be obtained by a machine learning model, for example, various collected information may be input into at least one machine learning model, and risk identification may be performed by the model to output the risk identification result. For a detailed description of the risk identification process, see fig. 4 and its associated description.
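The model-driven identification described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `identify_risks` helper, the modality names, and the stub models standing in for trained machine learning models are all assumptions.

```python
# Illustrative sketch of step 320: feeding collected information into
# per-modality models and merging their outputs into a risk
# identification result. The stub "models" stand in for trained models.

def identify_risks(collected, models):
    """Run each available model on its modality and collect results.

    collected: dict mapping modality name -> raw data
    models:    dict mapping modality name -> callable returning
               a (label, confidence) pair
    """
    result = []
    for modality, data in collected.items():
        model = models.get(modality)
        if model is None:
            continue  # no model registered for this modality
        label, confidence = model(data)
        result.append({"modality": modality,
                       "label": label,
                       "confidence": confidence})
    return result

# Stub models standing in for the image and sound models.
stub_models = {
    "image": lambda frames: ("fallen_person", 0.92),
    "sound": lambda audio: ("cry_for_help", 0.87),
}

identified = identify_risks({"image": b"...", "sound": b"..."}, stub_models)
```

A downstream step (such as step 330) would then consume the labels and confidences in `identified`.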
In some embodiments, the collected information further includes pressure information, and performing risk identification based on the collected information further includes determining a risky vehicle based on the pressure information.
The pressure information may relate to the physical load on the monitored space, and may be expressed as a force value (newtons, N) or a pressure value (pascals, Pa). In some embodiments, the pressure information may be measured by pressure sensors disposed within the monitored space (e.g., below the elevator or below the car).
The risky vehicle may be a vehicle that can cause a hazard within the monitored space, such as a battery car or an overloaded, overweight vehicle.
In some embodiments, whether a risky vehicle is present, and its type, may be determined based on the pressure information. For example, when the pressure information exceeds a pressure threshold, it is determined that a risky vehicle is in the monitored space; further, the type of risky vehicle may be determined from the pressure signatures of different vehicle types. For example, when a battery car enters the monitored space, the load on the space changes substantially, e.g., increases by about 600 N, and it can then be judged that a battery car has entered the space.
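The threshold logic in this paragraph can be sketched roughly as follows. The per-vehicle thresholds and the `detect_risky_vehicle` helper are illustrative assumptions; only the ~600 N battery-car figure comes from the text.

```python
# Minimal sketch of the threshold check described above: a sudden load
# increase beyond a per-vehicle threshold suggests that vehicle type
# has entered the monitored space. Threshold values are illustrative.

VEHICLE_THRESHOLDS_N = {
    "battery car": 600.0,    # example from the text: ~600 N increase
    "cargo trolley": 300.0,  # hypothetical additional vehicle type
}

def detect_risky_vehicle(baseline_n, current_n, thresholds=VEHICLE_THRESHOLDS_N):
    """Return the vehicle type whose threshold the load increase exceeds,
    or None if no threshold is crossed."""
    increase = current_n - baseline_n
    # Pick the largest threshold the increase still exceeds, so a big
    # jump matches the heaviest plausible vehicle type.
    best = None
    for vehicle, threshold in thresholds.items():
        if increase >= threshold and (best is None or threshold > thresholds[best]):
            best = vehicle
    return best

print(detect_risky_vehicle(2000.0, 2650.0))  # battery car
```

In practice, a single instantaneous threshold is fragile; the patent's model-based approach below addresses this by looking at pressure over multiple time points.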
In some embodiments, the pressure information may be processed based on the model to determine whether a vehicle at risk is present in the monitored space.
In some embodiments, the pressure information at multiple time points may be input into a machine learning model, from which the risky vehicle is determined. Pressure information at multiple time points reflects how the load on the monitored space changes over a period of time, and the vehicle type can be judged from the characteristic pressure change produced when different vehicles enter the monitored space.
In some embodiments, the risky vehicle may be determined by the fourth model based on pressure information at multiple time points.
The fourth model may be a machine learning model for identifying the risky vehicle; it may be a convolutional neural network (CNN) or another model. For a detailed description of the fourth model, see FIG. 7 and its associated description.
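As a rough illustration of how a model such as the fourth model might consume pressure readings at multiple time points, the sketch below runs a toy, hand-written 1-D convolution over the series and flags sharp load jumps. A real CNN would learn its filters and classify vehicle types; the kernel and threshold here are assumptions.

```python
# Toy illustration of a 1-D convolution over a pressure time series.
# A trained CNN would learn many such filters; here a single fixed
# difference kernel detects a sharp load increase between time points.

def conv1d(series, kernel):
    """Valid-mode 1-D convolution (cross-correlation) over a list."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(len(series) - k + 1)]

def looks_like_vehicle_entry(pressure_n, step_kernel=(-1.0, 0.0, 1.0),
                             jump_threshold=500.0):
    """Flag the series if any windowed difference exceeds the threshold,
    i.e., the load jumped sharply between nearby time points."""
    responses = conv1d(pressure_n, step_kernel)
    return max(responses, default=0.0) >= jump_threshold

# Load readings (N) sampled over time: flat, then a ~600 N jump.
readings = [2000.0, 2001.0, 1999.0, 2600.0, 2602.0, 2601.0]
print(looks_like_vehicle_entry(readings))  # True
```

The time-series view lets the model distinguish, say, a vehicle rolling in (sharp step) from passengers boarding gradually, which a single-point threshold cannot.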
Step 330, determining risk information based on the risk identification result. In some embodiments, step 330 may be performed by the determination module 230.
The risk information describes the risks that may arise, as inferred from the risk identification result. For example, the risk information may include property safety risks, personal safety risks, privacy risks, and the like. In some embodiments, these categories may be further refined into, for example, theft risk, fire risk, explosion risk, assault risk, injury risk, and the like.
In some embodiments, the risk information may be determined by evaluating the risk identification result. For example, when the risk identification result indicates an elderly person (a person type), the risk information may indicate a personal safety risk; when the risk identification result indicates fighting (a risk phenomenon), the risk information may indicate personal safety and property safety risks. In some embodiments, each risk identification result may correspond to at least one item of risk information, and each item of risk information may likewise be determined from at least one risk identification result.
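The result-to-risk mapping described in this paragraph can be sketched as a many-to-many lookup. The table entries and the `determine_risk_info` helper are illustrative assumptions, not values from the patent.

```python
# Sketch of step 330: mapping risk identification results to risk
# information. Each result may imply several risks, and the same risk
# may be implied by several results, so results union into a set.

RESULT_TO_RISKS = {
    "elderly person": {"personal safety"},
    "child alone": {"personal safety"},
    "fighting": {"personal safety", "property safety"},
    "battery car": {"fire", "explosion"},
}

def determine_risk_info(identification_results):
    """Union the risks implied by every recognized result."""
    risks = set()
    for result in identification_results:
        risks |= RESULT_TO_RISKS.get(result, set())
    return sorted(risks)

print(determine_risk_info(["fighting", "battery car"]))
# ['explosion', 'fire', 'personal safety', 'property safety']
```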
Step 340, issuing an alarm signal based on the risk information. In some embodiments, step 340 may be performed by the alarm module 240.
The alarm signal may be a signal issued by the processing device to alert preset personnel when risk information is present. In some embodiments, the alarm signal may be presented in a variety of ways, such as an alarm sound, a red or flashing light, an alarm image with alarm text, or vibration of fixtures in the space such as handrails. In some embodiments, the preset personnel may be background managers, other people inside the space, other people outside the space, and so on. In some embodiments, the preset personnel may vary with the risk information; for example, when the risk information includes a personal safety risk, the preset personnel may include healthcare personnel, and when the risk information includes a fire risk, the preset personnel may include firefighters.
In some embodiments, the alarm signal may vary with the risk information. For example, the alarm signal may take different forms depending on how severely the risk information threatens property safety, personal safety, privacy, and so on. When a child is alone in the monitored space, the alarm signal is a general-level alarm aimed at guiding the child away and alerting nearby adults; when explosives that endanger life are present in the monitored space, the alarm signal is a high-level alarm aimed at evacuating people and requesting support from the appropriate responders. In some embodiments, the alarm signal may include an alarm level and an alarm threshold; see FIG. 4, FIG. 8, and their associated descriptions for further explanation.
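The level-dependent alarm behavior described above can be sketched as follows. The severity scores, level numbers, and recipient lists are illustrative assumptions; the text only contrasts general-level and high-level alarms.

```python
# Sketch of step 340: choosing an alarm level and the preset personnel
# to notify from the determined risk information. Levels and recipient
# lists below are illustrative, not values from the patent.

RISK_SEVERITY = {
    "child alone": 1,        # general alarm: alert nearby adults
    "personal safety": 2,
    "fire": 3,
    "explosion": 3,          # high-level alarm: evacuate, call responders
}

RECIPIENTS_BY_LEVEL = {
    1: ["background manager"],
    2: ["background manager", "healthcare personnel"],
    3: ["background manager", "fire fighting personnel"],
}

def build_alarm_signal(risk_info):
    """Return (level, recipients); the level is the worst severity seen."""
    level = max((RISK_SEVERITY.get(r, 1) for r in risk_info), default=0)
    return level, RECIPIENTS_BY_LEVEL.get(level, [])

level, who = build_alarm_signal(["personal safety", "fire"])
print(level, who)  # 3 ['background manager', 'fire fighting personnel']
```

Taking the maximum severity ensures that when several risks co-occur, the alarm escalates to the most serious one rather than averaging them away.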
By recognizing the various dangerous situations that may arise in monitored spaces such as elevators and cars, and issuing different alarm signals for different recognition results, some embodiments of this specification reduce the waste of police and supervision resources to a certain extent: general situations trigger a correspondingly low-level warning, situations endangering life trigger correspondingly high-level warning measures, and judgments can be made for the various complex situations found in real scenarios.
It should be noted that the above description of the process 300 is for illustration and description only and is not intended to limit the scope of the present disclosure. Various modifications and changes to the process 300 will be apparent to those skilled in the art in light of this description; nevertheless, such modifications and changes remain within the scope of the present description. For example, the process 300 may also include other steps.
FIG. 4 is an exemplary flow chart illustrating risk identification and risk information determination according to further embodiments of the present description. As shown in fig. 4, the process 400 includes the following steps. In some embodiments, flow 400 may be performed by processing device 140.
Based on the image information, a risk image is determined by the first model, step 410. In some embodiments, this step 410 may be performed by the identification module 220.
The risk image may be a video/picture containing a risk. For example, the risk image may be an image including a fallen elderly person, a fighting scene, a dangerous vehicle, and the like. The risk image may be determined based on a first model extraction.
The first model may be a machine learning model for determining risk images. In some embodiments, the first model may be a Convolutional Neural Network (CNN), a region-based convolutional network (R-CNN), a Fast region-based convolutional network (Fast R-CNN), etc., or any combination thereof, for further explanation of the first model, see FIG. 5 and its associated description.
Based on the sound information, a risk sound is determined by the second model, step 420. In some embodiments, this step 420 may be performed by the identification module 220.
The risk sound may be audio captured when a risk event occurs. For example, the risk sound may be audio including an alarm sound, a distress sound.
The second model may be a machine learning model for determining the risk sound. In some embodiments, the second model may be a hidden Markov model (HMM), a hybrid acoustic model (GMM-HMM), a long short-term memory model (LSTM), or the like, or any combination thereof. For further explanation of the second model, see fig. 5 and its related description.
There is no precedence relationship between step 410 and step 420, and the order described herein is not meant to limit the order of the steps. In some embodiments, step 420 may be performed before step 410, or the two steps may be performed simultaneously.
And step 430, obtaining a risk identification result based on at least one of the risk image and the risk sound. In some embodiments, this step 430 may be performed by the identification module 220.
In some embodiments, the risk identification result may be derived based on content including the risk. In some embodiments, the content including the risk may be a risk image, a risk sound, or the like. For example, the risk image includes an image of a dangerous vehicle, the processing device identifies the dangerous vehicle in the risk image based on the first model, and the processing device can determine that the risk identification result is the dangerous vehicle.
In some embodiments, the risk identification result may also include a confidence level of the content corresponding to the risk, such as a confidence level of the risk image and/or the risk sound.
Based on the confidence level of the risk image and/or the confidence level of the risk sound, risk information is determined, step 440. In some embodiments, this step 440 may be performed by the determination module 230.
The confidence may be the degree of certainty of the risk judgment for the content including the risk, and may be expressed as a percentage or as a rating. For example, the confidence that a fighting image is present in the risk image may be 95%, or "the likelihood is high", or the like.
In some embodiments, the confidence level may be a real value between 0 and 1. The calculation of the confidence is related to the estimated probability that a certain risk exists in the content: for example, if the probability that a certain risk exists is p, the confidence of the risk judgment is K,
[formula image not reproduced: K expressed as a function of p]
After the value of the confidence is obtained, whether the risk exists can be determined based on set confidence thresholds. For example, two confidence thresholds may be set, a first threshold of 0.95 and a second threshold of 0.05, with a corresponding first confidence interval of (0.95, 1) and a second confidence interval of (0, 0.05). That is, when the confidence is greater than 0.95 or less than 0.05, it is considered that whether the risk exists can be determined from the currently acquired data: a confidence greater than 0.95 indicates that the risk currently exists, and a confidence less than 0.05 indicates that the risk does not currently exist. When the confidence takes any value in (0.05, 0.95), it is considered that whether the risk exists cannot be judged from the currently acquired data.
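The two-threshold decision rule described above can be sketched as follows (a minimal illustration; the function name and threshold defaults are chosen here for clarity and are not part of the specification):

```python
def classify_confidence(confidence, first_threshold=0.95, second_threshold=0.05):
    """Map a confidence value in [0, 1] to a risk decision.

    Returns "risk" above the first threshold, "no_risk" below the second
    threshold, and "undetermined" in between, meaning the currently
    acquired data is insufficient to decide.
    """
    if confidence > first_threshold:
        return "risk"
    if confidence < second_threshold:
        return "no_risk"
    return "undetermined"
```

A confidence of 0.98 would thus be judged as an existing risk, while 0.5 would require further data or manual judgment.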
In some embodiments, when the confidence level exceeds a first threshold, such as above 95% or the likelihood is high, the content including the risk is determined to be trustworthy, i.e., the corresponding risk event in the monitored space is considered to occur. When the confidence level is lower than a first threshold (e.g., 95%), the content including the risk is judged to be suspicious/untrusted, and whether the content is trusted may be determined based on a manual judgment.
In some embodiments, the content including a risk is determined to be trustworthy when at least one of the confidences associated with the content at risk exceeds a first threshold (e.g., 95%). For example, when the confidence level of the risk image of fighting is 80%, and the confidence level of the risk sound related to fighting is 98%, it can be determined that the risk information is the personal safety risk. In some embodiments, all confidence levels associated with a certain risk content may be weighted, and based on the weighted average of the confidence levels and their weights, it is determined whether a corresponding risk event may occur in the monitored space.
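The weighted-average fusion of several confidences can be sketched as below (an illustrative sketch; the rule that a single confidence above the first threshold also suffices follows the fighting example above, and all names are hypothetical):

```python
def fuse_confidences(confidences, weights, first_threshold=0.95):
    """Combine the confidences of several risk contents (e.g. a risk image
    and a risk sound) into a weighted average, and decide whether the
    corresponding risk event is considered to occur: either some single
    confidence exceeds the first threshold, or the weighted average does.
    """
    weighted_avg = sum(c * w for c, w in zip(confidences, weights)) / sum(weights)
    occurs = any(c > first_threshold for c in confidences) or weighted_avg > first_threshold
    return weighted_avg, occurs
```

With an image confidence of 0.80 and a sound confidence of 0.98 at equal weights, the event is accepted because the sound confidence alone exceeds the first threshold.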
In some embodiments, the risk identification result includes an identification result of whether a vehicle at risk is present.
In some embodiments, the determination module 230 may determine the risk information via a pressure sensor in response to the risk identification result including a risk vehicle. In some embodiments, the presence of a vehicle at risk is determined when the pressure information exceeds a pressure threshold. In some embodiments, when determining that a risky vehicle is present, the determination module 230 may also determine the type of risky vehicle based on the different risky vehicles corresponding to different pressure ranges, and determine the risk information based on the type of risky vehicle. For example, when the pressure information is within the pressure range of the battery car, the risk vehicle at the moment is judged to be the battery car, and the obtained risk information is the explosion risk.
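The mapping from pressure information to a risk-vehicle type and then to risk information can be sketched as follows (the pressure ranges, threshold, and vehicle names are hypothetical placeholders, not values from the specification):

```python
# Hypothetical pressure ranges (e.g. in kilograms) per risk-vehicle type.
PRESSURE_RANGES = {
    "battery car": (40.0, 90.0),
    "motorcycle": (90.0, 200.0),
}

# Risk information associated with each risk-vehicle type.
RISK_BY_VEHICLE = {
    "battery car": "explosion risk",
    "motorcycle": "explosion risk",
}

def vehicle_risk_from_pressure(pressure, pressure_threshold=40.0):
    """Return (vehicle_type, risk_information) inferred from pressure data,
    or (None, None) when the pressure does not indicate a risk vehicle."""
    if pressure < pressure_threshold:
        return None, None
    for vehicle, (low, high) in PRESSURE_RANGES.items():
        if low <= pressure < high:
            return vehicle, RISK_BY_VEHICLE[vehicle]
    return "unknown vehicle", "unclassified risk"
```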
In some embodiments, the alarm module 240 may determine an alarm threshold or risk prediction value based on the confidence of the risk image and the risk sound.
The risk prediction value may be used to indicate the degree of risk to property safety, personal safety, privacy safety, etc. In some embodiments, the risk prediction value may be expressed as a grade or as a numerical value, for example, a value within 100, where a higher value indicates a greater degree of risk. For example, when a child is riding alone, the risk prediction value is 10; when a prohibited article is carried, the risk prediction value is 50; when fighting occurs, the risk prediction value is 80. In some embodiments, the risk prediction value may be determined based on the risk identification result and its confidence, e.g., the higher the confidence, the greater the risk prediction value. The risk prediction value may also be determined in other ways, such as by manual identification.
The alarm threshold is a risk prediction value threshold that indicates that an alarm may be triggered. In some embodiments, the alarms may be classified into different levels according to different urgency levels, such as a primary alarm, a secondary alarm, etc., for example, a higher alarm level may be set to indicate a higher risk, and the higher the alarm level is, the more urgent the alarm level is. Accordingly, each level corresponds to a different alarm threshold. For example, the alarm threshold corresponding to the primary alarm is a primary risk level threshold, which may range from a risk prediction value of 10 to 50, that is, the primary alarm is triggered when the risk prediction value falls within the range of 10 to 50. For further explanation of the risk level threshold, see fig. 8 and its associated description. In some embodiments, the alarm threshold may be based on manual settings.
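The mapping from a risk prediction value to an alarm level can be sketched as follows (the level names follow the description; the exact ranges are illustrative, using the primary range of 10 to 50 given above):

```python
# Illustrative alarm levels: (name, lower bound, upper bound).
ALARM_LEVELS = [
    ("primary", 10, 50),     # persuasion and gentle reminders
    ("secondary", 50, 101),  # commands, warnings, stronger measures
]

def alarm_level(risk_value):
    """Map a risk prediction value (within 100) to an alarm level name,
    or None when the value falls below every alarm threshold."""
    for name, low, high in ALARM_LEVELS:
        if low <= risk_value < high:
            return name
    return None
```

A risk prediction value of 10 (a child riding alone) would trigger a primary alarm, while 80 (fighting) would trigger a secondary alarm.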
In some embodiments, the above-described process 400 may also include a step 450 for manually determining alarm parameters. In some embodiments, this step may be implemented by a processing device.
And step 450, sending the risk information to a manager and acquiring alarm parameters fed back by the manager. In some embodiments, this step 450 may be performed by the alarm module 240.
The alarm parameter may be information related to an alarm procedure. For example, the alarm parameters may include information on whether to approve an alarm, an alarm level, an alarm mode, and the like.
The manager may be the actual manager of the intelligent alarm system. For example, the manager may be a manager of an alarm center, a security room, a train console, and the like.
In some embodiments, the alarm parameters may be determined based on administrator feedback. The feedback may be input feedback of characters, voice, keys, and the like, performed by the administrator on the terminal device 130.
Step 460, in response to the alarm parameters including the parameter agreeing to alarm, an alarm signal is sent out. In some embodiments, this step 460 may be performed by alarm module 240.
In some embodiments, the processing device responds to the administrator agreeing to the alarm and issues a corresponding alarm signal based on the alarm level and the alarm mode included in the alarm parameters fed back by the administrator.
In some embodiments, the risk information may also be determined in conjunction with other information, such as in conjunction with person type information, and setting an alarm threshold or determining a predicted risk value based on the person type. The corresponding operation steps when the risk information is determined by combining the person type information are as follows:
the person type is determined by the third model based on at least one of the image information and the sound information. In some embodiments, this operation may be performed by the identification module 220.
The type of the person can be information of the identity and physiological characteristics of the person. For example, the character type may include identity information such as staff, handicapped, etc.; the character type may also include physiological information such as the elderly, children, handicapped, and the like.
The third model may be a machine learning model for identifying the person type, and may include a Convolutional Neural Network (CNN), a region-based convolutional network (R-CNN), a Fast region-based convolutional network (Fast R-CNN), a Faster region-based convolutional network (Faster R-CNN), or the like, or any combination thereof. For a detailed description of the third model, refer to fig. 6 and its associated description.
In some embodiments, the image information and the sound information may include information capable of reflecting the type of the person, and the determination of the type of the person in the input image information and/or sound information based on the extraction and recognition of the information capable of reflecting the type of the person may be achieved by the trained machine learning model.
After the person type is determined, an alarm threshold or a risk prediction value may be determined based on the person type. In some embodiments, this step may be performed by the determination module 230.
In some embodiments, the alarm threshold may be determined or altered based on the person type. For example, a higher alarm threshold may be set for a person type having a certain resistance or tolerance to a risk, such as a worker or an adult man, and a lower alarm threshold may be set for a person type that is more likely to be attended, such as an elderly person or a child.
In some embodiments, risk prediction values may be determined or altered based on the person type. For example, for the same scene, when the character types are different, the corresponding risk prediction value can be determined according to the character types. For example, in an elevator with explosives, the determined risk prediction value may be lower, such as 30, if the fellow passenger is a young adult, and higher, such as 60, if the fellow passenger is a child.
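The person-type adjustment of a risk prediction value, as in the explosives example above, can be sketched as follows (the adjustment table is hypothetical; only the 30-versus-60 relationship for an adult versus a child mirrors the example):

```python
# Hypothetical per-person-type adjustments to a base risk prediction value;
# more vulnerable person types increase the predicted risk.
PERSON_ADJUSTMENT = {
    "worker": -10,
    "adult": 0,
    "elderly": 20,
    "child": 30,
}

def adjusted_risk_value(base_value, person_types):
    """Adjust a base risk prediction value by the most vulnerable person
    type present, clamped to the 0-100 scale."""
    delta = max(PERSON_ADJUSTMENT.get(p, 0) for p in person_types)
    return max(0, min(100, base_value + delta))
```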
In some embodiments, the determining module 230 may determine the risk information in combination with the acquired person types of the fellow persons and the corresponding number of persons in response to the risk identification result including the risk vehicle.
The number-of-people information may refer to the total number of people in the monitored space. In some embodiments, it may include the number of people in the monitored space together with the risk vehicle. In some embodiments, it may be preset that, for certain types of risk vehicles, a specified number of workers are allowed to ride together. For example, a certain preset risk vehicle may require that every fellow passenger be a worker and that no more than two people ride together; if the number-of-people information obtained at that moment exceeds 2, the risk information is determined to be risk-vehicle over-occupancy.
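The preset worker-only occupancy rule in the example above can be sketched as follows (function and label names are illustrative):

```python
def check_vehicle_occupancy(person_types, max_workers=2):
    """Check the preset rule for a risk vehicle that every fellow passenger
    must be a worker and at most `max_workers` may ride together; return
    the resulting risk information, or None when the rule is satisfied."""
    if any(p != "worker" for p in person_types):
        return "non-worker accompanying risk vehicle"
    if len(person_types) > max_workers:
        return "risk vehicle over-occupancy"
    return None
```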
Through the methods described in some embodiments of the process 400, the intelligent alarm system can determine whether to alarm, and at what level, based on various information such as the person type and the presence of a risk vehicle. For the complex actual conditions of a compartment space, it provides a variety of adaptive alarm countermeasures, enabling judgments grounded in the complex actual situation.
It should be noted that the above description of the process 400 is for illustration and description only and does not limit the applicable scope of the present specification. Various modifications and changes to the process 400 will be apparent to those skilled in the art in light of this description; nevertheless, such modifications and changes remain within the scope of the present description. For example, the process 400 may also include other steps.
FIG. 5 is a block diagram of an exemplary model structure for risk identification determination according to further embodiments of the present disclosure. In some embodiments, the risk identification results may be obtained based on the model structure 500 shown in FIG. 5.
The first model 503 may be a machine learning model for determining risk images.
The input to the first model 503 may be image information 501 and the output may be a risk image 505.
In some embodiments, the first model may be trained from historical image data with risk labels, such as image data captured by a camera of the elevator over a period of time, or images of the elevator at risk downloaded via the internet. The parameters of the first model are iteratively updated based on the loss function by inputting the historical image data with the risk label into the model, constructing the loss function based on the risk label and the results of the first model. And finishing model training when the loss function of the initial first model meets the preset condition to obtain the trained first model. The preset condition may be that the loss function converges, the number of iterations reaches a threshold, and the like. The label can be various risks such as existence of a battery car, falling of old people and the like. In some embodiments, the tags may be retrieved by manual tagging.
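The iterative training procedure described above (input labelled data, compute a loss, update parameters, stop on a preset condition) can be illustrated with a deliberately simplified stand-in: a one-feature logistic model trained by gradient descent on toy 0/1 risk labels. The data and hyperparameters are invented for illustration; the actual first model is a convolutional network.

```python
import math

def train(samples, labels, lr=0.5, max_iters=1000, tol=1e-4):
    """Minimal gradient-descent loop: compute a cross-entropy loss from the
    risk labels and the model output, update the parameters, and stop when
    the loss change falls below tol (loss convergence) or the iteration
    count reaches max_iters -- the two preset conditions mentioned above."""
    w, b = 0.0, 0.0
    prev_loss = float("inf")
    eps = 1e-9  # guard against log(0)
    for _ in range(max_iters):
        # forward pass: predicted probability of the risk label
        preds = [1.0 / (1.0 + math.exp(-(w * x + b))) for x in samples]
        loss = -sum(y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
                    for p, y in zip(preds, labels)) / len(labels)
        if abs(prev_loss - loss) < tol:  # preset condition: convergence
            break
        prev_loss = loss
        # gradient step on the logistic-regression parameters
        gw = sum((p - y) * x for p, y, x in zip(preds, labels, samples)) / len(samples)
        gb = sum(p - y for p, y in zip(preds, labels)) / len(samples)
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy data: a positive feature value stands for "risk present" (label 1).
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train(xs, ys)
```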
In some embodiments, the first model may also output other information related to the risk image, for example, any real value between 0 and 1 representing the probability p of occurrence of the risk image. In some embodiments, the value of [formula image not reproduced: the confidence expressed as a function of p] may be used as the confidence for determining the occurrence of the risk image. When the confidence is close to 1 (e.g., greater than the first threshold of 0.95), the risk image may be regarded as present; when the confidence is close to 0 (e.g., less than the second threshold of 0.05), the risk image may be regarded as absent. If the confidence takes any value between the second threshold and the first threshold (e.g., 0.5), it is determined that whether the risk image occurs cannot currently be judged.
Correspondingly, the label data used in training the first model also takes two real values, 0 and 1: a label value of 0 represents that the risk image indicated by the label does not occur, and a label value of 1 represents that the risk image indicated by the label occurs.
The second model 504 may be a machine learning model for determining risk sounds.
The input to the second model 504 may be the sound information 502 and the output may be the risk sound 506. In some embodiments, the second model 504 may include a semantic recognition structure. The semantic recognition structure is used for identifying the semantic content in an audio file. The input of the semantic recognition structure may be the sound information 502, and the output may be semantic content, such as a cry of "help" and the like.
In some embodiments, the second model may be obtained by training on historical sound data with risk labels, for example, audio data such as explosion sounds, shouting, and distress calls captured in the elevator by a microphone over a period of time, or audio of elevators in danger downloaded via the internet. The historical sound data with risk labels is input into the model, a loss function is constructed based on the risk labels and the results of the second model, and the parameters of the second model are iteratively updated based on the loss function. Model training is finished when the loss function of the initial second model meets a preset condition, yielding the trained second model. The preset condition may be that the loss function converges, that the number of iterations reaches a threshold, and the like. The labels may be calling for help, alarming, shouting, etc. In some embodiments, the labels may be obtained by manual tagging.
In some embodiments, the second model may also output other information related to the risk sound, for example, any real value between 0 and 1 representing the probability q of occurrence of the risk sound. In some embodiments, the value of [formula image not reproduced: the confidence expressed as a function of q] may be used as the confidence for determining the occurrence of the risk sound. When the confidence is close to 1 (e.g., greater than the first threshold of 0.95), the probability of the risk sound occurring is high and the risk sound may be regarded as present; when the confidence is close to 0 (e.g., less than the second threshold of 0.05), the probability is low and the risk sound may be regarded as absent. If the confidence takes any value between the second threshold and the first threshold (e.g., 0.5), it is determined that whether the risk sound occurs cannot currently be judged.
Correspondingly, the label data used in training the second model also takes two real values, 0 and 1: a label value of 0 represents that the risk sound indicated by the label does not occur, and a label value of 1 represents that the risk sound indicated by the label occurs.
In some embodiments, the semantic recognition structure may be obtained by training on audio with known semantic content, such as speech captured by a microphone in an elevator, audio with associated semantic content downloaded via the internet, and so on. For the specific training process of the semantic recognition structure, refer to the training process of the second model, which is not repeated here.
Based on the risk image output by the first model and the risk sound output by the second model, the processing device takes the content of the risk included in the risk image and the risk sound as a risk identification result, and reference may be made to step 430 for the description of this step.
In some embodiments, the risk identification result may be obtained by integrating the risk image and the risk sound. For example, when the risk image includes an image of a child raising an alarm and the risk sound includes the sound of a child raising an alarm, the risk identification result can be comprehensively judged as a child raising an alarm. When the risk image is judged to include an image of a child raising an alarm but the risk sound does not include the corresponding sound, or includes other content, the risk identification result is judged comprehensively by acquiring and comparing the confidences of the risk image and the risk sound; for example, the judgment result with the higher confidence may be taken as the final judgment result.
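The comprehensive judgment described above can be sketched as follows (a minimal illustration with hypothetical names):

```python
def fuse_results(image_result, image_conf, sound_result, sound_conf):
    """Integrate the first model's risk-image result and the second model's
    risk-sound result: when both agree, return the shared result; otherwise
    take the result with the higher confidence as the final judgment."""
    if image_result == sound_result:
        return image_result
    return image_result if image_conf >= sound_conf else sound_result
```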
FIG. 6 is an exemplary model structure diagram for character type determination as shown in some embodiments herein. In some embodiments, character types 604 may be obtained through the model structure 600 shown in FIG. 6.
The third model 603 may be a machine learning model for identifying a type of character.
The input of the third model 603 may be image information 601, sound information 602, or both, and the output may be character type 604.
In some embodiments, the third model may be trained from historical image data, historical voice data, etc. containing character features, such as images of a character, spoken voice, etc. taken by the elevator over a period of time, or from the above data downloaded via the internet. And iteratively updating parameters of the third model based on the loss function by inputting at least one of historical image data and historical sound data containing character features into the model, constructing the loss function based on the character feature labels and the result of the third model. And finishing model training when the loss function of the initial third model meets the preset condition to obtain a trained third model. The preset condition may be that the loss function converges, the number of iterations reaches a threshold, and the like. The tags may be character features such as elderly, children, staff, etc. In some embodiments, the tags may be retrieved by manual tagging.
FIG. 7 is an exemplary model architecture diagram for at risk vehicle determination, according to some embodiments described herein. In some embodiments, the risky vehicle 703 may be determined by the model structure 700 shown in fig. 7.
The fourth model 702 may be a model for obtaining a risk vehicle 703.
The input to the fourth model 702 may be pressure information 701 and the output may be an at risk vehicle 703.
In some embodiments, the fourth model may be trained from historical pressure data with at-risk vehicle class labels by inputting a plurality of pressure data with at-risk vehicle class labels into the model, constructing a loss function based on the at-risk vehicle class labels and the results of the fourth model, and iteratively updating parameters of the fourth model based on the loss function. And finishing the model training when the loss function of the initial fourth model meets the preset condition to obtain the trained fourth model. The preset condition may be that the loss function converges, the number of iterations reaches a threshold, and the like. The tags may be of various types of risky vehicles, etc. In some embodiments, the tags may be retrieved by manual tagging.
In some embodiments, the first model, the second model, the third model, and the fourth model may be obtained by joint training, or may be implemented as one jointly trained model. For example, models realizing the functions of the first, second, third, and fourth models may be arranged as layers in a joint model, such as a risk image recognition layer, a risk sound recognition layer, a character feature recognition layer, and a risk vehicle recognition layer. The input of the joint model is image information, sound information, and pressure information, and the output is a risk identification result. The labels may be preset risk-containing content, obtained from historical image, sound, and pressure information collected over a period of time in monitored spaces such as elevators, or obtained through manual labeling and internet searches. In some embodiments, the joint model may also output a confidence of the risk identification result. For an explanation of the confidence, see the related description in fig. 5 of the confidences output by the first model and the second model.
In some embodiments, the functions of the above models can be implemented by other numbers and kinds of models, and the description in this specification is intended to introduce the functions of the models and not to limit the numbers and kinds of the models.
In some embodiments, the training label of the fourth model may be determined based on the recognition result of the first model, for example, when the recognition result output by the first model is a specific type of a risk vehicle, the pressure information acquired corresponding to the acquired risk image may be used as the training sample of the fourth model, and the recognition result output by the first model may be used as the label corresponding to the training sample.
FIG. 8 is an exemplary diagram illustrating alarm levels according to some embodiments of the present description. In some embodiments, as shown in the alarm system 800 in fig. 8, based on the risk images, risk sounds, person types, and risk vehicles (or their confidence levels) output by the four models, corresponding risk prediction values may be determined. The risk prediction value may be represented by a value within 100, with higher values indicating greater risk hazard.
The primary risk level threshold may be a range of risk predictors corresponding to risks of minor harm. For example, the primary risk level threshold may be in the range of 10 to 20 risk prediction values. In some embodiments, when an alarm is triggered, if the risk prediction value is within the primary risk level threshold, the alarm content and the alarm mode aim to persuade and suggest, and remind by means with small influence. For example, when a child rides alone, the risk prediction value is 10 and is within the first-level risk level threshold range, the alarm mode is broadcast reminding, and the child is persuaded to accompany the adult to ride. In some embodiments, the primary risk level threshold may also correspond to a lower broadcast alert volume and a relaxed alert mood.
The secondary risk level threshold may be a range of risk prediction values corresponding to risks of greater harm. For example, the secondary risk level threshold may be the range of risk prediction values from 20 to 100. In some embodiments, when an alarm is triggered and the risk prediction value falls within the secondary risk level threshold, the alarm content and alarm mode are intended to command and warn, using means with greater impact. For example, when a battery car is present in the elevator, the risk prediction value is 80, within the secondary risk level threshold range, and the alarm modes include calling security, broadcasting a warning, forcibly stopping the elevator, calling the alarm receiving center, and the like. In some embodiments, the secondary risk level threshold may also correspond to a higher broadcast alarm volume and an urgent alarm tone. In some embodiments, the alarm receiving center may make a manual determination in response to receiving an alarm, such as interacting with the monitored space by video or voice, or handling the alarm accordingly when it is judged to be a false alarm.
The above description of the primary risk level threshold and the secondary risk level threshold is for clarity of illustration only and is not intended to limit the content of the specification. In some embodiments, a tertiary risk level threshold, a quaternary risk level threshold, etc. may also be included. In some embodiments, the primary risk level threshold, the secondary risk level threshold may represent a range of risk predictors as opposed to those described above, or other ranges of risk predictors.
Some embodiments of the present specification also disclose an intelligent warning apparatus, comprising a processor and a memory; the memory is used for storing instructions which when executed by the processor cause the intelligent alarming device to realize the intelligent alarming method.
Some embodiments of the present specification further disclose a computer-readable storage medium storing computer instructions, wherein when the computer reads the computer instructions in the storage medium, the computer executes the intelligent alarm method.
Through the intelligent alarm method introduced in some embodiments of the present specification, targeted hierarchical alarms based on different risk scenes can be realized: under ordinary conditions a simple risk reminder is given, saving police and management resources, while in a crisis the alarm is raised in time, improving alarm efficiency. In addition, the intelligent alarm method identifies risks of different degrees based on machine learning models, which can reduce false alarms, and alarms differently according to different risks, so that managers/rescuers obtain the risk information at the first moment.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means a feature, structure, or characteristic described in connection with at least one embodiment of the specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, inventive aspects may lie in less than all features of a single embodiment disclosed above.
Some embodiments use numerals to describe quantities of components and attributes; it should be understood that such numerals are, in some instances, qualified by the modifier "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiment. In some embodiments, numerical parameters should take into account the specified significant digits and employ a general rounding approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments are approximations, the numerical values reported in the specific examples are set forth as precisely as practicable.
For each patent, patent application publication, and other material cited in this specification, such as articles, books, specifications, publications, and documents, the entire contents are hereby incorporated by reference into this specification. Excluded are application history documents that are inconsistent with or conflict with the contents of this specification, as well as documents (currently or later appended to this specification) that limit the broadest scope of the claims of this specification. If the descriptions, definitions, and/or use of terms in the materials accompanying this specification are inconsistent with or contrary to those stated in this specification, the descriptions, definitions, and/or use of terms in this specification shall prevail.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of this specification. Other variations are also possible within the scope of the present specification. Thus, by way of example and not limitation, alternative configurations of the embodiments of this specification may be regarded as consistent with its teachings. Accordingly, the embodiments of the present specification are not limited to those explicitly described and depicted herein.

Claims (10)

1. An intelligent alarm method, characterized in that the method comprises:
acquiring acquisition information of a monitored space, wherein the monitored space comprises at least one of the following: elevators, cars;
performing risk identification based on the acquisition information to obtain a risk identification result;
determining risk information based on the risk identification result;
and sending out an alarm signal based on the risk information.
2. The method of claim 1, wherein the acquisition information includes at least one of image information and sound information;
the performing risk identification based on the acquisition information to obtain a risk identification result comprises:
determining a risk image through a first model based on the image information;
determining a risk sound through a second model based on the sound information;
obtaining a risk identification result based on at least one of the risk image and the risk sound, wherein the risk identification result comprises a confidence level;
the determining risk information based on the risk identification result comprises:
determining the risk information based on a confidence of the risk image and/or a confidence of the risk sound.
3. The method of claim 2, wherein the performing risk identification based on the acquisition information further comprises:
determining a person type through a third model based on at least one of the image information and the sound information;
the determining risk information based on the risk identification result further comprises:
determining an alarm threshold or a risk prediction value based on the person type.
4. The method of claim 1, wherein the acquisition information further includes pressure information, and the performing risk identification based on the acquisition information further comprises:
determining a risk vehicle based on the pressure information.
5. An intelligent alarm system, the system comprising:
the acquisition module is used for acquiring the acquisition information of the monitored space, and the monitored space comprises at least one of the following: elevators, cars;
the identification module is used for performing risk identification based on the acquisition information to obtain a risk identification result;
a determining module for determining risk information based on the risk identification result;
and the alarm module is used for sending out an alarm signal based on the risk information.
6. The system of claim 5, wherein the acquisition information includes at least one of image information and sound information, and the identification module is further configured to:
determine a risk image through a first model based on the image information;
determine a risk sound through a second model based on the sound information; and
obtain a risk identification result based on at least one of the risk image and the risk sound;
the determining module is further configured to:
determine the risk information based on a confidence of the risk image and/or a confidence of the risk sound.
7. The system of claim 6, wherein the identification module is further configured to:
determine a person type through a third model based on at least one of the image information and the sound information;
the determining module is further configured to:
determine an alarm threshold or a risk prediction value based on the person type.
8. The system of claim 5, wherein the acquisition information further includes pressure information, and the identification module is further configured to:
determine a risk vehicle based on the pressure information.
9. An intelligent alarm device, the device comprising a processor and a memory, the memory being used for storing instructions, characterized in that the instructions, when executed by the processor, cause the device to implement the intelligent alarm method according to any one of claims 1-4.
10. A computer-readable storage medium, wherein the storage medium stores computer instructions, and when the computer instructions in the storage medium are read by a computer, the computer executes the intelligent alarm method according to any one of claims 1 to 4.
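As a rough illustration of the pipeline recited in claims 1 and 2, the sketch below assumes the first and second models are callables returning a (label, confidence) pair and fuses the two confidences by taking their maximum. The claims only require a result based on the risk image and/or the risk sound, so the fusion rule, names, and signatures here are assumptions for illustration, not the claimed implementation.

```python
from typing import Callable, Optional, Tuple

# Hypothetical model interface: raw bytes in, (risk label, confidence) out.
Model = Callable[[bytes], Tuple[str, float]]

def identify_risk(image: Optional[bytes], sound: Optional[bytes],
                  first_model: Model, second_model: Model) -> dict:
    """Run the image/sound models and combine their confidences.

    Max-fusion is one possible choice; the claims leave the combination
    of the risk-image and risk-sound confidences open.
    """
    result = {}
    if image is not None:
        label, conf = first_model(image)    # claim 2: determine a risk image
        result["image"] = (label, conf)
    if sound is not None:
        label, conf = second_model(sound)   # claim 2: determine a risk sound
        result["sound"] = (label, conf)
    confidences = [c for (_, c) in result.values()]
    result["confidence"] = max(confidences) if confidences else 0.0
    return result
```

The returned confidence would then drive the risk-information and alarm-signal steps of claim 1 (for example, compared against the person-type-dependent alarm threshold of claim 3).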
CN202111638363.4A 2021-12-29 2021-12-29 Intelligent alarm method and system Active CN114463928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111638363.4A CN114463928B (en) 2021-12-29 2021-12-29 Intelligent alarm method and system

Publications (2)

Publication Number Publication Date
CN114463928A true CN114463928A (en) 2022-05-10
CN114463928B CN114463928B (en) 2022-11-25

Family

ID=81408511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111638363.4A Active CN114463928B (en) 2021-12-29 2021-12-29 Intelligent alarm method and system

Country Status (1)

Country Link
CN (1) CN114463928B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101670973A (en) * 2008-09-09 2010-03-17 东芝电梯株式会社 Escalator monitoring system
JP2010128920A (en) * 2008-11-28 2010-06-10 Toyota Motor Corp Safety device for vehicle
CN104751663A (en) * 2015-02-28 2015-07-01 北京壹卡行科技有限公司 Safe driving auxiliary system and safe driving auxiliary method for driver
KR101655397B1 (en) * 2015-04-30 2016-09-07 주식회사 리트빅 Terminal apparatus and system for reporting circumstances
CN110008804A (en) * 2018-12-12 2019-07-12 浙江新再灵科技股份有限公司 Elevator monitoring key frame based on deep learning obtains and detection method
CN110712591A (en) * 2019-09-25 2020-01-21 江西沃可视发展有限公司 Back-up safety system based on analysis of back-pull camera image ADAS
CN110782111A (en) * 2019-02-21 2020-02-11 北京嘀嘀无限科技发展有限公司 Risk assessment method and system
CN111063162A (en) * 2019-12-05 2020-04-24 恒大新能源汽车科技(广东)有限公司 Silent alarm method and device, computer equipment and storage medium
CN212966489U (en) * 2020-10-19 2021-04-13 郑州大学 Elevator early warning system on electric motor car based on intelligent analysis technique
CN113830092A (en) * 2021-01-25 2021-12-24 西安睿博智能股份有限公司 Driving safety management method and device and computer readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315592A (en) * 2023-11-27 2023-12-29 四川省医学科学院·四川省人民医院 Identification early warning system based on robot end real-time monitoring camera shooting
CN117315592B (en) * 2023-11-27 2024-01-30 四川省医学科学院·四川省人民医院 Identification early warning system based on robot end real-time monitoring camera shooting

Also Published As

Publication number Publication date
CN114463928B (en) 2022-11-25

Similar Documents

Publication Publication Date Title
Biswal et al. IoT‐based smart alert system for drowsy driver detection
US20210076010A1 (en) System and method for gate monitoring during departure or arrival of an autonomous vehicle
Uma et al. Accident prevention and safety assistance using IOT and machine learning
JP5560397B2 (en) Autonomous crime prevention alert system and autonomous crime prevention alert method
US9412142B2 (en) Intelligent observation and identification database system
CN111063162A (en) Silent alarm method and device, computer equipment and storage medium
CN105556581A (en) Responding to in-vehicle environmental conditions
CN109548408B (en) Autonomous vehicle providing safety zone to person in distress
CN116457851B (en) System and method for real estate monitoring
EP4141813A1 (en) Detection and mitigation of inappropriate behaviors of autonomous vehicle passengers
CN114463928B (en) Intelligent alarm method and system
CN116308960B (en) Intelligent park property prevention and control management system based on data analysis and implementation method thereof
CN112277954A (en) Method and device for comprehensively monitoring vehicle safety and personnel safety driving
US20210271217A1 (en) Using Real Time Data For Facilities Control Systems
KR20160028542A (en) an emergency management and crime prevention system for cars and the method thereof
CN112829705A (en) Vehicle control management method based on characteristics of left-over personnel in vehicle
KR102556447B1 (en) A situation judgment system using pattern analysis
KR101437406B1 (en) an emergency management and crime prevention system for cars and the method thereof
US20230153424A1 (en) Systems and methods for an automous security system
KR102648004B1 (en) Apparatus and Method for Detecting Violence, Smart Violence Monitoring System having the same
CN115171335A (en) Image and voice fused indoor safety protection method and device for elderly people living alone
RU2721178C1 (en) Intelligent automatic intruders detection system
CN209199284U (en) A kind of Household security system
KR102665312B1 (en) Tunnel control enclosure and enclosure control method for safe evacuation
KR102643541B1 (en) System for detecting and preventing drowsy driving using ai technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant