CN115223331A - Fall alarm method, device and equipment

Info

Publication number
CN115223331A
Authority
CN
China
Prior art keywords
panoramic
frame image
event frame
falling
alarm
Prior art date
Legal status
Pending
Application number
CN202210557378.6A
Other languages
Chinese (zh)
Inventor
陈家安
邱翌
王东明
Current Assignee
Ningbo Lian Science And Technology Co ltd
Original Assignee
Ningbo Lian Science And Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Ningbo Lian Science And Technology Co ltd filed Critical Ningbo Lian Science And Technology Co ltd
Priority to CN202210557378.6A
Publication of CN115223331A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02: Alarms for ensuring the safety of persons
    • G08B21/04: Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438: Sensor means for detecting
    • G08B21/0476: Cameras to detect unsafe condition, e.g. video cameras
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using neural networks
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/44: Event detection
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition

Abstract

The invention discloses a fall alarm method, device and equipment, relating to the technical fields of computer vision and voice interaction. The method comprises the following steps: acquiring a real-time event stream collected by a dynamic vision sensor under a panoramic camera; accumulating every E events in the real-time event stream into a panoramic event frame image; storing the panoramic event frame image into a time window queue; performing action recognition on the time window queue through a trained action recognition neural network; if an accidental fall is recognized, capturing the sound source direction identified by the microphone array and judging whether it matches the fall position determined in the panoramic event frame image; if they match, sending an inquiry voice to a loudspeaker and receiving the voice information collected by the microphone array; and sending alarm information according to the voice information.

Description

Fall alarm method, device and equipment
Technical Field
The application relates to the technical fields of computer vision and voice interaction, and in particular to a fall alarm method, device and equipment.
Background
More than 100 million elderly people in China live alone or in empty-nest households. When an elderly person living alone falls and loses consciousness or mobility, the window for rescue is easily missed; detecting an accidental fall in time and linking the detection to an alarm system allows the injured person to be rescued promptly.
Some solutions sense falls through sensors worn on the body, which requires the elderly to wear various sensing devices and hinders their movement. Others install infrared devices covering the activity area of a room and combine them with an on-bed sensor to track activity after the person gets out of bed; such methods require complex hardware.
With the development of computer vision, a camera installed in the room combined with a vision-based fall detection algorithm can identify accidental falls accurately and conveniently.
In implementing the invention, the inventors found at least the following problems in the prior art:
A single traditional camera cannot capture the complete indoor scene: blind spots remain, so several cameras must be used together. When an accidental fall occurs, color images are recorded and transmitted, which is unfavorable for privacy protection. In addition, under insufficient illumination at night the image quality of a traditional camera is poor, which hampers the fall detection algorithm.
Disclosure of Invention
The embodiments of the application aim to provide a fall alarm method, device and equipment, so as to solve the technical problems in the related art of complex equipment arrangement, blind spots in the picture, poor adaptability to illumination, and recorded images that are unfavorable for privacy protection.
According to a first aspect of embodiments of the present application, there is provided a fall alarm method, including:
acquiring a real-time event stream collected by a dynamic vision sensor under a panoramic camera;
accumulating every E events in the real-time event stream into a panoramic event frame image;
storing the panoramic event frame image into a time window queue;
performing action recognition on the time window queue through a trained action recognition neural network;
if an accidental fall is recognized, capturing the sound source direction identified by the microphone array, and judging whether the sound source direction matches the fall position determined in the panoramic event frame image;
if they match, sending an inquiry voice to a loudspeaker, and receiving the voice information collected by the microphone array;
and sending alarm information according to the voice information.
Optionally, the method for constructing the trained motion recognition neural network includes:
acquiring a normal-action panoramic event stream, an accidental-fall panoramic event stream and a get-up-after-fall panoramic event stream, each collected by the dynamic vision sensor under the panoramic camera;
accumulating every E events in each of the three types of panoramic event streams into panoramic event frame images to generate a video data set, wherein the video data set comprises a normal-action panoramic event frame video, an accidental-fall panoramic event frame video and a get-up-after-fall panoramic event frame video;
and training a motion recognition neural network by using the video data set to obtain the trained motion recognition neural network.
Optionally, the method further includes:
if no accidental fall is recognized, continuing to acquire the next panoramic event frame image;
and performing action recognition on the next panoramic event frame image through the trained action recognition neural network.
Optionally, the method further includes:
if the directions do not match, continuing to acquire the next panoramic event frame image;
and performing action recognition on the next panoramic event frame image through the trained action recognition neural network.
Optionally, sending alarm information according to the voice information, including:
if the voice information indicates that an alarm is needed, sending alarm information to a preset emergency contact client;
if the voice information indicates that no alarm is needed, recognizing through the trained action recognition neural network whether the person gets up within a preset first time after the fall; if a get-up action is recognized, continuing to acquire and recognize the next panoramic event frame image; if not, sending the inquiry voice to the loudspeaker again; if the reply again indicates that no alarm is needed, continuing to acquire and recognize the next panoramic event frame image, otherwise sending alarm information; and if no reply is received within a preset second time, sending alarm information.
Optionally, the method further includes:
and sending the panoramic event frame image at the moment of the accidental fall, together with the alarm information, to a preset emergency contact.
According to a second aspect of embodiments of the present application, there is provided a fall alarm device, comprising:
the acquisition module is used for acquiring a real-time event stream collected by a dynamic vision sensor under the panoramic camera;
the accumulation module is used for accumulating every E events in the real-time event stream into a panoramic event frame image;
the storage module is used for storing the panoramic event frame image into a time window queue;
the recognition module is used for performing action recognition on the time window queue through the trained action recognition neural network;
the judging module is used for capturing the sound source direction identified by the microphone array if an accidental fall is recognized, and judging whether the sound source direction matches the fall position determined in the panoramic event frame image;
the sending and receiving module is used for sending an inquiry voice to a loudspeaker and receiving the voice information collected by the microphone array if the directions match;
and the alarm module is used for sending alarm information according to the voice information.
According to a third aspect of embodiments of the present application, there is provided a fall alarm device comprising:
the panoramic camera is used for sensing indoor conditions at 360 degrees;
the dynamic vision sensor is used for collecting a real-time event stream under the panoramic camera;
the microphone array is used for identifying the direction of a sound source and collecting voice information;
the loudspeaker is used for broadcasting the inquiry voice;
and the processor is used for accumulating every E events in the real-time event stream into a panoramic event frame image; storing the panoramic event frame image into a time window queue; performing action recognition on the time window queue through a trained action recognition neural network; if an accidental fall is recognized, capturing the sound source direction identified by the microphone array and judging whether it matches the fall position determined in the panoramic event frame image; if they match, sending an inquiry voice to the loudspeaker and receiving the voice information collected by the microphone array; and sending alarm information according to the voice information.
According to a fourth aspect of embodiments of the present application, there is provided an electronic apparatus, including:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in the first aspect.
According to a fifth aspect of embodiments herein, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the first aspect.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the technical scheme, the panoramic camera is adopted for capturing, so that a single panoramic camera can sense the action condition of the old people in a room at 360 degrees without dead angles, a viewing angle blind area does not exist, and hardware installation is simplified;
the dynamic vision sensor is adopted, so that the illumination adaptability is strong, the normal work can be effectively identified when the illumination is insufficient at night, the response is generated only when a moving object exists in the picture, and the power consumption is low;
the panoramic event frame image is adopted to record an indoor picture when the person falls accidentally, only the falling position with motion information has event record, and color and texture information does not exist, so that privacy protection is facilitated when alarm information is sent;
by adopting voice interactive inquiry, after the equipment detects that the old man falls down accidentally, the old man is subjected to voice inquiry and waits for a response, and the old man stands up to recognize the action, so that the alarm notification can be better completed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a flow chart illustrating a fall alarm method according to an exemplary embodiment.
Fig. 2 is a block diagram illustrating a fall alarm apparatus according to an exemplary embodiment.
Fig. 3 is a schematic structural diagram of a fall alarm device according to an exemplary embodiment.
Fig. 4 is a schematic diagram illustrating the use of a fall alarm device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings, in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
Fig. 1 is a flowchart illustrating a fall alarm method according to an exemplary embodiment. The method is applied to a processor and, as shown in fig. 1, includes:
Step S11, acquiring a real-time event stream collected by a dynamic vision sensor under the panoramic camera;
Step S12, accumulating every E events in the real-time event stream into a panoramic event frame image;
Step S13, storing the panoramic event frame image into a time window queue;
Step S14, performing action recognition on the time window queue through a trained action recognition neural network;
Step S15, if an accidental fall is recognized, capturing the sound source direction identified by the microphone array, and judging whether the sound source direction matches the fall position determined in the panoramic event frame image;
Step S16, if they match, sending an inquiry voice to a loudspeaker, and receiving the voice information collected by the microphone array;
Step S17, sending alarm information according to the voice information.
In this embodiment, the panoramic camera captures the indoor scene in 360 degrees, so the movement of indoor personnel can be sensed without dead angles. The dynamic vision sensor adapts well to illumination and therefore senses effectively under insufficient light at night. The action recognition algorithm based on panoramic dynamic vision accurately recognizes accidental falls and get-up actions after a fall; in addition, the microphone array captures the sound source position, so the detection result can be verified a second time and becomes more reliable. Interactive voice inquiry makes it easier to learn the current condition and actual needs of the person and to complete the alarm. The panoramic event frame image transmitted with the alarm carries event records only at the fall location, where there is motion information, and contains no color or texture information, which benefits privacy protection.
In the specific implementation of step S11, a real-time event stream collected by a dynamic vision sensor under the panoramic camera is acquired.
Specifically, a single miniature panoramic camera can sense the indoor state in 360 degrees; it has a simple hardware structure and a stable imaging effect, and is suitable for indoor capture without dead angles. The dynamic vision sensor is a novel optical sensor that outputs events only for the parts of the picture where motion occurs; it has low power consumption and a high dynamic range, adapts strongly to illumination, and still works normally and recognizes effectively under insufficient illumination at night. The real-time event stream contains motion information and is convenient for an action recognition algorithm to process.
In the specific implementation of step S12, every E events in the real-time event stream are accumulated into a panoramic event frame image.
Specifically, events are counted in the real-time event stream, and every E events are accumulated into one panoramic event frame image. The panoramic event frame image records the motion information at the current moment and is simple, which benefits the action recognition algorithm.
In a specific implementation of step S13, storing the panoramic event frame image in a time window queue;
specifically, a time window queue with the length of L is set, and all initial values are set to be 0; and storing the panoramic event frame image into the time window queue in real time, and obtaining the time window queue full of panoramic event frame images in real time. The time window queue is completely stored with time sequence information and panoramic spatial information, and can be effectively processed and predicted by a trained action recognition neural network, so that real-time recognition is carried out.
In the specific implementation of step S14, action recognition is performed on the time window queue through a trained action recognition neural network.
Specifically, the time window queue full of panoramic event frame images is input into the trained action recognition neural network, which outputs an action recognition result: the current action is judged to be a normal action, an accidental fall, or a get-up after a fall.
The trained action recognition neural network is constructed as follows:
acquiring a normal-action panoramic event stream, an accidental-fall panoramic event stream and a get-up-after-fall panoramic event stream, each collected by the dynamic vision sensor under the panoramic camera;
accumulating every E events in each of the three types of panoramic event streams into panoramic event frame images to generate a video data set comprising a normal-action panoramic event frame video, an accidental-fall panoramic event frame video and a get-up-after-fall panoramic event frame video;
and training the action recognition neural network with the video data set until the required accuracy is reached, thereby obtaining the trained action recognition neural network.
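A minimal training loop over such a data set might look as follows; the data loader, the label encoding (0 = normal, 1 = fall, 2 = get-up) and the hyper-parameters are assumptions rather than values given by the patent.

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 20, lr: float = 1e-3) -> nn.Module:
    """`loader` yields (clip, label) batches with clip shape (B, 1, L, H, W)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for clip, label in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(clip), label)
            loss.backward()
            optimizer.step()
    return model
```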
In the specific implementation of step S15, if an accidental fall is recognized, the sound source direction identified by the microphone array is captured, and whether the sound source direction matches the fall position determined in the panoramic event frame image is judged.
Specifically, the microphone array is an annular array that senses the sound source direction when a fall occurs. The detected sound source direction and the fall position determined in the panoramic event frame image verify each other: if they match, the detection result is considered correct; if not, the detection is considered a false alarm caused by interference. This makes the detection result more reliable.
In the specific implementation of step S16, if the directions match, an inquiry voice is sent to the speaker, and the voice information collected by the microphone array is received.
Specifically, if an accidental fall is detected, an inquiry voice is sent to the loudspeaker asking whether the elderly person wants an alarm sent to the emergency contact, and the system waits for a reply received by the microphone array. Interactive voice inquiry makes it easier to learn the current condition and actual needs of the person and to complete the alarm.
In the specific implementation of step S17, an alarm message is sent according to the voice message.
Specifically, the voice information collected by the microphone array is interpreted to decide whether an alarm is required.
This step may include the following sub-steps:
step S171, if the voice information shows that an alarm is needed, sending alarm information to a preset emergency contact client;
step S172, if the voice information shows that the alarm is not needed, identifying whether the person has a rising action within a preset first time (T minutes) through the trained action identification neural network, and if the person has the rising action, continuously acquiring a next panoramic event frame image and identifying; if not, sending the inquiry voice to the loudspeaker again, and if not, continuing to acquire and identify the next panoramic event frame image, otherwise, sending alarm information; and if the inquiry does not reply within a preset second time (t seconds), sending alarm information.
After step S15, the method further includes:
if no accidental fall is recognized, the next panoramic event frame image continues to be acquired, and action recognition is performed on it through the trained action recognition neural network. This step will not be described further.
After step S16, the method further includes:
if the directions do not match, the next panoramic event frame image continues to be acquired, and action recognition is performed on it through the trained action recognition neural network. This step will not be described further.
After step S17, the method further includes:
and the panoramic event frame image at the moment of the accidental fall is sent, together with the alarm information, to a preset emergency contact.
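One way the notification could be carried, sketched under the assumption of a JSON-over-HTTP channel; the endpoint URL and payload format are illustrative only, since the patent states merely that the fall-moment frame and the alarm information go to a preset emergency contact. Note that the frame carries event counts only, no color or texture.

```python
import json
import urllib.request

import numpy as np

def send_alarm(frame: np.ndarray,
               endpoint: str = "http://example.invalid/alarm") -> None:
    """POST the fall-moment event frame plus an alert to the contact's client."""
    payload = json.dumps({
        "type": "accidental_fall",
        "frame": frame.astype(np.int8).tolist(),  # motion events only: privacy-friendly
    }).encode()
    req = urllib.request.Request(endpoint, payload,
                                 {"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=10)
```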
Corresponding to the foregoing embodiment of the fall alarm method, the present application also provides an embodiment of a fall alarm device.
Fig. 2 is a block diagram illustrating a fall alarm apparatus according to an exemplary embodiment. Referring to fig. 2, the device is applied to a processor and comprises an acquisition module, an accumulation module, a storage module, a recognition module, a judging module, a sending and receiving module and an alarm module.
The acquisition module 21 is configured to acquire a real-time event stream collected by a dynamic vision sensor under the panoramic camera;
the accumulation module 22 is configured to accumulate every E events in the real-time event stream into a panoramic event frame image;
the storage module 23 is configured to store the panoramic event frame image into a time window queue;
the recognition module 24 is used for performing action recognition on the time window queue through the trained action recognition neural network;
the judging module 25 is configured to capture the sound source direction identified by the microphone array if an accidental fall is recognized, and to judge whether the sound source direction matches the fall position determined in the panoramic event frame image;
the sending and receiving module 26 is configured to send an inquiry voice to the speaker and receive the voice information collected by the microphone array if the directions match;
and the alarm module 27 is used for sending alarm information according to the voice information.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
This embodiment also provides a fall alarm device, applied to a processor, as shown in fig. 3, comprising: a panoramic camera 1, a dynamic vision sensor, a microphone array, a loudspeaker and a processor. The dynamic vision sensor and the processor may be arranged in a first housing 2, the microphone array and the loudspeaker may be arranged in a second housing 3, the panoramic camera 1 is arranged in the middle, and the second housing 3 surrounds the circumference of the panoramic camera 1.
A single panoramic camera 1 can capture the scene of the current room in 360 degrees without dead angles.
The dynamic vision sensor collects the real-time event stream under the panoramic camera. It is a novel optical sensor that outputs events only for the parts of the picture where motion occurs; it has low power consumption and a high dynamic range, adapts strongly to illumination, and still works normally and recognizes effectively under insufficient illumination at night.
The microphone array is an annular array that can locate a sound source in any direction over 360 degrees and is used to identify the sound source direction and collect voice information.
The loudspeaker is a sound playback device that plays voice at an appropriate volume according to received instructions and is used to broadcast the inquiry voice.
The processor is an embedded processor integrating modules such as network connectivity and signal processing; it can run instructions and perform computation. It accumulates every E events in the real-time event stream into a panoramic event frame image; stores the panoramic event frame image into a time window queue; performs action recognition on the time window queue through the trained action recognition neural network; if an accidental fall is recognized, captures the sound source direction identified by the microphone array and judges whether it matches the fall position determined in the panoramic event frame image; if they match, sends the inquiry voice to the loudspeaker and receives the voice information collected by the microphone array; and sends alarm information according to the voice information. A sketch of this processing loop is given below.
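Composed from the sketches earlier in this description (accumulate_frame, push_frame, recognize, directions_match, ask_and_wait, handle_fall), the processor's main loop might look as follows; mic_array.locate_source() and the fall-localization heuristic are assumptions.

```python
import numpy as np

def fall_column(window: np.ndarray) -> int:
    """Crude fall locator: the column with the most event energy in the newest frame."""
    return int(np.abs(window[-1]).sum(axis=0).argmax())

def main_loop(events, model, recognizer, mic_array, speaker, alarm,
              E: int = 5000) -> None:
    while True:
        window = push_frame(accumulate_frame(events, E))
        if recognize(model, window) != 1:              # 1 = accidental fall
            continue                                   # keep acquiring frames
        azimuth = mic_array.locate_source()            # hypothetical array driver call
        if not directions_match(azimuth, fall_column(window)):
            continue                                   # direction mismatch: treat as interference
        reply = ask_and_wait(speaker, mic_array)
        handle_fall(reply, recognizer, speaker, mic_array, alarm)
```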
In this embodiment, referring to fig. 4, two fall alarm devices are shown, installed at ceiling positions in two rooms (a bedroom and a living room). Both devices are connected to the same wireless LAN and can transmit information to a preset emergency contact client over the Internet.
The miniature panoramic camera senses the activity space in 360 degrees without dead angles, so the hardware arrangement is simple and a single camera captures the information of the whole room. The dynamic vision sensor adapts well to illumination, overcoming the poor imaging quality and detection failures of traditional cameras under insufficient light at night, so the device works almost around the clock. The microphone array locates the sound source; its result and the result of the fall detection algorithm check each other, eliminating false alarms from possible interference and making the result more stable. Interactive voice inquiry and get-up action recognition confirm the current state of the elderly person before the alarm notification is completed. When an accidental fall is confirmed, the pictures recorded and sent by the device are panoramic event frame images that carry event records only at the fall location, where there is motion information, and contain no color or texture information, avoiding the excess privacy information left in traditional color images and benefiting privacy protection.
Correspondingly, the present application further provides an electronic device, comprising: one or more processors; a memory for storing one or more programs; when executed by the one or more processors, cause the one or more processors to implement a fall alarm method as described above.
Accordingly, the present application also provides a computer readable storage medium, on which computer instructions are stored, wherein the instructions, when executed by a processor, implement the fall alarm method as described above.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A fall alarm method, comprising:
acquiring a real-time event stream collected by a dynamic vision sensor under a panoramic camera;
accumulating every E events in the real-time event stream into a panoramic event frame image;
storing the panoramic event frame image into a time window queue;
performing action recognition on the time window queue through a trained action recognition neural network;
if an accidental fall is recognized, capturing the sound source direction identified by the microphone array, and judging whether the sound source direction matches the fall position determined in the panoramic event frame image;
if they match, sending an inquiry voice to a loudspeaker, and receiving the voice information collected by the microphone array;
and sending alarm information according to the voice information.
2. A fall alarm method as claimed in claim 1, wherein the trained action recognition neural network is constructed by:
acquiring a normal-action panoramic event stream, an accidental-fall panoramic event stream and a get-up-after-fall panoramic event stream, each collected by the dynamic vision sensor under the panoramic camera;
accumulating every E events in each of the three types of panoramic event streams into panoramic event frame images to generate a video data set, wherein the video data set comprises a normal-action panoramic event frame video, an accidental-fall panoramic event frame video and a get-up-after-fall panoramic event frame video;
and training a motion recognition neural network by using the video data set to obtain the trained motion recognition neural network.
3. A fall alarm method according to claim 1, further comprising:
if no accidental fall is recognized, continuing to acquire the next panoramic event frame image;
and performing action recognition on the next panoramic event frame image through the trained action recognition neural network.
4. A fall alarm method according to claim 1, further comprising:
if the directions do not match, continuing to acquire the next panoramic event frame image;
and performing action recognition on the next panoramic event frame image through the trained action recognition neural network.
5. A fall alarm method according to claim 1, wherein sending alarm information according to the voice information comprises:
if the voice information indicates that an alarm is needed, sending alarm information to a preset emergency contact client;
if the voice information indicates that no alarm is needed, recognizing through the trained action recognition neural network whether the person gets up within a preset first time after the fall; if a get-up action is recognized, continuing to acquire and recognize the next panoramic event frame image; if not, sending the inquiry voice to the loudspeaker again; if the reply again indicates that no alarm is needed, continuing to acquire and recognize the next panoramic event frame image, otherwise sending alarm information; and if no reply is received within the preset second time, sending alarm information.
6. A fall alarm method according to claim 1, further comprising:
and sending the panoramic event frame image at the moment of the accidental fall, together with the alarm information, to a preset emergency contact.
7. A fall alarm device, comprising:
the acquisition module is used for acquiring a real-time event stream collected by a dynamic vision sensor under the panoramic camera;
the accumulation module is used for accumulating every E events in the real-time event stream into a panoramic event frame image;
the storage module is used for storing the panoramic event frame image into a time window queue;
the recognition module is used for performing action recognition on the time window queue through the trained action recognition neural network;
the judging module is used for capturing the sound source direction identified by the microphone array if an accidental fall is recognized, and judging whether the sound source direction matches the fall position determined in the panoramic event frame image;
the sending and receiving module is used for sending an inquiry voice to a loudspeaker and receiving the voice information collected by the microphone array if the directions match;
and the alarm module is used for sending alarm information according to the voice information.
8. A fall alarm device, comprising:
the panoramic camera is used for sensing indoor conditions at 360 degrees;
the dynamic vision sensor is used for collecting a real-time event stream under the panoramic camera;
the microphone array is used for identifying the direction of a sound source and collecting voice information;
a loudspeaker for broadcasting the inquiry voice;
and a processor for accumulating every E events in the real-time event stream into a panoramic event frame image; storing the panoramic event frame image into a time window queue; performing action recognition on the time window queue through a trained action recognition neural network; if an accidental fall is recognized, capturing the sound source direction identified by the microphone array and judging whether it matches the fall position determined in the panoramic event frame image; if they match, sending an inquiry voice to the loudspeaker and receiving the voice information collected by the microphone array; and sending alarm information according to the voice information.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
10. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, carry out the steps of the method according to any one of claims 1-6.
CN202210557378.6A 2022-05-20 2022-05-20 Fall alarm method, device and equipment Pending CN115223331A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210557378.6A CN115223331A (en) 2022-05-20 2022-05-20 Fall alarm method, device and equipment


Publications (1)

Publication Number Publication Date
CN115223331A (en) 2022-10-21

Family

ID=83607800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210557378.6A Pending CN115223331A (en) 2022-05-20 2022-05-20 Fall alarm method, device and equipment

Country Status (1)

Country Link
CN (1) CN115223331A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009163573A (en) * 2008-01-08 2009-07-23 Systec:Kk Detection device for fallen or lying state of person
CN108986404A (en) * 2018-07-10 2018-12-11 深圳市赛亿科技开发有限公司 Water heater and its human body tumble monitoring method, electronic equipment, storage medium
CN109147277A (en) * 2018-09-30 2019-01-04 桂林海威科技股份有限公司 A kind of old man care system and method
CN112071022A (en) * 2019-05-25 2020-12-11 昆明医科大学 Fall monitoring method based on visual sensing and voice feedback
CN113660455A (en) * 2021-07-08 2021-11-16 深圳宇晰科技有限公司 Method, system and terminal for fall detection based on DVS data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 315524 No. 289, Huisheng Road, economic development zone, Fenghua District, Ningbo City, Zhejiang Province

Applicant after: Ningbo Lian Science and Technology Co.,Ltd.

Address before: No. 289, Huisheng Road, Economic Development Zone, Fenghua District, Hangzhou, Zhejiang 315524

Applicant before: Ningbo Lian Science and Technology Co.,Ltd.