CN115601925A - Fall detection system - Google Patents


Info

Publication number
CN115601925A
Authority
CN
China
Prior art keywords
module
clustering
signal
point cloud
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211462211.8A
Other languages
Chinese (zh)
Other versions
CN115601925B (en)
Inventor
高军峰
李济涵
张冰洋
向杰
付君雅
曹书琪
黄龙
Current Assignee
South Central Minzu University
Original Assignee
South Central University for Nationalities
Priority date
Filing date
Publication date
Application filed by South Central University for Nationalities
Priority to CN202211462211.8A
Publication of CN115601925A
Application granted
Publication of CN115601925B
Status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G08B21/04 - Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0407 - Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis
    • G08B21/043 - Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis detecting an emergency event, e.g. a fall
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 - Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 - Radar or analogous systems specially adapted for specific applications
    • G01S13/886 - Radar or analogous systems specially adapted for specific applications for alarm systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/28 - Constructional details of speech recognition systems
    • G10L15/34 - Adaptation of a single recogniser for parallel processing, e.g. by use of multiple processors or cloud computing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20048 - Transform domain processing
    • G06T2207/20056 - Discrete and fast Fourier transform, [DFT, FFT]
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses a fall detection system comprising a posture radar and a cloud server. The posture radar comprises a microcontroller, a voice module, and a fall detection module. The fall detection module judges whether the monitored subject has fallen and sends a fall trigger signal to the microcontroller; the microcontroller then sends a voice drive signal to the voice module. On receiving the drive signal, the voice module outputs a spoken query and captures the ambient audio response. The microcontroller acquires the ambient audio signal, stores it in a flash memory area, and sends the stored audio to the cloud over the HTTP protocol through a WiFi module. The cloud acquires the ambient audio signal, performs speech recognition on it with a machine learning model, sends the recognition result to the monitoring platform, and drives the emergency call software. The system thereby simplifies the processing steps, saves time, and improves recognition accuracy.

Description

Fall detection system
Technical Field
The invention relates to the field of biological radars, and in particular to a fall detection system.
Background
Wearable radars need frequent charging and are inconvenient to wear; elderly people dislike wearing electronic devices, so wearable devices are gradually being abandoned. Mainstream non-wearable physiological monitoring devices fall into three categories: video cameras, infrared cameras, and biological radars. Video cameras raise privacy concerns among the elderly and their families, so the elderly are generally reluctant to install them, and household furnishings limit their monitoring range. Infrared cameras suffer heavy interference, so they are of little use when an elderly person falls in a bathroom or similar environment, and their monitoring accuracy is relatively low. Biological radar raises no privacy issues, needs no charging, intervention, or manual management, is small, and is simple to install.
Biological radar overcomes the drawbacks of current wearable and non-wearable devices and can be widely applied in the elderly-care market. By purpose, biological fall radars are divided into three types, namely posture radars, respiration/heart-rate radars, and path radars, each installed at a different position in the room. The posture radar identifies human postures in the scene, such as standing, sitting, and falling, with fall judgment being the most important. Because posture judgment is affected by many factors, voice recognition is often falsely triggered; moreover, audio is typically encoded before recognition, which makes the processing complex and the recognition accuracy low.
Disclosure of Invention
The invention aims to overcome the defects of the background art and to provide a fall detection system that simplifies the processing steps, saves time, and improves recognition accuracy.
In a first aspect, there is provided a fall detection system comprising:
a posture radar and a cloud;
the posture radar comprises a microcontroller (MCU) communicatively connected to the cloud, a voice module communicatively connected to the microcontroller, and a fall detection module communicatively connected to the microcontroller;
the fall detection module is configured to judge whether the monitored subject has fallen and to send a fall trigger signal to the microcontroller;
the microcontroller is configured to receive the fall trigger signal and send a voice drive signal to the voice module;
the voice module is configured to receive the voice drive signal, output a spoken query, and then capture the ambient audio signal;
the microcontroller is configured to acquire the ambient audio signal, store it in a flash memory area, and send the stored audio to the cloud over the HTTP protocol through a WiFi module;
and the cloud is configured to acquire the ambient audio signal, perform speech recognition on it with a machine learning model, send the recognition result to the monitoring platform, and drive the emergency call software.
In a first possible implementation of the first aspect, the cloud is configured such that:
when the recognition result is that the subject has fallen, the result is sent to the monitoring platform and the emergency call software is driven;
when the recognized ambient audio contains no voice from the subject, or the recognition result is that the subject has not fallen, the microcontroller sends a new voice drive signal to the voice module, and speech recognition is performed on the ambient audio captured and sent again by the voice module;
when the re-recognized ambient audio still contains no voice from the subject, the result is sent to the monitoring platform and the emergency call software is driven;
and when the re-recognition result is that the subject has not fallen, a false alarm is declared and the result is sent to the monitoring platform.
According to the first possible implementation of the first aspect, in a second possible implementation, the fall detection module comprises:
a sampling signal module, configured to acquire a sampled signal of the subject;
a point cloud data acquisition module, communicatively connected to the sampling signal module, configured to perform a three-dimensional Fourier transform on the sampled signal and to compute point cloud data from the transformed samples;
a filtering module, communicatively connected to the point cloud data acquisition module, configured to apply constant false alarm rate (CFAR) processing to the point cloud data and then Doppler filtering to the CFAR-processed point cloud;
a clustering module, communicatively connected to the filtering module, configured to cluster the Doppler-filtered point cloud data to obtain a final clustering result;
a tracking and matching module, communicatively connected to the clustering module, configured to perform tracking and matching on the final clustering result to obtain the posture height of the subject; and,
a judgment module, communicatively connected to the tracking and matching module, configured to judge that the subject has fallen when the posture height is detected to be below a preset height threshold.
According to the second possible implementation of the first aspect, in a third possible implementation, the sampling signal module is configured to:
transmit a frequency-modulated (chirp) signal and receive the signal reflected back from the subject;
mix the transmitted and reflected signals to obtain an intermediate-frequency (IF) signal;
and perform analog-to-digital conversion on the IF signal to obtain the sampled signal of the subject.
According to the third possible implementation of the first aspect, in a fourth possible implementation, the point cloud data acquisition module is configured as follows.
The three-dimensional Fourier transform comprises a range-dimension Fourier transform, a Doppler Fourier transform, and an angle-dimension Fourier transform.
From the range-dimension Fourier transform, the distance d between the fall detection module and the subject is obtained as: d = f·c·T / (2B).
From the Doppler Fourier transform, the velocity V of the subject is obtained as: V = λω / (4πTc).
From the angle-dimension Fourier transform, the azimuth angle θ of the subject is obtained as: θ = arcsin(λω / (2πl)).
Here f is the IF signal frequency; c is the speed of light; T is the sweep period; B is the sweep bandwidth; λ is the wavelength; ω is the phase difference between the two chirps at the same position (for the angle estimate, the phase difference between the two receive antennas); Tc is the interval between the two transmitted chirps; and l is the spacing between the two receive antennas.
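As an illustrative check of the three formulas above, the following sketch evaluates them directly. The helper names, the antenna-spacing parameter l, and all numeric values are illustrative assumptions, not taken from the patent:

```python
import math

C = 3.0e8  # speed of light (m/s)

def fmcw_range(f_if, T, B):
    """Distance from IF frequency: d = f_if * c * T / (2 * B)."""
    return f_if * C * T / (2.0 * B)

def fmcw_velocity(wavelength, omega, Tc):
    """Radial velocity from inter-chirp phase difference: V = lambda * omega / (4 * pi * Tc)."""
    return wavelength * omega / (4.0 * math.pi * Tc)

def fmcw_angle(wavelength, omega, l):
    """Azimuth from inter-antenna phase difference: theta = asin(lambda * omega / (2 * pi * l))."""
    return math.asin(wavelength * omega / (2.0 * math.pi * l))

# Example: a 1 MHz beat with a 40 us sweep over 4 GHz bandwidth -> 1.5 m
print(fmcw_range(1e6, 40e-6, 4e9))
```

These are the standard FMCW relations; with half-wavelength antenna spacing (l = λ/2) the angle formula reduces to θ = arcsin(ω/π).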
According to the fourth possible implementation of the first aspect, in a fifth possible implementation, the filtering module is configured to:
select any point cloud cell in the point cloud data as the cell under test, with the surrounding point cloud cells serving as reference cells;
assign the reference cells to a target set according to preset assignment conditions, and count the reference cells in the target set;
judge whether a signal is present in the cell under test by the ordered-statistic test X_D > α·x_(k);
when a signal is present in the cell under test, retain the selected cell, otherwise discard it;
and delete from the point cloud all points with a Doppler velocity of 0.
In the formula, α is the threshold scaling parameter; k is an integer threshold index; N is the number of reference samples in the target set; and x_(k) (k = 1, 2, …, N) is the k-th ordered sample of the leading- and trailing-edge reference sliding windows.
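The ordered-statistic test above can be sketched as follows. The function and variable names are illustrative, and the exact threshold form of the patent's image-based formula is assumed to be the standard ordered-statistic CFAR comparison:

```python
def os_cfar_detect(cut, reference, k, alpha):
    """Ordered-statistic CFAR sketch: declare a signal in the cell under
    test (CUT) when its value exceeds alpha times the k-th smallest
    reference sample from the leading/trailing sliding windows."""
    x = sorted(reference)         # order the reference samples
    threshold = alpha * x[k - 1]  # k-th ordered sample (1-based k)
    return cut > threshold
```

For example, with reference samples [1, 2, 3, 4], k = 3, and α = 2, the threshold is 6, so a cell value of 10 is retained and a value of 5 is discarded.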
According to the fifth possible implementation of the first aspect, in a sixth possible implementation, the clustering module is configured to:
perform coordinate conversion on the Doppler-filtered point cloud data;
cluster the converted point cloud with a large-threshold DBSCAN, deleting points that fail to cluster;
cluster the remaining point cloud with a small-threshold DBSCAN, judge whether each cluster is the subject, and place clusters judged to be the subject into a result queue;
obtain the mode of the cluster counts over the first 10 frames in the result queue;
when the number of clusters in the result queue exceeds this mode, re-cluster the clusters that do not exceed the mode using both K-means clustering and Gaussian mixture model (GMM) clustering;
and when both the K-means cluster count and the GMM cluster count are detected to exceed the preset value, place the larger of the two re-clustering results into the result queue; the contents of the result queue form the final clustering result.
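The result-queue bookkeeping described above (the mode over the first 10 frames, then keeping the larger of the two re-clustering results) can be sketched in pure Python. The helper names and the points-per-frame representation are illustrative assumptions:

```python
from collections import Counter

def frame_mode(cluster_counts, n=10):
    """Mode (most frequent value) of the per-frame cluster counts
    over the first n frames in the result queue."""
    return Counter(cluster_counts[:n]).most_common(1)[0][0]

def select_recluster(kmeans_points, gmm_points, min_size):
    """Keep whichever re-clustering (K-means vs. GMM) yields more points,
    provided both exceed the preset size; returns None otherwise."""
    if len(kmeans_points) > min_size and len(gmm_points) > min_size:
        return max(kmeans_points, gmm_points, key=len)
    return None
```

The actual DBSCAN, K-means, and GMM steps would come from a clustering library; only the selection logic is shown here.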
According to the sixth possible implementation of the first aspect, in a seventh possible implementation, the tracking and matching module is configured to:
predict the next frame of point cloud data from the final clustering result using a Kalman particle filter;
form a cost matrix C from the final clustering result and the predicted next frame, and pad the cost matrix to be square;
and perform matching on the padded cost matrix with the Hungarian algorithm to obtain the posture height h of the subject.
In the formulas (given as images in the original and not reproduced here), s is the distance between the final clustering result and the current frame's point cloud; the remaining quantities are the absorption loss rate, the attenuation under dry conditions, the attenuation under humid conditions, the wave frequency f, and the frequency-dependent imaginary part of the complex refractive index.
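A minimal stand-in for the Hungarian matching step above, using brute-force search over permutations of a small square (padded) cost matrix; this is adequate for the handful of targets in a room, and the function name and matrix layout are illustrative:

```python
from itertools import permutations

def hungarian_match(cost):
    """Minimum-cost one-to-one matching between tracks (rows) and
    detections (columns) of a square cost matrix. Brute force over
    permutations stands in for the Hungarian algorithm here; a real
    implementation (e.g. scipy's linear_sum_assignment) scales better."""
    n = len(cost)
    best = None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if best is None or total < best[0]:
            best = (total, perm)
    return best[1]  # perm[i] = detection index assigned to track i
```

For the matrix [[4, 1], [2, 3]], the cheapest assignment pairs track 0 with detection 1 and track 1 with detection 0 (total cost 3).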
Compared with the prior art, the system first acquires a sampled signal of the subject. The voice module is triggered only when the posture radar judges that a fall has occurred; it is idle at all other times and begins collecting scene audio only on a fall trigger from the radar module, which greatly reduces its power consumption. Meanwhile, the microcontroller drives the ADC over the I2S protocol to read the sound collection module and performs no encoding, avoiding complex data operations and saving time: the data is stored directly in the flash memory area. When sending the audio, the common I2S-streaming DAC approach is not used either; instead, the audio is sent to the cloud over the HTTP protocol through the WiFi module, where a machine learning model performs online decoding and recognition.
Drawings
Fig. 1 is a schematic structural diagram of a fall detection system according to an embodiment of the invention;
fig. 2 is a schematic structural diagram of a fall detection module according to an embodiment of the invention.
Reference numerals:
10. fall detection system; 100. posture radar; 110. fall detection module; 111. sampling signal module; 112. point cloud data acquisition module; 113. filtering module; 114. clustering module; 115. tracking and matching module; 116. judgment module; 120. microcontroller; 130. voice module; 200. cloud.
Detailed Description
Reference will now be made in detail to the present embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the specific embodiments, it will be understood that they are not intended to limit the invention to the embodiments described. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. It should be noted that the method steps described herein may be implemented by any functional block or functional arrangement, and that any functional block or functional arrangement may be implemented as a physical entity or a logical entity, or a combination of both.
In order that those skilled in the art will better understand the invention, further details are provided below in conjunction with the accompanying drawings and the detailed description of the invention.
Note: the example described next is only a specific example and does not limit embodiments of the invention to the following specific steps, values, conditions, data, orders, and the like. On reading this specification, those skilled in the art can use the concepts of the invention to construct more embodiments than are specifically described here.
Referring to fig. 1, the invention provides a fall detection system 10 comprising:
a posture radar 100 and a cloud 200;
the posture radar 100 comprises a microcontroller 120 communicatively connected to the cloud 200, a voice module 130 communicatively connected to the microcontroller 120, and a fall detection module 110 communicatively connected to the microcontroller 120;
the fall detection module 110 is configured to judge whether the monitored subject has fallen and to send a fall trigger signal to the microcontroller 120;
the microcontroller 120 is configured to receive the fall trigger signal and send a voice drive signal to the voice module 130;
the voice module 130 is configured to receive the voice drive signal, output a spoken query, and capture the ambient audio signal;
the microcontroller 120 is configured to acquire the ambient audio signal, store it in a flash memory area, and send the stored audio to the cloud 200 over the HTTP protocol through a WiFi module;
the cloud 200 is configured to acquire the ambient audio signal, perform speech recognition on it with a machine learning model, send the recognition result to the monitoring platform, and drive the emergency call software.
Specifically, in this embodiment, when the posture radar judges that a person has fallen, the intelligent voice module is triggered and the fall is confirmed through a dialogue with the person. Compared with existing posture radar technology, this improves the accuracy of fall judgment and greatly reduces the probability of occupying social emergency resources because of false alarms. The voice module is idle at ordinary times and starts collecting scene audio only when the radar module issues a fall trigger, greatly reducing its power consumption, whereas many traditional intelligent voice modules listen continuously and therefore consume more power. The technology combines the posture radar with intelligent voice rather than simply juxtaposing them: radar information, including the trigger signal and voice data, is fed into the voice module. Most voice modules currently on the market work offline and are inconvenient to update and iterate; recognition accuracy also varies with the application scene and with factors such as speakers' accents, so an originally trained model can hardly adapt to every situation. This product uses an online intelligent recognition voice module: after the front end collects the sound, it uploads the voice information and the three-dimensional coordinates of the fallen person to the cloud through the ESP32's WiFi module.
The cloud hosts a machine learning model. It first applies software filtering to denoise the sound, then combines the dynamic information supplied by the radar to recognize the voice information dynamically and intelligently. After optimization the model can return results within a short time, and the results are returned to the voice module for playback.
The ESP32 drives the ADC (analog-to-digital converter) over the I2S protocol to read the sound collection module MAX9814 and performs no encoding, avoiding complex data operations and saving time; the data is stored directly in the flash memory area. When sending the audio, the common I2S-streaming DAC approach is not used; instead the audio is sent to the cloud over the HTTP protocol through the WiFi module, where a machine learning model performs online decoding and recognition.
Preferably, in another embodiment of the present application, the cloud 200 is configured such that:
when the recognition result is that the subject has fallen, the result is sent to the monitoring platform and the emergency call software is driven;
when the recognized ambient audio contains no voice from the subject, or the recognition result is that the subject has not fallen, the microcontroller sends a new voice drive signal to the voice module, and speech recognition is performed on the ambient audio captured and sent again by the voice module;
when the re-recognized ambient audio still contains no voice from the subject, the result is sent to the monitoring platform and the emergency call software is driven;
and when the re-recognition result is that the subject has not fallen, a false alarm is declared and the result is sent to the monitoring platform.
Specifically, in this embodiment, when the posture radar detects that an elderly person has fallen, it sends a fall trigger signal (a high level) to the ESP32 microcontroller. The ESP32 detects the high level through an IO interrupt and drives the voice module to ask "Have you fallen?"; after the query, the voice module collects 20 s of audio.
The ESP32 then drives the ADC (analog-to-digital converter) over the I2S protocol to read the MAX9814 sound collection module, performs no encoding, stores the data directly in the flash memory area, sends the audio to the cloud over the HTTP protocol through the WiFi module, and lets the machine learning model perform online decoding and recognition.
If the recognition result indicates an abnormal condition, the cloud reports it to the monitoring platform and drives the emergency call software to dial 120, the emergency number. Because the user can enter basic personal information before use, by voice or text, such as home address and the elderly person's name, age, and medical history, the cloud voice system synthesizes this information in advance, so the patient's identity and current location can be relayed to the emergency personnel accurately and clearly, winning more rescue time and providing protection for elderly people living alone. Meanwhile, the voice module judges whether the person has fallen according to the text instruction returned by the cloud. If the cloud's result is that the person has not fallen, or that no valid information can be detected (for example, a person hurt badly enough to be unable to speak produces no on-site voice; when only ambient sound is detected, the system judges that no valid information is available), the voice module is triggered to make a second query and the process repeats. If the re-recognized result is again that no fall occurred, the system declares a false alarm. If the result is that no valid information can be detected while a fall trigger signal from the posture radar has been received, the system judges that the fall is serious and that the person may have lost the ability to call for help, and directly triggers the emergency plan: reporting to the call platform and notifying the 120 emergency personnel.
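The query/re-query decision flow described above can be summarized as a small state function. The labels and function name are illustrative, not from the patent:

```python
def fall_decision(first, second=None):
    """Cloud-side decision flow for recognition results.
    Each result is 'fall', 'no_fall', or 'no_voice' (no sound from the subject).
    Returns 'alert' (notify platform and drive the emergency call),
    'requery' (trigger a second voice query), or 'false_alarm'."""
    if first == 'fall':
        return 'alert'
    if second is None:
        return 'requery'  # first pass was no_fall or no_voice
    if second in ('fall', 'no_voice'):
        # silence after a radar fall trigger is treated as a serious fall
        return 'alert'
    return 'false_alarm'  # re-recognized as no fall
```

So a first 'fall' alerts immediately, while two consecutive 'no_fall' results are dismissed as a false alarm.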
This mechanism greatly reduces mistaken 120 emergency calls caused by the limited judgment accuracy of the posture radar, in which wrongly dispatched medical staff occupy excessive social resources, and it also wins more time for rescuing the elderly.
Conversely, if the first returned text instruction confirms that the person has fallen, the voice module immediately announces: "We have contacted the rescuers for you; please wait calmly and do not move, to avoid secondary injury." The exact wording can be modified; the main purpose is to reassure the injured person so that they wait patiently for the rescuers.
Referring to fig. 2, the fall detection module 110 comprises: a sampling signal module 111, a point cloud data acquisition module 112, a filtering module 113, a clustering module 114, a tracking and matching module 115, and a judgment module 116;
the sampling signal module 111 is configured to acquire a sampled signal of the subject;
the point cloud data acquisition module 112, communicatively connected to the sampling signal module 111, is configured to perform a three-dimensional Fourier transform on the sampled signal and to compute point cloud data from the transformed samples;
the filtering module 113, communicatively connected to the point cloud data acquisition module 112, is configured to apply constant false alarm rate (CFAR) processing to the point cloud data and then Doppler filtering to the CFAR-processed point cloud;
the clustering module 114, communicatively connected to the filtering module 113, is configured to cluster the Doppler-filtered point cloud data to obtain a final clustering result;
the tracking and matching module 115, communicatively connected to the clustering module 114, is configured to perform tracking and matching on the final clustering result to obtain the posture height of the subject; and,
the judgment module 116, communicatively connected to the tracking and matching module 115, is configured to judge that the subject has fallen when the posture height is detected to be below a preset height threshold.
And setting one third of the height of the detected object as a preset height threshold, and sending out a falling early warning when the current attitude height of the detected object is lower than the preset height threshold.
Specifically, in this embodiment, the attitude radar is mainly used to identify human postures in a scene, such as standing, sitting and falling, of which falling is the most serious. Because posture judgment is affected by many factors, the recognition accuracy of existing biological radars is not high, and radar misjudgment can cause great trouble. To improve the radar's posture-judgment accuracy, a sampling signal of the detection object is first obtained; a three-dimensional Fourier transform is then performed on the sampling signal, and a point cloud operation is performed on the transformed sampling information to obtain point cloud data; constant false alarm rate (CFAR) processing is then applied to the point cloud data, followed by Doppler filtering of the CFAR-processed point cloud data; the Doppler-filtered point cloud data is clustered to obtain a final clustering result; the final clustering result is then tracked and matched to obtain the posture height of the detection object; finally, when the posture height is smaller than the preset height threshold, the detection object is judged to have fallen. This series of operations on the detection object improves the accuracy of the radar's posture judgment.
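The processing chain just described can be wired together as a simple pipeline. The stage names and the trivial stand-in implementations below are assumptions for illustration only; each real stage is detailed in the following paragraphs.

```python
def detect_fall(frame, person_height, stages):
    """Run one radar frame through the processing chain described above.

    `stages` maps stage names to callables; the names and this wiring are
    illustrative, not taken from the patent."""
    pc = stages["point_cloud"](stages["fft3d"](frame))   # 3-D FFT -> point cloud
    pc = stages["doppler_filter"](stages["cfar"](pc))    # CFAR + Doppler filtering
    clusters = stages["cluster"](pc)                     # two-stage clustering
    height = stages["track"](clusters)                   # tracking -> posture height
    return height < person_height / 3                    # fall: below 1/3 of height

# Wiring check with trivial stand-in stages:
ident = lambda x: x
stages = dict(fft3d=ident, point_cloud=ident, cfar=ident,
              doppler_filter=ident, cluster=ident, track=lambda c: 0.5)
print(detect_fall(None, 1.8, stages))   # -> True (0.5 m < 1.8/3 = 0.6 m)
```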
Preferably, in a further embodiment of the present application, the sampling signal module 111 is configured to,
transmitting a frequency modulation signal and receiving a signal reflected by the frequency modulation signal after the frequency modulation signal is transmitted to a detection object;
merging the frequency modulation signal and the reflected signal to obtain an intermediate frequency signal;
and carrying out analog-digital conversion sampling on the intermediate frequency signal to obtain a sampling signal of a detection object.
Specifically, in this embodiment, the FMCW millimeter-wave radar and its antennas serve as the transceiver, converting between electric energy and electromagnetic waves. The TX (transmitting) antenna continuously transmits a frequency-modulated signal and, depending on the surface type and shape of the object, part of the electromagnetic wave is reflected back to the radar's RX (receiving) antenna. The mixer combines the TX and RX signals to generate an IF (intermediate frequency) signal whose frequency is the difference between the TX and RX frequencies and whose initial phase is the difference between their phases at the current time; ADC (analog-to-digital conversion) sampling is then performed on the intermediate frequency signal to obtain the sampling signal of the detection object.
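The TX/RX mixing step can be checked numerically. In the toy simulation below, the IF frequency equals the chirp slope times the round-trip delay; all parameter values are invented for the example, and the brief interval before the echo arrives is ignored.

```python
import numpy as np

# Toy simulation of the TX/RX mixing step: the IF frequency equals the chirp
# slope times the round-trip delay, f_IF = S * (2d/c). Illustrative values only.
c   = 3e8           # speed of light (m/s)
B   = 4e9           # sweep bandwidth (Hz)
T   = 40e-6         # sweep period (s)
S   = B / T         # chirp slope (Hz/s)
d   = 3.0           # target distance (m)
tau = 2 * d / c     # round-trip delay (s)

fs = 10e6                                   # ADC sampling rate (Hz)
t  = np.arange(0, T, 1 / fs)
tx_phase = 2 * np.pi * 0.5 * S * t**2       # baseband chirp phase of TX
rx_phase = 2 * np.pi * 0.5 * S * (t - tau)**2
if_signal = np.cos(tx_phase - rx_phase)     # mixer output: a tone at S*tau

# The peak of the IF spectrum sits at f_IF = S * tau = 2 MHz here
spec = np.abs(np.fft.rfft(if_signal))
f_if_est = np.fft.rfftfreq(len(if_signal), 1 / fs)[np.argmax(spec)]
print(f_if_est)   # -> 2000000.0
```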
Preferably, in another embodiment of the present application, the point cloud data obtaining module 112 is configured to,
the three-dimensional Fourier transform includes: a distance-dimension Fourier transform, a Doppler Fourier transform and an angle-dimension Fourier transform;
based on the distance-dimension Fourier transform, the distance d between the fall detection module and the detection object is obtained as: d = fcT/(2B);
specifically, since there is a certain distance between the millimeter-wave radar and the detection object, the received signal lags the transmitted signal by a round-trip delay τ = 2d/c; the frequency of the intermediate frequency signal is f = s·τ, where s is the slope of the signal, s = B/T; from τ = 2d/c, f = s·τ and s = B/T, d can be obtained;
based on the Doppler Fourier transform, the velocity V of the detection object is obtained as: V = λω/(4πTc);
based on the angle-dimension Fourier transform, the azimuth angle θ of the detection object is obtained as: θ = arcsin(λω/(2πl)), where l is the receiving-antenna spacing;
wherein f is the intermediate frequency signal frequency; c is the speed of light; T is the sweep period; B is the bandwidth; λ is the wavelength; ω is the phase difference between the two transmitted frequency-modulated signals at the same position; and Tc is the interval between the two transmitted frequency-modulated signals.
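A quick numerical check of the range relation d = fcT/(2B), plus the standard FMCW chirp-to-chirp velocity relation (stated here as an assumption, since the patent's velocity formula image is not reproduced). All parameter values, including the 77 GHz carrier, are illustrative.

```python
import math

# Numerical check of the range relation d = f*c*T/(2B); all parameter
# values are illustrative, not taken from the patent.
c = 3e8          # speed of light (m/s)
B = 4e9          # sweep bandwidth (Hz)
T = 40e-6        # sweep period (s)
f_if = 2e6       # measured intermediate-frequency value (Hz)

d = f_if * c * T / (2 * B)                 # range from the IF frequency
S = B / T                                  # chirp slope
assert math.isclose(f_if, S * 2 * d / c)   # consistent with f = s*tau, tau = 2d/c

# Chirp-to-chirp phase difference -> radial velocity, using the standard
# FMCW relation V = lambda*omega/(4*pi*Tc) (an assumption: the patent's
# velocity formula is an unreproduced image).
lam = c / 77e9        # wavelength for an assumed 77 GHz carrier (m)
Tc = 40e-6            # chirp interval (s)
omega = math.pi / 8   # example phase difference (rad)
V = lam * omega / (4 * math.pi * Tc)
print(round(d, 2))    # -> 3.0 (metres)
print(round(V, 2))
```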
Preferably, in a further embodiment of the present application, the filter module 113 is adapted to,
selecting any point cloud unit in the point cloud data as a detection unit, wherein peripheral point cloud units of the detection unit are reference units;
distributing the reference units to a target set according to preset distribution conditions, and acquiring the number of the reference units in the target set;
judging whether a sampling signal exists in the detection unit or not according to the following formula;
[decision formula image]
when a sampling signal exists in the detection unit, the selected detection unit is retained; otherwise, the selected detection unit is eliminated;
point cloud data with a Doppler velocity of 0 is deleted from the point cloud data;
in the formula, T0 is the threshold parameter, N0 is the integer threshold, n is the number of reference-unit samples in the target set, and x(k) and y(k) are the k-th ordered samples of the leading-edge and trailing-edge reference sliding windows respectively, k = 1, 2, …, N.
Specifically, in this embodiment, the 2N reference-unit samples are divided into two sets. Each reference-unit sample is compared with the product αZ, where Z is the detection-unit sample and α < 1 is a nominal factor; when a reference-unit sample is smaller than αZ, it is allocated to the target set according to the preset allocation condition. Whether a sampling signal exists in the detection unit is then judged, which ensures the stability and reliability of radar detection.
Point clouds with a Doppler velocity of 0 are deleted from the data (owing to the Doppler characteristic of the millimeter-wave radar, the Doppler velocity of point cloud data is 0 when a person is stationary), removing background and static points; the remaining point cloud data is that of the person in motion.
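A minimal CFAR detector illustrates the idea of comparing each detection cell against its surrounding reference cells. Note that this sketch uses simple cell averaging, whereas the embodiment describes an ordered-statistics variant with leading/trailing sliding windows and censored reference samples.

```python
import numpy as np

def cfar_1d(x, guard=2, ref=8, alpha=3.0):
    """Toy 1-D cell-averaging CFAR: flag cells whose power exceeds alpha
    times the mean of the surrounding reference cells. Simplified sketch;
    the embodiment censors ordered reference samples instead."""
    n = len(x)
    hits = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - guard - ref)
        hi = min(n, i + guard + ref + 1)
        left = x[lo:i - guard] if i - guard > lo else []
        refs = np.r_[left, x[i + guard + 1:hi]]   # reference cells, guards excluded
        if len(refs) and x[i] > alpha * refs.mean():
            hits[i] = True
    return hits

power = np.ones(64)          # flat noise floor
power[30] = 20.0             # injected target return
print(np.flatnonzero(cfar_1d(power)))   # -> [30]
```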
Preferably, in a further embodiment of the present application, the clustering module 114 is configured to,
performing coordinate transformation on the point cloud data subjected to Doppler filtering;
clustering the point cloud data after coordinate conversion based on a large threshold DBSCAN, and deleting the point cloud data which are not successfully clustered;
clustering the residual point cloud data based on a small threshold DBSCAN, judging whether a clustering result is a detection object, and putting the clustering result which is judged to be the detection object into a result queue;
obtaining the mode of the clustering results of the previous 10 frames in the result queue;
when the number of clustering results in the result queue exceeds the mode of the clustering results of the previous 10 frames, clustering the results that do not exceed this mode based on K-means clustering and Gaussian mixture model clustering respectively;
and when both the value of the K-means clustering result and the value of the Gaussian-mixture-model clustering result are detected to be larger than a preset value, selecting the larger of the two clustering results and placing it in the result queue, the clustering results in the result queue being the final clustering result.
Specifically, in this embodiment, DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a density-based clustering algorithm: it defines clusters as the largest sets of density-connected points, can divide regions of sufficiently high density into clusters, and can find clusters of arbitrary shape in a noisy spatial database. Clustering means putting together real objects with similar attributes: the attributes are detected, and objects with the same attributes are grouped to find the grouping information. In the present invention, the purpose of clustering is to extract moving people from the environment.
A large-threshold DBSCAN is first applied to coarsely cluster the point cloud, which amounts to preliminary filtering; clustering is then performed with a small threshold, and subsequent computation is based on the small-threshold clustering result. If clustering succeeds, the result is put into the success queue; if not, the group of information is stored in a pending queue. For the pending queue, because the small-threshold DBSCAN gives a rough class count, i.e. a K value, K-means clustering and Gaussian-mixture-model clustering are performed with this K value respectively; the clustering results are back-traced and scored, and the higher-scoring of the two results, i.e. the one with the better clustering effect, is saved to the success queue; otherwise the clustering result is discarded. A clustering result contains the data of each point in the cluster, such as coordinates, direction and velocity.
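The coarse-then-fine clustering with a K-means/GMM re-scoring stage can be sketched with scikit-learn. The silhouette score stands in for the patent's unspecified backtracking/scoring criterion, and all data and thresholds below are synthetic.

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic blobs stand in for the point clouds of two moving people
pts = np.vstack([rng.normal(0.0, 0.2, (40, 2)),
                 rng.normal(3.0, 0.2, (40, 2))])

coarse = DBSCAN(eps=1.0, min_samples=5).fit_predict(pts)   # large threshold: coarse filter
pts = pts[coarse != -1]                                    # drop unclustered stray points
fine = DBSCAN(eps=0.4, min_samples=5).fit_predict(pts)     # small threshold
k = int(fine.max()) + 1                                    # rough class count K from DBSCAN

# Re-cluster with K-means and a Gaussian mixture using that K, and keep the
# higher-scoring labelling (silhouette score as an assumed scoring criterion).
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pts)
gm = GaussianMixture(n_components=k, random_state=0).fit_predict(pts)
best = max((km, gm), key=lambda lab: silhouette_score(pts, lab))
print(k, len(set(best)))   # -> 2 2
```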
Preferably, in a further embodiment of the present application, the trace matching module 115 is configured to,
predicting the next frame of point cloud data of the final clustering result based on Kalman particle filtering;
obtaining a cost matrix according to the final clustering result and the next frame of point cloud data, and completing the cost matrix;
performing matching calculation on the completed cost matrix based on the Hungarian algorithm, and acquiring the attitude height h of the detected object as follows:
[posture height formula images]
where s is the distance between the final clustering result and the current-frame point cloud data; the remaining symbols denote, in order, the absorption loss rate, the attenuation under dry conditions, the attenuation under humid conditions, the frequency f of the wave, and the frequency-dependent imaginary part of the complex refractive index.
Specifically, in this embodiment, the position of the point cloud within the detection range of the millimeter-wave radar system changes constantly. Because the system exhibits point-cloud reflection, the actually generated point-cloud position may deviate from the real moving object. The design therefore predicts the position and velocity of the next frame's point cloud from the previous frame's point cloud and completes the posture (fall) prediction: the corresponding data is passed to the particle filter, which predicts the posture of the next frame.
The Hungarian algorithm is used for matching, and Kalman particle filtering generates the next frame of point cloud data (the new observations); the tracked objects are the final clustering results. The Hungarian algorithm is a combinatorial optimization algorithm for solving the assignment problem; it is used here to associate the new observations with the existing tracked objects, i.e., matching is computed on the completed cost matrix based on the Hungarian algorithm, and the posture height h of the detected object is obtained.
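The assignment step can be reproduced with SciPy's Hungarian-algorithm implementation. The predicted track positions and cluster centroids below are invented for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Associate predicted track positions (e.g. from the Kalman/particle filter)
# with the new frame's cluster centroids by minimising the total Euclidean
# cost: the Hungarian-algorithm matching step. Positions are illustrative.
tracks   = np.array([[0.0, 0.0], [3.0, 3.0]])   # predicted positions
clusters = np.array([[2.9, 3.1], [0.1, -0.1]])  # new observations

cost = np.linalg.norm(tracks[:, None, :] - clusters[None, :, :], axis=2)
row, col = linear_sum_assignment(cost)          # optimal track -> cluster pairing
pairs = [(int(r), int(c)) for r, c in zip(row, col)]
print(pairs)   # -> [(0, 1), (1, 0)]
```

Each pair maps a tracked object to the nearest consistent new observation, even when the observation order differs from the track order.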
Based on the same inventive concept, the embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements all or part of the method steps of the above method.
The present invention can implement all or part of the processes of the above methods, which can also be implemented by a computer program instructing related hardware. The computer program can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic diskette, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
Based on the same inventive concept, an embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program running on the processor, and the processor executes the computer program to implement all or part of the method steps in the method.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor; the processor is the control center of the computer device and connects the various parts of the overall computer device through various interfaces and lines.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (e.g., a sound playing function, an image playing function, etc.); the data storage area may store data created according to use of the device (e.g., audio data, video data, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, server, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), servers and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (8)

1. A fall detection system, comprising:
an attitude radar and a cloud;
the gesture radar comprises a single chip microcomputer in communication connection with the cloud end, a voice module in communication connection with the single chip microcomputer, and a falling detection module in communication connection with the single chip microcomputer;
the falling detection module is used for judging whether the detected object falls or not and sending a falling trigger signal to the single chip microcomputer;
the singlechip is used for receiving the falling triggering signal and sending a voice driving signal to the voice module;
the voice module is used for receiving the voice driving signal, outputting a voice signal and acquiring an environment audio signal according to the voice signal;
the single chip microcomputer is used for acquiring the environment audio signal, storing the environment audio signal in a flash memory area, and sending the environment audio signal stored in the flash memory area to the cloud through a WIFI module using the HTTP protocol;
and the cloud is used for acquiring the environment audio signal, performing voice recognition on the environment audio signal based on a machine learning algorithm model, sending the recognition result to the monitoring platform and driving the emergency call software.
2. The fall detection system of claim 1, wherein the cloud is configured to,
when the identification result is that the detection object falls down, the identification result is sent to the monitoring platform and the first-aid calling software is driven;
when the recognized environment audio signal does not comprise the sound of the detection object or the recognition result indicates that the detection object does not fall down, sending voice driving information to the voice module through the single chip microcomputer, and performing voice recognition on the environment audio signal which is received again and sent by the voice module;
when the environment audio signal identified again does not contain the sound of the detection object, the identification result is sent to the monitoring platform and the emergency call software is driven;
and when the re-identification result is that the detection object does not fall down, judging as false alarm and sending the identification result to the monitoring platform.
3. A fall detection system as claimed in claim 1, wherein the fall detection module comprises:
the sampling signal module is used for acquiring a sampling signal of a detection object;
the point cloud data acquisition module is in communication connection with the sampling signal module and is used for performing three-dimensional Fourier transform on the sampling signal and performing point cloud operation on the sampling information subjected to the three-dimensional Fourier transform to acquire point cloud data;
the filtering module is in communication connection with the point cloud data acquisition module and is used for performing constant false alarm rate (CFAR) processing on the point cloud data and performing Doppler filtering on the CFAR-processed point cloud data;
the clustering module is in communication connection with the filtering module and is used for clustering the point cloud data after Doppler filtering to obtain a final clustering result;
the tracking matching module is in communication connection with the clustering module and is used for performing tracking matching processing on the final clustering result to obtain the attitude height of the detected object; and
and the judging module is in communication connection with the tracking matching module and is used for judging that the detection object falls down when the gesture height is detected to be smaller than a preset height threshold value.
4. Fall detection system according to claim 3, wherein the sampled signal module is adapted to,
transmitting a frequency modulation signal and receiving a signal reflected by the frequency modulation signal after the frequency modulation signal is transmitted to a detection object;
merging the frequency modulation signal and the reflected signal to obtain an intermediate frequency signal;
and carrying out analog-digital conversion sampling on the intermediate frequency signal to obtain a sampling signal of a detection object.
5. A fall detection system as claimed in claim 3, wherein the point cloud data acquisition module is configured to,
the three-dimensional Fourier transform includes: a distance-dimension Fourier transform, a Doppler Fourier transform and an angle-dimension Fourier transform;
based on the distance-dimension Fourier transform, the distance d between the fall detection module and the detection object is obtained as: d = fcT/(2B);
based on the Doppler Fourier transform, the velocity V of the detection object is obtained as: V = λω/(4πTc);
based on the angle-dimension Fourier transform, the azimuth angle θ of the detected object is obtained as: θ = arcsin(λω/(2πl)), where l is the receiving-antenna spacing;
wherein f is the intermediate frequency signal frequency; c is the speed of light; T is the sweep period; B is the bandwidth; λ is the wavelength; ω is the phase difference between the two transmitted frequency-modulated signals at the same position; and Tc is the time interval between the two transmitted frequency-modulated signals.
6. Fall detection system as claimed in claim 3, wherein the filter module is configured to,
selecting any point cloud unit in the point cloud data as a detection unit, wherein peripheral point cloud units of the detection unit are reference units;
distributing the reference units to a target set according to preset distribution conditions, and acquiring the number of the reference units in the target set;
judging whether a sampling signal exists in the detection unit or not according to the following formula;
[decision formula image]
when a sampling signal exists in the detection unit, retaining the selected detection unit, and otherwise eliminating the selected detection unit;
deleting point cloud data with a Doppler velocity of 0 from the point cloud data;
in the formula, T0 is the threshold parameter, N0 is the integer threshold, n is the number of reference-unit samples in the target set, and x(k) and y(k) are the k-th ordered samples of the leading-edge and trailing-edge reference sliding windows respectively, k = 1, 2, …, N.
7. A fall detection system as claimed in claim 3, wherein the clustering module is configured to,
performing coordinate conversion on the point cloud data subjected to Doppler filtering;
clustering the point cloud data after coordinate conversion based on a large threshold DBSCAN, and deleting the point cloud data which are not successfully clustered;
clustering the residual point cloud data based on the small threshold DBSCAN, judging whether the clustering result is a detection object or not, and putting the clustering result judged as the detection object into a result queue;
obtaining the mode of the clustering results of the first 10 frames in the result queue;
when the number of the clustering results in the result queue exceeds the mode of the clustering results of the previous 10 frames, clustering the clustering results which do not exceed the mode of the clustering results of the previous 10 frames respectively based on K-means clustering and Gaussian mixture model clustering;
and when the value of the clustering result after K-means clustering and the value of the clustering result after Gaussian-mixture-model clustering are both larger than the preset value, selecting the larger of the two clustering results and placing it in the result queue, the clustering results in the result queue being the final clustering result.
8. A fall detection system as claimed in claim 3, wherein the tracking matching module is configured to,
predicting the next frame of point cloud data of the final clustering result based on Kalman particle filtering;
obtaining a cost matrix according to the final clustering result and the next frame of point cloud data, and completing the cost matrix;
performing matching calculation on the completed cost matrix based on the Hungarian algorithm, and acquiring the attitude height h of the detected object as follows:
[posture height formula images]
where s is the distance between the final clustering result and the current-frame point cloud data; the remaining symbols denote, in order, the absorption loss rate, the attenuation under dry conditions, the attenuation under humid conditions, the frequency f of the wave, and the frequency-dependent imaginary part of the complex refractive index.
CN202211462211.8A 2022-11-17 2022-11-17 Fall detection system Active CN115601925B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211462211.8A CN115601925B (en) 2022-11-17 2022-11-17 Fall detection system


Publications (2)

Publication Number Publication Date
CN115601925A true CN115601925A (en) 2023-01-13
CN115601925B CN115601925B (en) 2023-03-07

Family

ID=84853778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211462211.8A Active CN115601925B (en) 2022-11-17 2022-11-17 Fall detection system

Country Status (1)

Country Link
CN (1) CN115601925B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105206041A (en) * 2015-08-12 2015-12-30 东南大学 Smart-phone track chain-cluster identification method considering sequential DBSCAN
US20160188694A1 (en) * 2013-07-31 2016-06-30 Hewlett-Packard Development Company, L.P. Clusters of polynomials for data points
CN105788124A (en) * 2014-12-19 2016-07-20 宏达国际电子股份有限公司 Non-contact monitoring system and method
KR20190121721A (en) * 2019-08-29 2019-10-28 엘지전자 주식회사 Method and device for providing speech recognition service
CN111401392A (en) * 2019-01-02 2020-07-10 中国移动通信有限公司研究院 Clustering integration method and device, electronic equipment and storage medium
CN112071022A (en) * 2019-05-25 2020-12-11 昆明医科大学 Fall monitoring method based on visual sensing and voice feedback
CN112581723A (en) * 2020-11-17 2021-03-30 芜湖美的厨卫电器制造有限公司 Method and device for recognizing user gesture, processor and water heater
WO2021118570A1 (en) * 2019-12-12 2021-06-17 Google Llc Radar-based monitoring of a fall by a person
CN113837131A (en) * 2021-09-29 2021-12-24 南京邮电大学 Multi-scale feature fusion gesture recognition method based on FMCW millimeter wave radar
CN114707575A (en) * 2022-03-07 2022-07-05 南京邮电大学 SDN multi-controller deployment method based on AP clustering
CN114942434A (en) * 2022-04-25 2022-08-26 西南交通大学 Fall attitude identification method and system based on millimeter wave radar point cloud


Also Published As

Publication number Publication date
CN115601925B (en) 2023-03-07

Similar Documents

Publication Publication Date Title
CN108764304B (en) Scene recognition method and device, storage medium and electronic equipment
Khan et al. Hand-based gesture recognition for vehicular applications using IR-UWB radar
CN111695420B (en) Gesture recognition method and related device
CN110688957B (en) Living body detection method, device and storage medium applied to face recognition
WO2021174414A1 (en) Microwave identification method and system
CN111439642B (en) Elevator control method, device, computer readable storage medium and terminal equipment
US20220375106A1 (en) Multi-target tracking method, device and computer-readable storage medium
CN111738060A (en) Human gait recognition system based on millimeter wave radar
CN112184626A (en) Gesture recognition method, device, equipment and computer readable medium
CN115542308B (en) Indoor personnel detection method, device, equipment and medium based on millimeter wave radar
CN109284715B (en) Dynamic object identification method, device and system
CN111414843B (en) Gesture recognition method and terminal device
CN115343704A (en) Gesture recognition method of FMCW millimeter wave radar based on multi-task learning
CN114338585A (en) Message pushing method and device, storage medium and electronic device
CN111046849A (en) Kitchen safety implementation method and device, intelligent terminal and storage medium
CN107452381B (en) Multimedia voice recognition device and method
CN115601925B (en) Fall detection system
CN111128150A (en) Method and device for awakening intelligent voice equipment
CN111414829B (en) Method and device for sending alarm information
CN111723785A (en) Animal estrus determination method and device
CN115602283A (en) Medical and nursing combined terminal service information management system
CN115841707A (en) Radar human body posture identification method based on deep learning and related equipment
CN114021097A (en) Identity recognition method, device and system
CN111950431B (en) Object searching method and device
CN113439274A (en) Identity recognition method, terminal device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant