CN112198507A - Method and device for detecting human body falling features - Google Patents

Method and device for detecting human body falling features

Info

Publication number
CN112198507A
CN112198507A
Authority
CN
China
Prior art keywords
signals
point cloud
signal
cloud information
frame signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011023757.4A
Other languages
Chinese (zh)
Other versions
CN112198507B (en)
Inventor
程毅
彭诚诚
秦屹
吴坚
何文彦
薛高茹
王晨红
石露露
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Whst Co Ltd
Original Assignee
Whst Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Whst Co Ltd filed Critical Whst Co Ltd
Priority to CN202011023757.4A priority Critical patent/CN112198507B/en
Publication of CN112198507A publication Critical patent/CN112198507A/en
Application granted granted Critical
Publication of CN112198507B publication Critical patent/CN112198507B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/50Systems of measurement based on relative movement of target
    • G01S13/505Systems of measurement based on relative movement of target using Doppler effect for determining closest range to a target or corresponding time, e.g. miss-distance indicator
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/415Identification of targets based on measurements of movement associated with the target

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to the technical field of radar detection and provides a method and a device for detecting human body falling features. The method comprises: sequentially receiving a plurality of echo signals; processing a local oscillator signal and the corresponding echo signal in the current frame signal to obtain target signal-to-noise ratio data and motion point cloud information of the current frame signal; processing the motion point cloud information of the current frame signal according to the target signal-to-noise ratio data of the current frame signal to obtain a weighted average of the motion point cloud information of the current frame signal; calculating a feature vector of the human body in M frames of signals according to the weighted averages of the motion point cloud information in the M frames of signals; and detecting whether the human body has fallen according to that feature vector to obtain a detection result. The method detects human falls more accurately, improving the fall recognition rate and reducing the false alarm probability.

Description

Method and device for detecting human body falling features
Technical Field
The invention belongs to the technical field of radar detection, and particularly relates to a method and a device for detecting falling features of a human body.
Background
Falls are becoming a global health threat. According to World Health Organization estimates, approximately 420,000 falls each year prove fatal, making falls a leading cause of accidental death.
One important reason for the high mortality rate from falls is the growing number of elderly people living alone. When such a person falls unexpectedly, he or she often cannot send alarm information in time, so the most critical window for treatment is missed, leading to many accidental deaths. Detecting human fall features in indoor environments and sending alarm information promptly is therefore of great importance.
Currently, common fall feature detection methods are either contact-based or non-contact. Non-contact methods fall mainly into two categories: video detection and radar detection. Video-based fall feature detection is strongly affected by illumination, performs poorly under bad lighting conditions, and raises privacy concerns when installed in a home environment. Among radar-based methods, a Doppler radar can extract only the Doppler information of a target, not its distance information; because the Doppler signature of a human fall is close to that of actions such as sitting down or bending over, fall feature detection with a Doppler radar easily produces false alarms and its accuracy is low. A wideband radar considers energy, micro-Doppler, distance and similar information only separately and cannot extract all the characteristics of the target's motion, so it also suffers from low fall feature detection accuracy.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for detecting a human body fall feature, so as to solve the problem in the prior art that the accuracy of detecting the human body fall feature is not high.
A first aspect of the embodiments of the present invention provides a method for detecting a human body fall characteristic, including:
sequentially receiving a plurality of echo signals, wherein the echo signals are signals which are reflected by a human body from a plurality of continuous local oscillator signals in each frame of signals transmitted by a radar;
processing a local oscillation signal and a corresponding echo signal in a current frame signal to obtain point cloud information of the current frame signal; the point cloud information of the current frame signal comprises target signal-to-noise ratio data and motion point cloud information;
processing the motion point cloud information of the current frame signal according to the target signal-to-noise ratio data of the current frame signal to obtain a weighted average value of the motion point cloud information of the current frame signal;
calculating a characteristic vector of a human body in the M frames of signals according to the weighted average value of the motion point cloud information in the M frames of signals, wherein M is a positive integer greater than 0;
and detecting whether the human body falls down according to the characteristic vector of the human body in the M frame signals to obtain a detection result.
Optionally, the motion point cloud information includes: target radial distance data, target radial velocity data, target azimuth angle data, and target pitch angle data;
the processing of the local oscillator signal and the corresponding echo signal in the current frame signal to obtain the point cloud information of the current frame signal includes:
performing frequency mixing, filtering and analog-to-digital conversion on the local oscillation signal in the current frame signal and the corresponding echo signal to obtain a digital signal array of the current frame signal;
performing unit average constant false alarm detection on the digital signal array of the current frame signal to obtain the number of detection points corresponding to the current frame signal and target signal-to-noise ratio data corresponding to each detection point;
performing fast Fourier transform on the digital signal array of the current frame signal to obtain target radial distance data and target radial speed data corresponding to each detection point;
and carrying out digital beam forming angle measurement on the digital signal array of the current frame signal to obtain target azimuth angle data and target pitch angle data corresponding to each detection point.
Optionally, the processing the motion point cloud information of the current frame signal according to the target signal-to-noise ratio data of the current frame signal to obtain a weighted average of the motion point cloud information of the current frame signal includes:

according to

R_w = \frac{\sum_{i=1}^{N} SNR_i \cdot R_i}{\sum_{i=1}^{N} SNR_i}

obtaining the radial distance weighted average in the motion point cloud information of the current frame signal;

according to

V_w = \frac{\sum_{i=1}^{N} SNR_i \cdot V_i}{\sum_{i=1}^{N} SNR_i}

obtaining the radial velocity weighted average in the motion point cloud information of the current frame signal;

according to

A_w = \frac{\sum_{i=1}^{N} SNR_i \cdot A_i}{\sum_{i=1}^{N} SNR_i}

obtaining the azimuth angle weighted average in the motion point cloud information of the current frame signal;

according to

E_w = \frac{\sum_{i=1}^{N} SNR_i \cdot E_i}{\sum_{i=1}^{N} SNR_i}

obtaining the pitch angle weighted average in the motion point cloud information of the current frame signal;

wherein R_w is the radial distance weighted average; SNR_i is the i-th target signal-to-noise ratio datum in the point cloud information of the current frame signal; R_i is the i-th target radial distance datum in the motion point cloud information of the current frame signal; V_w is the radial velocity weighted average; V_i is the i-th target radial velocity datum in the motion point cloud information of the current frame signal; A_w is the azimuth angle weighted average; A_i is the i-th target azimuth angle datum in the motion point cloud information of the current frame signal; E_w is the pitch angle weighted average; E_i is the i-th target pitch angle datum in the motion point cloud information of the current frame signal; and N is the number of detection points corresponding to the current frame signal.
Optionally, the calculating a feature vector of a human body in the M frames of signals according to the weighted averages of the motion point cloud information in the M frames of signals includes:
calculating the average number of detection points of the M frame signals according to the number of the detection points corresponding to each frame signal in the M frame signals;
calculating the azimuth angle variance and the motion range of the M frame signals according to the weighted average value of each motion point cloud information in the M frame signals;
and determining the characteristic vector of the human body in the M frame signals according to the motion range, the azimuth angle variance and the average detection point number.
Optionally, the calculating the azimuth angle variance of the M-frame signal according to the weighted average of the motion point cloud information in the M-frame signal includes:
and calculating the azimuth angle variance of the M frames of signals according to the corresponding azimuth angle weighted average value in the motion point cloud information of each frame of signals in the M frames of signals.
Optionally, the motion range includes: a distance range, a speed range and a pitch angle range;
the calculating the motion range of the M frame signals according to the weighted average value of the motion point cloud information in the M frame signals comprises the following steps:
and respectively calculating the distance range, the speed range and the pitch angle range according to the maximum weighted average value and the minimum weighted average value in the weighted average values of the motion point cloud information in the M frames of signals.
Optionally, the calculating the azimuth angle variance of the M-frame signal according to the corresponding azimuth angle weighted average in the motion point cloud information of each frame of signal in the M-frame signal includes:
according to

A_{std} = \sqrt{ \frac{1}{M} \sum_{n=1}^{M} \left( A_w(n) - \frac{1}{M} \sum_{m=1}^{M} A_w(m) \right)^2 }

calculating the azimuth angle variance of the M frames of signals;

wherein A_std is the azimuth angle variance; A_w(n) is the azimuth angle weighted average in the motion point cloud information of the n-th frame signal among the M frames of signals; A_w(m) is the azimuth angle weighted average in the motion point cloud information of the m-th frame signal among the M frames of signals; and M is the number of frames of the signals.
Optionally, the calculating the distance range, the speed range, and the pitch angle range according to the maximum and minimum of the weighted averages of the motion point cloud information in the M frames of signals includes:

according to R_range = max[R_w(1), R_w(2), … R_w(M)] − min[R_w(1), R_w(2), … R_w(M)], calculating the distance range;

according to V_range = max[V_w(1), V_w(2), … V_w(M)] − min[V_w(1), V_w(2), … V_w(M)], calculating the speed range;

according to E_range = max[E_w(1), E_w(2), … E_w(M)] − min[E_w(1), E_w(2), … E_w(M)], calculating the pitch angle range;

wherein R_range is the distance range; R_w(1), R_w(2), … R_w(M) are the radial distance weighted averages in the motion point cloud information of the 1st, 2nd, … M-th frame signals among the M frames of signals; max[·] is the maximum value in the matrix; min[·] is the minimum value in the matrix; V_range is the speed range; V_w(1), V_w(2), … V_w(M) are the radial velocity weighted averages in the motion point cloud information of the 1st, 2nd, … M-th frame signals among the M frames of signals; E_range is the pitch angle range; and E_w(1), E_w(2), … E_w(M) are the pitch angle weighted averages in the motion point cloud information of the 1st, 2nd, … M-th frame signals among the M frames of signals.
Optionally, the detecting whether the human body falls down according to the feature vector of the human body in the M-frame signal to obtain a detection result includes:
calculating feature vectors of fallen human bodies and feature vectors of non-fallen human bodies using the method for calculating the feature vector of the human body in the M frames of signals;
training a classifier according to the feature vector of the fallen human body and the feature vector of the non-fallen human body;
and inputting the feature vectors of the human body in the M frames of signals into the trained classifier to obtain a classification detection result of whether the human body falls down.
A second aspect of the embodiments of the present invention provides a device for detecting a fall characteristic of a human body, including:
the receiving module is used for sequentially receiving a plurality of echo signals, wherein the echo signals are signals which are reflected by a human body from a plurality of continuous local oscillator signals in each frame of signals transmitted by a radar;
the signal processing module is used for processing a local oscillation signal and a corresponding echo signal in a current frame signal to obtain point cloud information of the current frame signal; the point cloud information of the current frame signal comprises target signal-to-noise ratio data and motion point cloud information;
the point cloud processing module is used for processing the motion point cloud information of the current frame signal according to the target signal-to-noise ratio data of the current frame signal to obtain a weighted average value of the motion point cloud information of the current frame signal;
the characteristic vector calculation module is used for calculating a characteristic vector of a human body in the M frames of signals according to the weighted average value of each piece of motion point cloud information in the M frames of signals, wherein M is a positive integer greater than 0;
and the detection module is used for detecting whether the human body falls down according to the characteristic vector of the human body in the M frame signals to obtain a detection result.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects. A plurality of echo signals are received in sequence; the local oscillator signal and the corresponding echo signal in the current frame signal are processed to obtain the point cloud information of the current frame signal, which comprises target signal-to-noise ratio data and motion point cloud information; the motion point cloud information of the current frame signal is then processed according to the target signal-to-noise ratio data to obtain its weighted average. Calculating the feature vector of the human body in the M frames of signals from the weighted averages of the motion point cloud information across the M frames yields a feature vector carrying richer information about the human body. Detecting whether the human body has fallen according to this feature vector therefore detects falls more accurately, improving the fall recognition rate and reducing the false alarm probability.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art could obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic flow chart of an implementation of a method for detecting a fall characteristic of a human body according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an antenna array distribution of a wideband array radar according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of calculating a feature vector of a human body in M frames of signals according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a detection apparatus for fall characteristics of a human body according to an embodiment of the invention;
fig. 5 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 is a schematic flow chart of an implementation of a method for detecting a fall characteristic of a human body according to an embodiment of the present invention, which is described in detail below.
Step S101 sequentially receives a plurality of echo signals.
The plurality of echo signals are signals which are reflected back by a human body from a plurality of continuous local oscillator signals in each frame of signals transmitted by the radar.
The wideband array radar can be used to transmit local oscillator signals and receive the echo signals reflected back by a human body. As shown in fig. 2, the wideband array radar may include one transmitting antenna and a plurality of receiving antennas, for example Tr = Rr + R1 − 1 receiving antennas, where Rr may be the number of receiving antennas in the azimuth direction and R1 the number of receiving antennas in the elevation direction. The spacing between adjacent receiving antennas in the azimuth direction may be one half of the wavelength of the local oscillator signal transmitted by the wideband array radar, and the spacing between adjacent receiving antennas in the elevation direction may likewise be one half of that wavelength.
Optionally, the local oscillator signal transmitted by the wideband array radar may be a wideband local oscillator signal, such as a wideband frequency-modulated continuous wave signal or a continuous chirp signal. According to

\lambda = \frac{c}{f_0}

the wavelength of the wideband local oscillator signal transmitted by the wideband array radar can be calculated, where c is the speed of light and f_0 is the center frequency of the wideband local oscillator signal.
The wideband array radar can transmit the wideband local oscillator signals frame by frame. Each frame of signals may comprise N_chirp consecutive chirp signals and last for a time T_frame; for example, N_chirp may be 128 and T_frame may be 100 ms. The number of local oscillator signals in each frame and the duration of each frame are not limited in the embodiments of the present invention. The Tr receiving antennas respectively receive the echo signals of each frame of signals reflected back by the human body, and each receiving antenna sequentially receives a plurality of echo signals.
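As a rough illustration, the wavelength, antenna spacing, and frame timing above can be sketched as follows. The 60 GHz center frequency is an assumption for illustration only (the patent does not fix f_0); N_chirp = 128 and T_frame = 100 ms follow the examples in the text.

```python
# Sketch of the FMCW frame parameters described above. The 60 GHz center
# frequency is an assumption; only N_chirp = 128 and T_frame = 100 ms are
# given as examples in the text.
C = 3.0e8  # speed of light, m/s

def wavelength(f0_hz: float) -> float:
    """lambda = c / f0 for the wideband local oscillator signal."""
    return C / f0_hz

N_CHIRP = 128    # chirps per frame (example from the text)
T_FRAME = 0.100  # frame duration in seconds (example from the text)

lam = wavelength(60e9)           # assumed 60 GHz center frequency
antenna_spacing = lam / 2         # half-wavelength receive-antenna spacing
chirp_period = T_FRAME / N_CHIRP  # time budget per chirp within a frame
```

With these example values the wavelength is 5 mm, so the half-wavelength antenna spacing is 2.5 mm; a different center frequency changes both proportionally.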
By transmitting local oscillator signals and receiving the corresponding echo signals with the wideband array radar, high-precision distance, velocity, azimuth angle, pitch angle and signal-to-noise ratio data of the target can be obtained simultaneously, giving higher target resolution.
Step S102, the local oscillation signals and the corresponding echo signals in the current frame signals are processed to obtain point cloud information of the current frame signals.
The point cloud information of the current frame signal comprises target signal-to-noise ratio data and motion point cloud information.
Optionally, the motion point cloud information may include: target radial distance data, target radial velocity data, target azimuth angle data, and target pitch angle data.
Optionally, the processing the local oscillator signal and the corresponding echo signal in the current frame signal to obtain the point cloud information of the current frame signal may include: and carrying out frequency mixing, filtering and analog-to-digital conversion on the local oscillation signal in the current frame signal and the corresponding echo signal to obtain a digital signal array of the current frame signal. And carrying out unit average constant false alarm detection on the digital signal array of the current frame signal to obtain the number of detection points corresponding to the current frame signal and target signal-to-noise ratio data corresponding to each detection point. And carrying out fast Fourier transform on the digital signal array of the current frame signal to obtain target radial distance data and target radial speed data corresponding to each detection point. And carrying out digital beam forming angle measurement on the digital signal array of the current frame signal to obtain target azimuth angle data and target pitch angle data corresponding to each detection point.
After the Tr receiving antennas respectively receive the echo signals of each frame of signals reflected by the human body, each receiving antenna can mix, filter and analog-to-digital convert the local oscillator signals of each frame and their corresponding echo signals, yielding a digital signal array for the echo signals received by the Tr receiving antennas. The number of analog-to-digital sampling points per chirp signal can be N_adc; for example, N_adc may be 128. From the number of chirp signals in each frame and the number of sampling points per chirp, the digital signal array after mixing, filtering and analog-to-digital conversion is obtained. The number of sampling points per local oscillator signal can be set as required.
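The fast Fourier transform step of S102 can be sketched as a range-Doppler map over one antenna's (N_chirp × N_adc) digital signal array: a fast-time FFT gives range bins and a slow-time FFT gives radial-velocity bins. This is a minimal NumPy sketch, not the patent's implementation; the simulated beat signal and its target bins are hypothetical.

```python
import numpy as np

# Minimal range-Doppler sketch: after mixing, filtering and ADC, each frame
# gives an (N_chirp x N_adc) beat-signal array per antenna; a 2-D FFT yields
# range (fast time) and radial velocity (slow time / Doppler) bins.
N_CHIRP, N_ADC = 128, 128

def range_doppler_map(frame: np.ndarray) -> np.ndarray:
    """Return FFT magnitudes: rows = Doppler bins (shifted), cols = range bins."""
    rng = np.fft.fft(frame, axis=1)  # range FFT over each chirp
    rd = np.fft.fftshift(np.fft.fft(rng, axis=0), axes=0)  # Doppler FFT
    return np.abs(rd)

# Hypothetical single target placed at range bin 20, Doppler bin 5.
n = np.arange(N_ADC)
m = np.arange(N_CHIRP)[:, None]
frame = np.exp(2j * np.pi * (20 * n / N_ADC + 5 * m / N_CHIRP))
rd = range_doppler_map(frame)
dopp, rng_bin = np.unravel_index(np.argmax(rd), rd.shape)
```

A CFAR detector (the "unit average constant false alarm detection" of S102) would then threshold this map to produce the detection points and their SNR data.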
Step S103, processing the motion point cloud information of the current frame signal according to the target signal-to-noise ratio data of the current frame signal to obtain a weighted average value of the motion point cloud information of the current frame signal.
Optionally, according to

R_w = \frac{\sum_{i=1}^{N} SNR_i \cdot R_i}{\sum_{i=1}^{N} SNR_i}

the radial distance weighted average in the motion point cloud information of the current frame signal can be obtained. According to

V_w = \frac{\sum_{i=1}^{N} SNR_i \cdot V_i}{\sum_{i=1}^{N} SNR_i}

the radial velocity weighted average in the motion point cloud information of the current frame signal can be obtained. According to

A_w = \frac{\sum_{i=1}^{N} SNR_i \cdot A_i}{\sum_{i=1}^{N} SNR_i}

the azimuth angle weighted average in the motion point cloud information of the current frame signal can be obtained. According to

E_w = \frac{\sum_{i=1}^{N} SNR_i \cdot E_i}{\sum_{i=1}^{N} SNR_i}

the pitch angle weighted average in the motion point cloud information of the current frame signal can be obtained.

Here R_w is the radial distance weighted average; SNR_i is the i-th target signal-to-noise ratio datum in the point cloud information of the current frame signal; R_i is the i-th target radial distance datum in the motion point cloud information of the current frame signal; V_w is the radial velocity weighted average; V_i is the i-th target radial velocity datum in the motion point cloud information of the current frame signal; A_w is the azimuth angle weighted average; A_i is the i-th target azimuth angle datum in the motion point cloud information of the current frame signal; E_w is the pitch angle weighted average; E_i is the i-th target pitch angle datum in the motion point cloud information of the current frame signal; and N is the number of detection points corresponding to the current frame signal.
The target radial distance, radial velocity, azimuth angle and pitch angle data of each detection point in the current frame signal are weighted by the target signal-to-noise ratio data of each detection point, yielding the radial distance, radial velocity, azimuth angle and pitch angle weighted averages of the current frame signal. Calculating the feature vector of the human body from these per-frame weighted averages characterizes the motion state of the human body well and improves the accuracy of human fall feature detection.
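A minimal sketch of the SNR weighting above: each detection point i contributes its value (radial distance R_i, velocity V_i, azimuth A_i, or pitch E_i) weighted by its SNR_i. The function name and sample values are illustrative, not from the patent.

```python
# SNR-weighted average over the N detection points of one frame:
#   x_w = sum(SNR_i * x_i) / sum(SNR_i)
# Stronger (higher-SNR) echoes dominate the per-frame estimate.
def snr_weighted_average(values, snrs):
    total = sum(snrs)
    return sum(s * v for s, v in zip(snrs, values)) / total

snrs = [10.0, 20.0, 30.0]      # hypothetical per-detection SNRs
radial_dist = [2.0, 2.2, 2.4]  # hypothetical R_i values in meters
r_w = snr_weighted_average(radial_dist, snrs)
```

The same function applies unchanged to the velocity, azimuth, and pitch columns; with equal SNRs it reduces to the plain mean.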
And step S104, calculating the characteristic vector of the human body in the M frame signals according to the weighted average value of the motion point cloud information in the M frame signals.
Wherein M is a positive integer greater than 0. For example, M may be 25, and a specific value of M may be determined according to a duration of the human fall, which is not limited in this embodiment of the present invention.
For M frames of signals, the number of detection points corresponding to each frame, and the radial distance, radial velocity, azimuth angle and pitch angle weighted averages of each frame, can be obtained according to the methods in step S102 and step S103 described above. The feature vector of the human body in the M frames of signals is then calculated based on the radial distance, radial velocity, azimuth angle and pitch angle weighted averages of each frame of signals.
Optionally, referring to fig. 3, calculating a feature vector of a human body in the M-frame signal according to the weighted average of the motion point cloud information in the M-frame signal may include:
step S401, calculating an average number of detection points of the M-frame signals according to the number of detection points corresponding to each frame signal in the M-frame signals.
Optionally, according to

P_{avg} = \frac{1}{M} \sum_{n=1}^{M} P(n)

the average number of detection points of the M frames of signals can be calculated, where P_avg is the average number of detection points of the M frames of signals, P(n) is the number of detection points corresponding to the n-th frame signal, and M is the number of frames of the signals.
Step S402, calculating the azimuth angle variance and the motion range of the M frame signals according to the weighted average value of the motion point cloud information in the M frame signals.
Optionally, calculating the azimuth angle variance of the M-frame signal according to the weighted average of the motion point cloud information in the M-frame signal may include: and calculating the azimuth angle variance of the M frames of signals according to the corresponding azimuth angle weighted average value in the motion point cloud information of each frame of signals in the M frames of signals. Calculating the motion range of the M-frame signal according to the weighted average of the motion point cloud information in the M-frame signal may include: and respectively calculating a distance range, a speed range and a pitch angle range according to the maximum weighted average value and the minimum weighted average value in the weighted average values of the motion point cloud information in the M frames of signals.
Wherein the motion range may include: a distance range, a speed range and a pitch angle range.
Optionally, according to
Astd=(1/M)·∑n=1..M[Aw(n)−(1/M)·∑m=1..MAw(m)]²
the azimuth angle variance of the M frames of signals can be calculated.
Wherein Astd is the azimuth angle variance, Aw(n) is the corresponding azimuth angle weighted average in the motion point cloud information of the nth frame signal in the M frames of signals, Aw(m) is the corresponding azimuth angle weighted average in the motion point cloud information of the mth frame signal in the M frames of signals, and M is the number of frames of signals.
According to Rrange=max[Rw(1),Rw(2),…Rw(M)]-min[Rw(1),Rw(2),…Rw(M)], the distance range is calculated.
According to Vrange=max[Vw(1),Vw(2),…Vw(M)]-min[Vw(1),Vw(2),…Vw(M)], the speed range is calculated.
According to Erange=max[Ew(1),Ew(2),…Ew(M)]-min[Ew(1),Ew(2),…Ew(M)], the pitch angle range is calculated.
Wherein Rrange is the distance range; Rw(1),Rw(2),…Rw(M) are the corresponding radial distance weighted averages in the motion point cloud information of the 1st, 2nd, …, Mth frame signals in the M frames of signals; max[·] is the maximum value of the array and min[·] is the minimum value of the array; Vrange is the speed range; Vw(1),Vw(2),…Vw(M) are the corresponding radial velocity weighted averages in the motion point cloud information of the 1st, 2nd, …, Mth frame signals in the M frames of signals; Erange is the pitch angle range; and Ew(1),Ew(2),…Ew(M) are the corresponding pitch angle weighted averages in the motion point cloud information of the 1st, 2nd, …, Mth frame signals in the M frames of signals.
And S403, determining the characteristic vector of the human body in the M frames of signals according to the motion range, the azimuth angle variance and the average detection point number.
Illustratively, according to the average number of detection points Pavg of the M frames of signals, the azimuth angle variance Astd of the M frames of signals, the distance range Rrange, the speed range Vrange and the pitch angle range Erange of the M frames of signals, the feature vector of the human body in the M frames of signals can be determined as EV=[Pavg, Astd, Erange, Rrange, Vrange].
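For illustration only (not part of the claimed method), steps S401 to S403 can be sketched in Python with NumPy. The per-frame inputs — detection-point counts P(n) and the weighted averages Aw(n), Rw(n), Vw(n), Ew(n) — are assumed to be given as arrays; taking the azimuth variance as the population variance of the per-frame azimuth weighted averages is an assumption consistent with the surrounding definitions.

```python
import numpy as np

def fall_feature_vector(p, a_w, r_w, v_w, e_w):
    """Assemble EV = [Pavg, Astd, Erange, Rrange, Vrange] for one M-frame window.

    p, a_w, r_w, v_w, e_w are length-M arrays holding, per frame, the
    detection-point count and the weighted averages of azimuth angle, radial
    distance, radial velocity and pitch angle (argument names are illustrative).
    """
    p_avg = np.mean(p)                    # step S401: average detection-point count
    a_std = np.var(a_w)                   # step S402: azimuth angle variance (population)
    r_range = np.max(r_w) - np.min(r_w)   # distance range over the window
    v_range = np.max(v_w) - np.min(v_w)   # speed range
    e_range = np.max(e_w) - np.min(e_w)   # pitch angle range
    # step S403: order the features as EV = [Pavg, Astd, Erange, Rrange, Vrange]
    return np.array([p_avg, a_std, e_range, r_range, v_range])

# A toy M = 4 frame window with made-up per-frame values.
ev = fall_feature_vector(
    p=np.array([5, 7, 6, 8]),
    a_w=np.array([0.1, 0.3, 0.2, 0.4]),
    r_w=np.array([2.0, 2.5, 3.0, 2.2]),
    v_w=np.array([0.5, 1.5, 2.5, 0.8]),
    e_w=np.array([0.0, 0.2, 0.4, 0.1]),
)
```

`ev` then holds the five window-level features in the EV order used above.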
The variance or range is calculated through the weighted average of the motion point cloud information in the M frames of signals, and the variance or range is used as the feature vector of the human body in the M frames of signals, so that the feature vectors of the distance dimension, the angle dimension and the Doppler dimension of the motion features of the human body can be obtained.
And S105, detecting whether the human body falls down according to the characteristic vector of the human body in the M frames of signals to obtain a detection result.
According to the method for obtaining the characteristic vector of the human body in the M frames of signals, the characteristic vectors of the human body corresponding to the M frames of signals can be obtained, whether the human body falls down or not is detected according to the characteristic vectors of the human body corresponding to the M frames of signals, and a detection result is obtained.
Optionally, detecting whether the human body falls or not according to the feature vector of the human body in the M frames of signals to obtain a detection result, which may include: based on the method for calculating the feature vector of the human body in the M frames of signals, the feature vector of the fallen human body and the feature vector of the non-fallen human body are calculated. And training a classifier according to the feature vector of the fallen human body and the feature vector of the non-fallen human body. And inputting the feature vectors of the human body in the M frames of signals into the trained classifier to obtain a classification detection result of whether the human body falls down.
The classifier, such as a neural network, a support vector machine or a random forest, is trained on a certain number of feature vectors of fallen human bodies and feature vectors of non-fallen human bodies. The feature vector of the human body in the M frames of signals is then input into the trained classifier, so that the trained classifier can classify it and output a classification detection result of whether the human body falls.
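As an illustrative stand-in only: the text names neural networks, support vector machines and random forests, none of which is implemented here. A dependency-free nearest-centroid classifier over synthetic feature vectors shows the train-then-classify flow; every numeric value below is made up, not taken from the patent.

```python
import numpy as np

# Synthetic labelled feature vectors EV = [Pavg, Astd, Erange, Rrange, Vrange].
rng = np.random.default_rng(0)
fall_fv = rng.normal(loc=[8.0, 0.5, 0.6, 1.5, 2.5], scale=0.2, size=(50, 5))
nonfall_fv = rng.normal(loc=[4.0, 0.1, 0.1, 0.3, 0.5], scale=0.2, size=(50, 5))

# "Training" a nearest-centroid classifier: one centroid per class.
centroids = np.stack([nonfall_fv.mean(axis=0),   # class 0: non-fall
                      fall_fv.mean(axis=0)])     # class 1: fall

def classify(ev):
    """Return 1 (fall) or 0 (non-fall) for a single feature vector."""
    dists = np.linalg.norm(centroids - ev, axis=1)
    return int(np.argmin(dists))

# A new M-frame window's feature vector, close to the fall cluster.
result = classify(np.array([7.9, 0.48, 0.55, 1.4, 2.4]))
```

Any of the classifier families named above would replace `classify` in a real system; the surrounding train/predict flow stays the same.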
The method for detecting human body fall features can, based on a broadband array radar, sequentially receive a plurality of echo signals; process the local oscillator signals and the corresponding echo signals in the current frame signal to obtain the target signal-to-noise ratio data and the motion point cloud information of the current frame signal; process the motion point cloud information of the current frame signal according to the target signal-to-noise ratio data of the current frame signal to obtain the weighted averages of the motion point cloud information of the current frame signal; calculate the feature vector of the human body in the M frames of signals according to the weighted averages of the motion point cloud information in the M frames of signals; and detect whether the human body falls according to the feature vector of the human body in the M frames of signals to obtain a detection result. Because the feature vector of the human body is obtained from signals acquired by the broadband array radar, it is unaffected by illumination and does not image the human body, which helps protect user privacy. Moreover, from the signals acquired by the broadband array radar, data including the distance, velocity, azimuth angle, pitch angle and signal-to-noise ratio of the human body can be obtained, and a feature vector covering the distance, angle and Doppler dimensions of the human body can be derived from these data, representing the motion of the human body more comprehensively. Detecting whether the human body falls based on such a feature vector yields a higher fall recognition rate while reducing the false alarm rate.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Corresponding to the method for detecting a fall characteristic of a human body described in the above embodiments, fig. 4 shows a schematic diagram of a device for detecting a fall characteristic of a human body provided by an embodiment of the invention. As shown in fig. 4, the apparatus may include: a receiving module 41, a signal processing module 42, a point cloud processing module 43, a feature vector calculation module 44, and a detection module 45.
The receiving module 41 is configured to receive a plurality of echo signals in sequence, where the echo signals are signals that are reflected back by a human body from a plurality of continuous local oscillator signals in each frame of signal transmitted by a radar.
The signal processing module 42 is configured to process a local oscillation signal and a corresponding echo signal in a current frame signal to obtain point cloud information of the current frame signal; the point cloud information of the current frame signal comprises target signal-to-noise ratio data and motion point cloud information.
The point cloud processing module 43 is configured to process the motion point cloud information of the current frame signal according to the target signal-to-noise ratio data of the current frame signal, so as to obtain a weighted average value of the motion point cloud information of the current frame signal.
And the feature vector calculation module 44 is configured to calculate a feature vector of a human body in the M frames of signals according to the obtained weighted average value of each piece of motion point cloud information in the M frames of signals, where M is a positive integer greater than 0.
And the detection module 45 is configured to detect whether the human body falls down according to the feature vector of the human body in the M frames of signals, so as to obtain a detection result.
Optionally, the motion point cloud information includes: target radial distance data, target radial velocity data, target azimuth angle data, and target pitch angle data; the signal processing module 42 may be configured to perform frequency mixing, filtering and analog-to-digital conversion on the local oscillator signal in the current frame signal and the corresponding echo signal to obtain a digital signal array of the current frame signal; performing unit average constant false alarm detection on the digital signal array of the current frame signal to obtain the number of detection points corresponding to the current frame signal and target signal-to-noise ratio data corresponding to each detection point; performing fast Fourier transform on the digital signal array of the current frame signal to obtain target radial distance data and target radial speed data corresponding to each detection point; and carrying out digital beam forming angle measurement on the digital signal array of the current frame signal to obtain target azimuth angle data and target pitch angle data corresponding to each detection point.
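For illustration only (outside the claimed apparatus), the unit-average (cell-averaging) constant false alarm detection step can be sketched over a 1-D power profile; the training/guard cell counts and the threshold scale below are arbitrary assumptions, and real processing would run over the radar's range-Doppler data.

```python
import numpy as np

def ca_cfar_1d(power, n_train=8, n_guard=2, scale=4.0):
    """1-D cell-averaging CFAR over a power profile.

    For each cell under test, the noise level is estimated as the mean power of
    n_train training cells on each side, skipping n_guard guard cells; a
    detection is declared when the cell exceeds scale times that estimate. Each
    detection is returned as (cell index, cell power / noise estimate), i.e. an
    SNR estimate. All parameter values are illustrative assumptions.
    """
    detections = []
    edge = n_train + n_guard
    for i in range(edge, len(power) - edge):
        lead = power[i - n_guard - n_train : i - n_guard]
        lag = power[i + n_guard + 1 : i + n_guard + 1 + n_train]
        noise = (lead.sum() + lag.sum()) / (2 * n_train)
        if power[i] > scale * noise:
            detections.append((i, power[i] / noise))
    return detections

# Flat unit noise floor with one strong target at cell 30.
profile = np.ones(64)
profile[30] = 20.0
hits = ca_cfar_1d(profile)
```

With the flat profile above, only the target at index 30 crosses the threshold, and its per-detection SNR estimate is the quantity the method carries forward as the weighting factor.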
Optionally, the point cloud processing module 43 may be configured to: according to
Rw=(SNR1·R1+SNR2·R2+…+SNRN·RN)/(SNR1+SNR2+…+SNRN)
obtain the corresponding radial distance weighted average in the motion point cloud information of the current frame signal; according to
Vw=(SNR1·V1+SNR2·V2+…+SNRN·VN)/(SNR1+SNR2+…+SNRN)
obtain the corresponding radial velocity weighted average in the motion point cloud information of the current frame signal; according to
Aw=(SNR1·A1+SNR2·A2+…+SNRN·AN)/(SNR1+SNR2+…+SNRN)
obtain the corresponding azimuth angle weighted average in the motion point cloud information of the current frame signal; and according to
Ew=(SNR1·E1+SNR2·E2+…+SNRN·EN)/(SNR1+SNR2+…+SNRN)
obtain the corresponding pitch angle weighted average in the motion point cloud information of the current frame signal.
Wherein Rw is the radial distance weighted average, SNRi is the ith target signal-to-noise ratio datum in the point cloud information of the current frame signal, Ri is the ith target radial distance datum in the motion point cloud information of the current frame signal, Vw is the radial velocity weighted average, Vi is the ith target radial velocity datum in the motion point cloud information of the current frame signal, Aw is the azimuth angle weighted average, Ai is the ith target azimuth angle datum in the motion point cloud information of the current frame signal, Ew is the pitch angle weighted average, Ei is the ith target pitch angle datum in the motion point cloud information of the current frame signal, and N is the number of detection points corresponding to the current frame signal.
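A minimal sketch of the SNR-weighted averaging for one frame follows; it assumes each weighted average takes the standard form Xw = sum(SNRi·Xi)/sum(SNRi), since the patent's formula images are not rendered here, and all numeric values are made up.

```python
import numpy as np

def snr_weighted_averages(snr, r, v, a, e):
    """SNR-weighted averages over the N detection points of one frame.

    Assumes each weighted average is Xw = sum(SNRi * Xi) / sum(SNRi), a
    standard SNR-weighted mean; the exact weighting in the patent's
    unrendered formulas may differ.
    """
    w = np.asarray(snr, dtype=float)
    w = w / w.sum()                              # normalised SNR weights
    return tuple(float(np.dot(w, x)) for x in (r, v, a, e))

# One frame with N = 3 detection points (all values are illustrative).
rw, vw, aw, ew = snr_weighted_averages(
    snr=[10.0, 20.0, 10.0],   # target SNR per detection point
    r=[2.0, 2.5, 3.0],        # radial distances (m)
    v=[0.5, 1.0, 1.5],        # radial velocities (m/s)
    a=[0.1, 0.2, 0.3],        # azimuth angles (rad)
    e=[0.0, 0.1, 0.2],        # pitch angles (rad)
)
```

Weighting by SNR lets strong, reliable detections dominate the per-frame summary while weak, noisy ones contribute little.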
Optionally, the feature vector calculating module 44 may be configured to calculate an average number of detection points of the M-frame signals according to the number of detection points corresponding to each frame signal in the M-frame signals; calculating the azimuth angle variance and the motion range of the M frame signals according to the weighted average value of each motion point cloud information in the M frame signals; and determining the characteristic vector of the human body in the M frame signals according to the motion range, the azimuth angle variance and the average detection point number.
Optionally, the feature vector calculating module 44 may be configured to calculate an azimuth angle variance of the M frames of signals according to a corresponding azimuth angle weighted average in the motion point cloud information of each frame of signals in the M frames of signals.
Optionally, the motion range includes: a distance range, a speed range and a pitch angle range; the feature vector calculation module 44 may be configured to calculate the distance range, the speed range and the pitch angle range according to the maximum weighted average and the minimum weighted average among the weighted averages of the motion point cloud information in the M frames of signals.
Optionally, the feature vector calculation module 44 may be configured to, according to
Astd=(1/M)·∑n=1..M[Aw(n)−(1/M)·∑m=1..MAw(m)]²
calculate the azimuth angle variance of the M frames of signals.
Wherein Astd is the azimuth angle variance, Aw(n) is the corresponding azimuth angle weighted average in the motion point cloud information of the nth frame signal in the M frames of signals, Aw(m) is the corresponding azimuth angle weighted average in the motion point cloud information of the mth frame signal in the M frames of signals, and M is the number of frames of signals.
Optionally, the feature vector calculation module 44 may be configured to: calculate the distance range according to Rrange=max[Rw(1),Rw(2),…Rw(M)]-min[Rw(1),Rw(2),…Rw(M)]; calculate the speed range according to Vrange=max[Vw(1),Vw(2),…Vw(M)]-min[Vw(1),Vw(2),…Vw(M)]; and calculate the pitch angle range according to Erange=max[Ew(1),Ew(2),…Ew(M)]-min[Ew(1),Ew(2),…Ew(M)].
Wherein Rrange is the distance range; Rw(1),Rw(2),…Rw(M) are the corresponding radial distance weighted averages in the motion point cloud information of the 1st, 2nd, …, Mth frame signals in the M frames of signals; max[·] is the maximum value of the array and min[·] is the minimum value of the array; Vrange is the speed range; Vw(1),Vw(2),…Vw(M) are the corresponding radial velocity weighted averages in the motion point cloud information of the 1st, 2nd, …, Mth frame signals in the M frames of signals; Erange is the pitch angle range; and Ew(1),Ew(2),…Ew(M) are the corresponding pitch angle weighted averages in the motion point cloud information of the 1st, 2nd, …, Mth frame signals in the M frames of signals.
Optionally, the detection module 45 may be configured to calculate feature vectors of fallen human bodies and feature vectors of non-fallen human bodies based on the method of calculating the feature vector of the human body in the M frames of signals; train a classifier according to the feature vectors of the fallen human bodies and the feature vectors of the non-fallen human bodies; and input the feature vector of the human body in the M frames of signals into the trained classifier to obtain a classification detection result of whether the human body falls.
According to the above device for detecting human body fall features, the receiving module sequentially receives a plurality of echo signals formed when the plurality of continuous local oscillator signals in each frame of signal transmitted by the radar are reflected back by the human body; the signal processing module processes the local oscillator signals and the corresponding echo signals in the current frame signal to obtain the target signal-to-noise ratio data and the motion point cloud information of the current frame signal; and the point cloud processing module processes the motion point cloud information of the current frame signal according to the target signal-to-noise ratio data of the current frame signal to obtain the weighted averages of the motion point cloud information of the current frame signal. The feature vector calculation module calculates the feature vector of the human body in the M frames of signals from the weighted averages of the motion point cloud information of each frame of signal, yielding a feature vector covering the distance, angle and Doppler dimensions of the human body; the detection module then detects whether the human body falls according to the feature vector of the human body in the M frames of signals to obtain a detection result. Because the feature vector includes the distance, angle and Doppler dimensions of the human body, falls can be detected more accurately, improving the accuracy of human fall detection and reducing the false alarm probability.
Fig. 5 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 5, the terminal device 500 of this embodiment includes: a processor 501, a memory 502 and a computer program 503, such as a detection program for fall characteristics of a person, stored in the memory 502 and executable on the processor 501. The processor 501 executes the computer program 503 to implement the steps in the embodiment of the human body fall detection method, such as steps S101 to S105 shown in fig. 1 or steps S401 to S403 shown in fig. 3, and the processor 501 executes the computer program 503 to implement the functions of the modules in the embodiments of the apparatuses, such as the modules 41 to 45 shown in fig. 4.
Illustratively, the computer program 503 may be partitioned into one or more program modules that are stored in the memory 502 and executed by the processor 501 to implement the present invention. The one or more program modules may be a series of computer program instruction segments capable of performing specific functions for describing the execution process of the computer program 503 in the detecting apparatus for a fall characteristic of a human body or the terminal device 500. For example, the computer program 503 may be divided into a receiving module 41, a signal processing module 42, a point cloud processing module 43, a feature vector calculating module 44, and a detecting module 45, and specific functions of the modules are shown in fig. 4, which are not described in detail herein.
The terminal device 500 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 501, a memory 502. Those skilled in the art will appreciate that fig. 5 is merely an example of a terminal device 500 and is not intended to limit the terminal device 500 and may include more or fewer components than those shown, or some components may be combined, or different components, for example, the terminal device may also include input output devices, network access devices, buses, etc.
The Processor 501 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 502 may be an internal storage unit of the terminal device 500, such as a hard disk or a memory of the terminal device 500. The memory 502 may also be an external storage device of the terminal device 500, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 500. Further, the memory 502 may also include both an internal storage unit and an external storage device of the terminal device 500. The memory 502 is used for storing the computer programs and other programs and data required by the terminal device 500. The memory 502 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method for detecting falling features of a human body is characterized by comprising the following steps:
sequentially receiving a plurality of echo signals, wherein the echo signals are signals which are reflected by a human body from a plurality of continuous local oscillator signals in each frame of signals transmitted by a radar;
processing a local oscillation signal and a corresponding echo signal in a current frame signal to obtain point cloud information of the current frame signal; the point cloud information of the current frame signal comprises target signal-to-noise ratio data and motion point cloud information;
processing the motion point cloud information of the current frame signal according to the target signal-to-noise ratio data of the current frame signal to obtain a weighted average value of the motion point cloud information of the current frame signal;
calculating a characteristic vector of a human body in the M frames of signals according to the weighted average value of the motion point cloud information in the M frames of signals, wherein M is a positive integer greater than 0;
and detecting whether the human body falls down according to the characteristic vector of the human body in the M frame signals to obtain a detection result.
2. The method for detecting a fall characteristic of a human body according to claim 1, wherein
the motion point cloud information comprises: target radial distance data, target radial velocity data, target azimuth angle data, and target pitch angle data;
the processing of the local oscillator signal and the corresponding echo signal in the current frame signal to obtain the point cloud information of the current frame signal includes:
performing frequency mixing, filtering and analog-to-digital conversion on the local oscillation signal in the current frame signal and the corresponding echo signal to obtain a digital signal array of the current frame signal;
performing unit average constant false alarm detection on the digital signal array of the current frame signal to obtain the number of detection points corresponding to the current frame signal and target signal-to-noise ratio data corresponding to each detection point;
performing fast Fourier transform on the digital signal array of the current frame signal to obtain target radial distance data and target radial speed data corresponding to each detection point;
and carrying out digital beam forming angle measurement on the digital signal array of the current frame signal to obtain target azimuth angle data and target pitch angle data corresponding to each detection point.
3. The method for detecting a human fall feature of claim 2, wherein the processing the motion point cloud information of the current frame signal according to the target signal-to-noise ratio data of the current frame signal to obtain a weighted average of the motion point cloud information of the current frame signal comprises:
according to
Rw=(SNR1·R1+SNR2·R2+…+SNRN·RN)/(SNR1+SNR2+…+SNRN)
obtaining a corresponding radial distance weighted average in the motion point cloud information of the current frame signal;
according to
Vw=(SNR1·V1+SNR2·V2+…+SNRN·VN)/(SNR1+SNR2+…+SNRN)
obtaining a corresponding radial velocity weighted average in the motion point cloud information of the current frame signal;
according to
Aw=(SNR1·A1+SNR2·A2+…+SNRN·AN)/(SNR1+SNR2+…+SNRN)
obtaining a corresponding azimuth angle weighted average in the motion point cloud information of the current frame signal;
according to
Ew=(SNR1·E1+SNR2·E2+…+SNRN·EN)/(SNR1+SNR2+…+SNRN)
obtaining a corresponding pitch angle weighted average in the motion point cloud information of the current frame signal;
wherein Rw is the radial distance weighted average, SNRi is the ith target signal-to-noise ratio datum in the point cloud information of the current frame signal, Ri is the ith target radial distance datum in the motion point cloud information of the current frame signal, Vw is the radial velocity weighted average, Vi is the ith target radial velocity datum in the motion point cloud information of the current frame signal, Aw is the azimuth angle weighted average, Ai is the ith target azimuth angle datum in the motion point cloud information of the current frame signal, Ew is the pitch angle weighted average, Ei is the ith target pitch angle datum in the motion point cloud information of the current frame signal, and N is the number of detection points corresponding to the current frame signal.
4. The method for detecting a fall characteristic of a human body according to claim 3, wherein the calculating a feature vector of the human body in the M frames of signals according to the obtained weighted average of each motion point cloud information in the M frames of signals comprises:
calculating the average number of detection points of the M frame signals according to the number of the detection points corresponding to each frame signal in the M frame signals;
calculating the azimuth angle variance and the motion range of the M frame signals according to the weighted average value of each motion point cloud information in the M frame signals;
and determining the characteristic vector of the human body in the M frame signals according to the motion range, the azimuth angle variance and the average detection point number.
5. The method for detecting a fall characteristic of a human body according to claim 4, wherein the calculating the azimuth angle variance of the M frames of signals according to the weighted average of each motion point cloud information in the M frames of signals comprises:
and calculating the azimuth angle variance of the M frames of signals according to the corresponding azimuth angle weighted average value in the motion point cloud information of each frame of signals in the M frames of signals.
6. The method for detecting a fall characteristic of a human body according to claim 4, wherein the motion range comprises: a distance range, a speed range and a pitch angle range;
the calculating the motion range of the M frame signals according to the weighted average value of the motion point cloud information in the M frame signals comprises the following steps:
and respectively calculating the distance range, the speed range and the pitch angle range according to the maximum weighted average value and the minimum weighted average value in the weighted average values of the motion point cloud information in the M frames of signals.
7. The method for detecting a fall characteristic of a human body according to claim 5, wherein the calculating the variance of the azimuth angle of the M frames of signals according to the corresponding weighted average of the azimuth angle in the motion point cloud information of each frame of signals in the M frames of signals comprises:
according to
A_std = sqrt{ (1/M) · Σ_{n=1}^{M} [ A_w(n) − (1/M) · Σ_{m=1}^{M} A_w(m) ]² }
calculating the azimuth angle variance of the M frame signals;
wherein A_std is the azimuth angle variance, A_w(n) is the corresponding azimuth angle weighted average in the motion point cloud information of the n-th frame signal in the M frame signals, A_w(m) is the corresponding azimuth angle weighted average in the motion point cloud information of the m-th frame signal in the M frame signals, and M is the number of frames of the signals.
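The azimuth angle statistic of claim 7 can be sketched in Python (a minimal illustration: the claim calls the quantity a variance, but the symbol A_std suggests a population standard deviation over the per-frame azimuth weighted averages; the 1/M normalization and the function name are assumptions, since the original formula is only available as a figure):

```python
import numpy as np

def azimuth_std(A_w):
    """Spread of the per-frame azimuth weighted averages A_w(1)..A_w(M).

    Population standard deviation (divide by M); the normalization is
    an assumption about the formula behind the patent's figure.
    """
    A_w = np.asarray(A_w, dtype=float)
    M = A_w.size
    mean = A_w.sum() / M                          # (1/M) * sum of A_w(m)
    return float(np.sqrt(((A_w - mean) ** 2).sum() / M))
```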
8. The method for detecting the fall characteristics of the human body according to claim 6, wherein the calculating the distance range, the speed range and the pitch angle range according to the maximum weighted average and the minimum weighted average of the weighted averages of the motion point clouds in the M frames of signals respectively comprises:
according to R_range = max[R_w(1), R_w(2), …, R_w(M)] − min[R_w(1), R_w(2), …, R_w(M)], calculating the distance range;
according to V_range = max[V_w(1), V_w(2), …, V_w(M)] − min[V_w(1), V_w(2), …, V_w(M)], calculating the speed range;
according to E_range = max[E_w(1), E_w(2), …, E_w(M)] − min[E_w(1), E_w(2), …, E_w(M)], calculating the pitch angle range;
wherein R_range is the distance range, R_w(1), R_w(2), …, R_w(M) are the corresponding radial distance weighted averages in the motion point cloud information of the 1st, 2nd, …, M-th frame signals among the M frame signals, max[·] is the maximum value in the matrix, min[·] is the minimum value in the matrix, V_range is the speed range, V_w(1), V_w(2), …, V_w(M) are the corresponding radial velocity weighted averages in the motion point cloud information of the 1st, 2nd, …, M-th frame signals among the M frame signals, E_range is the pitch angle range, and E_w(1), E_w(2), …, E_w(M) are the corresponding pitch angle weighted averages in the motion point cloud information of the 1st, 2nd, …, M-th frame signals among the M frame signals.
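The three range computations of claim 8 amount to max-minus-min over the per-frame weighted averages; a minimal Python sketch (function and argument names are illustrative, not from the patent):

```python
import numpy as np

def motion_ranges(R_w, V_w, E_w):
    """Distance, speed and pitch angle ranges over M frames.

    Each input is the list of per-frame weighted averages, e.g.
    R_w = [R_w(1), ..., R_w(M)]; the range is max minus min.
    """
    rng = lambda x: float(np.max(x) - np.min(x))
    return rng(R_w), rng(V_w), rng(E_w)
```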
9. The method for detecting the fall characteristics of a human body according to any one of claims 1 to 8, wherein the detecting whether the human body falls according to the feature vector of the human body in the M frames of signals to obtain the detection result includes:
calculating feature vectors of fallen human bodies and feature vectors of non-fallen human bodies based on the method for calculating the feature vector of the human body in the M frames of signals;
training a classifier according to the feature vectors of the fallen human bodies and the feature vectors of the non-fallen human bodies;
and inputting the feature vectors of the human body in the M frames of signals into the trained classifier to obtain a classification detection result of whether the human body falls down.
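Claim 9 leaves the classifier type unspecified; one of the cited references (CN109171738A) uses KNN, so a minimal nearest-neighbour sketch in Python is shown here (the function names, the choice of KNN, and the value of k are all illustrative assumptions, not the patent's method):

```python
import numpy as np

def train_knn(fall_features, nonfall_features):
    """Stack labeled training feature vectors: 1 = fall, 0 = no fall."""
    X = np.vstack([fall_features, nonfall_features]).astype(float)
    y = np.array([1] * len(fall_features) + [0] * len(nonfall_features))
    return X, y

def predict_fall(model, feature_vector, k=3):
    """Classify an M-frame feature vector by majority vote of its
    k nearest training vectors (Euclidean distance)."""
    X, y = model
    d = np.linalg.norm(X - np.asarray(feature_vector, dtype=float), axis=1)
    nearest = y[np.argsort(d)[:k]]
    return int(nearest.sum() * 2 > nearest.size)   # 1 = fall detected
```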
10. A device for detecting a fall characteristic of a person, comprising:
the receiving module is used for sequentially receiving a plurality of echo signals, wherein the echo signals are the reflections, from a human body, of a plurality of continuous local oscillator signals in each frame of signals transmitted by a radar;
the signal processing module is used for processing a local oscillator signal and the corresponding echo signal in a current frame signal to obtain point cloud information of the current frame signal; the point cloud information of the current frame signal comprises target signal-to-noise ratio data and motion point cloud information;
the point cloud processing module is used for processing the motion point cloud information of the current frame signal according to the target signal-to-noise ratio data of the current frame signal to obtain a weighted average value of the motion point cloud information of the current frame signal;
the characteristic vector calculation module is used for calculating a characteristic vector of a human body in the M frames of signals according to the weighted average value of each piece of motion point cloud information in the M frames of signals, wherein M is a positive integer greater than 0;
and the detection module is used for detecting whether the human body falls down according to the characteristic vector of the human body in the M frame signals to obtain a detection result.
CN202011023757.4A 2020-09-25 2020-09-25 Method and device for detecting human body falling features Active CN112198507B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011023757.4A CN112198507B (en) 2020-09-25 2020-09-25 Method and device for detecting human body falling features


Publications (2)

Publication Number Publication Date
CN112198507A true CN112198507A (en) 2021-01-08
CN112198507B CN112198507B (en) 2022-09-30

Family

ID=74008357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011023757.4A Active CN112198507B (en) 2020-09-25 2020-09-25 Method and device for detecting human body falling features

Country Status (1)

Country Link
CN (1) CN112198507B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104637242A (en) * 2013-11-12 2015-05-20 广州华久信息科技有限公司 Elder falling detection method and system based on multiple classifier integration
US20160377704A1 (en) * 2015-06-29 2016-12-29 Echocare Technologies Ltd. Human posture feature extraction in personal emergency response systems and methods
CN106334286A (en) * 2016-10-31 2017-01-18 长安大学 Detection-identification and falling-prevention electronic control device for running apparatus and method
CN109171738A (en) * 2018-07-13 2019-01-11 杭州电子科技大学 Fall detection method based on human body acceleration multiple features fusion and KNN
CN109581361A (en) * 2018-11-22 2019-04-05 九牧厨卫股份有限公司 A kind of detection method, detection device, terminal and detection system
EP3695783A1 (en) * 2019-02-15 2020-08-19 Origin Wireless, Inc. Method, apparatus, and system for wireless gait recognition


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU ZHIMENG et al.: "Two-person behavior recognition method with FMCW radar based on spatial clustering", Journal of Fuzhou University (Natural Science Edition) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114767087A (en) * 2022-06-20 2022-07-22 精华隆智慧感知科技(深圳)股份有限公司 Multi-target respiratory frequency estimation method, device, equipment and storage medium
CN114767087B (en) * 2022-06-20 2022-10-11 精华隆智慧感知科技(深圳)股份有限公司 Multi-target respiratory frequency estimation method, device, equipment and storage medium
CN117331047A (en) * 2023-12-01 2024-01-02 德心智能科技(常州)有限公司 Human behavior data analysis method and system based on millimeter wave radar


Similar Documents

Publication Publication Date Title
US20220252712A1 (en) Human Detection Method and Device, Electronic Apparatus and Storage Medium
CN112394334B (en) Clustering device and method for radar reflection points and electronic equipment
JP2020071226A (en) Fall detection method and apparatus
US6437728B1 (en) A-scan ISAR target recognition system and method
CN111398943B (en) Target posture determining method and terminal equipment
CN111427032B (en) Room wall contour recognition method based on millimeter wave radar and terminal equipment
CN112198507B (en) Method and device for detecting human body falling features
CN111142102B (en) Respiratory data calculation method and related equipment
WO2021121361A1 (en) Distance measurement method and distance measurement apparatus
CN113009442B (en) Method and device for identifying multipath target of radar static reflecting surface
CN116106855B (en) Tumble detection method and tumble detection device
CN114942434A (en) Fall attitude identification method and system based on millimeter wave radar point cloud
CN112926218A (en) Method, device, equipment and storage medium for acquiring clearance
WO2019217517A1 (en) Radio frequency (rf) object detection using radar and machine learning
CN113050797A (en) Method for realizing gesture recognition through millimeter wave radar
CN112386248A (en) Method, device and equipment for detecting human body falling and computer readable storage medium
US20240053464A1 (en) Radar Detection and Tracking
CN110837079A (en) Target detection method and device based on radar
Wang et al. A survey of hand gesture recognition based on FMCW radar
Xie et al. Lightweight midrange arm-gesture recognition system from mmwave radar point clouds
JP7484492B2 (en) Radar-based attitude recognition device, method and electronic device
Zhang et al. Synthetic aperture radar ship detection in complex scenes based on multifeature fusion network
CN116027288A (en) Method and device for generating data, electronic equipment and storage medium
CN112859065A (en) Target tracking method and system based on ellipse Hough transform
CN113050057B (en) Personnel detection method and device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant