CN116840836A - Fall detection method, device and radar - Google Patents

Fall detection method, device and radar

Info

Publication number
CN116840836A
CN116840836A CN202310798120.XA
Authority
CN
China
Prior art keywords
height
target
point cloud
frame
energy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310798120.XA
Other languages
Chinese (zh)
Inventor
何文彦
程毅
彭诚诚
秦屹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Whst Co Ltd
Original Assignee
Whst Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Whst Co Ltd filed Critical Whst Co Ltd
Priority to CN202310798120.XA priority Critical patent/CN116840836A/en
Publication of CN116840836A publication Critical patent/CN116840836A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/414Discriminating targets with respect to background clutter

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a fall detection method, a fall detection device and a radar, including the following steps: acquiring a track of a target, and acquiring the associated point cloud of each frame associated with the track of the target; according to a continuous first preset number of frames of associated point clouds, determining the height average value and height change degree of the highest point of the target and the position change degree of the target, where the highest point represents the point cloud with the largest height value in each frame of associated point clouds; judging the scene type of the target according to the height average value, the height change degree and the position change degree; after determining the scene type, determining, for each frame of associated point cloud, an energy height distribution of that frame, where the energy height distribution represents the point cloud energy value of each height interval, and the detected target total height comprises a plurality of continuous but non-overlapping height intervals; and determining whether the target falls according to the scene type and the energy height distribution of each frame of associated point cloud. The invention can improve fall judgment precision.

Description

Fall detection method, device and radar
Technical Field
The invention relates to the technical field of radar detection, in particular to a falling detection method and device and a radar.
Background
With population aging, the safety monitoring of the elderly has attracted increasing attention. Among the potential risk factors in the living environments of the elderly, accidental falls account for a very high proportion, and among fall cases, bathroom falls account for a particularly high share.
Current fall detection methods fall into several types. The first is a visible-light video processing scheme, which obtains a human behavior sequence through an optical sensor and judges fall events through image analysis; because of privacy concerns, video schemes are unsuitable for fall detection in shower scenes. The second is fall detection based on sound signals; due to interference from running water and other noise during a shower, sound-based schemes are likewise unsuitable for shower scenes. The third is wearable devices, which acquire posture and position information by carrying micro sensors such as accelerometers and gyroscopes and judge whether the user has fallen after processing this information; such devices are not readily accepted in a shower scene. The fourth is environmental perception, which mainly uses sensors placed in the bathroom to capture the influence of human behavior on signals and analyzes signal echo data to make a fall judgment. Existing environmental perception schemes such as ultrasonic sensors suffer from a high missed-report rate and low reliability; infrared devices have low reliability due to moisture interference and the like, and high-performance infrared sensors carry a privacy-leakage risk. Fall detection based on millimeter wave radar, in contrast, is a non-contact detection means with strong applicability that does not violate privacy and allows continuous long-term monitoring.
However, current radar-based fall detection mainly relies on models trained on sample sets; because the point clouds of a person and of the water in a shower scene are fused together, the person's signal is submerged, resulting in low detection precision.
Disclosure of Invention
In view of the above, the invention provides a fall detection method, a fall detection device and a radar, which can solve the problem of low radar-based fall detection precision in shower scenes.
In a first aspect, an embodiment of the present invention provides a fall detection method, including:
acquiring a track of a target, and acquiring an associated point cloud of each frame associated with the track of the target;
determining the height average value and height change degree of the highest point of the target and the position change degree of the target according to a continuous first preset number of frames of associated point clouds, wherein the highest point is used for representing the point cloud with the largest height value in each frame of associated point clouds;
judging the scene type of the target according to the height average value and the height change degree of the highest point of the target and the position change degree of the target;
after determining the scene type, for each frame of associated point cloud, determining an energy height distribution of the frame of associated point cloud, wherein the energy height distribution is used for representing a point cloud energy value of each height interval, and the detected target total height comprises a plurality of continuous but non-overlapping height intervals;
and judging whether the target falls down according to the scene type and the energy height distribution of each frame of associated point cloud.
In one possible implementation manner, the determining, according to the height average value, the height variation degree of the highest point of the target, and the position variation degree of the target, the scene type where the target is located includes:
if the height average value of the highest point is larger than or equal to a first preset threshold value, the height change degree of the highest point is smaller than a second preset threshold value, and the position change degree of the target is smaller than a third preset threshold value, judging that the scene type of the target is a shower scene;
the determining whether the target falls down according to the scene type and the energy height distribution of the associated point cloud of each frame comprises:
after judging that the scene type is a shower scene, performing a target state judgment once every second preset number of frames according to the energy height distribution of the associated point clouds, and determining the state of the target, wherein the state of the target includes an active state and an inactive state;
and if the state of the target is changed from the active state to the inactive state, judging that the target falls down.
In one possible implementation manner, the determining, for each frame of the associated point cloud, an energy height distribution of the frame of the associated point cloud includes:
for each height interval, acquiring all point clouds in the frame's associated point cloud whose height values belong to that height interval;
according to the amplitude value of each point cloud, calculating the energy value of the point cloud;
obtaining the point cloud energy value of the height interval according to the sum of the energy values of all the point clouds in the height interval;
and obtaining the energy height distribution of the frame associated point cloud according to the point cloud energy value of each height interval.
In one possible implementation manner, after the scene type is judged to be a shower scene, the performing of a target state judgment once every second preset number of frames according to the energy height distribution of the associated point clouds includes:
acquiring all height intervals with the height larger than or equal to a first preset height as first target height intervals;
counting, among the second preset number of frames of associated point clouds, the ratio of the number of frames meeting a first judgment condition to the second preset number of frames to obtain a first duty ratio, wherein the first judgment condition is that the ratio of the total point cloud energy value of the first target height interval to the total point cloud energy value of all height intervals is larger than a fourth preset threshold value;
and if the first duty ratio is larger than a first preset proportion, determining that the state of the target is the inactive state.
In one possible implementation manner, after the scene type is judged to be a shower scene, the performing of a target state judgment once every second preset number of frames according to the energy height distribution of the associated point clouds includes:
acquiring all height intervals with the height smaller than a second preset height as second target height intervals;
counting, among the second preset number of frames of associated point clouds, the ratio of the number of frames meeting a second judgment condition to the second preset number of frames to obtain a second duty ratio, wherein the second judgment condition is that the ratio of the total point cloud energy value of the second target height interval to the total point cloud energy value of all height intervals is larger than a fifth preset threshold value;
and if the second duty ratio is larger than a second preset proportion, determining the state of the target as an active state.
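The duty-ratio state judgment and the active-to-inactive fall rule described above can be sketched as follows. This is a hedged illustration, not the patent's implementation: the function names, the threshold values, and the representation of each frame as two precomputed energy fractions (high-interval share and low-interval share of total point cloud energy) are all illustrative assumptions.

```python
def judge_state(high_fracs, low_fracs,
                fourth_thresh=0.8, fifth_thresh=0.5,
                first_prop=0.7, second_prop=0.7):
    """Judge the target state over one window (the second preset frame number).

    high_fracs[i]: fraction of frame i's total point cloud energy lying in
        height intervals >= the first preset height (first target intervals).
    low_fracs[i]:  fraction lying in intervals < the second preset height
        (second target intervals).
    All thresholds are illustrative; the patent does not fix their values.
    """
    n = len(high_fracs)
    # First duty ratio: share of frames dominated by high-interval energy
    first_duty = sum(f > fourth_thresh for f in high_fracs) / n
    if first_duty > first_prop:
        return "inactive"
    # Second duty ratio: share of frames dominated by low-interval energy
    second_duty = sum(f > fifth_thresh for f in low_fracs) / n
    if second_duty > second_prop:
        return "active"
    return "unknown"


def detect_fall(states):
    """A fall is judged when the state changes from active to inactive."""
    for prev, cur in zip(states, states[1:]):
        if prev == "active" and cur == "inactive":
            return True
    return False
```

With per-window states computed this way, the fall decision reduces to scanning consecutive window states for an active-to-inactive transition.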
In one possible implementation manner, the determining, according to the continuous first preset frame number of associated point clouds, the height average value, the height variation degree of the highest point of the target and the position variation degree of the target includes:
acquiring a height value of the highest point of each frame of associated point cloud in the associated point cloud of the first preset frame number to obtain a height array;
Determining the height average value and the height change degree of the highest point of the target according to the height array;
acquiring the radial distance of each frame of associated point cloud in the associated point cloud of the first preset frame number to obtain a radial distance array;
and determining the position change degree of the target according to the radial distance array.
In a possible implementation manner, the obtaining the radial distance of each frame of the associated point clouds in the associated point clouds of the first preset frame number includes:
for each frame of the associated point clouds in the first preset frame number of associated point clouds, calculating an x-axis average value and a y-axis average value of all the point clouds according to coordinate values of each point cloud in the frame of associated point clouds after being projected to a geodetic coordinate system, and obtaining a target coordinate point;
and calculating the distance between the target coordinate point and the origin of the geodetic coordinate system to obtain the radial distance of the frame-associated point cloud.
In a second aspect, an embodiment of the present invention provides a fall detection apparatus, including: the device comprises an acquisition module, a first determination module, a first judgment module, a second determination module and a second judgment module;
the acquisition module is used for acquiring the track of the target and acquiring the associated point cloud of each frame associated with the track of the target;
The first determining module is configured to determine, according to a continuous first preset frame number of associated point clouds, a height average value, a height variation degree of a highest point of the target, and a position variation degree of the target, where the highest point is used to represent a point cloud with a maximum height value in each frame of associated point clouds;
the first judging module is used for judging the scene type of the target according to the height average value and the height change degree of the highest point of the target and the position change degree of the target;
the second determining module is configured to determine, after determining the scene type, for each frame of associated point cloud, an energy height distribution of the frame of associated point cloud, where the energy height distribution is used to represent a point cloud energy value of each height interval, and the detected target total height includes a plurality of continuous but non-overlapping height intervals;
the second judging module is used for determining whether the target falls down according to the scene type and the energy height distribution of the associated point cloud of each frame.
In a third aspect, embodiments of the present invention provide a radar comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method as described above in the first aspect or any one of the possible implementations of the first aspect when the computer program is executed.
In one possible implementation, the radar is a millimeter wave radar.
Compared with the prior art, the embodiment of the invention has the beneficial effects that:
according to the embodiment of the invention, the associated point clouds of the target track are analyzed, the height mean value, the height change degree and the change degree of the target are determined through the multi-frame continuous associated point clouds, scene type judgment is carried out based on the height mean value, the height change degree and the change degree of the target, after a scene is determined, the energy height distribution of each frame of associated point clouds is obtained, whether the target falls down or not is judged based on the scene type and the multi-frame energy height distribution, and the falling judgment precision is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an implementation of a fall detection method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a fall detection device according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a radar according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
To make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of an implementation of a fall detection method provided by an embodiment of the present invention is shown, and details are as follows:
In step 101, a track of the target is acquired, and the associated point cloud of each frame that is associated with the track of the target is acquired.
In the embodiment of the invention, optionally, the point cloud data of the target is acquired through a radar, such as a millimeter wave radar, and the track of the target is acquired through target tracking.
The number of targets may differ in different application scenarios. Take a shower scene as an example: a shower scene is private, and when multiple people are present a fall can receive timely rescue or warning, so optionally only the single-person shower scene is considered. The target in the embodiment of the invention is thus a single target, and the track in the bathroom appears as a single, stably existing track. Each frame's associated point cloud is the point cloud associated with this single track.
In step 102, according to the continuous first preset frame number of associated point clouds, the height average value, the height variation degree of the highest point of the target and the position variation degree of the target are determined.
The highest point is used for representing the point cloud with the largest height value in the associated point clouds of each frame.
In the embodiment of the invention, the height change degree may be the standard deviation, variance, etc. of the highest point's height, and the position change degree may be the standard deviation, variance, etc. of the position. The mathematical quantity used to represent the height change degree and the position change degree can be chosen for the specific application scenario, and the embodiment of the invention is not limited thereto.
The position change degree indicates the degree of position jitter of the target; for example, in a shower scene a person typically stands under the shower head, so the position jitter is low. The centroid of the target may be determined from each frame's associated point cloud and the position change degree derived from the centroid's variation; alternatively, the radial distance of the target may be determined from each frame's associated point cloud and the position change degree derived from the radial distance; the position jitter may also be calculated in other ways, and the embodiment of the invention is not limited thereto.
In an optional implementation manner, the height value of the highest point of each frame of associated point cloud within the first preset number of frames is acquired to obtain a height array; according to the height array, the height average value and height change degree of the highest point of the target are determined; the radial distance of each frame of associated point cloud within the first preset number of frames is acquired to obtain a radial distance array; and the position change degree of the target is determined according to the radial distance array.
In the embodiment of the invention, optionally, a geodetic coordinate system is adopted to obtain the coordinate of each point cloud in the geodetic coordinate system, and the value of the z axis in the coordinate is the height value of each point cloud. Each frame of associated point cloud has a point cloud with the largest z value, namely the highest point, and the change rule of the height value of the highest point can be obtained through the continuous first preset frame number of associated point clouds, so as to obtain the height average value and the height change degree.
The method provided by the embodiment of the invention can also adopt a Cartesian three-dimensional coordinate system as a world coordinate system to represent the position of the point cloud in the real world. Other coordinate systems may also be employed, and are not limited by embodiments of the present invention.
Optionally, by extracting the highest-point coordinates of each frame's associated point cloud, the single-frame highest points can be stitched into a time-series height curve, and the coordinates of the highest points form the highest-point history track information.
In the embodiment of the invention, the height array can be computed over a fixed-length window of consecutive frames of associated point clouds, e.g. 100 frames, i.e. the first preset frame number is 100; the height array is optionally denoted h. Optionally, the height standard deviation is calculated by a first formula and represents the height change degree; the first formula is:
stdH=std(h)
Wherein std (h) is the standard deviation of the data in the height array h, and stdH is the degree of height variation calculated by the height array.
The calculation mode of the height average value of the highest point can have various implementation modes, for example, the data in the height array is directly averaged to obtain the height average value of the highest point, or the data in the height array is preprocessed, and the data of the abnormal point is removed and then averaged to obtain the height average value. The embodiment of the invention does not limit the method for determining the height average value.
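The height statistics above can be sketched with Python's statistics module. This is a minimal illustration; the patent leaves the averaging/preprocessing method open, so computing a plain mean here is one of the permitted choices, not the required one.

```python
import statistics

def height_stats(h):
    """Given the height array h (highest-point height per frame, in meters),
    return (height average value, height change degree stdH = std(h))."""
    mean_h = statistics.fmean(h)   # height average value of the highest point
    std_h = statistics.pstdev(h)   # stdH = std(h), the height change degree
    return mean_h, std_h
```

A preprocessing variant would remove outlier points from h before averaging, as the text notes.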
Alternatively, a radial distance array is obtained by associating point clouds with 100 frames in succession, and the radial distance array is denoted by rlist. Optionally, the distance standard deviation is calculated by a second formula, and the position change degree is represented by the distance standard deviation, where the second formula is:
stdR=std(rlist)
wherein std (rlist) is the standard deviation of data in the radial distance array rlist, and stdR is the degree of position change obtained by calculation of the radial distance array.
Optionally, for each frame of associated point cloud, the radial distance can be obtained directly as the distance between the coordinate point obtained after Kalman filtering and the coordinate origin;
alternatively, for each frame of associated point clouds in the associated point clouds of the first preset frame number, calculating an x-axis average value and a y-axis average value of all the point clouds according to coordinate values of each point cloud in the associated point clouds of the frame after being projected to a geodetic coordinate system, so as to obtain a target coordinate point; and calculating the distance between the target coordinate point and the origin of the geodetic coordinate system to obtain the radial distance of the frame associated point cloud.
For example, for a frame of associated point clouds, the frame of associated point clouds includes 20 point clouds, the coordinates of each point cloud may be directly obtained, by calculating the x-axis average value of the 20 point clouds to obtain xcenter, calculating the y-axis average value of the 20 point clouds to obtain ycenter, and then the coordinate value of the target coordinate point is (xcenter, ycenter), optionally, calculating the radial distance of the frame of associated point clouds by a third formula, where the third formula is:
r=sqrt(xcenter^2+ycenter^2)
where r is the radial distance of the frame's associated point cloud. The radial distance array rlist is obtained by calculating the radial distance for each of the first preset number of frames, e.g. 100 frames, of associated point clouds.
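The per-frame radial distance computation can be sketched as follows, assuming each point cloud is given by its (x, y) coordinates after projection to the geodetic coordinate system; the function name is illustrative.

```python
import math

def radial_distance(points_xy):
    """points_xy: list of (x, y) coordinates of one frame's associated
    point cloud, projected to the geodetic coordinate system."""
    xcenter = sum(p[0] for p in points_xy) / len(points_xy)  # x-axis average
    ycenter = sum(p[1] for p in points_xy) / len(points_xy)  # y-axis average
    # r = sqrt(xcenter^2 + ycenter^2): distance of the target coordinate
    # point from the origin of the geodetic coordinate system
    return math.hypot(xcenter, ycenter)
```

Applying this to each of the, e.g., 100 frames yields the radial distance array rlist, whose standard deviation gives stdR.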
In step 103, the scene type of the object is determined according to the height average value and the height variation degree of the highest point of the object and the position variation degree of the object.
In the embodiment of the present invention, optionally, the scene where the target is located is classified as either a shower scene or a non-shower scene. In a shower scene the point cloud of the water and the point cloud of the person are generally difficult to distinguish, so the detected target includes both water and person. The basis for judging whether the scene is a shower scene is therefore: the highest point of the target corresponds to the position of the shower head; because the shower head is fixedly installed, the highest point of the target is high and its jitter is low, and a person usually stands below the shower head during a shower, so the jitter of the target position is also low.
Therefore, if the highest-point height average value of the target is high, the height change degree is low and the position change degree is low, a shower scene is determined. If these features are not satisfied, the scene type of the target is confirmed to be a non-shower scene.
In an alternative implementation, the conditions for discriminating the shower scenario are: if the height average value of the highest point is larger than or equal to a first preset threshold value, the height change degree of the highest point is smaller than a second preset threshold value, and the position change degree of the target is smaller than a third preset threshold value, judging that the scene where the target is located is a shower scene.
The embodiment of the invention does not limit specific numerical values of the first preset threshold value, the second preset threshold value and the third preset threshold value, and the size of the specific numerical values can be set according to specific application scenes.
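The three-threshold discrimination condition can be sketched as a single predicate. The numeric defaults below are illustrative assumptions only, since the patent deliberately leaves the first, second and third preset thresholds open.

```python
def is_shower_scene(mean_h, std_h, std_r,
                    first_thresh=1.8,    # first preset threshold (m), assumed
                    second_thresh=0.1,   # second preset threshold, assumed
                    third_thresh=0.2):   # third preset threshold, assumed
    """Discriminate a shower scene from the highest-point statistics:
    high, stable highest point and low target position jitter."""
    return (mean_h >= first_thresh      # highest-point height average high
            and std_h < second_thresh   # height change degree low
            and std_r < third_thresh)   # position change degree low
```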
Due to sensor errors and the like, a single frame may contain no valid point cloud, i.e., a frame's associated point cloud may be empty. In the embodiment of the invention, to improve judgment precision, each entry in the height array and the radial distance array further carries flag bit data indicating whether a valid point cloud exists in the current frame's associated point cloud: if a valid point cloud exists, the flag bit is set to 1; otherwise it is set to 0. Alternatively, based on the same inventive concept, the height array and the radial distance array correspond to a flag array storing, for each of the first preset number of frames, whether a valid point cloud exists in that frame's associated point cloud. The embodiment of the invention is not limited to a specific implementation form.
Combining the flag array, if some single-frame associated point clouds contain no valid point cloud, the ratio of valid frames within the first preset number of frames is checked during the shower-scene determination above: only when the valid-frame ratio is higher than a preset value, e.g. 95%, and the discrimination condition is met, can a shower scene be determined. The valid frame number refers to the number of frames containing a valid point cloud. When the valid-frame ratio is below the preset value, a shower scene cannot be determined even if the discrimination condition is satisfied, which improves the discrimination accuracy. For example, if the proportion of flag bits equal to 1 in the flag array is greater than 95% and the shower-scene discrimination condition is satisfied, a shower scene is determined. The method provided by the embodiment of the invention thus reduces discrimination errors caused by too many invalid-frame associated point clouds, where an invalid-frame associated point cloud means a single-frame associated point cloud containing no valid point cloud.
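The valid-frame gate can be sketched as follows; the 95% default matches the example in the text, while the function name is illustrative.

```python
def enough_valid_frames(flags, min_ratio=0.95):
    """flags[i] is 1 if frame i's associated point cloud contains a valid
    point cloud, else 0. The shower-scene judgment is only attempted when
    the valid-frame ratio exceeds min_ratio (e.g. 95%)."""
    return sum(flags) / len(flags) > min_ratio
```

In combination with the previous condition, a window is classified as a shower scene only when both this gate and the three-threshold discrimination hold.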
In step 104, after the scene type is determined, an energy height distribution is determined for each frame of associated point cloud.
The energy altitude distribution is used to represent the point cloud energy value for each altitude interval, and the target total height to be detected includes a plurality of consecutive but non-overlapping altitude intervals.
The detected target total height is divided into a plurality of continuous but non-overlapping height intervals. For each frame of associated point cloud, the height distribution of the frame is counted according to the height values of its point clouds, i.e., the height interval into which each point cloud falls is counted; equivalently, which point clouds each height interval contains is counted.
Each frame's associated point cloud consists of the point clouds obtained by association with the tracked target track; the total number of point clouds in each frame's associated point cloud may be the same or different.
For example, a typical shower head height is about 2 meters. The target total height to be detected is then 2 meters, divided into 20 height intervals of 10 cm each, from the ground (0 meters) up to the shower head.
For a frame of associated point cloud containing, say, 40 point clouds, take one height interval as an example, for instance the interval corresponding to 80 cm to 90 cm. If the height values of 5 of the 40 point clouds fall into this interval, the point cloud energy value of the interval is obtained by calculating the energy values represented by those 5 point clouds. Applying the same method to every height interval yields the point cloud energy value of each interval, and thus the energy height distribution of the frame of associated point cloud.
In an optional implementation manner, for each height interval, all point clouds in the frame's associated point cloud whose height values belong to the height interval are acquired; the energy value of each point cloud is calculated according to its amplitude; the point cloud energy value of the height interval is obtained as the sum of the energy values of all point clouds in the interval; and the energy height distribution of the frame's associated point cloud is obtained from the point cloud energy values of all height intervals.
Optionally, a height interval array is established, whose total length is heightBinSize. The lowest point of the statistical height range is MINH, the highest point is MAXH, and the total statistical height is MAXH - MINH, namely the total height of the target to be detected in this step.
The size of each height interval is binSize, so heightBinSize = (MAXH - MINH) / binSize.
For example, with MAXH = 200 cm, MINH = 0 cm and binSize = 10 cm, heightBinSize = 20, i.e. the total height of the target to be detected is divided into 20 continuous but non-overlapping height intervals. The index numbers of the height intervals, in order from low to high, may be 1, 2, 3, ...
For each frame of associated point cloud, all height values in the frame of associated point cloud can be obtained to belong to the height interval by the following method:
All point clouds in the frame's associated point cloud are traversed, and for a point cloud i the index number of its corresponding height interval is calculated according to a fourth formula:

Ind = ceil((h(i) - MINH) / binSize)

where h(i) is the height value of the point cloud i, ceil() rounds its argument up to the smallest integer not less than it (for example, ceil(5.6) = 6), and Ind is the index number of the height interval corresponding to the point cloud i.
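The index calculation for assigning a point cloud to a height interval can be sketched as follows; MINH and binSize take the example values from the text, and the function name is hypothetical:

```python
import math

MINH = 0       # lowest statistical height, in cm (example value)
BIN_SIZE = 10  # size of each height interval, in cm (example value)

def height_bin_index(h):
    """Ceiling of the height offset divided by the interval size.
    With 1-based indexing, a point at 85 cm falls into interval 9,
    which covers 80-90 cm."""
    return math.ceil((h - MINH) / BIN_SIZE)

assert height_bin_index(85) == 9
assert height_bin_index(56) == 6   # ceil(5.6) = 6, as in the text
```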
By this method, the point cloud distribution of each height interval is obtained.
In the embodiment of the invention, optionally, because the point cloud energy is linearly related to the square of the point cloud amplitude, the amplitude of the point cloud can be used to represent the point cloud energy.
Optionally, the point cloud amplitude is denoted amp, and for any point cloud i the point cloud energy value may be represented by k·(point(i).amp)², where k is a preset coefficient and point(i).amp is the amplitude of the point cloud i. In the embodiment of the invention, the point cloud data are acquired by the millimeter wave radar, and the point cloud amplitude is data that can be obtained directly in the process of generating the point cloud data by the millimeter wave radar.
For the height interval corresponding to any index number, the energy values of all point clouds in that interval are summed to obtain the point cloud energy value of the interval. Optionally, heightBin(Ind) is used to represent the point cloud energy value of the height interval with index number Ind.
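Putting the pieces together, a minimal sketch of accumulating per-interval point cloud energy values might read as follows; the value of k, the 1-based indexing and the example bin layout are assumptions taken from the text:

```python
import math

K = 1.0                  # preset coefficient k (assumed value)
MINH, BIN_SIZE, N_BINS = 0, 10, 20  # example layout: 0-200 cm in 10 cm bins

def energy_height_distribution(points):
    """points: list of (height_cm, amplitude) pairs for one frame's
    associated point cloud. A point's energy is k * amp**2; the value
    of each height interval is the sum over the points that fall in it."""
    height_bin = [0.0] * (N_BINS + 1)  # indices 1..N_BINS, as in the text
    for h, amp in points:
        ind = math.ceil((h - MINH) / BIN_SIZE)
        if 1 <= ind <= N_BINS:         # ignore points outside the range
            height_bin[ind] += K * amp ** 2
    return height_bin

dist = energy_height_distribution([(85, 2.0), (88, 3.0), (150, 1.0)])
assert dist[9] == 13.0   # two points in the 80-90 cm interval: 4 + 9
assert dist[15] == 1.0
```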
In step 105, it is determined whether the target falls according to the scene type and the energy height distribution of the point cloud associated with each frame.
Optionally, after judging that the scene type is a shower scene, performing primary target state judgment according to the energy height distribution of the associated point cloud of every second preset frame number, and determining the state of the target, wherein the state of the target comprises that the target is in an active state and the target is in an inactive state; if the state of the target is changed from the active state to the inactive state, the falling of the target is judged.
For each frame of associated point cloud, optionally, the energy ratio of each height interval is calculated in a normalized manner: for any height interval, the point cloud energy value of the interval is divided by the sum of the point cloud energy values of all height intervals to obtain the energy ratio of that interval.
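A hedged sketch of this normalization step (function name assumed) could be:

```python
def energy_ratios(height_bin):
    """Divide each interval's point cloud energy value by the total
    energy over all intervals, yielding per-interval energy ratios
    that sum to 1 for a non-empty frame."""
    total = sum(height_bin)
    if total == 0:
        return list(height_bin)  # no energy in this frame; leave as-is
    return [e / total for e in height_bin]

ratios = energy_ratios([0.0, 6.0, 4.0])
assert ratios[1] == 0.6
assert abs(sum(ratios) - 1.0) < 1e-9
```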
In the embodiment of the invention, optionally, the energy height distributions of the associated point clouds of the second preset number of frames can be spliced to generate a height heat map, and the state of the target is determined from the height heat map. The state of the target can also be determined directly from the energy height distributions of the associated point clouds of the second preset number of frames.
For example, in a shower scenario, when a person under the shower is noticeably active, the person's point cloud energy is larger and the water's point cloud energy is smaller; that is, when someone is under the shower head and noticeably active, the main energy of the frame's associated point cloud is at a relatively low height. Notable activities include actions such as scrubbing and washing hair, and also include the action of falling. Based on this, in the embodiment of the invention, in the shower scene, the target being in an active state means that a person is under the shower head and performing notable actions.
In a shower scenario, after a person under the shower falls, the person's signal is generally weak and the energy of the water dominates; that is, the energy at higher heights is larger. In the embodiment of the invention, the unoccupied state is similar to the state in which the person's vital signs are weak, so the target being in an inactive state also covers the weak-vital-sign state.
For example, the energy height distribution of a single-frame associated point cloud is analyzed by setting a height threshold. If the height threshold is set to 70 cm from the ground and the point cloud energy of the frame's associated point cloud is dominant above 70 cm, the state of the target corresponding to that frame is an inactive state. Alternatively, if 140 cm from the ground is set as the height threshold and the point cloud energy of the frame's associated point cloud is dominant below 140 cm, the target state corresponding to that frame is judged to be an active state.
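The per-frame threshold tests above might be sketched as follows; the 50% "dominant" share is an assumed interpretation, the function names are hypothetical, and the 70 cm / 140 cm thresholds are the example values from the text:

```python
BIN_SIZE = 10  # cm per height interval; height_bin is 1-indexed as in the text

def energy_share_above(height_bin, thresh_cm):
    """Fraction of total energy in intervals whose lower bound is at or
    above thresh_cm; interval i covers (i-1)*BIN_SIZE to i*BIN_SIZE cm."""
    total = sum(height_bin)
    if total == 0:
        return 0.0
    above = sum(e for i, e in enumerate(height_bin)
                if i >= 1 and (i - 1) * BIN_SIZE >= thresh_cm)
    return above / total

def frame_is_inactive(height_bin, thresh_cm=70, dominant=0.5):
    # Energy dominant above 70 cm: water dominates, frame looks inactive.
    return energy_share_above(height_bin, thresh_cm) > dominant

def frame_is_active(height_bin, thresh_cm=140, dominant=0.5):
    # Energy dominant below 140 cm: person dominates, frame looks active.
    return (1.0 - energy_share_above(height_bin, thresh_cm)) > dominant

bins = [0.0] * 21   # index 0 unused; intervals 1..20 cover 0-200 cm
bins[5] = 8.0       # 40-50 cm: energy near a person's body
bins[19] = 2.0      # 180-190 cm: energy near the shower head
assert frame_is_active(bins) is True
assert frame_is_inactive(bins) is False
```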
In an alternative implementation manner, all height intervals with a height greater than or equal to a first preset height are acquired as first target height intervals; among the associated point clouds of the second preset number of frames, the ratio of the number of frames meeting a first judgment condition to the second preset number of frames is counted to obtain a first duty ratio, where the first judgment condition is that the ratio of the total point cloud energy value of the first target height intervals to the total point cloud energy value of all height intervals is larger than a fourth preset threshold; and if the first duty ratio is larger than a first preset proportion, the state of the target is determined to be the inactive state.
In an alternative implementation, all height intervals with a height smaller than a second preset height are acquired as second target height intervals; among the associated point clouds of the second preset number of frames, the ratio of the number of frames meeting a second judgment condition to the second preset number of frames is counted to obtain a second duty ratio, where the second judgment condition is that the ratio of the total point cloud energy value of the second target height intervals to the total point cloud energy value of all height intervals is larger than a fifth preset threshold; and if the second duty ratio is larger than a second preset proportion, the state of the target is determined to be the active state.
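The duty-ratio test over the second preset number of frames, common to both implementations above, can be sketched as follows; the 120-frame window and the 95% proportion are the example values given in the text, and the function name is an assumption:

```python
def window_state(frame_flags, window_frames, min_ratio):
    """frame_flags: per-frame booleans recording whether one judgment
    condition held for each frame of the window. Returns True when the
    share of frames meeting the condition exceeds min_ratio."""
    hits = sum(1 for f in frame_flags if f)
    return hits / window_frames > min_ratio

# 120-frame window with a 95% preset proportion
flags = [True] * 115 + [False] * 5      # 115/120 ≈ 95.8%
assert window_state(flags, 120, 0.95) is True
assert window_state([True] * 110 + [False] * 10, 120, 0.95) is False
```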
Alternatively, if the scene type is determined to be a non-shower scene, a target state judgment may likewise be performed once according to the energy height distribution of the associated point cloud of every second preset number of frames to determine the state of the target. The judgment follows the same idea as above: if the target changes from the active state to the inactive state, it is determined that the target has fallen. In this case, by analyzing the non-shower scene, the first preset height, first target height intervals, first duty ratio, fourth preset threshold and first preset proportion corresponding to the non-shower scene are determined, and the same method as above is used to determine that the state of the target in the non-shower scene is the inactive state. Likewise, the second preset height, second target height intervals, second duty ratio, fifth preset threshold and second preset proportion corresponding to the non-shower scene are determined by analysis, and the same method is used to determine that the state of the target in the non-shower scene is the active state.
For example, the second preset number of frames is 120 frames, and a judgment is performed once for every 120 frames of associated point clouds. If the first preset proportion is set to 95% and more than 95% of the 120 frames of data are judged to be in the inactive state, the judgment result is the inactive state.
Assuming the second preset proportion is set to 60%, if more than 60% of the 120 frames of data are judged to be in the active state, the judgment result is the active state.
In the embodiment of the invention, a judgment is performed once every second preset number of frames, each judgment yielding the state of the target; if the state of the target changes from the active state to the inactive state, it is determined that the target has fallen, and an alarm is triggered in time.
In the embodiment of the invention, the target may also have states other than the active state and the inactive state; these other states are not analyzed here. The change from the active state to the inactive state may occur in two consecutive judgments, or other states may intervene between the judgment result of the active state and the judgment result of the inactive state; either case satisfies the fall alarm condition.
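As an illustrative sketch, the alarm trigger on the active-to-inactive transition, allowing other unanalyzed states in between as described above, might be (names assumed):

```python
def detect_fall(state_sequence):
    """Scan successive window-level judgment results; a transition from
    'active' to 'inactive', possibly with other states in between,
    triggers the fall alarm."""
    last_decisive = None
    for s in state_sequence:
        if s in ("active", "inactive"):
            if last_decisive == "active" and s == "inactive":
                return True
            last_decisive = s
    return False

assert detect_fall(["active", "inactive"]) is True
assert detect_fall(["active", "other", "inactive"]) is True
assert detect_fall(["inactive", "inactive"]) is False
```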
The method provided by the embodiment of the invention has at least the following advantages: first, personnel in the bathroom can be protected in real time without wearing any device, with the radar simply installed in a corner of the bathroom; second, privacy problems are avoided, since the point cloud data are acquired by a millimeter wave radar whose band lies in the overlap of far infrared and microwave, so no privacy leakage occurs; third, the method greatly reduces interference from water, is convenient to operate, is highly portable and has low cost; fourth, compared with conventional millimeter wave schemes, it has high alarm accuracy and reliability in a shower scene; and fifth, being based on a height processing scheme, it is not affected by the radar installation position.
According to the embodiment of the invention, the associated point clouds of the target track are analyzed; the height mean value and the height change degree of the highest point and the position change degree of the target are determined from multiple frames of continuous associated point clouds; the scene type is judged on this basis; after the scene is determined, the energy height distribution of each frame of associated point cloud is obtained; and whether the target has fallen is judged based on the scene type and the multi-frame energy height distributions, improving the accuracy of fall determination.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The following are device embodiments of the invention, for details not described in detail therein, reference may be made to the corresponding method embodiments described above.
Fig. 2 shows a schematic structural diagram of a fall detection device according to an embodiment of the present invention. For convenience of explanation, only the parts related to the embodiment of the present invention are shown, described in detail as follows:
as shown in fig. 2, the fall detection apparatus 2 includes: the acquisition module 21, the first determination module 22, the first judgment module 23, the second determination module 24 and the second judgment module 25;
an acquisition module 21, configured to acquire a track of a target, and acquire an associated point cloud associated with the track of the target for each frame;
the first determining module 22 is configured to determine, according to the continuous first preset frame number of associated point clouds, a height average value, a height variation degree of a highest point of the target, and a position variation degree of the target, where the highest point is used to represent a point cloud with a maximum height value in each frame of associated point clouds;
A first judging module 23, configured to judge a scene type where the target is located according to a height average value, a height variation degree of a highest point of the target, and a position variation degree of the target;
a second determining module 24, configured to determine, for each frame of associated point cloud, an energy height distribution of the frame of associated point cloud, where the energy height distribution is used to represent a point cloud energy value of each height interval, and the detected target total height includes a plurality of continuous but non-overlapping height intervals;
the second judging module 25 is configured to judge whether the target falls according to the scene type and the energy height distribution of the associated point cloud of each frame.
According to the embodiment of the invention, the associated point clouds of the target track are analyzed; the height mean value and the height change degree of the highest point and the position change degree of the target are determined from multiple frames of continuous associated point clouds; the scene type is judged on this basis; after the scene is determined, the energy height distribution of each frame of associated point cloud is obtained; and whether the target has fallen is judged based on the scene type and the multi-frame energy height distributions, improving the accuracy of fall determination.
In one possible implementation, the first judging module 23 is configured to:
if the height average value of the highest point is larger than or equal to a first preset threshold value, the height change degree of the highest point is smaller than a second preset threshold value, and the position change degree of the target is smaller than a third preset threshold value, judging that the scene type of the target is a shower scene;
The second judging module 25 is configured to:
after judging that the scene type is a shower scene, performing primary target state judgment according to the energy height distribution of the associated point cloud of every second preset frame number, and determining the state of the target, wherein the state of the target comprises that the target is in an active state and the target is in an inactive state;
if the state of the target is changed from the active state to the inactive state, the falling of the target is judged.
In one possible implementation, the second determining module 24 is configured to:
for each height interval, acquiring point clouds of which all height values belong to the height interval in the frame associated point clouds;
according to the amplitude value of each point cloud, calculating the energy value of the point cloud;
obtaining the point cloud energy value of the altitude interval according to the energy sum values of all the point clouds of the altitude interval;
and obtaining the energy height distribution of the frame associated point cloud according to the point cloud energy value of each height interval.
In one possible implementation, the second judging module 25 is configured to:
acquiring all height intervals with the height larger than or equal to a first preset height as first target height intervals;
counting the ratio of the number of frames meeting a first judgment condition to the second preset number of frames in the associated point cloud of the second preset number of frames to obtain a first duty ratio, wherein the first judgment condition is that the ratio of the total point cloud energy value of a first target altitude interval to the total point cloud energy value of all altitude intervals is larger than a fourth preset threshold value;
And if the first duty ratio is larger than the first preset proportion, determining that the state of the target is an inactive state.
In one possible implementation, the second judging module 25 is configured to:
acquiring all height intervals with the height smaller than a second preset height as second target height intervals;
counting the ratio of the number of frames meeting a second judgment condition to the second preset number of frames in the associated point cloud of the second preset number of frames to obtain a second duty ratio, wherein the second judgment condition is that the ratio of the total point cloud energy value of a second target altitude interval to the total point cloud energy value of all altitude intervals is larger than a fifth preset threshold value;
and if the second duty ratio is larger than the second preset proportion, determining the state of the target as an active state.
In one possible implementation, the first determining module 22 is configured to:
acquiring a height value of the highest point of each frame of associated point cloud in the associated point cloud of the first preset frame number to obtain a height array;
according to the height array, determining the height average value and the height change degree of the highest point of the target;
acquiring radial distances of each frame of associated point clouds in the associated point clouds of a first preset frame number to obtain a radial distance array;
and determining the position change degree of the target according to the radial distance array.
In one possible implementation, the first determining module 22 is configured to:
for each frame of associated point clouds in the associated point clouds of the first preset frame number, calculating an x-axis average value and a y-axis average value of all the point clouds according to coordinate values of each point cloud in the associated point clouds of the frame after being projected to a geodetic coordinate system, and obtaining a target coordinate point;
and calculating the distance between the target coordinate point and the origin of the geodetic coordinate system to obtain the radial distance of the frame associated point cloud.
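A minimal sketch of the radial-distance computation described above (function name assumed) could be:

```python
import math

def radial_distance(points_xy):
    """points_xy: (x, y) coordinates of each point cloud in one frame
    after projection to the geodetic coordinate system. The frame's
    radial distance is the distance from the mean point (the target
    coordinate point) to the origin of that coordinate system."""
    n = len(points_xy)
    mean_x = sum(p[0] for p in points_xy) / n
    mean_y = sum(p[1] for p in points_xy) / n
    return math.hypot(mean_x, mean_y)

assert radial_distance([(3.0, 4.0), (3.0, 4.0)]) == 5.0
```

Repeating this for every frame in the first preset window yields the radial distance array from which the position change degree of the target is then determined.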
The fall detection device provided in this embodiment may be used to execute the above embodiment of the fall detection method, and its implementation principle and technical effects are similar, and this embodiment will not be described here again.
Fig. 3 is a schematic diagram of a radar according to an embodiment of the present invention. As shown in fig. 3, the radar 3 of this embodiment includes: a processor 30, a memory 31 and a computer program 32 stored in said memory 31 and executable on said processor 30. The processor 30, when executing the computer program 32, carries out the steps of the various fall detection method embodiments described above, for example steps 101 to 105 shown in fig. 1. Alternatively, the processor 30 may perform the functions of the modules/units of the apparatus embodiments described above, such as the functions of the modules 21 to 25 shown in fig. 2, when executing the computer program 32.
Illustratively, the computer program 32 may be partitioned into one or more modules/units that are stored in the memory 31 and executed by the processor 30 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions describing the execution of the computer program 32 in the radar 3.
The radar 3 may be a millimeter wave radar. The radar 3 may include, but is not limited to, a processor 30, a memory 31. It will be appreciated by those skilled in the art that fig. 3 is merely an example of radar 3 and is not meant to be limiting of radar 3, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the radar may also include input-output devices, network access devices, buses, etc.
The processor 30 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 31 may be an internal storage unit of the radar 3, such as a hard disk or a memory of the radar 3. The memory 31 may also be an external storage device of the radar 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the radar 3. Further, the memory 31 may include both an internal storage unit and an external storage device of the radar 3. The memory 31 is used for storing the computer program as well as other programs and data required by the radar, and may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/radar and method may be implemented in other ways. For example, the apparatus/radar embodiments described above are merely illustrative, e.g., the division of the modules or elements is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program may implement the steps of each of the fall detection method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (10)

1. A fall detection method, comprising:
acquiring a track of a target, and acquiring an associated point cloud of each frame associated with the track of the target;
determining, according to the continuous associated point clouds of a first preset frame number, the height average value and the height change degree of the highest point of the target and the position change degree of the target, wherein the highest point is used for representing the point cloud with the largest height value in each frame of associated point cloud;
judging the scene type of the target according to the height average value and the height change degree of the highest point of the target and the position change degree of the target;
After determining the scene type, for each frame of associated point cloud, determining an energy height distribution of the frame of associated point cloud, wherein the energy height distribution is used for representing a point cloud energy value of each height interval, and the detected target total height comprises a plurality of continuous but non-overlapping height intervals;
and judging whether the target falls down according to the scene type and the energy height distribution of the associated point cloud of each frame.
2. The method according to claim 1, wherein the determining the scene type of the object according to the height average value of the highest point of the object, the height variation degree, and the position variation degree of the object includes:
if the height average value of the highest point is larger than or equal to a first preset threshold value, the height change degree of the highest point is smaller than a second preset threshold value, and the position change degree of the target is smaller than a third preset threshold value, judging that the scene type of the target is a shower scene;
the determining whether the target falls down according to the scene type and the energy height distribution of the associated point cloud of each frame comprises:
after judging that the scene type is a shower scene, performing primary target state judgment according to the energy height distribution of the associated point cloud of every second preset frame number, and determining the state of the target, wherein the state of the target comprises that the target is in an active state and the target is in an inactive state;
And if the state of the target is changed from the active state to the inactive state, judging that the target falls down.
3. The method of claim 1, wherein for each frame of associated point cloud, determining an energy height distribution for the frame of associated point cloud comprises:
for each height interval, acquiring point clouds of which all height values belong to the height interval in the frame associated point clouds;
according to the amplitude value of each point cloud, calculating the energy value of the point cloud;
obtaining the point cloud energy value of the altitude interval according to the energy sum values of all the point clouds of the altitude interval;
and obtaining the energy height distribution of the frame associated point cloud according to the point cloud energy value of each height interval.
4. The method according to claim 2, wherein, after determining that the scene type is a shower scene, performing a target state judgment once according to the energy height distribution of the associated point cloud of every second preset number of frames and determining the state of the target comprises:
acquiring all height intervals with the height larger than or equal to a first preset height as first target height intervals;
counting the ratio of the number of frames meeting a first judgment condition to the second preset number of frames in the associated point clouds of the second preset number of frames to obtain a first duty ratio, wherein the first judgment condition is that the ratio of the total point cloud energy value of the first target altitude interval to the total point cloud energy value of all altitude intervals is larger than a fourth preset threshold value;
And if the first duty ratio is larger than a first preset proportion, determining that the state of the target is an inactive state.
5. The method of claim 2, wherein, after the scene type is determined to be a shower scene, performing one target state judgment for every second preset number of frames according to the energy height distribution of the associated point clouds comprises:
acquiring all height intervals whose height is less than a second preset height as second target height intervals;
counting, among the associated point clouds of the second preset number of frames, the ratio of the number of frames satisfying a second judgment condition to the second preset number of frames, to obtain a second ratio, the second judgment condition being that the ratio of the total point cloud energy value of the second target height intervals to the total point cloud energy value of all height intervals is greater than a fifth preset threshold;
and if the second ratio is greater than a second preset proportion, determining that the state of the target is active.
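Claims 4 and 5 describe two symmetric frame-counting checks over the same judgment window. A combined sketch, under stated assumptions: interval membership is decided by comparing the interval's lower bound against the first preset height and its upper bound against the second preset height (the claims do not fix this convention), and the function returns `None` when neither condition fires (a case the claims do not cover):

```python
def frame_ratio(dist, idxs):
    """Share of a frame's total energy held by the intervals in idxs."""
    total = sum(dist)
    return sum(dist[i] for i in idxs) / total if total else 0.0

def judge_state(distributions, intervals, first_preset_height,
                second_preset_height, fourth_thresh, fifth_thresh,
                first_proportion, second_proportion):
    """One target state judgment over a window of frames (claims 4 and 5).

    distributions: per-frame energy height distributions (one list of
    per-interval energies per frame).
    intervals: the (low, high) bounds matching those distributions."""
    high = [i for i, (lo, hi) in enumerate(intervals)
            if lo >= first_preset_height]    # first target height intervals
    low = [i for i, (lo, hi) in enumerate(intervals)
           if hi <= second_preset_height]    # second target height intervals
    n = len(distributions)
    high_frames = sum(1 for d in distributions
                      if frame_ratio(d, high) > fourth_thresh)
    low_frames = sum(1 for d in distributions
                     if frame_ratio(d, low) > fifth_thresh)
    if high_frames / n > first_proportion:
        return "inactive"
    if low_frames / n > second_proportion:
        return "active"
    return None
```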
6. The method of any one of claims 1 to 5, wherein determining the height average and the degree of height change of the highest point of the target and the degree of position change of the target from the consecutive first preset number of frames of associated point clouds comprises:
acquiring the height value of the highest point of each frame of associated point cloud among the first preset number of frames, to obtain a height array;
determining the height average and the degree of height change of the highest point of the target from the height array;
acquiring the radial distance of each frame of associated point cloud among the first preset number of frames, to obtain a radial distance array;
and determining the degree of position change of the target from the radial distance array.
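The statistics of claim 6 feed the scene-type test quoted in claim 2. A minimal sketch, assuming the "degree of change" of a series is its population standard deviation (the claims leave the measure unspecified) and using illustrative names throughout:

```python
import statistics

def judge_scene(height_array, radial_distances,
                first_thresh, second_thresh, third_thresh):
    """Classify the scene from the per-frame highest-point heights and
    the per-frame radial distances (claims 2 and 6)."""
    height_mean = statistics.fmean(height_array)
    height_change = statistics.pstdev(height_array)      # assumption
    position_change = statistics.pstdev(radial_distances)  # assumption
    if (height_mean >= first_thresh
            and height_change < second_thresh
            and position_change < third_thresh):
        return "shower"
    return "other"
```

A tall target that barely moves in height or position (e.g. a person standing under a shower head) satisfies all three conditions and is classified as a shower scene.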
7. The method of claim 6, wherein acquiring the radial distance of each frame of associated point cloud among the first preset number of frames comprises:
for each such frame, calculating the x-axis average and the y-axis average of all point clouds from the coordinate values of each point cloud after projection into the geodetic coordinate system, thereby obtaining a target coordinate point;
and calculating the distance between the target coordinate point and the origin of the geodetic coordinate system, to obtain the radial distance of that frame of associated point cloud.
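The radial-distance step of claim 7 is a centroid-then-norm computation. A sketch (function name assumed; the radar is taken to sit at the origin of the geodetic frame, as the claim implies):

```python
import math

def radial_distance(frame_points):
    """Radial distance of one frame of associated point cloud:
    average the ground-plane coordinates of all points, then take the
    distance of that mean point from the origin.

    frame_points: list of (x, y) coordinates after projection into the
    geodetic coordinate system."""
    mean_x = sum(x for x, _ in frame_points) / len(frame_points)
    mean_y = sum(y for _, y in frame_points) / len(frame_points)
    return math.hypot(mean_x, mean_y)
```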
8. A fall detection device, comprising an acquisition module, a first determining module, a first judging module, a second determining module, and a second judging module, wherein:
the acquisition module is configured to acquire the track of a target and acquire each frame of associated point cloud associated with the track;
the first determining module is configured to determine, from a consecutive first preset number of frames of associated point clouds, the height average and the degree of height change of the highest point of the target and the degree of position change of the target, the highest point being the point cloud with the greatest height value in each frame of associated point cloud;
the first judging module is configured to judge the scene type of the target from the height average and the degree of height change of the highest point and the degree of position change of the target;
the second determining module is configured to determine, after the scene type is determined and for each frame of associated point cloud, the energy height distribution of that frame, the energy height distribution representing the point cloud energy value of each height interval, the total detected height of the target being divided into a plurality of continuous, non-overlapping height intervals;
and the second judging module is configured to determine whether the target has fallen according to the scene type and the energy height distribution of each frame of associated point cloud.
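The five modules of claim 8 form a straight pipeline from track to verdict. A structural sketch in which each module is a hypothetical injected callable (none of these names appear in the patent):

```python
class FallDetectionDevice:
    """Wires the five modules of claim 8 into one pipeline."""

    def __init__(self, acquire, determine_stats, judge_scene,
                 determine_energy, judge_fall):
        self.acquire = acquire                    # acquisition module
        self.determine_stats = determine_stats    # first determining module
        self.judge_scene = judge_scene            # first judging module
        self.determine_energy = determine_energy  # second determining module
        self.judge_fall = judge_fall              # second judging module

    def run(self, track):
        frames = self.acquire(track)
        mean_h, height_change, position_change = self.determine_stats(frames)
        scene = self.judge_scene(mean_h, height_change, position_change)
        distributions = [self.determine_energy(f) for f in frames]
        return self.judge_fall(scene, distributions)
```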
9. A radar, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. The radar of claim 9, wherein the radar is a millimeter wave radar.
CN202310798120.XA 2023-06-30 2023-06-30 Fall detection method, device and radar Pending CN116840836A (en)


Publications (1)

Publication Number Publication Date
CN116840836A true CN116840836A (en) 2023-10-03

Family

ID=88170165



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination