CN112859033A - Target detection method, device and related equipment - Google Patents


Info

Publication number
CN112859033A
Authority
CN
China
Prior art keywords
target
height
target object
highest
threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110200830.9A
Other languages
Chinese (zh)
Inventor
丁永超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Calterah Semiconductor Technology Shanghai Co Ltd
Original Assignee
Calterah Semiconductor Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Calterah Semiconductor Technology Shanghai Co Ltd
Priority to CN202110200830.9A
Publication of CN112859033A

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/411Identification of targets based on measurements of radar reflectivity

Abstract

Embodiments of the invention disclose a target detection method comprising the following steps: transmitting a detection signal with a wireless sensor and receiving the echo signal reflected by a target object; performing signal processing on the echo signal to obtain point cloud data; and performing data processing on the point cloud data to obtain target information. Because the wireless sensor detects and tracks the target by transmitting and receiving electromagnetic waves, targets at short range or indoors can be classified and their postures recognized, and information such as target distance, speed, azimuth and pitch angle can be measured. The sensor is unaffected by illumination, works around the clock, raises no privacy concerns, and yields richer target information as its resolution improves.

Description

Target detection method, device and related equipment
Technical Field
The embodiment of the invention relates to the technical field of sensors, in particular to a method and a device for detecting a target and related equipment.
Background
With the development of smart homes, sensors that recognize the types of indoor moving objects and human postures are increasingly common in daily life, and the sensors usually used to realize these functions are optical cameras and the like.
Although detection methods based on optical imaging can classify and recognize targets using machine learning and the like, they involve a large amount of computation and high implementation cost; cameras are strongly affected by illumination, occlusion and the like and cannot work in all weather conditions; and they carry a risk of privacy intrusion.
Disclosure of Invention
Embodiments of the invention provide a target detection method, a target detection device, a storage medium and related equipment that reduce the amount of computation, lower cost, enable around-the-clock operation and avoid privacy disclosure.
The embodiment of the application provides a target detection method, which comprises the following steps:
transmitting a detection signal by using a wireless sensor, and receiving an echo signal formed by the reflection of a target object;
performing signal processing on the echo signal to obtain point cloud data; and
performing data processing on the point cloud data to obtain target information.
In the above embodiments, a wireless sensor (e.g., a millimeter-wave radar) detects and tracks the target by transmitting and receiving electromagnetic waves. Targets at short range (e.g., within 10 m) or indoors (generally within 6 m) can thus be classified and their postures recognized, and information such as target distance, speed, azimuth and pitch angle can be measured. The sensor is unaffected by light, works around the clock, raises no privacy concerns, and yields richer target information as its resolution improves.
Optionally, the wireless sensor includes a millimeter wave radar.
Optionally, the point cloud data includes at least multidimensional data such as radial distance, radial velocity, azimuth angle and pitch angle, and the target information includes the number, velocity, distance, direction angle, image (e.g., outer dimensions) and/or posture of the target object.
The present application also provides another method of target detection, which may include:
performing signal processing on any frame of echo signal to obtain point cloud data comprising a plurality of target point four-dimensional data, and performing clustering processing on the point cloud data to obtain a plurality of target clusters;
aiming at any one target cluster, acquiring a first radial distance of a cluster center of the target cluster and an extreme value parameter in target point four-dimensional data included in the target cluster;
acquiring three-dimensional space size data of the corresponding target object and a mean value of the height of the highest target point based on the first radial distance and the extreme value parameter; and
determining the target object based on the mean of the highest target point heights and/or the three-dimensional space size data.
In this embodiment, the target is detected using four-dimensional data, which solves moving-target type recognition and living-target posture recognition in short-range, in-cabin, indoor and similar scenes. The approach offers high recognition accuracy, low implementation cost and a small amount of computation, can run in real time on a conventional embedded platform, and facilitates product development and popularization.
Optionally, the target point four-dimensional data includes a radial velocity, an azimuth angle, a pitch angle, a second radial distance, and the like.
Optionally, the target point four-dimensional data further includes a signal-to-noise ratio (SNR);
and clustering the point cloud data based on the signal-to-noise ratio.
Optionally, the clustering process includes a nearest neighbor clustering operation, a K-means clustering operation, or a DBSCAN clustering operation.
Optionally, the signal processing includes frequency mixing, analog-to-digital conversion (AD), sampling, two-dimensional Fast Fourier Transform (FFT), constant false alarm detection (CFAR), direction of arrival estimation (DOA), and the like performed in sequence.
Optionally, the extreme parameter includes a maximum radial velocity in the target point four-dimensional data included in the target cluster; the method may further comprise:
judging whether the target cluster meets a preset condition or not;
if the preset condition is met, acquiring three-dimensional space size data and the highest target point reference height of the corresponding target object based on the first radial distance and the extreme value parameter;
otherwise, the target cluster is taken as a pseudo target cluster for processing;
the preset condition is that the number of target points included in the target cluster is greater than a preset number, and the included maximum radial velocity is greater than a first preset velocity.
Optionally, the extreme value parameter includes a maximum azimuth angle, a minimum azimuth angle, a maximum pitch angle, a minimum pitch angle, a maximum second radial distance, a minimum second radial distance, and the like in the target point four-dimensional data included in the target cluster; and
the three-dimensional space dimensions include length, width and height in the detection view angle section.
Optionally, the obtaining three-dimensional spatial dimension data of the corresponding target object based on the first radial distance and the extremum parameter includes:
obtaining the width based on the first radial distance, the maximum azimuth angle, and the minimum azimuth angle;
obtaining the altitude based on the first radial distance, the maximum pitch angle, and the minimum pitch angle; and
obtaining the length based on the maximum second radial distance and the minimum second radial distance.
Optionally, the method may further include:
and acquiring the highest target point height based on the first radial distance and the maximum pitch angle.
Optionally, the method may further include:
acquiring the mean value of the three-dimensional space size data based on historical data in a current preset time period;
wherein the mean of the three-dimensional spatial dimension data comprises at least one of the mean of the lengths, the mean of the widths, the mean of the heights, and the mean of the highest target point heights.
Optionally, the method may further include:
acquiring the highest target point reference height of the current frame based on a smoothing factor, the average value of the highest target point heights and the highest target point reference height of the previous frame;
in a preset processing time period, the highest target point reference height of the first frame is the highest target point height in the target cluster corresponding to the first frame.
Optionally, the method may further include:
acquiring the speed of the clustering center of the target cluster;
judging whether the speed of the clustering center of the target cluster is greater than a second preset speed value or not;
and if so, acquiring the highest target point reference height of the current frame based on the smoothing factor, the average value of the highest target point heights and the highest target point reference height of the previous frame.
Optionally, the determining the target object based on the average value of the height of the highest target point includes:
presetting at least one height threshold; and
determining the type of the target object by comparing the highest target point reference height with each of the height thresholds.
Optionally, the at least one height threshold includes a first height threshold and a second height threshold, and the first height threshold is greater than the second height threshold; the types of the targets comprise a first type target, a second type target and a third type target;
the determining the type of the target object by comparing the mean of the highest target point heights with the height thresholds comprises:
if the average value of the heights of the highest target points is larger than the first height threshold value, determining that the target object is the first type target object;
if the average value of the heights of the highest target points is smaller than the second height threshold value, determining that the target object is the third type target object;
otherwise, the target object is confirmed to be the second type target object.
Optionally, the method may include:
determining a target object height variation value based on the average of the highest target point heights and the highest target point reference height;
and determining each type of posture based on the type of the target object, the height change value of the target object and the mean value of the three-dimensional space size.
Optionally, the type of the object includes "adult", and the postures include "standing", "sitting", and "lying"; the determining each type of gesture based on the type of the object, the object height variation value and the mean of the three-dimensional space size includes:
presetting a standing threshold and a sitting threshold;
when the type of the target object is 'adult', if the height variation value of the target object is greater than or equal to the standing threshold value, determining that the posture of the target object is 'standing'; if the height variation value of the target object is between the standing threshold and the sitting threshold, determining that the posture of the target object is 'sitting'; and if the height variation value of the target object is less than or equal to the sitting threshold, determining the posture of the target object as lying.
Optionally, if the height variation value of the target object is between the standing threshold and the sitting threshold, and the mean height minus the mean width is smaller than a preset difference, the posture of the target object is output as "sitting".
Optionally, if the height variation value of the target object is less than or equal to the sitting threshold and the mean height is less than the mean width, the posture of the target object is output as "lying".
Optionally, the method may further comprise:
determining a motion state of the target object based on a transition time between the poses of the target object.
Optionally, the determining the motion state of the object based on the transition time between the postures of the object includes:
and if the time for converting the posture of the target object from standing to lying is less than a preset time threshold, determining that the motion state of the target object is 'falling'.
The embodiment of the present application further provides an apparatus for object detection, which may include a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the program to implement the method according to any embodiment of the present application.
The present application further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, is configured to perform the method according to any one of the embodiments of the present application.
Embodiments of the present application further provide a radio device, which may include:
the radio transmitting and receiving channel is used for transmitting radio signals and receiving echo signals formed by reflection of a target object;
the signal processing module is used for carrying out signal processing on the echo signals to obtain point cloud data comprising a plurality of target point four-dimensional data; and
and the data processing module is used for carrying out data processing on the point cloud data so as to judge at least one of the type, the posture and the motion state of the detected target object.
Optionally, the data processing module is configured to perform a method according to any embodiment of the present application to obtain at least one of a type, a posture and a motion state of the detected target object.
Optionally, the radio device is an antenna-on-chip or antenna-in-package chip, such as a MIMO (multiple-input multiple-output) chip.
Optionally, the radio device is a millimeter wave radar chip.
Optionally, a radio device may include:
a carrier;
a radio device according to any one of the embodiments of the present application, disposed on the carrier;
an antenna disposed on the carrier or disposed on the carrier as an integral device with the radio;
the radio device is connected with the antenna and used for transmitting and receiving radio signals.
Optionally, the antenna is a MIMO antenna, the radio device may have at least three transceiving channels, and the physical centers of the antennas connected to the at least three transceiving channels are not on the same straight line, so that both the pitch angle and the azimuth angle of the target can be detected.
An embodiment of the present application further provides an electronic device, which may include:
an apparatus body; and
the radio device according to the embodiment of the present application is provided on the device body;
wherein the radio device is used for object detection and/or communication.
Drawings
FIG. 1 is a schematic flow chart of a method for detecting a target according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for detecting a target according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of a module structure of a millimeter-wave radar for target detection;
FIG. 4 is a first schematic diagram of a process of target detection by the millimeter wave radar;
FIG. 5 is a second schematic diagram of a process of target detection by the millimeter wave radar;
FIG. 6 is a third schematic diagram of a process of target detection by the millimeter-wave radar.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Embodiment one:
FIG. 1 is a flowchart of a target detection method according to embodiment one of the present invention. As shown in FIG. 1, the method can be applied to detecting and monitoring targets at short range, for example within 10 m, and in particular to monitoring targets in closed or semi-closed spaces such as rooms, warehouses or parking lots. It may include the following steps:
in step S11, a wireless sensor (such as various radars, for example, millimeter wave radar) may be used to transmit a probe signal and receive an echo signal formed by reflection from the target object.
Step S12: performing signal processing on the echo signals to obtain point cloud data. Optionally, the point cloud data includes at least multidimensional data such as radial distance, radial velocity, azimuth angle and pitch angle; generally the data has four dimensions, i.e., the wireless sensor is a four-dimensional (4D) sensor. The target information may further include the number, velocity, distance, direction angle, image (e.g., outer dimensions) and/or posture of the target object.
Step S13: performing data processing on the point cloud data to obtain target information, thereby realizing detection, tracking, monitoring and other operations on the target object.
In the above embodiment, a wireless sensor (e.g., a millimeter-wave radar) detects and tracks the target by transmitting and receiving electromagnetic waves. Targets at short range (e.g., within 10 m) or indoors (generally within 6 m) can thus be classified and their postures recognized, and information such as target distance, speed, azimuth and pitch angle can be measured. The sensor is unaffected by light, works around the clock, raises no privacy concerns, and yields richer target information as its resolution improves.
It should be noted that, in the first embodiment, operations such as detecting, tracking, monitoring and the like of the target object can be realized based on the following embodiments (such as the second embodiment).
Embodiment two:
fig. 2 is a flowchart illustrating a target detection method according to a second embodiment of the present invention. In this example, the present application also provides another method for target detection, which may include the following steps:
step S21, performing signal processing on any frame of echo signal to obtain point cloud data including a plurality of target point four-dimensional data, and performing clustering processing on the point cloud data to obtain a plurality of target clusters. The target point four-dimensional data comprises radial velocity, azimuth angle, pitch angle, second radial distance and the like so as to detect the space shape and size of the target object; meanwhile, the target point four-dimensional data can also comprise a signal-to-noise ratio and the like, so that the subsequent clustering processing and other operations on the point cloud data are facilitated.
In addition, a nearest neighbor clustering operation, a K-means clustering operation, a DBSCAN clustering operation or the like may be employed to cluster the point cloud data.
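As an illustration of the DBSCAN option mentioned above, the following is a minimal pure-Python sketch over 2-D point positions; a real implementation would cluster the full four-dimensional points and tune `eps` and `min_pts` per scene, and the parameter values here are assumptions.

```python
from collections import deque

def dbscan(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN: returns one cluster label per point (-1 marks noise)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    n = len(points)
    labels = [None] * n
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(n) if dist2(points[i], points[j]) <= eps * eps]
        if len(neigh) < min_pts:
            labels[i] = -1          # provisional noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        queue = deque(neigh)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cluster  # noise point reclaimed as border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = [k for k in range(n) if dist2(points[j], points[k]) <= eps * eps]
            if len(jn) >= min_pts:   # core point: expand the cluster through it
                queue.extend(jn)
    return labels

# Two well-separated groups of detections -> two target clusters
labels = dbscan([(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
                 (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)])
```

The brute-force neighbor search is O(n^2), which is usually acceptable for the small point clouds a single radar frame produces.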
It should be noted that the signal processing of the echo signal may include operations such as frequency mixing, analog-to-digital conversion, sampling, two-dimensional fast Fourier transform, constant false alarm rate detection and direction-of-arrival estimation, performed in sequence. The order of the operations may be adjusted according to actual requirements, and operation steps may be added or deleted, as long as point cloud data including the plurality of target point four-dimensional data can be obtained.
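The FFT and CFAR steps of that chain can be sketched in NumPy as below. This is a generic illustration, not the patent's implementation: it builds a range-Doppler map from one frame of digitized chirps and runs a 1-D cell-averaging CFAR over the range profile (direction-of-arrival estimation is omitted), with all sizes and thresholds chosen for the synthetic example.

```python
import numpy as np

def range_doppler_map(iq_frame):
    """Range FFT along fast time, then Doppler FFT along slow time, for one
    frame of mixed and digitized chirp samples (shape: num_chirps x num_samples)."""
    range_fft = np.fft.fft(iq_frame, axis=1)
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    return np.abs(doppler_fft) ** 2

def ca_cfar(power, train=8, guard=2, scale=4.0):
    """1-D cell-averaging constant false alarm rate detector over a power profile."""
    mask = np.zeros(len(power), dtype=bool)
    for i in range(train + guard, len(power) - train - guard):
        noise = np.r_[power[i - train - guard:i - guard],
                      power[i + guard + 1:i + guard + train + 1]]
        mask[i] = power[i] > scale * noise.mean()
    return mask

# Synthetic frame: one stationary scatterer whose beat frequency lands in range bin 20
num_chirps, num_samples = 32, 64
t = np.arange(num_samples)
frame = np.tile(np.exp(2j * np.pi * 20 * t / num_samples), (num_chirps, 1))
rd = range_doppler_map(frame)
range_profile = rd.sum(axis=0)   # collapse Doppler for a simple 1-D CFAR pass
detections = ca_cfar(range_profile)
```

Each surviving cell, together with its angle estimates from a later DOA step, would become one target point in the point cloud.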
Step S22, for any one of the target clusters, obtaining a first radial distance of a cluster center of the target cluster and an extreme value parameter in the target point four-dimensional data included in the target cluster.
Optionally, in order to further screen the target clusters, it may be determined whether a target cluster meets a preset condition. If it does, the three-dimensional space size data and the highest target point reference height of the corresponding target object are acquired based on the first radial distance and the extreme value parameter; otherwise, the target cluster is treated as a pseudo target cluster, i.e., it may be deleted, subjected to other operations, or simply left unprocessed. The preset condition may be that the number of target points in the cluster is greater than a preset number (e.g., 1, 2 or 3) and the maximum radial velocity is greater than a first preset velocity (e.g., 0.05 m/s, 0.1 m/s or 0.15 m/s).
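The preset-condition screen above reduces to a two-part predicate; the default values below are the examples given in the text.

```python
def is_valid_cluster(num_points, max_radial_velocity,
                     preset_number=2, first_preset_velocity=0.1):
    """Screen out pseudo target clusters: a genuine moving target should
    contribute more than preset_number points and show a maximum radial
    velocity above first_preset_velocity (in m/s)."""
    return num_points > preset_number and max_radial_velocity > first_preset_velocity
```

Clusters failing either check are handled as pseudo targets (deleted or ignored) before any size or height estimation is attempted.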
Optionally, the extreme value parameter in this embodiment may include extreme values such as a maximum azimuth angle, a minimum azimuth angle, a maximum pitch angle, a minimum pitch angle, a maximum second radial distance, and a minimum second radial distance in the target point four-dimensional data included in the target cluster.
Step S23, obtaining the mean value of the three-dimensional spatial dimension data and the highest target point height of the corresponding target object based on the first radial distance and the extremum parameter.
Optionally, the three-dimensional space size may include the length, width and height in the detection view plane. For example, the width may be obtained based on the first radial distance, the maximum azimuth angle and the minimum azimuth angle; the height based on the first radial distance, the maximum pitch angle and the minimum pitch angle; and the length based on the maximum and minimum second radial distances. In addition, the highest target point height may be obtained based on the first radial distance and the maximum pitch angle.
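The text names the inputs but not the formulas, so the following is one plausible geometric reconstruction (an assumption): cross-range extent approximated as radius times angular span (small-angle arc length), radial extent as the spread of the second radial distances, and the highest-point height from the maximum pitch angle.

```python
import math

def cluster_dimensions(r1, az_max, az_min, el_max, el_min, r2_max, r2_min):
    """Approximate target extent from a cluster's center range r1 and its
    extreme angles/ranges (angles in radians). Small-angle approximation."""
    width = r1 * (az_max - az_min)       # horizontal extent at the cluster range
    height = r1 * (el_max - el_min)      # vertical extent at the cluster range
    length = r2_max - r2_min             # extent along the line of sight
    top_height = r1 * math.sin(el_max)   # highest point relative to the sensor plane
    return width, height, length, top_height

# A cluster 5 m away spanning 0.2 rad in azimuth and elevation, 0.5 m in range
w, h, l, top = cluster_dimensions(5.0, 0.1, -0.1, 0.1, -0.1, 5.5, 5.0)
```

A deployed system would also add the sensor's mounting height to `top_height` to get the height above the floor.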
And step S24, judging the target object based on the average value of the heights of the highest target points and/or the three-dimensional space size data.
Optionally, the mean value of the three-dimensional space size data may be obtained based on historical data in a current preset time period; wherein the mean of the three-dimensional spatial dimension data comprises at least one of the mean of the lengths, the mean of the widths, the mean of the heights, and the mean of the highest target point heights.
Optionally, the highest target point reference height of the current frame may be obtained based on a smoothing factor, the average of the highest target point heights, and the highest target point reference height of the previous frame; in a preset processing time period, the highest target point reference height of the first frame is the highest target point height in the target cluster corresponding to the first frame.
It should be noted that the smoothing factor may be set based on big-data analysis or historical empirical values, or obtained by modeling historical data in the current scene; it may be obtained or set according to actual conditions in combination with corresponding algorithms. For example, the smoothing factor may be set to 0.05, 0.1 or 0.15.
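Reading the recursion above as standard exponential smoothing (a reasonable interpretation, though the text does not spell out the formula), each frame's reference height blends the current mean with the previous reference:

```python
def update_reference_height(prev_reference, mean_top_height, alpha=0.1):
    """Exponentially smoothed highest-target-point reference height.
    alpha is the smoothing factor (e.g. 0.05, 0.1 or 0.15); per the text, the
    first frame of a processing period instead uses that frame's own highest
    target point height as the reference."""
    return alpha * mean_top_height + (1.0 - alpha) * prev_reference
```

Smaller values of `alpha` make the reference track slow posture changes while rejecting frame-to-frame jitter in the point cloud.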
Optionally, in order to further screen the target clusters, the speed of the cluster center of a target cluster may also be obtained and it may be determined whether that speed is greater than a second preset speed (e.g., 0.2 m/s, 0.3 m/s or 0.4 m/s); if so, the highest target point reference height of the current frame is acquired based on the smoothing factor, the mean of the highest target point heights and the highest target point reference height of the previous frame.
Optionally, the at least one height threshold may include a first height threshold and a second height threshold, the first being greater than the second (three or more height thresholds may also be set, as long as different thresholds take different values). The target types include a first type, a second type and a third type (the types may be divided or set according to the number and specific values of the height thresholds). If the mean of the highest target point heights is greater than the first height threshold, the target object is determined to be the first type; if it is smaller than the second height threshold, the third type; otherwise, the second type.
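The two-threshold type decision above is a simple comparison chain; the numeric thresholds and type names below are illustrative assumptions (e.g. adult / minor / pet in a home scene).

```python
def classify_by_height(mean_top_height, first_threshold=1.2, second_threshold=0.5):
    """Map the mean highest-point height (in meters) to one of three target
    types using two preset thresholds, first_threshold > second_threshold."""
    if mean_top_height > first_threshold:
        return "first type"    # e.g. adult
    if mean_top_height < second_threshold:
        return "third type"    # e.g. pet
    return "second type"       # e.g. minor
```

Adding more thresholds extends the chain to finer-grained types without changing the structure.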
Optionally, in order to further improve the accuracy of determining the type of the target object, a target object height variation value may be determined based on the average value of the highest target point heights and the highest target point reference height, and each type of pose may be determined based on the type of the target object, the target object height variation value, and the average value of the three-dimensional space dimensions.
For example, the target types include "adult", and the postures include "standing", "sitting" and "lying". A standing threshold and a sitting threshold can be preset. When the type of the target object is "adult": if the height variation value of the target object is greater than or equal to the standing threshold, the posture is determined to be "standing"; if it is between the standing threshold and the sitting threshold, "sitting"; and if it is less than or equal to the sitting threshold, "lying".
It should be noted that, within the accuracy range of the sensor, in a home application scenario the target types may be divided into multiple categories such as "adult", "minor" or "pet", and corresponding posture determination conditions may be set for each type to improve posture determination accuracy. For other scenes, other target types may be defined; in particular, according to actual requirements, the division and setting may combine parameters such as the spatial size, motion state and posture of the target object.
Optionally, in order to improve the accuracy of the posture judgment, further conditions may be added. For example, when the height variation value of the target object is between the standing threshold and the sitting threshold and the mean height minus the mean width is smaller than a preset difference, the posture is output as "sitting"; when the height variation value is less than or equal to the sitting threshold and the mean height is less than the mean width, the posture is output as "lying".
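The posture rules for an "adult" target can be combined into one decision function. Here `height_change` is taken as the mean highest-point height minus the reference height (so crouching makes it negative); the threshold values and the "undetermined" fallback for cases the refined checks reject are assumptions, not given by the text.

```python
def classify_posture(height_change, mean_height, mean_width,
                     standing_threshold=-0.15, sitting_threshold=-0.6,
                     preset_difference=0.5):
    """Posture decision with the refined size checks: all thresholds in meters
    and illustrative only."""
    if height_change >= standing_threshold:
        return "standing"
    if height_change > sitting_threshold:
        # refined sitting check: height and width should be comparable
        if mean_height - mean_width < preset_difference:
            return "sitting"
        return "undetermined"
    # height_change <= sitting_threshold: lying only if wider than tall
    if mean_height < mean_width:
        return "lying"
    return "undetermined"
```

Per-type threshold sets (adult, minor, pet) would simply be separate parameter tuples passed to the same function.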
Optionally, the motion state of the target object may be determined in combination with the change of the posture, that is, the motion state of the target object is determined based on the transition time between the postures of the target object. For example, if the time for the posture of the target object to be changed from "standing" to "lying" is less than a preset time threshold, the motion state of the target object is determined to be "falling".
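The transition-time rule for fall detection can be sketched as a scan over timestamped posture outputs; the one-second threshold and the event-list representation are illustrative assumptions.

```python
def detect_fall(posture_log, time_threshold=1.0):
    """Report 'falling' when the posture switches from 'standing' to 'lying'
    in under time_threshold seconds. posture_log is a chronological list of
    (timestamp_seconds, posture) pairs."""
    last_standing = None
    for timestamp, posture in posture_log:
        if posture == "standing":
            last_standing = timestamp
        elif posture == "lying" and last_standing is not None:
            if timestamp - last_standing < time_threshold:
                return "falling"
            last_standing = None   # slow transition: deliberate lying down
    return "normal"
```

A slow standing-to-lying transition (e.g. lying down on a bed) exceeds the threshold and is left as a normal state change.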
In this embodiment, the target is detected using four-dimensional data, which solves moving-target type recognition and living-target posture recognition in short-range, in-cabin, indoor and similar scenes. The approach offers high recognition accuracy, low implementation cost and a small amount of computation, can run in real time on a conventional embedded platform, and facilitates product development and popularization.
The embodiment of the present application further provides an apparatus for object detection, which may include a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the method according to any embodiment of the present application.
The present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the method according to any embodiment of the present application.
The embodiment of the application also provides a radio device, which may include a radio transmitting and receiving channel, a signal processing module and a data processing module. The radio transmitting and receiving channel may be used to transmit radio signals and to receive echo signals formed by reflection off a target object; the signal processing module may be used to perform signal processing on the echo signals to obtain point cloud data comprising four-dimensional data of a plurality of target points; and the data processing module may be used to perform data processing on the point cloud data to judge at least one of the type, posture and motion state of the detected target object.
Optionally, the data processing module is configured to perform the method according to any embodiment of the present application to obtain at least one of the type, the posture and the motion state of the detected target object.
Optionally, the radio device is an antenna-on-chip or antenna-in-package chip, such as a MIMO (multiple-input multiple-output) chip, for example a millimeter-wave radar chip.
The present application also provides a radio apparatus, which may include: a carrier; a radio device according to any embodiment of the present application, disposed on the carrier; and an antenna disposed on the carrier, or disposed on the carrier together with the radio device as an integrated device; the radio device is connected with the antenna and is used for transmitting and receiving radio signals.
Optionally, the antenna is a MIMO antenna, the radio device may have at least three transceiving channels, and the physical centers of the antennas connected to the at least three transceiving channels are not collinear, so that both the pitch angle and the azimuth angle of the target can be detected.
The embodiment of the application also provides an electronic device, comprising a device body and the radio device described above arranged on the device body, the radio device being provided with a radio-frequency module; the radio device is used for object detection and/or communication.
In an alternative embodiment, the device body may be a component or product applied to fields such as smart home, transportation, consumer electronics, monitoring, industrial automation, in-cabin detection, health care, and the like. For example, the device body can be an intelligent transportation device (such as an automobile, bicycle, motorcycle, ship, subway or train), a security device (such as a camera), a smart wearable device (such as a bracelet or glasses), a smart household device (such as a television, air conditioner or smart lamp), a communication device (such as a mobile phone or tablet computer), a barrier gate, a smart traffic light, a smart sign, a traffic camera, or an industrial manipulator (or robot); it can also be various instruments for detecting vital-sign parameters and the devices carrying such instruments. The radio device may be the radio device set forth in any embodiment of the present application; its structure and operation principle have been described in detail in the above embodiments and are not repeated here.
The following detailed description is given in connection with a practical application in which a living target is detected by a 4D FMCW (frequency-modulated continuous wave) MIMO sensor:
A target detection method, which can improve the accuracy and real-time performance of target class identification and personnel posture identification, comprises the following steps:
First, acquire the detection point cloud data of the 4D sensor. The point cloud information may include radial distance (i.e., the second radial distance), radial velocity, azimuth angle, pitch angle, signal-to-noise ratio (SNR), and the like. The detected points are then clustered into individual target clusters by a preset clustering algorithm (e.g., DBSCAN).
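As a sketch of the clustering step, a minimal pure-Python DBSCAN over 3-D points is shown below. The `eps` and `min_pts` values are illustrative assumptions, and a real system would normally use an optimized library implementation:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 3-D points (list of (x, y, z) tuples).
    Returns one label per point: 0, 1, ... for clusters, -1 for noise."""
    def neighbors(i):
        xi, yi, zi = points[i]
        return [j for j, (x, y, z) in enumerate(points)
                if (x - xi) ** 2 + (y - yi) ** 2 + (z - zi) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seed = neighbors(i)
        if len(seed) < min_pts:
            labels[i] = -1          # noise (may become a border point later)
            continue
        cluster += 1                # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in seed if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = neighbors(j)
            if len(nbrs) >= min_pts:  # expand only through core points
                queue.extend(k for k in nbrs if labels[k] is None)
    return labels
```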
Calculate the radial distance R (i.e., the first radial distance) from the sensor to the cluster center of the target cluster associated with each tracked target (i.e., target object).
When the number of target points included in a target cluster is greater than 1 and the maximum velocity of the target points is greater than the set threshold vTh1 (i.e., the first preset velocity), search the extreme-value information of the target points included in each target cluster, such as the maximum azimuth angle aziMax and minimum azimuth angle aziMin, the maximum pitch angle elvMax and minimum pitch angle elvMin, and the maximum radial distance rMax and minimum radial distance rMin.
Calculate the spatial distribution of each target cluster in three-dimensional space, that is, obtain the three-dimensional dimensions of the target object in the sensor's monitoring field-of-view cross-section, such as the width (width), length (long) and height (height) of the target object, and the height heightMax of the highest point in each target cluster.
Specifically, the above parameter information may be obtained by using the following formula:
width = R*(sin(aziMax) - sin(aziMin))
height = R*(sin(elvMax) - sin(elvMin))
long=rMax-rMin
heightMax=R*sin(elvMax)
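The dimension computations can be sketched as follows. Since the width and height formulas do not render in this text, the sine-projection forms below are an assumption chosen for consistency with heightMax = R*sin(elvMax); the function name is also illustrative:

```python
import math

def cluster_dimensions(R, azi_max, azi_min, elv_max, elv_min, r_max, r_min):
    """Spatial extent of a target cluster as seen from the sensor.
    Angles in radians, distances in metres."""
    width = R * (math.sin(azi_max) - math.sin(azi_min))   # lateral extent
    height = R * (math.sin(elv_max) - math.sin(elv_min))  # vertical extent
    long_ = r_max - r_min                                 # radial (length) extent
    height_max = R * math.sin(elv_max)                    # height of the highest point
    return width, long_, height, height_max
```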
Four circular buffers may then be maintained continuously, storing respectively the width, long, height and heightMax values of the most recent 1 s to 2 s of historical data; the mean of all data stored in each circular buffer is computed to obtain the width mean widthAvg, the length mean longAvg, the height mean heightAvg and the highest-point height mean heightMaxAvg.
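A minimal sketch of those circular stores, assuming a fixed buffer length corresponding to 1 s to 2 s of frames (the frame rate, and hence the `size` of 20, is an assumption):

```python
from collections import deque

class RollingMean:
    """Circular buffer keeping the last `size` samples and their mean."""
    def __init__(self, size):
        self.buf = deque(maxlen=size)

    def push(self, value):
        """Store a new sample (oldest is dropped when full); return the mean."""
        self.buf.append(value)
        return sum(self.buf) / len(self.buf)

# One buffer per quantity, as in the four circular stores above.
stores = {name: RollingMean(size=20) for name in ("width", "long", "height", "heightMax")}
```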
Judge whether the velocity of the target cluster center point is greater than the set threshold vTh2 (i.e., the second velocity threshold); when it is, update the reference height heightBmark of the highest point in each target cluster, where the initial value of heightBmark is the height of the highest point cloud in the first-frame target cluster. Specifically:
heightBmark(n)=(1-α)*heightBmark(n-1)+α*heightMaxAvg
where heightBmark(n-1) is the reference height of the highest point in the target cluster in the previous frame, α is a smoothing factor, heightMaxAvg is the mean height of the highest point in the current frame, heightBmark(n) is the reference height of the highest point in the target cluster in the current frame, and n is an integer greater than 1.
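This update rule is a standard exponential moving average, and can be sketched directly (the default α = 0.1 follows the value given in the detailed example later in the description):

```python
def update_height_bmark(prev_bmark, height_max_avg, alpha=0.1):
    """heightBmark(n) = (1 - alpha) * heightBmark(n-1) + alpha * heightMaxAvg.
    A small alpha makes the reference height track slow drift while
    ignoring brief dips of the highest point."""
    return (1.0 - alpha) * prev_bmark + alpha * height_max_avg
```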
Next, judge the target type from the height mean heightAvg of the target cluster: when heightAvg > hTh1 holds for m consecutive frames, the target object is judged to be an adult; if hTh1 > heightAvg ≥ hTh2 holds for m consecutive frames, the target object is judged to be a child; otherwise, the target object is judged to be a pet. Here hTh1 (i.e., the first height threshold) and hTh2 (i.e., the second height threshold) are height thresholds set for determining the target type, with hTh1 > hTh2.
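A sketch of this m-consecutive-frames type decision; the default threshold values (hTh1 = 1.2 m, hTh2 = 0.5 m, m = 3) follow the worked example later in the description, and the function name is an assumption:

```python
def classify_target(height_avg_history, h_th1=1.2, h_th2=0.5, m=3):
    """Decide the target type from the last m frames of heightAvg (metres)."""
    recent = height_avg_history[-m:]
    if len(recent) < m:
        return "unknown"                       # not enough frames yet
    if all(h > h_th1 for h in recent):
        return "adult"
    if all(h_th2 <= h < h_th1 for h in recent):
        return "child"
    return "pet"
```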
Next, for the case where the target object is an adult, the human posture is recognized. That is, after the target is confirmed to be an adult, posture recognition is performed on the adult: a height change value h_diff is calculated from the highest-point reference height heightBmark and the highest-point height mean heightMaxAvg obtained in the above steps, namely h_diff = heightBmark - heightMaxAvg.
Then judge the adult posture: when h_diff ≥ ly_Th, the adult posture is judged to be "standing"; when ly_Th > h_diff > sit_Th and heightAvg - widthAvg < 0.5 m, the adult posture is judged to be "sitting"; when sit_Th ≥ h_diff and heightAvg < widthAvg, the adult posture is judged to be "lying". In addition, when the transition time of the adult from the "standing" state to the "lying" state is less than 1 s, it can also be judged that the adult has fallen (i.e., the motion-state judgment). Here ly_Th and sit_Th are height-change thresholds set for judging posture: ly_Th is the first posture threshold and sit_Th is the second posture threshold.
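A sketch of the posture decision with the comparison directions exactly as stated in this paragraph (the text notes elsewhere that "<" and ">" may be swapped as needed); the default thresholds (ly_Th = 1 m, sit_Th = 0.4 m) follow the worked example later, and the fallback label is an assumption:

```python
def classify_posture(h_diff, height_avg, width_avg, ly_th=1.0, sit_th=0.4):
    """Posture decision from the height change h_diff and the cluster's
    mean height/width (all in metres)."""
    if h_diff >= ly_th:
        return "standing"
    if ly_th > h_diff > sit_th and (height_avg - width_avg) < 0.5:
        return "sitting"
    if sit_th >= h_diff and height_avg < width_avg:
        return "lying"
    return "unknown"  # extra conditions not met: leave undecided
```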
In this embodiment, the type and posture of the target are identified directly from the point cloud features of the target reflections, so the identification accuracy is high, the computation load is small, and the cost of the monitor can be reduced.
In the following, a millimeter-wave radar is taken as an example to perform target classification on living bodies in a room and posture judgment on targets classified as adults:
fig. 3 is a schematic diagram of a module structure of the millimeter wave radar for performing target detection, fig. 4 is a first schematic diagram of a process of performing target detection by the millimeter wave radar, fig. 5 is a second schematic diagram of the process of performing target detection by the millimeter wave radar, and fig. 6 is a third schematic diagram of the process of performing target detection by the millimeter wave radar.
As shown in fig. 4, the millimeter wave radar may include a radar front end, a radar signal processing module, a radar data processing module, and the like. The radar front end includes an antenna, a radio-frequency front-end module (e.g., an analog-to-digital conversion unit, ADC), and the like; the radar signal processing module may include an FFT unit, a CFAR detection unit, a DOA unit, and the like; and the radar data processing module may include a clustering unit, a target association and tracking unit, a target type and posture identification unit, and the like. For example, the millimeter wave radar has a 4-transmit, 4-receive antenna array; it transmits a frequency-modulated continuous wave (FMCW) into the monitored area in MIMO (multiple-transmit, multiple-receive) fashion and receives the echo signals reflected by targets in the monitored area. The echo signal is received by the radar's receiving module and sent to the back-end signal processing module; after ADC, FFT, CFAR detection and DOA angle estimation, reflection-point data about the target (i.e., the point cloud information) is obtained. The point cloud information may include the radial distance from the target to the radar, the radial velocity, the horizontal angle, the vertical angle, the reflection intensity and other information, thereby completing the radar signal processing.
The obtained point cloud information is then sent to the radar data processing module for clustering to obtain target clusters. Common clustering algorithms include nearest-neighbor clustering, K-means clustering and DBSCAN; DBSCAN is taken as the example in this embodiment. After clustering, the target clusters undergo association, filtering and tracking, so that the target type and posture identification method provided by the embodiment of the application is applied to the point cloud cluster associated with each tracked target, yielding the type, posture and other information of each tracked target (i.e., the target object).
As shown in fig. 5, the present embodiment provides the steps of a method for detecting the type and posture of an indoor target based on a 4D millimeter-wave radar, specifically:
First, the detection point cloud of the 4D radar is acquired, and the target point cloud cluster associated with each target is obtained.
And calculating the radial distance R from the cluster center of each target cluster to the radar.
For target clusters containing more than 1 point and whose maximum absolute radial velocity exceeds the set threshold vTh1, search the maximum azimuth angle aziMax and minimum azimuth angle aziMin, the maximum pitch angle elvMax and minimum pitch angle elvMin, and the maximum radial distance rMax and minimum radial distance rMin of the points in each cluster. In this embodiment, the threshold vTh1 on the maximum absolute radial velocity of the point cloud in a target cluster may be set to 0.1 m/s.
Then calculate the spatial distribution of each target cluster in three-dimensional space, including the width (width), length (long), height (height), and the height heightMax of the highest point in each target cluster; the specific formulas are as follows:
width = R*(sin(aziMax) - sin(aziMin))
height = R*(sin(elvMax) - sin(elvMin))
long=rMax-rMin
heightMax=R*sin(elvMax)
Then establish 4 circular buffers, storing respectively the width (width), length (long), height (height) and heightMax values of the most recent 1-2 s, and calculate the mean of all data stored in each buffer to obtain the width mean widthAvg, the length mean longAvg, the height mean heightAvg and the highest-point height mean heightMaxAvg. In an alternative embodiment, the above operation may be performed on the historical data stored over the most recent 1 s.
Finally, when the velocity of the target cluster center point is greater than the set threshold vTh2, update the reference height heightBmark of the highest point in each target cluster, where the initial value of heightBmark is the height of the highest point cloud in the first-frame target cluster.
Specifically, the calculation updating method is as follows:
heightBmark(n)=(1-α)*heightBmark(n-1)+α*heightMaxAvg
In this embodiment, the initial value of heightBmark is the measurement from the first frame of the valid target point cloud cluster; α may be 0.1, and vTh2 may be 0.3 m/s.
As shown in figs. 4-5, once the information needed for judging the target type and posture is obtained, the target type is judged first. Specifically, when heightAvg > hTh1 holds for 3 consecutive frames, the target is judged to be an adult; otherwise, if hTh1 > heightAvg > hTh2 holds for 3 consecutive frames, the target is judged to be a child; otherwise, the target is a pet. In this example, hTh1 was set to 1.2 m and hTh2 to 0.5 m.
As shown in fig. 6, when the target is confirmed to be an adult, posture recognition is performed on the adult, assuming the adult is in a standing posture when first entering the radar monitoring area. When ly_Th > h_diff > sit_Th and heightAvg - widthAvg < 0.5 m, the adult posture is judged to be "sitting"; when ly_Th < h_diff and heightAvg < widthAvg, the adult posture is judged to be "lying"; when the transition time from the standing state to the lying state is less than 1 s, the adult is judged to have fallen; and when h_diff is less than sit_Th, the adult posture is judged to be "standing". In this example, sit_Th was 0.4 m and ly_Th was 1 m.
In the embodiments of the present application, "<", ">" may be replaced with "≦" or "≧" respectively according to actual needs, as long as they do not conflict with each other. It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (29)

1. A method of target detection, comprising:
transmitting a detection signal by using a wireless sensor, and receiving an echo signal formed by the reflection of a target object;
performing signal processing on the echo signal to obtain point cloud data; and
and carrying out data processing on the point cloud data to obtain target information.
2. The method of claim 1, wherein the wireless sensor comprises a millimeter wave radar; and/or
The point cloud data comprises radial distance, radial speed, azimuth angle and pitch angle, and the target information comprises the number, speed, distance, direction angle, image and/or attitude of target objects.
3. A method of target detection, comprising:
performing signal processing on any frame of echo signal to obtain point cloud data comprising a plurality of target point four-dimensional data, and performing clustering processing on the point cloud data to obtain a plurality of target clusters;
aiming at any one target cluster, acquiring a first radial distance of a cluster center of the target cluster and an extreme value parameter in target point four-dimensional data included in the target cluster;
acquiring three-dimensional space size data of the corresponding target object and a mean value of the height of the highest target point based on the first radial distance and the extreme value parameter; and
and judging the target object based on the average value of the height of the highest target point and/or the three-dimensional space size data.
4. The method of claim 3, wherein the target point four-dimensional data comprises radial velocity, azimuth, elevation, and second radial distance.
5. The method of claim 4, wherein the target point four-dimensional data further comprises a signal-to-noise ratio;
and clustering the point cloud data based on the signal-to-noise ratio.
6. The method according to any of claims 3-5, wherein the clustering process comprises a nearest neighbor clustering operation, a K-means clustering operation, or a DBSCAN clustering operation.
7. The method of claim 5, wherein the signal processing comprises sequential operations of mixing, analog-to-digital conversion, sampling, two-dimensional fast Fourier transform, constant false alarm detection, and direction of arrival estimation.
8. The method of claim 4, wherein the extreme parameter comprises a maximum radial velocity in the four-dimensional data of the target point included in the target cluster; the method further comprises the following steps:
judging whether the target cluster meets a preset condition or not;
if the preset condition is met, acquiring three-dimensional space size data and the highest target point reference height of the corresponding target object based on the first radial distance and the extreme value parameter;
otherwise, the target cluster is taken as a pseudo target cluster for processing;
the preset condition is that the number of target points included in the target cluster is greater than a preset number, and the included maximum radial velocity is greater than a first preset velocity.
9. The method of claim 4, wherein the extreme parameters include a maximum azimuth angle, a minimum azimuth angle, a maximum pitch angle, a minimum pitch angle, a maximum second radial distance, and a minimum second radial distance in the four-dimensional data of the target point included in the target cluster; and
the three-dimensional space dimensions include length, width and height in the detection view angle section.
10. The method of claim 9, wherein said obtaining three-dimensional spatial dimension data of the corresponding object based on the first radial distance and the extremum parameter comprises:
obtaining the width based on the first radial distance, the maximum azimuth angle, and the minimum azimuth angle;
obtaining the altitude based on the first radial distance, the maximum pitch angle, and the minimum pitch angle; and
obtaining the length based on the maximum second radial distance and the minimum second radial distance.
11. The method of claim 9, further comprising:
and acquiring the highest target point height based on the first radial distance and the maximum pitch angle.
12. The method of claim 11, further comprising:
acquiring the mean value of the three-dimensional space size data based on historical data in a current preset time period;
wherein the mean of the three-dimensional spatial dimension data comprises at least one of the mean of the lengths, the mean of the widths, the mean of the heights, and the mean of the highest target point heights.
13. The method of claim 12, further comprising:
acquiring the highest target point reference height of the current frame based on a smoothing factor, the average value of the highest target point heights and the highest target point reference height of the previous frame;
in a preset processing time period, the highest target point reference height of the first frame is the highest target point height in the target cluster corresponding to the first frame.
14. The method of claim 13, further comprising:
acquiring the speed of the clustering center of the target cluster;
judging whether the speed of the clustering center of the target cluster is greater than a second preset speed value or not;
and if so, acquiring the highest target point reference height of the current frame based on the smoothing factor, the average value of the highest target point heights and the highest target point reference height of the previous frame.
15. The method of claim 13, wherein determining the target object based on the average value of the highest target point height comprises:
presetting at least one height threshold; and
determining the type of the target object by comparing the highest target point reference height with each of the height thresholds.
16. The method of claim 15, wherein the at least one height threshold comprises a first height threshold and a second height threshold, and wherein the first height threshold is greater than the second height threshold; the types of the targets comprise a first type target, a second type target and a third type target;
the determining the type of the target object by comparing the mean of the highest target point heights with the height thresholds comprises:
if the average value of the heights of the highest target points is larger than the first height threshold value, determining that the target object is the first type target object;
if the average value of the heights of the highest target points is smaller than the second height threshold value, determining that the target object is the third type target object;
otherwise, the target object is confirmed to be the second type target object.
17. The method of claim 15, comprising:
determining a target object height variation value based on the average of the highest target point heights and the highest target point reference height;
and determining each type of posture based on the type of the target object, the height change value of the target object and the mean value of the three-dimensional space size.
18. The method of claim 17, wherein the type of object comprises "adult" and the postures comprise "standing", "sitting" and "lying"; the determining each type of gesture based on the type of the object, the object height variation value and the mean of the three-dimensional space size includes:
presetting a standing threshold and a sitting threshold;
when the type of the target object is 'adult', if the height variation value of the target object is greater than or equal to the standing threshold value, determining that the posture of the target object is 'standing'; if the height variation value of the target object is between the standing threshold and the sitting threshold, determining that the posture of the target object is 'sitting'; and if the height variation value of the target object is less than or equal to the sitting threshold, determining the posture of the target object as lying.
19. The method of claim 18, wherein the posture of the object is output as "sitting" if the height variation value of the object is between the standing threshold and the sitting threshold and the mean of the height minus the width is less than a preset difference.
20. The method of claim 18, wherein if the height variation value of the object is less than or equal to the sitting threshold value and the height is less than the average value of the widths, the posture of the object is outputted as lying.
21. The method of any one of claims 18-20, further comprising:
determining a motion state of the target object based on a transition time between the poses of the target object.
22. The method of claim 21, wherein determining the motion state of the object based on the transition time between the poses of the object comprises:
and if the time for converting the posture of the target object from standing to lying is less than a preset time threshold, determining that the motion state of the target object is 'falling'.
23. An apparatus for object detection comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the method of any of claims 1-2 and 3-22.
24. A storage medium containing computer-executable instructions for performing the method of any one of claims 1-2, 3-22 when executed by a computer processor.
25. A radio device, comprising:
the radio transmitting and receiving channel is used for transmitting radio signals and receiving echo signals formed by reflection of a target object;
the signal processing module is used for carrying out signal processing on the echo signals to obtain point cloud data comprising a plurality of target point four-dimensional data; and
and the data processing module is used for carrying out data processing on the point cloud data so as to judge at least one of the type, the posture and the motion state of the detected target object.
26. The radio device of claim 25, wherein the data processing module is configured to perform the method of any of claims 3-22 to derive at least one of a type, a posture and a state of motion of the detected object.
27. The radio device of claim 26, wherein the radio device is an on-chip antenna chip or a packaged antenna chip; and/or
The radio device is a millimeter wave radar chip.
28. A radio device, comprising:
a carrier;
a radio as claimed in any of claims 25 to 27, provided on a carrier;
an antenna disposed on the carrier or disposed on the carrier as an integral device with the radio;
the radio device is connected with the antenna and used for transmitting and receiving radio signals.
29. An electronic device, comprising:
an apparatus body; and
the radio of claim 28 disposed on the equipment body;
wherein the radio device is used for object detection and/or communication.
CN202110200830.9A 2021-02-23 2021-02-23 Target detection method, device and related equipment Pending CN112859033A (en)

Publications (1)

Publication Number Publication Date
CN112859033A true CN112859033A (en) 2021-05-28


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113625228A (en) * 2021-07-09 2021-11-09 中汽创智科技有限公司 Single-frame data processing method and device, electronic equipment and storage medium
CN113947867A (en) * 2021-09-23 2022-01-18 宁波溪棠信息科技有限公司 Method, system, electronic device and storage medium for detecting abnormal target behaviors
CN114755648A (en) * 2022-03-22 2022-07-15 珠海正和微芯科技有限公司 Object detection system, method, device and storage medium
WO2022195954A1 (en) * 2021-03-17 2022-09-22 ソニーセミコンダクタソリューションズ株式会社 Sensing system
CN115494472A (en) * 2022-11-16 2022-12-20 中南民族大学 Positioning method based on enhanced radar wave signal, millimeter wave radar and device

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109188382A (en) * 2018-07-27 2019-01-11 惠州华阳通用电子有限公司 A kind of target identification method based on millimetre-wave radar
CN109581312A (en) * 2018-11-22 2019-04-05 西安电子科技大学昆山创新研究院 A kind of high-resolution millimetre-wave radar multi-object clustering method
US20190108740A1 (en) * 2017-10-06 2019-04-11 Tellus You Care, Inc. Non-contact activity sensing network for elderly care
CN109993192A (en) * 2018-01-03 2019-07-09 北京京东尚科信息技术有限公司 Recongnition of objects method and device, electronic equipment, storage medium
JP2019153188A (en) * 2018-03-05 2019-09-12 学校法人 芝浦工業大学 Object recognition device and object recognition method
CN110567135A (en) * 2019-10-08 2019-12-13 珠海格力电器股份有限公司 air conditioner control method and device, storage medium and household equipment
CN110647835A (en) * 2019-09-18 2020-01-03 合肥中科智驰科技有限公司 Target detection and classification method and system based on 3D point cloud data
CN110925969A (en) * 2019-10-17 2020-03-27 珠海格力电器股份有限公司 Air conditioner control method and device, electronic equipment and storage medium
CN111491426A (en) * 2020-05-19 2020-08-04 河南职业技术学院 Intelligent light control system and control method
WO2020216316A1 (en) * 2019-04-26 2020-10-29 纵目科技(上海)股份有限公司 Driver assistance system and method based on millimetre wave radar, terminal, and medium
KR20200126141A (en) * 2019-04-29 2020-11-06 충북대학교 산학협력단 System and method for multiple object detection using multi-LiDAR

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190108740A1 (en) * 2017-10-06 2019-04-11 Tellus You Care, Inc. Non-contact activity sensing network for elderly care
CN109993192A (en) * 2018-01-03 2019-07-09 Beijing Jingdong Shangke Information Technology Co., Ltd. Object recognition method and device, electronic device, storage medium
JP2019153188A (en) * 2018-03-05 2019-09-12 Shibaura Institute of Technology Object recognition device and object recognition method
CN109188382A (en) * 2018-07-27 2019-01-11 Huizhou Foryou General Electronics Co., Ltd. Target recognition method based on millimeter-wave radar
CN109581312A (en) * 2018-11-22 2019-04-05 Kunshan Innovation Institute of Xidian University High-resolution millimeter-wave radar multi-object clustering method
WO2020216316A1 (en) * 2019-04-26 2020-10-29 Zongmu Technology (Shanghai) Co., Ltd. Driver assistance system and method based on millimetre wave radar, terminal, and medium
KR20200126141A (en) * 2019-04-29 2020-11-06 Chungbuk National University Industry-Academic Cooperation Foundation System and method for multiple object detection using multi-LiDAR
CN110647835A (en) * 2019-09-18 2020-01-03 Hefei Zhongke Zhichi Technology Co., Ltd. Target detection and classification method and system based on 3D point cloud data
CN110567135A (en) * 2019-10-08 2019-12-13 Gree Electric Appliances Inc. of Zhuhai Air conditioner control method and device, storage medium and household equipment
CN110925969A (en) * 2019-10-17 2020-03-27 Gree Electric Appliances Inc. of Zhuhai Air conditioner control method and device, electronic equipment and storage medium
CN111491426A (en) * 2020-05-19 2020-08-04 Henan Vocational and Technical College Intelligent light control system and control method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Guangzheng Li, et al.: "Capturing Human Pose Using mmWave Radar", 2020 IEEE International Conference on Pervasive Computing, pages 1-6 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022195954A1 (en) * 2021-03-17 2022-09-22 Sony Semiconductor Solutions Corporation Sensing system
CN113625228A (en) * 2021-07-09 2021-11-09 China Automotive Innovation Co., Ltd. Single-frame data processing method and device, electronic equipment and storage medium
CN113947867A (en) * 2021-09-23 2022-01-18 Ningbo Xitang Information Technology Co., Ltd. Method, system, electronic device and storage medium for detecting abnormal target behaviors
CN113947867B (en) * 2021-09-23 2023-06-27 Bestechnic (Shanghai) Co., Ltd. Method, system, electronic device and storage medium for detecting abnormal target behavior
CN114755648A (en) * 2022-03-22 2022-07-15 Zhuhai Zhenghe Microchip Technology Co., Ltd. Object detection system, method, device and storage medium
CN114755648B (en) * 2022-03-22 2023-01-06 Zhuhai Zhenghe Microchip Technology Co., Ltd. Object detection system, method, device and storage medium
CN115494472A (en) * 2022-11-16 2022-12-20 South-Central Minzu University Positioning method based on enhanced radar wave signal, millimeter wave radar and device
CN115494472B (en) * 2022-11-16 2023-03-10 South-Central Minzu University Positioning method based on enhanced radar wave signal, millimeter wave radar and device

Similar Documents

Publication Publication Date Title
US11885872B2 (en) System and method for camera radar fusion
CN112859033A (en) Target detection method, device and related equipment
CN109917390A (en) Vehicle checking method and system based on radar
CN112816960A (en) In-vehicle life detection method, device, equipment and storage medium
CN113093170A (en) Millimeter wave radar indoor personnel detection method based on KNN algorithm
CN116106855B (en) Tumble detection method and tumble detection device
CN112394334A (en) Radar reflection point clustering device and method and electronic equipment
CN112946630B (en) Personnel counting and tracking method based on millimeter wave radar
CN112162283A (en) All-section networking traffic radar multi-target detection system
Cui et al. 3D detection and tracking for on-road vehicles with a monovision camera and dual low-cost 4D mmWave radars
Sengupta et al. Automatic radar-camera dataset generation for sensor-fusion applications
CN109766851A (en) Determination method and device, the Car reversion image-forming equipment of barrier
KR20220141748A (en) Method and computer readable storage medium for extracting target information from radar signal
CN112198507B (en) Method and device for detecting human body falling features
CN112859059A (en) Target detection and tracking system and method
WO2020180074A1 (en) Determining relevant signals using multi-dimensional radar signals
Xie et al. Lightweight midrange arm-gesture recognition system from MmWave radar point clouds
CN115131756A (en) Target detection method and device
CN113591695A (en) Pedestrian re-identification method and device based on millimeter wave radar point cloud
CN113325410A (en) Radar antenna signal processing method and device, control equipment and storage medium
CN114859337A (en) Data processing method and device, electronic equipment and computer storage medium
CN113820704A (en) Method and device for detecting moving target and electronic equipment
CN113848825B (en) AGV state monitoring system and method for flexible production workshop
Li et al. Indoor Multi-Human Device-Free Tracking System Using Multi-Radar Cooperative Sensing
Streck et al. Comparison of two different radar concepts for pedestrian protection on bus stops

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination