CN116520315A - Target recognition system, target recognition method and target recognition device


Info

Publication number
CN116520315A
Authority
CN
China
Prior art keywords
target
radar
point cloud
client
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310310134.2A
Other languages
Chinese (zh)
Inventor
黄湘烨
郑旭
骆睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huaxun Ark Photoelectric Technology Co ltd
Shenzhen Institute of Terahertz Technology and Innovation
Original Assignee
Shenzhen Huaxun Ark Photoelectric Technology Co ltd
Shenzhen Institute of Terahertz Technology and Innovation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huaxun Ark Photoelectric Technology Co ltd, Shenzhen Institute of Terahertz Technology and Innovation filed Critical Shenzhen Huaxun Ark Photoelectric Technology Co ltd
Priority to CN202310310134.2A
Publication of CN116520315A
Legal status: Pending

Classifications

    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/886 Radar or analogous systems specially adapted for alarm systems
    • G01S7/36 Means for anti-jamming, e.g. ECCM, i.e. electronic counter-counter measures
    • G01S7/41 Target characterisation using analysis of echo signal; target signature; target cross-section
    • G01S7/411 Identification of targets based on measurements of radar reflectivity
    • A61B5/00 Measuring for diagnostic purposes; identification of persons
    • A61B5/0059 Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; measuring using microwaves or radio waves
    • A61B5/1116 Determining posture transitions
    • A61B5/1117 Fall detection
    • G05B19/0421 Programme control using digital processors; multiprocessor systems
    • G05B19/0423 Programme control using digital processors; input/output
    • G05B19/0428 Programme control using digital processors; safety, monitoring
    • G06V40/20 Recognition of movements or behaviour in image or video data, e.g. gesture recognition
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Automation & Control Theory (AREA)
  • Dentistry (AREA)
  • Electromagnetism (AREA)
  • Physiology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The application discloses a target recognition system, a target recognition method and a target recognition device. The target recognition system comprises: a client device, a server and a client. The client device acquires radar signals of the area where a target is located, generates a three-dimensional point cloud from the radar signals, and analyzes target information based on the distribution of radar intensity information in the three-dimensional point cloud; the client device sends the target information to the server for storage; and the server sends the target information to the client according to the client's target recognition instruction. Because the radar sensor in the client device acquires the radial distance of the detected object, the sensed data carries more information, which facilitates recognition and judgment; meanwhile, the volume of data the radar sensor acquires is modest, so the balance between semantic richness and data complexity (redundancy) can be tuned conveniently, the complexity and precision of data transmission, processing and computation across the whole target recognition system can be controlled, and a wider space of candidate solutions is available for engineering deployment.

Description

Target recognition system, target recognition method and target recognition device
Technical Field
The present disclosure relates to the field of behavior recognition technologies, and in particular, to a target recognition system, a target recognition method, and a target recognition device.
Background
A sound elderly-care service system is being established step by step and, considering current social realities, the traditional hospital-centered elderly-care monitoring model is being reasonably transformed into an intelligent "hospital plus home" monitoring model.
In view of the foregoing, the problems of early prevention and timely warning of fall accidents among people living at home need to be solved. In recent years human body posture detection and behavior recognition have developed greatly, and schemes that recognize through video (images) are quite mature. However, most of the closest technical schemes collect data with a visible-light camera, obtaining video from which the human body and its key points are detected in order to recognize top-level information such as posture and behavior. They cannot overcome the limitations of visible-light cameras in special usage scenarios: lack of privacy, lighting requirements, 2D data that cannot directly and accurately provide distance information, relatively complex equipment, demanding installation requirements, and so on.
Disclosure of Invention
The application provides a target recognition system, a target recognition method and a target recognition device.
One technical solution adopted in the present application is to provide a target recognition system, which includes: a client device, a server and a client; wherein,
the client device is used for acquiring radar signals of the area where a target is located and generating a three-dimensional point cloud according to the radar signals;
the client device is used for analyzing target information based on radar intensity information distribution in the three-dimensional point cloud;
the client device is used for sending the target information to the server for storage;
and the server is used for sending the target information to the client according to the target recognition instruction of the client.
The client device comprises a radar sensor, a data processing module and a target recognition module; wherein,
the radar sensor is used for acquiring radar signals of the area where the target is located;
the data processing module is used for generating a three-dimensional point cloud according to the radar signals of the radar sensor;
the target recognition module is used for analyzing the radar intensity information distribution in the three-dimensional point cloud generated by the data processing module and analyzing the target information according to the radar intensity information distribution.
Wherein the client device further comprises an infrared sensor; wherein,
the infrared sensor is used for acquiring infrared signals of the area where the target is located;
the target recognition module is further used for analyzing the human body position of the target based on the infrared signals.
Wherein the radar sensor is a millimeter wave radar sensor.
The target recognition module is further used for extracting a target point cloud from the three-dimensional point cloud, and analyzing the behavior of the target based on radar intensity information distribution of the target point cloud.
The target recognition module is further used for analyzing the behavior change of the target according to continuous multi-frame radar signals, and performing gesture recognition, behavior recognition, fall judgment, vital sign judgment and/or human safety state monitoring on the target.
The target recognition module is further used for analyzing and extracting respiratory frequency components and/or heartbeat frequency components in the radar signals by utilizing Doppler signals, extracting respiratory frequency of the target according to the respiratory frequency components and/or extracting heartbeat frequency of the target according to the heartbeat frequency components.
The client is further configured to send a control parameter to the server, send the control parameter to the client device through the server, and adjust a target recognition parameter of the client device by using the control parameter.
Another technical solution adopted in the present application is to provide a target recognition method, where the target recognition method includes:
acquiring a radar signal of an area where a target is located;
generating a three-dimensional point cloud based on the radar signals, wherein each data point in the three-dimensional point cloud comprises radar intensity information;
and extracting a target point cloud from the three-dimensional point cloud, and analyzing the behavior of the target based on radar intensity information distribution of the target point cloud.
Another technical solution adopted by the present application is to provide a target recognition device, which includes a memory and a processor coupled to the memory;
wherein the memory is configured to store program data, and the processor is configured to execute the program data to implement the target recognition method as described above.
The beneficial effects of this application are: the object recognition system includes: client device, server and client; the client equipment is used for acquiring radar signals of an area where a target is located and generating three-dimensional point clouds according to the radar signals; the client device is used for analyzing target information based on radar intensity information distribution in the three-dimensional point cloud; the client equipment is used for sending the target information to the server for storage; and the server is used for sending the target information to the client according to the target identification instruction of the client. According to the target recognition system, the radar sensor in the client equipment is used for acquiring the radial distance of the detection object, the sensing data information quantity is more, recognition and judgment are facilitated, the data quantity acquired by the radar sensor is not large, the data quantity can be converted into the 3D point cloud with the specified density after being processed, the balance between the richness of semantic information and the complexity (redundancy) of the data can be conveniently controlled, the complexity and the precision of data transmission, processing and operation of the whole detection recognition system are further controlled, and a wider solution selection space is provided for engineering landing.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a framework of one embodiment of the target recognition system provided herein;
FIG. 2 is a diagram of the overall architecture of the radar-infrared multi-modality detection system IOT provided herein;
FIG. 3 is a schematic view of a scenario of a client device provided herein;
FIG. 4 is a schematic diagram of the overall architecture of a radar-infrared multi-mode detection single module provided by the present application;
FIG. 5 is a diagram of a multi-modal information fusion system algorithm architecture provided herein;
FIG. 6 is a flowchart illustrating an embodiment of a method for identifying an object provided in the present application;
FIG. 7 is a schematic diagram of the primary sub-module algorithm flow for radar point cloud generation and gesture behavior recognition provided by the present application;
FIG. 8 is a schematic diagram of the secondary sub-module algorithm flow for human body gesture and behavior recognition provided by the present application;
FIG. 9 is a schematic illustration of a two-dimensional feature matrix provided herein;
FIG. 10 is a graph of vertical height versus signal strength provided herein;
FIG. 11 is a graph of height versus time provided herein;
FIG. 12 is a schematic diagram of the radar point cloud generation secondary sub-module algorithm flow provided herein;
FIG. 13 is a schematic diagram of the multi-target positioning and tracking secondary sub-module algorithm flow provided herein;
FIG. 14 is a schematic diagram of the primary sub-module algorithm flow for extracting vital sign data from millimeter wave radar echo data provided by the present application;
FIG. 15 is a schematic structural diagram of an embodiment of the target recognition device provided in the present application;
FIG. 16 is a schematic structural diagram of another embodiment of the target recognition device provided herein;
FIG. 17 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Existing human body gesture/behavior recognition basically uses 2D plane images as the detection data source, while vital sign monitoring basically relies on contact-type portable devices or even large medical instruments. The drawbacks are as follows:
1. The current mainstream schemes cannot protect the personal privacy of the detected object; their application scenarios are limited and they can only be used on public occasions.
2. The requirements on ambient light are harsh, and performance is poor when visible light is weak or the detected object is occluded.
3. An image is 2D single-modality data, from which the distance of the detected object within the receptive field cannot be obtained directly.
4. The image data volume is large, so the throughput required for data transmission, processing and computation is high, placing greater demands on the detection system.
5. Most existing vital sign monitors are carried on electronic products, which do not suit the living habits of the elderly.
This application adopts a multi-modal detection method combining millimeter wave radar and infrared sensing to recognize human body gesture and behavior in the detected environment, track multiple human targets, and monitor, around the clock, the vital signs of people in occluded areas and whether falls occur.
This application adapts to a wider range of scenarios, and its sensors do not depend on visible light to acquire data:
1. The raw data the sensors collect carries no privacy information of the detected object, so the system is equally suitable for special and private scenarios (e.g. nursing homes, ordinary household bedrooms and bathrooms, etc.).
2. Radar and infrared sensing do not depend on visible light and work identically around the clock, day and night. The millimeter wave radar is unaffected by special occlusions (such as water vapor and fog, indoor curtains, clothes, etc.) and can penetrate non-metallic walls of a certain thickness.
3. The radar sensor can acquire the radial distance of the detected object; the dimensionality and information content of the sensed data exceed those of a 2D image, which facilitates recognition and judgment.
4. The volume of data the radar sensor acquires is modest and can be processed into a 3D point cloud of specified density, so the balance between semantic richness and data complexity (redundancy) can be tuned conveniently, which in turn controls the complexity and precision of data transmission, processing and computation across the whole detection and recognition system, providing a wider space of candidate solutions for engineering deployment.
5. Breathing and heart rate detection is performed by adjusting the equipment installation and separating Doppler-dimension features at a specific distance to estimate breathing and heartbeat signals. The infrared sensor can assist in detecting the body temperature of living beings. This realizes non-contact vital sign monitoring and non-contact detection within a designated area, with no need to worry about charging or waterproofing the equipment. Combining the radar point cloud data with the radial distance information allows human body gesture and behavior to be recognized and judged better.
In short, this application adapts to a wider range of scenarios and its sensors do not depend on visible light to collect data. The raw sensor data carries no privacy information of the detected object, so the system also suits special and private scenarios such as nursing homes, medical care rooms and ordinary household bedrooms and bathrooms; radar and infrared sensing work identically day and night without visible light, and the millimeter wave radar is unaffected by special occlusions such as water vapor and fog, indoor curtains and clothes; the radar sensor can acquire the radial distance of the detected object, so the sensed data carries more information and facilitates recognition and judgment; the acquired data volume is modest and can be processed into a 3D point cloud of specified density, conveniently balancing semantic richness against data complexity (redundancy) and thereby controlling the complexity and precision of data transmission, processing and computation across the whole detection and recognition system, which widens the solution space for engineering deployment; and the infrared sensor can assist in detecting the vital signs of living beings. Combining the radar point cloud data with the radial distance information allows human body gesture and behavior to be recognized and judged better.
Specifically, referring to fig. 1 and fig. 2, fig. 1 is a schematic diagram of a framework of an embodiment of the target recognition system provided in the present application, and fig. 2 is the overall architecture diagram of the radar-infrared multi-modal detection system IOT provided in the present application.
As shown in fig. 1, the target recognition system 100 of the present application includes a client device 11, a server 12, and a client 13.
The client device 11 is configured to obtain a radar signal of an area where a target is located, and generate a three-dimensional point cloud according to the radar signal; the client device 11 is configured to analyze target information based on the radar intensity information distribution in the three-dimensional point cloud; the client device 11 is configured to send the target information to the server 12 for storage; the server 12 is configured to send the target information to the client 13 according to a target identification instruction of the client 13.
In a specific embodiment, as shown in fig. 2, the radar-infrared multi-modal detection system includes a terminal cluster composed of several terminal devices, each of which contains a client device module. A terminal cluster corresponds to one customer management unit: for example, several rooms of a private home, a nursing home administration managing multiple terminals in a unified way, or a unified care unit covering different rooms of a hospital.
Specifically, as shown in fig. 3, the terminal device may be installed on a wall or ceiling according to the actual situation; the radar sensor and the infrared sensor emit detection waves toward the designated area, generate 3D point cloud and other information, and perform human body gesture recognition and vital sign detection in real time, around the clock.
The terminal module can be installed on a wall or ceiling according to actual requirements. The client sets parameters such as the detection range and imports them into the device through the WIFI module; once started, the device works around the clock and sends the recognition result information to the background MQTT server in real time through the WIFI module.
MQTT server: the terminal cluster synchronizes its transmitted information with the MQTT server in real time through each device's WIFI module; the server acquires the collected and processed information of every client device in real time, synchronizes it to a database, and updates, associates and persists it together with the client list.
The client comes in a web version and a mobile phone App and exchanges data bidirectionally with the server over the network. The client can write related data to the MQTT server, and the MQTT server writes control parameters to the terminal device through WIFI. Meanwhile, the device can push real-time detection status to the client through the MQTT server.
The client is also used to send control parameters to the MQTT server, which forwards them to the client device, where they adjust the client device's target recognition parameters.
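To make this control-parameter path concrete, the following is a minimal sketch of how a client could publish parameters to the MQTT server and subscribe to the detection results pushed back, using the open-source paho-mqtt library (1.x style API). The broker address, topic names and JSON payload fields are illustrative assumptions; the application does not specify them.

    import json
    import paho.mqtt.client as mqtt

    BROKER_HOST = "mqtt.example.com"   # hypothetical broker address
    DEVICE_ID = "terminal-001"         # hypothetical terminal identifier

    client = mqtt.Client()             # paho-mqtt 1.x style constructor
    client.connect(BROKER_HOST, 1883)

    # The client (web page / phone App) publishes control parameters; the MQTT
    # server relays them to the terminal device over its WIFI module.
    control_params = {"detect_range_m": 5.0, "point_cloud_density": "high"}
    client.publish(f"devices/{DEVICE_ID}/control", json.dumps(control_params))

    # The same client receives real-time detection results pushed back by the
    # device through the MQTT server.
    def on_message(c, userdata, msg):
        print(msg.topic, json.loads(msg.payload))

    client.on_message = on_message
    client.subscribe(f"devices/{DEVICE_ID}/results")
    client.loop_forever()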
Referring to fig. 4, fig. 4 is a schematic diagram of an overall architecture of a radar-infrared multi-mode detection single module provided in the present application.
The client device 11 shown in fig. 1 specifically includes a radar sensor 111, an infrared sensor 112, a DSP (Digital Signal Processing) data processing module 113, an MCU (Microcontroller Unit) processing unit 114, and a WIFI module 115.
Specifically: 1. Radar sensor 111: contains the millimeter wave transmit/receive antennas, acquires the echo and passes it to the DSP data processing module 113 for data processing. 2. Infrared sensor 112: includes an infrared sensing transceiver, used mainly to detect the presence of a human body. 3. DSP data processing module 113: receives the data of the two sensors and processes the radar sensor's output data into 3D point cloud data. 4. MCU processing unit 114, i.e. the target recognition module: integrates the acquired data, performs the algorithmic computation and outputs the recognition result. 5. WIFI module 115: transmits the recognition result to the MQTT server.
The radar sensor 111 may be a millimeter wave radar sensor, or may be another type of radar sensor, which is not limited herein.
The DSP data processing module 113 is configured to generate a three-dimensional point cloud according to the radar signal of the radar sensor 111, and the MCU processing unit 114 analyzes the radar intensity information distribution in the three-dimensional point cloud generated by the DSP data processing module 113, and analyzes the target information according to the radar intensity distribution.
Specifically, the MCU processing unit 114 is further configured to extract a target point cloud from the three-dimensional point cloud, and analyze the behavior of the target based on the radar intensity information distribution of the target point cloud.
Referring to fig. 5, fig. 5 is the algorithm architecture diagram of the multi-modal information fusion system provided in the present application. As shown in fig. 5, the multi-modal information fusion system of the application uses the millimeter wave radar to generate a three-dimensional point cloud of the human body, realizing point cloud clustering and object segmentation on one hand, and human respiration detection and heart rate detection through Doppler-dimension signal analysis on the other. The target recognition device then fuses the two kinds of modal information and, based on the fused information, realizes the following functions: 1. gesture recognition; 2. behavior recognition; 3. fall judgment; 4. vital sign judgment; 5. human safety state monitoring.
The millimeter wave radar of the present application thus provides two functions: generating point cloud data for human body gesture and behavior recognition, and processing the echo information to extract human vital sign information.
The infrared sensor 112 acquires an infrared signal of the area where the target is located, so as to help the MCU 114 locate the human body position of the target according to the difference between the infrared information of the target and the background infrared information in the infrared signal.
In the embodiment of the application, the target recognition system 100 can use the client device 11 to detect in real time whether a fall accident or an illegal intrusion occurs within a designated range, which is highly practical and significant; privacy is fully protected, widening the range of application scenarios; and millimeter wave detection still works normally when occluded by light, thin environmental materials such as household cloth, wood, plastic, paper, ceramic and glass.
By changing the sensor parameters and the processing algorithm, the target recognition system can flexibly adjust the module's point cloud density and infrared sensing echo intensity, adapting to many different usage scenarios; it is especially well suited to scenarios with weak light and high privacy requirements.
The following describes the MCU processing unit 114, i.e. the function of the target recognition module, in connection with the target recognition method:
referring to fig. 6, fig. 6 is a flowchart illustrating an embodiment of a target recognition method provided in the present application, wherein the target recognition method is applied to the client device shown in fig. 1, and the specific structure and functions thereof are not described herein.
As shown in fig. 6, the target recognition method of the present application specifically includes the following steps:
step S21: and acquiring radar signals of the area where the target is located.
In the embodiment of the application, the client device collects radar signals of the area where the target is located by using the millimeter wave radar sensor.
Step S22: based on the radar signals, a three-dimensional point cloud is generated, wherein each data point in the three-dimensional point cloud contains radar intensity information.
In the embodiment of the application, the client device reconstructs three-dimensional point cloud data of the area where the target is located according to the radar signal, wherein each data point in the three-dimensional point cloud specifically comprises three-dimensional coordinate information and radar intensity information.
Step S23: and extracting a target point cloud from the three-dimensional point cloud, and analyzing the behavior of the target based on the radar intensity information distribution of the target point cloud.
In this embodiment of the present application, as shown in fig. 7 and fig. 8, the client device may obtain the target point cloud of the object to be detected using a density-based clustering algorithm and, from the point set of the target point cloud, extract feature information of the clustered object such as its three-dimensional detection box and height. Fig. 7 is a schematic diagram of the primary sub-module algorithm flow for radar point cloud generation and gesture behavior recognition provided by the application, and fig. 8 is a schematic diagram of the secondary sub-module algorithm flow for human body gesture and behavior recognition provided by the application.
Specifically, in the human body gesture/behavior recognition module, the physical characteristics of the millimeter wave sensor easily produce various kinds of interference such as echo reflections, so the raw point cloud, whose density is uneven, is preprocessed with a density-based clustering algorithm (such as DBSCAN, OPTICS, DENCLUE and their improved variants) to eliminate large numbers of invalid interference points; at the same time, the detected objects in the detection area can be separated, yielding each object's 3D-box information and centroid coordinates. The raw 3D point cloud is also reduced in dimension to extract the primary features for fall judgment, namely height information and radar intensity information, which fundamentally reduces the computational complexity of the algorithm and guarantees real-time performance for edge deployment.
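As an illustration of this preprocessing step, the following sketch applies scikit-learn's DBSCAN to a stand-in point cloud, discards the noise label, and derives each cluster's centroid and 3D-box envelope. The eps and min_samples values are illustrative assumptions, not parameters taken from this application.

    import numpy as np
    from sklearn.cluster import DBSCAN

    points = np.random.rand(500, 3) * [4.0, 4.0, 2.0]   # stand-in radar point cloud (x, y, z in m)

    labels = DBSCAN(eps=0.25, min_samples=8).fit_predict(points)

    for k in set(labels) - {-1}:            # label -1 marks invalid interference points to discard
        cluster = points[labels == k]
        centroid = cluster.mean(axis=0)
        box_min, box_max = cluster.min(axis=0), cluster.max(axis=0)   # 3D-box envelope
        print(f"object {k}: centroid={centroid}, height={box_max[2] - box_min[2]:.2f} m")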
After the client device segments the object data information through clustering, it can use that information, on one hand, to realize gesture and behavior recognition and judge whether an accident such as a fall has occurred, and on the other hand, to realize target recognition, multi-target positioning and tracking, and judge whether there is illegal intrusion into a safe area.
In the time dimension, the client device intercepts the target point clouds of continuous multi-frame radar signals; each frame's target point cloud is reduced in dimension into a 2D feature matrix of the 3D point cloud cluster, and these per-frame matrices are assembled into a time-axis-based 2D feature matrix. The client device then feeds this time-axis-based 2D feature matrix into a 2D CNN network for fall detection and behavior recognition.
Specifically, the client device acquires radar intensity information at different heights of the target point cloud according to its height information; based on the radar intensity information at different heights, it derives the radar vertical signal distribution intensity of the target point cloud; and from this vertical distribution intensity it obtains the current posture of the target.
In a specific embodiment, the human body gesture and behavior recognition module clusters the raw 3D point cloud to obtain the boundary envelope (3D box) of each point cloud cluster object, and thereby obtains spatial stereo information of the different objects in the region, including the aspect ratio and the corresponding vertical echo signal intensity distribution (vertical intensity profile). The echo signal intensity distribution provides the intensity of the signals received vertically by the clustered point cloud cluster at different target heights.
Referring specifically to fig. 9, fig. 9 is the schematic diagram of the two-dimensional feature matrix provided in the present application. As shown in fig. 9, the vertical profile refers to the reflected energy intensity within the three-dimensional Cartesian coordinate "box" around each target (initial default estimated 3D-box size: 75 x 65 x 190 cm). The box is divided into 19 cross sections, each 10 cm high. Through these 19 cross sections the target is split into 19 features according to height, where the value of each feature is the echo signal intensity contained in the corresponding cross section. Thus, from the two-dimensional feature matrix of fig. 9, the echo signal intensity distribution of each frame's target point cloud can be obtained.
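A minimal sketch of this feature extraction, under the stated geometry (19 cross sections of 10 cm over a 190 cm box), might look as follows; the function name and array layout are illustrative assumptions.

    import numpy as np

    def vertical_intensity_profile(points_xyz, intensity, box_height=1.9, n_sections=19):
        """points_xyz: (N, 3) points of one clustered target; intensity: (N,) echo intensities."""
        z = points_xyz[:, 2] - points_xyz[:, 2].min()              # height above the box floor (m)
        idx = np.clip((z / (box_height / n_sections)).astype(int), 0, n_sections - 1)
        profile = np.zeros(n_sections)
        np.add.at(profile, idx, intensity)                         # sum intensity per 10 cm cross section
        return profile                                             # 19 features, bottom to top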
Further, as illustrated in fig. 10, fig. 10 is a graph of vertical height versus signal strength provided herein. As is apparent from fig. 10, there is a significant difference in the echo vertical signal distribution intensity in the 3 cases of standing, sitting, and lying. Thus, by comparing fig. 9 with fig. 10, the current posture of the target can be effectively analyzed.
Further, the client device may also obtain the radar vertical distribution intensity of continuous multi-frame radar signals and derive the target's posture change by comparing how the vertical distribution intensity changes across consecutive frames.
Specifically, when the human body posture changes suddenly within a short time (a time-span threshold can be set), the 3D-box height, the centroid height and the vertical echo signal distribution intensity all change markedly and correspondingly. These physical quantities can therefore be treated as variables of human body posture and behavior change.
Considering that height alone (centroid and 3D-box) carries relatively little information, a human posture/behavior state feature matrix is constructed from this prior knowledge to represent the corresponding state at a specified moment. The feature matrix is a two-dimensional matrix: the number of rows is 19, corresponding to the 19 vertical intervals; the number of columns is the frame-count threshold w of the selected time span, taking the span from the current frame Tn to be detected back through the previous w frames to frame Tn-w as one interception unit; the value at row i, column j is the vertical echo signal intensity (vertical intensity profile) of the corresponding frame. The vertical intensity profile is normalized and then mapped onto the 2^8 = 256 levels 0-255 and stored as 8-bit integer data.
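The following sketch assembles the described 19 x w matrix from per-frame vertical intensity profiles and quantizes it to 8-bit integers as described; the frame window w = 32 is an illustrative assumption.

    import numpy as np

    def state_feature_matrix(profiles, w=32):
        """profiles: list of per-frame 19-element vertical intensity profiles, oldest first."""
        m = np.stack(profiles[-w:], axis=1)        # shape (19, w): row = vertical interval, column = frame
        m = m / max(m.max(), 1e-9)                 # normalize to [0, 1]
        return (m * 255).astype(np.uint8)          # quantize onto 2^8 levels (0-255)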
Referring specifically to fig. 11, fig. 11 is the graph of height versus time provided herein. Through the relation diagram of fig. 11, the change in target height within the preset time-span threshold is judged, thereby obtaining the target's posture change.
So far, the client device can represent the gesture/behavior state of a given target object at the current frame with a two-dimensional matrix, which is equivalent to a single-channel gray-scale image, so behavior recognition becomes equivalent to a CNN network classification task. According to application requirements, the client device can set the classification output to several modes: a simple recognition mode with just the classes "normal" and "abnormal", or a detailed classification such as standing, sitting, lying, sitting down normally, lying down normally, sitting down accidentally, falling accidentally, and so on.
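Treating the matrix as a single-channel gray-scale image, a small 2D CNN classifier along the lines described could be sketched as follows (PyTorch); the layer sizes and the two-class "normal/abnormal" head are illustrative assumptions, not the network of this application.

    import torch
    import torch.nn as nn

    class PostureNet(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((4, 4)),
            )
            self.classifier = nn.Linear(32 * 4 * 4, n_classes)  # e.g. normal / abnormal

        def forward(self, x):                    # x: (batch, 1, 19, w), values in [0, 1]
            return self.classifier(self.features(x).flatten(1))

    logits = PostureNet()(torch.rand(8, 1, 19, 32))   # 8 stand-in 19 x 32 feature matrices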
Further, the client device clusters the three-dimensional point cloud based on density, and the clustered point clouds form the target point cloud; the density-based clustering methods include, but are not limited to, DBSCAN, OPTICS, DENCLUE and their improved variants.
In other embodiments, the client device may also utilize the millimeter wave point cloud generating module to analyze the digital signal of the radar signal, so as to locate a specific position of the target point cloud in the three-dimensional point cloud, and thus extract a specific point cloud set of the target point cloud.
Specifically, as shown in fig. 12, the radar signal is an analog signal, and the point cloud generation secondary sub-module converts it from analog to digital through an ADC (analog-to-digital converter). The client device then filters out the clutter and the low-frequency DC component in the digital signal, i.e. its interference data, through a preset modal decomposer such as EMD (empirical mode decomposition) or VME (variational mode extraction).
Further, the client device extracts the range and Doppler data in the digital signal through a 2D FFT (fast Fourier transform), extracts the coordinate position and velocity of the target object to be detected through 2D CA-CFAR (cell-averaging constant false alarm rate) detection, and extracts the pose information, namely the azimuth and pitch angles of the target object, through a 3D FFT. The client device obtains the target point cloud in each frame's three-dimensional point cloud from the target's velocity and coordinate position, then superimposes the target point clouds of consecutive frames in the same rectangular coordinate system according to each frame's pose information such as azimuth and pitch angle, and compares the pose changes of the target to be detected across the continuous multi-frame radar signals.
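The range/Doppler step can be illustrated with a NumPy sketch: a 2D FFT over an ADC data cube of samples x chirps yields a range-Doppler map, on which a crude global threshold stands in for the CA-CFAR detector. The cube dimensions and the threshold factor are illustrative assumptions.

    import numpy as np

    adc = np.random.randn(128, 64) + 1j * np.random.randn(128, 64)  # stand-in: 128 samples x 64 chirps

    range_fft = np.fft.fft(adc, axis=0)                              # FFT over fast time -> range bins
    rd_map = np.fft.fftshift(np.fft.fft(range_fft, axis=1), axes=1)  # FFT over slow time -> Doppler bins
    power = np.abs(rd_map) ** 2

    # Crude global threshold standing in for the cell-averaging CFAR detector.
    detections = np.argwhere(power > 8 * power.mean())
    print(f"{len(detections)} candidate range-Doppler cells")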
The millimeter wave point cloud generation module is a core part of this application and differs fundamentally from the mainstream visible-light vision methods: it exploits the difference in millimeter wave echo intensity between the high-water-content surface skin of living organisms and low-water-content environmental objects and, combined with physical quantities such as the millimeter wave band of the actual application and the target echo distance, sets an appropriate threshold so that biological and non-biological point cloud clusters can be separated fairly accurately.
When comparing the pose changes of the targets to be detected across continuous multi-frame radar signals, multi-target positioning and tracking must also be achieved.
Specifically, as shown in fig. 13, fig. 13 is a schematic diagram of the multi-target positioning and tracking secondary sub-module algorithm flow provided in the present application.
After the client device extracts the target point cloud from the three-dimensional point cloud data with the density-based clustering algorithm, the centroid information of the target point cloud is available. The client device can predict the target's trajectory with an EKF (extended Kalman filter), match the associated cluster objects, i.e. the targets, across consecutive frames with a data correlator, and finally realize track management and target tracking.
Specifically, the multi-target positioning and tracking secondary sub-module first acquires the object centroid coordinates after clustering the raw point cloud, then performs trajectory prediction on the object point-set coordinates with the EKF, and then performs track pairing of the registered objects with a data association matching algorithm (such as NNDA, PDA, JPDA or the Hungarian algorithm).
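A minimal sketch of the predict-then-associate loop follows: a linear constant-velocity Kalman predict step standing in for the EKF, then Hungarian assignment (SciPy) between predicted track positions and newly clustered centroids. The motion model, noise value and cost metric are illustrative assumptions.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def predict(state, P, dt=0.1, q=0.01):
        """state: (6,) 3D position + velocity of one track; P: (6, 6) covariance."""
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)                 # x' = x + v * dt (constant velocity)
        return F @ state, F @ P @ F.T + q * np.eye(6)

    def associate(predicted_xyz, measured_xyz):
        """Pair predicted track positions with measured cluster centroids."""
        cost = np.linalg.norm(predicted_xyz[:, None, :] - measured_xyz[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm on Euclidean cost
        return list(zip(rows, cols))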
With continued reference to fig. 14, fig. 14 is a schematic diagram of the primary sub-module algorithm flow for extracting vital sign data from millimeter wave radar echo data provided in the present application. The radar/infrared multi-modal detection method recognizes human body gesture and behavior in the detected environment and obtains distance information physically, so its accuracy is high; paired with infrared body-temperature detection, the fused multi-modal information is more robust. The millimeter wave radar uses multiple transmit and multiple receive antennas, which greatly improves its angular resolution, so the density of the generated point cloud can be raised substantially, providing rich semantic information for algorithmic recognition and fundamentally improving its stability and accuracy.
The client device compares the Doppler maps of consecutive frames and designates the region of motion in them as the target region. In the embodiment of the application, the client device analyzes the millimeter wave radar echo signal, obtains the Doppler map of the echo signal, and jointly screens candidate targets from the range dimension and the Doppler map.
The client device acquires the target area phase of the continuous multi-frame radar signal, extracts the frequency component of the target area phase, and separates the respiratory frequency component and the heartbeat frequency component from the frequency component.
In an embodiment of the present application, the client device separates out the respiratory frequency component using a low pass filter and the heartbeat frequency component using a high pass filter.
The client device extracts the respiration rate of the target according to the respiratory frequency component and the heartbeat rate of the target according to the heartbeat frequency component.
In the present embodiment, the client device extracts the target's respiration rate from the respiratory frequency component and the target's heartbeat rate from the heartbeat frequency component using a modal decomposer such as EMD (empirical mode decomposition), VMD (variational mode decomposition) or VME (variational mode extraction).
The respiratory frequency component bandwidth and the heartbeat frequency component bandwidth are preset in the modal decomposer, so the client device can directly decompose out the respiration rate matching the respiratory band and the heartbeat rate matching the heartbeat band, which effectively improves the efficiency of frequency extraction.
In the embodiment of the application, the millimeter wave echo data handled by the vital sign monitoring module contains different frequency components, so the module's algorithm first separates the high- and low-frequency components with different band-pass filters, and then extracts the target's respiration rate and heartbeat rate with a modal decomposer such as EMD/VME (with prior values such as frequency bandwidth preset).
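As an illustration of the band separation, the following sketch filters a stand-in phase signal with SciPy Butterworth band-pass filters; the band edges (0.1-0.5 Hz for respiration, 0.8-2.0 Hz for heartbeat) and the 20 Hz frame rate are assumed prior values, not figures from this application.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 20.0                                    # assumed radar frame rate (Hz)
    t = np.arange(0, 30, 1 / fs)
    phase = np.sin(2 * np.pi * 0.3 * t) + 0.2 * np.sin(2 * np.pi * 1.2 * t)  # stand-in target-area phase

    def bandpass(x, lo, hi, fs, order=4):
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)                 # zero-phase band-pass filtering

    breathing = bandpass(phase, 0.1, 0.5, fs)    # low band -> respiratory component
    heartbeat = bandpass(phase, 0.8, 2.0, fs)    # high band -> heartbeat component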
In the embodiment of the application, the client device constructs the 2D feature matrix reasonably according to the actual situation, which reduces the large data volume of the 3D point cloud while carrying richer information than simply using a 1D point cloud; thanks to the millimeter wave sensor, the echo information has advantages the mainstream visible-light schemes lack, and more effective information such as vital signs and radial distance can be extracted from the echo; and millimeter wave detection still works normally when occluded by light, thin environmental materials (such as household cloth, wood, plastic, paper, ceramic, glass, etc.).
Specifically, the raw data the sensors collect carries no privacy information of the detected object, so the system also suits special and private scenarios (e.g. nursing homes, medical care rooms, ordinary household bedrooms and bathrooms, etc.); radar and infrared sensing work identically day and night without depending on visible light, and the millimeter wave radar is unaffected by special occlusions (such as water vapor and fog, indoor curtains, clothes, etc.); the radar sensor can acquire the radial distance of the detected object, so the sensed data carries more information and facilitates recognition and judgment; the acquired data volume is modest and can be processed into a 3D point cloud of specified density, conveniently balancing semantic richness against data complexity (redundancy) and thereby controlling the complexity and precision of data transmission, processing and computation across the whole detection and recognition system, providing a wider space of candidate solutions for engineering deployment; and the infrared sensor can assist in detecting the vital signs of living beings. Combining the radar point cloud data with the radial distance information allows human body gesture and behavior to be recognized and judged better.
By changing the sensor parameters and the processing algorithm, the client device can flexibly adjust the module's point cloud density and infrared sensing echo intensity, adapting to many different usage scenarios, including scenarios with weak light and high privacy requirements.
The above embodiments are only common cases of the present application and do not limit its technical scope, so any minor modification, equivalent change or improvement made to the above on the basis of the solution of this application still falls within the scope of its technical solution.
With continued reference to fig. 15, fig. 15 is a schematic structural diagram of an embodiment of the target recognition device provided in the present application. The target recognition device 40 includes a signal acquisition module 41, a point cloud generation module 42 and a target recognition module 43.
The signal acquisition module 41 is configured to acquire a radar signal of an area where the target is located.
A point cloud generating module 42, configured to generate a three-dimensional point cloud based on the radar signal, where each data point in the three-dimensional point cloud includes radar intensity information.
The target recognition module 43 is configured to extract a target point cloud from the three-dimensional point cloud, and analyze a behavior of the target based on a radar intensity information distribution of the target point cloud.
With continued reference to fig. 16, fig. 16 is a schematic structural diagram of another embodiment of the target recognition device provided in the present application. The target recognition device 500 of the embodiment of the present application includes a processor 51, a memory 52, an input-output device 53 and a bus 54.
The processor 51, the memory 52, and the input/output device 53 are respectively connected to the bus 54, and the memory 52 stores program data, and the processor 51 is configured to execute the program data to implement the target recognition method according to any of the above embodiments.
In the present embodiment, the processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip with signal processing capabilities. The processor 51 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The general-purpose processor may be a microprocessor, or the processor 51 may be any conventional processor or the like.
Still further, referring to fig. 17, fig. 17 is a schematic structural diagram of an embodiment of the computer storage medium provided in the present application. Program data 61 is stored in the computer storage medium 600, and when executed by a processor, the program data 61 implements the target recognition method of any of the above embodiments.
When embodiments of the present application are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored on a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely an embodiment of the present application and does not limit its patent scope. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (10)

1. A target recognition system, the target recognition system comprising: a client device, a server, and a client; wherein:
the client device is configured to acquire a radar signal of an area where a target is located and to generate a three-dimensional point cloud from the radar signal;
the client device is configured to analyze target information based on a radar intensity information distribution in the three-dimensional point cloud;
the client device is configured to send the target information to the server for storage; and
the server is configured to send the target information to the client in response to a target recognition instruction from the client.
2. The target recognition system of claim 1, wherein
the client device comprises a radar sensor, a data processing module, and a target recognition module; wherein:
the radar sensor is configured to acquire the radar signal of the area where the target is located;
the data processing module is configured to generate the three-dimensional point cloud from the radar signal of the radar sensor; and
the target recognition module is configured to analyze the radar intensity information distribution in the three-dimensional point cloud generated by the data processing module and to derive the target information from the radar intensity information distribution.
3. The target recognition system of claim 2, wherein
the client device further comprises an infrared sensor; wherein:
the infrared sensor is configured to acquire an infrared signal of the area where the target is located; and
the target recognition module is further configured to analyze the human body position of the target based on the infrared signal.
4. The target recognition system of claim 2, wherein
the radar sensor is a millimeter-wave radar sensor.
5. The target recognition system of claim 2, wherein
the target recognition module is further configured to extract a target point cloud from the three-dimensional point cloud and to analyze a behavior of the target based on a radar intensity information distribution of the target point cloud.
6. The target recognition system of claim 5, wherein
the target recognition module is further configured to analyze behavior changes of the target from radar signals of consecutive frames and to perform gesture recognition, behavior recognition, fall determination, vital sign determination, and/or human safety state monitoring on the target.
7. The target recognition system of claim 2, wherein
the target recognition module is further configured to analyze a Doppler signal to extract a respiratory frequency component and/or a heartbeat frequency component from the radar signal, and to extract the respiratory frequency of the target from the respiratory frequency component and/or the heartbeat frequency of the target from the heartbeat frequency component.
8. The target recognition system of claim 1, wherein
the client is further configured to send a control parameter to the server, the server being configured to forward the control parameter to the client device, wherein the control parameter is used to adjust a target recognition parameter of the client device.
9. A target recognition method, characterized in that the target recognition method comprises:
acquiring a radar signal of an area where a target is located;
generating a three-dimensional point cloud based on the radar signals, wherein each data point in the three-dimensional point cloud comprises radar intensity information;
and extracting a target point cloud from the three-dimensional point cloud, and analyzing the behavior of the target based on radar intensity information distribution of the target point cloud.
10. A target recognition device, comprising a memory and a processor coupled to the memory;
wherein the memory is configured to store program data, and the processor is configured to execute the program data to implement the target recognition method of claim 9.
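As a hedged illustration of the Doppler-based vital-sign extraction recited in claim 7 (the phase-based model, band limits, and function name are assumptions, not the patent's method), the respiratory and heartbeat frequencies can be read off as spectral peaks in physiologically plausible bands:

    import numpy as np

    def extract_vital_frequencies(phase: np.ndarray, fs: float):
        # phase: unwrapped slow-time Doppler phase of the range bin covering
        # the chest; fs: slow-time sampling rate in Hz (must exceed ~4 Hz).
        spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
        freqs = np.fft.rfftfreq(len(phase), d=1.0 / fs)

        def peak(lo: float, hi: float) -> float:
            band = (freqs >= lo) & (freqs <= hi)
            return float(freqs[band][np.argmax(spectrum[band])])

        respiratory = peak(0.1, 0.5)  # roughly 6-30 breaths per minute
        heartbeat = peak(0.8, 2.0)    # roughly 48-120 beats per minute
        return respiratory, heartbeat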
CN202310310134.2A 2023-03-21 2023-03-21 Target recognition system, target recognition method and target recognition device Pending CN116520315A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310310134.2A CN116520315A (en) 2023-03-21 2023-03-21 Target recognition system, target recognition method and target recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310310134.2A CN116520315A (en) 2023-03-21 2023-03-21 Target recognition system, target recognition method and target recognition device

Publications (1)

Publication Number Publication Date
CN116520315A true CN116520315A (en) 2023-08-01

Family

ID=87396609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310310134.2A Pending CN116520315A (en) 2023-03-21 2023-03-21 Target recognition system, target recognition method and target recognition device

Country Status (1)

Country Link
CN (1) CN116520315A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117250610A (en) * 2023-11-08 2023-12-19 浙江华是科技股份有限公司 Laser radar-based intruder early warning method and system
CN117250610B (en) * 2023-11-08 2024-02-02 浙江华是科技股份有限公司 Laser radar-based intruder early warning method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination