CN111007496B - Through-wall perspective method based on neural network associated radar - Google Patents

Through-wall perspective method based on neural network associated radar

Info

Publication number
CN111007496B
CN111007496B CN201911195313.6A CN 111007496 B
Authority
CN
China
Prior art keywords
data
radar
neural network
wall
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911195313.6A
Other languages
Chinese (zh)
Other versions
CN111007496A (en)
Inventor
孙宝剑
敬雯
吴龙飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Micro Address Communication Technology Co ltd
Original Assignee
Chengdu Micro Address Communication Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Micro Address Communication Technology Co ltd filed Critical Chengdu Micro Address Communication Technology Co ltd
Priority to CN201911195313.6A priority Critical patent/CN111007496B/en
Publication of CN111007496A publication Critical patent/CN111007496A/en
Application granted granted Critical
Publication of CN111007496B publication Critical patent/CN111007496B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/887 Radar or analogous systems specially adapted for specific applications for detection of concealed objects, e.g. contraband or weapons
    • G01S13/888 Radar or analogous systems specially adapted for specific applications for detection of concealed objects, e.g. contraband or weapons, through wall detection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Abstract

The invention discloses a through-wall perspective method based on a neural-network-associated radar, comprising the following steps: collecting, through a wall, radar detection information of a sample space, converting it into a two-dimensional array, and storing the array as sample data; collecting image information of the sample space, converting it into binarized picture data, and storing that data as label data; feeding the sample data and the label data into a convolutional neural network to obtain a trained data model; and detecting a space to be measured with the radar through the wall, obtaining real-time radar detection data, feeding those data into the data model, and obtaining an indoor binary image from the model. The self-learning training capability of the convolutional neural network is used to recognize weak differences among received signals, so that the action posture of a human body behind the wall (standing, walking, or lying) and the placement positions of large furniture such as sofas and beds can be distinguished.

Description

Through-wall perspective method based on neural network associated radar
Technical Field
The invention belongs to the field of radar perspective methods, and particularly relates to a through-wall perspective method based on a neural-network-associated radar.
Background
Existing through-wall radars used in operations such as counter-terrorism can penetrate walls about 40-50 cm thick. However, they provide only approximate body-position information and whether a person is standing; they cannot reveal the person's current posture or the placement positions of large furniture in the room.
Disclosure of Invention
The invention provides a through-wall perspective method based on a neural-network-associated radar, solving the problem that existing radars obtain only rough human-body position information and whether a person is standing, and cannot obtain the person's current posture or the placement positions of large furniture in the room.
The technical scheme adopted by the invention is as follows:
the through-wall perspective method based on the neural network associated radar comprises the following steps:
S1, collecting, through a wall, radar detection information of a sample space, converting the radar detection information into a two-dimensional array, and storing the array as sample data;
S2, collecting image information of the sample space, converting the image information into binarized picture data, and storing the binarized picture data as label data;
S3, feeding the sample data and the label data into a convolutional neural network to obtain a trained data model;
S4, detecting the space to be measured with the radar through the wall, obtaining real-time radar detection data of that space, feeding those data into the data model, and obtaining an indoor binary image from the model.
A two-dimensional array converted from the radar detection information of a detection space, together with a binary image of that space, serves as the basic learning material. A data model is built using the self-learning training capability of the convolutional neural network. The model distinguishes differences among the received radar detection signals and maps those signals to the binary image values learned in the model, forming a binary image of the detection space. From this binary image, the action posture of a human body behind the wall (standing, walking, or lying) and the placement positions of large furniture such as sofas and beds can be distinguished. This solves the problem that existing radars obtain only rough human-body position information and whether a person is standing, and cannot obtain the person's posture or the placement positions of large furniture in the room.
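Steps S1-S4 can be sketched end to end as follows. This is a minimal illustration, not the patent's implementation: every function name, the 64x64 grid size, and the dummy "model" are assumptions.

```python
import numpy as np

# Hypothetical stand-ins for the hardware/IO described in S1-S4; names,
# shapes, and the trivial "model" are assumptions, not part of the patent.
def scan_through_wall(h=64, w=64):
    """S1 stand-in: one through-wall radar scan as a 2-D array."""
    return np.random.rand(h, w)

def capture_binary_label(h=64, w=64):
    """S2 stand-in: binarized camera image of the same space (0 or 255)."""
    return (np.random.rand(h, w) > 0.5).astype(np.uint8) * 255

def train_model(samples, labels):
    """S3 placeholder: a real system would train a CNN here."""
    return lambda scan: labels[0]  # dummy model: always returns one label

samples = [scan_through_wall() for _ in range(8)]    # S1: sample data
labels = [capture_binary_label() for _ in range(8)]  # S2: label data
model = train_model(samples, labels)                 # S3: trained model
binary_image = model(scan_through_wall())            # S4: indoor binary image
```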
Further, the method for converting the radar detection information into a two-dimensional array is as follows: the spatial data detected by the radar are projected onto a plane, and each point on the plane is recorded as (Xi, Yi). The value returned to the receiver for each (Xi, Yi) is taken as that point's maximum value; the value of each (Xi, Yi) when the wall cannot be penetrated is taken as that point's minimum value. The obtainable maximum and minimum of every point are thereby established.
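The per-point calibration just described can be sketched as below; the 4x4 grid size, value ranges, and synthetic scans are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch of the per-point (Xi, Yi) calibration: record one scan of
# an empty room (maximum return) and one against an impenetrable wall
# (minimum return), then keep live readings inside that range.
rng = np.random.default_rng(0)
open_room = rng.uniform(0.6, 1.0, size=(4, 4))  # empty room: strongest returns
blocked = rng.uniform(0.0, 0.2, size=(4, 4))    # wall not penetrated: weakest

point_max = open_room   # maximum obtainable value at each (Xi, Yi)
point_min = blocked     # minimum obtainable value at each (Xi, Yi)

def clamp_to_range(live_scan):
    """Keep each point's live reading inside its calibrated [min, max]."""
    return np.clip(live_scan, point_min, point_max)

grid = clamp_to_range(rng.uniform(-0.1, 1.1, size=(4, 4)))
```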
Further, each pixel takes a value in the range 0-255.
Further, steps S1 to S3 are repeated to collect a large number of training samples and labels, a data model for radar through-wall detection is trained with the convolutional neural network, and the model is then used to infer the distribution of large obstacles in the room.
Further, after the image information of the sample space is collected, it is additionally processed as follows: the image is blurred, the main body portion of the image is retained, and the image with the retained main body portion is binarized.
Further, the main body portion is the part of the image reflecting the outlines of objects.
The invention has the following advantages and beneficial effects:
1. The method uses the self-learning training capability of the convolutional neural network to distinguish weak differences among received signals, and thereby distinguishes the action posture of a human body behind the wall (standing, walking, or lying) and the placement positions of large furniture such as sofas and beds.
2. The radar detection principle of the invention is as follows: radio waves are emitted and reflected when they encounter an obstacle, and the obstacle's position information is calculated from the characteristics of the reflected waves.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a process flow diagram of the present invention.
Fig. 2 is a diagram of a two-dimensional array of radar scans according to the present invention.
FIG. 3 is a spatial map of the binarization of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, and/or groups thereof.
Example 1:
As shown in Figs. 1, 2 and 3, this embodiment provides a through-wall perspective method based on a neural-network-associated radar.
When a through-wall radar is used to explore the space behind a wall, consider that a radio wave emitted from the source is reflected when it meets, say, the surface of a cabinet, while other waves travel farther before being reflected because no cabinet blocks them, for example after hitting a desk lamp. The returned wave data detected on each scan therefore differ according to the layout of the room behind the wall.
The whole method comprises three processes: data collection, model training, and actual inference.
Collecting data: radar detection information of a sample space is collected through the wall, converted into a two-dimensional array, and stored as sample data.
Image information of the sample space is collected, converted into binarized picture data, and stored as label data. Image binarization is the process of setting the gray value of each pixel to either 0 or 255, so that the whole image shows a clear black-and-white effect. Binary images play a very important role in digital image processing: binarization greatly reduces the amount of data in the image and makes it possible to highlight the contour of a target.
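Binarization as described takes only a few lines; the threshold of 128 is an assumed midpoint, not specified by the patent.

```python
import numpy as np

def binarize(gray, threshold=128):
    """Set each pixel to 0 or 255 depending on the threshold."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

gray = np.array([[10, 200], [130, 90]], dtype=np.uint8)
binary = binarize(gray)
print(binary)
# [[  0 255]
#  [255   0]]
```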
Training the model: the sample data and the label data are fed into a convolutional neural network to obtain a trained data model.
The space to be measured is then detected with the radar through the wall, real-time radar detection data of that space are obtained and fed into the data model, and an indoor binary image is produced by the model.
Actual inference: in an open room with a depth of 4 m (any default value may be set as required), the value returned to the receiver at each point (Xi, Yi) is taken as that point's maximum. The value of each (Xi, Yi) when the wall is too thick to penetrate is taken as that point's minimum. The obtainable maximum and minimum of each point are thus established (by analogy with a picture, each pixel takes a value of 0-255).
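Given the calibrated per-point minimum and maximum, a live reading can be mapped onto the 0-255 pixel range as sketched below; the function name and the sample values are assumptions.

```python
import numpy as np

def scale_to_pixels(scan, point_min, point_max):
    """Map each point's return into 0-255 using its calibrated range."""
    span = np.maximum(point_max - point_min, 1e-9)  # guard divide-by-zero
    norm = np.clip((scan - point_min) / span, 0.0, 1.0)
    return (norm * 255).astype(np.uint8)

pmin = np.zeros((2, 2))
pmax = np.ones((2, 2))
scan = np.array([[0.0, 0.5], [1.0, 2.0]])  # 2.0 exceeds the max and is clipped
px = scale_to_pixels(scan, pmin, pmax)
print(px)
# [[  0 127]
#  [255 255]]
```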
The radar can then detect a room and output the received data directly as an x × y array, which is passed into the neural network as a sample.
Next, a camera captures an actual picture of the room, which is blurred to a certain degree so that large furniture, people, and similar objects remain distinguishable while fine detail is smoothed away. The image is then binarized to obtain an approximate obstacle image, which serves as the training label for the neural network.
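The blur-then-binarize labeling step can be sketched as follows; the box (mean) filter and the threshold of 128 are assumed choices standing in for the unspecified blurring.

```python
import numpy as np

def box_blur(img, k=3):
    """Mean filter: smooths fine detail while keeping large shapes."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def make_label(photo, threshold=128):
    """Blur, then binarize to 0/255 to get the approximate obstacle image."""
    return np.where(box_blur(photo) >= threshold, 255, 0).astype(np.uint8)

photo = np.zeros((6, 6), dtype=np.uint8)
photo[1:5, 1:5] = 255  # a large bright "object" on a dark background
label = make_label(photo)
```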
Convolutional Neural Networks (CNNs) are a class of feedforward neural networks that include convolution computations and have a deep structure; they are among the representative algorithms of deep learning. CNNs have representation-learning capability and can perform shift-invariant classification of input information according to their hierarchical structure, and are therefore also called "Shift-Invariant Artificial Neural Networks" (SIANN).
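The patent does not specify a network architecture, so as a minimal illustration of the convolution operation at the heart of a CNN (not of the patent's model), a single "valid" cross-correlation followed by ReLU can be written as:

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D cross-correlation: the core operation of a conv layer."""
    kh, kw = kernel.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise non-linearity applied after each convolution."""
    return np.maximum(x, 0)

x = np.arange(16.0).reshape(4, 4)  # toy 4x4 "radar grid"
edge = np.array([[-1.0, 1.0]])     # horizontal difference kernel
feature_map = relu(conv2d(x, edge))
```

A real model would stack many such layers with learned kernels; this shows only the building block.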
The embodiments above further describe the objects, technical solutions and advantages of the invention in detail. It should be understood that they are only examples of the invention and are not intended to limit its scope; any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the invention shall fall within the scope of protection of the invention.

Claims (5)

1. The through-wall perspective method based on the neural network associated radar is characterized by comprising the following steps of:
s1, collecting radar detection information of a sample space by a partition wall, converting the radar detection information into a two-dimensional array, and storing the two-dimensional array as sample data;
S2, collecting image information of a sample space, converting the image information into binarized picture data, and storing the binarized picture data as label data;
s3, bringing the sample data and the label data into a convolutional neural network to obtain a trained data model;
s4, detecting the space to be detected by using a radar partition wall, obtaining real-time radar detection data of the space to be detected, bringing the real-time radar detection data into a data model, and obtaining an indoor binary image through the data model;
the method for converting the radar detection information into the two-dimensional array comprises the following steps: reflecting the spatial data detected by the radar in a plane form, recording each point on the plane as (Xi, Yi), and taking the value of each (Xi, Yi) returned to the receiver as the maximum value of that point; taking the value of each (Xi, Yi) when the wall cannot be penetrated as the minimum value of that point; thereby obtaining the maximum and minimum values obtainable at each point.
2. The through-wall perspective method based on neural network associated radar as claimed in claim 1, wherein: each pixel takes on a value of 0-255.
3. The through-wall perspective method based on neural network associated radar as claimed in claim 1, wherein: and (4) repeatedly adopting the steps from the step S1 to the step S3 to collect a large number of training samples and labels, training a data model for radar wall penetration detection by using a convolutional neural network, and then deducing the distribution condition of large obstacles in the house by using the data model.
4. The through-wall perspective method based on neural network associated radar as claimed in claim 1, wherein: after the image information of the sample space is collected, the following processing is also carried out: and carrying out fuzzy processing on the image information, reserving a main body part in the image information, and carrying out binarization processing on the image information reserved with the main body part.
5. The through-wall perspective method based on the neural network associated radar as claimed in claim 4, wherein: the main body portion is an image portion of the image reflecting the contours of the object.
CN201911195313.6A 2019-11-28 2019-11-28 Through-wall perspective method based on neural network associated radar Active CN111007496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911195313.6A CN111007496B (en) 2019-11-28 2019-11-28 Through-wall perspective method based on neural network associated radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911195313.6A CN111007496B (en) 2019-11-28 2019-11-28 Through-wall perspective method based on neural network associated radar

Publications (2)

Publication Number Publication Date
CN111007496A CN111007496A (en) 2020-04-14
CN111007496B true CN111007496B (en) 2022-11-04

Family

ID=70112257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911195313.6A Active CN111007496B (en) 2019-11-28 2019-11-28 Through-wall perspective method based on neural network associated radar

Country Status (1)

Country Link
CN (1) CN111007496B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111537996A (en) * 2020-06-02 2020-08-14 西安石油大学 Through-wall radar imaging method based on convolutional neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106981080A (en) * 2017-02-24 2017-07-25 东华大学 Night unmanned vehicle scene depth method of estimation based on infrared image and radar data
CN108776336A (en) * 2018-06-11 2018-11-09 电子科技大学 A kind of adaptive through-wall radar static human body object localization method based on EMD
CN110146855A (en) * 2019-06-11 2019-08-20 北京无线电测量研究所 Radar Intermittent AF panel thresholding calculation method and device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9978013B2 (en) * 2014-07-16 2018-05-22 Deep Learning Analytics, LLC Systems and methods for recognizing objects in radar imagery
CN106874961A (en) * 2017-03-03 2017-06-20 北京奥开信息科技有限公司 A kind of indoor scene recognition methods using the very fast learning machine based on local receptor field
CN107169435B (en) * 2017-05-10 2021-07-20 天津大学 Convolutional neural network human body action classification method based on radar simulation image
CN107862293B (en) * 2017-09-14 2021-05-04 北京航空航天大学 Radar color semantic image generation system and method based on countermeasure generation network
CN108229404B (en) * 2018-01-09 2022-03-08 东南大学 Radar echo signal target identification method based on deep learning
CN108920993B (en) * 2018-03-23 2022-08-16 武汉雷博合创电子科技有限公司 Pedestrian attitude identification method and system based on radar and multi-network fusion
EP3547215A1 (en) * 2018-03-26 2019-10-02 Cohda Wireless Pty Ltd. Systems and methods for automatically training neural networks
CN109270525B (en) * 2018-12-07 2020-06-30 电子科技大学 Through-wall radar imaging method and system based on deep learning
CN109597065B (en) * 2018-12-11 2022-09-09 湖南华诺星空电子技术有限公司 False alarm suppression method and device for through-wall radar detection
CN109948532A (en) * 2019-03-19 2019-06-28 桂林电子科技大学 ULTRA-WIDEBAND RADAR human motion recognition method based on depth convolutional neural networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106981080A (en) * 2017-02-24 2017-07-25 东华大学 Night unmanned vehicle scene depth method of estimation based on infrared image and radar data
CN108776336A (en) * 2018-06-11 2018-11-09 电子科技大学 A kind of adaptive through-wall radar static human body object localization method based on EMD
CN110146855A (en) * 2019-06-11 2019-08-20 北京无线电测量研究所 Radar Intermittent AF panel thresholding calculation method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LPI Radar Waveform Recognition Based on; Jian Wan et al.; Symmetry; 2019-05-27; Vol. 5 (No. 11); full text *
Deep-learning-based visual positioning algorithm for pedestrians in underground mine tunnels; 韩江洪 et al.; Journal of Computer Applications; 2019-03-10; Vol. 39 (No. 3); full text *

Also Published As

Publication number Publication date
CN111007496A (en) 2020-04-14

Similar Documents

Publication Publication Date Title
CN111543902B (en) Floor cleaning method and device, intelligent cleaning equipment and storage medium
CN111568314B (en) Cleaning method and device based on scene recognition, cleaning robot and storage medium
CN109670532B (en) Method, device and system for identifying abnormality of biological organ tissue image
CN103810478B (en) A kind of sitting posture detecting method and device
CN108830144B (en) Lactating sow posture identification method based on improved Faster-R-CNN
CN109394229A (en) A kind of fall detection method, apparatus and system
CN111643010B (en) Cleaning robot control method and device, cleaning robot and storage medium
CN107171872B (en) User behavior prediction method in smart home
CN114862837A (en) Human body security check image detection method and system based on improved YOLOv5s
CN111643017B (en) Cleaning robot control method and device based on schedule information and cleaning robot
CN106559749A (en) A kind of multiple target passive type localization method based on radio frequency tomography
CN113537175B (en) Same-fence swinery average weight estimation method based on computer vision
CN111007496B (en) Through-wall perspective method based on neural network associated radar
CN111401215A (en) Method and system for detecting multi-class targets
CN110490931A (en) Orbit generation method and device, storage medium and electronic device
CN110348434A (en) Camera source discrimination method, system, storage medium and calculating equipment
Aydemir et al. Exploiting and modeling local 3d structure for predicting object locations
CN110188179B (en) Voice directional recognition interaction method, device, equipment and medium
CN116416518A (en) Intelligent obstacle avoidance method and device
Badeka et al. Harvest crate detection for grapes harvesting robot based on YOLOv3 model
Katayama et al. GAN-based color correction for underwater object detection
Xiao et al. Multi-view tracking, re-id, and social network analysis of a flock of visually similar birds in an outdoor aviary
CN110532909B (en) Human behavior identification method based on three-dimensional UWB positioning
CN115291184B (en) Attitude monitoring method combining millimeter wave radar and deep learning
CN112070035A (en) Target tracking method and device based on video stream and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant