CN111007496A - Through-wall perspective method based on neural network associated radar - Google Patents

Through-wall perspective method based on neural network associated radar

Info

Publication number
CN111007496A
Authority
CN
China
Prior art keywords
data
radar
neural network
wall
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911195313.6A
Other languages
Chinese (zh)
Other versions
CN111007496B (en)
Inventor
孙宝剑
敬雯
吴龙飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Micro-Address Communication Technology Co Ltd
Original Assignee
Chengdu Micro-Address Communication Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Micro-Address Communication Technology Co Ltd
Priority to CN201911195313.6A
Publication of CN111007496A
Application granted
Publication of CN111007496B
Legal status: Active

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00: Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88: Radar or analogous systems specially adapted for specific applications
    • G01S 13/887: Radar or analogous systems for detection of concealed objects, e.g. contraband or weapons
    • G01S 13/888: Radar or analogous systems for detection of concealed objects through walls
    • G01S 13/89: Radar or analogous systems for mapping or imaging
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Electromagnetism (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a through-wall perspective method based on a neural network associated radar, comprising the following steps: radar detection information of a sample space is collected through a wall, converted into a two-dimensional array, and stored as sample data; image information of the sample space is collected, converted into binarized picture data, and stored as label data; the sample data and the label data are fed into a convolutional neural network to obtain a trained data model; the space to be measured is then detected with the radar through the wall, the real-time radar detection data are fed into the data model, and an indoor binarized image is obtained from the model. The self-learning training capability of the convolutional neural network is used to recognize weak differences between received signals, so as to tell whether a human body behind the wall is standing, walking, or lying down, and where large furniture such as sofas and beds is placed.

Description

Through-wall perspective method based on neural network associated radar
Technical Field
The invention belongs to the field of radar perspective methods, and particularly relates to a through-wall perspective method based on a neural network associated radar.
Background
Existing through-wall radars used in operations such as counter-terrorism can penetrate walls about 40-50 cm thick. However, they yield only approximate human position information and whether a person is standing; they cannot recover the person's current posture or the positions of large furniture placed in the room.
Disclosure of Invention
The invention provides a through-wall perspective method based on a neural network associated radar, which solves the problem that existing radars obtain only rough human position information and whether a person is standing, and cannot obtain the person's current posture or the placement of large furniture in the room.
The technical scheme adopted by the invention is as follows:
the through-wall perspective method based on the neural network associated radar comprises the following steps:
S1, collecting radar detection information of a sample space through a wall, converting the radar detection information into a two-dimensional array, and storing the two-dimensional array as sample data;
S2, collecting image information of the sample space, converting the image information into binarized picture data, and storing the binarized picture data as label data;
S3, feeding the sample data and the label data into a convolutional neural network to obtain a trained data model;
S4, detecting the space to be measured with the radar through a wall, obtaining real-time radar detection data of the space to be measured, feeding the real-time radar detection data into the data model, and obtaining an indoor binarized image from the data model.
The two-dimensional array converted from the radar detection information of a detection space, together with the binarized image of that space, serves as the basic learning material. The self-learning training capability of the convolutional neural network is used to build a data model. The data model distinguishes differences between received radar detection signals and, from the binarized-image values those signals correspond to in the model, forms a binarized image of the detected space. From the binarized image one can tell whether a human body behind the wall is standing, walking, or lying down, and where large furniture such as sofas and beds is placed. This solves the problem that existing radars obtain only rough human position information and whether a person is standing, and cannot obtain the person's current posture or the placement of large furniture in the room.
Further, the radar detection information is converted into a two-dimensional array as follows: the spatial data detected by the radar are projected onto a plane, and each point on the plane is recorded as (Xi, Yi). The value returned to the receiver at each (Xi, Yi) in an empty space is taken as that point's maximum; the value at each (Xi, Yi) when the wall cannot be penetrated is taken as that point's minimum. Every point then has a known maximum and minimum.
Furthermore, each pixel point takes a value in the range 0-255 (a sketch of this scaling follows).
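As an illustration of this per-point scaling (the following is not part of the patent text), here is a minimal numpy sketch; the 64x64 grid, the variable names, and the random stand-ins for the calibration scans are all assumptions:

```python
import numpy as np

# Hypothetical calibration maps: per-point maxima recorded against an empty
# space, per-point minima recorded when the wall blocks the signal entirely.
max_map = np.random.uniform(0.8, 1.0, size=(64, 64))  # stand-in for empty-room scan
min_map = np.random.uniform(0.0, 0.2, size=(64, 64))  # stand-in for blocked scan

def to_uint8_array(echo: np.ndarray, lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Scale the echo value at each point (Xi, Yi) into 0-255 using that
    point's calibrated minimum and maximum."""
    scaled = (echo - lo) / np.maximum(hi - lo, 1e-9)  # guard against divide-by-zero
    return np.clip(scaled * 255.0, 0, 255).astype(np.uint8)

echo = np.random.uniform(0.0, 1.0, size=(64, 64))  # one simulated radar scan
sample = to_uint8_array(echo, min_map, max_map)
print(sample.shape, sample.dtype)  # (64, 64) uint8
```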
Further, steps S1-S3 are repeated to collect a large number of training samples and labels, a data model for radar through-wall detection is trained with the convolutional neural network, and the distribution of large obstacles in the room is then inferred with the data model.
Further, after the image information of the sample space is collected, it is processed as follows: the image information is blurred, the main body part of the image information is preserved, and the image information with the main body preserved is binarized (see the sketch below).
Further, the main body part is the image region that reflects the outlines of the objects in the image.
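As a concrete (though entirely illustrative) sketch of this blur-then-binarize label preparation, assuming OpenCV is available; the kernel size and threshold are arbitrary choices, not values from the patent:

```python
import cv2
import numpy as np

def make_label(image_bgr: np.ndarray, ksize: int = 21, thresh: int = 127) -> np.ndarray:
    """Blur away fine detail so only the main body (object outlines) survives,
    then binarize every pixel to 0 or 255."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (ksize, ksize), 0)  # suppress fine detail
    _, label = cv2.threshold(blurred, thresh, 255, cv2.THRESH_BINARY)
    return label

room_photo = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in photo
label = make_label(room_photo)  # binary image used as the training label
```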
The invention has the following advantages and beneficial effects:
1. The method uses the self-learning training capability of the convolutional neural network to distinguish weak differences between received signals, and thereby tells whether a human body behind the wall is standing, walking, or lying down, and where large furniture such as sofas and beds is placed;
2. The radar detection principle of the invention is as follows: radio waves are emitted, they are reflected when they encounter an obstacle, and the position of the obstacle is computed from the characteristics of the reflected waves (see the sketch below).
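For concreteness, the arithmetic behind this echo-timing principle: a wave that returns after a round-trip delay has traveled to the obstacle and back, so the one-way distance is d = c * dt / 2. A tiny sketch, where the 30 ns delay is an assumed example value:

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_delay(round_trip_seconds: float) -> float:
    """One-way distance to the reflector: the wave travels out and back,
    so halve the round-trip path."""
    return C * round_trip_seconds / 2.0

print(range_from_delay(30e-9))  # ~4.5 m for an assumed 30 ns echo delay
```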
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a process flow diagram of the present invention.
Fig. 2 is a diagram of a two-dimensional array of radar scans according to the present invention.
FIG. 3 is a binarized spatial map according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, and/or groups thereof.
Example 1:
As shown in Figs. 1, 2 and 3, this embodiment provides a through-wall perspective method based on a neural network associated radar.
When a through-wall radar is used to explore the space behind a wall, a radio wave emitted from the source is reflected when it meets, say, the surface of a cabinet, while other waves, not blocked by the cabinet, travel farther before reflecting, for example off a desk lamp. The returned wave data detected each time therefore differ according to how the room behind the wall is arranged.
The whole method comprises three processes: data collection, model training, and actual inference.
Collecting data: radar detection information of a sample space is collected through the wall, converted into a two-dimensional array, and stored as sample data.
Image information of the sample space is collected, converted into binarized picture data, and stored as label data. Image binarization sets the gray value of each pixel in an image to 0 or 255, so that the whole image shows a clear black-and-white effect. Binary images play an important role in digital image processing: binarization greatly reduces the amount of data in the image, which makes it possible to highlight the contour of the target.
Training the model: the sample data and the label data are fed into a convolutional neural network to obtain a trained data model.
Actual inference: the space to be measured is detected with the radar through the wall, real-time radar detection data of that space are obtained and fed into the data model, and an indoor binarized image is obtained from the model.
In practice, the value returned to the receiver at each point (Xi, Yi) in an empty room at a depth of 4 m (any default depth may be set as required) is taken as that point's maximum. The value at each (Xi, Yi) when the wall is too thick to penetrate is taken as that point's minimum. Every point then has a known maximum and minimum (by analogy with picture pixels, each point takes a value of 0-255).
A room can then be detected with the radar and the received data output directly as an x by y array of information. This array is fed into the neural network as one sample.
Next, a camera captures an actual picture of the room, and the picture is blurred to a certain degree so that large furniture, people, and similar objects remain distinguishable while fine detail is smeared away. Binarization then yields a picture of the approximate obstacles. This binarized obstacle image serves as the training label for the neural network (the pairing is sketched below).
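To make the pairing of one radar sample with its binarized label concrete, here is a small sketch; the array shapes, file names, and the choice of .npy storage are assumptions for illustration, not details specified by the patent:

```python
import numpy as np

def save_training_pair(index: int, radar_scan: np.ndarray, label_img: np.ndarray) -> None:
    """Persist one (sample, label) pair so the training step can load it later."""
    assert radar_scan.shape == label_img.shape, "sample and label must align pixel-for-pixel"
    np.save(f"sample_{index:04d}.npy", radar_scan)
    np.save(f"label_{index:04d}.npy", label_img)

# Stand-ins: one normalized 64x64 radar scan and the binarized room photo.
scan = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
label = np.random.randint(0, 2, (64, 64), dtype=np.uint8) * 255
save_training_pair(0, scan, label)
```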
A Convolutional Neural Network (CNN) is a class of feedforward neural network that contains convolution computations and has a deep structure; CNNs are among the representative algorithms of deep learning. They have a feature-learning capability and can perform shift-invariant classification of input information according to their hierarchical structure, for which they are also called "shift-invariant artificial neural networks" (SIANN). A minimal training sketch follows.
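The patent does not specify a network architecture, so the following is only a minimal PyTorch sketch of steps S3 and S4: a small fully convolutional network mapping a radar array to a same-size occupancy map, one training step, and a thresholded inference pass. All layer sizes and hyperparameters are illustrative assumptions:

```python
import torch
import torch.nn as nn

class RadarToImageNet(nn.Module):
    """Minimal fully convolutional net: radar array in, per-pixel occupancy logits out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # logits for "occupied"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = RadarToImageNet()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-ins for the collected pairs.
x = torch.rand(8, 1, 64, 64)                  # batch of normalized radar scans
y = (torch.rand(8, 1, 64, 64) > 0.5).float()  # batch of binarized label images
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()

# Inference (step S4): threshold the sigmoid output to recover a binary image.
with torch.no_grad():
    binary = (torch.sigmoid(model(x[:1])) > 0.5).to(torch.uint8) * 255
```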
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (6)

1. A through-wall perspective method based on a neural network associated radar, characterized by comprising the following steps:
S1, collecting radar detection information of a sample space through a wall, converting the radar detection information into a two-dimensional array, and storing the two-dimensional array as sample data;
S2, collecting image information of the sample space, converting the image information into binarized picture data, and storing the binarized picture data as label data;
S3, feeding the sample data and the label data into a convolutional neural network to obtain a trained data model;
S4, detecting the space to be measured with the radar through a wall, obtaining real-time radar detection data of the space to be measured, feeding the real-time radar detection data into the data model, and obtaining an indoor binarized image from the data model.
2. The through-wall perspective method based on the neural network associated radar as claimed in claim 1, wherein the radar detection information is converted into a two-dimensional array as follows: the spatial data detected by the radar are projected onto a plane, and each point on the plane is recorded as (Xi, Yi); the value returned to the receiver at each (Xi, Yi) is taken as that point's maximum; the value at each (Xi, Yi) when the wall cannot be penetrated is taken as that point's minimum; every point then has a known maximum and minimum.
3. The through-wall perspective method based on the neural network associated radar as claimed in claim 2, wherein each pixel point takes a value in the range 0-255.
4. The through-wall perspective method based on the neural network associated radar as claimed in claim 1, wherein steps S1-S3 are repeated to collect a large number of training samples and labels, a data model for radar through-wall detection is trained with the convolutional neural network, and the distribution of large obstacles in the room is then inferred with the data model.
5. The through-wall perspective method based on the neural network associated radar as claimed in claim 1, wherein, after the image information of the sample space is collected, it is further processed as follows: the image information is blurred, the main body part of the image information is preserved, and the image information with the main body preserved is binarized.
6. The through-wall perspective method based on the neural network associated radar as claimed in claim 1, wherein the main body part is the image region that reflects the outlines of the objects in the image.
CN201911195313.6A, priority date 2019-11-28, filing date 2019-11-28: Through-wall perspective method based on neural network associated radar. Status: Active. Granted as CN111007496B.

Priority Applications (1)

Application Number: CN201911195313.6A (granted as CN111007496B) · Priority date: 2019-11-28 · Filing date: 2019-11-28 · Title: Through-wall perspective method based on neural network associated radar

Publications (2)

Publication Number · Publication Date
CN111007496A · 2020-04-14
CN111007496B · 2022-11-04

Family

ID=70112257

Family Applications (1)

Application Number: CN201911195313.6A (granted as CN111007496B) · Status: Active · Priority date: 2019-11-28 · Filing date: 2019-11-28 · Title: Through-wall perspective method based on neural network associated radar

Country Status (1)

Country Link
CN (1) CN111007496B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160019458A1 (en) * 2014-07-16 2016-01-21 Deep Learning Analytics, LLC Systems and methods for recognizing objects in radar imagery
CN106981080A (en) * 2017-02-24 2017-07-25 东华大学 Night unmanned vehicle scene depth method of estimation based on infrared image and radar data
CN106874961A (en) * 2017-03-03 2017-06-20 北京奥开信息科技有限公司 A kind of indoor scene recognition methods using the very fast learning machine based on local receptor field
CN107169435A (en) * 2017-05-10 2017-09-15 天津大学 A kind of convolutional neural networks human action sorting technique based on radar simulation image
CN107862293A (en) * 2017-09-14 2018-03-30 北京航空航天大学 Radar based on confrontation generation network generates colored semantic image system and method
CN108229404A (en) * 2018-01-09 2018-06-29 东南大学 A kind of radar echo signal target identification method based on deep learning
CN108920993A (en) * 2018-03-23 2018-11-30 武汉雷博合创电子科技有限公司 A kind of pedestrian's gesture recognition method and system based on radar and multiple networks fusion
US20190294966A1 (en) * 2018-03-26 2019-09-26 Cohda Wireless Pty Ltd. Systems and methods for automatically training neural networks
CN108776336A (en) * 2018-06-11 2018-11-09 电子科技大学 A kind of adaptive through-wall radar static human body object localization method based on EMD
CN109270525A (en) * 2018-12-07 2019-01-25 电子科技大学 Through-wall radar imaging method and system based on deep learning
CN109597065A (en) * 2018-12-11 2019-04-09 湖南华诺星空电子技术有限公司 A kind of false alarm rejection method, apparatus for through-wall radar detection
CN109948532A (en) * 2019-03-19 2019-06-28 桂林电子科技大学 ULTRA-WIDEBAND RADAR human motion recognition method based on depth convolutional neural networks
CN110146855A (en) * 2019-06-11 2019-08-20 北京无线电测量研究所 Radar Intermittent AF panel thresholding calculation method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIAN WAN et al.: "LPI Radar Waveform Recognition Based on", Symmetry *
HAN Jianghong et al.: "Visual positioning algorithm for pedestrians in underground tunnels based on deep learning", Journal of Computer Applications *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111537996A (en) * 2020-06-02 2020-08-14 西安石油大学 Through-wall radar imaging method based on convolutional neural network

Also Published As

Publication number Publication date
CN111007496B (en) 2022-11-04

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant