CN114740472A - Scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method and system - Google Patents

Scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method and system

Info

Publication number
CN114740472A
Authority
CN
China
Prior art keywords
dimensional
neural network
network model
column vector
range profile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210312335.1A
Other languages
Chinese (zh)
Inventor
罗成高
梁传英
邓彬
刘康
王宏强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202210312335.1A
Publication of CN114740472A
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/904SAR modes
    • G01S13/9043Forward-looking SAR
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/90Lidar systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application relates to a scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method and system. The method comprises the following steps: acquiring one-dimensional range profiles with a single-transmitting and single-receiving terahertz radar, acquiring three-dimensional depth images with a time-of-flight camera, and training a neural network model with a large number of one-dimensional range profiles as input and the corresponding three-dimensional depth images as output to obtain a trained neural network model; and acquiring a one-dimensional range profile of the scene to be detected, inputting it into the trained neural network model, and obtaining a three-dimensional image of the scene to be detected from the output of the trained neural network model. Because only a single-transmitting and single-receiving single-channel terahertz radar is used, the invention avoids the use of an antenna array and wavefront modulation and realizes scanning-free single-channel forward-looking three-dimensional imaging without relying on aperture accumulation or relative motion, which greatly simplifies the system and reduces cost. A forward-looking three-dimensional imaging algorithm based on deep learning is provided, which improves the imaging efficiency of the system and can meet the requirements of high-resolution, high-frame-rate imaging.

Description

Scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method and system
Technical Field
The application relates to the technical field of terahertz radar three-dimensional imaging, and in particular to a scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method and system.
Background
The term "terahertz" formally appeared in 1970 and refers to electromagnetic waves with frequencies of 0.1-10 THz. Terahertz waves lie in the transition band from electronics to optics. Compared with microwave radar, terahertz radar has a shorter wavelength, a larger bandwidth and extremely high spatial resolution, so its imaging resolution is extremely high and target characteristics can be finely depicted.
Classical radar imaging techniques are generally based on the principles of tomography and range-Doppler and rely on a virtual synthetic aperture formed by relative motion between the target and the radar. In forward-looking and staring imaging, such as aperture-coded imaging and electromagnetic-vortex imaging, rich equivalent illumination patterns are obtained mainly by modulating the antenna pattern and the wavefront of the electromagnetic wave, so that the target information in the echoes is enriched and super-resolution imaging can be performed accordingly. However, these methods either require relative motion between the radar and the target or require modulation of the electromagnetic wavefront, which complicates the imaging system, and each has its own limitations. In addition, existing radar forward-looking imaging techniques use array, scanning or wavefront spatial-modulation systems. Radar imaging systems of the array and wavefront spatial-modulation types have complex structures and are therefore costly, while radar imaging systems of the scanning type require a long data-accumulation process and can hardly achieve high-frame-rate imaging. The prior art therefore suffers from poor adaptability.
Disclosure of Invention
In view of the foregoing, there is a need to provide a scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method, system, computer device and storage medium that can simplify the radar imaging system while meeting the requirements of high-resolution, high-frame-rate imaging.
A method for forward-looking three-dimensional imaging of a scanning-free single-channel terahertz radar, the method comprising:
respectively controlling a single-transmitting and single-receiving terahertz radar to receive and send signals through two synchronous clock signals to obtain a one-dimensional range profile of a detection scene, controlling a time-of-flight camera to collect signals to obtain a three-dimensional depth image corresponding to the detection scene, and further converting the posture and the position of a target in the detection scene to obtain a training set formed by a plurality of groups of one-dimensional range profile and three-dimensional depth image pairs;
converting the one-dimensional range profile in the training set into a column vector, inputting the column vector into a pre-designed neural network model, flattening the three-dimensional depth image corresponding to the one-dimensional range profile to obtain the column vector as an output, and training the neural network model to obtain a trained neural network model;
and acquiring a one-dimensional range profile of the scene to be detected, converting the one-dimensional range profile into a column vector, inputting the column vector into the trained neural network model, and acquiring a three-dimensional image of the scene to be detected according to the column vector output by the trained neural network model.
In one embodiment, the method further comprises the following steps: controlling a single-transmitting single-receiving terahertz radar to acquire signals in a trigger mode through a first clock signal, so that the single-transmitting single-receiving terahertz radar transmits and receives signals every time a falling edge is detected and stores data to obtain a one-dimensional range profile of a detection scene;
controlling a time-of-flight camera to acquire signals in a trigger mode through a second clock signal, so that the time-of-flight camera acquires and stores data when detecting a falling edge, and a three-dimensional depth image corresponding to the detection scene is obtained; the three-dimensional depth image comprises horizontal and vertical coordinates and depth information of each point in the detection scene;
the first clock signal is synchronized with the second clock signal.
In one embodiment, the method further comprises the following steps: converting the one-dimensional range profile in the training set into a column vector, inputting the column vector into a pre-designed neural network model, flattening the three-dimensional depth image corresponding to the one-dimensional range profile to obtain the column vector as an output, and training the neural network model to obtain a trained neural network model; the neural network is a multilayer perceptron; the multilayer perceptron comprises an input layer, three fully connected layers and an output layer; the fully connected layers connect each point of the input data with each point of the output data; each fully connected layer is followed by an activation function layer.
In one embodiment, the method further comprises the following steps: converting the one-dimensional range profile in the training set into a column vector and inputting the column vector into a pre-designed neural network model;
flattening the three-dimensional depth image of dimension M × N (horizontal × vertical) corresponding to the one-dimensional range profile into a column vector of length MN;
and taking the column vector with the length of MN as the output of the neural network model, and training the neural network model to obtain the trained neural network model.
In one embodiment, the method further comprises the following step: restoring the length-MN column vector output by the trained neural network model to M × N dimensions to obtain the three-dimensional image of the scene to be detected.
A scanning-free single-channel terahertz radar forward-looking three-dimensional imaging system, the system comprising:
the data collection module is used for respectively controlling a single-transmitting and single-receiving terahertz radar to receive and send signals through two synchronous clock signals to obtain a one-dimensional range profile of a detection scene, controlling a time-of-flight camera to collect signals to obtain a three-dimensional depth image corresponding to the detection scene, and further converting the posture and the position of a target in the detection scene to obtain a training set formed by a plurality of groups of one-dimensional range profile and three-dimensional depth image pairs;
the model training module is used for converting the one-dimensional range profile in the training set into a column vector and inputting the column vector into a pre-designed neural network model, and training the neural network model by taking the column vector obtained by flattening the three-dimensional depth image corresponding to the one-dimensional range profile as output to obtain the trained neural network model;
and the model application module is used for acquiring a one-dimensional range profile of the scene to be detected, converting the one-dimensional range profile into a column vector, inputting the column vector into the trained neural network model, and obtaining a three-dimensional image of the scene to be detected according to the column vector output by the trained neural network model.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
respectively controlling a single-transmitting and single-receiving terahertz radar to receive and send signals through two synchronous clock signals to obtain a one-dimensional range profile of a detection scene, controlling a time-of-flight camera to collect signals to obtain a three-dimensional depth image corresponding to the detection scene, and further converting the posture and the position of a target in the detection scene to obtain a training set formed by a plurality of groups of one-dimensional range profile and three-dimensional depth image pairs;
converting the one-dimensional range profile in the training set into a column vector, inputting the column vector into a pre-designed neural network model, flattening the three-dimensional depth image corresponding to the one-dimensional range profile to obtain the column vector as an output, and training the neural network model to obtain a trained neural network model;
and acquiring a one-dimensional range profile of the scene to be detected, converting the one-dimensional range profile into a column vector, inputting the column vector into the trained neural network model, and acquiring a three-dimensional image of the scene to be detected according to the column vector output by the trained neural network model.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
respectively controlling a single-transmitting and single-receiving terahertz radar to receive and send signals through two synchronous clock signals to obtain a one-dimensional range profile of a detection scene, controlling a time-of-flight camera to collect signals to obtain a three-dimensional depth image corresponding to the detection scene, and further converting the posture and the position of a target in the detection scene to obtain a training set formed by a plurality of groups of one-dimensional range profile and three-dimensional depth image pairs;
converting the one-dimensional range profile in the training set into a column vector, inputting the column vector into a pre-designed neural network model, flattening the three-dimensional depth image corresponding to the one-dimensional range profile to obtain the column vector as an output, and training the neural network model to obtain a trained neural network model;
and acquiring a one-dimensional range profile of the scene to be detected, converting the one-dimensional range profile into a column vector, inputting the column vector into the trained neural network model, and acquiring a three-dimensional image of the scene to be detected according to the column vector output by the trained neural network model.
According to the above scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method and system, a one-dimensional range profile of the detection scene is obtained by transmitting and receiving signals with a single-transmitting and single-receiving terahertz radar, a three-dimensional depth image is collected with a time-of-flight camera, and a training set formed of one-dimensional range profile and three-dimensional depth image data pairs is obtained; a neural network model is trained with a large number of one-dimensional range profiles as input and the corresponding three-dimensional depth images as output to obtain a trained neural network model; a one-dimensional range profile of the scene to be detected is then acquired and input into the trained neural network model, and a three-dimensional image of the scene to be detected is obtained from the output of the trained neural network model. Because only a single-transmitting and single-receiving single-channel terahertz radar is used, the invention avoids the use of an antenna array and wavefront modulation and realizes scanning-free single-channel forward-looking three-dimensional imaging without relying on aperture accumulation or relative motion, which greatly simplifies the system and reduces cost. A forward-looking three-dimensional imaging algorithm based on deep learning is provided, which improves the imaging efficiency of the system and can meet the requirements of high-resolution, high-frame-rate imaging.
Drawings
FIG. 1 is an application scenario diagram of the scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method in one embodiment;
FIG. 2 is a schematic flow chart of the scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method in one embodiment;
FIG. 3 is a diagram of the multilayer perceptron model framework used in one embodiment;
FIG. 4 is a diagram of the scanning-free single-channel terahertz radar forward-looking three-dimensional imaging system in one embodiment;
FIG. 5 is a structural block diagram of the scanning-free single-channel terahertz radar forward-looking three-dimensional imaging system in one embodiment;
FIG. 6 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present application and are not intended to limit it.
The scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method provided by the present application can be applied in the environment shown in fig. 1. In the forward-looking imaging configuration, the terahertz radar system transmits a chirp pulse waveform and acquires a one-dimensional range profile of the scene, while the time-of-flight camera simultaneously acquires the depth information of the whole scene, i.e. a real three-dimensional image of the scene. The control and processing terminal generates two synchronized clock signals to control the terahertz radar system and the time-of-flight camera to work synchronously, yielding one-dimensional range profile-depth image pairs, in which the one-dimensional range profile serves as the input for training the neural network model and the depth image serves as the output. The control and processing terminal may be, but is not limited to, various personal computers, notebook computers, tablet computers and portable wearable devices.
In one embodiment, as shown in fig. 2, a scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method is provided. The method is described here as applied to the control and processing terminal in fig. 1 and includes the following steps:
step 202, respectively controlling a single-transmitting and single-receiving terahertz radar to receive and send signals through two synchronous clock signals to obtain a one-dimensional range profile of a detection scene, controlling a time-of-flight camera to collect signals to obtain a three-dimensional depth image corresponding to the detection scene, and further converting the posture and the position of a target in the detection scene to obtain a training set formed by a plurality of groups of one-dimensional range profiles and three-dimensional depth image pairs.
The invention uses only a single-transmitting and single-receiving single-channel terahertz radar, which avoids the use of an antenna array and wavefront modulation and realizes scanning-free single-channel forward-looking three-dimensional imaging without relying on aperture accumulation or relative motion, greatly simplifying the system and reducing cost.
The terahertz radar transmits a chirp (linear frequency modulation, LFM) signal toward the target scene. Its general form is

s(t) = w(t)·exp(jφ(t))   (1)

where w(t) is the real envelope and φ(t) is the modulation phase of the signal. For a standard LFM signal,

w(t) = rect(t/Tp),  φ(t) = πKt²   (2)

where rect(·) denotes the unit rectangular window, Tp the pulse width of the signal and K the frequency-modulation slope (chirp rate).
Let the received echo delay be t0; the echo is then

sr(t) = s(t - t0) = rect((t - t0)/Tp)·exp(jπK(t - t0)²)   (3)

and, by the principle of stationary phase, its spectrum is approximately

Sr(f) = (1/√K)·rect(f/(KTp))·exp(-jπf²/K)·exp(-j2πf·t0)   (4)
According to the linear frequency-modulation characteristic set above, the time-domain and frequency-domain responses of the matched filter can be obtained as

h(t) = s*(-t),  H(f) = S*(f) = (1/√K)·rect(f/(KTp))·exp(jπf²/K)   (5)

After matched filtering, the frequency-domain expression of the output signal is

Sout(f) = Sr(f)·H(f) = (1/K)·rect(f/(KTp))·exp(-j2πf·t0)   (6)
Since the Fourier transform of a rectangular window is a sinc function, the time-domain expression of the output signal after matched filtering follows from the properties of the Fourier transform as

sout(t) = IFFT[Sout(f)] = Tp·sinc[πKTp(t - t0)]   (7)

By the nature of the sinc function, the output signal has a peak at t = t0; multiplying t0 by c/2 (c being the speed of light) gives the position of the detected target, i.e. the one-dimensional range profile.
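To make the pulse-compression step above concrete, the following NumPy sketch simulates the echo of a single point target and recovers its delay by frequency-domain matched filtering, following equations (2), (3), (6) and (7). The waveform parameters Tp, K and fs and the delay t0 are illustrative values chosen for the demonstration, not values taken from the patent.

import numpy as np

Tp = 10e-6   # pulse width (s)
K = 5e12     # chirp rate (Hz/s); bandwidth B = K*Tp = 50 MHz
fs = 200e6   # sampling rate (Hz)
t0 = 2e-6    # echo delay of the point target (s)
c = 3e8      # speed of light (m/s)

t = np.arange(0, 4 * Tp, 1 / fs)
rect = lambda x: (np.abs(x) <= 0.5).astype(float)

# Transmitted chirp, eq. (2), and delayed echo, eq. (3)
s = rect((t - Tp / 2) / Tp) * np.exp(1j * np.pi * K * (t - Tp / 2) ** 2)
sr = rect((t - t0 - Tp / 2) / Tp) * np.exp(1j * np.pi * K * (t - t0 - Tp / 2) ** 2)

# Matched filtering in the frequency domain: Sout(f) = Sr(f) * conj(S(f)), eq. (6)
S, Sr = np.fft.fft(s), np.fft.fft(sr)
s_out = np.fft.ifft(Sr * np.conj(S))   # compressed pulse, eq. (7)

range_profile = np.abs(s_out)
peak_delay = t[np.argmax(range_profile)]   # ≈ t0
print(f"estimated delay {peak_delay * 1e6:.2f} us, target range {c * peak_delay / 2:.1f} m")

The peak of the compressed output falls in the range cell corresponding to t0, which is exactly the one-dimensional range profile information fed to the neural network in the following steps.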
The time-of-flight camera can measure the distance between itself and an object in the detection scene, and then acquire the 3D information of the scene.
And 204, converting the one-dimensional range profile in the training set into a column vector, inputting the column vector into a pre-designed neural network model, flattening the three-dimensional depth image corresponding to the one-dimensional range profile to obtain the column vector as output, and training the neural network model to obtain the trained neural network model.
The neural network model is typically trained with a back-propagation algorithm: the loss function is reduced by adjusting the parameters until an optimal network model is obtained.
And step 206, acquiring the one-dimensional range profile of the scene to be detected, converting the one-dimensional range profile into a column vector, inputting the column vector into the trained neural network model, and obtaining the three-dimensional image of the scene to be detected according to the column vector output by the trained neural network model.
To address the drawback that radar imaging systems of the array and wavefront spatial-modulation types have complex structures and high cost, the invention adopts a single-transmitting and single-receiving radar system without wavefront modulation, which greatly simplifies the system and reduces cost. To address the drawback that a radar imaging system of the scanning type requires a long data-accumulation process and can hardly achieve high-frame-rate imaging, the invention provides a forward-looking three-dimensional imaging algorithm based on deep learning: once the neural network model training stage is finished, the single-channel one-dimensional range profile echoes are fed directly into the trained network model to invert the three-dimensional image, realizing real-time imaging. The invention thus realizes a scanning-free single-channel terahertz radar forward-looking three-dimensional imaging system that is simple and compact, low in cost, and able to meet the requirements of high-resolution, high-frame-rate imaging.
In the scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method described above, a one-dimensional range profile of the detection scene is obtained by transmitting and receiving signals with a single-transmitting and single-receiving terahertz radar, a three-dimensional depth image is collected with a time-of-flight camera, and a training set consisting of one-dimensional range profile and three-dimensional depth image data pairs is obtained; a neural network model is trained with a large number of one-dimensional range profiles as input and the corresponding three-dimensional depth images as output to obtain a trained neural network model; a one-dimensional range profile of the scene to be detected is then acquired and input into the trained neural network model, and a three-dimensional image of the scene to be detected is obtained from the output of the trained neural network model. Because only a single-transmitting and single-receiving single-channel terahertz radar is used, the use of an antenna array and wavefront modulation is avoided and scanning-free single-channel forward-looking three-dimensional imaging is realized without relying on aperture accumulation or relative motion, which greatly simplifies the system and reduces cost. The forward-looking three-dimensional imaging algorithm based on deep learning improves the imaging efficiency of the system and can meet the requirements of high-resolution, high-frame-rate imaging.
In one embodiment, the method further comprises the following steps: controlling the single-transmitting single-receiving terahertz radar to collect signals in a trigger mode through a first clock signal, so that the radar transmits and receives signals and stores data every time a falling edge is detected, yielding a one-dimensional range profile of the detection scene; and controlling a time-of-flight camera to acquire signals in a trigger mode through a second clock signal, so that the camera acquires and stores data when a falling edge is detected, yielding a three-dimensional depth image corresponding to the detection scene; the three-dimensional depth image comprises the horizontal and vertical coordinates and depth information of each point in the detection scene; the first clock signal is synchronized with the second clock signal.
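As an illustration of the falling-edge trigger logic described above, the sketch below simulates two synchronized clock signals driving the radar and the time-of-flight camera. The two acquisition functions are placeholders that return dummy data (the patent does not specify a driver interface), and the clock waveforms are generated in software purely for demonstration.

import numpy as np

def radar_transmit_receive():
    # Placeholder: one chirp transmit/receive -> one 1-D range profile
    return np.random.randn(2048)

def tof_capture_depth():
    # Placeholder: one time-of-flight frame -> one M x N depth image
    return np.random.randn(64, 64)

period, n_frames = 1000, 10   # samples per clock period, number of frames to record
clock1 = np.tile(np.r_[np.ones(period // 2), np.zeros(period // 2)], n_frames)
clock2 = clock1.copy()        # second clock signal, synchronized with the first

profiles, depth_images = [], []
for k in range(1, len(clock1)):
    if clock1[k - 1] == 1 and clock1[k] == 0:   # falling edge of the first clock
        profiles.append(radar_transmit_receive())
    if clock2[k - 1] == 1 and clock2[k] == 0:   # falling edge of the second clock
        depth_images.append(tof_capture_depth())

Each (range profile, depth image) pair recorded at the same falling edge forms one sample of the training set.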
In one embodiment, the method further comprises the following steps: converting the one-dimensional range profile in the training set into a column vector, inputting the column vector into a pre-designed neural network model, flattening the three-dimensional depth image corresponding to the one-dimensional range profile to obtain the column vector as an output, and training the neural network model to obtain a trained neural network model; the neural network is a multilayer perceptron; the multilayer perceptron comprises an input layer, three fully connected layers and an output layer; the fully connected layers connect each point of the input data with each point of the output data, and the numbers of nodes in the three fully connected layers are 1024, 512 and 256, respectively; each fully connected layer is followed by an activation function layer, and the activation function used is the tanh function.
Specifically, the multilayer perceptron model framework used by the present invention is shown in FIG. 3. A multilayer perceptron (MLP) is a feed-forward artificial neural network (ANN) that maps a set of input vectors to a set of output vectors. It can be abstracted as a directed graph composed of several distinct node layers, with each layer of nodes fully connected to the next layer. In an MLP, every node except those in the input layer has an activation function, and the network is typically trained with the back-propagation algorithm. From its structure, an MLP connects every point of the input data with every point of the output data; meanwhile, each range cell of the one-dimensional range profile acquired by the single-transmitting and single-receiving terahertz radar contains information from all imaging grid cells of the three-dimensional scene. The two therefore share a structural similarity.
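As a concrete illustration of this architecture, the following PyTorch sketch builds the multilayer perceptron described above: three fully connected hidden layers of 1024, 512 and 256 nodes, each followed by a tanh activation, mapping a column-vectorized range profile to a flattened M × N depth image. The input length L and the image size M × N are assumed values for demonstration only, since this excerpt does not fix them.

import torch
import torch.nn as nn

L, M, N = 2048, 64, 64   # assumed number of range cells and depth-image size

class RangeProfileToDepthMLP(nn.Module):
    def __init__(self, in_len=L, out_len=M * N):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_len, 1024), nn.Tanh(),
            nn.Linear(1024, 512), nn.Tanh(),
            nn.Linear(512, 256), nn.Tanh(),
            nn.Linear(256, out_len),   # output layer: flattened M x N depth image
        )

    def forward(self, x):              # x: (batch, L) column-vectorized range profiles
        return self.net(x)

model = RangeProfileToDepthMLP()
depth_vec = model(torch.randn(8, L))   # (8, M*N)
depth_img = depth_vec.view(8, M, N)    # restore each output vector to an M x N image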
In one embodiment, the method further comprises the following steps: converting the one-dimensional range profile in the training set into a column vector and inputting the column vector into a pre-designed neural network model; flattening the three-dimensional depth image of dimension M × N (horizontal × vertical) corresponding to the one-dimensional range profile into a column vector of length MN; and taking the length-MN column vector as the output of the neural network model and training the neural network model to obtain the trained neural network model.
Specifically, the data acquired by the single-transmitting and single-receiving terahertz radar and the time-of-flight camera are first preprocessed to extract their main information. Each one-dimensional range profile is then stored as a column vector, and each depth image (of dimension M × N) is likewise flattened into a column vector (of dimension MN × 1); the range profiles and depth-image vectors are paired one to one, and each pair forms one data group. 90% of all data groups are randomly selected as the training set and the remaining 10% as the test set. The one-dimensional range profile in each group is used as the input of the neural network being trained and the column vector generated from the depth image as the output, and the loss function is reduced by adjusting the parameters to obtain an optimal network model.
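The data preparation and training procedure just described might look like the following sketch. The random arrays stand in for the measured range profiles and depth images, and the Adam optimizer, mean-squared-error loss, learning rate and epoch count are illustrative choices: the patent only states that back-propagation is used to reduce the loss.

import numpy as np
import torch
import torch.nn as nn

num_samples, L, M, N = 1000, 2048, 64, 64                             # assumed dataset and image sizes
profiles = np.random.randn(num_samples, L).astype(np.float32)         # stand-in range profiles
depth_images = np.random.randn(num_samples, M, N).astype(np.float32)  # stand-in depth images

X = torch.from_numpy(profiles)                                  # each range profile as a vector
Y = torch.from_numpy(depth_images).reshape(num_samples, M * N)  # flatten each M x N image to length MN

idx = torch.randperm(num_samples)                               # random 90% / 10% split
n_train = int(0.9 * num_samples)
train_idx, test_idx = idx[:n_train], idx[n_train:]

model = nn.Sequential(                                          # the MLP of the previous sketch
    nn.Linear(L, 1024), nn.Tanh(),
    nn.Linear(1024, 512), nn.Tanh(),
    nn.Linear(512, 256), nn.Tanh(),
    nn.Linear(256, M * N),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    pred = model(X[train_idx])
    loss = loss_fn(pred, Y[train_idx])
    optimizer.zero_grad()
    loss.backward()                                             # back-propagation
    optimizer.step()

with torch.no_grad():
    test_loss = loss_fn(model(X[test_idx]), Y[test_idx])
    print(f"test loss: {test_loss.item():.4f}")

At inference time a single one-dimensional range profile is converted to a column vector, passed through the trained model, and the length-MN output is reshaped back to M × N to obtain the three-dimensional image.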
In one embodiment, the method further comprises the following step: restoring the length-MN column vector output by the trained neural network model to M × N dimensions to obtain the three-dimensional image of the scene to be detected.
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include several sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, a scanning-free single-channel terahertz radar forward-looking three-dimensional imaging system is provided, comprising: a data collection module 502, a model training module 504, and a model application module 506, wherein:
the data collection module 502 is configured to respectively control a single-transmitting and single-receiving terahertz radar to receive and transmit signals through two synchronous clock signals to obtain a one-dimensional range profile of a detection scene, control a time-of-flight camera to acquire signals to obtain a three-dimensional depth image corresponding to the detection scene, and further obtain a training set formed by a plurality of groups of one-dimensional range profile and three-dimensional depth image pairs by transforming the posture and position of a target in the detection scene;
the model training module 504 is configured to convert the one-dimensional range profile in the training set into a column vector, input the column vector into a pre-designed neural network model, train the neural network model by using the column vector obtained by flattening the three-dimensional depth image corresponding to the one-dimensional range profile as an output, and obtain the trained neural network model;
the model application module 506 is configured to obtain a one-dimensional range profile of the scene to be detected, convert the one-dimensional range profile into a column vector, input the column vector into the trained neural network model, and obtain a three-dimensional image of the scene to be detected according to the column vector output by the trained neural network model.
The data collection module 502 is further configured to control the single-transmitting and single-receiving terahertz radar to collect signals in a trigger mode through a first clock signal, so that the single-transmitting and single-receiving terahertz radar receives and transmits signals every time a falling edge is detected and stores data to obtain a one-dimensional range profile of a detection scene; controlling a flight time camera to acquire signals in a trigger mode through a second clock signal, so that the flight time camera acquires and stores data when detecting a falling edge to obtain a three-dimensional depth image corresponding to a detection scene; the three-dimensional depth image comprises horizontal and vertical coordinates and depth information of each point in a detection scene; the first clock signal is synchronized with the second clock signal.
The model training module 504 is further configured to convert the one-dimensional range profile in the training set into a column vector, input the column vector into a pre-designed neural network model, and train the neural network model by using the column vector obtained by flattening the three-dimensional depth image corresponding to the one-dimensional range profile as an output to obtain a trained neural network model; the neural network is a multilayer perceptron; the multilayer perceptron comprises an input layer, three fully connected layers and an output layer; the fully connected layers connect each point of the input data with each point of the output data; each fully connected layer is followed by an activation function layer.
The model training module 504 is further configured to convert the one-dimensional range profile in the training set into a column vector and input the column vector into a pre-designed neural network model; to flatten the three-dimensional depth image of dimension M × N (horizontal × vertical) corresponding to the one-dimensional range profile into a column vector of length MN; and to take the length-MN column vector as the output of the neural network model and train the neural network model to obtain the trained neural network model.
The model training module 504 is further configured to train the neural network model through a back propagation algorithm, so as to obtain a trained neural network model.
The model application module 506 is further configured to restore the column vector output by the trained neural network model and having a length of MN to M × N dimensions, so as to obtain a three-dimensional image of the scene to be detected.
For specific limitations of the scanning-free single-channel terahertz radar forward-looking three-dimensional imaging system, reference may be made to the above limitations of the scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method, which are not repeated here. All modules of the scanning-free single-channel terahertz radar forward-looking three-dimensional imaging system may be implemented wholly or partly in software, in hardware, or in a combination of the two. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to realize a scanning-free single-channel terahertz radar forward-looking three-dimensional imaging method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of part of the structure relevant to the present disclosure and does not limit the computer device to which the present disclosure is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by hardware instructions of a computer program, which may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations are described, but any such combination should be considered within the scope of this disclosure as long as it contains no contradiction.
The above embodiments express only several implementations of the present application and are described in relative detail, but they should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the scope of protection of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (8)

1. A method for forward-looking three-dimensional imaging of a scanning-free single-channel terahertz radar is characterized by comprising the following steps:
respectively controlling a single-transmitting and single-receiving terahertz radar to receive and send signals through two synchronous clock signals to obtain a one-dimensional range profile of a detection scene, controlling a time-of-flight camera to collect signals to obtain a three-dimensional depth image corresponding to the detection scene, and further converting the posture and the position of a target in the detection scene to obtain a training set formed by a plurality of groups of one-dimensional range profile and three-dimensional depth image pairs;
converting the one-dimensional range profile in the training set into a column vector, inputting the column vector into a pre-designed neural network model, flattening the three-dimensional depth image corresponding to the one-dimensional range profile to obtain the column vector as an output, and training the neural network model to obtain a trained neural network model;
and acquiring a one-dimensional range profile of the scene to be detected, converting the one-dimensional range profile into a column vector, inputting the column vector into the trained neural network model, and acquiring a three-dimensional image of the scene to be detected according to the column vector output by the trained neural network model.
2. The method according to claim 1, wherein respectively controlling the single-transmitting and single-receiving terahertz radar to receive and send signals through two synchronous clock signals to obtain a one-dimensional range profile of the detection scene and controlling the time-of-flight camera to acquire signals to obtain a three-dimensional depth image corresponding to the detection scene comprises:
controlling a single-transmitting single-receiving terahertz radar to acquire signals in a trigger mode through a first clock signal, so that the single-transmitting single-receiving terahertz radar transmits and receives signals every time a falling edge is detected and stores data to obtain a one-dimensional range profile of a detection scene;
controlling a time-of-flight camera to acquire signals in a trigger mode through a second clock signal, so that the time-of-flight camera acquires and stores data when detecting a falling edge, and a three-dimensional depth image corresponding to the detection scene is obtained; the three-dimensional depth image comprises horizontal and vertical coordinates and depth information of each point in the detection scene;
the first clock signal is synchronized with the second clock signal.
3. The method of claim 1, wherein converting the one-dimensional range profile in the training set into a column vector, inputting the column vector into a pre-designed neural network model, and training the neural network model by using the column vector obtained by flattening the three-dimensional depth image corresponding to the one-dimensional range profile as an output to obtain the trained neural network model, comprises:
converting the one-dimensional range profile in the training set into a column vector, inputting the column vector into a pre-designed neural network model, flattening the three-dimensional depth image corresponding to the one-dimensional range profile to obtain the column vector as an output, and training the neural network model to obtain a trained neural network model; the neural network is a multilayer perceptron; the multilayer perceptron comprises an input layer, three fully connected layers and an output layer; the fully connected layers connect each point of the input data with each point of the output data; each fully connected layer is followed by an activation function layer.
4. The method of claim 3, wherein converting the one-dimensional range profile in the training set into a column vector, inputting the column vector into a pre-designed neural network model, and training the neural network model by using the column vector obtained by flattening the three-dimensional depth image corresponding to the one-dimensional range profile as an output to obtain the trained neural network model, comprises:
converting the one-dimensional range profile in the training set into a column vector and inputting the column vector into a pre-designed neural network model;
flattening the three-dimensional depth image of dimension M × N (horizontal × vertical) corresponding to the one-dimensional range profile into a column vector of length MN;
and taking the column vector with the length of MN as the output of the neural network model, and training the neural network model to obtain the trained neural network model.
5. The method according to claim 4, wherein obtaining the three-dimensional image of the scene to be detected according to the column vectors output by the trained neural network model comprises:
and restoring the length-MN column vector output by the trained neural network model to M × N dimensions to obtain the three-dimensional image of the scene to be detected.
6. A scanning-free single-channel terahertz radar forward-looking three-dimensional imaging system, comprising:
the data collection module is used for respectively controlling a single-transmitting and single-receiving terahertz radar to receive and send signals through two synchronous clock signals to obtain a one-dimensional range profile of a detection scene, controlling a time-of-flight camera to collect signals to obtain a three-dimensional depth image corresponding to the detection scene, and further converting the posture and the position of a target in the detection scene to obtain a training set formed by a plurality of groups of one-dimensional range profile and three-dimensional depth image pairs;
the model training module is used for converting the one-dimensional range profile in the training set into a column vector and inputting the column vector into a pre-designed neural network model, and training the neural network model by taking the column vector obtained by flattening the three-dimensional depth image corresponding to the one-dimensional range profile as output to obtain the trained neural network model;
and the model application module is used for acquiring a one-dimensional range profile of the scene to be detected, converting the one-dimensional range profile into a column vector, inputting the column vector into the trained neural network model, and obtaining a three-dimensional image of the scene to be detected according to the column vector output by the trained neural network model.
7. The system of claim 6, wherein the data collection module is further configured to:
controlling a single-transmitting single-receiving terahertz radar to acquire signals in a trigger mode through a first clock signal, so that the single-transmitting single-receiving terahertz radar transmits and receives signals every time a falling edge is detected and stores data to obtain a one-dimensional range profile of a detection scene;
controlling a time-of-flight camera to acquire signals in a trigger mode through a second clock signal, so that the time-of-flight camera acquires and stores data when detecting a falling edge, and a three-dimensional depth image corresponding to the detection scene is obtained; the three-dimensional depth image comprises horizontal and vertical coordinates and depth information of each point in the detection scene;
the first clock signal is synchronized with the second clock signal.
8. The system of claim 7, wherein the model training module is further configured to:
converting the one-dimensional range profile in the training set into a column vector and inputting the column vector into a pre-designed neural network model;
flattening the three-dimensional depth image of dimension M × N (horizontal × vertical) corresponding to the one-dimensional range profile into a column vector of length MN;
and taking the column vector with the length of MN as the output of the neural network model, and training the neural network model to obtain the trained neural network model.
CN202210312335.1A 2022-03-28 2022-03-28 Scanning-free single-channel terahertz radar foresight three-dimensional imaging method and system Pending CN114740472A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210312335.1A CN114740472A (en) 2022-03-28 2022-03-28 Scanning-free single-channel terahertz radar foresight three-dimensional imaging method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210312335.1A CN114740472A (en) 2022-03-28 2022-03-28 Scanning-free single-channel terahertz radar foresight three-dimensional imaging method and system

Publications (1)

Publication Number Publication Date
CN114740472A true CN114740472A (en) 2022-07-12

Family

ID=82277792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210312335.1A Pending CN114740472A (en) 2022-03-28 2022-03-28 Scanning-free single-channel terahertz radar foresight three-dimensional imaging method and system

Country Status (1)

Country Link
CN (1) CN114740472A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115327543A (en) * 2022-08-15 2022-11-11 中国科学院空天信息创新研究院 Multi-node time-frequency synchronization method for unmanned aerial vehicle swarm SAR
CN115586542A (en) * 2022-11-28 2023-01-10 中国人民解放军国防科技大学 Remote terahertz single photon radar imaging method and device based on scaling training
CN115586542B (en) * 2022-11-28 2023-03-03 中国人民解放军国防科技大学 Remote terahertz single photon radar imaging method and device based on scaling training
CN117111093A (en) * 2023-10-20 2023-11-24 中山大学 Single-pixel three-dimensional imaging method and system based on neural network
CN117111093B (en) * 2023-10-20 2024-02-06 中山大学 Single-pixel three-dimensional imaging method and system based on neural network

Similar Documents

Publication Publication Date Title
CN114740472A (en) Scanning-free single-channel terahertz radar foresight three-dimensional imaging method and system
Sengupta et al. mm-Pose: Real-time human skeletal posture estimation using mmWave radars and CNNs
Kim et al. Human activity classification based on point clouds measured by millimeter wave MIMO radar with deep recurrent neural networks
Pu et al. OSRanP: A novel way for radar imaging utilizing joint sparsity and low-rankness
Zhang et al. Matrix completion for downward-looking 3-D SAR imaging with a random sparse linear array
Armanious et al. An adversarial super-resolution remedy for radar design trade-offs
CN110728213A (en) Fine-grained human body posture estimation method based on wireless radio frequency signals
US20230333209A1 (en) Gesture recognition method and apparatus
Wu et al. Super-resolution for MIMO array SAR 3-D imaging based on compressive sensing and deep neural network
CN109298417B (en) Building internal structure detection method and device based on radar signal processing
CN110554384A (en) imaging method based on microwave signal
CN113050086A (en) Ground penetrating radar system, control method, device, equipment and storage medium
Liang et al. Through-the-wall high-dimensional imaging of human vital signs by combining multiple enhancement algorithms using portable LFMCW-MIMO radar
KR20220141748A (en) Method and computer readable storage medium for extracting target information from radar signal
CN113109797B (en) Method and device for detecting target of frequency modulation continuous wave staring radar and computer equipment
Liang et al. A posture recognition based fall detection system using a 24GHz CMOS FMCW radar SoC
Wei et al. Learning-based split unfolding framework for 3-D mmW radar sparse imaging
CN115586542B (en) Remote terahertz single photon radar imaging method and device based on scaling training
Tang et al. Indoor scene reconstruction for through-the-wall radar imaging using low-rank and sparsity constraints
CN114740470A (en) Microwave wavefront modulation foresight imaging method and device based on attribute scattering model
Ni et al. A novel scan SAR imaging method for maritime surveillance via Lp regularization
Dao et al. Temporal rate up-conversion of synthetic aperture radar via low-rank matrix recovery
Chen et al. Rf-inpainter: Multimodal image inpainting based on vision and radio signals
CN113421281A (en) Pedestrian micromotion part separation method based on segmentation theory
Xie et al. CNN based joint positioning and pose recognition of concealed human for 3D through-wall imaging radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination