CN108279671B - Terahertz-based environment sensing method and device and computer readable storage medium - Google Patents
- Publication number
- CN108279671B (application number CN201810017859.1A)
- Authority
- CN
- China
- Prior art keywords
- invisible
- data
- target
- terahertz
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
- G05D1/024—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0242—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0251—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0257—Control of position or course in two dimensions specially adapted to land vehicles using a radar
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0259—Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means
Abstract
The invention discloses a terahertz-based environment sensing method, comprising: acquiring first data detected by a terahertz radar and second data detected by a preset sensor; fusing the first data and the second data to determine an invisible target in an invisible region; and determining a motion state of the invisible target based on the first data detected by the terahertz radar. The invention also discloses a terahertz-based environment sensing device and a computer-readable storage medium. The method senses the environment of a road section blocked by an obstruction based on the terahertz radar, so that the blocked target is identified and safe driving of the unmanned vehicle can be ensured.
Description
Technical Field
The invention relates to the technical field of unmanned driving, in particular to an environment sensing method and device based on terahertz and a computer readable storage medium.
Background
An unmanned vehicle is an intelligent vehicle that integrates technologies such as automatic control, system architecture, artificial intelligence, and visual computing. As a product of the advanced development of computer science, pattern recognition, and intelligent control technology, it is an important measure of a nation's research strength and industrial level, and it has broad application prospects in the fields of national defense and the national economy.
An unmanned vehicle needs environment perception capability to realize automatic driving. It mainly senses the surrounding environment with vehicle-mounted sensors and thereby detects and identifies targets such as vehicles and pedestrians. At present, the vehicle-mounted sensors installed on an unmanned vehicle include cameras, millimeter-wave radars, and laser radars. These sensors mainly detect targets in the visible region; when the unmanned vehicle enters a road section blocked by a tall building, they cannot sense the environment of the blocked road section or detect and identify its dynamic targets, which easily causes safety accidents such as collisions and cannot guarantee safe driving of the unmanned vehicle.
The above is provided only to assist understanding of the technical solution of the present invention and does not constitute an admission that it is prior art.
Disclosure of Invention
The invention mainly aims to provide a terahertz-based environment sensing method and device and a computer-readable storage medium, so as to solve the technical problem that the existing vehicle-mounted sensors of an unmanned vehicle cannot detect the environment of a road section blocked by an obstruction, cannot detect and identify the dynamic targets of the blocked road section, and therefore cannot ensure safe driving of the unmanned vehicle.
In order to achieve the above object, the present invention provides a terahertz-based environmental sensing method, including:
acquiring first data detected by a terahertz radar and second data detected by a preset sensor;
fusing the first data and the second data to determine an invisible target of an invisible area;
determining a motion state of the invisible target based on the first data detected by the terahertz radar.
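The three steps above can be sketched as follows. This is a hedged illustration only: the function names and the set-based data model are assumptions for exposition, not the patent's actual interfaces or data formats.

```python
# Illustrative sketch of the claimed three-step method (all names hypothetical).

def acquire_data(thz_radar, preset_sensor):
    # Step 1: first data (terahertz radar) and second data (preset sensor).
    return thz_radar(), preset_sensor()

def fuse(first_data, second_data):
    # Step 2: targets present in the THz detections but absent from the
    # preset sensor's detections are taken to lie in the invisible area.
    return first_data - second_data

def motion_state(invisible_targets, thz_velocities):
    # Step 3: read each invisible target's velocity back out of the THz data.
    return {t: thz_velocities[t] for t in invisible_targets}

# Toy run: the THz radar sees two targets, the preset sensor only one.
first, second = acquire_data(lambda: {"car", "pedestrian"}, lambda: {"car"})
invisible = fuse(first, second)            # the occluded target(s)
states = motion_state(invisible, {"pedestrian": 1.4, "car": 12.0})
```

The set difference stands in for the data fusion described in more detail in the embodiments below.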
Optionally, the step of fusing the first data and the second data to determine the invisible target of the invisible area includes:
respectively extracting a first interested region from the first data and a second interested region from the second data;
comparing the first region of interest with the second region of interest, and segmenting an invisible region from the first region of interest;
identifying an invisible target from the invisible region.
Optionally, the step of identifying the invisible target from the invisible region includes:
performing invisible target identification on the invisible region using a deep-learning-based convolutional neural network to identify the invisible target, wherein the deep-learning-based convolutional neural network is a model trained in advance for identifying invisible targets.
Optionally, the step of performing invisible target identification on the invisible region using the deep-learning-based convolutional neural network to identify the invisible target includes:
extracting features of the invisible region to obtain a feature vector;
inputting the feature vector into the deep-learning-based convolutional neural network to identify the invisible target.
Optionally, the step of determining the motion state of the invisible target based on the first data detected by the terahertz radar includes:
extracting speed data of the invisible target from the first data;
determining the motion state of the invisible target based on the speed data of the invisible target.
Optionally, the step of determining the motion state of the invisible target based on the speed data of the invisible target includes:
inputting the speed data of the invisible target into a preset mathematical model to obtain the motion state of the invisible target.
Optionally, the mathematical model comprises a kalman filter model and a particle filter model.
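As a hedged illustration of one of the two model families named above, a minimal one-dimensional Kalman filter might smooth a noisy speed track as sketched below. The noise parameters and the scalar state model are assumed values for exposition, not the patent's preset mathematical model.

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    # Minimal scalar Kalman filter: q is assumed process noise,
    # r is assumed measurement noise.
    x, p = measurements[0], 1.0        # initial state estimate and variance
    estimates = []
    for z in measurements:
        p += q                         # predict: uncertainty grows
        k = p / (p + r)                # Kalman gain
        x += k * (z - x)               # update: blend estimate with measurement
        p *= 1 - k
        estimates.append(x)
    return estimates

speeds = [10.2, 9.8, 10.1, 9.9, 10.0]  # noisy speed readings (m/s)
smoothed = kalman_1d(speeds)
```

A particle filter would replace the single Gaussian estimate with a weighted sample set, at higher cost but without the linear-Gaussian assumption.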
Optionally, the method is applied to an unmanned vehicle, and after the step of determining the motion state of the invisible target based on the first data detected by the terahertz radar, the method comprises:
acquiring the motion state of the unmanned vehicle;
determining a driving decision of the unmanned vehicle according to the motion state of the unmanned vehicle in combination with the motion state of the invisible target, and controlling driving of the unmanned vehicle based on the driving decision.
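A driving decision combining the two motion states could, for illustration, be a simple time-to-collision rule. The threshold and the straight-line closing-speed model are assumptions, not the patent's decision logic.

```python
def driving_decision(ego_speed, target_speed, gap_m, ttc_threshold_s=3.0):
    # Hypothetical rule: brake when the time-to-collision with the invisible
    # target drops below a threshold; otherwise keep the current plan.
    closing_speed = ego_speed - target_speed   # m/s along the lane (assumed)
    if closing_speed <= 0:
        return "keep"                          # not closing in on the target
    ttc = gap_m / closing_speed                # seconds until the gap closes
    return "brake" if ttc < ttc_threshold_s else "keep"

decision = driving_decision(ego_speed=15.0, target_speed=5.0, gap_m=20.0)
```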
In addition, to achieve the above object, the present invention further provides a terahertz-based environment sensing device, including: a memory, a processor, and a terahertz-based environment sensing program stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the following steps:
acquiring first data detected by a terahertz radar and second data detected by a preset sensor;
fusing the first data and the second data to determine an invisible target of an invisible area;
determining a motion state of the invisible target based on the first data detected by the terahertz radar.
Furthermore, to achieve the above object, the present invention also provides a computer-readable storage medium storing a terahertz-based environment sensing program which, when executed by a processor, implements the following steps:
acquiring first data detected by a terahertz radar and second data detected by a preset sensor;
fusing the first data and the second data to determine an invisible target of an invisible area;
determining a motion state of the invisible target based on the first data detected by the terahertz radar.
The method includes: acquiring first data detected by a terahertz radar and second data detected by a preset sensor; fusing the first data and the second data to determine an invisible target in an invisible region; and determining a motion state of the invisible target based on the first data detected by the terahertz radar. In this manner, based on the terahertz radar's ability to penetrate obstacles such as clouds, smoke, and building materials to detect targets, the data detected by the terahertz radar, covering both the visible and invisible regions, are fused with the data of the visible region detected by the preset sensor to determine the invisible target in the invisible region. The motion state of the invisible target is then determined and a corresponding driving decision is made to control driving of the unmanned vehicle, avoiding collision accidents and thereby ensuring safe driving of the unmanned vehicle.
Drawings
FIG. 1 is a schematic diagram of the terminal structure of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a first embodiment of the terahertz-based environment sensing method of the present invention;
FIG. 3 is a schematic flowchart of a second embodiment of the terahertz-based environment sensing method of the present invention;
FIG. 4 is a schematic flowchart of a third embodiment of the terahertz-based environment sensing method of the present invention;
FIG. 5 is a schematic flowchart of a fourth embodiment of the terahertz-based environment sensing method of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: acquiring first data detected by a terahertz radar and second data detected by a preset sensor; fusing the first data and the second data to determine an invisible target of an invisible area; determining a motion state of the invisible target based on the first data detected by the terahertz radar.
As shown in FIG. 1, FIG. 1 is a schematic structural diagram of the terminal in a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention is provided with an automatic driving system of an unmanned vehicle.
As shown in FIG. 1, the terminal may include: a processor 1001, such as a CPU; a communication bus 1002; a user interface 1003; a network interface 1004; and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a radio frequency (RF) circuit, sensors, an audio circuit, a Wi-Fi module, and the like. The sensors may include light sensors, motion sensors, and other sensors. Specifically, the light sensors may include an ambient light sensor, which can adjust the brightness of the display screen according to the brightness of the ambient light, and a proximity sensor, which can turn off the display screen and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and, when the mobile terminal is stationary, the magnitude and direction of gravity; it can be used for applications that recognize the attitude of the mobile terminal (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer and tapping). Of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a terahertz-based environment-aware program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting the terahertz radar and the preset sensor, and performing data communication with the terahertz radar and the preset sensor respectively; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call the terahertz-based environment sensing program stored in the memory 1005, and perform the following operations:
acquiring first data detected by a terahertz radar and second data detected by a preset sensor;
fusing the first data and the second data to determine an invisible target of an invisible area;
determining a motion state of the invisible target based on the first data detected by the terahertz radar.
Further, the processor 1001 may call the terahertz-based environment sensing program stored in the memory 1005, and also perform the following operations:
respectively extracting a first interested region from the first data and a second interested region from the second data;
comparing the first region of interest with the second region of interest, and segmenting an invisible region from the first region of interest;
invisible objects are identified from the invisible region.
Further, the processor 1001 may call the terahertz-based environment sensing program stored in the memory 1005, and also perform the following operations:
performing invisible target identification on the invisible region using a deep-learning-based convolutional neural network to identify the invisible target, wherein the deep-learning-based convolutional neural network is a model trained in advance for identifying invisible targets.
Further, the processor 1001 may call the terahertz-based environment sensing program stored in the memory 1005, and also perform the following operations:
extracting the features of the invisible area to obtain a feature vector;
inputting the feature vector into the deep-learning-based convolutional neural network to identify the invisible target.
Further, the processor 1001 may call the terahertz-based environment sensing program stored in the memory 1005, and also perform the following operations:
extracting speed data of an invisible target from the first data;
determining a motion state of the invisible object based on the velocity data of the invisible object.
Further, the processor 1001 may call the terahertz-based environment sensing program stored in the memory 1005, and also perform the following operations:
inputting the speed data of the invisible target into a preset mathematical model to obtain the motion state of the invisible target.
Further, the processor 1001 may call the terahertz-based environment sensing program stored in the memory 1005, and also perform the following operations:
acquiring the motion state of the unmanned vehicle;
determining a driving decision of the unmanned vehicle according to the motion state of the unmanned vehicle in combination with the motion state of the invisible target, and controlling driving of the unmanned vehicle based on the driving decision.
Based on the hardware structure, various embodiments of the terahertz-based environment sensing method are provided.
Referring to FIG. 2, a first embodiment of the present invention provides a terahertz-based environment sensing method, including:
step S10, acquiring first environmental data detected by the terahertz radar and second environmental data detected by a preset sensor;
the present embodiment is applied to an automatic driving system of an unmanned vehicle. Terahertz (THz) waves refer to electromagnetic waves with frequencies in the range of 100GHz to 10THz, and are between millimeter waves and infrared light. The terahertz wave has the characteristics of strong penetrability, high use safety, good directionality, high bandwidth and the like. Based on the characteristics of terahertz, compared with the radar of a common microwave band, the terahertz radar has stronger penetrating capability and can penetrate through obstacles such as cloud smoke, building materials and the like to carry out target detection. According to the embodiment, the terahertz radar is installed on the unmanned vehicle, and the environment perception of the road section shielded by the shielding object is realized by combining other vehicle-mounted sensors (which can be sensors such as a camera, a millimeter wave radar and a laser radar) installed on the unmanned vehicle. Sensors such as cameras, millimeter wave radars, laser radars, etc. may detect the visible region.
First, first environment data detected by a terahertz radar and second environment data detected by a preset sensor are acquired.
The preset sensor is a vehicle-mounted sensor other than the terahertz radar installed on the unmanned vehicle; it may be a camera, a millimeter-wave radar, a laser radar, or another sensor, or a combination of one or more of these sensors. This embodiment takes the millimeter-wave radar as an example of the preset sensor. A millimeter-wave radar works in the millimeter-wave band, has the advantages of a wide detection range and little influence from weather, and is widely applied in fields such as automatic driving; the millimeter-wave radar in this embodiment may be a frequency-modulated continuous-wave (FMCW) millimeter-wave radar. The automatic driving system of the unmanned vehicle is in communication connection with the terahertz radar and the millimeter-wave radar respectively.
Specifically, when the unmanned vehicle is started, the automatic driving system enables the terahertz radar and the millimeter-wave radar to detect the environment around the unmanned vehicle. While the unmanned vehicle is running, the automatic driving system acquires the data detected by the terahertz radar (defined as the first data) and the data detected by the millimeter-wave radar (defined as the second data) in real time or at regular intervals. The first data detected by the terahertz radar include three-dimensional point cloud data of objects in both the visible region and the invisible region around the unmanned vehicle; the second data detected by the millimeter-wave radar include three-dimensional point cloud data of objects in the visible region around the unmanned vehicle.
Step S20, fusing the first data and the second data to determine an invisible target of an invisible area;
and then, fusing the first data detected by the terahertz radar and the second data detected by the millimeter wave radar. Specifically, an ROI (region of interest) is extracted from first data detected by the terahertz radar and an ROI (second region of interest) is extracted from second data detected by the millimeter wave radar, the first region of interest and the second region of interest are compared to segment an invisible region, and then dynamic targets (defined as invisible targets) such as pedestrians and vehicles are identified from the invisible region.
Step S30, determining a motion state of the invisible target based on the first data detected by the terahertz radar.
In this embodiment, the first data detected by the terahertz radar further include speed data of the invisible target. After the invisible target in the invisible region is determined, the speed of the invisible target is extracted from the first data detected by the terahertz radar, and the motion track of the invisible target is then predicted according to its speed, so that a corresponding driving decision can be made to control safe driving of the unmanned vehicle and avoid collision accidents.
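A minimal sketch of the trajectory prediction described above, assuming a constant-velocity motion model (the embodiment may use a richer model such as the filters described later):

```python
def predict_track(position, velocity, dt=0.1, steps=3):
    # Constant-velocity extrapolation of an invisible target's 2-D position
    # (illustrative assumption; names and units are hypothetical, metres/seconds).
    x, y = position
    vx, vy = velocity
    return [(x + vx * dt * k, y + vy * dt * k) for k in range(1, steps + 1)]

# A target at the origin moving at 2 m/s along x, predicted over two 0.5 s steps.
track = predict_track(position=(0.0, 0.0), velocity=(2.0, 0.0), dt=0.5, steps=2)
```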
An application scenario of this embodiment may be as follows: when the unmanned vehicle enters a road section blocked by a tall building, the automatic driving system of the unmanned vehicle acquires the three-dimensional point cloud data of objects in both the visible and invisible regions detected by the terahertz radar and the three-dimensional point cloud data of objects in the visible region detected by the millimeter-wave radar. The data detected by the two radars are then fused to determine the invisible region, dynamic targets such as vehicles and pedestrians are identified from the invisible region, and their motion tracks are predicted. A corresponding driving decision is then made based on the predicted motion tracks to control safe driving of the unmanned vehicle.
In this embodiment, first data detected by a terahertz radar and second data detected by a preset sensor are acquired; the first data and the second data are fused to determine an invisible target in an invisible region; and a motion state of the invisible target is determined based on the first data detected by the terahertz radar. In this manner, based on the terahertz radar's ability to penetrate obstacles such as clouds, smoke, and building materials to detect targets, the data detected by the terahertz radar, covering both the visible and invisible regions, are fused with the data of the visible region detected by the preset sensor to determine the invisible target in the invisible region. The motion state of the invisible target is then determined and a corresponding driving decision is made to control driving of the unmanned vehicle, avoiding collision accidents and thereby ensuring safe driving of the unmanned vehicle.
Further, referring to FIG. 3, a second embodiment of the terahertz-based environment sensing method of the present invention is provided. Based on the embodiment shown in FIG. 2, step S20 may include:
step S21, extracting a first region of interest from the first data and a second region of interest from the second data, respectively;
step S22, comparing the first region of interest with the second region of interest, and segmenting an invisible region from the first region of interest;
Step S23, identifying an invisible target from the invisible region.
According to the first embodiment, after the unmanned vehicle acquires the first data detected by the terahertz radar and the second data detected by the millimeter-wave radar, the two sets of data need to be fused. Specifically, a first region of interest is extracted from the first data and a second region of interest is extracted from the second data. The first region of interest covers both the regions where objects in the visible region are located and the regions where invisible objects in the invisible region are located, while the second region of interest covers only the regions where objects in the visible region are located. The first region of interest is compared with the second region of interest to determine their overlap, which corresponds to the objects in the visible region; the area outside the overlap, which corresponds to the objects in the invisible region, is then segmented from the first region of interest, and the invisible target is identified from that area. Specifically, step S23 may include:
Step S230, performing invisible target identification on the invisible region using a deep-learning-based convolutional neural network to identify the invisible target, wherein the deep-learning-based convolutional neural network is a model trained in advance for identifying invisible targets.
Step S230 may include:
step S231, extracting the features of the invisible area to obtain a feature vector;
Step S232, inputting the feature vector into the deep-learning-based convolutional neural network to identify the invisible target.
In this embodiment, after the invisible region is segmented from the first region of interest, the invisible target can be identified from the invisible region by a deep learning neural network algorithm. Deep learning methods can be divided into unsupervised learning, which includes deep belief networks (DBN), and supervised learning, which includes convolutional neural networks (CNN). The convolutional neural network, a variant of the multi-layer perceptron (MLP) in the family of artificial neural networks, has become an efficient recognition method.
This embodiment adopts a Convolutional Neural Network (CNN) to recognize the invisible area. The CNN is a model trained in advance for recognizing invisible targets. Training the CNN involves computing convolution feature values and loss function values. Specifically, samples such as pedestrians and vehicles are input into the convolutional layers of the CNN to be trained, yielding convolution feature values; these are then passed to the loss layer of the network, where the loss function is evaluated to obtain a loss value. When the loss value is smaller than the convolution feature value, the convolutional layers are adjusted, and the convolution feature values and loss values are computed iteratively until the number of iterations reaches a preset count (for example, 3 million), at which point the trained CNN is obtained.
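The iterative compute-loss-and-adjust loop described above can be sketched with a deliberately tiny stand-in model: a single weight fitted by gradient descent for a fixed number of iterations. The data, the one-parameter "network", and the learning rate are illustrative assumptions, not the patent's CNN:

```python
# Toy stand-in for the iterative training loop: a single weight is fitted
# by gradient descent on a squared loss over labelled samples, iterating a
# fixed number of times before the "trained model" is taken as final.
samples = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]  # (feature, label) pairs

w, lr = 0.0, 0.05                     # initial weight, learning rate
for _ in range(1000):                 # fixed iteration budget
    grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
    w -= lr * grad                    # "adjust the layer" based on the loss

print(round(w, 3))                    # converges toward 2.0
```

The fixed iteration budget plays the role of the preset iteration count in the embodiment; a real network would update many convolutional weights instead of one scalar.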
When the trained convolutional neural network is used to recognize the invisible area, features are first extracted from the invisible area to obtain a feature vector; the feature vector is input into the trained network, and the recognition result identifies invisible targets such as vehicles and pedestrians in the invisible area.
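As a loose illustration of this recognition step (not the patent's trained CNN), features extracted from a segmented region can be matched against per-class reference vectors; every feature, class, and reference value below is an assumption made for the sketch:

```python
import math

# Illustrative stand-in for CNN-based recognition: a 2-D feature vector
# (point count, bounding-box height) is extracted from the region and
# assigned the class of the nearest reference vector.
def extract_features(points):
    """Feature vector of a segmented region given as (x, y) points."""
    ys = [y for _, y in points]
    return (float(len(points)), max(ys) - min(ys))

REFERENCES = {"pedestrian": (20.0, 1.7), "vehicle": (200.0, 1.5)}

def classify(features):
    """Nearest reference vector wins (stand-in for the network's output)."""
    return min(REFERENCES, key=lambda c: math.dist(features, REFERENCES[c]))

region = [(0.1 * i, 0.1 * i) for i in range(18)]  # sparse, ~1.7 m tall
print(classify(extract_features(region)))  # -> pedestrian
```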
In this embodiment, the first data detected by the terahertz radar and the second data detected by the millimeter wave radar are fused, the invisible area is segmented from the first data, and invisible targets such as pedestrians and vehicles are identified from the segmented invisible area, laying a foundation for subsequently predicting the motion trajectories of those invisible targets.
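The fusion described in this embodiment, comparing regions of interest and keeping the part of the terahertz data with no counterpart in the millimeter-wave data, can be sketched as follows; the boxes and the overlap threshold are illustrative assumptions:

```python
# Hypothetical sketch of the fusion step: regions of interest (ROIs) are
# axis-aligned boxes (x, y, w, h); a terahertz-ROI box with no sufficiently
# overlapping millimeter-wave-ROI box is assigned to the invisible region.
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def segment_invisible(first_rois, second_rois, thresh=0.3):
    """Return first-ROI boxes not explained by any visible-area box."""
    return [r for r in first_rois
            if all(iou(r, v) < thresh for v in second_rois)]

terahertz_rois = [(0, 0, 4, 4), (10, 10, 4, 4)]  # visible + occluded object
millimeter_rois = [(0, 0, 4, 4)]                  # visible object only
print(segment_invisible(terahertz_rois, millimeter_rois))  # [(10, 10, 4, 4)]
```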
Further, referring to fig. 4, a third embodiment of the terahertz-based environment sensing method of the present invention is provided. Based on the embodiments shown in fig. 2 and fig. 3, step S30 may include:
step S31, extracting speed data of the invisible target from the first data;
step S32, determining a motion state of the invisible target based on the speed data of the invisible target.
Step S32 may include:
step S320, inputting the speed data of the invisible target into a preset mathematical model to obtain the motion state of the invisible target.
In this embodiment, the first data detected by the terahertz radar further includes speed data of objects. Building on the preceding embodiment, after the invisible target is identified from the invisible area, the speed data corresponding to the invisible target is extracted from the first data, and the speed data is input into a preset mathematical model that predicts the motion state of an object, so that the motion trajectory of the invisible target is predicted; that is, the invisible target is tracked. The mathematical model may be a Kalman filter model or a particle filter model.
Kalman filtering is an algorithm that uses a linear system state equation to obtain an optimal estimate of the system state from input and output observation data. Because the observation data include the effects of noise and interference in the system, the optimal estimation can also be regarded as a filtering process. Taking pedestrian speed data detected by the terahertz radar as an example, a preset Kalman filter model can predict the pedestrian's position at the next moment, yielding the pedestrian's motion state.
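A heavily simplified sketch of this predict-and-update cycle follows. It uses a fixed, pre-tuned gain in place of the full Kalman covariance update, and the time step, gain, and measurements are all illustrative assumptions:

```python
# Minimal 1-D constant-velocity filter sketch: state = (position, velocity).
# Predicting one step ahead gives the pedestrian's expected next position;
# the velocity is then corrected toward the measured speed.
def kalman_step(x, v, z_v, dt=0.1, gain=0.5):
    """Predict position with velocity v, then blend the measured speed z_v
    into the velocity using a fixed gain (a stand-in for the covariance
    update of a full Kalman filter)."""
    x_pred = x + v * dt           # predict: new position
    v_new = v + gain * (z_v - v)  # update: correct velocity toward z_v
    return x_pred, v_new

x, v = 0.0, 1.0
for z in [1.2, 1.1, 1.3]:         # terahertz-radar speed measurements
    x, v = kalman_step(x, v, z)
print(round(x, 2), round(v, 2))   # predicted position and filtered speed
```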
A particle filter approximates a probability density function with a set of random samples propagated through the state space, replacing the integral operation with a sample mean to obtain a minimum-variance estimate of the system state; the samples are vividly called "particles", hence the name. The particle filter model represents the distribution at each time instant by a set of samples (particles): the particle set S(t+1) at the next instant can be recursively estimated from the current estimate S(t). Taking pedestrian speed data detected by the terahertz radar as an example, the speed at the current moment is input into the particle filter model to predict the pedestrian's speed at the next moment, yielding the pedestrian's motion state.
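A minimal particle-filter step over such a speed estimate might look like the following; the particle count, noise levels, and measurements are illustrative assumptions:

```python
import math
import random

# Schematic particle-filter update on a pedestrian-speed estimate: the
# particles are propagated with process noise, weighted by how well they
# explain the measured speed, and resampled in proportion to weight.
random.seed(0)  # deterministic for the example

def particle_filter_step(particles, z, motion_noise=0.1, meas_noise=0.2):
    moved = [p + random.gauss(0.0, motion_noise) for p in particles]
    weights = [math.exp(-((p - z) ** 2) / (2 * meas_noise ** 2))
               for p in moved]
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.uniform(0.5, 1.5) for _ in range(200)]
for z in [1.0, 1.05, 1.1]:     # measured speeds at successive instants
    particles = particle_filter_step(particles, z)
estimate = sum(particles) / len(particles)
print(round(estimate, 2))      # sample mean approximates the speed
```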
In other implementations, the motion trajectory of the invisible target can also be predicted with a preset Extended Kalman Filter (EKF) model, Unscented Kalman Filter (UKF) model, or the like.
In this embodiment, after the invisible target in the invisible region is identified, it is further tracked so that the automatic driving system can determine the corresponding driving strategy.
Further, referring to fig. 5, a fourth embodiment of the terahertz-based environment sensing method of the present invention is provided. Based on the embodiments shown in fig. 2 and fig. 3, after step S30, the method may include:
step S40, acquiring the motion state of the unmanned vehicle;
and step S50, determining a driving decision of the unmanned vehicle according to the motion state of the unmanned vehicle and the motion state of the invisible target, and controlling the driving of the unmanned vehicle based on the driving decision.
After the motion state of the invisible target is obtained through the preset mathematical model, the automatic driving system acquires the motion state of the unmanned vehicle. In this embodiment, an Inertial Measurement Unit (IMU) sensor, comprising three single-axis accelerometers and three single-axis gyroscopes, is mounted on the unmanned vehicle in advance and detects the motion state of the unmanned vehicle itself. The automatic driving system acquires the motion state of the unmanned vehicle from the IMU sensor, combines it with the motion state of the invisible target to determine the position of the invisible target relative to the unmanned vehicle at future moments, and formulates a coping strategy for the unmanned vehicle so as to avoid colliding with the invisible target.
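The coping-strategy step can be illustrated with a one-dimensional sketch: if the predicted gap between the unmanned vehicle and the invisible target falls below a safety margin within a short horizon, the system brakes. All positions, speeds, and thresholds below are assumptions for illustration:

```python
# One-dimensional sketch: propagate both motion states over a short
# horizon and brake if the predicted gap drops below a safety margin.
def driving_decision(ego_pos, ego_v, tgt_pos, tgt_v,
                     horizon=3.0, dt=0.1, safe_gap=5.0):
    """Return 'brake' if the predicted gap falls below safe_gap, else 'keep'."""
    steps = int(horizon / dt) + 1
    for i in range(steps):
        t = i * dt
        gap = abs((tgt_pos + tgt_v * t) - (ego_pos + ego_v * t))
        if gap < safe_gap:
            return "brake"
    return "keep"

print(driving_decision(0.0, 10.0, 25.0, 0.0))   # closing on a stopped target
print(driving_decision(0.0, 10.0, 25.0, 12.0))  # target pulling away
```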
In addition, the embodiment of the invention also provides a computer readable storage medium.
The computer readable storage medium of the present invention stores a terahertz-based environment sensing program, and when executed by a processor, the terahertz-based environment sensing program implements the following operations:
acquiring first data detected by a terahertz radar and second data detected by a preset sensor;
fusing the first data and the second data to determine an invisible target in an invisible area;
determining a motion state of the invisible target based on the first data detected by the terahertz radar.
The specific embodiment of the terahertz-based environment sensing program stored on the computer storage medium of the present invention executed by the processor is basically the same as the embodiments of the terahertz-based environment sensing method described above, and details thereof are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or system that comprises it.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (8)
1. A terahertz-based environment sensing method is characterized by comprising the following steps:
acquiring first data detected by a terahertz radar and second data detected by a preset sensor, wherein the first data comprises three-dimensional point cloud data of an object, the second data is the three-dimensional point cloud data of the object in a visible area, and the terahertz radar and the preset sensor are both installed on an unmanned vehicle;
fusing the first data and the second data to determine an invisible target in an invisible area, wherein the fusing comprises: extracting a first region of interest from the first data and a second region of interest from the second data, comparing the first region of interest with the second region of interest, segmenting the invisible area from the first region of interest, and identifying the invisible target from the invisible area;
determining a motion state of the invisible target based on first data detected by the terahertz radar;
the method comprises the steps of obtaining the motion state of an unmanned vehicle, wherein the motion state of the unmanned vehicle is detected by an IMU sensor pre-installed on the unmanned vehicle;
according to the motion state of the unmanned vehicle, identifying a dynamic target in the invisible targets, predicting the motion track of the dynamic target, and making a corresponding driving decision based on the predicted motion track of the dynamic target to control the safe driving of the unmanned vehicle.
2. The terahertz-based environment sensing method of claim 1, wherein the step of identifying the invisible target from the invisible area comprises:
performing invisible target recognition on the invisible area by using a convolutional neural network based on deep learning to identify the invisible target, wherein the deep-learning-based convolutional neural network is a model trained in advance for recognizing invisible targets.
3. The terahertz-based environment sensing method of claim 2, wherein the step of performing invisible target recognition on the invisible area by using the deep-learning-based convolutional neural network to identify the invisible target comprises:
extracting the features of the invisible area to obtain a feature vector;
and inputting the feature vector into the deep-learning-based convolutional neural network to identify the invisible target.
4. The terahertz-based environment sensing method of claim 1, wherein the step of determining the motion state of the invisible target based on the first data detected by the terahertz radar comprises:
extracting speed data of the invisible target from the first data; and
determining the motion state of the invisible target based on the speed data of the invisible target.
5. The terahertz-based environment sensing method of claim 4, wherein the step of determining the motion state of the invisible target based on the speed data of the invisible target comprises:
and inputting the speed data of the invisible target into a preset mathematical model to obtain the motion state of the invisible target.
6. The terahertz-based environment sensing method of claim 5, wherein the mathematical model comprises a Kalman filter model or a particle filter model.
7. A terahertz-based environment sensing device, characterized by comprising: a memory, a processor, and a terahertz-based environment sensing program stored on the memory and executable on the processor, wherein the terahertz-based environment sensing program, when executed by the processor, implements the following steps:
acquiring first data detected by a terahertz radar and second data detected by a preset sensor, wherein the second data is three-dimensional point cloud data of an object in a visible area, and the terahertz radar and the preset sensor are both installed on an unmanned vehicle;
fusing the first data and the second data to determine an invisible target in an invisible area, wherein the fusing comprises: extracting a first region of interest from the first data and a second region of interest from the second data, comparing the first region of interest with the second region of interest, segmenting the invisible area from the first region of interest, and identifying the invisible target from the invisible area;
determining a motion state of the invisible target based on first data detected by the terahertz radar;
the method comprises the steps of obtaining the motion state of an unmanned vehicle, wherein the motion state of the unmanned vehicle is detected by an IMU sensor pre-installed on the unmanned vehicle;
according to the motion state of the unmanned vehicle, identifying a dynamic target in the invisible targets, predicting the motion track of the dynamic target, and making a corresponding driving decision based on the predicted motion track of the dynamic target to control the safe driving of the unmanned vehicle.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a terahertz-based environment sensing program which, when executed by a processor, implements the following steps:
acquiring first data detected by a terahertz radar and second data detected by a preset sensor, wherein the second data is three-dimensional point cloud data of an object in a visible area, and the terahertz radar and the preset sensor are both installed on an unmanned vehicle;
fusing the first data and the second data to determine an invisible target in an invisible area, wherein the fusing comprises: extracting a first region of interest from the first data and a second region of interest from the second data, comparing the first region of interest with the second region of interest, segmenting the invisible area from the first region of interest, and identifying the invisible target from the invisible area;
determining a motion state of the invisible target based on first data detected by the terahertz radar;
the method comprises the steps of obtaining the motion state of an unmanned vehicle, wherein the motion state of the unmanned vehicle is detected by an IMU sensor pre-installed on the unmanned vehicle;
according to the motion state of the unmanned vehicle, identifying a dynamic target in the invisible targets, predicting the motion track of the dynamic target, and making a corresponding driving decision based on the predicted motion track of the dynamic target to control the safe driving of the unmanned vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810017859.1A CN108279671B (en) | 2018-01-08 | 2018-01-08 | Terahertz-based environment sensing method and device and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108279671A CN108279671A (en) | 2018-07-13 |
CN108279671B true CN108279671B (en) | 2021-12-14 |
Family
ID=62803418
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810017859.1A Active CN108279671B (en) | 2018-01-08 | 2018-01-08 | Terahertz-based environment sensing method and device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108279671B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110068543A (en) * | 2019-03-26 | 2019-07-30 | 昆明理工大学 | A kind of tera-hertz spectra recognition methods based on transfer learning |
CN110427034B (en) * | 2019-08-13 | 2022-09-02 | 浙江吉利汽车研究院有限公司 | Target tracking system and method based on vehicle-road cooperation |
CN112346081B (en) * | 2020-10-22 | 2022-10-18 | 上海无线电设备研究所 | Data joint inversion method for terahertz and millimeter wave cloud radar |
CN112764022A (en) * | 2020-12-24 | 2021-05-07 | 珠海格力电器股份有限公司 | Public safety management method and public safety management system |
CN114274979A (en) * | 2022-01-07 | 2022-04-05 | 中国第一汽车股份有限公司 | Target attention degree grade distinguishing method and device for automatic driving and storage medium |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101014119A (en) * | 2006-02-08 | 2007-08-08 | 财团法人工业技术研究院 | Vehicle assisted monitoring apparatus and method |
CN101498654A (en) * | 2008-01-29 | 2009-08-05 | 佳能株式会社 | Inspection apparatus and inspection method by using terahertz wave |
CN102759753A (en) * | 2011-04-29 | 2012-10-31 | 同方威视技术股份有限公司 | Method and device for detecting hidden dangerous article |
CN103153729A (en) * | 2010-08-10 | 2013-06-12 | 大陆-特韦斯贸易合伙股份公司及两合公司 | Method and system for regulating driving stability |
CN103210434A (en) * | 2010-09-15 | 2013-07-17 | 大陆-特韦斯贸易合伙股份公司及两合公司 | Visual driver information and warning system for driver of motor vehicle |
CN105184271A (en) * | 2015-09-18 | 2015-12-23 | 苏州派瑞雷尔智能科技有限公司 | Automatic vehicle detection method based on deep learning |
CN106228162A (en) * | 2016-07-22 | 2016-12-14 | 王威 | A kind of quick object identification method of mobile robot based on degree of depth study |
CN106292668A (en) * | 2016-08-31 | 2017-01-04 | 上海理工大学 | Pilotless automobile system based on THz wave detection |
WO2017021813A2 (en) * | 2015-07-23 | 2017-02-09 | Mahmoud Meribout | System and method for real-time flow measurement in pipelines using thz imaging |
CN106662871A (en) * | 2014-07-02 | 2017-05-10 | Zf腓德烈斯哈芬股份公司 | Position-dependent representation of vehicle environment data on a mobile unit |
CN106716173A (en) * | 2014-09-25 | 2017-05-24 | 奥迪股份公司 | Method for operating a multiplicity of radar sensors in a motor vehicle and motor vehicle |
CN106965740A (en) * | 2017-03-27 | 2017-07-21 | 东莞市莱曼光电科技有限公司 | A kind of double-colored temperature car light and decision method that environment is judged based on black light |
CN107328472A (en) * | 2017-06-02 | 2017-11-07 | 中国科学院上海微系统与信息技术研究所 | A kind of tera-hertz spectra detection system and method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7489865B2 (en) * | 2002-02-01 | 2009-02-10 | Cubic Corporation | Integrated optical communication and range finding system and applications thereof |
CN108231094B (en) * | 2013-01-07 | 2021-07-27 | 阿森蒂亚影像有限公司 | Optical guidance system and method using mutually differentiated signal correction sensors |
CN106595631B (en) * | 2016-10-25 | 2019-08-23 | 纳恩博(北京)科技有限公司 | A kind of method and electronic equipment of avoiding barrier |
CN106599668B (en) * | 2016-12-29 | 2019-11-08 | 中国科学院长春光学精密机械与物理研究所 | A kind of target identities identifying system |
- 2018-01-08: CN application CN201810017859.1A filed; granted as CN108279671B (status: Active)
Non-Patent Citations (1)
Title |
---|
Millimeter Wave and Terahertz Technology; Hong Wei, et al.; Science China (《中国科学》); Dec. 31, 2016; Vol. 46, No. 8; pp. 1086-1107 *
Also Published As
Publication number | Publication date |
---|---|
CN108279671A (en) | 2018-07-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108279671B (en) | Terahertz-based environment sensing method and device and computer readable storage medium | |
CN108226951B (en) | Laser sensor based real-time tracking method for fast moving obstacle | |
CN107944351B (en) | Image recognition method, image recognition device and computer-readable storage medium | |
KR20190026116A (en) | Method and apparatus of recognizing object | |
US20200307589A1 (en) | Automatic lane merge with tunable merge behaviors | |
US11371851B2 (en) | Method and system for determining landmarks in an environment of a vehicle | |
KR102472075B1 (en) | System and method for supporting automatic detection service based on real-time road image and radar signal analysis results | |
US20200290611A1 (en) | Smooth transition between adaptive cruise control and cruise control using virtual vehicle | |
CN108399778A (en) | Swarm intelligence congestion reminding method, system and computer readable storage medium | |
WO2018145308A1 (en) | Filter reusing mechanism for constructing robust deep convolutional neural network | |
CN111553605A (en) | Vehicle lane change risk assessment method, device, equipment and storage medium | |
CN112445204A (en) | Object movement navigation method and device in construction site and computer equipment | |
CN114291082A (en) | Method and device for controlling a vehicle | |
KR102060286B1 (en) | Radar object detection threshold value determination method using image information and radar object information generation device using the same | |
CN110390252B (en) | Obstacle detection method and device based on prior map information and storage medium | |
CN110781730B (en) | Intelligent driving sensing method and sensing device | |
EP4160269A1 (en) | Systems and methods for onboard analysis of sensor data for sensor fusion | |
CN113611131B (en) | Vehicle passing method, device, equipment and computer readable storage medium | |
CN116823884A (en) | Multi-target tracking method, system, computer equipment and storage medium | |
CN115817466A (en) | Collision risk assessment method and device | |
CN115421512A (en) | Image detection method and device for unmanned aerial vehicle, electronic equipment and storage medium | |
CN111274336B (en) | Target track processing method and device, storage medium and electronic device | |
CN114426030B (en) | Pedestrian passing intention estimation method, device, equipment and automobile | |
CN115437366A (en) | Obstacle tracking method, device, equipment and computer readable storage medium | |
CN113076830A (en) | Environment passing area detection method and device, vehicle-mounted terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||