CN113447923A - Target detection method, device, system, electronic equipment and storage medium

Info

Publication number
CN113447923A
CN113447923A (application CN202110729545.6A)
Authority
CN
China
Prior art keywords
target
radar
point cloud
visible light
image
Prior art date
Legal status
Pending
Application number
CN202110729545.6A
Other languages
Chinese (zh)
Inventor
张经纬
王宇龙
张明
赵显�
Current Assignee
Shanghai Goldway Intelligent Transportation System Co Ltd
Original Assignee
Shanghai Goldway Intelligent Transportation System Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Goldway Intelligent Transportation System Co Ltd
Priority to CN202110729545.6A
Publication of CN113447923A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867 Combination of radar systems with cameras
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/91 Radar or analogous systems specially adapted for traffic control
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Abstract

The embodiment of the application provides a target detection method, device, system, electronic device, and storage medium. Radar point cloud data to be detected is acquired; the radar point cloud data to be detected is projected onto a two-dimensional grid plane in a specified direction to obtain the number of points in each grid; channel mapping of a visible light image is performed based on the number of points in each grid to obtain a pseudo visible light image; and the pseudo visible light image is analyzed by using a pre-trained first deep learning network to obtain the type of a target in the radar point cloud data to be detected and the position of the target in a preset plane coordinate system, so that both the type and the position of the target are obtained. The target in the radar point cloud data and the target in the visible light image are then fused, based on the position of the target in the radar point cloud data to be detected in the preset plane coordinate system and the position of the target in the visible light image in the preset plane coordinate system, so as to obtain fused target information, thereby realizing the fusion of the radar target and the visible light target.

Description

Target detection method, device, system, electronic equipment and storage medium
Technical Field
The present application relates to the field of object detection technologies, and in particular, to an object detection method, an apparatus, a system, an electronic device, and a storage medium.
Background
With the development of intelligent transportation, the requirements on data acquired by sensors are increasingly high, and the detection of traffic targets has become a key link in intelligent transportation, where a traffic target may be a motor vehicle, a non-motor vehicle, a pedestrian, or the like. In the prior art, a radar collects point clouds of traffic targets in the surrounding environment, and the point clouds are then clustered into point cloud clusters, so that the position of a target is represented by a point cloud cluster. However, when a point cloud cluster is used to represent the position of a target, neither an accurate target frame nor the type of the target can be obtained.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method, an apparatus, a system, an electronic device, and a storage medium for detecting the type of a target represented by a point cloud cluster in a radar point cloud. The specific technical scheme is as follows:
In a first aspect, an embodiment of the present application provides a target detection method based on a radar point cloud, where the method includes: acquiring radar point cloud data to be detected; projecting the radar point cloud data to be detected onto a two-dimensional grid plane in a specified direction to obtain the number of points in each grid; performing channel mapping of a visible light image based on the number of points in each grid to obtain a pseudo visible light image; and analyzing the pseudo visible light image by using a pre-trained first deep learning network to obtain the type of a radar target in the radar point cloud data to be detected and the position of the radar target in a preset plane coordinate system, where the preset plane coordinate system is the plane coordinate system in the specified direction.
In a possible implementation manner, the acquiring radar point cloud data to be detected includes: acquiring current frame radar point cloud data; and for the point cloud data of a static target in the current frame radar point cloud data, selecting, according to the frame number of the current frame radar point cloud data, radar point cloud data of corresponding frame numbers to be superimposed with the current frame radar point cloud data, so as to obtain the radar point cloud data to be detected.
In a possible implementation manner, the selecting, according to the frame number of the current frame radar point cloud data, radar point cloud data of corresponding frame numbers to be superimposed with the current frame radar point cloud data to obtain the radar point cloud data to be detected includes: for the point cloud data of a static target in the current frame radar point cloud data, obtaining the position of the point cloud data in a preset three-dimensional coordinate system; selecting the radar point cloud data of corresponding frame numbers according to the frame number of the current frame radar point cloud data; performing position compensation on the radar point cloud data of corresponding frame numbers to obtain position-compensated radar point cloud data; selecting, from the position-compensated radar point cloud data and according to the position of the point cloud data in the preset three-dimensional coordinate system, target point cloud data corresponding to the point cloud data; and superimposing the point cloud data and the target point cloud data to obtain the radar point cloud data to be detected.
In a possible embodiment, the performing channel mapping of the visible light image based on the number of points in each grid to obtain a pseudo visible light image includes: mapping the number of points in each grid to an element value of a matrix, and arranging the element values according to the arrangement positions of the grids to obtain a point cloud density matrix; and performing channel mapping of the visible light image on the point cloud density matrix to obtain the pseudo visible light image.
In a second aspect, an embodiment of the present application provides a target detection method based on a radar point cloud and a visible light image, where the method includes: acquiring a visible light image collected by a camera; for each frame of visible light image, utilizing a pre-trained second deep learning network to perform target detection to obtain image target information of an image target corresponding to the frame of visible light image, wherein for any image target, the image target information of the image target comprises the type of the image target obtained based on the visible light image and the position of the image target in an image plane coordinate system; acquiring radar point cloud data collected by a radar; for each frame of radar point cloud data, obtaining radar target information of a radar target corresponding to the frame of radar point cloud data by using the radar point cloud-based target detection method in any one of the first aspects, wherein for any one radar target, the radar target information of the radar target comprises the type of the radar target obtained based on the radar point cloud data and the position of the radar target in a preset plane coordinate system; and obtaining fusion target information according to the image target information of each image target and the radar target information of each radar target.
In one possible embodiment, the acquiring the visible light image captured by the camera includes: acquiring each visible light image respectively collected by a plurality of cameras; after the target detection is performed on each frame of visible light image by using the pre-trained second deep learning network to obtain the image target information of the image target corresponding to the frame of visible light image, the method further includes: and integrating the image target information of each image target corresponding to each frame of visible light image acquired by the plurality of cameras at the same time to obtain the image target information of each image target at the corresponding visible light image acquisition time.
In a possible implementation manner, the obtaining fusion target information according to the image target information of each image target and the radar target information of each radar target includes: aiming at the visible light image acquisition time of each frame of visible light image and the radar point cloud acquisition time of each frame of radar point cloud data, establishing an association relation between the visible light image acquisition time and the radar point cloud acquisition time according to the principle that the difference value of the acquisition times is minimum; for each pair of visible light image acquisition time and radar point cloud acquisition time with the association relationship, calculating the similarity between a radar target and an image target according to the image target information of each image target at the visible light image acquisition time and the radar target information of each radar target at the radar point cloud acquisition time, and determining the matching relationship between the radar target and the image target based on the similarity; and fusing the radar target information of the radar target with the matching relation with the image target information of the image target to obtain fused target information.
In a possible embodiment, the step of, for each pair of visible light image acquisition time and radar point cloud acquisition time having an association relationship, calculating the similarity between a radar target and an image target according to the image target information of each image target at the visible light image acquisition time and the radar target information of each radar target at the radar point cloud acquisition time, and determining a matching relationship between the radar target and the image target based on the similarity, includes: for each pair of visible light image acquisition time and radar point cloud acquisition time having an association relationship, converting the position of each image target in the image plane coordinate system at the visible light image acquisition time and the position of each radar target in the preset plane coordinate system at the radar point cloud acquisition time into the same plane coordinate system; for each pair of visible light image acquisition time and radar point cloud acquisition time having an association relationship, calculating the intersection-over-union (IoU) similarity between each image target at the visible light image acquisition time and each radar target at the radar point cloud acquisition time according to the positions of the image targets at the visible light image acquisition time and the radar targets at the radar point cloud acquisition time in the same plane coordinate system; for each pair of visible light image acquisition time and radar point cloud acquisition time having an association relationship, determining the similarity between each image target at the visible light image acquisition time and each radar target at the radar point cloud acquisition time according to the IoU similarity between the image target at the visible light image acquisition time and the radar target at the radar point cloud acquisition time; and for each pair of visible light image acquisition time and radar point cloud acquisition time having an association relationship, determining the matching relationship between the radar target and the image target according to the similarity between the image target at the visible light image acquisition time and the radar target at the radar point cloud acquisition time.
In one possible embodiment, the method further comprises: and aiming at the point cloud data of the moving target in the current frame radar point cloud data, clustering the point cloud data of the moving target to obtain a point cloud cluster of the moving target, and projecting the point cloud cluster of the moving target to a coordinate system of a visible light image to obtain fusion target information.
In a possible implementation, the cameras are four fisheye cameras respectively arranged in the front, rear, left, and right directions of the vehicle, and the radars are four corner millimeter wave radars respectively arranged at the four corners of the vehicle.
In a third aspect, an embodiment of the present application provides a target detection apparatus based on a radar point cloud, where the apparatus includes: a point cloud data acquisition module, configured to acquire radar point cloud data to be detected; a point cloud data projection module, configured to project the radar point cloud data to be detected onto a two-dimensional grid plane in a specified direction to obtain the number of points in each grid; a pseudo visible light image determining module, configured to perform channel mapping of a visible light image based on the number of points in each grid to obtain a pseudo visible light image; and a radar target detection module, configured to analyze the pseudo visible light image by using a pre-trained first deep learning network to obtain the type of a radar target in the radar point cloud data to be detected and the position of the radar target in a preset plane coordinate system, where the preset plane coordinate system is the plane coordinate system in the specified direction.
In one possible embodiment, the point cloud data obtaining module includes: the current frame data acquisition submodule is used for acquiring current frame radar point cloud data; and the static target processing submodule is used for selecting the radar point cloud data corresponding to the frame number to be superposed with the current frame radar point cloud data according to the frame number of the current frame radar point cloud data aiming at the point cloud data of the static target in the current frame radar point cloud data to obtain the to-be-detected radar point cloud data.
In a possible implementation manner, the static target processing sub-module is specifically configured to: for the point cloud data of a static target in the current frame radar point cloud data, obtain the position of the point cloud data in a preset three-dimensional coordinate system; select the radar point cloud data of corresponding frame numbers according to the frame number of the current frame radar point cloud data; perform position compensation on the radar point cloud data of corresponding frame numbers to obtain position-compensated radar point cloud data; select, from the position-compensated radar point cloud data and according to the position of the point cloud data in the preset three-dimensional coordinate system, target point cloud data corresponding to the point cloud data; and superimpose the point cloud data and the target point cloud data to obtain the radar point cloud data to be detected.
In a possible implementation manner, the pseudo visible light image determining module is specifically configured to: map the number of points in each grid to an element value of a matrix, and arrange the element values according to the arrangement positions of the grids to obtain a point cloud density matrix; and perform channel mapping of the visible light image on the point cloud density matrix to obtain the pseudo visible light image.
In a fourth aspect, an embodiment of the present application provides a target detection apparatus based on a radar point cloud and a visible light image, the apparatus includes: the visible light image acquisition module is used for acquiring a visible light image acquired by the camera; the visible light target detection module is used for performing target detection on each frame of visible light image by using a pre-trained second deep learning network to obtain image target information of an image target corresponding to the frame of visible light image, wherein for any image target, the image target information of the image target comprises the type of the image target obtained based on the visible light image and the position of the image target in an image plane coordinate system; the radar data acquisition module is used for acquiring radar point cloud data acquired by a radar; a target detection device calling module, configured to call, for each frame of radar point cloud data, any of the target detection devices based on radar point cloud of the third aspect to obtain radar target information of a radar target corresponding to the frame of radar point cloud data, where, for any radar target, the radar target information of the radar target includes a type of the radar target obtained based on the radar point cloud data and a position of the radar target in a preset planar coordinate system; and the fusion target information determining module is used for obtaining fusion target information according to the image target information of each image target and the radar target information of each radar target.
In a possible implementation manner, the visible light image obtaining module is specifically configured to: acquiring each visible light image respectively collected by a plurality of cameras; the device further comprises: and the image target integration module is used for integrating the image target information of each image target corresponding to each frame of visible light image acquired by the plurality of cameras at the same moment to obtain the image target information of each image target at the corresponding visible light image acquisition moment.
In one possible implementation, the fusion target information determining module includes: the incidence relation establishing sub-module is used for establishing the incidence relation between the visible light image acquisition time and the radar point cloud acquisition time according to the principle that the difference value of the acquisition times is minimum aiming at the visible light image acquisition time of each frame of visible light image and the radar point cloud acquisition time of each frame of radar point cloud data; the matching relation determining submodule is used for calculating the similarity between the radar target and the image target according to the image target information of each image target at the visible light image acquisition time and the radar target information of each radar target at the radar point cloud acquisition time aiming at each pair of visible light image acquisition time and radar point cloud acquisition time which have the association relation, and determining the matching relation between the radar target and the image target based on the similarity; and the target information fusion sub-module is used for fusing the radar target information of the radar target with the matching relation with the image target information of the image target to obtain fusion target information.
In a possible implementation manner, the matching relationship determining sub-module is specifically configured to: for each pair of visible light image acquisition time and radar point cloud acquisition time having an association relationship, convert the position of each image target in the image plane coordinate system at the visible light image acquisition time and the position of each radar target in the preset plane coordinate system at the radar point cloud acquisition time into the same plane coordinate system; for each pair of visible light image acquisition time and radar point cloud acquisition time having an association relationship, calculate the intersection-over-union (IoU) similarity between each image target at the visible light image acquisition time and each radar target at the radar point cloud acquisition time according to the positions of the image targets at the visible light image acquisition time and the radar targets at the radar point cloud acquisition time in the same plane coordinate system; for each pair of visible light image acquisition time and radar point cloud acquisition time having an association relationship, determine the similarity between each image target at the visible light image acquisition time and each radar target at the radar point cloud acquisition time according to the IoU similarity between the image target at the visible light image acquisition time and the radar target at the radar point cloud acquisition time; and for each pair of visible light image acquisition time and radar point cloud acquisition time having an association relationship, determine the matching relationship between the radar target and the image target according to the similarity between the image target at the visible light image acquisition time and the radar target at the radar point cloud acquisition time.
In a possible embodiment, the apparatus further comprises: and the moving target projection module is used for clustering the point cloud data of the moving target in the current frame radar point cloud data to obtain a point cloud cluster of the moving target, and projecting the point cloud cluster of the moving target to a coordinate system of a visible light image to obtain fusion target information.
In a possible implementation, the cameras are four fisheye cameras respectively arranged in the front, rear, left, and right directions of the vehicle, and the radars are four corner millimeter wave radars respectively arranged at the four corners of the vehicle.
In a fifth aspect, an embodiment of the present application provides an object detection system, including:
radar, camera, and computing device;
the camera is used for acquiring visible light images;
the radar, radar point cloud data for acquisition
The computing device is configured to implement the target detection method according to any of the present application at runtime.
In a sixth aspect, an embodiment of the present application provides an electronic device, including a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to implement the target detection method according to any one of the present applications when executing the program stored in the memory.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements any target detection method described in the present application.
The embodiment of the application has the following beneficial effects:
the target detection method, the target detection device, the target detection system, the electronic equipment and the storage medium, which are provided by the embodiment of the application, are used for acquiring point cloud data of a radar to be detected; projecting the point cloud data of the radar to be detected onto a two-dimensional grid plane in a specified direction to obtain the number of points in each grid; performing channel mapping of the visible light images based on the number of the middle points of each grid to obtain pseudo visible light images; and analyzing the pseudo visible light image by using a pre-trained first deep learning network to obtain the type of the target in the radar point cloud data to be detected and the position of the target in a preset plane coordinate system. And acquiring a pseudo visible light image by adopting a raster processing mode for the radar point cloud, and detecting and tracking the target on the basis of the pseudo visible light image to obtain the type and the position of the target and obtain a target frame of the target. And aiming at the static target, the point cloud with a complete target can be obtained in a multi-frame superposition mode, so that a complete target frame can be obtained, and the position description of the target is facilitated.
In addition, a visible light image collected by the camera can be acquired; for each frame of visible light image, utilizing a pre-trained second deep learning network to perform target detection to obtain image target information of an image target corresponding to the frame of visible light image, wherein for any image target, the image target information of the image target comprises the type of the image target obtained based on the visible light image and the position of the image target in an image plane coordinate system; and obtaining fusion target information according to the image target information of each image target and the radar target information of each radar target. The fusion target information may include a fusion target type and a fusion target position. Because the radar has the advantage of accurate positioning, the position of the fusion target can be obtained based on the position of the radar target in the radar point cloud data in the preset plane coordinate system, and the type of the image target in the visible light image is more reliable, so that the type of the image target is used as the type of the fusion target.
Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of a target detection method based on radar point cloud according to an embodiment of the present application;
fig. 2 is a schematic diagram of a possible implementation manner of step S101 in the embodiment of the present application;
fig. 3 is a first schematic diagram of a target detection method based on a radar point cloud and a visible light image according to an embodiment of the present disclosure;
FIG. 4 is a second schematic diagram of a target detection method based on a radar point cloud and a visible light image according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a possible implementation manner of step S205 in the embodiment of the present application;
FIG. 6 is a schematic diagram of the installation positions of the radar and the camera in the embodiment of the present application;
FIG. 7 is a third schematic diagram of a target detection method based on a radar point cloud and a visible light image according to an embodiment of the present disclosure;
FIG. 8 is a fourth schematic diagram of a target detection method based on a radar point cloud and a visible light image according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the description herein are intended to be within the scope of the present disclosure.
In order to detect the type of a target represented by a point cloud cluster in a radar point cloud, an embodiment of the present application provides a target detection method based on the radar point cloud, and referring to fig. 1, the method includes:
s101, point cloud data of the radar to be detected are obtained.
The target detection method based on the radar point cloud can be realized through electronic equipment, and the electronic equipment can be a smart phone, a computer, a circuit board, a system-on-chip or the like.
The radar point cloud data to be detected can be original point cloud data directly acquired by a radar or point cloud data obtained by clustering the original point cloud data. In a possible implementation, referring to fig. 2, the acquiring radar point cloud data to be detected includes:
s1011, acquiring current frame radar point cloud data.
The current frame radar point cloud data can be point cloud data acquired by one radar or point cloud data acquired by a plurality of radars, and in one example, each frame of radar point cloud data acquired by the plurality of radars at the current moment is acquired; and fusing each frame of radar point cloud data acquired at the current moment to obtain the current frame of radar point cloud data. Specifically, each frame of radar point cloud data acquired by each radar at the same time can be converted to the same coordinate system for fusion, so that the current frame of radar point cloud data is obtained.
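As an illustrative sketch (not part of the original disclosure), the fusion of the frames collected by several radars at the same moment can be written as follows, assuming each radar's extrinsic rotation R and translation t with respect to a common coordinate system are known from calibration; all names are hypothetical.

```python
import numpy as np

def fuse_radar_frames(frames, extrinsics):
    """frames: list of (N_i, 3) point arrays, one per radar;
    extrinsics: list of (R, t) pairs, R a 3x3 rotation, t a 3-vector."""
    fused = []
    for points, (R, t) in zip(frames, extrinsics):
        fused.append(points @ R.T + t)   # move this radar's points into the common coordinate system
    return np.vstack(fused)              # concatenated current-frame radar point cloud
```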
And S1012, aiming at the point cloud data of the static target in the current frame radar point cloud data, selecting radar point cloud data corresponding to the frame number to be superposed with the current frame radar point cloud data according to the frame number of the current frame radar point cloud data, and obtaining the point cloud data of the radar to be detected.
A stationary target is a target that is stationary in the world coordinate system, while the radar that collects the radar point cloud data may move relative to the world coordinate system. In one example, whether point cloud data corresponds to a stationary target may be determined according to the moving speed V1 of the radar and the Doppler speed V2 of the point cloud data: specifically, if |V1-V2| is less than a preset speed threshold, the point cloud data is regarded as point cloud data of a stationary target; otherwise, it is regarded as point cloud data of a moving target.
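A minimal sketch of this stationary/moving split, assuming per-point Doppler speeds are available; the threshold value below is a hypothetical example, not a value given in the disclosure.

```python
import numpy as np

SPEED_THRESHOLD = 0.5  # m/s, hypothetical preset speed threshold

def split_static_moving(doppler_speeds, radar_speed):
    """doppler_speeds: (N,) per-point Doppler speeds V2; radar_speed: radar moving speed V1."""
    is_static = np.abs(radar_speed - np.asarray(doppler_speeds)) < SPEED_THRESHOLD
    return is_static, ~is_static   # boolean masks for stationary and moving points
```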
In another example, the point cloud data in each frame of radar point cloud data can be converted into the world coordinate system by combining it with the world-coordinate positioning of the radar, and the point cloud data in the current frame of radar point cloud data is matched with the point cloud data in the previous frame of radar point cloud data to obtain point cloud data belonging to the same target; if the position error of the point cloud data of the same target is within a preset error range, the target is considered to be a stationary target, otherwise it is considered to be a moving target.
For point cloud data of a stationary object, its position in the world coordinate system should be invariant. The radar point cloud data corresponding to the frame number is selected according to the frame number of the radar point cloud data of the current frame, for example, radar point cloud data of N (N is a positive integer) frames before the current frame can be selected, the selected radar point cloud data and the radar point cloud data of the current frame can be overlapped under a world coordinate system, and a point cloud cluster which is a static target after being overlapped is obtained and serves as radar point cloud data to be detected.
In a possible implementation manner, the selecting, according to the frame number of the current frame radar point cloud data, the radar point cloud data corresponding to the frame number to be overlaid with the current frame radar point cloud data to obtain radar point cloud data to be detected, where the selecting includes:
the method comprises the steps of firstly, aiming at point cloud data of a static target in current frame radar point cloud data, obtaining the position of the point cloud data in a preset three-dimensional coordinate system.
The preset three-dimensional coordinate system may be selected according to actual conditions, for example, the preset three-dimensional coordinate system may be a world coordinate system, and the preset three-dimensional coordinate system may be a radar coordinate system of the current frame radar point cloud data.
And step two, selecting the radar point cloud data corresponding to the frame number according to the frame number of the current frame radar point cloud data.
For example, if the frame number of the current frame radar point cloud data is M, each frame radar point cloud data with frame numbers from M-N to M-1 may be selected, where N is a positive integer smaller than M.
And step three, performing position compensation on the radar point cloud data corresponding to the frame number to obtain radar point cloud data after position compensation.
In one example, the radar for collecting the radar point cloud data is installed on a vehicle, and according to the speed and yaw rate of the vehicle, the position difference between the position (historical position) for collecting the radar point cloud data corresponding to the frame number and the position for collecting the radar point cloud data of the current frame (current position) is calculated, and according to the position difference, the radar point cloud data corresponding to the frame number is subjected to position compensation, so that the radar point cloud data after the position compensation is obtained.
In one example, the position difference between the position (historical position) of the radar point cloud data collected corresponding to the frame number and the position of the radar point cloud data collected in the current frame (current position) can be obtained by combining the positioning of the radar in the world coordinate, and the radar point cloud data corresponding to the frame number is subjected to position compensation according to the position difference, so that the radar point cloud data after the position compensation is obtained.
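The position compensation based on vehicle speed and yaw rate could look like the following sketch, under the simplifying assumption of planar, constant-velocity motion between the historical frame and the current frame; the function and variable names are illustrative.

```python
import numpy as np

def compensate_frame(points_xy, v, w, dt):
    """points_xy: (N, 2) point positions in the vehicle frame at the historical
    acquisition time; v: vehicle speed; w: yaw rate; dt: time gap to the current
    frame. Returns the points expressed in the current vehicle frame."""
    dtheta = w * dt                              # heading change over dt
    if abs(w) > 1e-6:
        dx = v / w * np.sin(dtheta)              # arc-model displacement of the ego vehicle
        dy = v / w * (1.0 - np.cos(dtheta))
    else:
        dx, dy = v * dt, 0.0                     # straight-line motion
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])
    # Subtract the ego displacement, then rotate into the current heading.
    return (np.asarray(points_xy) - np.array([dx, dy])) @ R
```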
And step four, selecting target point cloud data corresponding to the point cloud data from the radar point cloud data after position compensation according to the position of the point cloud data in a preset three-dimensional coordinate system.
For example, both the current frame radar point cloud data and the position-compensated radar point cloud data may be converted into the preset three-dimensional coordinate system, and the point cloud data of the position-compensated radar point cloud data at the corresponding position (hereinafter referred to as the target point cloud data) is obtained according to the position of the point cloud data of the static target in the preset three-dimensional coordinate system.
And step five, overlapping the point cloud data and the target point cloud data to obtain the point cloud data of the radar to be detected.
And superposing the point cloud data of the static target and the target point cloud data corresponding to the point cloud data to obtain the radar point cloud data to be detected.
S102, projecting the radar point cloud data to be detected onto a two-dimensional grid plane in a specified direction to obtain the number of points in each grid.
The specified direction can be set as needed according to the actual situation. In one example, the specified direction can be the horizontal direction: the radar point cloud data to be detected is three-dimensional data and can be projected onto a horizontal plane, and then, according to a preset grid size, the corresponding area on the horizontal plane (which may be the projection area of the radar point cloud data to be detected) is divided into grids; the corresponding area may also be divided into M × N grids according to a preset number of grids (e.g., M × N). The number of points in each grid is then counted.
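A possible sketch of this projection-and-counting step, assuming the horizontal direction is specified; the grid extent and cell size are illustrative assumptions.

```python
import numpy as np

def grid_point_counts(points_xyz, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), cell=0.2):
    """points_xyz: (N, 3) radar point cloud to be detected; returns a 2-D array
    whose elements are the numbers of points falling into each grid cell."""
    x_edges = np.arange(x_range[0], x_range[1] + cell, cell)
    y_edges = np.arange(y_range[0], y_range[1] + cell, cell)
    counts, _, _ = np.histogram2d(points_xyz[:, 0], points_xyz[:, 1], bins=[x_edges, y_edges])
    return counts
```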
S103, channel mapping of the visible light image is performed based on the number of points in each grid, and a pseudo visible light image is obtained.
A pseudo visible light image refers to data having the same data structure as a visible light image; it is not captured through optical imaging, but is obtained by converting point cloud data into the data structure of a visible light image through grid projection. In one example, each grid can be regarded as a pixel, and the number of points in the grid is mapped to the channel value of the corresponding pixel in the visible light image, thereby obtaining the pseudo visible light image. The channel type of the visible light image is not limited here and may be RGB channels, YUV channels, a gray scale channel, or the like, all of which fall within the protection scope of the present application. Taking RGB channels as an example, a preset Color Bar list may be used for the mapping, where the Color Bar list records a mapping relationship between RGB color values and a specified single value (for example, the number of points in a grid).
In a possible embodiment, the performing channel mapping of the visible light image based on the number of points in each grid to obtain a pseudo visible light image includes:
step one, mapping the number of the middle points of each grid into each element value of the matrix, and arranging each element value according to the arrangement position of each grid to obtain the point cloud density matrix.
For example, M × N grids may be mapped to an M × N point cloud density matrix, one grid corresponds to one element, and the element value is the number of points in the corresponding grid, thereby obtaining the point cloud density matrix.
And step two, performing channel mapping of the visible light image on the point cloud density matrix to obtain a pseudo visible light image.
In one example, an element in the point cloud density matrix corresponds to a pixel in the visible light image, and the value of the element serves as the channel value of that pixel.
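A minimal sketch of turning the point cloud density matrix into a pseudo visible light image; here a simple linear gray mapping replicated over three channels stands in for the Color Bar list mentioned above, and the normalization constant is an assumption.

```python
import numpy as np

def density_to_pseudo_image(density, max_count=32):
    """density: (H, W) point cloud density matrix; returns an (H, W, 3) uint8 image."""
    norm = np.clip(np.asarray(density, dtype=float) / max_count, 0.0, 1.0)  # scale counts to [0, 1]
    gray = (norm * 255).astype(np.uint8)
    return np.stack([gray, gray, gray], axis=-1)   # replicate into the three channels of a visible light image
```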
And S104, analyzing the pseudo visible light image by using a pre-trained first deep learning network to obtain the type of a radar target in the radar point cloud data to be detected and the position of the radar target in a preset plane coordinate system, wherein the preset plane coordinate system is the plane coordinate system in the specified direction.
The first deep learning network may be RCNN or YOLO, and the first deep learning network is used to analyze the pseudo visible light image, so as to obtain the type of the target and the position of the target in the preset planar coordinate system. The preset plane coordinate system can be selected in a self-defined manner according to actual conditions, and can be a vertical plane coordinate system or a horizontal plane coordinate system and the like. In one possible embodiment, the predetermined planar coordinate system is a horizontal plane coordinate system, such as a latitude and longitude coordinate system.
In one example, the process of pre-training the first deep learning network may include:
step 1, obtaining a plurality of sample pseudo visible light images marked with the types and positions of the targets, wherein the obtaining mode of the sample pseudo visible light images can refer to the obtaining mode of the pseudo visible light images, and details are not repeated here.
And 2, selecting a sample pseudo visible light image, inputting the sample pseudo visible light image into the first deep learning network for analysis, and obtaining the type and the position of the predicted target.
And 3, calculating the loss of the first deep learning network according to the type and the position of the target marked by the selected sample pseudo visible light image and the type and the position of the predicted target.
And 4, adjusting parameters of the first deep learning network according to the loss of the first deep learning network, returning to the step 2 to continue executing until the loss of the first deep learning network converges or reaches the preset training times, and obtaining the pre-trained first deep learning network.
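A schematic of steps 1 to 4 written as a training loop, assuming a PyTorch-style detector; `detector`, `detection_loss`, and `dataloader` are hypothetical placeholders for the first deep learning network, its detection loss, and the labeled sample pseudo visible light images.

```python
import torch

def train_first_network(detector, detection_loss, dataloader, epochs=50, lr=1e-3):
    optimizer = torch.optim.Adam(detector.parameters(), lr=lr)
    for _ in range(epochs):                                    # or stop once the loss converges
        for pseudo_images, gt_types, gt_boxes in dataloader:   # step 2: select labeled samples
            pred = detector(pseudo_images)                     # predict target type and position
            loss = detection_loss(pred, gt_types, gt_boxes)    # step 3: compute the network loss
            optimizer.zero_grad()
            loss.backward()                                    # step 4: adjust network parameters
            optimizer.step()
    return detector
```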
In the embodiment of the application, the radar point cloud is subjected to grid processing to obtain the pseudo-visible light image, and target detection and tracking are performed on the basis, so that the type and the position of a target can be obtained, and a target frame of the target can be obtained. And aiming at the static target, the point cloud with a complete target can be obtained in a multi-frame superposition mode, so that a complete target frame can be obtained, and the position description of the target is facilitated.
The embodiment of the application also provides a target detection method based on radar point cloud and visible light image, referring to fig. 3, the method includes:
s201, acquiring a visible light image collected by a camera.
The target detection method based on the radar point cloud and the visible light image can be realized through electronic equipment, and the electronic equipment can be a smart phone, a computer, a circuit board, a system-on-chip or the like.
And S202, for each frame of visible light image, performing target detection by using a pre-trained second deep learning network to obtain image target information of an image target corresponding to the frame of visible light image, wherein for any image target, the image target information of the image target comprises the type of the image target obtained based on the visible light image and the position of the image target in an image plane coordinate system.
The second deep learning network may be RCNN or YOLO, and the training process of the second deep learning network may refer to the training process of the target detection network in the related art, which is not described herein again.
The target in the visible light image is called an image target, the image plane coordinate system is a plane coordinate system in the designated direction, and the origin of the image plane coordinate system and the origin of the preset plane coordinate system can be the same or different. And obtaining the type of the image target in the visible light image and the position of the image target in a visible light image coordinate system by utilizing a second deep learning network, and then obtaining the position of the image target in the image plane coordinate system according to the conversion relation between the visible light image coordinate system and the image plane coordinate system.
S203, acquiring radar point cloud data collected by a radar.
And S204, aiming at each frame of radar point cloud data, obtaining radar target information of a radar target corresponding to the frame of radar point cloud data by using any radar point cloud-based target detection method, wherein aiming at any radar target, the radar target information of the radar target comprises the type of the radar target obtained based on the radar point cloud data and the position of the radar target in a preset plane coordinate system.
And S205, obtaining fusion target information according to the image target information of each image target and the radar target information of each radar target.
And matching the image target and the radar target according to the image target information and the radar target information, and associating the successfully matched targets as the same fusion target so as to obtain fusion target information.
In one example, image targets and radar targets of the same type may be matched according to their positions, so as to obtain a matching result.
In one example, the fusion target information may include a fusion target type and a fusion target location. Because the radar has the advantage of accurate positioning, the position of the fusion target can be obtained based on the position of the radar target in the radar point cloud data in the preset plane coordinate system, and the type of the image target in the visible light image is more reliable, so that the type of the image target is used as the type of the fusion target.
In one possible embodiment, referring to fig. 4, the acquiring of the visible light image captured by the camera comprises:
s2011, acquiring visible light images respectively acquired by a plurality of cameras;
Considering that the shooting angle of each camera is limited, in an actual scene, in order to obtain images covering a more comprehensive scene, a plurality of cameras can be used to collect visible light images respectively; the shooting ranges of the cameras may overlap or be completely different, both of which fall within the protection scope of the present application.
After the target detection is performed on each frame of visible light image by using the pre-trained second deep learning network to obtain the image target information of the image target corresponding to the frame of visible light image, the method further includes:
and S206, integrating the image target information of each image target corresponding to each frame of visible light image acquired by the plurality of cameras at the same time to obtain the visible light image target information of each image target at the corresponding visible light image acquisition time.
And integrating the image target information of each image target in the visible light images at different angles acquired by each camera at any visible light image acquisition time, so as to obtain the visible light image target information of the image target integrated at multiple angles at the visible light image acquisition time. For example, the four cameras respectively acquire a visible light image 1, a visible light image 2, a visible light image 3 and a visible light image 4 at a time a, wherein the visible light image 1 includes an image target a, the visible light image 2 does not include the image target, the visible light image 3 does not include the image target, the visible light image 4 includes an image target b and an image target c, and the image target a, the image target b and the image target c are included at the visible light image acquisition time a through integration.
In the embodiment of the application, the acquisition area and the angle of the image target can be increased by integrating the visible light image target information of the image target in the visible light images acquired by the plurality of cameras, and the requirement of large-angle acquisition can be met.
In a possible implementation manner, referring to fig. 5, the obtaining the fusion target information according to the image target information of each image target and the radar target information of each radar target includes:
and S2051, establishing an association relation between the visible light image acquisition time and the radar point cloud acquisition time according to the principle that the difference value of the acquisition times is minimum aiming at the visible light image acquisition time of each frame of visible light image and the radar point cloud acquisition time of each frame of radar point cloud data.
Frame rates of the camera and the radar may be different, so that image target information and radar target information need to be aligned in time, and an association relationship between a visible light image acquisition time and a radar point cloud acquisition time is established. In one example, the acquisition time of the device (camera or radar) with the higher frame rate may be used as a reference, the acquisition time of another device is used for calculating the difference, and the association relationship is established according to the principle that the difference between the acquisition times is the smallest.
For example, the timestamps of the visible light image acquisition time are respectively: 40 milliseconds (visible light image acquisition time A), 80 milliseconds (visible light image acquisition time B), 120 milliseconds (visible light image acquisition time C), 160 milliseconds (visible light image acquisition time D), 200 milliseconds (visible light image acquisition time E), the timestamp of radar point cloud acquisition time is respectively: 50 milliseconds (radar point cloud collection time a), 100 milliseconds (radar point cloud collection time b), 150 milliseconds (radar point cloud collection time c) and 200 milliseconds (radar point cloud collection time d); establishing an association relationship between a visible light image acquisition time A and a radar point cloud acquisition time a, establishing an association relationship between a visible light image acquisition time B and a radar point cloud acquisition time B, establishing an association relationship between a visible light image acquisition time C and a radar point cloud acquisition time B, establishing an association relationship between a visible light image acquisition time D and a radar point cloud acquisition time C, and establishing an association relationship between a visible light image acquisition time E and a radar point cloud acquisition time D.
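The minimum-difference association can be sketched as follows (an illustrative sketch, assuming the camera has the higher frame rate as in the example above; timestamps are in milliseconds):

```python
import numpy as np

def associate_timestamps(image_times, radar_times):
    """Pair each visible light image acquisition time with the radar point cloud
    acquisition time whose difference is smallest."""
    radar_times = np.asarray(radar_times, dtype=float)
    pairs = []
    for t_img in image_times:
        idx = int(np.argmin(np.abs(radar_times - t_img)))   # minimum acquisition-time difference
        pairs.append((t_img, float(radar_times[idx])))
    return pairs

# With the example above, associate_timestamps([40, 80, 120, 160, 200], [50, 100, 150, 200])
# yields the pairs (40, 50), (80, 100), (120, 100), (160, 150), (200, 200).
```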
And S2052, for each pair of visible light image acquisition time and radar point cloud acquisition time with the association relationship, calculating the similarity between the radar target and the image target according to the image target information of each image target at the visible light image acquisition time and the radar target information of each radar target at the radar point cloud acquisition time, and determining the matching relationship between the radar target and the image target based on the similarity.
The visible light image acquisition time and the radar point cloud acquisition time that have an association relationship are regarded as the same time, and the matching relationship between the radar target and the image target at the same time can be determined based on the Hungarian algorithm, a shortest path algorithm, or the like.
And S2053, fusing the radar target information of the radar target with the matching relation with the image target information of the image target to obtain fused target information.
In the embodiment of the application, the radar target and the image target are aligned in the time dimension, and the obtained fusion target information is more accurate.
In a possible embodiment, the step of calculating a similarity between a radar target and an image target according to image target information of each image target at the visible light image acquisition time and radar target information of each radar target at the radar point cloud acquisition time, and determining a matching relationship between the radar target and the image target based on the similarity, for each pair of visible light image acquisition time and radar point cloud acquisition time having an association relationship, includes:
and step A, for each pair of visible light image acquisition time and radar point cloud acquisition time with the association relationship, converting the position of each image target in an image plane coordinate system at the visible light image acquisition time and the position of each radar target in a preset plane coordinate system at the radar point cloud acquisition time to the same plane coordinate system.
The origin of the image plane coordinate system and the origin of the predetermined plane coordinate system may be different, that is, they may not be the same coordinate system, and therefore, it is necessary to convert the position of the image target and the position of the radar target to the same plane coordinate system. The same plane coordinate system may be an image plane coordinate system or a predetermined plane coordinate system, or may be another plane coordinate system. In one example, the cameras are four fisheye cameras, and the radars are four-corner millimeter wave radars, as shown in fig. 6, the four fisheye cameras are respectively disposed in the front, rear, left, and right directions of the vehicle, and the four-corner millimeter wave radars are respectively disposed at four corners of the vehicle, so that the same plane coordinate system may be a horizontal plane coordinate system with the center of the rear axle of the vehicle as an origin.
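As a sketch, converting target positions into such a common plane coordinate system is a 2-D rigid transform; the yaw angle and offset below stand for calibration parameters and are assumptions.

```python
import numpy as np

def to_common_frame(positions_xy, yaw, offset_xy):
    """positions_xy: (N, 2) target positions in a source plane coordinate system;
    yaw/offset_xy: rotation and translation of that system relative to the common
    coordinate system (e.g., one centered on the vehicle's rear axle)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return np.asarray(positions_xy) @ R.T + np.asarray(offset_xy)   # rotate, then translate
```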
And B, for each pair of visible light image acquisition time and radar point cloud acquisition time having an association relationship, calculating the intersection-over-union (IoU) similarity between each image target at the visible light image acquisition time and each radar target at the radar point cloud acquisition time according to the positions of the image targets at the visible light image acquisition time and the radar targets at the radar point cloud acquisition time in the same plane coordinate system.
For a pair consisting of an image target and a radar target, the IoU similarity is the intersection-over-union of the two, that is, the area of overlap between the image target detection frame and the radar target detection frame divided by the area of their union.
Step C: for each pair of visible light image acquisition time and radar point cloud acquisition time with the association relationship, determining the similarity between the image target at the visible light image acquisition time and the radar target at the radar point cloud acquisition time according to the IoU similarity between the image target at the visible light image acquisition time and the radar target at the radar point cloud acquisition time.
When the similarity between an image target and a radar target is determined, the IoU similarity may be combined with other measures, such as the shape similarity and the distance between the image target and the radar target; all such combinations fall within the protection scope of the present application.
Step D: for each pair of visible light image acquisition time and radar point cloud acquisition time with the association relationship, determining the matching relationship between the radar target and the image target according to the similarity between the image target at the visible light image acquisition time and the radar target at the radar point cloud acquisition time.
The similarities between the image targets and the radar targets are solved by a shortest path algorithm, the Hungarian algorithm, or the like to obtain the matching relationship between the radar targets and the image targets. For example, when the Hungarian algorithm is used for matching, the similarity between an image target and a radar target needs to be converted into a difference between them to serve as the cost of the Hungarian algorithm, where the difference is inversely related to the similarity; for a pair of image target and radar target, the difference may be taken as 1 minus the similarity.
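For step A above, the conversion to a common plane coordinate system can be realized, for example, as a 2D rigid transform obtained from calibration. The following is a minimal sketch assuming that each source already provides positions as 2D points in its own plane coordinate system and that calibration supplies a yaw angle and a translation into a vehicle plane frame centered at the rear axle; the function name, parameter names, and numeric values are illustrative, not part of the patent.

```python
import numpy as np

def to_vehicle_plane(points_xy, yaw_rad, t_xy):
    """Map 2D positions from a source plane coordinate system into a common
    vehicle plane coordinate system (origin at the rear-axle center) using a
    2D rigid transform; yaw_rad and t_xy are assumed to come from calibration."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    R = np.array([[c, -s], [s, c]])
    return points_xy @ R.T + np.asarray(t_xy)

# Example: hypothetical image targets (already projected to the camera's ground
# plane) and radar targets, each expressed in its own sensor plane frame.
image_targets = np.array([[2.0, 1.5], [5.0, -0.5]])
radar_targets = np.array([[1.2, 0.3], [4.1, -1.8]])

image_in_vehicle = to_vehicle_plane(image_targets, yaw_rad=0.02, t_xy=[1.8, 0.0])
radar_in_vehicle = to_vehicle_plane(radar_targets, yaw_rad=np.pi / 4, t_xy=[3.6, 0.9])
```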
In the embodiment of the application, besides the distance between the image target and the radar target, the IoU similarity is introduced to match the image target with the radar target, which can improve the matching precision between the image target and the radar target and finally improve the precision of target detection.
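Steps B to D above can be tied together as follows. This is a minimal sketch assuming axis-aligned top-view detection frames given as (x_min, y_min, x_max, y_max) and using only the IoU similarity with cost = 1 - similarity; the helper names and the minimum-similarity threshold are assumptions, and a real implementation could also mix in the shape and distance terms described above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned top-view boxes
    given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_targets(image_boxes, radar_boxes, min_similarity=0.1):
    """Build a similarity matrix, convert it to a cost (1 - similarity),
    and solve the assignment with the Hungarian algorithm."""
    sim = np.array([[iou(ib, rb) for rb in radar_boxes] for ib in image_boxes])
    cost = 1.0 - sim
    rows, cols = linear_sum_assignment(cost)
    # Keep only pairs whose similarity is high enough to be a plausible match.
    return [(i, j) for i, j in zip(rows, cols) if sim[i, j] >= min_similarity]
```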
In a possible implementation manner, referring to fig. 7, obtaining, by using any one of the above radar point cloud-based target detection methods, the radar target information of the radar target corresponding to each frame of radar point cloud data includes:
S1011, acquiring current frame radar point cloud data;
S1012, aiming at the point cloud data of a static target in the current frame radar point cloud data, selecting radar point cloud data corresponding to a frame number to be superposed with the current frame radar point cloud data according to the frame number of the current frame radar point cloud data to obtain radar point cloud data to be detected;
S102, projecting the radar point cloud data to be detected onto a two-dimensional grid plane in a specified direction to obtain the number of points in each grid;
S103, performing channel mapping of the visible light image based on the number of points in each grid to obtain a pseudo visible light image;
and S104, analyzing the pseudo visible light image by using a pre-trained first deep learning network to obtain the type of a radar target in the radar point cloud data to be detected and the position of the radar target in a preset plane coordinate system, wherein the preset plane coordinate system is the plane coordinate system in the specified direction.
The method further comprises: S207, for the point cloud data of the moving target in the current frame radar point cloud data, clustering the point cloud data of the moving target to obtain a point cloud cluster of the moving target, and projecting the point cloud cluster of the moving target into the coordinate system of the visible light image to obtain fused target information.
Unlike static targets, matching of moving targets is mainly performed on the visible light image. Similarly, time alignment is required before matching. The difference is that the position of the radar target needs to be converted into the coordinate system of the visible light image, that is, the radar target is projected into the visible light image.
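The projection of a radar target into the visible light image can be sketched as follows under a simple pinhole model; the patent's fisheye cameras would additionally require a fisheye distortion model, so this is only an illustrative sketch, and the calibration matrices and point values are invented for the example.

```python
import numpy as np

def project_points_to_image(points_xyz, R, t, K):
    """Project 3D radar points (vehicle frame) into pixel coordinates using a
    simple pinhole model; a real fisheye camera would need its own distortion
    model, so this is only an illustrative sketch."""
    cam = (R @ points_xyz.T + t.reshape(3, 1)).T   # vehicle frame -> camera frame
    cam = cam[cam[:, 2] > 0]                        # keep points in front of the camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                   # perspective division

# Hypothetical calibration values for illustration only.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1.2])
cluster = np.array([[5.0, 0.5, 0.0], [5.2, 0.6, 0.3]])
pixels = project_points_to_image(cluster, R, t, K)
```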
In one example, the predetermined planar coordinate system is a horizontal plane coordinate system. A static target (namely a static radar target) obtained based on the radar point cloud and an image target obtained based on the visible light image both have accurate top view detection frames, and matching is carried out in the horizontal plane coordinate system. Let the number of static targets be M, expressed as {R_1, R_2, ..., R_M}, and the number of image targets be N, expressed as {V_1, V_2, ..., V_N}. A preset matching algorithm is used to obtain the matching relationship {{R_i, V_j}_k} between the image targets and the static targets. Here, a Global Nearest Neighbor (GNN) association matrix is adopted: a cost matrix is constructed according to the position and speed similarity between the static targets and the image targets, and the optimal matching result is obtained by solving with the Hungarian algorithm.
Moving targets in the radar point cloud data are in cluster form and do not have accurate top view detection frames, so matching needs to be performed on the visible light image. According to the camera and radar calibration parameters, the point cloud cluster of the moving target is projected onto the visible light image, and the correspondence with the target frames in the visible light image is searched to obtain the matching result.
In the prior art scheme for matching and associating a visible light image with a radar point cloud, the visible light image and the radar point cloud are projected to the same coordinate system by means of sensor calibration parameters, and simple threshold processing is carried out; projection parameter errors and unreasonable prior sizes restrict the matching precision. In the embodiment of the present application, different matching strategies are adopted according to the characteristics of the radar detection targets. For static targets, static target detection is carried out on the basis of multi-frame superposition and rasterization to replace the traditional target detection based on radar tracks, and matching between the static targets and the visible light targets is realized based on the pseudo visible light image and the visible light image, so that the association requirements under complex conditions are met and the positioning performance of the targets is improved through fusion. Specifically, the contours (sizes) of a static target and an image target under the top view are complete, so matching is performed according to the position and speed similarity and the optimal solution is obtained by using the Hungarian algorithm. The target contour (size) of a moving target obtained based on the radar point cloud data is incomplete, so conventional matching is performed by using the projection information of the clustering points.
In one example, the camera is a fisheye camera and the radar is a corner millimeter-wave radar. The following description takes vehicle-mounted fisheye cameras and corner millimeter-wave radars as an example. In one possible implementation, the cameras are four fisheye cameras and the radars are four corner millimeter-wave radars; the four fisheye cameras are respectively arranged in the front, rear, left, and right directions of the vehicle, and the four corner millimeter-wave radars are respectively arranged at the four corners of the vehicle.
The arrangement of the fisheye cameras and the corner millimeter-wave radars may be as shown in fig. 6, and the target detection method based on the radar point cloud and the visible light image may be as shown in fig. 8, including:
Step one, acquiring camera input: fisheye images collected by the 4 vehicle-mounted cameras.
Step two, acquiring corner millimeter-wave radar input: original radar point clouds collected by the 4 corner millimeter-wave radars.
Step three, image preprocessing: target detection is carried out on the fisheye images of the cameras to obtain attributes such as the target type and the fisheye image target frame. Then, projection between the image coordinate system and the vehicle coordinate system is carried out by using the camera calibration parameters to obtain the target frames in the vehicle coordinate system. The results of the 4 camera views are fused to obtain the final result.
Step four, radar preprocessing: the radar point cloud is converted into the vehicle coordinate system by using the radar calibration parameters. Dynamic/static judgment is carried out first; multi-frame superposition and rasterization are performed on the static radar point cloud data, and then target detection and tracking are performed to obtain the top view detection frames of static targets; the moving radar point cloud data are clustered to obtain clustering information of the moving point cloud data.
In one example, a static target and a moving target may be distinguished by judging whether a millimeter-wave point moves relative to the vehicle. The vehicle speed is decomposed along the installation normal direction of the corner radar to obtain a speed V1, and the Doppler speed of the point is V2; the absolute value of V1 - V2 is calculated, and the point is considered to belong to a static target if the absolute value is less than a certain threshold, otherwise to a moving target.
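A minimal sketch of this dynamic/static judgment is given below, assuming the vehicle speed, the radar installation yaw relative to the driving direction, and the point's Doppler speed are available; the threshold value and the sign convention of the Doppler speed are assumptions that depend on the radar's output convention.

```python
import numpy as np

def is_stationary(point_doppler_v2, vehicle_speed, radar_normal_yaw, threshold=0.5):
    """Decompose the vehicle speed onto the radar's installation normal (V1),
    compare it with the point's Doppler speed (V2), and treat the point as
    belonging to a static target if |V1 - V2| is below a threshold.
    The threshold value here is an assumed example."""
    v1 = vehicle_speed * np.cos(radar_normal_yaw)  # component along the radar normal
    return abs(v1 - point_doppler_v2) < threshold

# Example: ego speed 10 m/s, radar normal rotated 45 degrees from the driving direction.
print(is_stationary(point_doppler_v2=7.0, vehicle_speed=10.0, radar_normal_yaw=np.pi / 4))
```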
Multi-frame superposition is performed for static targets to form a dense point cloud. For example, the point cloud data of M frames before the current time may be taken, the motion displacement and rotation angle of the vehicle are compensated so that the points are aligned to the vehicle position at the current time, and superposed multi-frame point cloud data is obtained. The horizontal plane of the vehicle coordinate system is divided into M x N grids at intervals of D, and the number of superposed points falling into each grid is counted to form a point cloud density matrix A. Visible light image channel mapping is applied to A to generate the pseudo visible light image. Based on the pseudo visible light image, the type of the target and the target frame (the target frame is used for representing the position of the target) are obtained by using the trained model. Multi-target tracking processing is performed on the static targets obtained by target detection to obtain stable top view detection frames.
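A minimal sketch of the rasterization and pseudo visible light image generation is shown below. It assumes the superposed points have already been motion-compensated into the current vehicle frame, and it uses one possible channel mapping (normalizing counts to 0-255 and repeating them over three channels); the grid ranges, cell size, and channel mapping are assumptions, since the patent does not fix a specific mapping.

```python
import numpy as np

def pseudo_visible_image(stacked_points_xy, x_range, y_range, d):
    """Rasterize superposed static points into a grid of cell size d, count the
    points per cell to form a density matrix A, then map the counts into the
    three channels of a pseudo visible light image."""
    m = int((x_range[1] - x_range[0]) / d)
    n = int((y_range[1] - y_range[0]) / d)
    ix = ((stacked_points_xy[:, 0] - x_range[0]) / d).astype(int)
    iy = ((stacked_points_xy[:, 1] - y_range[0]) / d).astype(int)
    keep = (ix >= 0) & (ix < m) & (iy >= 0) & (iy < n)
    density = np.zeros((m, n), dtype=np.float32)            # point cloud density matrix A
    np.add.at(density, (ix[keep], iy[keep]), 1.0)
    scaled = np.clip(density / max(density.max(), 1.0) * 255.0, 0, 255).astype(np.uint8)
    return np.stack([scaled] * 3, axis=-1)                  # M x N x 3 pseudo visible image

# Example: random points standing in for several superposed frames.
pts = np.random.uniform(low=[-10, -10], high=[10, 10], size=(500, 2))
image = pseudo_visible_image(pts, x_range=(-10, 10), y_range=(-10, 10), d=0.2)
```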
For moving targets, multi-frame compensation cannot be performed, so only single-frame processing is carried out. Target clustering is performed by using a clustering method such as DBSCAN according to the positions, speeds, and other information of the points to obtain point cloud clusters.
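A minimal clustering sketch using DBSCAN from scikit-learn is shown below; the feature weighting between position and Doppler speed and the eps/min_samples values are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Each moving point: (x, y, doppler_speed); scaling the speed column is an assumed
# choice so that position and speed contribute comparably to the distance metric.
moving_points = np.array([
    [4.0, 1.0, 5.1], [4.2, 1.1, 5.0], [4.1, 0.9, 5.2],   # likely one moving object
    [12.0, -3.0, -2.0], [12.2, -3.1, -2.1],              # another moving object
])
features = moving_points * np.array([1.0, 1.0, 0.5])      # weight speed less than position
labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(features)

# Group points into clusters; label -1 marks noise points that belong to no cluster.
clusters = {lbl: moving_points[labels == lbl] for lbl in set(labels) if lbl != -1}
```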
Step five, matching the image targets and the radar targets to lay the foundation for target fusion. First, the image information and the radar information need to be time-synchronized to reduce offset. Different association strategies are adopted for targets in different motion states. Static targets are matched in the top view: similarity measurement is carried out by using the position information of the detection frames of the static targets and the detection frames of the image targets to obtain matching pairs of image targets and static radar targets. Moving targets are matched on the fisheye image: the point cloud clusters are projected onto the fisheye image by using the calibration parameters of the radar and the camera and compared with the positions of the target detection frames in the fisheye image to obtain the matching relationship.
In one example, the association matching module matching process may include:
step A: and respectively acquiring information such as a target detection frame, speed and the like obtained based on the fisheye image and the radar point cloud.
And B: and judging whether the data of the radar and the camera are synchronous in time. There is a frame rate difference between the radar and camera sensors, and generally, the frame rate of the radar is low. Therefore, if the time is not synchronized, it is necessary to go to step C; if synchronized in time, go to step D.
And C: and re-inputting data information of the radar and the camera.
Step D: and judging the motion state of the static target, and adopting different matching strategies. And E is executed if the target is a static target, and F is executed if the target is a moving target.
Step E: the radar static target and the image target both have accurate top view detection frames, and matching is performed under the top view. Let the number of stationary targets be M, expressed as
Figure BDA0003139589640000191
The image objects are N, and are represented as
Figure BDA0003139589640000192
The matching algorithm is to search the matching relation between the image and the static object { { R { (R)i,Vj}k}. And constructing a cost matrix by adopting the GNN incidence matrix according to the position and speed similarity between the static target and the image target, and solving by using a Hungarian algorithm to obtain an optimal matching result.
Step F: the moving target of the radar is in a clustering form, an accurate top view detection frame is not provided, and matching needs to be carried out under a fisheye image. And projecting the motion radar clustering points to the fisheye image according to camera and radar calibration parameters, and searching the corresponding relation with the fisheye image frame to obtain a matching result.
Step G: outputting radar and image relation { { R after correlation matchingi,Vj}k}。
Step six, post-process the matched image targets and radar targets: perform single-frame fusion and multi-frame association, perform state filtering, and fuse information such as the target type and detection frame from the single-frame image and the radar.
Single-frame fusion: for an associated static target and image target, the radar top view detection frame is trusted directly because the radar has the advantage of accurate positioning, while the target type in the fisheye image is more reliable and is therefore trusted directly. For an associated moving target and image target, the top view detection frame derived from the image is corrected by using the key points of the target in the fisheye image and the radar clustering center; the rest is handled in the same way as for static targets. For unassociated targets, the detection result of the fisheye image or of the radar alone is trusted.
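A minimal sketch of the single-frame fusion rule for associated static targets is shown below: the top view detection frame is taken from the radar and the type from the image, while unmatched detections pass through unchanged. The dictionary field names are hypothetical.

```python
def fuse_single_frame(matched_pairs, unmatched_radar, unmatched_image):
    """Fuse matched static radar/image targets: take the position (top-view
    detection frame) from the radar and the type from the image; pass unmatched
    detections through unchanged."""
    fused = []
    for radar_t, image_t in matched_pairs:
        fused.append({
            "box_top_view": radar_t["box_top_view"],  # radar positioning is trusted
            "type": image_t["type"],                  # image classification is trusted
        })
    fused.extend(unmatched_radar)
    fused.extend(unmatched_image)
    return fused

# Example with one matched pair.
pair = ({"box_top_view": (4.8, 1.0, 9.3, 2.8), "type": "unknown"},
        {"box_top_view": (4.5, 0.9, 9.5, 3.0), "type": "car"})
print(fuse_single_frame([pair], unmatched_radar=[], unmatched_image=[]))
```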
Multi-frame association: the target tracks are associated with the detection results obtained after single-frame fusion, for example, an image track is associated with an image-radar matched measurement or an image measurement, and an image-radar matched track is associated with an image measurement, a radar measurement, or an image-radar matched measurement.
State estimation: according to the result of the multi-frame association, the target state of the track is filtered, position changes are predicted, pose information is smoothed, and the final fusion result is output.
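The patent does not specify the filter used for state filtering and position prediction; a constant-velocity Kalman filter over the state [x, y, vx, vy] is one common choice and is sketched below as an assumed example, with illustrative noise parameters.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter over the state [x, y, vx, vy];
    one possible realization of the state filtering / position prediction step,
    not the specific filter used by the patent."""
    def __init__(self, x0, p0=1.0, q=0.1, r=0.5):
        self.x = np.array(x0, dtype=float)   # state estimate
        self.P = np.eye(4) * p0              # state covariance
        self.Q = np.eye(4) * q               # process noise
        self.R = np.eye(2) * r               # measurement noise
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)

    def predict(self, dt):
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                      [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        return self.x[:2]                    # predicted position

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

# Example: predict one step ahead, then correct with a fused position measurement.
kf = ConstantVelocityKF(x0=[5.0, 1.0, 2.0, 0.0])
kf.predict(dt=0.1)
kf.update(z=[5.25, 1.02])
```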
And step seven, outputting fusion target information containing types and positioning information.
The embodiment of the application also provides a target detection device based on radar point cloud, the device includes:
the point cloud data acquisition module is used for acquiring point cloud data of the radar to be detected;
the point cloud data projection module is used for projecting the point cloud data of the radar to be detected onto a two-dimensional grid plane in a specified direction to obtain the number of points in each grid;
the pseudo visible light image determining module is used for performing channel mapping of the visible light image based on the number of points in each grid to obtain a pseudo visible light image;
and the radar target detection module is used for analyzing the pseudo visible light image by utilizing a pre-trained first deep learning network to obtain the type of a radar target in the radar point cloud data to be detected and the position of the radar target in a preset plane coordinate system, wherein the preset plane coordinate system is the plane coordinate system in the specified direction.
In one possible embodiment, the point cloud data obtaining module includes:
the current frame data acquisition submodule is used for acquiring current frame radar point cloud data;
and the static target processing submodule is used for selecting the radar point cloud data corresponding to the frame number to be superposed with the current frame radar point cloud data according to the frame number of the current frame radar point cloud data aiming at the point cloud data of the static target in the current frame radar point cloud data to obtain the to-be-detected radar point cloud data.
In a possible implementation manner, the static object processing sub-module is specifically configured to: aiming at the point cloud data of a static target in the current frame radar point cloud data, obtaining the position of the point cloud data in a preset three-dimensional coordinate system; selecting radar point cloud data corresponding to the frame number according to the frame number of the radar point cloud data of the current frame; performing position compensation on the radar point cloud data corresponding to the frame number to obtain radar point cloud data after the position compensation; selecting target point cloud data corresponding to the point cloud data from the radar point cloud data after position compensation according to the position of the point cloud data in a preset three-dimensional coordinate system; and overlapping the point cloud data and the target point cloud data to obtain the point cloud data of the radar to be detected.
In a possible implementation manner, the pseudo visible light image determining module is specifically configured to: map the number of points in each grid to an element value of a matrix, and arrange the element values according to the arrangement positions of the grids to obtain a point cloud density matrix; and perform channel mapping of the visible light image on the point cloud density matrix to obtain a pseudo visible light image.
The embodiment of the application also provides a target detection device based on radar point cloud and visible light image, the device includes:
the visible light image acquisition module is used for acquiring a visible light image acquired by the camera;
the visible light target detection module is used for performing target detection on each frame of visible light image by using a pre-trained second deep learning network to obtain image target information of an image target corresponding to the frame of visible light image, wherein for any image target, the image target information of the image target comprises the type of the image target obtained based on the visible light image and the position of the image target in an image plane coordinate system;
the radar data acquisition module is used for acquiring radar point cloud data acquired by a radar;
the target detection device calling module is used for calling, for each frame of radar point cloud data, any one of the above radar point cloud-based target detection devices to obtain radar target information of the radar target corresponding to the frame of radar point cloud data, wherein for any radar target, the radar target information of the radar target comprises the type of the radar target obtained based on the radar point cloud data and the position of the radar target in a preset plane coordinate system;
and the fusion target information determining module is used for obtaining fusion target information according to the image target information of each image target and the radar target information of each radar target.
In a possible implementation manner, the visible light image obtaining module is specifically configured to: acquiring each visible light image respectively collected by a plurality of cameras;
the device further comprises:
and the image target integration module is used for integrating the image target information of each image target corresponding to each frame of visible light image acquired by the plurality of cameras at the same moment to obtain the image target information of each image target at the corresponding visible light image acquisition moment.
In one possible implementation, the fusion target information determining module includes:
the incidence relation establishing sub-module is used for establishing the incidence relation between the visible light image acquisition time and the radar point cloud acquisition time according to the principle that the difference value of the acquisition times is minimum aiming at the visible light image acquisition time of each frame of visible light image and the radar point cloud acquisition time of each frame of radar point cloud data;
the matching relation determining submodule is used for calculating the similarity between the radar target and the image target according to the image target information of each image target at the visible light image acquisition time and the radar target information of each radar target at the radar point cloud acquisition time aiming at each pair of visible light image acquisition time and radar point cloud acquisition time which have the association relation, and determining the matching relation between the radar target and the image target based on the similarity;
and the target information fusion sub-module is used for fusing the radar target information of the radar target with the matching relation with the image target information of the image target to obtain fusion target information.
In a possible implementation manner, the matching relationship determining submodule is specifically configured to: for each pair of visible light image acquisition time and radar point cloud acquisition time with the association relationship, convert the position of each image target in the image plane coordinate system at the visible light image acquisition time and the position of each radar target in the preset plane coordinate system at the radar point cloud acquisition time into the same plane coordinate system; for each pair of visible light image acquisition time and radar point cloud acquisition time with the association relationship, calculate the intersection-over-union (IoU) similarity between the image target at the visible light image acquisition time and the radar target at the radar point cloud acquisition time according to the positions, in the same plane coordinate system, of the image targets at the visible light image acquisition time and the radar targets at the radar point cloud acquisition time; for each pair of visible light image acquisition time and radar point cloud acquisition time with the association relationship, determine the similarity between the image target at the visible light image acquisition time and the radar target at the radar point cloud acquisition time according to the IoU similarity between them; and for each pair of visible light image acquisition time and radar point cloud acquisition time with the association relationship, determine the matching relationship between the radar target and the image target according to the similarity between the image target at the visible light image acquisition time and the radar target at the radar point cloud acquisition time.
In a possible embodiment, the apparatus further comprises: and the moving target projection module is used for clustering the point cloud data of the moving target in the current frame radar point cloud data to obtain a point cloud cluster of the moving target, and projecting the point cloud cluster of the moving target to a coordinate system of a visible light image to obtain fusion target information.
In a possible implementation mode, the cameras are four fisheye cameras and the radars are four corner millimeter-wave radars; the four fisheye cameras are respectively arranged in the front, rear, left, and right directions of the vehicle, and the four corner millimeter-wave radars are respectively arranged at the four corners of the vehicle.
An embodiment of the present application further provides a target detection system, including:
radar, camera, and computing device;
the camera is used for acquiring visible light images;
the radar, radar point cloud data for acquisition
The computing device is used for realizing any one of the target detection methods during operation.
An embodiment of the present application further provides an electronic device, including: a processor and a memory;
the memory is used for storing computer programs;
the processor is configured to implement any of the above object detection methods when executing the computer program stored in the memory.
Optionally, referring to fig. 9, the electronic device according to the embodiment of the present application further includes a communication interface 22 and a communication bus 24, where the processor 21, the communication interface 22, and the memory 23 complete communication with each other through the communication bus 24.
The communication bus mentioned in the electronic device may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a RAM (Random Access Memory) or an NVM (Non-Volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements any of the above object detection methods.
In yet another embodiment provided by the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the object detection methods of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It should be noted that, in this document, the technical features in the various alternatives can be combined to form the scheme as long as the technical features are not contradictory, and the scheme is within the scope of the disclosure of the present application. Relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for embodiments of the apparatus, the system, the electronic device, the computer program product, and the storage medium, since they are substantially similar to the method embodiments, the description is relatively simple, and for relevant points, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (15)

1. A target detection method based on radar point cloud, which is characterized by comprising the following steps:
acquiring point cloud data of a radar to be detected;
projecting the radar point cloud data to be detected onto a two-dimensional grid plane in a specified direction to obtain the number of points in each grid;
performing channel mapping of the visible light image based on the number of points in each grid to obtain a pseudo visible light image;
and analyzing the pseudo visible light image by utilizing a pre-trained first deep learning network to obtain the type of a radar target in the radar point cloud data to be detected and the position of the radar target in a preset plane coordinate system, wherein the preset plane coordinate system is the plane coordinate system in the specified direction.
2. The method of claim 1, wherein the acquiring the radar point cloud data to be detected comprises:
acquiring current frame radar point cloud data;
and aiming at the point cloud data of the static target in the current frame radar point cloud data, selecting the radar point cloud data with the corresponding frame number to be superposed with the current frame radar point cloud data according to the frame number of the current frame radar point cloud data to obtain the point cloud data of the radar to be detected.
3. The method according to claim 2, wherein the step of selecting, according to the frame number of the current frame radar point cloud data, the radar point cloud data corresponding to the frame number to be overlaid with the current frame radar point cloud data for the point cloud data of the stationary target in the current frame radar point cloud data to obtain the radar point cloud data to be detected comprises:
aiming at the point cloud data of a static target in the current frame radar point cloud data, obtaining the position of the point cloud data in a preset three-dimensional coordinate system;
selecting radar point cloud data corresponding to the frame number according to the frame number of the radar point cloud data of the current frame;
performing position compensation on the radar point cloud data corresponding to the frame number to obtain radar point cloud data after the position compensation;
selecting target point cloud data corresponding to the point cloud data from the radar point cloud data after position compensation according to the position of the point cloud data in a preset three-dimensional coordinate system;
and overlapping the point cloud data and the target point cloud data to obtain the point cloud data of the radar to be detected.
4. The method of claim 1, wherein performing channel mapping of the visible light image based on the number of points in each grid to obtain a pseudo visible light image comprises:
mapping the number of points in each grid to an element value of a matrix, and arranging the element values according to the arrangement positions of the grids to obtain a point cloud density matrix;
and performing channel mapping of the visible light image on the point cloud density matrix to obtain a pseudo visible light image.
5. A target detection method based on radar point cloud and visible light images is characterized by comprising the following steps:
acquiring a visible light image collected by a camera;
for each frame of visible light image, utilizing a pre-trained second deep learning network to perform target detection to obtain image target information of an image target corresponding to the frame of visible light image, wherein for any image target, the image target information of the image target comprises the type of the image target obtained based on the visible light image and the position of the image target in an image plane coordinate system;
acquiring radar point cloud data collected by a radar;
for each frame of radar point cloud data, obtaining radar target information of a radar target corresponding to the frame of radar point cloud data by using the radar point cloud-based target detection method as claimed in any one of claims 1 to 4, wherein for any radar target, the radar target information of the radar target comprises the type of the radar target obtained based on the radar point cloud data and the position of the radar target in a preset plane coordinate system;
and obtaining fusion target information according to the image target information of each image target and the radar target information of each radar target.
6. The method of claim 5, wherein the acquiring visible light images captured by a camera comprises:
acquiring each visible light image respectively collected by a plurality of cameras;
after the target detection is performed on each frame of visible light image by using the pre-trained second deep learning network to obtain the image target information of the image target corresponding to the frame of visible light image, the method further includes:
and integrating the image target information of each image target corresponding to each frame of visible light image acquired by the plurality of cameras at the same time to obtain the image target information of each image target at the corresponding visible light image acquisition time.
7. The method according to claim 5 or 6, wherein obtaining the fused target information according to the image target information of each image target and the radar target information of each radar target comprises:
aiming at the visible light image acquisition time of each frame of visible light image and the radar point cloud acquisition time of each frame of radar point cloud data, establishing an association relation between the visible light image acquisition time and the radar point cloud acquisition time according to the principle that the difference value of the acquisition times is minimum;
for each pair of visible light image acquisition time and radar point cloud acquisition time with the association relationship, calculating the similarity between a radar target and an image target according to the image target information of each image target at the visible light image acquisition time and the radar target information of each radar target at the radar point cloud acquisition time, and determining the matching relationship between the radar target and the image target based on the similarity;
and fusing the radar target information of the radar target with the matching relation with the image target information of the image target to obtain fused target information.
8. The method of claim 7, wherein the step of calculating the similarity between the radar target and the image target according to the image target information of each image target at the visible light image acquisition time and the radar target information of each radar target at the radar point cloud acquisition time, and determining the matching relationship between the radar target and the image target based on the similarity, for each pair of visible light image acquisition time and radar point cloud acquisition time having an association relationship, comprises:
for each pair of visible light image acquisition time and radar point cloud acquisition time which have an association relationship, converting the position of each image target in an image plane coordinate system at the visible light image acquisition time and the position of each radar target in a preset plane coordinate system at the radar point cloud acquisition time into the same plane coordinate system;
for each pair of visible light image acquisition time and radar point cloud acquisition time with the association relationship, calculating the intersection-over-union similarity between the image target at the visible light image acquisition time and the radar target at the radar point cloud acquisition time according to the positions of the image targets at the visible light image acquisition time and the radar targets at the radar point cloud acquisition time in the same plane coordinate system;
for each pair of visible light image acquisition time and radar point cloud acquisition time with the association relationship, determining the similarity between an image target at the visible light image acquisition time and a radar target at the radar point cloud acquisition time according to the intersection-over-union similarity between the image target at the visible light image acquisition time and the radar target at the radar point cloud acquisition time;
and for each pair of visible light image acquisition time and radar point cloud acquisition time with the association relationship, determining the matching relationship between the radar target and the image target according to the similarity between the image target at the visible light image acquisition time and the radar target at the radar point cloud acquisition time.
9. The method of claim 5, further comprising:
And aiming at the point cloud data of the moving target in the current frame radar point cloud data, clustering the point cloud data of the moving target to obtain a point cloud cluster of the moving target, and projecting the point cloud cluster of the moving target to a coordinate system of a visible light image to obtain fusion target information.
10. The method according to claim 5, wherein the cameras are four fisheye cameras, and the radars are four-corner millimeter wave radars, the four fisheye cameras being respectively disposed in four directions, front, rear, left, and right, of a vehicle, the four-corner millimeter wave radars being respectively disposed at four corners of the vehicle.
11. An apparatus for object detection based on a radar point cloud, the apparatus comprising:
the point cloud data acquisition module is used for acquiring point cloud data of the radar to be detected;
the point cloud data projection module is used for projecting the point cloud data of the radar to be detected onto a two-dimensional grid plane in a specified direction to obtain the number of points in each grid;
the pseudo visible light image determining module is used for performing channel mapping of the visible light image based on the number of points in each grid to obtain a pseudo visible light image;
and the radar target detection module is used for analyzing the pseudo visible light image by utilizing a pre-trained first deep learning network to obtain the type of a radar target in the radar point cloud data to be detected and the position of the radar target in a preset plane coordinate system, wherein the preset plane coordinate system is the plane coordinate system in the specified direction.
12. A target detection device based on radar point cloud and visible light image, characterized in that the device comprises:
the visible light image acquisition module is used for acquiring a visible light image acquired by the camera;
the visible light target detection module is used for performing target detection on each frame of visible light image by using a pre-trained second deep learning network to obtain image target information of an image target corresponding to the frame of visible light image, wherein for any image target, the image target information of the image target comprises the type of the image target obtained based on the visible light image and the position of the image target in an image plane coordinate system;
the radar data acquisition module is used for acquiring radar point cloud data acquired by a radar;
a target detection device calling module, configured to call the radar point cloud-based target detection device according to claim 11 for each frame of radar point cloud data to obtain radar target information of a radar target corresponding to the frame of radar point cloud data, where, for any radar target, the radar target information of the radar target includes a type of the radar target obtained based on the radar point cloud data and a position of the radar target in a preset planar coordinate system;
and the fusion target information determining module is used for obtaining fusion target information according to the image target information of each image target and the radar target information of each radar target.
13. An object detection system, comprising:
radar, camera, and computing device;
the camera is used for acquiring visible light images;
the radar is used for acquiring radar point cloud data;
the computing device is used for implementing the method of any one of claims 1-10 when running.
14. An electronic device comprising a processor and a memory;
the memory is used for storing a computer program;
the processor, when executing the program stored in the memory, implementing the method of any of claims 1-10.
15. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 10.
CN202110729545.6A 2021-06-29 2021-06-29 Target detection method, device, system, electronic equipment and storage medium Pending CN113447923A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110729545.6A CN113447923A (en) 2021-06-29 2021-06-29 Target detection method, device, system, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110729545.6A CN113447923A (en) 2021-06-29 2021-06-29 Target detection method, device, system, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113447923A true CN113447923A (en) 2021-09-28

Family

ID=77814183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110729545.6A Pending CN113447923A (en) 2021-06-29 2021-06-29 Target detection method, device, system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113447923A (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101697006A (en) * 2009-09-18 2010-04-21 北京航空航天大学 Target identification method based on data fusion of airborne radar and infrared imaging sensor
CN106485690A (en) * 2015-08-25 2017-03-08 南京理工大学 Cloud data based on a feature and the autoregistration fusion method of optical image
CN108229366A (en) * 2017-12-28 2018-06-29 北京航空航天大学 Deep learning vehicle-installed obstacle detection method based on radar and fusing image data
US20190206123A1 (en) * 2017-12-29 2019-07-04 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for fusing point cloud data
CN110221603A (en) * 2019-05-13 2019-09-10 浙江大学 A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud
CN111191582A (en) * 2019-12-27 2020-05-22 深圳市越疆科技有限公司 Three-dimensional target detection method, detection device, terminal device and computer-readable storage medium
CN111339876A (en) * 2020-02-19 2020-06-26 北京百度网讯科技有限公司 Method and device for identifying types of regions in scene
CN111353512A (en) * 2018-12-20 2020-06-30 长沙智能驾驶研究院有限公司 Obstacle classification method, obstacle classification device, storage medium and computer equipment
CN111723721A (en) * 2020-06-15 2020-09-29 中国传媒大学 Three-dimensional target detection method, system and device based on RGB-D
CN111856445A (en) * 2019-04-11 2020-10-30 杭州海康威视数字技术股份有限公司 Target detection method, device, equipment and system
CN111860493A (en) * 2020-06-12 2020-10-30 北京图森智途科技有限公司 Target detection method and device based on point cloud data
CN111985349A (en) * 2020-07-30 2020-11-24 河海大学 Radar received signal type classification and identification method and system
CN112149550A (en) * 2020-09-21 2020-12-29 华南理工大学 Automatic driving vehicle 3D target detection method based on multi-sensor fusion
CN112287859A (en) * 2020-11-03 2021-01-29 北京京东乾石科技有限公司 Object recognition method, device and system, computer readable storage medium
US10929694B1 (en) * 2020-01-22 2021-02-23 Tsinghua University Lane detection method and system based on vision and lidar multi-level fusion
CN112561966A (en) * 2020-12-22 2021-03-26 清华大学 Sparse point cloud multi-target tracking method fusing spatio-temporal information
CN112580561A (en) * 2020-12-25 2021-03-30 上海高德威智能交通系统有限公司 Target detection method and device, electronic equipment and storage medium
WO2021072710A1 (en) * 2019-10-17 2021-04-22 深圳市大疆创新科技有限公司 Point cloud fusion method and system for moving object, and computer storage medium

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114264355A (en) * 2021-11-18 2022-04-01 河南讯飞智元信息科技有限公司 Weight detection method, weight detection device, electronic equipment and storage medium
WO2023115412A1 (en) * 2021-12-22 2023-06-29 华为技术有限公司 Target recognition method and device
CN114723715A (en) * 2022-04-12 2022-07-08 小米汽车科技有限公司 Vehicle target detection method, device, equipment, vehicle and medium
CN114723715B (en) * 2022-04-12 2023-09-19 小米汽车科技有限公司 Vehicle target detection method, device, equipment, vehicle and medium
CN114943943A (en) * 2022-05-16 2022-08-26 中国电信股份有限公司 Target track obtaining method, device, equipment and storage medium
CN114943943B (en) * 2022-05-16 2023-10-03 中国电信股份有限公司 Target track obtaining method, device, equipment and storage medium
TWI837854B (en) * 2022-05-19 2024-04-01 鈺立微電子股份有限公司 Depth processing system and operational method thereof
CN114972490A (en) * 2022-07-29 2022-08-30 苏州魔视智能科技有限公司 Automatic data labeling method, device, equipment and storage medium
CN114972490B (en) * 2022-07-29 2022-12-20 苏州魔视智能科技有限公司 Automatic data labeling method, device, equipment and storage medium
CN115421122A (en) * 2022-08-30 2022-12-02 北京京东乾石科技有限公司 Target object detection method and device, electronic equipment and readable storage medium
WO2024051025A1 (en) * 2022-09-07 2024-03-14 劢微机器人科技(深圳)有限公司 Pallet positioning method, device, and equipment, and readable storage medium
CN115657012A (en) * 2022-12-23 2023-01-31 深圳佑驾创新科技有限公司 Matching method, device and equipment of image target and radar target and storage medium
CN117251748A (en) * 2023-10-10 2023-12-19 中国船舶集团有限公司第七〇九研究所 Track prediction method, equipment and storage medium based on historical rule mining
CN117251748B (en) * 2023-10-10 2024-04-19 中国船舶集团有限公司第七〇九研究所 Track prediction method, equipment and storage medium based on historical rule mining
CN118155038A (en) * 2024-05-11 2024-06-07 中国第一汽车股份有限公司 Multi-target track detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN113447923A (en) Target detection method, device, system, electronic equipment and storage medium
CN113421305B (en) Target detection method, device, system, electronic equipment and storage medium
CN105335955B (en) Method for checking object and object test equipment
CN113012215B (en) Space positioning method, system and equipment
CN111738071B (en) Inverse perspective transformation method based on motion change of monocular camera
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN112634369A (en) Space and or graph model generation method and device, electronic equipment and storage medium
CN112634368A (en) Method and device for generating space and OR graph model of scene target and electronic equipment
CN113034586B (en) Road inclination angle detection method and detection system
CN115345905A (en) Target object tracking method, device, terminal and storage medium
CN115900712B (en) Combined positioning method for evaluating credibility of information source
CN117593650B (en) Moving point filtering vision SLAM method based on 4D millimeter wave radar and SAM image segmentation
CN112130153A (en) Method for realizing edge detection of unmanned vehicle based on millimeter wave radar and camera
CN117132649A (en) Ship video positioning method and device for artificial intelligent Beidou satellite navigation fusion
CN105303554A (en) Image feature point 3D reconstruction method and device
CN111862208B (en) Vehicle positioning method, device and server based on screen optical communication
CN116630216A (en) Target fusion method, device, equipment and storage medium based on radar and image
CN112001247B (en) Multi-target detection method, equipment and storage device
CN115542271A (en) Radar coordinate and video coordinate calibration method, equipment and related device
CN109919998B (en) Satellite attitude determination method and device and terminal equipment
CN117554949B (en) Linkage type target relay tracking method and system
CN116681884B (en) Object detection method and related device
CN117611800A (en) YOLO-based target grounding point detection and ranging method
Lin Real‐Time Multitarget Tracking for Panoramic Video Based on Dual Neural Networks for Multisensor Information Fusion
CN117115248A (en) Event positioning method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination