CN114067556B - Environment sensing method, device, server and readable storage medium - Google Patents


Info

Publication number
CN114067556B
Authority
CN
China
Prior art keywords
base station
target
information
detection result
target detection
Prior art date
Legal status
Active
Application number
CN202010778358.2A
Other languages
Chinese (zh)
Other versions
CN114067556A (en)
Inventor
关喜嘉
王邓江
王亚军
邓永强
Current Assignee
Beijing Wanji Technology Co Ltd
Original Assignee
Beijing Wanji Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Wanji Technology Co Ltd
Priority claimed from application CN202010778358.2A
Publication of CN114067556A
Application granted
Publication of CN114067556B
Legal status: Active

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G08G1/052 Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/029 Location-based management or tracking services
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The application relates to an environment sensing method, an environment sensing device, a server and a readable storage medium. The method comprises the following steps: respectively acquiring single base station sensing data of each roadside base station, and performing space-time synchronization processing on the single base station sensing data of each roadside base station according to calibration parameters of the multi-base station system; acquiring a target detection result of each roadside base station based on the single base station sensing data after the space-time synchronization processing; and mapping the target detection result of each roadside base station to a global scene to generate perception information under the global scene, the global scene being determined based on the perception range of the multi-base station system. In the method, the multi-base station system covers the detection range of the entire traffic scene, and the perception information of the global scene is generated from the single base station sensing data of the individual roadside base stations, so that the perception information of the entire traffic scene is obtained and the sensing range is greatly extended.

Description

Environment sensing method, device, server and readable storage medium
Technical Field
The present application relates to the field of base station technologies, and in particular, to an environment sensing method, an environment sensing apparatus, a server, and a readable storage medium.
Background
In the current traffic field, it is usually necessary to monitor road data, for example to check whether a tracked vehicle is driving in violation of regulations, so as to reduce the workload of traffic-duty personnel. With the continuous development of base station technology, sensors in base stations such as cameras, laser radars and millimeter-wave radars offer high resolution, good concealment and strong resistance to active interference, and are therefore widely applied to road monitoring.
In the conventional technology, a roadside base station collects point cloud data or image data within a certain range at fixed time intervals, and target detection and analysis are performed on these data to achieve road monitoring.
However, a single base station in the conventional technology can only detect targets within its own detection area and cannot cover the entire traffic road scene, which limits the range of the sensed environment.
Disclosure of Invention
In view of the above, it is necessary to provide an environment sensing method, apparatus, server and readable storage medium for solving the problem of the limited range of sensed environments in the conventional art.
In a first aspect, this embodiment provides an environment sensing method, which is applied to a multi-base station system, where the multi-base station system includes a plurality of roadside base stations, and the method includes:
respectively acquiring single base station sensing data of each road side base station, and performing space-time synchronization processing on the single base station sensing data of each road side base station according to calibration parameters of a multi-base station system;
acquiring target detection results of the road side base stations based on the single base station sensing data after the time-space synchronization processing;
mapping the target detection result of each road side base station to a global scene to generate perception information under the global scene; the global scene is determined based on the perception range of the multi-base station system.
In one embodiment, the obtaining of the target detection result of each roadside base station based on the single base station sensing data after the time-space synchronization processing includes:
if the roadside base stations have overlapping perception areas, performing data enhancement processing on the single base station sensing data corresponding to the overlapping perception areas to obtain enhanced single base station sensing data;
and processing the enhanced single base station sensing data by using a target detection algorithm to obtain a target detection result of each road side base station.
In one embodiment, the perception information in the global scene comprises a target moving track in the global scene;
mapping the target detection result of each road side base station to a global scene to generate perception information under the global scene, wherein the perception information comprises:
performing association matching on the target detection result mapped to the global scene and the previous target detection result to obtain a target moving track in the global scene; the previous target detection result comprises a target detection result corresponding to a time before the current time.
In one embodiment, the target detection result comprises the position of the target, the speed of the target and the course angle of the target, and the previous target detection result also comprises the prediction information of the target; performing association matching on the target detection result mapped to the global scene and the previous target detection result to obtain a target movement track under the global scene, wherein the method comprises the following steps:
calculating the position and the direction of the corresponding target after the preset time according to the target detection result of each road side base station and the relative position between each road side base station to obtain the prediction information of each target;
and performing correlation matching on target detection results in the global scene according to the prediction information of each target to obtain a target moving track in the global scene.
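The prediction-and-association steps above (advance each target along its detected speed and course angle for a preset time, then match predictions to new detections) can be sketched with a simple constant-velocity model. This is an illustrative sketch only: the patent does not fix a motion model or a matching rule, so the constant-velocity assumption, the distance gate, and all names below are assumptions.

```python
import math

def predict_position(x, y, speed, heading_deg, dt):
    """Constant-velocity prediction: advance a target along its course
    angle (degrees from the +x axis, an assumed convention) for dt seconds."""
    heading = math.radians(heading_deg)
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt)

def match_detection(pred, detections, gate=3.0):
    """Associate a prediction with the nearest detection within a
    distance gate (3 m here is an illustrative choice); returns the
    index of the matched detection, or None when nothing matches."""
    best, best_d = None, gate
    for i, (dx, dy) in enumerate(detections):
        d = math.hypot(dx - pred[0], dy - pred[1])
        if d < best_d:
            best, best_d = i, d
    return best
```

A matched index extends the target's track; `None` triggers the new-target / missed-detection handling described in the following embodiments.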
In one embodiment, the obtaining of the target movement trajectory in the global scene by performing correlation matching on the target detection result in the global scene according to the prediction information of each target includes:
determining a target road side base station from a plurality of road side base stations based on the position information in the candidate prediction information; the candidate prediction information is prediction information of any one target based on the current moment;
after the preset time length, acquiring current single base station sensing data of the target road side base station, and carrying out target detection on the current single base station sensing data to obtain a current target detection result of the target road side base station;
and if the current target detection result is matched with the candidate prediction information, associating the target corresponding to the candidate prediction information with the target in the current target detection result.
In one embodiment, the method further includes:
if the current target detection result is not matched with the candidate prediction information, judging whether a target corresponding to the current target detection result is a newly added target;
and if the target corresponding to the current target detection result is the newly added target, adding the perception information of the newly added target in the perception information of the global scene.
In one embodiment, the method further includes:
obtaining the position information in the candidate prediction information, and if the target roadside base station does not detect a current target detection result corresponding to the position information, determining, among subsequent times, a target subsequent time at which a target detection result matches the prediction information; the subsequent times are times after the current time;
and for the times before the target subsequent time, taking the candidate prediction information as the target detection result of the target roadside base station.
In one embodiment, the method further includes:
judging whether potential safety hazards exist in the global scene or not according to the prediction information;
and if the potential safety hazard exists, outputting safety early warning information.
In one embodiment, the determining whether the potential safety hazard exists in the global scene according to the prediction information includes:
and acquiring the prediction information of the multiple targets, and determining that potential safety hazards exist in the global scene if the position information in the prediction information of the multiple targets is overlapped.
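The hazard check above flags a warning when the predicted positions of multiple targets overlap. A minimal sketch follows; the patent does not quantify "overlap", so the pairwise safety radius used here is an assumption.

```python
import itertools
import math

def has_safety_hazard(predicted_positions, safety_radius=2.0):
    """Flag a potential hazard when any two targets' predicted (x, y)
    positions come within safety_radius metres of each other.
    The 2 m default is an illustrative assumption, not a patent value."""
    for (x1, y1), (x2, y2) in itertools.combinations(predicted_positions, 2):
        if math.hypot(x1 - x2, y1 - y2) < safety_radius:
            return True
    return False
```

When this returns True, the server would output the safety early-warning information described above.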
In one embodiment, before performing space-time synchronization processing on the single base station sensing data of each roadside base station according to the calibration parameters of the multi-base station system, the method further includes:
measuring longitude and latitude information of each roadside base station by using a measuring instrument, and determining an initial calibration parameter according to the longitude and latitude information;
processing the single base station sensing data of each road side base station by using the initial calibration parameters to obtain first single base station sensing data corresponding to each road side base station;
respectively selecting, according to preset conditions, the to-be-registered sensing data corresponding to each roadside base station from the first single base station sensing data corresponding to each roadside base station, and processing the to-be-registered sensing data by using a preset registration algorithm to obtain the calibration parameters of the multi-base station system; the preset conditions are used for characterizing the data range of the selected to-be-registered sensing data.
In a second aspect, the present embodiment provides an environment sensing apparatus, which is applied to a multi-base station system, and the apparatus includes:
the acquisition module is used for respectively acquiring single base station sensing data of each road side base station and performing space-time synchronization processing on the single base station sensing data of each road side base station according to calibration parameters of the multi-base station system;
the target detection module is used for acquiring a target detection result of each road side base station based on the single base station sensing data after the time-space synchronization processing;
the determining module is used for mapping the target detection result of each road side base station to a global scene to generate perception information under the global scene; the global scene is determined based on the perception range of the multi-base station system.
In a third aspect, the present embodiment provides a server, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
respectively acquiring single base station sensing data of each road side base station, and performing space-time synchronization processing on the single base station sensing data of each road side base station according to calibration parameters of a multi-base station system;
acquiring target detection results of the road side base stations based on the single base station sensing data after the time-space synchronization processing;
mapping the target detection result of each road side base station to a global scene to generate perception information under the global scene; the global scene is determined based on the perception range of the multi-base station system.
In a fourth aspect, the present embodiments provide a computer readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of:
respectively acquiring single base station sensing data of each road side base station, and performing space-time synchronization processing on the single base station sensing data of each road side base station according to calibration parameters of a multi-base station system;
acquiring target detection results of the road side base stations based on the single base station sensing data after the time-space synchronization processing;
mapping the target detection result of each road side base station to a global scene to generate perception information under the global scene; the global scene is determined based on the perception range of the multi-base station system.
The environment sensing method, the environment sensing device, the server and the readable storage medium can respectively acquire the single base station sensing data of each roadside base station, and perform space-time synchronization processing on the single base station sensing data of each roadside base station according to the calibration parameters of the multi-base station system; acquire the target detection result of each roadside base station based on the single base station sensing data after the space-time synchronization processing; and map the target detection result of each roadside base station to a global scene to generate perception information under the global scene, the global scene being determined based on the perception range of the multi-base station system. In the method, the multi-base station system covers the detection range of the entire traffic scene, and the perception information of the global scene is generated from the single base station sensing data of the individual roadside base stations, so that the perception information of the entire traffic scene is obtained and the sensing range is greatly extended.
Drawings
FIG. 1 is a diagram illustrating a multi-base station system to which the context awareness method is applied in one embodiment;
FIG. 2 is a flow diagram of a method for context awareness in one embodiment;
FIG. 3 is a flow chart of a context awareness method in another embodiment;
FIG. 4 is a flow chart of a context awareness method in yet another embodiment;
FIG. 5 is a flow chart of a context awareness method in yet another embodiment;
FIG. 6 is a flow chart of a context awareness method in yet another embodiment;
FIG. 7 is a block diagram of an apparatus for context awareness in one embodiment;
fig. 8 is an internal configuration diagram of a server in one embodiment.
Description of reference numerals:
11: a roadside base station; 12: and (4) a server.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The environment sensing method provided by the embodiment of the application can be applied to the multi-base station system shown in fig. 1. The multi-base station system comprises a plurality of roadside base stations 11 arranged at different positions in the current detection scene, for example at diagonally opposite corners of a road intersection; each roadside base station 11 may collect sensing data within its respective detection range and send the sensing data to the server 12. The server 12 may obtain a target detection result of each roadside base station 11 based on the sensing data and map the target detection results to the global scene. Optionally, the roadside base station 11 may include sensors such as a laser radar or a millimeter-wave radar, obtaining point cloud data by laser radar scanning or millimeter-wave data by millimeter-wave radar scanning, and may further include a camera for capturing image data of the current detection scene. The server 12 may be implemented as a stand-alone server or as a service cluster of multiple servers. It should be noted that, if the roadside base station 11 itself has detection processing capability, it may also implement the processing procedure of the server 12.
In an embodiment, as shown in fig. 2, an environment sensing method is provided, which is described by taking the application of the method to the server in fig. 1 as an example, and relates to a specific process of generating, by the server, sensing information in a global scene based on single base station sensing data of each roadside base station. The method comprises the following steps:
s101, respectively obtaining single base station sensing data of each road side base station, and performing space-time synchronization processing on the single base station sensing data of each road side base station according to calibration parameters of a multi-base station system.
The single base station sensing data may be data, such as point cloud data or camera data, collected by a roadside base station within its current detection range. The server can acquire the collected single base station sensing data from each roadside base station. Each roadside base station has its own base station coordinate system, so the acquired single base station sensing data are expressed in that base station's own coordinate system. In order to bring the acquired single base station sensing data under the same reference and obtain the perception information of the global scene under that reference, the server needs to perform space-time synchronization processing on the single base station sensing data. Specifically, the server may perform space-time synchronization processing on the single base station sensing data of each roadside base station according to the calibration parameters of the multi-base station system; optionally, the server may register the single base station sensing data into the same space-time according to the calibration parameters (which may include parameters such as a translation vector and a rotation matrix).
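The temporal half of the space-time synchronization is not detailed in the patent; one common approach, sketched here purely as an assumption, is to pair frames from two base stations by nearest timestamp and drop pairs whose clocks disagree by more than a tolerance.

```python
def align_frames(frames_a, frames_b, max_skew=0.05):
    """Pair frames from two base stations by nearest timestamp.
    frames_* are lists of (timestamp_seconds, data) sorted by time;
    pairs further apart than max_skew seconds are dropped. The 50 ms
    tolerance and the nearest-timestamp rule are illustrative
    assumptions, not taken from the patent."""
    pairs = []
    j = 0
    for t_a, data_a in frames_a:
        # advance j while the next frame in frames_b is at least as close to t_a
        while (j + 1 < len(frames_b)
               and abs(frames_b[j + 1][0] - t_a) <= abs(frames_b[j][0] - t_a)):
            j += 1
        if frames_b and abs(frames_b[j][0] - t_a) <= max_skew:
            pairs.append((data_a, frames_b[j][1]))
    return pairs
```

The paired frames would then be spatially registered with the calibration parameters as described above.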
And S102, acquiring target detection results of the road side base stations based on the single base station sensing data after the time-space synchronization processing.
Specifically, the server may perform target detection on the single base station sensing data after the space-time synchronization processing, and obtain information such as the position, speed, course angle, acceleration and category (such as pedestrian or vehicle) of each target within the detection range of each roadside base station as the target detection result. Optionally, the server may perform target detection on the single base station sensing data based on a deep learning algorithm (such as a neural network) to obtain the target detection result.
S103, mapping the target detection result of each road side base station to a global scene to generate perception information under the global scene; the global scene is determined based on the perception range of the multi-base station system.
Specifically, the target detection result of each roadside base station is based on a single roadside base station; in order to obtain the target detection result of the whole multi-base station system, the server may map each target detection result to the global scene, that is, map the target detection result of each roadside base station into the global sensing data, so as to obtain the perception information under the global scene. Since the global scene is determined based on the perception range of the multi-base station system, the server can mark each target detection result on the global scene to obtain the perception information under the global scene.
In the environment sensing method provided by this embodiment, the server performs space-time synchronization processing on the acquired single base station sensing data of each roadside base station according to the calibration parameters of the multi-base station system, acquires the target detection result of each roadside base station based on the single base station sensing data after the space-time synchronization processing, and then maps the target detection result of each roadside base station to the global scene to generate the perception information under the global scene; the global scene is determined based on the perception range of the multi-base station system. In the method, the multi-base station system covers the detection range of the entire traffic scene, and the perception information of the global scene is generated from the single base station sensing data of the individual roadside base stations, so that the perception information of the entire traffic scene is obtained and the sensing range is greatly extended.
For ease of understanding, the process of performing space-time synchronization processing on the single base station sensing data of each roadside base station (hereinafter referred to as a base station) according to the calibration parameters of the multi-base station system is described in detail below. The process may include the following steps:
and A, measuring the longitude and latitude information of each road side base station by using a measuring instrument, and determining an initial calibration parameter according to the longitude and latitude information.
The base station is internally provided with a measuring instrument capable of measuring the latitude and longitude information of the base station, and the latitude and longitude information is positioning information of the base station under a geodetic coordinate system. Each base station has its own base station coordinate system, and usually the base station coordinate systems of different base stations are different, so that the single base station sensing data acquired by different base stations are located under different base station coordinate systems (point cloud data is taken as an example to explain below, the point cloud data is single base station sensing data, the first point cloud data is first single base station sensing data, and the point cloud data to be registered is sensing data to be registered).
Specifically, after the latitude and longitude information of each base station is measured by the measuring instrument, the server may determine an initial calibration parameter according to the latitude and longitude information of each base station, where the initial calibration parameter is used to perform coarse registration on point cloud data acquired by each base station. Optionally, the server may determine a distance between the base stations according to the latitude and longitude information of each base station, and determine an initial calibration parameter according to the distance between the base stations and a base station coordinate system of the server; the initial calibration parameters may include a translation vector and a rotation matrix required in the registration.
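The distance and offset between base stations can be derived from their latitude and longitude. The patent does not specify a projection, so the local equirectangular approximation below is an assumption; it is adequate over the few-hundred-metre spans of a roadside deployment.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, metres

def latlon_to_local_offset(ref_lat, ref_lon, lat, lon):
    """Approximate east/north offset in metres of (lat, lon) from a
    reference base station, via a local equirectangular projection.
    The projection choice is an illustrative assumption."""
    d_lat = math.radians(lat - ref_lat)
    d_lon = math.radians(lon - ref_lon)
    east = EARTH_RADIUS_M * d_lon * math.cos(math.radians(ref_lat))
    north = EARTH_RADIUS_M * d_lat
    return east, north
```

The resulting east/north offsets would seed the translation part of the initial (coarse) calibration parameters.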
And B, processing the single base station sensing data of each road side base station by using the initial calibration parameters to obtain first single base station sensing data corresponding to each road side base station.
Specifically, the server may process the point cloud data of each base station according to the determined initial calibration parameters, and synchronize the point cloud data of each base station into the same space to obtain the first point cloud data corresponding to each base station. Optionally, the same space may be the base station coordinate system space of one of the base stations, or a reference coordinate system space (e.g., the geodetic coordinate system) selected by the server. Optionally, assuming that the translation vector and the rotation matrix in the initial calibration parameters are T and R, the server may convert the point cloud data P0 of a base station using the relational expression P1 = P0 × R + T to obtain the first point cloud data P1.
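The expression P1 = P0 × R + T above uses a row-vector convention (the rotation is applied on the right). A minimal NumPy sketch of that coarse-registration step:

```python
import numpy as np

def apply_calibration(points, R, T):
    """Transform an (N, 3) point cloud into the common frame using the
    calibration rotation R (3x3) and translation T (3,), following the
    row-vector form P1 = P0 x R + T used in the text."""
    points = np.asarray(points, dtype=float)
    return points @ R + T
```

With the row-vector convention, R here plays the role of the transpose of the usual column-vector rotation matrix; which convention a real system uses must match how its calibration was solved.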
And C, respectively selecting, according to preset conditions, the to-be-registered sensing data corresponding to each roadside base station from the first single base station sensing data corresponding to each roadside base station, and processing the to-be-registered sensing data by using a preset registration algorithm to obtain the calibration parameters of the multi-base station system; the preset conditions are used for characterizing the data range of the selected to-be-registered sensing data.
The rough registration process is performed according to the latitude and longitude information of the base station, and the accuracy of the latitude and longitude information depends on the hardware factors of the base station, so in order to further improve the synchronization precision of the point cloud data of each base station in the same space, the present embodiment performs a fine registration process on the point cloud data of each base station again.
Specifically, for the first point cloud data corresponding to each base station, the server may select, according to a preset condition, the point cloud data to be registered corresponding to each base station from the first point cloud data, where the preset condition is used to characterize the data range of the selected point cloud data to be registered. Optionally, data within X m (e.g., 10 m) of the point cloud center in the first point cloud data may be selected as the point cloud data to be registered, that is, only the point cloud data with a higher point density are selected, so as to reduce the data amount in the registration process. The server then processes the selected point cloud data to be registered by using a preset registration algorithm to obtain the calibration parameters for the fine registration of the multi-base station system, and then registers the data to be registered with these calibration parameters. Optionally, the preset registration algorithm may be the Iterative Closest Point (ICP) algorithm, or another type of point cloud registration algorithm, which is not limited in this embodiment. In this way, for the point cloud data acquired by the multiple base stations, the precise calibration parameters of the multi-base station system are determined through the two processes of coarse registration and fine registration, and the point cloud data of the base stations are then registered according to these calibration parameters, which greatly improves the spatial synchronization of the point cloud data of the multiple base stations.
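One iteration of the ICP algorithm mentioned above can be sketched as follows: match each source point to its nearest target point with a k-d tree, then solve the best rigid transform via the Kabsch (SVD) method. This is a teaching sketch, not the patent's implementation; a production system would iterate to convergence or use a library ICP.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP iteration on (N, 3) point clouds. Returns (R, t) such
    that source @ R.T + t moves the source cloud toward the target."""
    tree = cKDTree(target)
    _, idx = tree.query(source)          # nearest-neighbour correspondences
    matched = target[idx]
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

Repeating this step until the correspondences stabilise yields the fine calibration parameters described in the text.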
In one embodiment, the detection ranges of the base stations have a certain overlapping area, within which the base stations can detect a common target; in order to improve the consistency of the detected common target information, the server can select the point cloud data corresponding to the overlapping area for registration. The above process of respectively selecting the to-be-registered sensing data corresponding to each roadside base station from the first single base station sensing data corresponding to each roadside base station according to the preset conditions may include the following steps:
and C1, determining the overlapping area between the base stations according to the detection range of each base station.
And C2, acquiring point cloud data corresponding to the overlapping area from the first point cloud data as point cloud data to be registered for each base station.
Specifically, the server may determine the overlapping area between the base stations by the detection ranges of the respective base stations, for example, assuming that the detection ranges of the base stations a and B are both circles of a radius of 50m and the distance between the base stations a and B is 80m, the overlapping area of the detection range of the base station a and the detection range of the base station B may be determined to be an area of a width of 20 m.
Then, for each base station, the server may obtain, from the first point cloud data, the point cloud data corresponding to the overlapping area as point cloud data to be registered. Optionally, the server may delete the point cloud data of the non-overlapping area in the first point cloud data to obtain the point cloud data to be registered. By selecting the point cloud data corresponding to the overlapping area between the base stations as the point cloud data to be registered, the point cloud data amount during registration can be reduced, the registration efficiency is improved, and the uniformity of common target information in the detection range of the base stations can be improved.
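The worked example above (two circular 50 m detection ranges whose centers are 80 m apart overlap over a 20 m-wide lens) and the overlap-based point selection can be sketched directly. The circular-range model matches the example; real detection footprints may differ.

```python
import math

def overlap_width(center_a, center_b, range_a=50.0, range_b=50.0):
    """Width of the overlap lens along the line between two base
    stations with circular detection ranges (0 if they don't meet)."""
    d = math.dist(center_a, center_b)
    return max(0.0, range_a + range_b - d)

def in_overlap(point, center_a, center_b, range_a=50.0, range_b=50.0):
    """True when a 2-D point lies inside both circular detection
    ranges, i.e. inside the perception overlap area to be registered."""
    return (math.dist(point, center_a) <= range_a and
            math.dist(point, center_b) <= range_b)
```

Filtering each first point cloud with `in_overlap` yields the point cloud data to be registered, as described above.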
In an embodiment, the determining the initial calibration parameter according to the latitude and longitude information may include the following steps:
a1, obtaining original calibration parameters according to longitude and latitude information of each base station.
And A2, evaluating the original calibration parameters by using the common target in the detection range of each base station, and acquiring the initial calibration parameters according to the evaluation result.
Specifically, the process of obtaining the original calibration parameters according to the latitude and longitude information of each base station may refer to the description of the above embodiment and is not repeated here. After the original calibration parameters are obtained, the server further evaluates them to obtain calibration parameters with higher precision, improving the precision of the coarse registration result. After the original calibration parameters are obtained, the server can process the point cloud data of each base station with the original calibration parameters, perform target detection on the processed point cloud data, and evaluate the original calibration parameters using a common target within the detection range of each base station to obtain the initial calibration parameters. Optionally, the server may calculate the distance from the common target to each base station and evaluate the original calibration parameters according to the difference between these distances: if the distance difference is smaller than a preset difference threshold, the original calibration parameters are used as the initial calibration parameters; if the distance difference is not smaller than the difference threshold, the latitude and longitude information of each base station is measured again with the measuring instrument and the original calibration parameters are obtained again from that latitude and longitude information, repeating until the distance difference between the common target and each base station is smaller than the difference threshold. Optionally, the server may also evaluate the original calibration parameters according to the difference between the coordinates of the common target as detected by each base station, so as to obtain the initial calibration parameters.
In another implementation, the server can also obtain the detection frames of a common target within the detection range of each base station and determine the degree of overlap between those detection frames; if the overlap degree of the detection frames is greater than an overlap-degree threshold, the original calibration parameters are used as the initial calibration parameters. Optionally, a deep-learning-based target detection algorithm may be used to perform target detection on each processed point cloud, determining the detection frame of the common target within each base station's detection range, where a detection frame may be the smallest three-dimensional box that can enclose the target and carries information such as its length and width. Then the overlap degree between the detection frames of the common target is determined: if the overlap degree is greater than a preset overlap-degree threshold (such as 90%), the obtained original calibration parameters have high precision and can be used as the initial calibration parameters; if the overlap degree is not greater than the overlap-degree threshold, the precision of the obtained original calibration parameters is low, the latitude and longitude information of each base station must be measured again with the measuring instrument and the original calibration parameters obtained again from it, repeating until the overlap degree between the detection frames of the common target is greater than the threshold. In this way, the fine registration process is executed only once the coarse registration is guaranteed a certain precision, which can further improve the accuracy of point cloud registration.
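As a sketch of the detection-frame overlap check described above, here is a simplified 2D version: the patent's frames are three-dimensional, and a plain axis-aligned intersection-over-union stands in for whatever overlap measure the patent assumes.

```python
def iou_2d(box_a, box_b):
    """Axis-aligned IoU of two ground-plane boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def accept_calibration(box_a, box_b, threshold=0.9):
    """Accept the original calibration parameters when the common target's
    frames, as seen by the two base stations, overlap more than the threshold."""
    return iou_2d(box_a, box_b) > threshold

# The same target's frame from both base stations after applying the original
# calibration parameters; near-total overlap means the parameters are accepted.
print(accept_calibration((0, 0, 4, 2), (0.02, 0.0, 4.02, 2.0)))  # True
```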
In an embodiment, the server may further determine the original calibration parameter by using latitude and longitude information of the target within the detection range of the base station and latitude and longitude information of the base station. The process of A1 above may include:
and A11, acquiring longitude and latitude information of the target in the detection range of each base station.
And A12, determining included angles and distances among the base stations according to the longitude and latitude information of the base stations and the longitude and latitude information of the target.
Specifically, the latitude and longitude information of the target in the detection range of the base station can also be position information in a geodetic coordinate system, and can be measured by using a measuring instrument in the base station; and then selecting a geodetic coordinate system as a reference coordinate system, determining an included angle between a preset coordinate axis in each base station coordinate system and a reference direction under the geodetic coordinate system by the server according to the longitude and latitude information of each base station, the longitude and latitude information of the target in the detection range of each base station and the base station coordinate system of each base station, and determining the included angle between each base station according to the included angle between the preset coordinate axis in each base station coordinate system and the reference direction.
For example, the base station coordinate system may be a three-dimensional coordinate system including an X axis, a Y axis and a Z axis, the reference direction may be true north, and the server may determine the angle between the Y axis of the base station coordinate system and true north in the geodetic coordinate system. Assume the longitude of base station A is Aj and its latitude is Aw, and the longitude of the target is Bj and its latitude is Bw. Optionally, the server may calculate a reference angle F according to a relation containing

F = arctan(((Bj - Aj) · cos(Aw)) / (Bw - Aw));

of course, the server may also calculate the reference angle according to a relation containing

F = arctan(((Bj - Aj) · cos(Bw)) / (Bw - Aw)).

If the target lies in the first quadrant or on the positive Y half-axis of the base station coordinate system, the angle between the Y axis and true north is Azimuth = F; if the target lies in the second quadrant of the base station coordinate system, Azimuth = 360° + F; if the target lies in the third quadrant, the fourth quadrant, or on the negative Y half-axis, Azimuth = 180° + F. In this way the angle Azimuth1 between the Y axis of base station A's coordinate system and true north, and the angle Azimuth2 between the Y axis of base station B's coordinate system and true north, can both be calculated, and subtracting them gives the angle between base station A and base station B: ΔA = Azimuth1 - Azimuth2.
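Under a planar approximation of the geodetic frame (the exact formulas in the source are embedded as images, so this is an assumed standard form), the azimuth computation can be sketched with `math.atan2`, which folds the per-quadrant case analysis into a single call:

```python
import math

def azimuth_deg(station_lon, station_lat, target_lon, target_lat):
    """Angle from true north to the station-to-target direction, in degrees,
    using a local planar approximation of the geodetic coordinate system."""
    dx = (target_lon - station_lon) * math.cos(math.radians(station_lat))  # east offset
    dy = target_lat - station_lat                                          # north offset
    f = math.degrees(math.atan2(dx, dy))  # atan2 already handles all quadrants
    return f % 360.0                      # fold into [0, 360)

# Each base station measures a target; the difference of the two azimuths
# gives the angle between the base station coordinate systems.
az1 = azimuth_deg(116.30, 39.90, 116.31, 39.91)   # base station A (illustrative lat/lon)
az2 = azimuth_deg(116.40, 39.90, 116.41, 39.89)   # base station B
delta_a = az1 - az2
```

Here the Y axis of each base station is assumed to point toward its measured target, which is a simplification of the patent's setup.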
In addition, the server may determine the distance between two base stations according to the latitude and longitude information of each base station: for example, by calculating the longitude difference and the latitude difference between the two base stations, and then determining the distance between them according to a relation containing

D = sqrt(ΔJ² + ΔW²)

where ΔJ is the distance corresponding to the longitude difference and ΔW is the distance corresponding to the latitude difference. Optionally, the server can also directly take ΔJ as the distance between the two base stations in the longitude direction and ΔW as the distance in the latitude direction.
And A13, determining original calibration parameters according to the included angles and the distances among the base stations.
Specifically, the server may use an included angle between the base stations as a rotation matrix, use a distance between the base stations as a translation vector, and use the rotation matrix and the translation vector as original calibration parameters. Therefore, the original calibration parameters are determined based on the longitude and latitude information of the base station and the longitude and latitude information of the target, the precision of the obtained original calibration parameters can be improved, and further the spatial synchronism of the point cloud data of the base stations is improved.
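Steps A12 and A13 can be sketched as follows, in a 2D simplification: the inter-station angle ΔA becomes a planar rotation matrix and (ΔJ, ΔW) the translation vector (names and the 2D restriction are assumptions, not the patent's exact parameterization).

```python
import math
import numpy as np

def original_calibration(delta_a_deg, delta_j, delta_w):
    """Build the original calibration parameters: a rotation matrix from the
    angle between base stations and a translation vector from their offsets."""
    th = math.radians(delta_a_deg)
    rotation = np.array([[math.cos(th), -math.sin(th)],
                         [math.sin(th),  math.cos(th)]])
    translation = np.array([delta_j, delta_w])  # longitude/latitude-direction distances
    return rotation, translation

# Map a point from base station B's coordinate system into base station A's.
R, t = original_calibration(90.0, 80.0, 0.0)
p_b = np.array([10.0, 0.0])   # a point in B's frame
p_a = R @ p_b + t             # the same point expressed in A's frame
```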
In order to facilitate understanding of the process of processing the point cloud data to be registered with the preset registration algorithm, this embodiment uses two base stations for explanation. Assuming that the point cloud data to be registered of one base station is the second point cloud data and the point cloud data to be registered of the other base station is the third point cloud data, the process of processing the sensing data to be registered with the preset registration algorithm to obtain the calibration parameters of the multi-base-station system may include:
and C3, acquiring a matched point pair in the second point cloud data and the third point cloud data according to the distance value between the point cloud points of the second point cloud data and the point cloud points of the third point cloud data.
Specifically, assuming that the second point cloud data is P0 and the third point cloud data is Q, for each point cloud point in the point cloud data P0, a point cloud point closest to the point cloud point of P0 is searched from the point cloud data Q to form a plurality of point pairs.
And C4, calculating the mean square error of each point pair by adopting an error function, determining a rotation conversion parameter corresponding to the minimum mean square error value, and processing the second point cloud data and the third point cloud data by utilizing the rotation conversion parameter to obtain first candidate point cloud data and second candidate point cloud data.
Specifically, each point pair includes one point cloud point of P0 and one point cloud point of Q, denoted (p<sub>i</sub>, q<sub>i</sub>). The correspondences in the initial point pairs are not necessarily all correct, and an incorrect correspondence may affect the final registration result, so this embodiment may also use a direction-vector threshold to reject erroneous point pairs. Then, the mean square error of the point pairs is calculated with an error function, the rotation conversion parameters at which the mean square error is minimum are determined, and the second point cloud data P0 is converted into the first candidate point cloud data P1 using the rotation conversion parameters. It should be noted that the third point cloud data need not be converted and can be used directly as the second candidate point cloud data. Optionally, the error function may be expressed as
E(R, t) = (1/n) · Σ<sub>i=1..n</sub> ‖q<sub>i</sub> − (R·p<sub>i</sub> + t)‖²

where n is the number of point pairs, R is the rotation matrix in the rotation conversion parameters, and t is the translation vector in the rotation conversion parameters; the values of R and t determined here are the values at which the mean square error is minimum, and the point cloud data P0 is converted into P1 according to p<sub>i</sub>′ = R·p<sub>i</sub> + t, p<sub>i</sub> ∈ P0.
And C5, calculating the mean square error of the first candidate point cloud data and the second candidate point cloud data, and if the mean square error is smaller than an error threshold, taking the rotation conversion parameters as calibration parameters of the multi-base-station system.
Then, the mean square error between the first candidate point cloud data P1 and the second candidate point cloud data Q is calculated; optionally, the mean square error may be calculated by a relation containing

d = (1/n) · Σ<sub>i=1..n</sub> ‖q<sub>i</sub> − p<sub>i</sub>′‖²

where p<sub>i</sub>′ is the converted point belonging to the same point pair as q<sub>i</sub>. If the mean square error is smaller than the error threshold, the obtained rotation conversion parameters are used as the calibration parameters of the multi-base-station system. If the mean square error is not smaller than the preset error, the point pairs between the point cloud data P1 and Q are determined again and the process of calculating the mean square error of the point pairs is re-executed, until the mean square error is smaller than the preset error or the number of iterations reaches the preset number. In this way, the calibration parameters of the fine registration process are obtained through iteration, which can greatly improve the precision of the obtained calibration parameters.
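Steps C3 through C5 describe an iterative-closest-point loop. Below is a minimal NumPy sketch; the closed-form Kabsch/SVD solution is used for the minimizing rotation conversion parameters (the patent does not name a solver), and the direction-vector rejection of bad pairs is omitted.

```python
import numpy as np

def icp_step(p, q):
    """One ICP iteration: match each point of P to its nearest point in Q,
    then solve for the rigid transform (R, t) minimizing the mean squared error."""
    # nearest-neighbour point pairs (brute force; fine for small clouds)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
    matched_q = q[d.argmin(axis=1)]
    # closed-form least-squares rigid transform via SVD (Kabsch algorithm)
    pc, qc = p.mean(axis=0), matched_q.mean(axis=0)
    h = (p - pc).T @ (matched_q - qc)
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:          # guard against a reflection solution
        vt[-1] *= -1
        r = vt.T @ u.T
    t = qc - r @ pc
    p_new = p @ r.T + t
    mse = np.mean(np.sum((matched_q - p_new) ** 2, axis=1))
    return p_new, r, t, mse

def icp(p, q, error_threshold=1e-6, max_iters=50):
    """Iterate until the mean square error drops below the threshold or the
    iteration count is exhausted; (R, t) then serve as calibration parameters."""
    mse = np.inf
    for _ in range(max_iters):
        p, r, t, mse = icp_step(p, q)
        if mse < error_threshold:
            break
    return p, mse
```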
In an embodiment, after the server obtains the point cloud data to be registered (e.g., the point cloud data corresponding to the overlap area) for each base station, it may determine, based on the data precision of the point cloud data to be registered and a precision threshold, the data to be rejected whose precision is not greater than the precision threshold, for example some data with insignificant features, and reject that data from the point cloud data to be registered. Then, the server can process the point cloud data to be registered with the preset registration algorithm to obtain the calibration parameters of the multi-base-station system. In this way, the higher-precision data in each point cloud to be registered is retained, providing high-precision data for the subsequent fine registration process and thereby further improving the accuracy of the point cloud registration result. Optionally, the server may also filter ground points from the point cloud data to be registered, that is, filter out the ground-point data, so as to reduce the influence of ground points on the data registration process.
In one embodiment, in addition to spatially synchronizing the single base station aware data of multiple base stations, time synchronization may also be implemented. Optionally, the time synchronization process may include: receiving a base station time axis sent by each base station; the base station time axes are synchronized to the same time axis based on the base station time axis of each base station and the reference time axis. Specifically, a reference time axis is selected first, and optionally, the reference time axis may be a GPS time axis; then, time differences Δ T1, Δ T2, and so on between the base station time axes of the respective base stations and the reference time axis are calculated. If two base stations are taken as an example, the difference between Δ T1 and Δ T2 is taken as the time difference between the base station time axis of the first base station and the base station time axis of the second base station, and then the second base station can synchronize its base station time axis to the base station time axis of the first base station according to the time difference. Thus, time synchronization between the base stations is achieved.
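The time synchronization just described can be sketched as follows, assuming scalar timestamps and a shared reference (e.g. GPS) instant observed by both base station clocks (names and this representation are illustrative):

```python
def align_time_axes(ref_t0, t0_a, t0_b, timestamps_b):
    """Map base station B's timestamps onto base station A's time axis.

    ref_t0: a reference (GPS) instant; t0_a / t0_b: that same instant as
    stamped by each base station's local clock."""
    dt_a = t0_a - ref_t0          # ΔT1: offset of A's clock from the reference axis
    dt_b = t0_b - ref_t0          # ΔT2: offset of B's clock from the reference axis
    delta = dt_a - dt_b           # B-to-A time difference, ΔT1 − ΔT2
    return [t + delta for t in timestamps_b]

# B's clock runs 0.5 s behind A's: A stamps the GPS instant 100.0 as 100.2,
# B stamps it as 99.7; B's samples are shifted onto A's axis.
aligned = align_time_axes(100.0, 100.2, 99.7, [99.7, 100.7])
print(aligned)  # ≈ [100.2, 101.2]
```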
In one embodiment, the specific process of obtaining the target detection result of each road side base station based on the single base station sensing data after the time-space synchronization processing is involved. Alternatively, as shown in fig. 3, the S102 may include:
S201, if the roadside base stations have perception overlap areas, perform data enhancement processing on the single base station sensing data corresponding to the perception overlap areas to obtain enhanced single base station sensing data.
S202, processing the enhanced single base station sensing data by using a target detection algorithm to obtain a target detection result of each road side base station.
Specifically, as shown in the scene diagram of fig. 1, the detection ranges of the roadside base stations may have a perception overlap region, and the single base station sensing data acquired by each roadside base station may then also contain overlapping data. For example, if the detection areas of base station A and base station B are both circles with a radius of 50 m, and the distance between base station A and base station B is 80 m, it can be determined that the perception overlap area of the two detection areas is 20 m wide, and the single base station sensing data corresponding to the perception overlap area is the data collected in that 20 m region. The server can then perform enhancement processing on this portion of single base station sensing data to obtain enhanced single base station sensing data. For example, densification may be performed: if the single base station sensing data is point cloud data, an interpolation algorithm is used to increase the point cloud density of the data and thereby raise the feature dimension of the target; if the single base station sensing data is camera data (image data), an interpolation algorithm can be used to increase the information dimensionality of the pixel points, thereby obtaining the enhanced single base station sensing data.
Then, the server may process the enhanced single base station sensing data by using a target detection algorithm, which may be a detection algorithm based on deep learning, such as an algorithm based on a neural network model, and obtain a target detection result of each roadside base station after detecting the enhanced single base station sensing data. By enhancing the single base station sensing data, the accuracy of the obtained target detection result can be greatly improved.
Optionally, when the single base station sensing data is point cloud data, each side base station may also share a target detection result in a sensing overlap region detected by another base station based on the enhanced single base station sensing data. For example, if the base station a detects a part of an object (for example, the head of a vehicle) in the overlapping area based on the enhanced single-base-station sensing data, and the base station B detects another part of the object (for example, the body of the vehicle) based on the enhanced single-base-station sensing data, the base station B may share the head information of the base station a, so that the obtained object detection result has integrity, and the detection capability of the base station B is also improved.
In an embodiment, the perception information in the global scene includes a target moving track in the global scene, that is, a tracking process of the target is implemented, and optionally, the S103 may include: performing association matching on the target detection result mapped to the global scene and the previous target detection result to obtain a target moving track in the global scene; the previous target detection result comprises a target detection result corresponding to a time before the current time.
Specifically, the target detection result may include the position of the target at the current time, and the previous target detection result then includes the position of the target at a time before the current time; the server can also assign a target identifier to each detected target to distinguish different targets, with the same target always using the same target identifier. The server can therefore associate the target detection result with the previous target detection result through the target identifier and the position of the target, so as to obtain the target movement track in the global scene.
It should be noted that the server may assign the same target identifier to the target when it is determined that the target in the current target detection result and the target in the previous target detection result are the same target, so as to implement the tracking process of the target. The following describes in detail a specific process for implementing target tracking:
in one embodiment, the target detection result may include a position of the target, a speed of the target, and a heading angle of the target, and the previous target detection result further includes prediction information of the target; optionally, as shown in fig. 4, the step S103 includes:
s301, calculating the position and the direction of the corresponding target after the preset time according to the target detection result of each road side base station and the relative position between each road side base station to obtain the prediction information of each target.
Specifically, the server may predict the position and direction of the target after a preset duration (or after multiple preset durations) based on the position, speed and heading angle of the target at the current time and the relative positions between the roadside base stations. Optionally, the server may calculate the position of the target after a time interval of Δt according to a relation containing

(X<sub>i</sub> + V<sub>i</sub>·Δt·sin ψ<sub>i</sub>, Y<sub>i</sub> + V<sub>i</sub>·Δt·cos ψ<sub>i</sub>)

where (X<sub>i</sub>, Y<sub>i</sub>) is the longitude and latitude of the target at the current time, V<sub>i</sub> is the speed of the target at the current time, and ψ<sub>i</sub> is the heading angle of the target at the current time; and calculate the speed of the target after the time interval Δt according to a relation containing V<sub>i</sub> + a<sub>i</sub>·Δt, where a<sub>i</sub> is the acceleration of the target at the current time.
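A minimal sketch of this prediction step, assuming standard dead reckoning with the heading angle measured clockwise from north (the source's exact position formula is embedded as an image, so this form is an assumption):

```python
import math

def predict(x, y, v, heading_deg, a, dt):
    """Predict a target's position and speed dt seconds ahead from its current
    position (x, y), speed v, heading angle (degrees from north) and acceleration a."""
    h = math.radians(heading_deg)
    x_next = x + v * dt * math.sin(h)   # east component of the displacement
    y_next = y + v * dt * math.cos(h)   # north component of the displacement
    v_next = v + a * dt                 # V_i + a_i * Δt
    return x_next, y_next, v_next

# Target heading due east at 10 m/s, accelerating at 2 m/s², predicted 0.5 s ahead
print(predict(0.0, 0.0, 10.0, 90.0, 2.0, 0.5))  # ≈ (5.0, 0.0, 11.0)
```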
In addition, each roadside base station can continuously collect data within the preset duration, make a prediction from the target detection result obtained at each time, and overwrite the prediction information obtained at an earlier time with the prediction information obtained at a later time for the same target, so that the stored prediction information always reflects the most recent detection.
S302, performing correlation matching on the target detection result in the global scene according to the prediction information of each target to obtain a target movement track in the global scene.
Specifically, the server may match the prediction information of each target with the target detection result at the current time, and if the matching indicates that the target is still in the detection area of the roadside base station at the current time, assign the target identifier of the target corresponding to the prediction information to the target corresponding to the target detection result, and obtain the movement trajectory of the target according to the position of the target at the previous time and the position of the current time.
Optionally, the server may further determine whether a potential safety hazard exists in the global scene according to the obtained prediction information; and if the potential safety hazard exists, outputting safety early warning information. Optionally, the server may obtain the prediction information of multiple targets, and if there is overlap in the position information in the prediction information of multiple targets, it is determined that there is a potential safety hazard in the global scene. For example, if the position information of two or more targets overlaps in the prediction information, which indicates that the two or more targets may collide, that is, there is a safety hazard, the safety warning information may be output.
Optionally, the target detection result may further include size information of the target, and the process of tracking the target according to the target detection result in the global scene and the prediction information of each target may be implemented as follows (the predicted spatial information below is the prediction information):
d, acquiring three-dimensional space information of each target in the detection area at the current moment from the target detection result in the global scene; the three-dimensional spatial information includes position information and size information of the object.
The three-dimensional space information comprises position information and size information of a target; the position information, i.e. the current geographic position of the target, may be represented by latitude and longitude information in a geodetic coordinate system, and the size information may be represented by a size of a detection frame capable of surrounding the target, such as a length, a width, and the like of the detection frame.
E, comparing the three-dimensional space information of each target in the detection area at the current moment with the predicted space information of each target in the target set, and determining a corresponding identifier for the target with the three-dimensional space information matched with the predicted space information to complete target tracking; the predicted space information is obtained by predicting three-dimensional space information of targets in a target set, and the target set comprises the targets in the detection area at the last moment.
The target tracking process is generally a process of associating a driving state (which may include position information and the like) of one target at the previous time with a driving state of the target at the current time to obtain an overall driving state of the target. In this embodiment, the server may store the targets detected at the last time and the three-dimensional spatial information corresponding to each target, and each target may be located in a target set, and the target set may be stored in a list form.
Specifically, the server may compare the three-dimensional spatial information of each object detected at the current time with predicted spatial information of each object in the object set, where the predicted spatial information is obtained by predicting the three-dimensional spatial information of the objects in the object set, that is, the three-dimensional spatial information of the current time predicted by the three-dimensional spatial information of the previous time. If the three-dimensional space information of one target (a) is matched with the predicted space information at the current moment, the identifier of the target corresponding to the matched predicted space information can be used as the identifier of the target (a) at the current moment, so that the position information of the target (a) at the previous moment and the position information of the current moment can be determined, and the tracking process of the target is completed.
Optionally, the server may compare the position information of the target at the current time with the position information in the predicted spatial information, and if two targets with the same or similar position information exist, compare the size information between the two targets; if the size information is the same or similar, the target at the current moment and the target corresponding to the predicted spatial information can be regarded as the same target, and an identifier is determined for the target at the current moment. Therefore, the target tracking process is completed by fully considering the prior target detection result of the target, and the target tracking accuracy can be greatly improved.
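A minimal sketch of the position-and-size matching just described, with illustrative tolerances and data layout (the patent does not fix either):

```python
def match_targets(current, predicted, pos_tol=1.0, size_tol=0.3):
    """Assign each current detection the identifier of the predicted target
    whose position and detection-frame size it matches."""
    assignments = {}
    for det_id, (pos, size) in current.items():
        for track_id, (p_pos, p_size) in predicted.items():
            close = all(abs(a - b) <= pos_tol for a, b in zip(pos, p_pos))
            similar = all(abs(a - b) <= size_tol for a, b in zip(size, p_size))
            if close and similar:
                assignments[det_id] = track_id   # same target: reuse its identifier
                break
    return assignments

# One detection whose position and frame size match an existing track's prediction
current = {"det0": ((10.2, 5.1), (4.5, 1.8))}
predicted = {"track7": ((10.0, 5.0), (4.5, 1.8))}
print(match_targets(current, predicted))  # {'det0': 'track7'}
```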
Generally, the data volume of single base station sensing data acquired by the roadside base station is large, and the calculation amount is increased if the space-time synchronization is performed on all the single base station sensing data. Therefore, in this embodiment, the target detection may be performed first, and only the obtained three-dimensional spatial information is subjected to coordinate system conversion, so as to improve the calculation efficiency. Optionally, the target detection process may include:
and F, respectively carrying out target detection processing on the single base station sensing data of each path of side base station to obtain three-dimensional space information of a target in each single base station sensing data.
Specifically, the server may first perform target detection processing on each single base station sensing data, and optionally, may execute the target detection processing process by using a target detection algorithm based on deep learning, to obtain three-dimensional spatial information of a target in each single base station sensing data.
G, selecting a coordinate system where the first three-dimensional space information is located from a plurality of three-dimensional space information of the single base station sensing data as a reference coordinate system, converting the second three-dimensional space information to the reference coordinate system where the first three-dimensional space information is located according to a preset conversion matrix, and fusing the converted second three-dimensional space information and the first three-dimensional space information to obtain fused three-dimensional space information; the second three-dimensional space information is other three-dimensional space information of different point cloud data corresponding to the first three-dimensional space information in the plurality of three-dimensional space information, and one point cloud data corresponds to the plurality of three-dimensional space information.
The server can select a coordinate system where the first three-dimensional space information is located from the plurality of three-dimensional space information as a reference coordinate system, and convert other three-dimensional space information into the reference coordinate system, so that the plurality of three-dimensional space information are located in the same coordinate system, and one point cloud data generally corresponds to the plurality of three-dimensional space information, that is, a scene corresponding to one point cloud data includes a plurality of targets. Specifically, the server may convert, according to a preset conversion matrix, second three-dimensional space information into the reference coordinate system, where the second three-dimensional space information is other three-dimensional space information of the plurality of three-dimensional space information corresponding to different point cloud data from the first three-dimensional space information, that is, the first three-dimensional space information and the second three-dimensional space information are obtained from different point cloud data. Optionally, the transformation matrix may represent a relative relationship between the reference coordinate system and a coordinate system in which the second three-dimensional spatial information is located; alternatively, the transformation matrix may be determined according to an ICP algorithm to transform the second three-dimensional spatial information into the reference coordinate system in which the first three-dimensional spatial information is located. And then fusing the converted second three-dimensional space information and the first three-dimensional space information to obtain fused three-dimensional space information, wherein the fusion operation can be a union operation of the two three-dimensional space information.
And H, performing redundancy removal processing on the fused three-dimensional space information to obtain the three-dimensional space information of each target in the detection area at the current moment.
Specifically, in a scene where the scanning areas of multiple base stations overlap, a target in the fused three-dimensional space information may have multiple pieces of spatial information, that is, multiple base stations detect the same target simultaneously. The server needs to perform redundancy removal so that each target corresponds to only one piece of three-dimensional space information, thereby obtaining unique three-dimensional space information for each target in the detection area at the current time. Optionally, the server may perform the redundancy removal on the fused three-dimensional space information by using a non-maximum suppression algorithm to obtain the three-dimensional space information of each target in the detection area at the current time. It can be understood that the optimal piece of three-dimensional space information (for example, the one with the highest position precision, or the smallest frame that can enclose the target) is selected from the multiple pieces as the final three-dimensional space information. The three-dimensional space information of each target in the detection area at the current time is then compared with the predicted space information of each target in the target set, and a corresponding identifier is determined for each target whose three-dimensional space information matches the predicted space information, thereby completing target tracking. In this way, converting different three-dimensional space information into the same coordinate system places the information in the same spatial domain and improves the accuracy of the subsequent target tracking result; meanwhile, because only the three-dimensional space information is converted, conversion efficiency is also improved.
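The redundancy removal by non-maximum suppression can be sketched as follows; the greedy 3-D NMS below uses hypothetical axis-aligned boxes and scores, with the score standing in for whatever quality measure (position precision, frame tightness) ranks the candidates:

```python
def iou_3d(a, b):
    """Axis-aligned 3-D IoU; boxes are (x1, y1, z1, x2, y2, z2)."""
    inter = 1.0
    for i in range(3):
        lo = max(a[i], b[i])
        hi = min(a[i + 3], b[i + 3])
        inter *= max(0.0, hi - lo)
    vol = lambda box: (box[3] - box[0]) * (box[4] - box[1]) * (box[5] - box[2])
    union = vol(a) + vol(b) - inter
    return inter / union if union > 0 else 0.0

def nms_3d(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box and
    drop any overlapping box, so each target keeps one spatial record."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou_3d(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep

# Two base stations report the same target with slightly shifted boxes
boxes = [(0, 0, 0, 2, 2, 2), (0.1, 0, 0, 2.1, 2, 2), (5, 5, 5, 6, 6, 6)]
scores = [0.9, 0.8, 0.7]
kept = nms_3d(boxes, scores)
```

Here the second box overlaps the first almost entirely and is suppressed, leaving one record per target.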
In an embodiment, the step of comparing the three-dimensional spatial information of each target with the predicted spatial information to determine an identifier for the target in the detection area at the current time may include the following steps:
And E1, for the target corresponding to each piece of three-dimensional space information at the current time, identifying a first feature of the target.
And E2, for the target corresponding to each piece of predicted space information, identifying a second feature of the target.
Specifically, the server may identify, for each target corresponding to the three-dimensional spatial information at the current time, a first feature of the target based on a deep learning target identification algorithm, and also identify, for each target corresponding to the predicted spatial information, a second feature of the target. Optionally, the server may also employ a point cloud re-recognition network to identify the target features.
And E3, if the target with the similarity between the first feature and the second feature larger than the similarity threshold exists in the current moment, taking the identifier of the target corresponding to the second feature as the identifier of the target corresponding to the first feature.
Specifically, if among all targets corresponding to the current time there is a target whose similarity between the first feature and the second feature is greater than the similarity threshold, the target at the current time exists in the target set; that is, the target was also scanned at the previous time. The server may use the identifier of the target corresponding to the second feature (the identifier of the target in the target set) as the identifier of the target corresponding to the first feature, that is, the identifier of the target at the current time, thereby determining an identifier for the target at the current time and associating it with the target at the previous time.
Of course, among all targets at the current time there may also be targets whose similarity between the first feature and the second feature is not greater than the similarity threshold, that is, targets that fail the similarity matching. Optionally, the server may further calculate the intersection ratio (intersection over union, IoU) between the three-dimensional space information of such a target and candidate predicted space information, where the candidate predicted space information is the predicted space information of targets in the target set whose similarity is not greater than the similarity threshold; that is, the server calculates the intersection ratio between the space information of the targets that failed similarity matching at the current time and in the target set. If the intersection ratio is greater than the intersection-ratio threshold, the identifier of the target corresponding to the qualifying candidate predicted space information is used as the identifier of the target corresponding to the three-dimensional space information at the current time. In this way, through the double matching of target features and the intersection ratio of three-dimensional space information, a corresponding identifier is determined for each target detected at the current time, which can greatly improve the accuracy of the determined identifiers and further improve the accuracy of the target tracking result.
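The double matching described above — feature similarity first, intersection ratio as a fallback — can be sketched as follows; the cosine similarity measure, 2-D boxes, thresholds, and identifiers are illustrative assumptions rather than the patented implementation:

```python
import math

def cosine_similarity(f1, f2):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def iou_2d(a, b):
    """IoU of axis-aligned boxes (x1, y1, x2, y2), a 2-D simplification."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def assign_identifier(feature, box, tracked, sim_thresh=0.8, iou_thresh=0.5):
    """First match by feature similarity; fall back to the intersection
    ratio between the detection box and each target's predicted box."""
    for target_id, tracked_feat, predicted_box in tracked:
        if cosine_similarity(feature, tracked_feat) > sim_thresh:
            return target_id
    for target_id, tracked_feat, predicted_box in tracked:
        if iou_2d(box, predicted_box) > iou_thresh:
            return target_id
    return None  # no match: treat as a new target

tracked = [("veh-1", [1.0, 0.0], (0, 0, 2, 2)),
           ("veh-2", [0.0, 1.0], (5, 5, 7, 7))]
```

A detection whose feature clearly resembles a tracked target inherits that identifier directly; one with an ambiguous feature can still be matched through box overlap with the predicted position.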
In one embodiment, another specific process is described in which the server compares the three-dimensional space information of each target with the predicted space information to determine an identifier for the target in the detection area at the current time. The process may include the following steps:
e4, predicting the three-dimensional spatial information of the targets in the target set by adopting a Kalman filter to obtain predicted spatial information of each target in the target set; and the identification of the target corresponding to the predicted spatial information corresponds to the identification of the target in the target set.
Specifically, for each target in the target set, the server uses a Kalman filter to predict the three-dimensional space information of the target, obtaining the predicted space information of each target at the current time; the identifier of the target corresponding to each piece of predicted space information is the identifier of that target in the target set.
And E5, for each target at the current time, calculating the intersection ratio between its three-dimensional space information and all of the predicted space information; if there is three-dimensional space information whose intersection ratio is greater than the intersection-ratio threshold, using the identifier of the target corresponding to the matched predicted space information as the identifier of the target corresponding to that three-dimensional space information.
Specifically, for each target detected at the current time, the server calculates the intersection ratio between its three-dimensional space information and all of the predicted space information; the intersection ratio may be the degree of coincidence between the target detection frames. If there is three-dimensional space information whose intersection ratio is greater than the intersection-ratio threshold (for example, 90%), the identifier of the target corresponding to the predicted space information matching that three-dimensional space information is used as the identifier of the target corresponding to that three-dimensional space information.
Of course, among all targets at the current time there will inevitably be three-dimensional space information whose intersection ratio is not greater than the intersection-ratio threshold, that is, targets that fail the intersection-ratio matching. The server may identify a third feature of a first target and a fourth feature of a second target, where the first target is a target at the current time whose intersection ratio is not greater than the intersection-ratio threshold, and the second target is a target in the target set whose predicted space information has an intersection ratio not greater than the intersection-ratio threshold; that is, the first and second targets are the targets at the current time and in the target set that failed the intersection-ratio matching. Optionally, a point cloud re-recognition network may be used to extract the third feature and the fourth feature. The similarity between the third feature and the fourth feature is then calculated, and if the similarity is greater than the similarity threshold, the identifier of the second target is used as the identifier of the matched first target. In this way, through the double matching of target features and the intersection ratio of three-dimensional space information, a corresponding identifier is determined for each target detected at the current time, which can greatly improve the accuracy of the determined identifiers and further improve the accuracy of the target tracking result.
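The Kalman prediction of step E4 can be sketched for a single target under a constant-velocity motion model; the state layout, time step, and noise values below are illustrative assumptions, not parameters from the patent:

```python
import numpy as np

def kalman_predict(state, P, dt=0.1, q=1e-2):
    """Kalman filter prediction step for one target under a
    constant-velocity model. state = [x, y, vx, vy]; P is the 4x4
    state covariance."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    Q = q * np.eye(4)  # process noise, assumed diagonal for simplicity
    return F @ state, F @ P @ F.T + Q

# Predict where a tracked target will be 0.1 s from now
state = np.array([10.0, 5.0, 2.0, 0.0])  # at (10, 5), moving +x at 2 m/s
predicted, P_pred = kalman_predict(state, np.eye(4))
```

The predicted position would then be compared against current detections by the intersection ratio of step E5; a full tracker would also include the Kalman update step once a detection is associated.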
In an embodiment, there may also be targets for which no identifier has been determined at the current time, for example a target that has newly entered the detection area and does not exist in the target set. The server may assign a random identifier to each such target and store the target and its random identifier in the target set, where the random identifier differs from the identifiers of the other targets in the target set. In this way, each target in the target set can be used to match targets in the detection area at the next time so as to determine their identifiers. Optionally, a target in the target set may leave the detection area at the next time, and the server may remove targets that are no longer located in the detection area from the target set.
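The target-set maintenance described above — assigning a fresh identifier to unmatched new arrivals and dropping targets that have left the detection area — might be sketched as follows; the rectangular detection area, counter-based identifiers, and data layout are assumptions for illustration:

```python
import itertools

_id_counter = itertools.count(1)  # stand-in for unique identifier generation

def update_target_set(target_set, unmatched_targets, detection_area):
    """Give each target that failed all matching (a new arrival) a fresh
    identifier, then drop targets no longer inside the detection area.
    target_set maps identifier -> (x, y) position."""
    for pos in unmatched_targets:
        target_set[f"target-{next(_id_counter)}"] = pos
    x_min, y_min, x_max, y_max = detection_area
    inside = lambda p: x_min <= p[0] <= x_max and y_min <= p[1] <= y_max
    return {tid: p for tid, p in target_set.items() if inside(p)}

area = (0, 0, 100, 100)                    # detection area bounds
targets = {"target-old": (150.0, 50.0)}    # already outside the area
targets = update_target_set(targets, [(10.0, 20.0)], area)
```

After the update, the departed target is gone and the newcomer carries an identifier distinct from every other entry, ready for matching at the next time.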
In the above, the process by which the server tracks targets in the detection area to obtain target movement tracks across the whole multi-base station system has been described in detail. The detection and tracking process of a single road side base station in the multi-base station system is described below as an example.
In one embodiment, as shown in fig. 5, the step S302 may include:
s401, determining a target road side base station from a plurality of road side base stations based on position information in the candidate prediction information; the candidate prediction information is prediction information of any one target based on the current time.
Specifically, the server can determine, from the position information in the candidate prediction information, where the target is expected to arrive, and can determine, from that position and the detection ranges of the road side base stations, within which road side base station's detection range the position falls; that road side base station is then used as the target road side base station.
S402, after the preset time length, obtaining current single base station sensing data of the target road side base station, and carrying out target detection on the current single base station sensing data to obtain a current target detection result of the target road side base station.
And S403, if the current target detection result is matched with the candidate prediction information, associating the target corresponding to the candidate prediction information with the target in the current target detection result.
Specifically, after the target road side base station is determined, the current single base station sensing data of the target road side base station after the preset time length can be obtained, and target detection is performed on the current single base station sensing data to obtain a current target detection result. The current target detection result is then matched against the candidate prediction information; the matching process may refer to the description of the above embodiments (for example, matching by target features, by the intersection ratio of detection frames, and the like). If the matching succeeds, the target corresponding to the candidate prediction information is associated with the target in the current target detection result, that is, the target identifier corresponding to the candidate prediction information is assigned to the target in the current target detection result.
Optionally, if the current target detection result does not match the candidate prediction information, that is, the target road side base station does not detect the target corresponding to the candidate prediction information, the server determines whether the target corresponding to the current target detection result is a new target. For example, if the target has not been detected by the target road side base station before, the target is considered a new target, and the perception information of the new target is added to the perception information of the global scene, so as to improve the comprehensiveness of the perception information of the global scene.
Optionally, the server may further obtain the position information in the candidate prediction information. If the target road side base station does not produce a current target detection result corresponding to the position information, that is, the target road side base station does not detect the target at the predicted position, this indicates that the target road side base station has weak sensing capability at that position. The server may determine the target subsequent time, that is, the subsequent time at which the target detection result matches the prediction information, which is the time at which the target road side base station detects the target; the candidate prediction information of the target before the target subsequent time is then used as the target detection result of the target road side base station.
For example, consider the current target detection result at time 16:00. If there is no matched pose data, this indicates that the target road side base station has not detected the target at time 16:00, and the candidate prediction information before the target subsequent time (after 16:00) is used as the target detection result of the target road side base station. The server then compares the candidate prediction information at the subsequent time with the corresponding detection result; if there is still no matched pose data, it continues comparing the candidate prediction information at later times until the target subsequent time is determined.
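The substitution of candidate prediction information up to the target subsequent time can be sketched as follows; the timestamps, matching rule, and data layout are illustrative assumptions:

```python
def backfill_with_predictions(detections, predictions, matches):
    """Walk forward in time; until the first time whose detection matches
    the prediction (the 'target subsequent time'), substitute the
    candidate prediction for the missing detection."""
    filled = {}
    found = False
    for t in sorted(predictions):
        if not found and matches(detections.get(t), predictions[t]):
            found = True
        filled[t] = detections[t] if found else predictions[t]
    return filled

# Target is first re-detected at 16:02; predictions stand in for 16:00-16:01
detections = {"16:00": None, "16:01": None, "16:02": (42.0, 7.0)}
predictions = {"16:00": (40.0, 7.0), "16:01": (41.0, 7.0), "16:02": (42.1, 7.0)}
close = lambda d, p: d is not None and abs(d[0] - p[0]) < 0.5
result = backfill_with_predictions(detections, predictions, close)
```

The gap where the base station's sensing was weak is filled with predicted poses, and real detections take over from the target subsequent time onward.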
To better understand the overall process of the above-described context awareness method, the method is described below as an overall embodiment. As shown in fig. 6, the method includes:
s501, respectively obtaining single base station sensing data of each road side base station, and performing space-time synchronization processing on the single base station sensing data of each road side base station according to calibration parameters of a multi-base station system;
s502, if the road side base stations have perception overlap areas, performing data enhancement processing on the single base station perception data corresponding to the perception overlap areas to obtain enhanced single base station perception data;
s503, processing the enhanced single base station sensing data by using a target detection algorithm to obtain the target detection result of each road side base station;
s504, calculating the position and the direction of the corresponding target after a preset time according to the target detection result of each road side base station and the relative position between each road side base station to obtain the prediction information of each target;
s505, determining a target road side base station from a plurality of road side base stations based on position information in the candidate prediction information; the candidate prediction information is prediction information of any one target based on the current moment;
s506, after the preset time length, obtaining current single base station sensing data of the target road side base station, and carrying out target detection on the current single base station sensing data to obtain a current target detection result of the target road side base station;
and S507, if the current target detection result is matched with the candidate prediction information, associating the target corresponding to the candidate prediction information with the target in the current target detection result to obtain the perception information under the global scene.
For the implementation process of each step, reference may be made to the description of the above embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
It should be understood that although the various steps in the flowcharts of fig. 2-6 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided an environment sensing apparatus including: an acquisition module 21, an object detection module 22 and a determination module 23.
Specifically, the obtaining module 21 is configured to obtain single base station sensing data of each roadside base station, and perform space-time synchronization processing on the single base station sensing data of each road side base station according to calibration parameters of the multi-base station system;
the target detection module 22 is configured to obtain a target detection result of each roadside base station based on the single base station sensing data after the time-space synchronization processing;
the determining module 23 is configured to map the target detection result of each roadside base station to a global scene, and generate perception information in the global scene; the global scene is determined based on the perception range of the multi-base station system.
The environment sensing apparatus provided in this embodiment may perform the method embodiments, and the implementation principle and technical effects are similar, which are not described herein again.
In one embodiment, the target detection module 22 is specifically configured to, in a case that a perception overlap region exists among the road side base stations, perform data enhancement processing on the single base station sensing data corresponding to the perception overlap region to obtain enhanced single base station sensing data; and process the enhanced single base station sensing data by using a target detection algorithm to obtain the target detection result of each road side base station.
In one embodiment, the perception information in the global scene comprises a target movement track in the global scene; the determining module 23 is specifically configured to perform association matching on the target detection result mapped to the global scene and the previous target detection result to obtain a target movement track in the global scene; the previous target detection result comprises a target detection result corresponding to a time before the current time.
In one embodiment, the target detection result comprises a position of the target, a speed of the target, and a heading angle of the target, and the previous target detection result further comprises prediction information of the target; the determining module 23 is specifically configured to calculate a position and a direction of a corresponding target after a preset time according to a target detection result of each roadside base station and a relative position between each roadside base station, so as to obtain prediction information of each target; and performing correlation matching on target detection results in the global scene according to the prediction information of each target to obtain a target moving track in the global scene.
In one embodiment, the determining module 23 is specifically configured to determine a target roadside base station from the plurality of roadside base stations based on the position information in the candidate prediction information; the candidate prediction information is prediction information of any one target based on the current moment; after the preset time length, obtaining current single base station sensing data of the target road side base station, and carrying out target detection on the current single base station sensing data to obtain a current target detection result of the target road side base station; and under the condition that the current target detection result is matched with the candidate prediction information, associating the target corresponding to the candidate prediction information with the target in the current target detection result.
In one embodiment, the apparatus further includes an adding module, configured to determine whether a target corresponding to the current target detection result is a new target when the current target detection result does not match the candidate prediction information; and under the condition that the target corresponding to the current target detection result is the newly added target, adding the perception information of the newly added target in the perception information of the global scene.
In an embodiment, the determining module 23 is further configured to obtain location information in the candidate prediction information, and determine a target subsequent time at which a target detection result matches the prediction information in subsequent times when the target roadside base station does not detect a current target detection result corresponding to the location information; the subsequent time is a time after the current time; and taking the candidate prediction information corresponding to the target time before the subsequent time as a target detection result of the target road side base station.
In one embodiment, the device further comprises an early warning module, which is used for judging whether the potential safety hazard exists in the global scene according to the prediction information; and if the potential safety hazard exists, outputting safety early warning information.
In one embodiment, the early warning module is specifically configured to acquire prediction information of multiple targets, and if position information in the prediction information of the multiple targets overlaps, it is determined that a potential safety hazard exists in a global scene.
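The overlap-based hazard check can be sketched as a pairwise comparison of predicted positions; the 2-D boxes and identifiers below are illustrative assumptions:

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test for predicted positions (x1, y1, x2, y2)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def safety_warnings(predicted_boxes):
    """Pairwise check of predicted positions; any overlap is flagged as a
    potential hazard (two targets forecast to occupy the same space)."""
    warnings = []
    ids = list(predicted_boxes)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if boxes_overlap(predicted_boxes[a], predicted_boxes[b]):
                warnings.append((a, b))
    return warnings

predicted = {"veh-1": (0, 0, 4, 2),
             "veh-2": (3, 0, 7, 2),     # overlaps veh-1's predicted position
             "ped-1": (20, 20, 21, 21)}
alerts = safety_warnings(predicted)
```

Each flagged pair would then trigger the output of safety early-warning information.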
In one embodiment, the apparatus further includes a calibration parameter determining module, configured to measure longitude and latitude information of each road side base station by using a measuring instrument, and determine initial calibration parameters according to the longitude and latitude information; process the single base station sensing data of each road side base station by using the initial calibration parameters to obtain first single base station sensing data corresponding to each road side base station; and select, according to preset conditions, to-be-registered sensing data corresponding to each road side base station from the first single base station sensing data corresponding to that road side base station, and process the to-be-registered sensing data by using a preset registration algorithm to obtain the calibration parameters of the multi-base station system; the preset conditions are used to define the data range of the selected to-be-registered sensing data.
The environment sensing apparatus provided in this embodiment may implement the method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
For specific limitations of the environment sensing apparatus, reference may be made to the limitations of the environment sensing method above, which are not repeated here. All or part of the modules in the environment sensing apparatus may be implemented by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor in the base station, or stored in software form in a memory in the base station, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a server is provided, and the internal structure of the server may be as shown in fig. 8. The server includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the server is configured to provide computing and control capabilities. The memory of the server comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The database of the server is used for storing single base station perception data and perception information under a global scene. The network interface of the server is used for communicating with an external terminal through network connection. The computer program is executed by a processor to implement an environment awareness method.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is a block diagram of only a portion of the architecture associated with the subject application, and does not constitute a limitation on the servers to which the subject application applies, as a particular server may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a server comprising a memory and a processor, the memory having a computer program stored therein, the processor when executing the computer program implementing the steps of:
respectively acquiring single base station sensing data of each road side base station, and performing space-time synchronization processing on the single base station sensing data of each road side base station according to calibration parameters of a multi-base station system;
acquiring target detection results of the road side base stations based on the single base station sensing data after the time-space synchronization processing;
mapping the target detection result of each road side base station to a global scene to generate perception information under the global scene; the global scene is determined based on the perception range of the multi-base station system.
The implementation principle and technical effect of the server provided in this embodiment are similar to those of the method embodiments described above, and are not described herein again.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
if the road side base stations have perception overlap areas, performing data enhancement processing on the single base station perception data corresponding to the perception overlap areas to obtain enhanced single base station perception data;
and processing the enhanced single base station sensing data by using a target detection algorithm to obtain a target detection result of each road side base station.
In one embodiment, the perception information in the global scene comprises a target movement track in the global scene; the processor, when executing the computer program, further performs the steps of:
performing association matching on the target detection result mapped to the global scene and the previous target detection result to obtain a target moving track in the global scene; the previous target detection result comprises a target detection result corresponding to a time before the current time.
In one embodiment, the target detection result comprises a position of the target, a speed of the target, and a heading angle of the target, and the previous target detection result further comprises prediction information of the target; the processor, when executing the computer program, further performs the steps of:
calculating the position and the direction of the corresponding target after a preset time according to the target detection result of each road side base station and the relative position between each road side base station to obtain the prediction information of each target;
and performing correlation matching on the target detection result in the global scene according to the prediction information of each target to obtain a target moving track in the global scene.
In one embodiment, the processor when executing the computer program further performs the steps of:
determining a target road side base station from a plurality of road side base stations based on the position information in the candidate prediction information; the candidate prediction information is prediction information of any one target based on the current moment;
after the preset time length, obtaining current single base station sensing data of the target road side base station, and carrying out target detection on the current single base station sensing data to obtain a current target detection result of the target road side base station;
and if the current target detection result is matched with the candidate prediction information, associating the target corresponding to the candidate prediction information with the target in the current target detection result.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
if the current target detection result is not matched with the candidate prediction information, judging whether a target corresponding to the current target detection result is a newly added target;
and if the target corresponding to the current target detection result is the newly added target, adding the perception information of the newly added target in the perception information of the global scene.
In one embodiment, the processor when executing the computer program further performs the steps of:
obtaining position information in the candidate prediction information, and if the target road side base station does not detect a current target detection result corresponding to the position information, determining a target subsequent time at which a target detection result is matched with the prediction information in subsequent time; the subsequent time is a time after the current time;
and taking the candidate prediction information corresponding to the target time before the subsequent time as a target detection result of the target road side base station.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
judging whether potential safety hazards exist in the global scene or not according to the prediction information;
and if the potential safety hazard exists, outputting safety early warning information.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and acquiring the prediction information of the multiple targets, and determining that potential safety hazards exist in the global scene if the position information in the prediction information of the multiple targets is overlapped.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
measuring longitude and latitude information of each roadside base station by using a measuring instrument, and determining an initial calibration parameter according to the longitude and latitude information;
processing single base station sensing data of each road side base station by using the initial calibration parameters to obtain first single base station sensing data corresponding to each road side base station;
respectively selecting to-be-registered sensing data corresponding to each road side base station from the first single base station sensing data corresponding to each road side base station according to preset conditions, and processing the to-be-registered sensing data by using a preset registration algorithm to obtain the calibration parameters of the multi-base station system; the preset conditions are used for representing the data range of the selected to-be-registered sensing data.
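The initial calibration step, deriving metre-scale offsets from surveyed longitude and latitude, can be sketched with an equirectangular approximation (adequate over the few hundred metres a roadside base station covers). The subsequent refinement by the preset registration algorithm (for example, an ICP-style point-cloud registration) is not shown; the constant and function names below are assumptions.

```python
import math

EARTH_R = 6378137.0  # metres, WGS-84 equatorial radius

def latlon_to_local(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    """Convert a base station's surveyed latitude/longitude into metres
    relative to a reference station (equirectangular approximation).
    The result serves as an initial calibration parameter to be refined
    by the registration algorithm."""
    d_lat = math.radians(lat_deg - ref_lat_deg)
    d_lon = math.radians(lon_deg - ref_lon_deg)
    x = EARTH_R * d_lon * math.cos(math.radians(ref_lat_deg))  # east offset
    y = EARTH_R * d_lat                                        # north offset
    return x, y
```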
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
respectively acquiring single base station sensing data of each road side base station, and performing space-time synchronization processing on the single base station sensing data of each road side base station according to calibration parameters of a multi-base station system;
acquiring target detection results of the road side base stations based on the single base station sensing data after the time-space synchronization processing;
mapping the target detection result of each road side base station to a global scene to generate perception information under the global scene; the global scene is determined based on the perception range of the multi-base station system.
The implementation principle and technical effect of the computer-readable storage medium provided by this embodiment are similar to those of the above-described method embodiment, and are not described herein again.
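The mapping of a single base station's target detection result into the global scene amounts to applying that station's calibrated pose. A sketch, assuming a planar pose of translation plus yaw and illustrative field names (the disclosure does not fix a coordinate convention):

```python
import math

def to_global(detection, station_pose):
    """Map a single-station detection into the global scene by applying
    the station's calibrated pose (translation x, y and yaw angle).
    Field names are illustrative assumptions."""
    c, s = math.cos(station_pose["yaw"]), math.sin(station_pose["yaw"])
    gx = station_pose["x"] + c * detection["x"] - s * detection["y"]
    gy = station_pose["y"] + s * detection["x"] + c * detection["y"]
    return {"id": detection["id"], "x": gx, "y": gy}
```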
In one embodiment, the computer program when executed by the processor further performs the steps of:
if the road side base stations have perception overlapping areas, performing data enhancement processing on the single base station perception data corresponding to the perception overlapping areas to obtain enhanced single base station perception data;
and processing the enhanced single base station sensing data by using a target detection algorithm to obtain a target detection result of each road side base station.
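The disclosure does not fix a particular data enhancement; one plausible sketch for a perception overlap area is to densify each station's point cloud with the other station's points that fall inside the shared region (the union-style fusion and the overlap predicate are assumptions).

```python
def enhance_overlap(cloud_a, cloud_b, in_overlap):
    """Enhance two stations' perception data in their overlap area by
    adding each station's in-overlap points to the other's cloud.
    in_overlap is a predicate selecting points in the shared region."""
    extra_for_a = [p for p in cloud_b if in_overlap(p)]
    extra_for_b = [p for p in cloud_a if in_overlap(p)]
    return cloud_a + extra_for_a, cloud_b + extra_for_b
```

The enhanced clouds are then fed to the target detection algorithm as described above.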
In one embodiment, the perception information in the global scene comprises a target movement track in the global scene; the computer program when executed by the processor further realizes the steps of:
performing association matching on the target detection result mapped to the global scene and the previous target detection result to obtain a target moving track under the global scene; the previous target detection result comprises a target detection result corresponding to a time before the current time.
In one embodiment, the target detection result comprises a position of the target, a speed of the target, and a heading angle of the target, and the previous target detection result further comprises prediction information of the target; the computer program when executed by the processor further realizes the steps of:
calculating the position and the direction of the corresponding target after the preset time according to the target detection result of each road side base station and the relative position between each road side base station to obtain the prediction information of each target;
and performing correlation matching on the target detection result in the global scene according to the prediction information of each target to obtain a target moving track in the global scene.
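The prediction step, computing a target's position after a preset time from its detected position, speed, and heading angle, can be sketched under a constant-velocity assumption. The heading convention (radians, anticlockwise from the x-axis) is an assumption, not the disclosure's.

```python
import math

def predict(detection, dt):
    """Predict a target's position dt seconds ahead from its detected
    position, speed, and heading angle (constant-velocity model)."""
    x = detection["x"] + detection["speed"] * dt * math.cos(detection["heading"])
    y = detection["y"] + detection["speed"] * dt * math.sin(detection["heading"])
    return {"x": x, "y": y, "heading": detection["heading"]}
```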
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining a target road side base station from a plurality of road side base stations based on the position information in the candidate prediction information; the candidate prediction information is prediction information of any one target based on the current moment;
after the preset time length, acquiring current single base station sensing data of the target road side base station, and carrying out target detection on the current single base station sensing data to obtain a current target detection result of the target road side base station;
and if the current target detection result is matched with the candidate prediction information, associating the target corresponding to the candidate prediction information with the target in the current target detection result.
In one embodiment, the computer program when executed by the processor further performs the steps of:
if the current target detection result is not matched with the candidate prediction information, judging whether a target corresponding to the current target detection result is a newly added target;
and if the target corresponding to the current target detection result is the newly added target, adding the perception information of the newly added target in the perception information of the global scene.
In one embodiment, the computer program when executed by the processor further performs the steps of:
obtaining position information in the candidate prediction information, and if the target road side base station does not detect a current target detection result corresponding to the position information, determining, among subsequent times, a target subsequent time at which a target detection result matches the prediction information; a subsequent time is a time after the current time;
and taking the candidate prediction information corresponding to the target time before the target subsequent time as a target detection result of the target road side base station.
In one embodiment, the computer program when executed by the processor further performs the steps of:
judging whether potential safety hazards exist in the global scene or not according to the prediction information;
and if the potential safety hazard exists, outputting safety early warning information.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and acquiring the prediction information of the multiple targets, and determining that potential safety hazards exist in the global scene if the position information in the prediction information of the multiple targets is overlapped.
In one embodiment, the computer program when executed by the processor further performs the steps of:
measuring longitude and latitude information of each roadside base station by using a measuring instrument, and determining an initial calibration parameter according to the longitude and latitude information;
processing single base station sensing data of each road side base station by using the initial calibration parameters to obtain first single base station sensing data corresponding to each road side base station;
respectively selecting to-be-registered sensing data corresponding to each road side base station from the first single base station sensing data corresponding to each road side base station according to preset conditions, and processing the to-be-registered sensing data by using a preset registration algorithm to obtain the calibration parameters of the multi-base station system; the preset conditions are used for representing the data range of the selected to-be-registered sensing data.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An environment sensing method is applied to a multi-base station system, wherein the multi-base station system comprises a plurality of roadside base stations, and the method comprises the following steps:
respectively obtaining single base station sensing data of each road side base station, and performing space-time synchronization processing on the single base station sensing data of each road side base station according to calibration parameters of the multi-base station system;
acquiring target detection results of the road side base stations based on the single base station sensing data after the time-space synchronization processing;
mapping the target detection result of each road side base station to a global scene to generate perception information under the global scene; wherein the global scene is determined based on the perception range of the multi-base station system, and comprises: calculating the position and the direction of a corresponding target after a preset time according to the target detection result of each roadside base station and the relative position between the roadside base stations, and determining the prediction information of each target; determining a target roadside base station from the plurality of roadside base stations based on the location information in the candidate prediction information; the candidate prediction information is prediction information of any target based on the current moment; obtaining a current target detection result of the target road side base station after the preset time length; if the current target detection result is matched with the candidate prediction information, associating a target corresponding to the candidate prediction information with a target in the current target detection result to obtain a target moving track under the global scene;
obtaining position information in the candidate prediction information, and if the target road side base station does not detect a current target detection result corresponding to the position information, determining, among subsequent times, a target subsequent time at which a target detection result matches the prediction information; a subsequent time is a time after the current time;
and taking the candidate prediction information corresponding to the target time before the target subsequent time as a target detection result of the target road side base station.
2. The method according to claim 1, wherein the obtaining the target detection result of each road side base station based on the single base station sensing data after the time-space synchronization processing comprises:
if the roadside base stations have perception overlapping areas, performing data enhancement processing on single base station perception data corresponding to the perception overlapping areas to obtain enhanced single base station perception data;
and processing the enhanced single base station perception data by using a target detection algorithm to obtain a target detection result of each road side base station.
3. The method of claim 1, wherein the obtaining the current target detection result of the target road side base station after the preset duration comprises:
and after the preset time length, acquiring current single base station sensing data of the target road side base station, and performing target detection on the current single base station sensing data to obtain a current target detection result of the target road side base station.
4. The method of claim 3, further comprising:
if the current target detection result is not matched with the candidate prediction information, judging whether a target corresponding to the current target detection result is a newly added target;
and if the target corresponding to the current target detection result is a newly added target, adding the perception information of the newly added target in the perception information of the global scene.
5. The method of claim 1, further comprising:
judging whether the potential safety hazard exists in the global scene or not according to the prediction information;
and if the potential safety hazard exists, outputting safety early warning information.
6. The method according to claim 5, wherein the determining whether a potential safety hazard exists in the global scene according to the prediction information comprises:
obtaining the prediction information of a plurality of targets, and if the position information in the prediction information of the plurality of targets is overlapped, determining that the potential safety hazard exists in the global scene.
7. The method according to claim 1, wherein before performing space-time synchronization processing on the single base station sensing data of each of the roadside base stations according to the calibration parameters of the multi-base station system, the method further comprises:
measuring longitude and latitude information of each roadside base station by using a measuring instrument, and determining an initial calibration parameter according to the longitude and latitude information;
processing the single base station sensing data of each road side base station by using the initial calibration parameters to obtain first single base station sensing data corresponding to each road side base station;
respectively selecting to-be-registered sensing data corresponding to each road side base station from first single base station sensing data corresponding to each road side base station according to preset conditions, and processing the to-be-registered sensing data by using a preset registration algorithm to obtain calibration parameters of a multi-base station system; the preset conditions are used for representing the data range of the selected to-be-registered sensing data.
8. The environment perception device is applied to a multi-base station system, wherein the multi-base station system comprises a plurality of roadside base stations, and the device comprises:
the acquisition module is used for respectively acquiring single base station sensing data of each road side base station and performing space-time synchronization processing on the single base station sensing data of each road side base station according to the calibration parameters of the multi-base station system;
the target detection module is used for acquiring a target detection result of each road side base station based on the single base station sensing data after the time-space synchronization processing;
the determining module is used for mapping the target detection result of each road side base station to a global scene to generate perception information under the global scene; wherein the global scene is determined based on the perception range of the multi-base station system, and comprises: calculating the position and the direction of a corresponding target after a preset time according to the target detection result of each roadside base station and the relative position between the roadside base stations, and determining the prediction information of each target; determining a target roadside base station from the plurality of roadside base stations based on the location information in the candidate prediction information; obtaining a current target detection result of the target road side base station after the preset time; if the current target detection result is matched with the candidate prediction information, associating a target corresponding to the candidate prediction information with a target in the current target detection result to obtain a target moving track under the global scene; the candidate prediction information is prediction information of any target based on the current moment;
the determining module is further configured to obtain location information in the candidate prediction information, and determine, among subsequent times, a target subsequent time at which a target detection result matches the prediction information when the target roadside base station does not detect a current target detection result corresponding to the location information; a subsequent time is a time after the current time; and take the candidate prediction information corresponding to the target time before the target subsequent time as a target detection result of the target road side base station.
9. A server comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program performs the steps of the method according to any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202010778358.2A 2020-08-05 2020-08-05 Environment sensing method, device, server and readable storage medium Active CN114067556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010778358.2A CN114067556B (en) 2020-08-05 2020-08-05 Environment sensing method, device, server and readable storage medium

Publications (2)

Publication Number Publication Date
CN114067556A CN114067556A (en) 2022-02-18
CN114067556B true CN114067556B (en) 2023-03-14

Family

ID=80232045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010778358.2A Active CN114067556B (en) 2020-08-05 2020-08-05 Environment sensing method, device, server and readable storage medium

Country Status (1)

Country Link
CN (1) CN114067556B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100852A (en) * 2022-06-09 2022-09-23 智能汽车创新发展平台(上海)有限公司 High-availability roadside fusion sensing system and method for serving intelligent networked automobile
CN116564077B (en) * 2023-04-12 2024-03-15 广州爱浦路网络技术有限公司 Traffic condition detection method, device and medium based on communication network and data management technology

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102270067A (en) * 2011-06-17 2011-12-07 清华大学 Contact track fusion method of multiple hierarchical cameras on interactive surface
CN108437986A (en) * 2017-02-16 2018-08-24 上海汽车集团股份有限公司 Vehicle drive assist system and householder method
CN109272745A (en) * 2018-08-20 2019-01-25 浙江工业大学 A kind of track of vehicle prediction technique based on deep neural network
CN109829386A (en) * 2019-01-04 2019-05-31 清华大学 Intelligent vehicle based on Multi-source Information Fusion can traffic areas detection method
WO2019161663A1 (en) * 2018-02-24 2019-08-29 北京图森未来科技有限公司 Harbor area monitoring method and system, and central control system
CN110412595A (en) * 2019-06-04 2019-11-05 深圳市速腾聚创科技有限公司 Roadbed cognitive method, system, vehicle, equipment and storage medium
CN111090095A (en) * 2019-12-24 2020-05-01 联创汽车电子有限公司 Information fusion environment perception system and perception method thereof
CN111316286A (en) * 2019-03-27 2020-06-19 深圳市大疆创新科技有限公司 Trajectory prediction method and device, storage medium, driving system and vehicle

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110487288B (en) * 2018-05-14 2024-03-01 华为技术有限公司 Road estimation method and road estimation system
CN111316127A (en) * 2018-12-29 2020-06-19 深圳市大疆创新科技有限公司 Target track determining method, target tracking system and vehicle
CN111414852A (en) * 2020-03-19 2020-07-14 驭势科技(南京)有限公司 Image prediction and vehicle behavior planning method, device and system and storage medium

Also Published As

Publication number Publication date
CN114067556A (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN114091561A (en) Target tracking method, device, server and readable storage medium
CN109901139B (en) Laser radar calibration method, device, equipment and storage medium
US10909395B2 (en) Object detection apparatus
JP5114514B2 (en) Position estimation device
US9794519B2 (en) Positioning apparatus and positioning method regarding a position of mobile object
US9069055B2 (en) Wireless positioning method and apparatus using wireless sensor network
CN114067556B (en) Environment sensing method, device, server and readable storage medium
US20220157168A1 (en) V2X with 5G/6G Image Exchange and AI-Based Viewpoint Fusion
CN111161353A (en) Vehicle positioning method and device, readable storage medium and computer equipment
CN111536990A (en) On-line external reference mis-calibration detection between sensors
JP2017181476A (en) Vehicle location detection device, vehicle location detection method and vehicle location detection-purpose computer program
CN114449533B (en) Base station deployment method, environment awareness method, device, computer equipment and storage medium
CN111353510B (en) Multi-sensor target detection method, multi-sensor target detection device, computer equipment and storage medium
US11417204B2 (en) Vehicle identification method and system
Muresan et al. Multimodal sparse LIDAR object tracking in clutter
CN114371484A (en) Vehicle positioning method and device, computer equipment and storage medium
CN114067555B (en) Registration method and device for data of multiple base stations, server and readable storage medium
US11215459B2 (en) Object recognition device, object recognition method and program
CN112689234A (en) Indoor vehicle positioning method and device, computer equipment and storage medium
CN113203424B (en) Multi-sensor data fusion method and device and related equipment
CN115272408A (en) Vehicle stationary detection method, device, computer equipment and storage medium
CN113611112B (en) Target association method, device, equipment and storage medium
CN114509762A (en) Data processing method, device, equipment and medium
CN114078325B (en) Multi-perception system registration method, device, computer equipment and storage medium
Mikhalev et al. Fusion of sensor data for source localization using the Hough transform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant