CN116338717A - Sensor data fusion method and device, computer equipment and readable storage medium


Info

Publication number
CN116338717A
Authority
CN
China
Prior art keywords
information
target
result
processed
perception
Prior art date
Legal status
Pending
Application number
CN202310343149.9A
Other languages
Chinese (zh)
Inventor
康晓华
谢欣燕
曹扬
张辉
Current Assignee
Sany Intelligent Mining Technology Co Ltd
Original Assignee
Sany Intelligent Mining Technology Co Ltd
Application filed by Sany Intelligent Mining Technology Co Ltd filed Critical Sany Intelligent Mining Technology Co Ltd
Priority to CN202310343149.9A
Publication of CN116338717A
Legal status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The application discloses a sensor data fusion method, a device, computer equipment and a computer readable storage medium, relating to the technical field of unmanned driving. It aims to supplement the fused result with millimeter wave detection results to enhance sensing capability, to enlarge the sensing range by using collaborative sensing results and road side sensing results, and to improve sensing precision. The method comprises the following steps: when perception information is received, determining a target fusion mode among a plurality of fusion modes, and determining a target detection range by using a space-time synchronization operation; obtaining information to be processed within the target detection range, and matching the information to be processed by using a matching rule to obtain a target matching result; performing compensation calculation on the information to be processed by using a dust detection algorithm to obtain a millimeter wave compensation result, and integrating the target matching result, the millimeter wave compensation result and scene perception parameters of the perception information by using an integration rule to obtain a target fusion result; and processing the target fusion result by using a Kalman algorithm to obtain obstacle related information.

Description

Sensor data fusion method and device, computer equipment and readable storage medium
Technical Field
The present disclosure relates to the field of unmanned driving technologies, and in particular, to a sensor data fusion method and device, a computer device, and a readable storage medium.
Background
With the rapid development of surface mining, unmanned driving in surface mines has also developed rapidly. Because dust in the mining environment is heavy and road conditions are complex, not all problems can be solved by relying on a single sensor. During automatic driving, information about the environment around the vehicle must therefore be acquired by multiple sensors, whose sensing information complements one another according to the characteristics of each sensor; the obstacle information and tracked-target information extracted in the harsh mining environment are then matched, so that an unmanned mining truck can adapt to the complex mining environment during use and safety and smoothness throughout driving are guaranteed.
In the related art, sensor data fusion methods fuse the output results of laser radar, millimeter wave radar, images and other information to obtain driving environment information. However, the applicant realized that vehicle operation in a mining environment raises dust, which causes the laser radar to miss detections, so the sensing precision of sensor data fusion is low; moreover, when a single sensor is used to sense the complex road conditions of a mining area, detections are missed, sensing precision is low, and the vehicle runs a large collision risk.
Disclosure of Invention
In view of this, the application provides a sensor data fusion method, a device, computer equipment and a readable storage medium, mainly aiming to solve the problems that dust raised by vehicle operation in the mining environment causes the laser radar to miss detections, so that the sensing precision of multi-sensor fusion is not high, and that sensing the complex road conditions of a mining area with a single sensor leads to missed detections, low sensing precision and a large collision risk for the vehicle.
According to a first aspect of the present application, there is provided a sensor data fusion method, the method comprising:
when receiving input perception information, determining a target fusion mode corresponding to the perception information in a plurality of fusion modes, and determining a target detection range corresponding to the target fusion mode by using space-time synchronization operation;
obtaining information to be processed in the target detection range from the perception information, and matching the information to be processed by utilizing a matching rule corresponding to the target fusion mode to obtain a target matching result;
performing compensation calculation on the information to be processed by using a dust detection algorithm to obtain a millimeter wave compensation result, and integrating the target matching result, the millimeter wave compensation result and scene perception parameters of the perception information by using an integration rule corresponding to the target fusion mode to obtain a target fusion result;
and acquiring a Kalman algorithm, and processing the target fusion result by using the Kalman algorithm to obtain obstacle related information.
Optionally, the determining the target detection range corresponding to the target fusion mode by using a space-time synchronization operation includes:
determining a plurality of target sensors corresponding to the target fusion mode, acquiring sensor state information from the sensing information, and detecting the sensor state information, wherein the plurality of target sensors comprise a laser radar, a millimeter wave radar and a camera sensor, and the sensor state information indicates the working states of the plurality of target sensors;
if the sensor state information indicates that the working states of the plurality of target sensors are normal, acquiring coordinate information of each target sensor;
determining the sample vehicle corresponding to the perception information, acquiring rear axle center position information of the sample vehicle, and adjusting coordinate information of each target sensor by utilizing the rear axle center position information so as to unify the coordinate origin of each target sensor to the rear axle center of the sample vehicle;
acquiring a time stamp of the laser radar, and sequentially determining the time stamp of the millimeter wave radar and the time stamp of the camera sensor that are closest to the time stamp of the laser radar;
unifying the laser radar, the millimeter wave radar and the camera sensor under the same time stamp by using the time stamp of the laser radar, the time stamp of the millimeter wave radar and the time stamp of the camera sensor;
obtaining a detection range of each target sensor to obtain a plurality of detection ranges, and determining an intersection of the plurality of detection ranges to obtain the target detection range.
Optionally, unifying the laser radar, the millimeter wave radar, and the camera sensor under the same time stamp by using the time stamp of the laser radar, the time stamp of the millimeter wave radar, and the time stamp of the camera sensor includes:
acquiring millimeter wave detection results corresponding to the time stamp of the millimeter wave radar and camera detection results corresponding to the time stamp of the camera sensor;
respectively calculating the difference between the time stamp of the millimeter wave radar and the time stamp of the laser radar, and between the time stamp of the camera sensor and the time stamp of the laser radar, to obtain a millimeter wave time difference and a camera time difference;
and performing motion compensation operation on the millimeter wave detection result by utilizing the millimeter wave time difference, and performing motion compensation operation on the camera detection result by utilizing the camera time difference, so that the laser radar, the millimeter wave radar and the camera sensor are under the same time stamp.
Optionally, the obtaining the information to be processed in the target detection range from the sensing information, and matching the information to be processed by using a matching rule corresponding to the target fusion mode, to obtain a target matching result, including:
acquiring a traditional laser radar sensing output result, an intelligent laser radar sensing output result, a millimeter wave sensing output result and the scene sensing parameters from the sensing information, and respectively acquiring, within the target detection range, traditional laser radar sensing information to be processed, intelligent laser radar sensing information to be processed, millimeter wave sensing information to be processed and scene sensing information to be processed from the traditional laser radar sensing output result, the intelligent laser radar sensing output result, the millimeter wave sensing output result and the scene sensing parameters, wherein the scene sensing parameters comprise a communication cooperative sensing result and a road side sensing result;
taking the traditional perception information of the laser radar to be processed, the intelligent perception information of the laser radar to be processed, the millimeter wave perception information to be processed and the scene perception information to be processed as the information to be processed, and acquiring the matching rule corresponding to the target fusion mode;
determining first content matched with the intelligent perception information of the laser radar to be processed in the traditional perception information of the laser radar to be processed by utilizing the matching rule, acquiring second content matched with the first content in the intelligent perception information of the laser radar to be processed, and replacing the first content in the traditional perception information of the laser radar to be processed by utilizing the second content to obtain a first matching result, wherein if first unmatched content except the second content exists in the intelligent perception information of the laser radar to be processed, the first unmatched content is filtered;
determining third content matched with the millimeter wave sensing information to be processed in the first matching result by utilizing the matching rule, acquiring fourth content matched with the third content in the millimeter wave sensing information to be processed, and replacing the third content in the first matching result by utilizing the fourth content to obtain a second matching result;
and determining fifth content matched with the to-be-processed scene perception information in the second matching result by utilizing the matching rule, acquiring sixth content matched with the fifth content in the to-be-processed scene perception information, and replacing the fifth content in the second matching result by utilizing the sixth content to obtain the target matching result.
Optionally, the performing compensation calculation on the information to be processed by using a dust detection algorithm to obtain a millimeter wave compensation result, and integrating the target matching result, the millimeter wave compensation result, and the scene perception parameter of the perception information by using an integration rule corresponding to the target fusion mode to obtain a target fusion result, includes:
acquiring millimeter wave perception information to be processed from the information to be processed, acquiring fourth content acquired from the millimeter wave perception information to be processed by utilizing the matching rule, and determining second unmatched content except the fourth content in the millimeter wave perception information to be processed;
acquiring the dust detection algorithm, determining a plurality of pieces of position information of a plurality of dust particles by using the dust detection algorithm, determining a plurality of pieces of associated sensing information associated with the plurality of pieces of position information in the second unmatched content, and taking the plurality of pieces of associated sensing information as the millimeter wave compensation result;
and acquiring the scene perception parameters from the perception information, acquiring the integration rule corresponding to the target fusion mode, and integrating the target matching result, the millimeter wave compensation result and the scene perception parameters by utilizing the integration rule to obtain the target fusion result.
Optionally, the method further comprises:
when the perception information comprises a laser radar traditional perception output result, a laser radar intelligent perception output result, a millimeter wave perception output result and scene perception parameters, acquiring to-be-processed laser radar traditional perception information, to-be-processed laser radar intelligent perception information, to-be-processed millimeter wave perception information and to-be-processed scene perception information from the to-be-processed information;
acquiring appointed laser radar traditional perception information which does not comprise the laser radar traditional perception information to be processed from the laser radar traditional perception output result, acquiring appointed laser radar intelligent perception information which does not comprise the laser radar intelligent perception information to be processed from the laser radar intelligent perception output result, acquiring appointed millimeter wave perception information which does not comprise the millimeter wave perception information to be processed from the millimeter wave perception output result, and acquiring appointed scene perception information which does not comprise the scene perception information to be processed from the scene perception parameters;
acquiring first content determined in the traditional perception information of the laser radar to be processed by utilizing the matching rule, and determining third unmatched content except the first content in the traditional perception information of the laser radar to be processed;
and generating the target fusion result based on the appointed laser radar traditional perception information, the appointed laser radar intelligent perception information, the appointed millimeter wave perception information, the appointed scene perception information, the third unmatched content, the target matching result, the millimeter wave compensation result and the scene perception parameter.
Optionally, the processing the target fusion result by using the kalman algorithm to obtain obstacle related information includes:
acquiring a last fusion result of the target fusion result, determining time information corresponding to the last fusion result, acquiring time information corresponding to the target fusion result, and calculating by utilizing the time information corresponding to the last fusion result and the time information corresponding to the target fusion result to obtain a target time difference;
acquiring a time difference algorithm, and calculating the target time difference by using the time difference algorithm to obtain speed information corresponding to the target fusion result;
acquiring a sample vehicle corresponding to the perception information, and acquiring a preset motion model corresponding to the sample vehicle, wherein the preset motion model is a preset virtual model of uniform motion;
acquiring the speed information, and performing prediction operation on the speed information and the target time difference by using the preset motion model to obtain predicted position information;
acquiring the speed information from the target fusion result, acquiring a plurality of target sensors corresponding to the target fusion mode, and acquiring a plurality of sensor detection position information corresponding to the plurality of target sensors;
and updating the speed information, the target fusion result, the detection position information of the plurality of sensors and the prediction position information by using the Kalman algorithm to obtain obstacle related information.
According to a second aspect of the present application, there is provided a sensor data fusion device comprising:
the determining module is used for determining a target fusion mode corresponding to the perception information in a plurality of fusion modes when the input perception information is received, and determining a target detection range corresponding to the target fusion mode by using space-time synchronization operation;
the matching module is used for acquiring information to be processed in the target detection range from the perception information, and matching the information to be processed by utilizing a matching rule corresponding to the target fusion mode to obtain a target matching result;
The integrating module is used for carrying out compensation calculation on the information to be processed by utilizing a dust detection algorithm to obtain a millimeter wave compensation result, and integrating the target matching result, the millimeter wave compensation result and scene perception parameters of the perception information by utilizing an integrating rule corresponding to the target fusion mode to obtain a target fusion result;
and the processing module is used for acquiring a Kalman algorithm, and processing the target fusion result by using the Kalman algorithm to obtain obstacle related information.
Optionally, the determining module is configured to determine a plurality of target sensors corresponding to the target fusion mode, obtain sensor state information from the sensing information, and detect the sensor state information, where the plurality of target sensors include a laser radar, a millimeter wave radar, and a camera sensor, and the sensor state information indicates the working states of the plurality of target sensors; if the sensor state information indicates that the working states of the plurality of target sensors are normal, acquiring coordinate information of each target sensor; determining the sample vehicle corresponding to the perception information, acquiring rear axle center position information of the sample vehicle, and adjusting coordinate information of each target sensor by utilizing the rear axle center position information so as to unify the coordinate origin of each target sensor to the rear axle center of the sample vehicle; acquiring a time stamp of the laser radar, and sequentially determining the time stamp of the millimeter wave radar and the time stamp of the camera sensor that are closest to the time stamp of the laser radar; unifying the laser radar, the millimeter wave radar and the camera sensor under the same time stamp by using the time stamp of the laser radar, the time stamp of the millimeter wave radar and the time stamp of the camera sensor; obtaining a detection range of each target sensor to obtain a plurality of detection ranges, and determining an intersection of the plurality of detection ranges to obtain the target detection range.
Optionally, the determining module is configured to obtain a millimeter wave detection result corresponding to a timestamp of the millimeter wave radar and a camera detection result corresponding to a timestamp of the camera sensor; respectively calculating the time stamp of the millimeter wave radar, the time stamp of the camera sensor and the time stamp of the laser radar to obtain a millimeter wave time difference and a camera time difference; and performing motion compensation operation on the millimeter wave detection result by utilizing the millimeter wave time difference, and performing motion compensation operation on the camera detection result by utilizing the camera time difference, so that the laser radar, the millimeter wave radar and the camera sensor are under the same time stamp.
Optionally, the matching module is configured to obtain a laser radar traditional sensing output result, a laser radar intelligent sensing output result, a millimeter wave sensing output result, and the scene sensing parameter from the sensing information, and obtain to-be-processed laser radar traditional sensing information, to-be-processed laser radar intelligent sensing information, to-be-processed millimeter wave sensing information, and to-be-processed scene sensing information within the target detection range from the laser radar traditional sensing output result, the laser radar intelligent sensing output result, the millimeter wave sensing output result, and the scene sensing parameter respectively, where the scene sensing parameter includes a communication cooperative sensing result, and a roadside sensing result; taking the traditional perception information of the laser radar to be processed, the intelligent perception information of the laser radar to be processed, the millimeter wave perception information to be processed and the scene perception information to be processed as the information to be processed, and acquiring the matching rule corresponding to the target fusion mode; determining first content matched with the intelligent perception information of the laser radar to be processed in the traditional perception information of the laser radar to be processed by utilizing the matching rule, acquiring second content matched with the first content in the intelligent perception information of the laser radar to be processed, and replacing the first content in the traditional perception information of the laser radar to be processed by utilizing the second content to obtain a first matching result, wherein if first unmatched content except the second content exists in the intelligent perception information of the laser radar to be processed, the first unmatched content is filtered; determining third content matched with the millimeter wave sensing information to be processed in the first matching result by utilizing the matching rule, acquiring fourth content matched with the third content in the millimeter wave sensing information to be processed, and replacing the third content in the first matching result by utilizing the fourth content to obtain a second matching result; and determining fifth content matched with the to-be-processed scene perception information in the second matching result by utilizing the matching rule, acquiring sixth content matched with the fifth content in the to-be-processed scene perception information, and replacing the fifth content in the second matching result by utilizing the sixth content to obtain the target matching result.
Optionally, the integrating module is configured to obtain millimeter wave sensing information to be processed from the information to be processed, obtain fourth content obtained from the millimeter wave sensing information to be processed by using the matching rule, and determine second unmatched content in the millimeter wave sensing information to be processed except for the fourth content; acquiring the dust detection algorithm, determining a plurality of pieces of position information of a plurality of dust particles by using the dust detection algorithm, determining a plurality of pieces of associated sensing information associated with the plurality of pieces of position information in the second unmatched content, and taking the plurality of pieces of associated sensing information as the millimeter wave compensation result; and acquiring the scene perception parameters from the perception information, acquiring the integration rule corresponding to the target fusion mode, and integrating the target matching result, the millimeter wave compensation result and the scene perception parameters by utilizing the integration rule to obtain the target fusion result.
Optionally, the apparatus further comprises: and generating a module.
The generation module is used for acquiring traditional perception information of the laser radar to be processed, intelligent perception information of the laser radar to be processed, millimeter wave perception information to be processed and scene perception information to be processed in the information to be processed when the perception information comprises traditional perception output results of the laser radar, intelligent perception output results of the laser radar, millimeter wave perception output results and scene perception parameters; acquiring appointed laser radar traditional perception information which does not comprise the laser radar traditional perception information to be processed from the laser radar traditional perception output result, acquiring appointed laser radar intelligent perception information which does not comprise the laser radar intelligent perception information to be processed from the laser radar intelligent perception output result, acquiring appointed millimeter wave perception information which does not comprise the millimeter wave perception information to be processed from the millimeter wave perception output result, and acquiring appointed scene perception information which does not comprise the scene perception information to be processed from the scene perception parameters; acquiring first content determined in the traditional perception information of the laser radar to be processed by utilizing the matching rule, and determining third unmatched content except the first content in the traditional perception information of the laser radar to be processed; and generating the target fusion result based on the appointed laser radar traditional perception information, the appointed laser radar intelligent perception information, the appointed millimeter wave perception information, the appointed scene perception information, the third unmatched content, the target matching result, the millimeter wave compensation result and the scene perception parameter.
Optionally, the processing module is configured to obtain a previous fusion result of the target fusion result, determine time information corresponding to the previous fusion result, obtain time information corresponding to the target fusion result, and calculate using the time information corresponding to the previous fusion result and the time information corresponding to the target fusion result to obtain a target time difference; acquiring a time difference algorithm, and calculating the target time difference by using the time difference algorithm to obtain speed information corresponding to the target fusion result; acquiring a sample vehicle corresponding to the perception information, and acquiring a preset motion model corresponding to the sample vehicle, wherein the preset motion model is a preset virtual model of uniform motion; acquiring the speed information, and performing prediction operation on the speed information and the target time difference by using the preset motion model to obtain predicted position information; acquiring the speed information from the target fusion result, acquiring a plurality of target sensors corresponding to the target fusion mode, and acquiring a plurality of sensor detection position information corresponding to the plurality of target sensors; and updating the speed information, the target fusion result, the detection position information of the plurality of sensors and the prediction position information by using the Kalman algorithm to obtain obstacle related information.
According to a third aspect of the present application there is provided a computer device comprising a memory storing a computer program and a processor implementing the steps of the method of any of the first aspects described above when the computer program is executed by the processor.
According to a fourth aspect of the present application there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method of any of the first aspects described above.
By means of the above technical scheme, in the sensor data fusion method, device, computer equipment and computer readable storage medium provided by the application, when input perception information is received, a target fusion mode corresponding to the perception information is determined among a plurality of fusion modes, and a target detection range corresponding to the target fusion mode is determined through a space-time synchronization operation. Information to be processed within the target detection range is obtained from the perception information and matched using the matching rule corresponding to the target fusion mode to obtain a target matching result. Compensation calculation is performed on the information to be processed using a dust detection algorithm to obtain a millimeter wave compensation result, and the target matching result, the millimeter wave compensation result and the scene perception parameters of the perception information are integrated using the integration rule corresponding to the target fusion mode to obtain a target fusion result. A Kalman algorithm is then acquired and used to process the target fusion result to obtain obstacle related information. The perception result is thus supplemented by the detection results of the millimeter wave radar, enhancing the perception capability of the vehicle under dust conditions; the V2V collaborative perception result and the road side perception result enlarge the perception range of the vehicle and improve perception precision, so that the vehicle better adapts to the mining environment.
The foregoing description is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be understood more clearly and implemented according to the content of the specification, and in order that the above and other objects, features and advantages of the present application may be more readily apparent, a detailed description of the present application is given below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a schematic flow chart of a sensor data fusion method according to an embodiment of the present application;
FIG. 2A is a schematic flow chart of a method for sensor data fusion according to an embodiment of the present disclosure;
FIG. 2B is a schematic flow chart of a method for fusion tracking according to an embodiment of the present disclosure;
FIG. 2C is a schematic flow chart of a multi-sensor sensing fusion according to an embodiment of the present application;
FIG. 3A is a schematic diagram of a sensor data fusion structure according to an embodiment of the present disclosure;
FIG. 3B is a schematic diagram illustrating a sensor data fusion structure according to an embodiment of the present disclosure;
FIG. 4 shows a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiment of the application provides a sensor data fusion method, as shown in fig. 1, which comprises the following steps:
101. When the input perception information is received, determining a target fusion mode corresponding to the perception information among a plurality of fusion modes, and determining a target detection range corresponding to the target fusion mode by using a space-time synchronization operation.
Because dust in an open-pit mining area is heavy and road conditions are complex, not all problems can be solved by relying on a single sensor, so data fusion with multiple sensors is important. However, even when the output results of laser radar, millimeter wave radar, images and other information are fused, the related art cannot sense accurately under dust conditions, and detections are missed in complex road environments, so sensing accuracy is not high.
To solve this problem, the application provides a sensor data fusion method. It uses information such as the traditional sensing output result of the laser radar, the laser radar AI (Artificial Intelligence) sensing output result, the millimeter wave sensing output result and sensor state information; fuses detection results through algorithms such as multi-sensor space-time synchronization, FOV (field of view) judgment, spatial matching, millimeter wave detection result compensation and V2V (Vehicle to Vehicle) fusion; tracks with the fused result; and finally outputs the position, size, direction angle, category, speed and tracking ID of each obstacle. The execution body may be a multi-sensor fusion system that provides services to users by means of the computing power of a server. The server may be an independent server, or a server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data and artificial intelligence platforms. In this way, the multi-sensor fusion system can analyze more accurate driving environment information and improve perception precision.
In the embodiment of the application, when receiving input perception information, the multi-sensor fusion system determines a target fusion mode corresponding to the perception information among a plurality of fusion modes. The perception information is the perception output of a plurality of sensors at a certain moment, and its fusion mode may be selected according to the actual operating effect. The multi-sensor fusion system then determines the target detection range corresponding to the target fusion mode by using a space-time synchronization operation; space-time synchronization ensures that the obstacle information detected by different sensors refers to the same time point, and unifies the detected position information under one coordinate system. In this way, the multi-sensor fusion system can take perception information suited to the actual application scene and determine the corresponding target detection range, improving fusion perception precision.
102. Acquiring information to be processed within the target detection range from the perception information, and matching the information to be processed by using the matching rule corresponding to the target fusion mode to obtain a target matching result.
To guarantee the safety of unmanned driving in the mining area, the accuracy and stability of fusion perception must be improved, so the multi-sensor fusion system matches the target perception information acquired from the plurality of sensors. In the embodiment of the application, the multi-sensor fusion system acquires the information to be processed within the target detection range from the perception information, and matches it using the matching rule corresponding to the target fusion mode to obtain a target matching result. By matching the pieces of data in the information to be processed, the system corrects and supplements the matched result, improving the accuracy of fusion perception.
103. Performing compensation calculation on the information to be processed by using a dust detection algorithm to obtain a millimeter wave compensation result, and integrating the target matching result, the millimeter wave compensation result and the scene perception parameters of the perception information by using the integration rule corresponding to the target fusion mode to obtain a target fusion result.
In the embodiment of the application, the multi-sensor fusion system performs compensation calculation on the information to be processed by using a dust detection algorithm to obtain a millimeter wave compensation result. Specifically, the compensation calculation uses the position information of dust detected by the laser radar to search for millimeter wave radar detections near the dust; if a millimeter wave detection is found there, it is considered a missed detection caused by dust occlusion, and the millimeter wave detection is recalled, which enhances the perception capability of the vehicle and improves perception precision. The multi-sensor fusion system then integrates the target matching result, the millimeter wave compensation result and the scene perception parameters of the perception information by using the integration rule corresponding to the target fusion mode to obtain a target fusion result. To avoid vehicle collision risk, the application adopts V2V fusion, that is, fusion using scene perception parameters comprising a V2V collaborative perception result and a road side perception result, which addresses the collision risk that arises when the vehicle-end sensors cannot detect an obstacle under complex road conditions.
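As a rough illustration of this recall step, the following is a minimal Python sketch that searches the unmatched millimeter wave detections near the laser radar's dust positions; the two-dimensional layout, dictionary fields and search radius are assumptions made for illustration, not details taken from the application.

```python
import math

def recall_near_dust(dust_positions, unmatched_mmwave, radius=2.0):
    """Recall millimeter wave detections found near laser-radar-detected
    dust clouds, treating them as missed detections caused by dust
    occlusion.

    dust_positions: list of (x, y) dust cluster centers from the lidar
        dust detection algorithm.
    unmatched_mmwave: millimeter wave detections left unmatched by the
        matching rule, each a dict with an (x, y) 'position'.
    radius: search radius in meters around each dust cluster (assumed).
    """
    recalled = []
    for det in unmatched_mmwave:
        for dust in dust_positions:
            if math.dist(det["position"], dust) <= radius:
                recalled.append(det)  # likely a real obstacle hidden by dust
                break
    return recalled
```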
104. Acquiring a Kalman algorithm, and processing the target fusion result by using the Kalman algorithm to obtain obstacle related information.
In the embodiment of the application, the multi-sensor fusion system acquires a Kalman algorithm and processes the target fusion result with it to obtain obstacle related information. The system thereby processes and tracks the obtained target fusion result; processing through the Kalman algorithm yields obstacle related information such as the position, size, direction angle, category, speed and tracking ID of each obstacle, providing more accurate driving environment information and allowing the vehicle to better adapt to the mining environment.
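The application names the Kalman algorithm without giving its equations; a standard constant-velocity Kalman filter of the kind commonly used for such tracking can be sketched as follows, with assumed noise values and a position-only measurement model.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal Kalman filter over the state [x, y, vx, vy]."""

    def __init__(self, x0, y0, q=0.1, r=0.5):
        self.x = np.array([x0, y0, 0.0, 0.0])  # state estimate
        self.P = np.eye(4)                     # state covariance
        self.q, self.r = q, r                  # process / measurement noise

    def predict(self, dt):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt                 # x += vx*dt, y += vy*dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * np.eye(4)
        return self.x[:2]                      # predicted position

    def update(self, z):
        H = np.zeros((2, 4))
        H[0, 0] = H[1, 1] = 1.0                # observe position only
        S = H @ self.P @ H.T + self.r * np.eye(2)
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ (np.asarray(z, dtype=float) - H @ self.x)
        self.P = (np.eye(4) - K @ H) @ self.P
        return self.x                          # fused position and velocity
```

In a tracking loop, predict() would be called with the target time difference between successive fusion results, and update() with the fused measured position, matching the predict-then-update flow described in the optional steps above.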
In the method provided by the embodiment of the application, when input perception information is received, a target fusion mode corresponding to the perception information is determined among a plurality of fusion modes, and a target detection range corresponding to the target fusion mode is determined by a space-time synchronization operation. Information to be processed within the target detection range is acquired from the perception information and matched using the matching rule corresponding to the target fusion mode to obtain a target matching result. Compensation calculation is performed on the information to be processed by a dust detection algorithm to obtain a millimeter wave compensation result, and the target matching result, the millimeter wave compensation result and the scene perception parameters of the perception information are integrated by the integration rule corresponding to the target fusion mode. A Kalman algorithm is then acquired and used to process the target fusion result to obtain obstacle related information. The perception result is supplemented by the detection results of the millimeter wave radar, enhancing the perception capability of the vehicle under dust conditions; the V2V collaborative perception result and the road side perception result enlarge the perception range of the vehicle and improve perception precision, so that the vehicle better adapts to the mining environment.
Further, as a refinement and extension of the foregoing embodiment, in order to fully describe a specific implementation procedure of the embodiment, another method for fusing sensor data is provided in the embodiment of the present application, as shown in fig. 2A, where the method includes:
201. When the input perception information is received, determining a target fusion mode corresponding to the perception information among a plurality of fusion modes.
The user manually inputs an information combination into the multi-sensor fusion system. The information combination comprises the traditional laser radar sensing output result, the laser radar intelligent sensing output result, the millimeter wave sensing output result, sensor state information, the communication cooperative sensing result and the road side sensing result, and the user can select a suitable combination according to the current driving environment. In the embodiment of the application, when receiving the input perception information, the multi-sensor fusion system determines a target fusion mode corresponding to the perception information among a plurality of fusion modes. The system reads configuration files at program initialization to load different fusion modes, of which there are three. Mode one: the traditional laser radar perception output result and the laser radar intelligent perception output result are fused. Mode two: the traditional laser radar perception output result, the laser radar intelligent perception output result and the millimeter wave perception output result are fused. Mode three: the traditional laser radar perception output result, the laser radar intelligent perception output result, the millimeter wave perception output result, the communication cooperative perception result and the road side perception result are fused. The user can select the multi-sensor perception fusion input according to the actual working environment and the operating effect of the vehicle; for example, if the millimeter wave radar detects poorly in a certain environment, the user can manually choose to perform perception fusion with the laser radar detection results only. The embodiment of the application is described taking mode three as an example: the fused perception result is supplemented by the millimeter wave perception output result, enhancing the perception capability of the vehicle under dust conditions, while the communication cooperative perception result and the road side perception result enlarge the perception range of the vehicle, improve perception precision and allow the vehicle to better adapt to the mining environment.
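By way of illustration only, reading a configuration file at program initialization to load one of the three fusion modes might look like the following minimal sketch; the file name, key name and input labels are assumptions, not taken from the application.

```python
import json

# Perception inputs fused in each mode (labels are illustrative)
FUSION_MODES = {
    1: ["lidar_traditional", "lidar_ai"],
    2: ["lidar_traditional", "lidar_ai", "mmwave"],
    3: ["lidar_traditional", "lidar_ai", "mmwave", "v2v", "roadside"],
}

def load_fusion_mode(config_path="fusion_config.json"):
    """Read the configuration file at initialization and return the list
    of perception inputs to fuse for the selected mode."""
    with open(config_path) as f:
        mode = json.load(f)["fusion_mode"]  # e.g. file contains {"fusion_mode": 3}
    return FUSION_MODES[mode]
```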
202. Determining a sample vehicle corresponding to the sensing information, acquiring the rear axle center position information of the sample vehicle, and adjusting the coordinate information of each target sensor by using the rear axle center position information, so as to unify the coordinate origin of each target sensor to the rear axle center of the sample vehicle.
To guarantee the accuracy of multi-sensor data fusion, the application spatially synchronizes the plurality of sensors, that is, unifies the coordinate axes of the sensors to a coordinate origin at the center of the vehicle's rear axle. In the embodiment of the application, the multi-sensor fusion system determines a plurality of target sensors corresponding to the target fusion mode, the plurality of target sensors comprising a laser radar, a millimeter wave radar and a camera sensor. Note that if the target fusion mode corresponding to the sensing information is mode one, the target sensors comprise a laser radar and a camera sensor; if it is mode two, they comprise a laser radar, a millimeter wave radar and a camera sensor. The multi-sensor fusion system then acquires sensor state information, which indicates the working states of the plurality of target sensors, from the sensing information and checks it. If the sensor state information indicates that the working states of the plurality of target sensors are normal, the coordinate information of each target sensor is acquired. Monitoring whether the sensors work normally through the sensor state information allows abnormal conditions to be handled promptly when found, and prevents abnormal sensing information from affecting the data fusion of the multi-sensor fusion system. The multi-sensor fusion system then determines the sample vehicle corresponding to the sensing information, acquires the rear axle center position information of the sample vehicle, and adjusts the coordinate information of each target sensor using the rear axle center position information, so as to unify the coordinate origin of each target sensor to the rear axle center of the sample vehicle. By unifying the coordinate axes of the sensors to a coordinate origin at the rear axle center of the sample vehicle, the system achieves spatial synchronization of the plurality of sensors, so that the target detection range is determined more accurately later and more accurate data are acquired and matched.
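A minimal sketch of this spatial synchronization step, assuming two-dimensional detections and known extrinsic calibration of each sensor relative to the rear axle center; the function and parameter names are illustrative.

```python
import numpy as np

def to_rear_axle_frame(points, sensor_translation, sensor_yaw):
    """Transform sensor-frame detections into the vehicle frame whose
    origin is the rear axle center of the sample vehicle.

    points: (N, 2) array of (x, y) positions in the sensor frame.
    sensor_translation: (x, y) of the sensor relative to the rear axle
        center, from extrinsic calibration (assumed known).
    sensor_yaw: mounting yaw of the sensor in radians.
    """
    c, s = np.cos(sensor_yaw), np.sin(sensor_yaw)
    rotation = np.array([[c, -s], [s, c]])  # 2-D rotation matrix
    return points @ rotation.T + np.asarray(sensor_translation)
```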
203. Unifying the laser radar, the millimeter wave radar and the camera sensor under the same time stamp by using the time stamp of the laser radar, the time stamp of the millimeter wave radar and the time stamp of the camera sensor.
Since the premise of multi-sensor data fusion is space-time synchronization, the position information detected by different sensors must be unified under one coordinate system, and the detected obstacle information must refer to the same time point, so the plurality of sensors need time synchronization. Based on the time stamp of the laser radar, the millimeter wave radar, the V2V sensor and the like each find the frame of detection results whose time stamp is closest to that of the laser radar, then perform motion compensation according to the time difference and synchronize to the same time stamp as the laser radar, completing the space-time synchronization of the multiple sensors. In the embodiment of the application, the multi-sensor fusion system acquires the time stamp of the laser radar and sequentially determines the time stamp of the millimeter wave radar and the time stamp of the camera sensor that are closest to it. The system then acquires the millimeter wave detection results corresponding to the time stamp of the millimeter wave radar and the camera detection results corresponding to the time stamp of the camera sensor, in order to perform motion compensation on these detection results. Next, the system calculates the time difference between the time stamp of the millimeter wave radar and that of the laser radar, and between the time stamp of the camera sensor and that of the laser radar, obtaining the millimeter wave time difference and the camera time difference. Finally, the system performs motion compensation on the millimeter wave detection results using the millimeter wave time difference and on the camera detection results using the camera time difference, so that the laser radar, the millimeter wave radar and the camera sensor are under the same time stamp and the target detection range can be determined.
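The nearest-frame selection and motion compensation described above can be sketched as follows, assuming each detection carries a velocity estimate and each frame carries a time stamp in seconds; the dictionary layout is an assumption made for illustration.

```python
def nearest_frame(frames, lidar_stamp):
    """Pick the detection frame whose time stamp is closest to the laser
    radar's time stamp."""
    return min(frames, key=lambda frame: abs(frame["stamp"] - lidar_stamp))

def motion_compensate(frame, lidar_stamp):
    """Propagate each detection to the laser radar time stamp with its own
    velocity estimate: p' = p + v * dt (constant-velocity assumption)."""
    dt = lidar_stamp - frame["stamp"]
    return [
        {**det, "x": det["x"] + det["vx"] * dt,
                "y": det["y"] + det["vy"] * dt}
        for det in frame["detections"]
    ]
```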
204. Obtaining the detection range of each target sensor to obtain a plurality of detection ranges, and determining the intersection of the plurality of detection ranges to obtain the target detection range.
Because each sensor has its own detection range, to improve the precision of fusion perception, an FOV intersection judgment is performed on the detection ranges of the plurality of sensors: detection results inside the intersection are matched, while detection results outside it serve as a supplement to the overall perception output, further improving perception precision. In the embodiment of the application, the multi-sensor fusion system acquires the detection range of each target sensor to obtain a plurality of detection ranges, and determines their intersection to obtain the target detection range. Each sensor's detection range is its actual sensing range; the multi-sensor fusion system determines the target detection range from the intersection of the detection ranges of the laser radar, the millimeter wave radar and the camera sensor. Note that in target fusion mode one the target detection range is determined from the intersection of the laser radar and camera sensor detection ranges, and in mode two from the intersection of the detection ranges of the laser radar, the millimeter wave radar and the camera sensor. By performing the FOV judgment on the time-synchronized sensors, the system determines the target detection range so that the detection results inside it can be matched later, improving the data fusion accuracy of the multi-sensor fusion system.
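One way to realize the FOV intersection judgment is with polygon geometry; the sketch below uses the shapely library for the intersection, with illustrative rectangular FOVs, and splits detections into those inside the target detection range (to be matched) and those outside it (kept as a supplement). The polygon coordinates and dictionary fields are assumptions.

```python
from shapely.geometry import Point, Polygon

def target_detection_range(fov_polygons):
    """Intersect the per-sensor FOV polygons to get the common range."""
    region = fov_polygons[0]
    for polygon in fov_polygons[1:]:
        region = region.intersection(polygon)
    return region

def split_by_range(detections, region):
    """Detections inside the intersection are matched later; the rest
    supplement the overall perception output."""
    inside = [d for d in detections if region.contains(Point(d["x"], d["y"]))]
    outside = [d for d in detections if not region.contains(Point(d["x"], d["y"]))]
    return inside, outside

# Example with illustrative rectangular FOVs for two sensors
lidar_fov = Polygon([(0, -40), (80, -40), (80, 40), (0, 40)])
camera_fov = Polygon([(0, -30), (60, -30), (60, 30), (0, 30)])
common_range = target_detection_range([lidar_fov, camera_fov])
```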
205. Acquiring information to be processed within the target detection range from the perception information.
In the embodiment of the application, the multi-sensor fusion system acquires the traditional sensing output result of the laser radar, the intelligent sensing output result of the laser radar, the millimeter wave sensing output result and the scene sensing parameters from the sensing information, and respectively acquires, within the target detection range, the traditional laser radar sensing information to be processed, the intelligent laser radar sensing information to be processed, the millimeter wave sensing information to be processed and the scene sensing information to be processed from those four outputs. The scene sensing parameters comprise the communication cooperative sensing result and the road side sensing result, which better extend the sensing range for the complex working conditions of the mining environment. Moreover, considering that the traditional laser radar detection algorithm can return several targets for one real target, the detection result is corrected with the laser radar AI detection algorithm, that is, with the intelligent laser radar perception information, which better improves perception precision. And since the bounding box size output by the traditional laser radar detection algorithm is inaccurate, it can also be corrected with the intelligent laser radar perception information. The multi-sensor fusion system then takes the traditional laser radar perception information to be processed, the intelligent laser radar perception information to be processed, the millimeter wave perception information to be processed and the scene perception information to be processed as the information to be processed, determining the perception information used for target matching and improving the data fusion efficiency of the multi-sensor fusion system.
It should be noted that, if the target fusion mode is mode one, the multi-sensor fusion system acquires the laser radar traditional perception output result and the laser radar intelligent perception output result from the perception information, and acquires, from these two outputs respectively, the to-be-processed laser radar traditional perception information and the to-be-processed laser radar intelligent perception information within the target detection range. Then, the multi-sensor fusion system takes the to-be-processed laser radar traditional perception information and the to-be-processed laser radar intelligent perception information as the information to be processed.
If the target fusion mode is mode two, the multi-sensor fusion system acquires the laser radar traditional perception output result, the laser radar intelligent perception output result and the millimeter wave perception output result from the perception information, and acquires, from these three outputs respectively, the to-be-processed laser radar traditional perception information, the to-be-processed laser radar intelligent perception information and the to-be-processed millimeter wave perception information within the target detection range. Then, the multi-sensor fusion system takes these three pieces of to-be-processed perception information as the information to be processed.
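A minimal sketch of this in-range selection, assuming each detection carries an (x, y) position in the vehicle frame and the target detection range is given as (x_min, x_max, y_min, y_max) bounds; all names and values are illustrative:

```python
def select_in_range(detections, bounds):
    """Split detections into those inside the target detection range
    (to be matched) and those outside it (kept as a supplement to the
    overall perception output)."""
    x_min, x_max, y_min, y_max = bounds
    inside, outside = [], []
    for det in detections:
        if x_min <= det["x"] <= x_max and y_min <= det["y"] <= y_max:
            inside.append(det)
        else:
            outside.append(det)
    return inside, outside

# Hypothetical lidar traditional detections (positions in meters).
lidar_traditional = [{"x": 10.0, "y": 2.0}, {"x": 95.0, "y": 25.0}]
in_fov, out_of_fov = select_in_range(lidar_traditional,
                                     (0.0, 80.0, -20.0, 20.0))
# in_fov keeps the first detection; out_of_fov keeps the second.
```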
206. And matching the information to be processed by using a matching rule corresponding to the target fusion mode to obtain a target matching result.
In the embodiment of the application, the multi-sensor fusion system acquires the matching rule corresponding to the target fusion mode. Then, using the matching rule, the multi-sensor fusion system determines, in the to-be-processed laser radar traditional perception information, first content matched with the to-be-processed laser radar intelligent perception information, and acquires, in the to-be-processed laser radar intelligent perception information, second content matched with the first content; that is, taking the to-be-processed laser radar intelligent perception information as a reference, it searches the to-be-processed laser radar traditional perception information within a certain range. Then, the multi-sensor fusion system replaces the first content in the to-be-processed laser radar traditional perception information with the second content to obtain a first matching result, namely, the matched traditional perception information is replaced by the corresponding intelligent perception information. If first unmatched content other than the second content exists in the to-be-processed laser radar intelligent perception information, which indicates that this intelligent perception information is a false detection, the first unmatched content is filtered out. It should be noted that, after all the to-be-processed laser radar intelligent perception information has been traversed, any to-be-processed laser radar traditional perception information that remains unmatched needs to be reserved. In this way, the multi-sensor fusion system matches the laser radar traditional perception output result with the laser radar intelligent perception output result.
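By way of illustration only, the following sketch implements one plausible such matching rule: greedy nearest-neighbor association within a distance gate, with matched traditional detections replaced by their AI counterparts, unmatched AI detections filtered as false detections, and unmatched traditional detections reserved. The gate value and function names are assumptions, not taken from the patent.

```python
import math

def match_and_replace(traditional, ai, gate=2.0):
    """Match lidar traditional detections against lidar AI detections.

    Matched traditional detections are replaced by the corresponding
    AI detections (whose boxes are more accurate); AI detections left
    unmatched are treated as false detections and filtered out;
    traditional detections left unmatched are reserved as-is."""
    result, used = [], set()
    for trad in traditional:
        best, best_dist = None, gate
        for i, det in enumerate(ai):
            if i in used:
                continue
            dist = math.hypot(det["x"] - trad["x"], det["y"] - trad["y"])
            if dist < best_dist:
                best, best_dist = i, dist
        if best is None:
            result.append(trad)       # reserve unmatched traditional
        else:
            used.add(best)
            result.append(ai[best])   # replace with the AI detection
    return result
```

The same associate-and-replace routine can then be repeated with the to-be-processed millimeter wave perception information against the first matching result, and with the to-be-processed scene perception information against the second matching result, as described below.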
Then, using the matching rule, the multi-sensor fusion system determines, in the first matching result, third content matched with the to-be-processed millimeter wave perception information, and acquires, in the to-be-processed millimeter wave perception information, fourth content matched with the third content; that is, taking the first matching result as a reference, it searches the to-be-processed millimeter wave perception information within a certain range. The multi-sensor fusion system then replaces the third content in the first matching result with the fourth content to obtain a second matching result, namely, the matched part of the first matching result is replaced by the corresponding to-be-processed millimeter wave perception information. The unmatched part of the first matching result needs to be reserved, and the unmatched to-be-processed millimeter wave perception information is subsequently subjected to compensation calculation by the dust detection algorithm. In this way, the multi-sensor fusion system matches the result of matching the laser radar traditional perception output with the laser radar intelligent perception output against the millimeter wave perception output result. In order to improve the accuracy of target matching, the first matching result may be fused by a Kalman filter so as to calculate a fusion coefficient for the detection target frame. Because vehicle operation raises dust in the mining area environment, the laser radar may miss detections at such moments, whereas millimeter waves penetrate dust better; therefore, supplementing with the detection result of the millimeter wave radar enhances the perception capability.
Then, using the matching rule, the multi-sensor fusion system determines, in the second matching result, fifth content matched with the to-be-processed scene perception information, and acquires, in the to-be-processed scene perception information, sixth content matched with the fifth content; that is, taking the second matching result as a reference, it searches the to-be-processed scene perception information within a certain range. It should be noted that, by fusing the road side perception information in the to-be-processed scene perception information with the vehicle-end perception information, the multi-sensor fusion system can further improve the perception precision and solve the problems of obstacle targets being missed due to dust shielding and targets being missed due to complex mining area working conditions. Finally, the multi-sensor fusion system replaces the fifth content in the second matching result with the sixth content to obtain the target matching result, namely, the matched part of the second matching result is replaced by the corresponding to-be-processed scene perception information, and the unmatched part of the second matching result needs to be reserved. Different from urban roads, the environmental road conditions of a mining area are complex, and relying on on-board sensors alone to perceive the environment leads to missed detections and thus collision risks, which can be avoided by combining the V2V collaborative perception information, the road side perception information and the positioning information of other vehicles sent through V2V.
It should be noted that, if the target fusion mode is mode one, the multi-sensor fusion system takes the first matching result as the target matching result; if the target fusion mode is mode two, the multi-sensor fusion system takes the second matching result as the target matching result.
207. And carrying out compensation calculation on the information to be processed by using a dust detection algorithm to obtain a millimeter wave compensation result.
Because the laser radar easily misses obstacles in places where dust and water vapor are generated, the dust detection algorithm among the traditional algorithms of the laser radar can, by using the point cloud reflection intensity information, density information and the like, output dust position information and recall millimeter wave perception output results as a supplement to the missed detections of the laser radar output. In the embodiment of the application, the multi-sensor fusion system acquires the to-be-processed millimeter wave perception information from the information to be processed, and acquires the fourth content obtained from the to-be-processed millimeter wave perception information by the matching rule. Then, the multi-sensor fusion system determines second unmatched content other than the fourth content in the to-be-processed millimeter wave perception information, namely the to-be-processed millimeter wave perception information that was not matched in the target matching process. Subsequently, the multi-sensor fusion system acquires the dust detection algorithm, and determines a plurality of pieces of position information of a plurality of dust particles by the dust detection algorithm. Then, the multi-sensor fusion system determines, in the second unmatched content, a plurality of pieces of associated perception information associated with the plurality of pieces of position information, and takes the plurality of pieces of associated perception information as the millimeter wave compensation result. That is, by traversing the second unmatched content, checking whether each millimeter wave output result is near a dust detection position, and retaining the result if so, the perception capability of the sample vehicle can be enhanced.
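A minimal sketch of this recall step, assuming dust positions and unmatched millimeter wave detections are given as (x, y) points in the vehicle frame; the radius parameter and function name are assumptions:

```python
import math

def mmwave_dust_compensation(unmatched_mmwave, dust_positions, radius=5.0):
    """Recall unmatched millimeter wave detections that lie near a dust
    position reported by the lidar dust detection algorithm; these are
    likely real obstacles the lidar missed behind dust, so they are
    retained as the millimeter wave compensation result."""
    recalled = []
    for det in unmatched_mmwave:
        if any(math.hypot(det["x"] - dx, det["y"] - dy) <= radius
               for dx, dy in dust_positions):
            recalled.append(det)
    return recalled
```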
208. And integrating the target matching result, the millimeter wave compensation result and the scene perception parameters of the perception information by utilizing an integration rule corresponding to the target fusion mode to obtain a target fusion result.
In the embodiment of the application, the multi-sensor fusion system acquires the scene perception parameters from the perception information and acquires the integration rule corresponding to the target fusion mode. Then, the multi-sensor fusion system integrates the target matching result, the millimeter wave compensation result and the scene perception parameters by the integration rule to obtain the target fusion result. By fusing the laser radar and millimeter wave radar results with the V2V collaborative perception result and the road side perception result, the multi-sensor fusion system can improve the perception precision and better adapt the sample vehicle to the mining area environment.
In an alternative embodiment, in order to improve the perception precision, the multi-sensor fusion system may integrate the detection results outside the target detection range (from the laser radar traditional perception output result, the laser radar intelligent perception output result, the millimeter wave perception output result, the communication cooperative perception result and the road side perception result), the matching result, the unmatched laser radar traditional perception output result, the millimeter wave compensated detection result, the communication cooperative perception result and the road side perception result. Specifically, when the perception information comprises the laser radar traditional perception output result, the laser radar intelligent perception output result, the millimeter wave perception output result and the scene perception parameters, the multi-sensor fusion system acquires the to-be-processed laser radar traditional perception information, the to-be-processed laser radar intelligent perception information, the to-be-processed millimeter wave perception information and the to-be-processed scene perception information from the information to be processed. Then, the multi-sensor fusion system acquires, from the laser radar traditional perception output result, specified laser radar traditional perception information that does not comprise the to-be-processed laser radar traditional perception information; acquires, from the laser radar intelligent perception output result, specified laser radar intelligent perception information that does not comprise the to-be-processed laser radar intelligent perception information; acquires, from the millimeter wave perception output result, specified millimeter wave perception information that does not comprise the to-be-processed millimeter wave perception information; and acquires, from the scene perception parameters, specified scene perception information that does not comprise the to-be-processed scene perception information. Then, the multi-sensor fusion system acquires the first content determined in the to-be-processed laser radar traditional perception information by the matching rule, and determines third unmatched content other than the first content in the to-be-processed laser radar traditional perception information, namely the unmatched to-be-processed laser radar traditional perception information. Finally, the multi-sensor fusion system generates the target fusion result based on the specified laser radar traditional perception information, the specified laser radar intelligent perception information, the specified millimeter wave perception information, the specified scene perception information, the third unmatched content, the target matching result, the millimeter wave compensation result and the scene perception parameters.
It should be noted that, if the target fusion mode is mode one, the integration rule corresponding to mode one is to integrate the matched laser radar traditional perception output result, the matched laser radar intelligent perception output result and the unmatched laser radar traditional perception output result to obtain the target fusion result. If the target fusion mode is mode two, the integration rule is to integrate the detection results outside the target detection range, the matching result of the laser radar traditional perception output result, the laser radar intelligent perception output result and the millimeter wave perception output result within the target detection range, the unmatched laser radar traditional perception output result, and the millimeter wave compensated detection result, to obtain the target fusion result. In this way, the sensor data can be fused through different fusion modes according to different application scenes; for example, when the millimeter wave radar has a poor detection effect in a certain environment, only the laser radar detection result may be selected for perception fusion.
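As an illustrative sketch only, a mode-dependent integration rule could be assembled as follows; the mode names and the list-based representation are assumptions made for illustration:

```python
def integrate(mode, matched, unmatched_traditional,
              outside_range=(), mmwave_compensation=(), scene_params=()):
    """Assemble the target fusion result per the integration rule.

    Mode one: the matched results plus the reserved unmatched lidar
    traditional results. Mode two: additionally the detections outside
    the target detection range, the millimeter wave compensation result
    and the scene perception parameters."""
    fusion = list(matched) + list(unmatched_traditional)
    if mode == "mode_two":
        fusion += list(outside_range)
        fusion += list(mmwave_compensation)
        fusion += list(scene_params)
    return fusion
```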
209. And acquiring a Kalman algorithm, and processing a target fusion result by using the Kalman algorithm to obtain obstacle related information.
After the target fusion result is obtained, the multi-sensor fusion system performs tracking on it and provides the sample vehicle with related information such as the position, size, category, speed and ID of each obstacle. In the embodiment of the application, the multi-sensor fusion system acquires the last fusion result preceding the target fusion result and determines the time information corresponding to the last fusion result. Then, the multi-sensor fusion system acquires the time information corresponding to the target fusion result, and calculates with the two pieces of time information to obtain a target time difference. Next, the multi-sensor fusion system acquires a time difference algorithm, and calculates the target time difference with it to obtain the speed information corresponding to the target fusion result. Further, the multi-sensor fusion system acquires the sample vehicle corresponding to the perception information and a preset motion model corresponding to the sample vehicle, the preset motion model being a preset virtual model of uniform motion: each perceived target has speed information during the uniform motion of the sample vehicle, so the multi-sensor fusion system calculates the time interval from the time information, and predicts the position of the target through the preset motion model from the speed information and the time interval, completing the Kalman prediction. That is, the multi-sensor fusion system performs the prediction operation on the speed information and the target time difference with the preset motion model to obtain predicted position information. Then, the multi-sensor fusion system acquires the speed information in the target fusion result, acquires the plurality of target sensors corresponding to the target fusion mode, and acquires the plurality of pieces of sensor detection position information corresponding to the plurality of target sensors. Finally, the multi-sensor fusion system updates the speed information, the target fusion result, the plurality of pieces of sensor detection position information and the predicted position information by the Kalman algorithm to obtain the obstacle related information, which improves the perception precision and yields more accurate driving environment information. The multi-sensor fusion system maintains a cache list with the obtained obstacle related information to complete target tracking, so that the vehicle is better adapted to the mining area environment.
By the above process, a flow diagram of a fusion tracking method provided in the embodiment of the present application is as follows:
as shown in fig. 2B, the multi-sensor fusion system calculates the time difference between the current detection target and the corresponding target of the previous frame, and determines whether speed information exists in the current detection result. If no speed information exists, the speed information is calculated from the time difference; if speed information exists, Kalman prediction is performed with the set motion model. Then, the multi-sensor fusion system updates information such as the time stamp, the obstacle course angle and the obstacle height, and inputs the sensor detection position and speed information to perform the Kalman update, obtaining the obstacle related information.
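A minimal sketch of this predict-update cycle for one tracked obstacle, assuming a constant-velocity motion model on a single coordinate for brevity (the patent's preset motion model is uniform motion); all names and noise values are illustrative assumptions:

```python
class ConstantVelocityTracker:
    """Toy 1-D constant-velocity Kalman filter for one obstacle.

    Prediction uses the uniform-motion model x' = x + v * dt; the
    update corrects the predicted position with the fused sensor
    detection position (only the position is corrected in this toy)."""

    def __init__(self, x, v, q=0.1, r=0.5):
        self.x, self.v = x, v       # position / velocity state
        self.p = 1.0                # position variance
        self.q, self.r = q, r       # process / measurement noise

    def predict(self, dt):
        self.x += self.v * dt       # Kalman prediction
        self.p += self.q * dt
        return self.x

    def update(self, z):
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct with detection position
        self.p *= (1.0 - k)
        return self.x

# If the detection carries no speed, derive it from the time difference
# between the current and previous fusion results:
v = (10.0 - 9.2) / 0.1          # (x_now - x_prev) / target_time_difference
trk = ConstantVelocityTracker(x=10.0, v=v)
trk.predict(dt=0.1)             # predicted position ~ 10.8
trk.update(z=10.9)              # corrected with the detected position
```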
In summary, a flow chart of multi-sensor sensing fusion provided in the embodiment of the present application is as follows:
as shown in fig. 2C, source information is input into the multi-sensor fusion system, where the source information comprises: 1. the output results of the laser radar traditional algorithm and AI algorithm; 2. the millimeter wave output result; 3. the V2V collaborative perception result and the road side perception result; 4. the vehicle positioning information. The user inputs an information combination, which may be the combination of 1, 2, 3 and 4, or the combination of 1, 3 and 4. When the user selects the combination of 1, 2, 3 and 4, the algorithm processing procedure of the multi-sensor fusion system is as follows: first, multi-sensor space-time synchronization is performed; then the intersection FOV judgment is performed and target matching is completed; then fusion tracking is performed through millimeter wave compensation and V2V information compensation, namely, track matching is completed with the Hungarian matching algorithm, and the current frame result is fused with the historical frame result by Kalman filtering so that the position noise of the output detection frame is smaller; finally, the information of the final output obstacle, namely its position, size, direction angle, speed and tracking ID, is provided to the main control. When the user selects the combination of 1, 3 and 4, the algorithm processing procedure of the multi-sensor fusion system is as follows: first, multi-sensor space-time synchronization is performed; then target matching is performed, and fusion tracking is performed through V2V information compensation; finally, the information of the provided obstacle, namely its position, size, direction angle, speed and tracking ID, is output.
According to the method provided by the embodiment of the application, when input perception information is received, a target fusion mode corresponding to the perception information is determined among a plurality of fusion modes, and a target detection range corresponding to the target fusion mode is determined by the space-time synchronization operation. Information to be processed within the target detection range is acquired from the perception information and matched by the matching rule corresponding to the target fusion mode to obtain a target matching result; compensation calculation is performed on the information to be processed by the dust detection algorithm to obtain a millimeter wave compensation result; the target matching result, the millimeter wave compensation result and the scene perception parameters of the perception information are integrated by the integration rule corresponding to the target fusion mode to obtain a target fusion result; and the Kalman algorithm is acquired and used to process the target fusion result to obtain the obstacle related information. In this way, the perception result is supplemented by the detection result of the millimeter wave radar, which enhances the perception capability of the vehicle under dust conditions; the perception range of the vehicle is enlarged by the V2V collaborative perception result and the road side perception result; and the perception precision is improved, so that the vehicle is better adapted to the mining area environment.
Further, as a specific implementation of the method illustrated in fig. 1, an embodiment of the present application provides a sensor data fusion device, as shown in fig. 3A, where the device includes: a determining module 301, a matching module 302, an integrating module 303 and a processing module 304.
The determining module 301 is configured to determine, when receiving input sensing information, a target fusion mode corresponding to the sensing information from among a plurality of fusion modes, and determine a target detection range corresponding to the target fusion mode by using a space-time synchronization operation;
the matching module 302 is configured to obtain information to be processed within a target detection range from the sensing information, and match the information to be processed by using a matching rule corresponding to the target fusion mode, so as to obtain a target matching result;
the integrating module 303 is configured to perform compensation calculation on information to be processed by using a dust detection algorithm to obtain a millimeter wave compensation result, and integrate a target matching result, a millimeter wave compensation result, and scene perception parameters of perception information by using an integration rule corresponding to a target fusion mode to obtain a target fusion result;
and the processing module 304 is configured to acquire a kalman algorithm, and process the target fusion result by using the kalman algorithm to obtain the obstacle related information.
In a specific application scenario, the determining module 301 is configured to: determine a plurality of target sensors corresponding to the target fusion mode, obtain sensor state information from the perception information, and detect the sensor state information, where the plurality of target sensors include a laser radar, a millimeter wave radar and a camera sensor, and the sensor state information indicates the working states of the plurality of target sensors; if the sensor state information indicates that the working states of the plurality of target sensors are normal, acquire coordinate information of each target sensor; determine a sample vehicle corresponding to the perception information, acquire rear axle center position information of the sample vehicle, and adjust the coordinate information of each target sensor by utilizing the rear axle center position information so as to unify the coordinate origin of each target sensor to the rear axle center of the sample vehicle; acquire a time stamp of the laser radar, and sequentially determine the time stamp of the millimeter wave radar and the time stamp of the camera sensor that are closest to the time stamp of the laser radar; unify the laser radar, the millimeter wave radar and the camera sensor under the same time stamp by using the time stamp of the laser radar, the time stamp of the millimeter wave radar and the time stamp of the camera sensor; and obtain the detection range of each target sensor to obtain a plurality of detection ranges, and determine the intersection of the plurality of detection ranges to obtain the target detection range.
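For illustration, the coordinate unification to the rear axle center can be sketched as a translation by each sensor's mounting offset; sensor rotation is omitted here for brevity, and a full implementation would also apply the mounting rotation. The names below are assumptions:

```python
def to_rear_axle_frame(point_xyz, mount_offset_xyz):
    """Translate a detection from a sensor's own frame into the vehicle
    frame whose origin is the rear axle center, given the sensor's
    mounting position relative to the rear axle center (the sensor's
    rotation is assumed to have been applied already)."""
    return tuple(p + o for p, o in zip(point_xyz, mount_offset_xyz))

# Lidar mounted 3.2 m ahead of and 1.8 m above the rear axle center:
vehicle_point = to_rear_axle_frame((12.0, -0.5, 0.2), (3.2, 0.0, 1.8))
# -> (15.2, -0.5, 2.0)
```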
In a specific application scenario, the determining module 301 is configured to: obtain a millimeter wave detection result corresponding to the time stamp of the millimeter wave radar and a camera detection result corresponding to the time stamp of the camera sensor; calculate the time stamp of the millimeter wave radar and the time stamp of the camera sensor respectively against the time stamp of the laser radar to obtain a millimeter wave time difference and a camera time difference; and perform a motion compensation operation on the millimeter wave detection result by utilizing the millimeter wave time difference, and perform a motion compensation operation on the camera detection result by utilizing the camera time difference, so that the laser radar, the millimeter wave radar and the camera sensor are under the same time stamp.
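By way of illustration, the motion compensation that brings the millimeter wave and camera detections onto the lidar time stamp can be sketched as follows, assuming each detection carries a position and velocity in the vehicle frame; the function and field names are assumptions:

```python
def motion_compensate(detections, time_difference):
    """Shift each detection along its measured velocity by the time
    difference to the lidar time stamp, so that the millimeter wave
    radar and camera results are aligned under the same time stamp
    as the lidar."""
    return [{**det,
             "x": det["x"] + det["vx"] * time_difference,
             "y": det["y"] + det["vy"] * time_difference}
            for det in detections]

# Millimeter wave frame is 30 ms older than the lidar frame:
aligned = motion_compensate(
    [{"x": 52.0, "y": 1.5, "vx": -8.0, "vy": 0.0}],
    time_difference=0.03,
)
# -> x shifted to 51.76 m
```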
In a specific application scenario, the matching module 302 is configured to: acquire the laser radar traditional perception output result, the laser radar intelligent perception output result, the millimeter wave perception output result and the scene perception parameters from the perception information, and acquire, from these four outputs respectively, the to-be-processed laser radar traditional perception information, the to-be-processed laser radar intelligent perception information, the to-be-processed millimeter wave perception information and the to-be-processed scene perception information within the target detection range, where the scene perception parameters comprise a communication cooperative perception result and a road side perception result; take the to-be-processed laser radar traditional perception information, the to-be-processed laser radar intelligent perception information, the to-be-processed millimeter wave perception information and the to-be-processed scene perception information as the information to be processed, and acquire the matching rule corresponding to the target fusion mode; determine, by the matching rule, first content matched with the to-be-processed laser radar intelligent perception information in the to-be-processed laser radar traditional perception information, acquire second content matched with the first content in the to-be-processed laser radar intelligent perception information, and replace the first content in the to-be-processed laser radar traditional perception information with the second content to obtain a first matching result, where, if first unmatched content other than the second content exists in the to-be-processed laser radar intelligent perception information, the first unmatched content is filtered; determine, by the matching rule, third content matched with the to-be-processed millimeter wave perception information in the first matching result, acquire fourth content matched with the third content in the to-be-processed millimeter wave perception information, and replace the third content in the first matching result with the fourth content to obtain a second matching result; and determine, by the matching rule, fifth content matched with the to-be-processed scene perception information in the second matching result, acquire sixth content matched with the fifth content in the to-be-processed scene perception information, and replace the fifth content in the second matching result with the sixth content to obtain the target matching result.
In a specific application scenario, the integrating module 303 is configured to obtain millimeter wave sensing information to be processed from information to be processed, obtain fourth content obtained from the millimeter wave sensing information to be processed by using a matching rule, and determine second unmatched content except the fourth content in the millimeter wave sensing information to be processed; acquiring a dust detection algorithm, determining a plurality of pieces of position information of a plurality of dust particles by using the dust detection algorithm, determining a plurality of pieces of associated sensing information associated with the plurality of pieces of position information in second unmatched content, and taking the plurality of pieces of associated sensing information as millimeter wave compensation results; and acquiring scene sensing parameters from the sensing information, acquiring an integration rule corresponding to the target fusion mode, and integrating the target matching result, the millimeter wave compensation result and the scene sensing parameters by utilizing the integration rule to obtain a target fusion result.
In a specific application scenario, as shown in fig. 3B, the apparatus further includes: and a generation module 305.
The generating module 305 is configured to: when the perception information comprises a laser radar traditional perception output result, a laser radar intelligent perception output result, a millimeter wave perception output result and scene perception parameters, acquire the to-be-processed laser radar traditional perception information, the to-be-processed laser radar intelligent perception information, the to-be-processed millimeter wave perception information and the to-be-processed scene perception information from the information to be processed; acquire appointed laser radar traditional perception information which does not comprise the to-be-processed laser radar traditional perception information from the laser radar traditional perception output result, acquire appointed laser radar intelligent perception information which does not comprise the to-be-processed laser radar intelligent perception information from the laser radar intelligent perception output result, acquire appointed millimeter wave perception information which does not comprise the to-be-processed millimeter wave perception information from the millimeter wave perception output result, and acquire appointed scene perception information which does not comprise the to-be-processed scene perception information from the scene perception parameters; acquire the first content determined in the to-be-processed laser radar traditional perception information by the matching rule, and determine third unmatched content except the first content in the to-be-processed laser radar traditional perception information; and generate the target fusion result based on the appointed laser radar traditional perception information, the appointed laser radar intelligent perception information, the appointed millimeter wave perception information, the appointed scene perception information, the third unmatched content, the target matching result, the millimeter wave compensation result and the scene perception parameters.
In a specific application scenario, the processing module 304 is configured to obtain a previous fusion result of the target fusion result, determine time information corresponding to the previous fusion result, obtain time information corresponding to the target fusion result, and calculate using the time information corresponding to the previous fusion result and the time information corresponding to the target fusion result to obtain a target time difference; obtaining a time difference algorithm, and calculating a target time difference by using the time difference algorithm to obtain speed information corresponding to a target fusion result; obtaining a sample vehicle corresponding to the perception information, and obtaining a preset motion model corresponding to the sample vehicle, wherein the preset motion model is a preset virtual model of uniform motion; acquiring speed information, and performing prediction operation on the speed information and a target time difference by using a preset motion model to obtain predicted position information; acquiring speed information in a target fusion result, acquiring a plurality of target sensors corresponding to a target fusion mode, and acquiring detection position information of a plurality of sensors corresponding to the plurality of target sensors; and updating the speed information, the target fusion result, the detection position information of the plurality of sensors and the prediction position information by using a Kalman algorithm to obtain obstacle related information.
According to the device provided by the embodiment of the application, when input perception information is received, a target fusion mode corresponding to the perception information is determined among a plurality of fusion modes, and a target detection range corresponding to the target fusion mode is determined by the space-time synchronization operation. Information to be processed within the target detection range is acquired from the perception information and matched by the matching rule corresponding to the target fusion mode to obtain a target matching result; compensation calculation is performed on the information to be processed by the dust detection algorithm to obtain a millimeter wave compensation result; the target matching result, the millimeter wave compensation result and the scene perception parameters of the perception information are integrated by the integration rule corresponding to the target fusion mode to obtain a target fusion result; and the Kalman algorithm is used to process the target fusion result to obtain the obstacle related information. In this way, the perception result is supplemented by the detection result of the millimeter wave radar, which enhances the perception capability of the vehicle under dust conditions; the perception range of the vehicle is enlarged by the V2V collaborative perception result and the road side perception result; and the perception precision is improved, so that the vehicle is better adapted to the mining area environment.
It should be noted that, for other corresponding descriptions of each functional unit related to the sensor data fusion apparatus provided in the embodiment of the present application, reference may be made to corresponding descriptions in fig. 1 and fig. 2A to fig. 2C, which are not repeated herein.
In an exemplary embodiment, referring to fig. 4, there is also provided a computer device, which includes a bus, a processor, a memory and a communication interface, and may further include an input-output interface and a display device, wherein the functional units communicate with each other through the bus. The memory stores a computer program, and the processor is configured to execute the program stored in the memory to perform the sensor data fusion method in the above embodiment.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of a method of sensor data fusion.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented in hardware, or may be implemented by means of software plus necessary general hardware platforms. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.), and includes several instructions for causing a computer device (may be a personal computer, a server, or a network device, etc.) to perform the methods described in various implementation scenarios of the present application.
Those skilled in the art will appreciate that the drawings are merely schematic illustrations of one preferred implementation scenario, and that the modules or flows in the drawings are not necessarily required to practice the present application.
Those skilled in the art will appreciate that the modules in an apparatus of an implementation scenario may be distributed in the apparatus as described for that scenario, or may, with corresponding changes, be located in one or more apparatuses different from that of the present scenario. The modules of the above implementation scenario may be combined into one module, or further split into a plurality of sub-modules.
The foregoing application serial numbers are merely for description, and do not represent advantages or disadvantages of the implementation scenario.
The foregoing disclosure is merely a few specific implementations of the present application, but the present application is not limited thereto and any variations that can be considered by a person skilled in the art shall fall within the protection scope of the present application.

Claims (10)

1. A method of sensor data fusion, comprising:
when receiving input perception information, determining a target fusion mode corresponding to the perception information in a plurality of fusion modes, and determining a target detection range corresponding to the target fusion mode by using space-time synchronization operation;
obtaining information to be processed in the target detection range from the perception information, and matching the information to be processed by utilizing a matching rule corresponding to the target fusion mode to obtain a target matching result;
performing compensation calculation on the information to be processed by using a dust detection algorithm to obtain a millimeter wave compensation result, and integrating the target matching result, the millimeter wave compensation result and scene perception parameters of the perception information by using an integration rule corresponding to the target fusion mode to obtain a target fusion result;
and acquiring a Kalman algorithm, and processing the target fusion result by using the Kalman algorithm to obtain obstacle related information.
2. The method of claim 1, wherein determining the target detection range corresponding to the target fusion pattern using a spatiotemporal synchronization operation comprises:
determining a plurality of target sensors corresponding to the target fusion mode, acquiring sensor state information from the sensing information, and detecting the sensor state information, wherein the plurality of target sensors comprise a laser radar, a millimeter wave radar and a camera sensor, and the sensor state information indicates the working states of the plurality of target sensors;
if the sensor state information indicates that the working states of the plurality of target sensors are normal, acquiring coordinate information of each target sensor;
determining a sample vehicle corresponding to the perception information, acquiring rear axle center position information of the sample vehicle, and adjusting coordinate information of each target sensor by utilizing the rear axle center position information so as to unify the coordinate origin of each target sensor to the rear axle center of the sample vehicle;
acquiring a time stamp of the laser radar, and sequentially determining the time stamp of the millimeter wave radar and the time stamp of the camera sensor that are closest to the time stamp of the laser radar;
unifying the laser radar, the millimeter wave radar and the camera sensor under the same time stamp by using the time stamp of the laser radar, the time stamp of the millimeter wave radar and the time stamp of the camera sensor;
obtaining a detection range of each target sensor to obtain a plurality of detection ranges, and determining an intersection of the plurality of detection ranges to obtain the target detection range.
3. The method according to claim 2, wherein unifying the laser radar, the millimeter wave radar and the camera sensor under the same time stamp by using the time stamp of the laser radar, the time stamp of the millimeter wave radar and the time stamp of the camera sensor comprises:
acquiring millimeter wave detection results corresponding to the time stamp of the millimeter wave radar and camera detection results corresponding to the time stamp of the camera sensor;
respectively calculating the time stamp of the millimeter wave radar, the time stamp of the camera sensor and the time stamp of the laser radar to obtain a millimeter wave time difference and a camera time difference;
and performing motion compensation operation on the millimeter wave detection result by utilizing the millimeter wave time difference, and performing motion compensation operation on the camera detection result by utilizing the camera time difference, so that the laser radar, the millimeter wave radar and the camera sensor are under the same time stamp.
4. The method of claim 1, wherein the obtaining the information to be processed in the target detection range from the sensing information, and matching the information to be processed by using a matching rule corresponding to the target fusion mode, to obtain a target matching result, includes:
acquiring a traditional laser radar sensing output result, an intelligent laser radar sensing output result, a millimeter wave sensing output result and the scene sensing parameters from the sensing information, and respectively acquiring traditional laser radar sensing information to be processed, intelligent laser radar sensing information to be processed, millimeter wave sensing information to be processed and scene sensing information to be processed in the target detection range from the traditional laser radar sensing output result, the intelligent laser radar sensing output result, the millimeter wave sensing output result and the scene sensing parameters, wherein the scene sensing parameters comprise a communication cooperative sensing result and a road side sensing result;
taking the traditional perception information of the laser radar to be processed, the intelligent perception information of the laser radar to be processed, the millimeter wave perception information to be processed and the scene perception information to be processed as the information to be processed, and acquiring the matching rule corresponding to the target fusion mode;
determining first content matched with the intelligent perception information of the laser radar to be processed in the traditional perception information of the laser radar to be processed by utilizing the matching rule, acquiring second content matched with the first content in the intelligent perception information of the laser radar to be processed, and replacing the first content in the traditional perception information of the laser radar to be processed by utilizing the second content to obtain a first matching result, wherein if first unmatched content except the second content exists in the intelligent perception information of the laser radar to be processed, the first unmatched content is filtered;
determining third content matched with the millimeter wave sensing information to be processed in the first matching result by utilizing the matching rule, acquiring fourth content matched with the third content in the millimeter wave sensing information to be processed, and replacing the third content in the first matching result by utilizing the fourth content to obtain a second matching result;
and determining fifth content matched with the to-be-processed scene perception information in the second matching result by utilizing the matching rule, acquiring sixth content matched with the fifth content in the to-be-processed scene perception information, and replacing the fifth content in the second matching result by utilizing the sixth content to obtain the target matching result.
5. The method according to claim 1, wherein the performing compensation calculation on the information to be processed by using a dust detection algorithm to obtain a millimeter wave compensation result, and integrating the target matching result, the millimeter wave compensation result, and scene perception parameters of the perception information by using an integration rule corresponding to the target fusion mode to obtain a target fusion result, includes:
acquiring millimeter wave perception information to be processed from the information to be processed, acquiring fourth content acquired from the millimeter wave perception information to be processed by utilizing the matching rule, and determining second unmatched content except the fourth content in the millimeter wave perception information to be processed;
acquiring the dust detection algorithm, determining a plurality of pieces of position information of a plurality of dust particles by using the dust detection algorithm, determining a plurality of pieces of associated sensing information associated with the plurality of pieces of position information in the second unmatched content, and taking the plurality of pieces of associated sensing information as the millimeter wave compensation result;
and acquiring the scene perception parameters from the perception information, acquiring the integration rule corresponding to the target fusion mode, and integrating the target matching result, the millimeter wave compensation result and the scene perception parameters by utilizing the integration rule to obtain the target fusion result.
6. The method according to claim 1, wherein the method further comprises:
when the perception information comprises a laser radar traditional perception output result, a laser radar intelligent perception output result, a millimeter wave perception output result and scene perception parameters, acquiring to-be-processed laser radar traditional perception information, to-be-processed laser radar intelligent perception information, to-be-processed millimeter wave perception information and to-be-processed scene perception information from the to-be-processed information;
acquiring appointed laser radar traditional perception information which does not comprise the laser radar traditional perception information to be processed from the laser radar traditional perception output result, acquiring appointed laser radar intelligent perception information which does not comprise the laser radar intelligent perception information to be processed from the laser radar intelligent perception output result, acquiring appointed millimeter wave perception information which does not comprise the millimeter wave perception information to be processed from the millimeter wave perception output result, and acquiring appointed scene perception information which does not comprise the scene perception information to be processed from the scene perception parameters;
acquiring first content determined in the traditional perception information of the laser radar to be processed by utilizing the matching rule, and determining third unmatched content except the first content in the traditional perception information of the laser radar to be processed;
and generating the target fusion result based on the appointed laser radar traditional perception information, the appointed laser radar intelligent perception information, the appointed millimeter wave perception information, the appointed scene perception information, the third unmatched content, the target matching result, the millimeter wave compensation result and the scene perception parameter.
7. The method of claim 1, wherein the processing the target fusion result using the kalman algorithm to obtain obstacle-related information comprises:
acquiring a last fusion result of the target fusion result, determining time information corresponding to the last fusion result, acquiring time information corresponding to the target fusion result, and calculating by utilizing the time information corresponding to the last fusion result and the time information corresponding to the target fusion result to obtain a target time difference;
acquiring a time difference algorithm, and calculating the target time difference by using the time difference algorithm to obtain speed information corresponding to the target fusion result;
acquiring a sample vehicle corresponding to the perception information, and acquiring a preset motion model corresponding to the sample vehicle, wherein the preset motion model is a preset virtual model of uniform motion;
acquiring the speed information, and performing prediction operation on the speed information and the target time difference by using the preset motion model to obtain predicted position information;
acquiring the speed information from the target fusion result, acquiring a plurality of target sensors corresponding to the target fusion mode, and acquiring a plurality of sensor detection position information corresponding to the plurality of target sensors;
and updating the speed information, the target fusion result, the detection position information of the plurality of sensors and the prediction position information by using the Kalman algorithm to obtain obstacle related information.
8. A sensor data fusion device, comprising:
the determining module is used for determining a target fusion mode corresponding to the perception information in a plurality of fusion modes when the input perception information is received, and determining a target detection range corresponding to the target fusion mode by using space-time synchronization operation;
the matching module is used for acquiring information to be processed in the target detection range from the perception information, and matching the information to be processed by utilizing a matching rule corresponding to the target fusion mode to obtain a target matching result;
the integrating module is used for carrying out compensation calculation on the information to be processed by utilizing a dust detection algorithm to obtain a millimeter wave compensation result, and integrating the target matching result, the millimeter wave compensation result and scene perception parameters of the perception information by utilizing an integration rule corresponding to the target fusion mode to obtain a target fusion result;
and the processing module is used for acquiring a Kalman algorithm, and processing the target fusion result by using the Kalman algorithm to obtain obstacle related information.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A readable storage medium having stored thereon a computer program, which when executed by a processor realizes the steps of the method according to any of claims 1 to 7.