CN112348993A - Dynamic graph resource establishing method and system capable of providing environment information - Google Patents

Dynamic graph resource establishing method and system capable of providing environment information

Info

Publication number
CN112348993A
CN112348993A (application CN201910725303.2A)
Authority
CN
China
Prior art keywords
vehicle
point cloud
data
cloud data
end device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910725303.2A
Other languages
Chinese (zh)
Inventor
林轩达
王正楷
施淳耀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Automotive Research and Testing Center
Original Assignee
Automotive Research and Testing Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Automotive Research and Testing Center filed Critical Automotive Research and Testing Center
Priority to CN201910725303.2A priority Critical patent/CN112348993A/en
Publication of CN112348993A publication Critical patent/CN112348993A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841 Registering performance data
    • G07C5/085 Registering performance data using electronic data carriers
    • G07C5/0866 Registering performance data using electronic data carriers the electronic data carrier being a digital video recorder in combination with video camera
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/008 Registering or indicating the working of vehicles communicating information to a remotely located station
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/38 Services specially adapted for particular environments, situations or purposes for collecting sensor information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/44 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 Services specially adapted for particular environments, situations or purposes
    • H04W4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/46 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for vehicle-to-vehicle communication [V2V]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W88/00 Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W88/02 Terminal devices
    • H04W88/04 Terminal devices adapted for relaying to or from another terminal or user

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a dynamic map data creation method and system capable of providing environment information. The system comprises a cloud server, a plurality of relay hosts erected around the environment, and a plurality of vehicle-end devices respectively installed on different vehicles, wherein each vehicle-end device comprises a LiDAR sensor and a camera for sensing the environment around the vehicle to generate point cloud data and image data; the relay hosts merge the sensed data of nearby vehicles to obtain three-dimensional coordinate information of surrounding objects, and the cloud server creates dynamic map information from that three-dimensional coordinate information and transmits it back to each vehicle. By sharing the sensed data of different vehicles, each vehicle's sensing range of its surroundings is enlarged, and occluded zones (dark zones) or blind zones are reduced.

Description

Dynamic graph resource establishing method and system capable of providing environment information
Technical Field
The present invention relates to a method for automatically recognizing environment information, and more particularly to a method for recognizing obstacles by combining LiDAR information with image information.
Background
Because Artificial Intelligence (AI) and machine learning have advanced by leaps and bounds in recent years, many international manufacturers have invested in the development of self-driving vehicles, each applying these learning technologies in its own way. The self-driving system carried on a vehicle detects the conditions of the surrounding environment through multi-sensor fusion, makes decisions according to the sensing results, and finally controls the vehicle to react according to those decisions.
In the sensing part, if the sensed information is obtained only by the sensors of the vehicle itself, that is, only one vehicle senses its surroundings, then when multiple obstacles exist in the environment and they adjoin or occlude one another, imperceptible occluded zones or blind zones arise because of the vehicle's viewing angle and direction of travel. For example, referring to fig. 10, if a first obstacle O1 and a second obstacle O2 lie in front of vehicle A, vehicle A can only sense the first obstacle O1; because the second obstacle O2 is occluded by the first obstacle O1, the area behind the first obstacle O1 is effectively an occluded zone Z, and vehicle A cannot know that the second obstacle O2 exists.
LiDAR is a sensing device widely adopted in self-driving systems; it can quickly sense the surrounding environment to build point cloud data representing that environment, from which the three-dimensional geometric information of the environment can be obtained. However, LiDAR sensing suffers from the problem described above: when an object is occluded by or adjoins another object, its geometric information cannot be completely acquired, which hinders the subsequent judgment of whether it is an obstacle.
Disclosure of Invention
The invention aims to provide a dynamic map data creation system capable of providing environment information, so as to improve a vehicle's ability to identify objects in occluded zones or blind zones.
To achieve the above object, the dynamic map data creation system capable of providing environment information comprises:
a plurality of vehicle-end devices to be installed in different vehicles respectively, each vehicle-end device comprising:
a LiDAR sensor for sensing the environment surrounding the vehicle to generate point cloud data;
a camera for photographing the environment around the vehicle to generate image data;
a vehicle-mounted controller connected with the LiDAR sensor, the camera and a data transmission unit, for controlling the point cloud data and the image data to be transmitted outwards through the data transmission unit; and
a human-machine interface;
a plurality of relay hosts, wherein each relay host receives the point cloud data and image data transmitted by the vehicle-end devices around it and executes a multi-vehicle integration computation to merge the point cloud data of the vehicle-end devices and obtain three-dimensional coordinate information of objects in the vehicles' surrounding environment; and
a cloud server, which communicates with each vehicle-end device and each relay host, receives the results of the multi-vehicle integration computation from each relay host, and integrates the three-dimensional coordinate information of each object with base map data to create dynamic map information;
wherein the cloud server transmits the dynamic map information back to the human-machine interface of each vehicle.
Another objective of the present invention is to provide a dynamic map data creation method capable of providing environment information, the method comprising:
a. sending a confirmation request from a vehicle-end device installed in a vehicle to a cloud server;
b. the cloud server, according to the confirmation request and with reference to the position of the vehicle, selecting a relay host adjacent to the vehicle and replying the selected relay host to the vehicle-end device;
c. the vehicle-end device transmitting the point cloud data and image data of the vehicle to the selected relay host, wherein the point cloud data is generated by a LiDAR sensor sensing the surrounding environment of the vehicle, and the image data is generated by a camera photographing the surrounding environment of the vehicle;
d. when the relay host receives point cloud data and image data provided by a plurality of vehicle-end devices, the relay host executing a multi-vehicle integration computation to merge the point cloud data of the vehicle-end devices and obtain three-dimensional coordinate information of objects in the vehicles' surrounding environment;
e. the cloud server receiving the result of the multi-vehicle integration computation executed by the relay host, and integrating the three-dimensional coordinate information of each object with base map data to create dynamic map information; and
f. the cloud server transmitting the dynamic map information back to each vehicle-end device, wherein the dynamic map information includes the three-dimensional coordinate information of the objects in the vehicle's surrounding environment.
The invention integrates the LiDAR sensing information of different vehicles on the road to enlarge each vehicle's sensing range and identify surrounding objects, and combines the coordinate information of those objects with an environment map to generate dynamic map information. The dynamic map information is transmitted back to the different vehicles to achieve resource sharing.
When the invention is applied to the field of self-driving, the dynamic map information can be a High Definition Map (HD MAP), and the control system of a self-driving vehicle can automatically plan a safe driving path according to the information presented on the HD MAP so as to avoid the obstacles identified on the road.
Drawings
FIG. 1: a schematic view of the system of the invention.
FIG. 2: a circuit block diagram of the system of the invention.
FIG. 3: a flow chart of the cooperation among the vehicle-end device, the relay host and the cloud server.
FIG. 4: two groups of point cloud data PA and PB detected by two LiDAR sensors.
FIG. 5: a flow chart of the iterative closest point (ICP) algorithm adopted by the invention.
FIG. 6: the two groups of point cloud data PA and PB of fig. 4 superimposed using the iterative closest point (ICP) algorithm.
FIG. 7A: a schematic diagram of the point cloud data obtained after a LiDAR sensor detects the environment.
FIG. 7B: a schematic diagram of the image data obtained after the camera captures the environment.
FIG. 8: a flow chart of the single-vehicle data computation of the invention.
FIG. 9: a schematic diagram of identifying an occluded zone from multi-vehicle sensing data.
FIG. 10: a schematic view of a blind zone or occluded zone of a vehicle.
Reference numerals
10 vehicle-end device 11 LiDAR sensor
12 camera 13 vehicle-mounted controller
14 data transmission unit 15 human-machine interface
20 relay host 30 cloud server
31 map database
O1 first obstacle O2 second obstacle
Z occluded zone
Detailed Description
Referring to fig. 1 and 2, the system of the present invention comprises a vehicle-end device 10 installed on each vehicle, relay hosts 20 dispersed throughout the road environment, and a cloud server 30.
The vehicle-end device 10 on each vehicle includes a LiDAR sensor 11, a camera 12, an onboard controller 13, a data transmission unit 14, and a human-machine interface 15. The LiDAR sensor 11 is connected to the onboard controller 13 and senses the environment around the vehicle to generate point cloud data. The camera 12 is connected to the onboard controller 13 and captures the environment around the vehicle to generate image data. The onboard controller 13 transmits the point cloud data and the image data outwards through the data transmission unit 14; a vehicle identification code (ID) unique to the vehicle is preset in the onboard controller 13 to distinguish the vehicle among multiple vehicles. The data transmission unit 14 is a wireless communication unit with a mobile communication function and is responsible for bidirectionally transmitting the data of the vehicle-end device 10 to the relay hosts 20 and the cloud server 30.
The relay hosts 20 are erected around the environment and establish vehicle-to-infrastructure (V2I) communication with the vehicle-end devices 10 of the vehicles around them; each relay host 20 has a unique host identification code (ID). The relay hosts 20 are also communicatively connected to the cloud server 30.
The cloud server 30 can access a map database 31, which stores basic environment information that serves as a base map. The cloud server 30 combines the data computed by the relay hosts 20 with the base map in the map database 31 to generate dynamic map information, and transmits the dynamic map information to the human-machine interface 15 of each vehicle-end device 10.
Referring to fig. 3, the cooperation process among the vehicle-end device 10, the relay host 20 and the cloud server 30 includes the following steps (a code sketch follows the steps):
S31: the vehicle-end device 10 sends a confirmation request to the cloud server 30 to ask whether a relay host 20 is nearby.
S32: the cloud server 30 selects the relay host 20 adjacent to the vehicle according to the vehicle's position, which it can obtain from the GPS positioning information on the vehicle, and provides the host identification code (ID) of that relay host 20 to the vehicle.
S33: after receiving the reply from the cloud server 30, the vehicle-end device 10 uploads the point cloud data and image data sensed by the vehicle, together with the vehicle identification code (ID), through the data transmission unit 14 to the relay host 20 selected by the cloud server 30.
S34: the relay host 20 determines whether to execute a single-vehicle data computation or a multi-vehicle integration computation according to the data received from the vehicle-end devices 10, and transmits the computation result together with its own host identification code (ID) to the cloud server 30. When only a single vehicle is near the relay host 20, the relay host 20 receives data from that vehicle alone and performs the single-vehicle data computation; if two or more vehicles are near the relay host 20, it receives their data separately and performs the multi-vehicle integration computation. Both computations are described in detail later.
S35: the cloud server 30 receives the computation result sent by the relay host 20 and integrates it with the base map data to create the dynamic map information. Since the computation result includes the three-dimensional coordinate information of the objects, the dynamic map information created by the cloud server 30 includes the coordinate information of the objects in the vehicle's surroundings.
S36: the cloud server 30 transmits the dynamic map information back to the human-machine interface 15 of the vehicle; when transmitting, the vehicle to which the dynamic map information should be sent can be determined from the vehicle's identification code (ID).
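To make the message flow of steps S31 to S36 concrete, the following is a minimal sketch in Python; the message fields (vehicle identification code, GPS position, point cloud, image) come from the description above, while all class and function names are illustrative placeholders rather than identifiers defined by the patent.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class ConfirmationRequest:        # S31: the vehicle asks for a nearby relay host
        vehicle_id: str
        gps_position: Tuple[float, float]   # (latitude, longitude) from the vehicle's GPS

    @dataclass
    class SensorUpload:               # S33: data uploaded to the assigned relay host
        vehicle_id: str
        point_cloud: List[Tuple[float, float, float]]   # LiDAR data points (x, y, z)
        image: bytes                  # camera frame

    def single_vehicle_computation(upload):
        ...                           # detailed in the fig. 8 sketch further below

    def multi_vehicle_integration(uploads):
        ...                           # ICP merge of the clouds; see the fig. 5 sketch below

    class CloudServer:
        def __init__(self, relay_positions):
            self.relay_positions = relay_positions      # {host_id: (latitude, longitude)}

        def assign_relay(self, req: ConfirmationRequest) -> str:
            # S32: reply with the host ID of the relay host nearest the vehicle
            def sq_dist(pos):
                return (pos[0] - req.gps_position[0]) ** 2 + (pos[1] - req.gps_position[1]) ** 2
            return min(self.relay_positions, key=lambda hid: sq_dist(self.relay_positions[hid]))

    class RelayHost:
        def __init__(self, host_id: str):
            self.host_id = host_id
            self.pending: List[SensorUpload] = []       # uploads received in the current cycle

        def handle_upload(self, upload: SensorUpload):
            self.pending.append(upload)

        def compute(self):
            # S34: one upload -> single-vehicle computation;
            # two or more -> multi-vehicle integration
            if len(self.pending) == 1:
                result = single_vehicle_computation(self.pending[0])
            else:
                result = multi_vehicle_integration(self.pending)
            return result, self.host_id                 # S35: forwarded to the cloud server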
Regarding the "multi-vehicle integration calculation" in step S34, when the relay host 20 receives the sensed data of more than two vehicles, the positions of the point cloud data measured by the photosensors 11 of the vehicles with respect to the same object are different because the positions of each vehicle are not identical. Referring to fig. 4, a first group of point cloud data PA and a second group of point cloud data PB are schematically shown, where the first group/second group of point cloud data PA and PB are obtained by detecting the same object by two different light-reaching sensors 11, and it can be seen that the positions of the two groups of point cloud data are slightly different, and each group of point cloud data has a plurality of data points; the invention utilizes an iterative closest point algorithm (ICP) to reduce the distance of each data point in two groups of point cloud data in a repeated iteration mode, namely, the data points of the two groups are aligned and overlapped as much as possible.
Referring to fig. 5, the calculation process of the iterative closest point algorithm (ICP) is not the original of the present invention, so the calculation steps are briefly described as follows:
S51: read the first group of point cloud data PA and the second group of point cloud data PB;
S52: search for corresponding points in the two groups of point cloud data PA and PB;
S53: calculate a transformation matrix from the correspondences;
S54: replace the coordinates of each data point in the first group of point cloud data PA using the transformation matrix;
S55: calculate the current root mean square (RMS) error and compare it with the RMS error of the previous iteration to obtain an error change value;
S56: judge whether the error change value is smaller than a preset threshold; if so, the first group of point cloud data PA and the second group of point cloud data PB are aligned and the computation ends; if not, return to step S52.
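As a concrete illustration, a minimal sketch of steps S51 to S56 in Python follows, assuming the numpy and scipy libraries are available; the SVD-based rigid alignment used for step S53 is one standard way to obtain the transformation matrix and is an assumption, not necessarily the exact variant adopted in practice.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_align(pa, pb, max_iter=50, eps=1e-6):
        """Iteratively align point cloud pa (N x 3) onto point cloud pb (M x 3)."""
        pa = pa.copy()
        tree = cKDTree(pb)                      # S51: both clouds read; index pb once
        prev_rmse = np.inf
        for _ in range(max_iter):
            dist, idx = tree.query(pa)          # S52: closest point in pb for each point of pa
            matched = pb[idx]
            ca, cb = pa.mean(axis=0), matched.mean(axis=0)
            H = (pa - ca).T @ (matched - cb)    # S53: transformation from the correspondences
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:            # guard against a reflection solution
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = cb - R @ ca
            pa = pa @ R.T + t                   # S54: replace the coordinates of each data point
            rmse = np.sqrt((dist ** 2).mean())  # S55: current RMS error
            if abs(prev_rmse - rmse) < eps:     # S56: stop when the error change is small enough
                break
            prev_rmse = rmse
        return pa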
Referring to fig. 6, after the first group of point cloud data PA and the second group of point cloud data PB are processed by the iterative closest point (ICP) algorithm, the data points in the two groups are aligned with each other. In other words, the data measured by the LiDAR sensors 11 on two vehicles can be merged whenever the vehicles share a common sensing area, with the coordinate position error between them reduced. Because different vehicles have different perception ranges, combining the perception areas of all vehicles yields a wider perception range and reduces the blind zones or occluded zones of any single vehicle. After the relay host 20 completes the multi-vehicle integration computation, the merged point cloud data can be transmitted to the cloud server 30 for subsequent map data computation; because each data point in the point cloud carries three-dimensional information, the merged point cloud data represent the three-dimensional coordinate information of the objects.
On the other hand, regarding the single-vehicle data computation of step S34: when the relay host 20 receives the sensed data of a single vehicle, it performs the computation based on the point cloud data and image data provided by that vehicle. Referring to fig. 7A and 7B, after sensing the same environment, the LiDAR sensor 11 and the camera 12 on the vehicle respectively produce point cloud data and image data of that environment. The image data show that there is a pedestrian behind a vehicle, but because the pedestrian is partially occluded by the vehicle, it is not easy to determine directly from the point cloud data whether a pedestrian is present. Therefore, to correctly identify occluded objects, the relay host 20 executes the single-vehicle data computation shown in fig. 8, whose flow includes the following steps (a code sketch follows the steps):
S81: identify objects such as vehicles, pedestrians, and pedestrian-like objects (e.g., motorcyclists and cyclists) in the image data using conventional image recognition techniques, and frame the location of each object with a bounding box.
S82: detect obstacles in the point cloud data to obtain object bounding boxes, project the bounding boxes onto the image, and integrate the point cloud data with the image data.
S83: judge whether a bounding-box object in the point cloud data also exists in the image data; if so, the object is a bright-area object, and because each data point in the point cloud carries three-dimensional information, the three-dimensional coordinate information of the bright-area object can be obtained. For example, if a vehicle appears in both the image data and the point cloud data, it is a bright-area object; once the two detections are judged to be the same object, its three-dimensional coordinate information can be obtained from the point cloud data.
S84: judge whether the image contains a bounding-box object that was not detected in the point cloud data; if so, that object is a dark-area object of the point cloud data. For example, the pedestrian behind the vehicle is a dark-area object in the point cloud data; if a pedestrian is recognized at the corresponding position in the image data, the dark-area object is determined to be a pedestrian.
S85: project the bounding box of the dark-area object obtained in step S84 onto the point cloud data to obtain the three-dimensional coordinate information of the dark-area object. Since step S84 establishes that a pedestrian exists and yields the pedestrian's bounding-box position in the image data, projecting that bounding box to the corresponding position in the point cloud data determines the position of the dark-area object in the point cloud.
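A simplified sketch of steps S81 to S85 in Python follows; the helper functions (detect_image_objects, detect_lidar_boxes, project_box, back_project) are hypothetical placeholders for a conventional 2D detector, a 3D obstacle detector and calibrated camera-LiDAR projection, and the intersection-over-union matching rule is an assumed way of judging "the same object" in step S83.

    from dataclasses import dataclass
    from typing import List, Tuple

    Box2D = Tuple[float, float, float, float]   # (x1, y1, x2, y2) in image pixels

    @dataclass
    class ImageDetection:
        label: str                              # e.g. "vehicle", "pedestrian"
        box: Box2D

    # Hypothetical helpers, standing in for a conventional 2D detector, a 3D
    # obstacle detector, and calibrated camera/LiDAR projection:
    def detect_image_objects(image) -> List[ImageDetection]: ...
    def detect_lidar_boxes(cloud) -> list: ...
    def project_box(box3d, calib) -> Box2D: ...
    def back_project(box2d, cloud, calib): ...

    def iou(a: Box2D, b: Box2D) -> float:
        """Intersection-over-union of two 2D boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    def fuse_single_vehicle(image, cloud, calib, iou_thr=0.5):
        img_dets = detect_image_objects(image)          # S81: 2D boxes + labels
        lidar_boxes = detect_lidar_boxes(cloud)         # S82: 3D bounding boxes
        projected = [project_box(b, calib) for b in lidar_boxes]   # S82: onto the image
        bright, dark, matched = [], [], set()
        for pbox, lbox in zip(projected, lidar_boxes):
            # S83: a LiDAR box that overlaps an image detection is a bright-area
            # object, and its 3D coordinates come directly from the point cloud
            best = max(range(len(img_dets)),
                       key=lambda i: iou(pbox, img_dets[i].box), default=None)
            if best is not None and iou(pbox, img_dets[best].box) >= iou_thr:
                bright.append((img_dets[best].label, lbox))
                matched.add(best)
        for i, det in enumerate(img_dets):
            if i not in matched:
                # S84: an image detection with no LiDAR counterpart is a dark-area
                # object; S85: project its box back into the cloud for 3D coordinates
                dark.append((det.label, back_project(det.box, cloud, calib)))
        return bright, dark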
After the relay host 20 finishes the single-vehicle data computation, the three-dimensional coordinate information of every object, whether in a bright area or a dark area, is obtained from the point cloud data, and this three-dimensional coordinate information is likewise transmitted to the cloud server 30 for subsequent map data computation.
After receiving the three-dimensional coordinate information of the objects computed by the relay hosts 20, the cloud server 30 combines it with the base map data to obtain the dynamic map information, which is transmitted back to the human-machine interface 15 of each vehicle for the vehicle's use; a sketch of this merging step follows. In a preferred embodiment, the generated dynamic map information is a High Definition Map (HD MAP) for the self-driving system to perform automatic control. In another preferred embodiment, the generated dynamic map information may instead be a map for the driver's viewing reference.
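As a final illustration, a minimal sketch of how the cloud server 30 might overlay the computed object coordinates on the base map data is shown below; the patent does not specify a map data structure, so the dictionary layout is purely an assumption.

    def build_dynamic_map(base_map: dict, objects) -> dict:
        """Overlay detected objects (label, 3D coordinates) on the static base map."""
        dynamic_map = dict(base_map)            # keep the static base-map layers
        dynamic_map["objects"] = [              # add a dynamic object layer
            {"label": label, "xyz": tuple(xyz)} for label, xyz in objects
        ]
        return dynamic_map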
In summary, the present invention integrates the sensing information of multiple vehicles or of a single vehicle, and has the following features and preferred applications:
firstly, perception optimization of a shading area or a blind area: referring to fig. 9, for car a, it is originally only able to identify the first obstacle O1, but not able to identify the second obstacle O2 located in the sheltered zone Z, and the present invention can further integrate the light-reaching point cloud data of multiple cars, for example, car B can sense the second obstacle O2, when the point cloud data of two cars are integrated, the sensing range of the original single car can be expanded, and after the integrated data is provided to each car, car a can also smoothly know the second obstacle O2 in the sheltered zone Z, so as to optimize the sensing ability of self-driving, and facilitate planning of a driving route by self-driving.
Second, enriching the dynamic information on the HD MAP: through vehicle-to-infrastructure (V2I) communication, each relay host 20 can receive real-time dynamic information provided by different vehicles (such as blind zones, obstacle identification, drivable route space, etc.), so the dynamic map information generated by integration is up to date and can be referenced by the self-driving system for path planning. Different vehicles can share the dynamic map information to obtain environment information.
Third, distributed computation: the present invention uses the relay hosts 20 at each location to perform front-end computation, which reduces the computational burden on the cloud server 30 and lets each vehicle quickly obtain more real-time road environment information.

Claims (14)

1. A dynamic map data creation system capable of providing environment information, comprising:
a plurality of vehicle-end devices to be installed in different vehicles respectively, each vehicle-end device comprising:
a LiDAR sensor for sensing the environment surrounding the vehicle to generate point cloud data;
a camera for photographing the environment around the vehicle to generate image data;
a vehicle-mounted controller connected with the LiDAR sensor, the camera and a data transmission unit, for controlling the point cloud data and the image data to be transmitted outwards through the data transmission unit; and
a human-machine interface;
a plurality of relay hosts, wherein each relay host receives the point cloud data and image data transmitted by the vehicle-end devices around it and executes a multi-vehicle integration computation to merge the point cloud data of the vehicle-end devices and obtain three-dimensional coordinate information of objects in the vehicles' surrounding environment; and
a cloud server, which communicates with each vehicle-end device and each relay host, receives the results of the multi-vehicle integration computation from each relay host, and integrates the three-dimensional coordinate information of each object with a base map carrying basic environment information to create dynamic map information;
wherein the cloud server transmits the dynamic map information back to the human-machine interface of each vehicle.
2. The system of claim 1, wherein when the relay host receives only the point cloud data and image data provided by a single vehicle-end device, the relay host performs a single-vehicle data computation to identify the three-dimensional coordinate information of objects in the vehicle's surrounding environment according to the point cloud data and image data provided by that vehicle-end device, the three-dimensional coordinate information being provided to the cloud server.
3. The system as claimed in claim 1 or 2, wherein the dynamic map information created by the cloud server is high-definition map information.
4. The system of claim 1 or 2, wherein the multi-vehicle integration computation uses an iterative closest point algorithm to merge the point cloud data of each vehicle-end device.
5. The system of claim 4, wherein the relay host receives a first group of point cloud data provided by a first vehicle and a second group of point cloud data provided by a second vehicle, the first and second groups of point cloud data each comprising a plurality of data points, and the iterative closest point algorithm comprises:
reading the first group of point cloud data and the second group of point cloud data;
searching for corresponding points in the first group of point cloud data and the second group of point cloud data;
calculating a transformation matrix from the correspondences;
replacing the coordinates of each data point in the first group of point cloud data using the transformation matrix;
calculating the current root mean square (RMS) error and comparing it with the RMS error of the previous iteration to obtain an error change value;
and judging whether the error change value is smaller than a preset threshold; if so, the first group of point cloud data and the second group of point cloud data are aligned and the computation ends; if not, repeating from the step of searching for corresponding points in the first group of point cloud data and the second group of point cloud data.
6. The system as claimed in claim 1 or 2, wherein the cloud server accesses a map database, and the map database stores the base map data.
7. A dynamic map data creation method capable of providing environment information, comprising:
a. sending a confirmation request from a vehicle-end device installed in a vehicle to a cloud server;
b. the cloud server, according to the confirmation request and with reference to the position of the vehicle, selecting a relay host adjacent to the vehicle and replying the selected relay host to the vehicle-end device;
c. the vehicle-end device transmitting the point cloud data and image data of the vehicle to the selected relay host, wherein the point cloud data is generated by a LiDAR sensor sensing the surrounding environment of the vehicle, and the image data is generated by a camera photographing the surrounding environment of the vehicle;
d. when the relay host receives point cloud data and image data provided by a plurality of vehicle-end devices, the relay host executing a multi-vehicle integration computation to merge the point cloud data of the vehicle-end devices and obtain three-dimensional coordinate information of objects in the vehicles' surrounding environment;
e. the cloud server receiving the result of the multi-vehicle integration computation executed by the relay host, and integrating the three-dimensional coordinate information of each object with base map data to create dynamic map information; and
f. the cloud server transmitting the dynamic map information back to each vehicle-end device, wherein the dynamic map information includes the three-dimensional coordinate information of the objects in the vehicle's surrounding environment.
8. The method of claim 7, further comprising:
when the relay host receives point cloud data and image data provided by a single vehicle-end device, the relay host executes a single-vehicle data computation, identifies the three-dimensional coordinate information of objects in the vehicle's surrounding environment according to the point cloud data and image data provided by that vehicle-end device, and provides the three-dimensional coordinate information to the cloud server.
9. The method as claimed in claim 7 or 8, wherein when the cloud server replies to the vehicle-end device with the selected relay host, the cloud server transmits a host identification code of the relay host to the vehicle-end device;
and the vehicle-end device transmits the point cloud data and image data of the vehicle to the selected relay host according to the host identification code, and further transmits a vehicle identification code of the vehicle-end device to the relay host.
10. The method of claim 7, wherein the relay host performs the multi-vehicle integration computation using an iterative closest point algorithm to merge the point cloud data of each vehicle-end device.
11. The method of claim 10, wherein the relay host receives a first group of point cloud data provided by a first vehicle and a second group of point cloud data provided by a second vehicle, the first and second groups of point cloud data each comprising a plurality of data points, and the iterative closest point algorithm comprises:
reading the first group of point cloud data and the second group of point cloud data;
searching for corresponding points in the first group of point cloud data and the second group of point cloud data;
calculating a transformation matrix from the correspondences;
replacing the coordinates of each data point in the first group of point cloud data using the transformation matrix;
calculating the current root mean square (RMS) error and comparing it with the RMS error of the previous iteration to obtain an error change value;
and judging whether the error change value is smaller than a preset threshold; if so, the first group of point cloud data and the second group of point cloud data are aligned and the computation ends; if not, repeating from the step of searching for corresponding points in the first group of point cloud data and the second group of point cloud data.
12. The method as claimed in claim 8, wherein in performing the single-vehicle data computation the relay host performs the following steps:
recognizing bounding boxes of objects in the image data and framing the position of each object;
detecting obstacles in the point cloud data to obtain object bounding boxes;
judging whether a bounding box in the point cloud data also exists in the image data; if the bounding box exists in the image data and corresponds to the same object, the object is a bright-area object, and the three-dimensional coordinate information of the bright-area object is acquired from the point cloud data;
judging whether the image contains a bounding box not detected in the point cloud data; if so, the object corresponding to that bounding box is a dark-area object of the point cloud data;
and projecting the obtained bounding box of the dark-area object onto the point cloud data to obtain the three-dimensional coordinate information of the dark-area object.
13. The method as claimed in claim 12, wherein the recognizable object types in the image data include vehicles, pedestrians and pedestrian-like objects.
14. The method as claimed in claim 7 or 8, wherein the dynamic map information created by the cloud server is high-definition map information.
CN201910725303.2A 2019-08-07 2019-08-07 Dynamic graph resource establishing method and system capable of providing environment information Pending CN112348993A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910725303.2A CN112348993A (en) 2019-08-07 2019-08-07 Dynamic graph resource establishing method and system capable of providing environment information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910725303.2A CN112348993A (en) 2019-08-07 2019-08-07 Dynamic graph resource establishing method and system capable of providing environment information

Publications (1)

Publication Number Publication Date
CN112348993A true CN112348993A (en) 2021-02-09

Family

ID=74366575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910725303.2A Pending CN112348993A (en) 2019-08-07 2019-08-07 Dynamic graph resource establishing method and system capable of providing environment information

Country Status (1)

Country Link
CN (1) CN112348993A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103770705A (en) * 2012-10-17 2014-05-07 现代摩比斯株式会社 System for providing image based on V2I communication and method for providing image using the same
US20140277939A1 (en) * 2013-03-14 2014-09-18 Robert Bosch Gmbh Time and Environment Aware Graphical Displays for Driver Information and Driver Assistance Systems
CN104809900A (en) * 2014-01-29 2015-07-29 泓格科技股份有限公司 Two-way vehicle information integration and publishing system
CN106896393A (en) * 2015-12-21 2017-06-27 财团法人车辆研究测试中心 Vehicle cooperating type object positioning and optimizing method and vehicle co-located device
KR20180040759A (en) * 2016-10-12 2018-04-23 한국전자통신연구원 Device for sharing and learning driving environment data for improving the intelligence judgments of autonomous vehicle and method thereof
US20190068943A1 (en) * 2017-08-28 2019-02-28 Denso International America, Inc. Environment Perception System for Autonomous Vehicle
CN109902542A (en) * 2017-12-11 2019-06-18 财团法人车辆研究测试中心 The dynamic ground method for detecting of three-dimensional sensor
CN108458745A (en) * 2017-12-23 2018-08-28 天津国科嘉业医疗科技发展有限公司 A kind of environment perception method based on intelligent detection equipment
CN108010360A (en) * 2017-12-27 2018-05-08 中电海康集团有限公司 A kind of automatic Pilot context aware systems based on bus or train route collaboration
CN109100730A (en) * 2018-05-18 2018-12-28 北京师范大学-香港浸会大学联合国际学院 A kind of fast run-up drawing method of more vehicle collaborations
CN109085608A (en) * 2018-09-12 2018-12-25 奇瑞汽车股份有限公司 Obstacles around the vehicle detection method and device
CN109901193A (en) * 2018-12-03 2019-06-18 财团法人车辆研究测试中心 The light of short distance barrier reaches arrangement for detecting and its method
CN110008843A (en) * 2019-03-11 2019-07-12 武汉环宇智行科技有限公司 Combine cognitive approach and system based on the vehicle target of cloud and image data

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12018959B2 (en) 2022-01-06 2024-06-25 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods of cooperative depth completion with sensor data sharing

Similar Documents

Publication Publication Date Title
CN108572663B (en) Target tracking
US10318822B2 (en) Object tracking
CN110979321B (en) Obstacle avoidance method for unmanned vehicle
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
CN109949594B (en) Real-time traffic light identification method
CN112396650B (en) Target ranging system and method based on fusion of image and laser radar
JP6682833B2 (en) Database construction system for machine learning of object recognition algorithm
US11010927B2 (en) Method and system for generating dynamic map information capable of providing environment information
KR102604453B1 (en) Method and system for automatically labeling radar data
WO2018020954A1 (en) Database construction system for machine-learning
US20100098295A1 (en) Clear path detection through road modeling
US20210231460A1 (en) Change point detection device and map information distribution system
KR20200001471A (en) Apparatus and method for detecting lane information and computer recordable medium storing computer program thereof
US20210364321A1 (en) Driving information providing method, and vehicle map providing server and method
CN112666535A (en) Environment sensing method and system based on multi-radar data fusion
US20230237783A1 (en) Sensor fusion
KR20210122101A (en) Radar apparatus and method for classifying object
CN107545760B (en) Method for providing positioning information for positioning a vehicle at a positioning location and method for providing information for positioning a vehicle by means of another vehicle
CN112348993A (en) Dynamic graph resource establishing method and system capable of providing environment information
CN113459951A (en) Vehicle exterior environment display method and device, vehicle, equipment and storage medium
US11435191B2 (en) Method and device for determining a highly precise position and for operating an automated vehicle
CN111650604A (en) Method for realizing accurate detection of self-vehicle and peripheral obstacles by using accurate positioning
CN212044739U (en) Positioning device and robot based on inertial data and visual characteristics
TWI747016B (en) Dynamic map data creation method and system capable of providing environmental information
CN113611008B (en) Vehicle driving scene acquisition method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination