CN114830616A - Driver assistance system, crowdsourcing module, method and computer program - Google Patents
Driver assistance system, crowdsourcing module, method and computer program
- Publication number
- CN114830616A (application CN202080083019.0A)
- Authority
- CN
- China
- Prior art keywords
- data
- environment
- crowdsourcing
- autonomous vehicle
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention relates to a driver assistance system (6) in an autonomous vehicle (5) comprising a sensor system designed to record an environmental view in the form of environmental data, wherein a communication unit (8) is provided which is designed to send the environmental data to a crowdsourcing module (3), wherein the environmental data comprise at least the position of the autonomous vehicle (5), wherein the communication unit (8) is further designed to receive vehicle-specific crowdsourcing data from the crowdsourcing module (3), and wherein the vehicle-specific crowdsourcing data, together with the environmental data of the autonomous vehicle (5), form an extended or supplemented 3D environment model of the environmental view of the autonomous vehicle (5). The invention also relates to a crowdsourcing module, a method for operating such a driver assistance system, and a computer program.
Description
Technical Field
The invention relates to a driver assistance system in an autonomous vehicle comprising a sensor system designed for recording environmental data of an environmental view. The invention also relates to a crowdsourcing module, a method for operating such a driver assistance system and a computer program.
Background
It is known that driver assistance systems require information about the environment of the autonomous vehicle. This information can be obtained in real time by the vehicle's own sensor system only in part. In particular, the sensor system's coverage of the field of view of the vehicle environment is insufficient in important cases. In addition, the processing time and the acquisition of the environmental data are critical; both are generally slow. Two forms of driver assistance system are generally distinguished.
On the one hand, there are driver assistance systems that intervene laterally and/or longitudinally in vehicle control. On the other hand, there are driver assistance systems that display information about the environment for the driver, indicate specific situations, or warn the driver of dangerous situations. The latter is the case, for example, with parking assistance systems offering a 360° view.
However, especially in traffic areas with poor visibility (e.g. intersections), the environmental view cannot be covered by the vehicle's sensor system alone. Furthermore, real-time processing of the sensor data is also a significant challenge.
EP 2 817 785 B1 discloses a system for generating a virtual 3D environment model and for sharing an environment comprising the virtual 3D environment model, the system comprising: a network for receiving images; an image processing server connected to the network for receiving the images, wherein the server processes the images to build a virtual 3D environment model of one or more objects in the vicinity of a point of interest based at least in part on the images; and an experience platform connected to the image processing server for storing the virtual 3D environment model of the one or more objects, wherein users can connect to the experience platform.
Disclosure of Invention
It is an object of the present invention to provide a means for achieving an improved view of the environment.
This object is achieved by a driver assistance system having the features of claim 1 and a crowdsourcing module having the features of claim 6.
Furthermore, the object is achieved by a method according to claim 9 and a computer program according to claim 14.
Further advantageous measures are listed in the dependent claims, which measures can be combined with each other to achieve further advantages.
This object is achieved by a driver assistance system in an autonomous vehicle comprising a sensor system designed to record an environmental view in the form of environmental data, wherein a communication unit is provided which is designed to send the environmental data to a crowdsourcing module, wherein the environmental data comprise at least the position of the autonomous vehicle, and the communication unit is further designed to receive vehicle-specific crowdsourcing data from the crowdsourcing module, wherein the vehicle-specific crowdsourcing data, together with the environmental data of the autonomous vehicle, form an extended or supplemented 3D environment model of the environmental view of the autonomous vehicle.
The environmental view is a view of the environment of the autonomous vehicle.
The traffic participants are in particular other vehicles, such as passenger cars (PKW), but also motorcycles and the like, or pedestrians carrying, for example, a portable camera. Devices located in the environment and equipped with corresponding sensors (in particular permanently installed devices) can also act as traffic participants that provide environmental data to the crowdsourcing module.
The crowdsourcing module is preferably configured to receive vehicle sensor data or traffic data from one or more contributing vehicles or traffic participants. These vehicles or traffic participants preferably move in the service area.
Vehicle-specific crowdsourcing data are data that have been acquired by other traffic participants and can therefore be used to supplement or extend the environmental view of the autonomous vehicle. These data are determined using the position of the autonomous vehicle; all available crowdsourcing data can be used for this. According to the invention, an extension or supplementation of the environmental view can be achieved, for example at an intersection, if the individual vehicle perspectives of the other traffic participants at the intersection are included as additional data, since the different traffic participants each have a different environmental view. A 3D environment model extending or supplementing the environmental view is built from the additional data and the environmental data of the autonomous vehicle.
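The patent leaves the concrete data format open. Purely for illustration, a minimal Python sketch of how the environmental data sent by a traffic participant and the vehicle-specific crowdsourcing data returned by the module could be structured (all field names are assumptions, not part of the disclosure):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EnvironmentReport:
    """Environmental data sent by one traffic participant (hypothetical format)."""
    sender_id: str                 # unique ID of the vehicle or traffic participant
    position: Tuple[float, float]  # at least the own position, e.g. (lat, lon)
    heading_rad: float             # direction of travel, usable for customization
    view_payload: bytes            # encoded environmental view, e.g. video frames

@dataclass
class VehicleCrowdsourcingData:
    """Vehicle-specific crowdsourcing data returned by the module (hypothetical)."""
    target_vehicle_id: str         # the vehicle this model/description is customized for
    model_payload: bytes           # 3D environment model or scene description
    contributing_ids: List[str] = field(default_factory=list)  # merged additional views Z
```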
Occlusions of the autonomous vehicle's view are counteracted and compensated by the sensor systems of the other traffic participants. The relevant environment of the autonomous vehicle can thus be presented in an improved way in complex situations (e.g. an intersection or a junction with poor visibility that cannot be covered by the autonomous vehicle's own sensor system).
The timely building of a 3D environment model based on environmental data acquired while driving is made possible by external processing in the crowdsourcing module. There, the additional data transmitted separately by the different vehicles can be reconciled with the environmental data. This makes it possible to assist the driver with a "real-time 3D view" in situations with poor visibility.
By means of the invention, a 3D environment model is thus provided to the autonomous vehicle, for example at dangerous intersections with poor visibility.
Preferably, the communication unit transmits the environmental data in real time and receives the crowdsourcing data in real time. Real-time operation is achieved by moving computationally intensive processing steps from the autonomous vehicle into the crowdsourcing module. This keeps the latency (delay time) for providing the 3D environment model low. Owing to the real-time presentation, the driver of the autonomous vehicle can act in a more controlled and anticipatory manner without noticing any delay.
Furthermore, the communication unit is designed to receive the vehicle-specific crowdsourcing data as a vehicle-specific video stream or by broadcast. When received by broadcast, the vehicle-specific crowdsourcing data must first be extracted from the data stream.
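A minimal sketch of this extraction step, assuming a broadcast message format that carries a target vehicle ID (the patent does not specify the format):

```python
def extract_own_crowdsourcing_data(broadcast_stream, own_vehicle_id):
    """Filter a broadcast stream down to the vehicle-specific crowdsourcing
    data addressed to this vehicle (hypothetical message format)."""
    for message in broadcast_stream:
        if message.target_vehicle_id == own_vehicle_id:
            yield message.model_payload
```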
In a further preferred embodiment, the vehicle-specific crowdsourcing data take the form of a scene description, wherein the driver assistance system is further designed to generate the 3D environment model from the received scene description. This reduces the amount of data to be transmitted from the crowdsourcing module to the autonomous vehicle. Based on the scene description, the fields of view of the respective traffic participants are reconstructed in the 3D environment model as a supplement or extension of the environmental view of the autonomous vehicle.
Preferably, the driver assistance system has a display unit for presenting the 3D environment model, wherein the driver assistance system is designed to present opaquely (intransparent darstellen) the environmental data originally recorded by the autonomous vehicle and to present semi-transparently (semitransparent darstellen) the environmental data not originally generated by the autonomous vehicle. The added views can thus be clearly highlighted. The data to be presented semi-transparently can, for example, be marked by the crowdsourcing module so that they are easy to distinguish.
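How the display unit could evaluate such a marking is sketched below; the `marked_external` flag and the alpha values are assumptions, since the patent only requires the opaque/semi-transparent distinction:

```python
def alpha_for(scene_object) -> float:
    """Environmental data recorded by the ego vehicle are drawn fully opaque;
    data contributed by other traffic participants are drawn semi-transparently."""
    return 0.5 if scene_object.marked_external else 1.0  # marking set by the crowdsourcing module

def render(scene_objects, display):
    for obj in scene_objects:
        display.draw(obj, alpha=alpha_for(obj))  # hypothetical display API
```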
Furthermore, the object is achieved by a crowdsourcing module for generating a 3D environment model based on different perspectives, wherein the crowdsourcing module is designed to receive environmental data presenting an environmental view from traffic participants in the environment, and wherein the traffic participants are at different positions in a predefined service area, the service area being an area around a predefined position;
and wherein the crowdsourcing module is designed to generate, based on the environmental data, a 3D environment model or a scene description, or a 3D environment model or a scene description customized for the respective traffic participant;
and wherein the crowdsourcing module is designed to send the 3D environment model or the scene description, or the 3D environment model or scene description customized for the respective traffic participant, as vehicle-specific crowdsourcing data to the respective traffic participant in the service area.
In this case, the 3D environment model or scene description customized for a traffic participant is an extension or refinement of the environmental data recorded by that traffic participant.
Based on the sensor systems of the traffic participants (such as cameras, GPS and radar), the crowdsourcing module can compute the 3D environment model from the different perspectives of the traffic participants. Thus, for example, an occlusion of the field of view affecting a first traffic participant can be counteracted by the sensor devices of the other traffic participants, in that a 3D environment model is generated for the first traffic participant with the aid of the sensor systems of the other traffic participants. In this 3D environment model, the environmental data are enriched with the data of the other traffic participants, so that the occlusion of the field of view can be compensated.
Preferably, the crowdsourcing module is designed to be provided in a mobile edge cloud. This mobile edge cloud can also be referred to as an edge server. The edge server is preferably a network element located at the edge of the network. This enables fast and secure transmission from the individual traffic participants. Preferably, the edge server is installed near the service area; it can, for example, be installed in a traffic light.
Preferably, the crowdsourcing module is connected to a backend. For example, the received environmental data and/or the generated 3D environment model or scene description can be stored in the backend. Preferably, the crowdsourcing module is designed to exclude from the transfer of 3D environment models or scene descriptions (customized or not) any traffic participants who are not, or are no longer, in the service area. The 3D environment model or scene description is therefore available to all traffic participants in the service area; no 3D environment model or scene description is sent to traffic participants who have left the local service area.
If traffic participants are located in a service area, for example at an intersection, they are captured by the crowdsourcing module, and their own position and environmental data are exchanged against the environmental data of the other traffic participants in the service area.
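One simple realization of the service area L is a radius around the predefined position; the following geofence check (radius and approximation are illustrative assumptions) would decide whether a traffic participant exchanges data with the module:

```python
import math

EARTH_RADIUS_M = 6371000.0

def in_service_area(position, center, radius_m=150.0):
    """Check whether a traffic participant lies inside the service area L
    around a predefined position (e.g. an intersection). Positions are
    (lat, lon) in degrees; the equirectangular approximation is adequate
    for service areas of a few hundred meters."""
    lat1, lon1 = map(math.radians, position)
    lat2, lon2 = map(math.radians, center)
    x = (lon2 - lon1) * math.cos(0.5 * (lat1 + lat2))
    y = lat2 - lat1
    return EARTH_RADIUS_M * math.hypot(x, y) <= radius_m
```

A traffic participant failing this check would neither be captured by the crowdsourcing module nor be sent a 3D environment model or scene description.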
Furthermore, the object is achieved by a method for operating a driver assistance system in an autonomous vehicle, comprising the steps of:
- recording an environmental view in the form of environmental data by means of a sensor system of the driver assistance system;
- sending the environmental data to a crowdsourcing module, wherein the environmental data comprise at least the position of the autonomous vehicle;
- receiving, from the crowdsourcing module, vehicle-specific crowdsourcing data, wherein the vehicle-specific crowdsourcing data, together with the environmental data of the autonomous vehicle, form an extended or supplemented 3D environment model of the environmental view of the autonomous vehicle.
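On the vehicle side, these steps form a simple cycle. The following sketch assumes hypothetical `sensor`, `comm` and `display` interfaces and a hypothetical merge helper; none of these are specified by the patent:

```python
def build_3d_model(own_view, crowd_data):
    """Hypothetical merge step: combine the own environmental view with the
    vehicle-specific crowdsourcing data into the extended/supplemented model."""
    return {"own_view": own_view, "added": list(crowd_data)}

def driver_assistance_cycle(sensor, comm, display, vehicle_id):
    """One cycle of the method: record, send, receive, present."""
    view = sensor.record_environment()         # step 1: environmental view as environmental data
    position = sensor.localize()               # the data comprise at least the own position
    comm.send_environment_data(vehicle_id, position, view)    # step 2
    crowd_data = comm.receive_crowdsourcing_data(vehicle_id)  # step 3
    display.show(build_3d_model(view, crowd_data))  # present the extended 3D environment model
```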
This allows improved coverage of the environment of the autonomous vehicle in very complex traffic situations, such as intersections, junctions with poor visibility, etc.
Furthermore, using the crowdsourcing module to provide the 3D environment model in real time results in low latency. Preferably, the environmental data are transmitted in real time and the crowdsourcing data are received in real time.
In a further preferred embodiment, upon receiving a scene description, the driver assistance system generates the 3D environment model based on the received scene description in order to refine and/or extend the environmental view generated from the environmental data.
Preferably, the environmental data originally recorded by the autonomous vehicle are presented opaquely and the environmental data not originally generated by the autonomous vehicle are presented semi-transparently.
Preferably, the vehicle-specific crowdsourcing data are received by the driver assistance system only within a service area predefined by the crowdsourcing module.
Furthermore, the object is achieved by a computer program comprising instructions which cause a driver assistance system as described above to carry out a method as described above. By means of the computer program, the method according to the invention can therefore also be retrofitted very simply to an autonomous vehicle.
Drawings
Other features and advantages of the present invention will appear from the following description, with reference to the accompanying drawings. In which are schematically shown:
Fig. 1 shows a first embodiment of the driver assistance system of the invention in an autonomous vehicle at an intersection,
Fig. 2 shows a schematic representation of a method with a 3D environment model,
Fig. 3 shows a schematic representation of a method with a scene description.
Detailed Description
Fig. 1 shows a first embodiment of the driver assistance system 6 according to the invention in an autonomous vehicle 5 at an intersection 1. The autonomous vehicle 5 has a sensor system (e.g. camera, lidar sensor) by means of which an environmental view S, i.e. the driver's perspective, can be acquired as environmental data. This environmental view is limited by the acquisition range of the sensor system of the autonomous vehicle 5 and by occlusions of the field of view, and is therefore incomplete. The relevant environment of the autonomous vehicle 5 at this complex intersection cannot be covered by the sensor system of the autonomous vehicle 5 alone.
The driver assistance system 6 according to the invention transmits the environmental view, in the form of environmental data, to the crowdsourcing module 3 by means of the communication unit 8. Furthermore, the autonomous vehicle 5 sends its position (data) to the crowdsourcing module 3. The crowdsourcing module 3 is designed to be provided in a mobile edge cloud and is connected to the backend 2 for data processing and/or storage.
The crowdsourcing module 3 is designed to receive data from a predefined service area L. This means that in principle all traffic participants 7a, 7b, 7c, 7d located in the service area L can send data to the crowdsourcing module 3. Located in the service area L are the traffic participants 7a, 7b, 7c, 7d, each of which has its own vehicle perspective by means of its sensor system and therefore (viewed from the autonomous vehicle 5) a further view of the environment, referred to here as an additional view Z.
The additional views Z generated by the traffic participants 7a, 7b, 7c, 7d are sent as data to the crowdsourcing module 3. These data can likewise contain the position data of the traffic participants 7a, 7b, 7c, 7d.
The data are processed in real time in the crowdsourcing module 3 by means of an algorithm 4; for example, the data can be transposed into a common coordinate system or otherwise merged. The processing can also be performed partly or completely in the backend 2.
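The transposition into a common coordinate system can be sketched as a rigid 2D transform of each participant's detections using its transmitted pose. This is one plausible simplification; the patent does not prescribe how algorithm 4 works:

```python
import numpy as np

def to_common_frame(local_points, pose):
    """Transform object positions detected in a participant's own vehicle
    frame into the common (service-area) frame. `pose` = (x, y, heading)
    of the participant in the common frame; `local_points` is (N, 2)."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    rotation = np.array([[c, -s], [s, c]])
    return np.asarray(local_points) @ rotation.T + np.array([x, y])
```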
The crowdsourcing module 3 generates a 3D environment model or scene description based on the transmitted environmental view of the autonomous vehicle 5 and the additional views Z of the traffic participants 7a, 7b, 7c, 7d. The 3D environment model or scene description is represented by the crowdsourcing data.
The customized 3D environment model or the customized scene description is then transmitted to the autonomous vehicle 5 in the form of vehicle-specific crowdsourcing data.
Such customization can consist, for example, in enriching the environmental view or the environmental data with the data of the additional views Z in order to eliminate occlusions of the view, or in adding all data lying in the direction of the line of sight of the autonomous vehicle 5.
The vehicle-specific crowdsourcing data thus obtained are transmitted to the autonomous vehicle 5, where they are presented as a 3D environment model on a display unit.
The vehicle-specific crowdsourcing data can be transmitted from the crowdsourcing module 3 to the autonomous vehicle 5, for example, as a vehicle-specific video stream or by broadcast.
Furthermore, the environmental data originally recorded by the autonomous vehicle 5 are presented opaquely, and the environmental data not originally generated by the autonomous vehicle 5 are presented semi-transparently. The driver can thus clearly recognize which data have been added.
If the autonomous vehicle 5 leaves the service area L, it neither sends data to the crowdsourcing module 3 nor receives data from it. The same applies to the other traffic participants 7a, 7b, 7c, 7d.
Furthermore, the 3D environment model may contain a 3D view (3D reconstruction) of the environment of the autonomous vehicle 5, and may additionally provide, for example, an overhead view (bird's eye view) of the environment of the autonomous vehicle 5.
By means of the driver assistance system 6 according to the invention, the relevant environment of the autonomous vehicle 5 is presented in real time as a "live 3D view" (Live 3D View). The parts of the environment that cannot be acquired by the sensor system of the autonomous vehicle 5 are covered by the sensor systems of the other traffic participants 7a, 7b, 7c, 7d in the locally limited service area L. Intersections that are dangerous and have poor visibility can thus be made observable. By moving computationally intensive processing steps from the autonomous vehicle 5 into the crowdsourcing module 3, the 3D environment model can be processed in real time and provided in the autonomous vehicle 5 in real time.
The provision of vehicle-specific crowdsourcing data, and hence of a real-time 3D environment model, is achieved with low latency by the crowdsourcing module 3.
Fig. 2 shows a method in which the vehicle-specific crowdsourcing data take the form of reconstructed 3D images or reconstructed 3D video.
The autonomous vehicle 5 (fig. 1) is located in a service area L (fig. 1) of the crowdsourcing module 3 (fig. 1).
The autonomous vehicle 5 (fig. 1) records an environmental view in the form of environmental data; the environmental view can be formed, for example, as a video. Furthermore, the autonomous vehicle 5 (fig. 1) determines its own position (self-localization). This own position is transmitted together with the video via the communication unit 8 (fig. 1) to the crowdsourcing module 3 (fig. 1).
Furthermore, further additional views Z (fig. 1) of the environment are generated (e.g. as video) by the other traffic participants 7a, 7b, 7c, 7d (fig. 1), which are also in the service area L of the crowdsourcing module 3 (fig. 1), and are transmitted to the crowdsourcing module 3 (fig. 1), preferably together with the respective own position (data).
In the crowdsourcing module 3 (fig. 1), the transmitted videos are analyzed for objects with the aid of the own positions (object recognition). The recognized objects are then merged (object merging), and a customized 3D environment model is created from them. Only objects lying, for example, in the direction of travel or line of sight of the autonomous vehicle 5 (fig. 1) are considered here. Objects that are registered by the other traffic participants 7a, 7b, 7c, 7d (fig. 1) but lie, for example, opposite to or outside the line of sight of the autonomous vehicle 5 (fig. 1) are not taken into account in the vehicle-specific 3D environment model.
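The merging with the described line-of-sight filter might look as follows. The dot-product criterion and the 90° threshold are one plausible reading of "pointing in the direction of travel", not a method fixed by the patent:

```python
import numpy as np

def merge_for_vehicle(ego_pose, detections_by_participant, max_angle_deg=90.0):
    """Merge objects recognized by all traffic participants, keeping only
    those lying in the ego vehicle's direction of travel / line of sight."""
    ex, ey, heading = ego_pose
    sight = np.array([np.cos(heading), np.sin(heading)])
    cos_limit = np.cos(np.radians(max_angle_deg))
    merged = []
    for detections in detections_by_participant.values():
        for obj_xy in detections:                  # already in the common frame
            offset = np.asarray(obj_xy) - np.array([ex, ey])
            distance = np.linalg.norm(offset)
            if distance > 0 and (offset @ sight) / distance >= cos_limit:
                merged.append(tuple(obj_xy))
    return merged  # basis for the vehicle-specific 3D environment model
```

Deduplication of objects seen by several traffic participants is omitted for brevity.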
The 3D environment model is streamed (e.g. as video) to the autonomous vehicle 5 (fig. 1) in the form of vehicle-specific crowdsourcing data and is presented there on a display unit.
Here, the environmental data originally recorded by the autonomous vehicle 5 (fig. 1) can be presented opaquely, and environmental data not originally generated by the autonomous vehicle 5 (fig. 1) can be presented semi-transparently.
An occlusion of the field of view arising in the autonomous vehicle 5 (fig. 1) with respect to an individual traffic participant, for example traffic participant 7a (fig. 1), is in this case supplemented or compensated by the sensor system of traffic participant 7b (fig. 1) and the additional view Z recorded by it.
Traffic participant 7a (fig. 1) can therefore appear semi-transparently in the transmitted 3D video.
The 3D video may also contain an overhead view of the intersection 1 (fig. 1) or objects identified therein.
Fig. 3 illustrates a method in which the vehicle-specific crowdsourcing data take the form of a scene description.
The autonomous vehicle 5 (fig. 1) is located in the service area L of the crowdsourcing module 3 (fig. 1).
The autonomous vehicle 5 (fig. 1) records an environmental view in the form of environmental data; here, the environmental view takes the form of a video, environmental image or scene description. Furthermore, the autonomous vehicle 5 determines its own position (self-localization). This own position is transmitted together with the video/environmental image/scene description via the communication unit 8 (fig. 1) to the crowdsourcing module 3 (fig. 1).
Furthermore, further additional views Z (fig. 1) of the environment are generated (as scene descriptions) by the other traffic participants 7a, 7b, 7c, 7d (fig. 1), which are also located in the service area L (fig. 1) of the crowdsourcing module 3 (fig. 1), and are transmitted to the crowdsourcing module 3 (fig. 1), preferably together with the respective own position (data). The scene description preferably contains object data such as the position and trajectory of the recognized objects (traffic participants, buildings, traffic signs, etc.).
In the crowdsourcing module 3 (fig. 1), the transmitted video/environmental image/scene description and the own position (data) of the autonomous vehicle 5 (fig. 1) are enriched with the recognized objects (object merging), and a vehicle-specific scene description is built as vehicle-specific crowdsourcing data. Only objects lying, for example, in the direction of travel or line of sight of the autonomous vehicle 5 (fig. 1) are considered here. Objects that are recorded by the other traffic participants 7a, 7b, 7c, 7d but lie, for example, opposite to or outside the line of sight of the autonomous vehicle 5 (fig. 1) are not taken into account in the vehicle-specific crowdsourcing data.
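A scene description carrying such object data, and the enrichment into a vehicle-specific scene description (object merging), could be sketched as follows (all field names are assumptions):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class SceneObject:
    object_class: str                            # e.g. "vehicle", "building", "traffic_sign"
    position: Tuple[float, float]                # in the common frame
    trajectory: Tuple[Tuple[float, float], ...]  # recent/predicted positions
    source_id: str                               # participant that recognized the object

def enrich_scene(ego_scene: List[SceneObject],
                 other_scenes: List[List[SceneObject]]) -> List[SceneObject]:
    """Enrich the ego vehicle's scene description with objects recognized
    by the other traffic participants (object merging)."""
    known = {(o.object_class, o.position) for o in ego_scene}
    enriched = list(ego_scene)
    for scene in other_scenes:
        for obj in scene:
            if (obj.object_class, obj.position) not in known:
                known.add((obj.object_class, obj.position))
                enriched.append(obj)
    return enriched  # vehicle-specific scene description
```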
The 3D environment model or scene description is represented by the crowdsourcing data.
The scene description is transmitted to the autonomous vehicle 5 (fig. 1) in the form of vehicle-specific crowdsourcing data and is presented there as a 3D environment model on a display unit.
Here, the environmental data originally recorded by the autonomous vehicle 5 (fig. 1) can be presented opaquely, and environmental data not originally generated by the autonomous vehicle 5 (fig. 1) can be presented semi-transparently.
An occlusion of the field of view arising in the autonomous vehicle 5 (fig. 1) with respect to an individual traffic participant, for example traffic participant 7a (fig. 1), is in this case supplemented or compensated by the sensor system of traffic participant 7b (fig. 1) and the additional views recorded by it.
Thus, the traffic participant 7a (fig. 1) can be represented semi-transparently in the transmitted 3D environment model.
Traffic participants 7a, 7b, 7c, 7d (fig. 1) who leave the local service area L (fig. 1) neither send data to the crowdsourcing module 3 (fig. 1) nor receive data from it.
The 3D environmental model may also contain an overhead view of the intersection 1 (fig. 1) or objects identified therein.
Claims (14)
1. A driver assistance system (6) in an autonomous vehicle (5), comprising a sensor system designed to record an environmental view in the form of environmental data,
characterized in that
a communication unit (8) is provided which is designed to send the environmental data to a crowdsourcing module (3), wherein the environmental data comprise at least the position of the autonomous vehicle (5), wherein the communication unit (8) is further designed to receive vehicle-specific crowdsourcing data from the crowdsourcing module (3), and wherein the vehicle-specific crowdsourcing data, together with the environmental data of the autonomous vehicle (5), form an extended or supplemented 3D environment model of the environmental view of the autonomous vehicle (5).
2. Driver assistance system (6) according to claim 1,
characterized in that the communication unit (8) transmits the environmental data in real time and receives the vehicle-specific crowdsourcing data in real time.
3. Driver assistance system (6) according to any one of the preceding claims,
characterized in that the communication unit (8) is designed to receive the vehicle-specific crowdsourcing data as a vehicle-specific video stream or by broadcast.
4. Driver assistance system (6) according to any one of the preceding claims,
characterized in that the vehicle-specific crowdsourcing data take the form of a scene description, wherein the driver assistance system (6) is designed to generate the 3D environment model by means of the received scene description.
5. Driver assistance system (6) according to any one of the preceding claims,
characterized in that the driver assistance system (6) has a display unit for presenting the 3D environment model, wherein the driver assistance system (6) is designed to present opaquely the environmental data originally recorded by the autonomous vehicle (5) and to present semi-transparently the environmental data not originally generated by the autonomous vehicle (5).
6. A crowdsourcing module (3) for generating a 3D environment model based on different perspectives,
characterized in that the crowdsourcing module (3) is designed to receive environmental data presenting an environmental view from traffic participants in the environment, wherein the traffic participants are at different positions in a predefined service area (L), the service area (L) being an area around a predefined position;
wherein the crowdsourcing module (3) is designed to generate, based on the environmental data, a 3D environment model or a scene description, or a 3D environment model or a scene description customized for the respective traffic participant (7a, 7b, 7c, 7d);
wherein the crowdsourcing module (3) is designed to send the 3D environment model or the scene description, or the 3D environment model or scene description customized for the respective traffic participant (7a, 7b, 7c, 7d), as vehicle-specific crowdsourcing data to the respective traffic participant (7a, 7b, 7c, 7d) in the service area (L).
7. Crowdsourcing module (3) according to claim 6,
characterized in that the crowdsourcing module (3) is designed to be provided in a mobile edge cloud.
8. The crowdsourcing module (3) according to claim 6 or 7, characterized in that the crowdsourcing module (3) is designed to exclude from the transfer of 3D environment models or scene descriptions, or of 3D environment models or scene descriptions customized for the respective traffic participants, any traffic participants who are not, or are no longer, in the service area (L).
9. A method for operating a driver assistance system (6) in an autonomous vehicle (5), the method having the steps of:
- recording an environmental view in the form of environmental data by means of a sensor system of the driver assistance system (6);
- sending the environmental data to a crowdsourcing module (3), wherein the environmental data comprise at least the position of the autonomous vehicle (5);
- receiving, from the crowdsourcing module (3), vehicle-specific crowdsourcing data, wherein the vehicle-specific crowdsourcing data, together with the environmental data of the autonomous vehicle (5), form an extended or supplemented 3D environment model of the environmental view of the autonomous vehicle (5).
10. The method according to claim 9, characterized in that the environmental data are transmitted in real time and the crowdsourcing data are received in real time.
11. The method according to claim 9 or 10, characterized in that, upon receiving a scene description, the driver assistance system (6) generates the 3D environment model based on the received scene description in order to refine and/or extend the environmental view generated from the environmental data.
12. The method according to any one of claims 9 to 11, characterized in that the environmental data originally recorded by the autonomous vehicle (5) are presented opaquely and the environmental data not originally generated by the autonomous vehicle (5) are presented semi-transparently.
13. The method according to any one of claims 9 to 12, characterized in that the vehicle-specific crowdsourcing data are received by the driver assistance system (6) only within a service area (L) predefined by the crowdsourcing module (3).
14. A computer program comprising instructions which cause a driver assistance system (6) according to any one of claims 1 to 5 to carry out the method according to any one of claims 9 to 13.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102019219171.1A DE102019219171A1 (en) | 2019-12-09 | 2019-12-09 | Driver assistance system, crowdsourcing module, procedure and computer program |
DE102019219171.1 | 2019-12-09 | ||
PCT/EP2020/084797 WO2021115980A1 (en) | 2019-12-09 | 2020-12-07 | Driver assistance system, crowdsourcing module, method and computer program |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114830616A true CN114830616A (en) | 2022-07-29 |
Family
ID=73834467
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202080083019.0A Pending CN114830616A (en) | 2019-12-09 | 2020-12-07 | Driver assistance system, crowdsourcing module, method and computer program |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN114830616A (en) |
DE (1) | DE102019219171A1 (en) |
WO (1) | WO2021115980A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA3206042A1 (en) | 2021-01-22 | 2022-07-28 | Sarah Nancy STAAB | Visual data management system and method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106781691A (en) * | 2016-11-30 | 2017-05-31 | 北京汽车集团有限公司 | Drive pre-warning system and method |
CN108569295A (en) * | 2017-03-08 | 2018-09-25 | 奥迪股份公司 | Method and system for environment measuring |
WO2019000417A1 (en) * | 2017-06-30 | 2019-01-03 | SZ DJI Technology Co., Ltd. | Map generation systems and methods |
CN109643367A (en) * | 2016-07-21 | 2019-04-16 | 御眼视觉技术有限公司 | Crowdsourcing and the sparse map of distribution and lane measurement for autonomous vehicle navigation |
US20190205659A1 (en) * | 2018-01-04 | 2019-07-04 | Motionloft, Inc. | Event monitoring with object detection systems |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102010040803A1 (en) * | 2010-09-15 | 2012-03-15 | Continental Teves Ag & Co. Ohg | Visual driver information and warning system for a driver of a motor vehicle |
EP2817785B1 (en) * | 2012-02-23 | 2019-05-15 | Charles D. Huston | System and method for creating an environment and for sharing a location based experience in an environment |
DE102014205511A1 (en) * | 2014-03-25 | 2015-10-01 | Conti Temic Microelectronic Gmbh | METHOD AND DEVICE FOR DISPLAYING OBJECTS ON A VEHICLE INDICATOR |
DE102016208239A1 (en) * | 2016-05-12 | 2017-11-16 | Continental Automotive Gmbh | Method for determining dynamic 3D map data, method for determining analysis data, device, computer program and computer program product |
DE102016223830A1 (en) * | 2016-11-30 | 2018-05-30 | Robert Bosch Gmbh | Method for operating an automated vehicle |
US10098014B1 (en) * | 2018-01-31 | 2018-10-09 | Toyota Jidosha Kabushiki Kaisha | Beam alignment using shared driving intention for vehicular mmWave communication |
2019
- 2019-12-09: DE application DE102019219171.1A filed, published as DE102019219171A1 (not active, ceased)
2020
- 2020-12-07: WO application PCT/EP2020/084797 filed, published as WO2021115980A1 (active, application filing)
- 2020-12-07: CN application CN202080083019.0A filed, published as CN114830616A (active, pending)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109643367A (en) * | 2016-07-21 | 2019-04-16 | 御眼视觉技术有限公司 | Crowdsourcing and the sparse map of distribution and lane measurement for autonomous vehicle navigation |
CN106781691A (en) * | 2016-11-30 | 2017-05-31 | 北京汽车集团有限公司 | Drive pre-warning system and method |
CN108569295A (en) * | 2017-03-08 | 2018-09-25 | 奥迪股份公司 | Method and system for environment measuring |
WO2019000417A1 (en) * | 2017-06-30 | 2019-01-03 | SZ DJI Technology Co., Ltd. | Map generation systems and methods |
CN110799804A (en) * | 2017-06-30 | 2020-02-14 | 深圳市大疆创新科技有限公司 | Map generation system and method |
US20190205659A1 (en) * | 2018-01-04 | 2019-07-04 | Motionloft, Inc. | Event monitoring with object detection systems |
Also Published As
Publication number | Publication date |
---|---|
DE102019219171A1 (en) | 2021-05-27 |
WO2021115980A1 (en) | 2021-06-17 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
2023-03-02 | TA01 | Transfer of patent application right | Address after: Hannover. Applicant after: Continental Automotive Technology Co., Ltd. Address before: Hannover. Applicant before: CONTINENTAL AUTOMOTIVE GmbH |