CN112862980B - Car arhud system based on Carla simulation platform
- Publication number
- CN112862980B (application CN202110165281.6A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
Abstract
The invention discloses a vehicle arhud (augmented-reality head-up display) system based on the Carla simulation platform. The system comprises a vehicle surrounding-information enhanced display system and a vehicle navigation auxiliary information system. The surrounding-information enhanced display system creates carrier perception information from scene data through Carla's Unreal API and functions; the navigation auxiliary information system creates navigation prompt information through Carla's Unreal API and map service data; the finally created carrier perception information and navigation prompt information are displayed in arhud form. This solves the problem that existing Carla simulation platforms generally have no arhud system.
Description
Technical Field
The invention relates to the field of Unreal development, and in particular to a vehicle arhud system based on the Carla simulation platform.
Background
With the development of autonomous driving, offline testing and simulation have become low-cost, low-risk, and efficient methods for verifying unmanned-vehicle performance across a variety of traffic scenarios. The most widely used platforms are TORCS, PreScan, CarSim, Carla, and the like.
However, existing simulation platforms generally do not provide an arhud function. Here, arhud refers to generating a UI through which a user interacts with the vehicle via an interface controller: the user can learn the vehicle's state, behavior, and decisions through the interface, and can also influence and control the vehicle.
Disclosure of Invention
The invention aims to provide a vehicle arhud system based on the Carla simulation platform, solving the problem that existing Carla simulation platforms generally have no arhud system.
In order to solve the technical problems, the invention adopts the following technical scheme:
a car arhud system based on Carla simulation platform comprises a car peripheral information enhancement display system and a car navigation auxiliary information system, and is characterized in that the car peripheral information enhancement display system creates carrier perception information by combining scene data through Unreal API and function of Carla, the car navigation auxiliary information system creates navigation prompt information through Unreal API and map service data of Carla, and the created carrier perception information and navigation prompt information are displayed in an arhud mode.
The Carla-based arhud system can enhance the display of information around the vehicle, including surrounding vehicles and pedestrians, and can also cooperate with navigation data to provide driving-assistance information.
As a further preference of the invention, the vehicle surrounding-information enhanced display system includes the following functions: adding collision detection volumes of various levels to the simulated vehicle; adding recognition effects to other vehicles and pedestrians; displaying the recognition effect when a detection volume overlaps another vehicle or pedestrian; and hiding the recognition effect when the detection volume leaves it.
More specifically, adding collision detection volumes of various levels to the simulated vehicle means creating them with Unreal Engine Blueprints or C++ code in the Carla lower layer; adding a recognition effect to other vehicles or pedestrians means attaching an art material with the recognition effect to them in the simulation environment; displaying the recognition effect means showing the effect on the collided object, namely the other vehicle or pedestrian, in the collision-begin event of the collision volume; and hiding the recognition effect means turning it off for that object in the collision-end event.
The algorithm for creating collision detection volumes of various levels with Unreal Engine Blueprints or C++ code in the Carla lower layer is as follows:
HintBounds = CreateDefaultSubobject<UBoxComponent>(TEXT("HintBounds"));
WarningBounds = CreateDefaultSubobject<UBoxComponent>(TEXT("WarningBounds"));
The algorithm for attaching art materials with recognition effects to other vehicles or pedestrians in the simulation environment is as follows:
FootIndicator = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("FootIndicator"));
WarningIndicator = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("WarningIndicator"));
In the collision-begin (overlap-begin) event of the collision volume, the recognition effect of the collided object, i.e. the other vehicle or pedestrian, is shown:
OnComponentBeginOverlap(AActor* OtherActor) {
OtherActor->ShowFootIndicator(true);
}
In the collision-end (overlap-end) event of the collision volume, the recognition effect of the collided object is hidden:
OnComponentEndOverlap(AActor* OtherActor) {
OtherActor->ShowFootIndicator(false);
}
As a further preference of the invention, the vehicle navigation auxiliary information system includes the following functions: creating a navigation prompt-line effect for the simulated vehicle; dynamically acquiring navigation information; applying a coordinate transformation to the prompt line at navigation key nodes according to the navigation direction; and displaying the prompt-line effect.
Specifically, the navigation prompt-line effect is created with Carla's Unreal Spline curve functionality; navigation information is acquired dynamically by sending the current position to the navigation service and receiving real-time navigation information back; at navigation key nodes the prompt line is transformed according to the navigation direction angle, the transformation comprising translation and rotation; and the transformed list of prompt lines is displayed in the scene.
The algorithm for creating the prompt line with Carla's Unreal Spline curve functionality is as follows:
SplineComponentLeft = CreateDefaultSubobject<USplineComponent>("SplineLeft");
SplineComponentRight = CreateDefaultSubobject<USplineComponent>("SplineRight");
SetRoad(parent, degree);
MakeSpline(SplineComponentLeft, mLeftPoints, SplineMeshArrayLeft);
MakeSpline(SplineComponentRight, mRightPoints, SplineMeshArrayRight);
The algorithm for sending the current position to the navigation service and acquiring real-time navigation information is as follows (C#):
bool GetRoutePath(string points, bool primary)
{
bool isSend = true;
GetRoutePathRequest request = new GetRoutePathRequest(points, primary);
request.StartServiceDelegate();
return isSend;
}
void OnSuccessString(string response)
{
OnlineMapsOpenRouteServiceDirectionResultOSRM result = OnlineMapsOpenRouteService.GetDirectionResultsOSRM(response);
CarData c = ModelManager.Instance.Get_carData_X();
List<OnlineMapsVector2d> pointsList = GetRoadPoints(result);
// TODO: Gaode road-conditions query; for each query segment keep only the points returned by OSRM for that segment
c.routePathList.Insert(0, pointsList);
// fill the road-name -> pointList dictionary, used for road-condition analysis
c.routePathDict = ParseRoadData(result);
// query Gaode road conditions
//string[] roadNames = CoordinateConvertTool.roadNamesGaode;
var enumerator = CoordinateConvertTool.roadNameMapping.GetEnumerator();
//for (int i = 0; i < roadNames.Length; i++)
while(enumerator.MoveNext())
{
NetworkHelper.Instance.GetGaodeApi(enumerator.Current.Key);
}
NotifyModelControl<CarData>(c);
}
}
At the navigation key nodes, the prompt line is transformed according to the navigation direction angle; the algorithm, comprising translation and rotation, is as follows:
// apply the angle transformation to the prompt-line point list, based on the navigation direction angle
void ASimpleSplineActor::SetRoad(AActor* parent, float degree)
{
// translation matrix, x-axis direction, i.e. offset slightly forward along the vehicle head
// the left lane line is offset by half the lane width
FTransform t = FTransform();
if (IsValid(parent)) {
t = parent->GetActorTransform();
}
FMatrix matrix = t.ToMatrixWithScale();
// use the root node's transform as the offset matrix, additionally adding the vehicle-head offset and half the lane width
FVector currentPositionLeft = FVector(mCarHeadOffset, - mRoadWidth / 2, 0);
FVector currentPositionRight = FVector(mCarHeadOffset, mRoadWidth / 2, 0);
FMatrix rotateTransferYMatrix = MakeRotationMatrix(degree);
FVector4 m1 = matrix.TransformVector(currentPositionLeft);
FVector4 m2 = matrix.TransformVector(currentPositionRight);
FMatrix f1 = matrix.ConcatTranslation(m1);
FMatrix f2 = matrix.ConcatTranslation(m2);
for (int i = 0; i < mPoints.Num(); i++)
{
FVector basePoint = mPoints[i];
// rotate first, then transform by the parent matrix
FVector leftPoint = rotateTransferYMatrix.TransformFVector4(basePoint);
leftPoint = f1.TransformFVector4(leftPoint);
leftPoint.Z = leftPoint.Z + mHeight;
FVector rightPoint = rotateTransferYMatrix.TransformFVector4(basePoint);
rightPoint = f2.TransformFVector4(rightPoint);
rightPoint.Z = rightPoint.Z + mHeight;
mLeftPoints.Add(leftPoint);
mRightPoints.Add(rightPoint);
}
}
The transformed prompt-line list is displayed in the scene by updating the spline:
VehicleActor->UpdateSpline(LeftPoints, RightPoints);
As a further preference of the invention, the finally created carrier perception information and navigation prompt information are displayed in arhud form; specifically, after extension, the application picture can be projected onto the front windshield by an on-board projection device, combined with vehicle and surrounding information acquired by real sensors.
Compared with the prior art, the invention can achieve at least one of the following beneficial effects:
1. Using Carla's Unreal API and functions, scene data and map-service data are combined to create rich carrier perception information and navigation prompt information displayed in arhud form. Within an overall unmanned-driving solution, this arhud information enriches the driving information presented to the user and is an indispensable part of the user experience.
2. An arhud method and system are provided for the Carla simulation environment. With extension, real sensors can supply vehicle and surrounding information, and a matching projection device can realize the arhud effect on the front windshield of a real vehicle, completing the overall unmanned-driving solution.
Drawings
FIG. 1 shows the arhud display flow implemented by the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Specific example 1:
Fig. 1 shows a vehicle arhud system based on the Carla simulation platform. It comprises a vehicle surrounding-information enhanced display system and a vehicle navigation auxiliary information system, and is characterized in that the surrounding-information enhanced display system creates carrier perception information from scene data through Carla's Unreal API and functions, the navigation auxiliary information system creates navigation prompt information through Carla's Unreal API and map service data, and the finally created carrier perception information and navigation prompt information are displayed in arhud form.
The Carla-based arhud system can enhance the display of information around the vehicle, including surrounding vehicles and pedestrians, and can also cooperate with navigation data to provide driving-assistance information.
Specific example 2:
This embodiment, building on embodiment 1, further describes the vehicle surrounding-information enhanced display system, which includes the following functions: adding collision detection volumes of various levels to the simulated vehicle; adding recognition effects to other vehicles and pedestrians; displaying the recognition effect when a detection volume overlaps another vehicle or pedestrian; and hiding the recognition effect when the detection volume leaves it.
Specific example 3:
This embodiment, building on embodiment 2, is further specified as follows: the collision detection volumes of various levels are created with Unreal Engine Blueprints or C++ code in the Carla lower layer; art materials with recognition effects are attached to other vehicles or pedestrians in the simulation environment; when a detection volume overlaps another vehicle or pedestrian, the recognition effect of the collided object, namely the other vehicle or pedestrian, is shown in the collision-begin event of the collision volume; and when the detection volume leaves, the recognition effect of that object is hidden in the collision-end event.
The algorithm for creating collision detection volumes of various levels with Unreal Engine Blueprints or C++ code in the Carla lower layer is as follows:
HintBounds = CreateDefaultSubobject<UBoxComponent>(TEXT("HintBounds"));
WarningBounds = CreateDefaultSubobject<UBoxComponent>(TEXT("WarningBounds"));
The algorithm for attaching art materials with recognition effects to other vehicles or pedestrians in the simulation environment is as follows:
FootIndicator = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("FootIndicator"));
WarningIndicator = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("WarningIndicator"));
In the collision-begin (overlap-begin) event of the collision volume, the recognition effect of the collided object, i.e. the other vehicle or pedestrian, is shown:
OnComponentBeginOverlap(AActor* OtherActor) {
OtherActor->ShowFootIndicator(true);
}
In the collision-end (overlap-end) event of the collision volume, the recognition effect of the collided object is hidden:
OnComponentEndOverlap(AActor* OtherActor) {
OtherActor->ShowFootIndicator(false);
}
specific example 4:
This embodiment, building on embodiment 1, further describes the vehicle navigation auxiliary information system, which includes the following functions: creating a navigation prompt-line effect for the simulated vehicle; dynamically acquiring navigation information; applying a coordinate transformation to the prompt line at navigation key nodes according to the navigation direction; and displaying the prompt-line effect.
Specific example 5:
This embodiment, building on embodiment 4, is further specified as follows: the navigation prompt-line effect is created with Carla's Unreal Spline curve functionality; navigation information is acquired dynamically by sending the current position to the navigation service and receiving real-time navigation information back; at navigation key nodes the prompt line is transformed according to the navigation direction angle, the transformation comprising translation and rotation; and the transformed list of prompt lines is displayed in the scene.
The algorithm for creating the prompt line with Carla's Unreal Spline curve functionality is as follows:
SplineComponentLeft = CreateDefaultSubobject<USplineComponent>("SplineLeft");
SplineComponentRight = CreateDefaultSubobject<USplineComponent>("SplineRight");
SetRoad(parent, degree);
MakeSpline(SplineComponentLeft, mLeftPoints, SplineMeshArrayLeft);
MakeSpline(SplineComponentRight, mRightPoints, SplineMeshArrayRight);
The algorithm for sending the current position to the navigation service and acquiring real-time navigation information is as follows (C#):
bool GetRoutePath(string points, bool primary)
{
bool isSend = true;
GetRoutePathRequest request = new GetRoutePathRequest(points, primary);
request.StartServiceDelegate();
return isSend;
}
void OnSuccessString(string response)
{
OnlineMapsOpenRouteServiceDirectionResultOSRM result = OnlineMapsOpenRouteService.GetDirectionResultsOSRM(response);
CarData c = ModelManager.Instance.Get_carData_X();
List<OnlineMapsVector2d> pointsList = GetRoadPoints(result);
// TODO: Gaode road-conditions query; for each query segment keep only the points returned by OSRM for that segment
c.routePathList.Insert(0, pointsList);
// fill the road-name -> pointList dictionary, used for road-condition analysis
c.routePathDict = ParseRoadData(result);
// query Gaode road conditions
//string[] roadNames = CoordinateConvertTool.roadNamesGaode;
var enumerator = CoordinateConvertTool.roadNameMapping.GetEnumerator();
//for (int i = 0; i < roadNames.Length; i++)
while(enumerator.MoveNext())
{
NetworkHelper.Instance.GetGaodeApi(enumerator.Current.Key);
}
NotifyModelControl<CarData>(c);
}
}
At the navigation key nodes, the prompt line is transformed according to the navigation direction angle; the algorithm, comprising translation and rotation, is as follows:
// apply the angle transformation to the prompt-line point list, based on the navigation direction angle
void ASimpleSplineActor::SetRoad(AActor* parent, float degree)
{
// translation matrix, x-axis direction, i.e. offset slightly forward along the vehicle head
// the left lane line is offset by half the lane width
FTransform t = FTransform();
if (IsValid(parent)) {
t = parent->GetActorTransform();
}
FMatrix matrix = t.ToMatrixWithScale();
// use the root node's transform as the offset matrix, additionally adding the vehicle-head offset and half the lane width
FVector currentPositionLeft = FVector(mCarHeadOffset, - mRoadWidth / 2, 0);
FVector currentPositionRight = FVector(mCarHeadOffset, mRoadWidth / 2, 0);
FMatrix rotateTransferYMatrix = MakeRotationMatrix(degree);
FVector4 m1 = matrix.TransformVector(currentPositionLeft);
FVector4 m2 = matrix.TransformVector(currentPositionRight);
FMatrix f1 = matrix.ConcatTranslation(m1);
FMatrix f2 = matrix.ConcatTranslation(m2);
for (int i = 0; i < mPoints.Num(); i++)
{
FVector basePoint = mPoints[i];
// rotate first, then transform by the parent matrix
FVector leftPoint = rotateTransferYMatrix.TransformFVector4(basePoint);
leftPoint = f1.TransformFVector4(leftPoint);
leftPoint.Z = leftPoint.Z + mHeight;
FVector rightPoint = rotateTransferYMatrix.TransformFVector4(basePoint);
rightPoint = f2.TransformFVector4(rightPoint);
rightPoint.Z = rightPoint.Z + mHeight;
mLeftPoints.Add(leftPoint);
mRightPoints.Add(rightPoint);
}
}
The transformed prompt-line list is displayed in the scene by updating the spline:
VehicleActor->UpdateSpline(LeftPoints, RightPoints);
Specific example 6:
This embodiment, building on embodiment 1, is further specified as follows: the finally created carrier perception information and navigation prompt information are displayed in arhud form; specifically, after extension, the application picture can be projected onto the front windshield by an on-board projection device, combined with vehicle and surrounding information acquired by real sensors.
Although the invention has been described herein with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More specifically, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, other uses will also be apparent to those skilled in the art.
Claims (2)
1. A vehicle arhud system based on the Carla simulation platform, comprising a vehicle surrounding-information enhanced display system and a vehicle navigation auxiliary information system, characterized in that the surrounding-information enhanced display system creates carrier perception information from scene data through Carla's Unreal API and functions, the navigation auxiliary information system creates navigation prompt information through Carla's Unreal API and map service data, and the created carrier perception information and navigation prompt information are displayed in arhud form;
the vehicle surrounding information enhancement display system comprises the following functions: the method comprises the steps of adding collision detection bodies of various levels to a simulation vehicle, adding identification effects to other vehicles and pedestrians, displaying the identification effects when the detection bodies cover other vehicles or pedestrians, and closing the identification effects when the detection bodies leave other vehicles or pedestrians;
the method comprises the steps that various levels of collision detection bodies are added to a simulation vehicle, specifically, unreal engine blueprints or c + + codes on the Carla bottom layer are used for creating various levels of collision detection bodies, the recognition effect added to other vehicles or pedestrians in a simulation environment is specifically art materials with recognition effects added to other vehicles or pedestrians in the simulation environment, when the detection bodies cover the other vehicles or pedestrians, the recognition effect is displayed, specifically, the recognition effect of a collision object, namely, the other vehicles or pedestrians is displayed in the collision occurrence event of the collision body, when the detection bodies leave the other vehicles or pedestrians, the recognition effect is closed, specifically, in the collision end event of the collision body, the recognition effect of the collision object, namely, the other vehicles or pedestrians is closed;
the vehicle navigation auxiliary information system comprises the following functions: creating a navigation prompt-line effect for the simulated vehicle; dynamically acquiring navigation information; and applying a coordinate transformation to the prompt line at navigation key nodes according to the navigation direction;
the method specifically comprises the steps of creating a navigation prompt line for a simulated vehicle by utilizing an unknown Spline curve function of Carla, dynamically acquiring navigation information, specifically sending current position information to a navigation service to acquire real-time navigation information, performing coordinate transformation on the prompt line at a navigation key node according to a navigation direction, specifically performing coordinate transformation on the prompt line at the navigation key node according to a navigation direction angle, including translation and rotation, and displaying the prompt line in a scene.
2. The vehicle arhud system based on the Carla simulation platform according to claim 1, characterized in that: the finally created carrier perception information and navigation prompt information are displayed in arhud form; specifically, after extension, the application picture can be projected onto the front windshield by an on-board projection device, combined with vehicle and surrounding information acquired by real sensors.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110165281.6A CN112862980B (en) | 2021-02-06 | 2021-02-06 | Car arhud system based on Carla simulation platform |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112862980A CN112862980A (en) | 2021-05-28 |
CN112862980B true CN112862980B (en) | 2022-11-18 |
Family
ID=75988778
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110165281.6A Active CN112862980B (en) | 2021-02-06 | 2021-02-06 | Car arhud system based on Carla simulation platform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112862980B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114067062A (en) * | 2022-01-17 | 2022-02-18 | 深圳慧拓无限科技有限公司 | Method and system for simulating real driving scene, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2894509A1 (en) * | 2014-01-13 | 2015-07-15 | Robert Bosch Gmbh | Field of vision display for a vehicle for displaying image information in two independent images to a viewer |
CN107554425A (en) * | 2017-08-23 | 2018-01-09 | 江苏泽景汽车电子股份有限公司 | A kind of vehicle-mounted head-up display AR HUD of augmented reality |
CN111784142A (en) * | 2020-06-24 | 2020-10-16 | 吉林大学 | Task complexity quantification model of advanced driving assistance system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019053695A1 (en) * | 2017-09-18 | 2019-03-21 | Telefonaktiebolaget L M Ericsson (Publ) | System and method for providing precise driving recommendations based on network-assisted scanning of a surrounding environment |
Non-Patent Citations (2)
Title |
---|
Demonstration of a low-cost hyper-realistic testbed for designing future onboard experiences; Pietro Lungaro et al.; Interactive Demos; 2018-09-25; pp. 235-238 *
Research on test methods for connected automated driving based on digital twins; Ge Yuming et al.; ZTE Technology Journal; 2020-02-21 (No. 01); pp. 29-32 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |