CN114820971A - Graphical expression method for describing complex driving environment information - Google Patents


Info

Publication number
CN114820971A
CN114820971A (application number CN202210479395.2A)
Authority
CN
China
Prior art keywords
vehicle
information
road
environment information
speed
Prior art date
Legal status
Granted
Application number
CN202210479395.2A
Other languages
Chinese (zh)
Other versions
CN114820971B (en)
Inventor
詹军
叶昊
王战古
仲昭辉
陈浩源
杨凯
曹子坤
江勐
Current Assignee
Jilin University
Original Assignee
Jilin University
Priority date
Filing date
Publication date
Application filed by Jilin University
Priority to CN202210479395.2A
Publication of CN114820971A
Application granted
Publication of CN114820971B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05: Geographic models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Architecture (AREA)
  • Remote Sensing (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a graphical expression method for describing complex driving environment information. The environment information is divided into layers, and each environment information layer is expressed graphically. After all layers have been expressed, the graphics of each layer are stored separately and stacked in top-down covering order as required; the stacked graphics then undergo the corresponding coordinate system transformation according to the driving speed and direction of the body vehicle, displaying graphically expressed comprehensive environment information that is centered on the body vehicle and changes over time. By describing environment perception information from different sensors with a single graphical expression method after comprehensive perception, the invention solves the problem of uniformly expressing the driving environment and describes the surrounding driving environment of a vehicle during driving more completely and effectively.

Description

Graphical expression method for describing complex driving environment information
Technical Field
The invention relates to the expression of complex driving environment information for automated driving vehicles, and in particular to a graphical expression method for describing a vehicle's driving environment information.
Background
Autonomous driving is an important development direction for the automotive industry, and decision planning is a core part of realizing autonomous driving tasks. In recent years, decision-planning algorithms based on artificial intelligence have gradually become the mainstream of research. Convolutional neural networks show strong advantages in extracting image features, so many AI-based decision-planning methods take images as model input and use a convolutional neural network to extract environmental information features.
To acquire complete information, current environment sensing mostly relies on multiple sensors, and the acquired environment information differs greatly in data structure and information type, which hinders cross-modal expression of the environment information. Meanwhile, the raw perception information contains a large amount of invalid information irrelevant to decision making, which degrades the efficiency of feature extraction and the robustness of the decision model. In practical application, such non-uniform expression of driving environment information also hinders the transfer of decision models across different operating scenarios and different kinds of perception information.
Current methods for representing environment information mainly include the traditional grid map, topological map and high-precision map, together with the "mid-to-mid" intermediate-level expression that has appeared in recent years. The paper "Autonomous parking path coordination and optimization strategy based on topological map" uses a topological map to describe the topological information of traffic trajectory nodes in a parking area, supporting the development of an autonomous parking strategy. Radu Danescu et al., in "Modeling and Tracking the Driving Environment With a Particle-Based Occupancy Grid", use a grid map to express the position and motion state of surrounding obstacles and thereby predict future vehicle positions. Nemanja Djuric et al., in "Uncertainty-aware Short-term Motion Prediction of Traffic Actors for Autonomous Driving", use a rasterized high-precision map to express the surrounding driving environment, distinguishing the own vehicle from other vehicles with RGB colors and presenting the positions of the own vehicle at different times with color saturation, thus expressing dynamic information in a single picture. Mayank Bansal et al., in "ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst", describe the road map, traffic lights, speed limits, global path, surrounding vehicles and the positions of the own vehicle at different times with different geometric elements; this expression replaces raw sensor data for imitation learning of driving behavior.
Analysis of the existing methods reveals several gaps. First, the traditional environment-model expressions do not intuitively express traffic rule information with graphics; a grid map, for example, can only express the state of surrounding obstacles and cannot describe how traffic rules constrain the vehicle's driving path. In addition, the more recent intermediate-level expressions do not give the driving environment information a clear hierarchical structure, and the expressed information is incomplete: the bird's-eye-view form considers neither the undulation and gradient information of the road nor inferred information that influences the vehicle's driving behavior, such as the driving style of surrounding vehicles.
Disclosure of Invention
To solve the problems of the prior art, the invention provides a graphical expression method for describing complex driving environment information. It describes environment perception information from different sensors after comprehensive cognition, solving the problem of uniformly expressing the driving environment and describing the surrounding driving environment of a vehicle during driving more completely and effectively. The uniformly expressed driving environment information includes not only actual objects and logical information in the environment, such as the positions of surrounding vehicles and traffic rules, but also information obtained by inference. Expressing the environment information graphically is more intuitive and supports automated-driving decision planning based on artificial-intelligence image feature extraction.
The purpose of the invention is realized by the following technical scheme:
a graphical expression method for describing complex driving environment information comprises the following steps:
step 1, layering environment information, comprising:
1.1) road layer: contains the shape of the road center line, the road width, and the tangential and radial elevation information of the road surface in the vehicle driving environment;
1.2) traffic regulation layer: contains the road traffic authority and the prohibition and restriction information made explicit by traffic signs and traffic laws in the vehicle driving environment;
1.3) object layer: contains the position, size and motion state information of the static and moving objects that have physical entities in the vehicle driving environment;
1.4) weather layer: contains the weather environment information present in the vehicle driving environment;
1.5) reasoning information layer: contains information obtained by inference in the vehicle driving environment, such as the future position and attitude of a vehicle and its driving style;
step 2, respectively carrying out graphical expression on each environment information layer classified in the step 1:
2.1) carrying out environment information graphical expression on the road layer;
2.2) carrying out graphical expression on the traffic regulation layer;
2.3) carrying out graphical expression on the object layer;
2.4) carrying out environment information graphical expression on the weather layer;
2.5) carrying out graphical expression on the reasoning information layer;
and step 3, after all the environment information layers have been graphically expressed, storing the graphics of each layer separately, stacking the graphics of the layers in top-down covering order as required, applying the corresponding coordinate system transformation to the stacked graphics according to the driving speed and direction of the body vehicle, and displaying the graphically expressed comprehensive environment information, centered on the body vehicle and changing over time.
Further, the step 2.1) comprises: expressing the boundary of the road area with a solid line, and expressing the elevation information of the road with a YUV color model, in which the value of Y is reserved, U represents the radial angle of the road surface relative to the horizontal plane, and V represents the tangential angle of the road surface relative to the horizontal plane. The specific expression is as follows:
U=((angle_t+20)/40-0.5)*0.3 (1)
V=((angle_v+20)/40-0.5)*0.3 (2)
in the formula, U - the blue chrominance U in the YUV color model;
angle_t - the radial angle of the road surface relative to the horizontal plane;
V - the red chrominance V in the YUV color model;
angle_v - the tangential angle of the road surface relative to the horizontal plane.
Further, in the step 2.1), if in actual use the elevation information of the road expressed with the YUV color model needs to be transformed into an RGB color model, the transformation relationship is as follows:
R=(Y+1.4075V)/1.5 (3)
G=(Y-0.3455U-0.7169V)/1.5 (4)
B=(Y+1.779U)/1.5 (5)
in the formula, Y - the gray level Y in the YUV color model;
U - the blue chrominance U in the YUV color model;
V - the red chrominance V in the YUV color model;
R - R in the RGB color model, i.e., the luminance of red;
G - G in the RGB color model, i.e., the luminance of green;
B - B in the RGB color model, i.e., the luminance of blue.
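As a minimal sketch (not part of the patent text), the elevation encoding of eqs. (1)-(2) and the scaled YUV-to-RGB conversion of eqs. (3)-(5) can be written as plain functions; note that the last term of eq. (5) must use U:

```python
def road_uv(angle_t, angle_v):
    """Encode road elevation angles (degrees) into U/V chrominance, eqs. (1)-(2).

    Angles are assumed to lie in [-20, 20] degrees, so U and V fall in
    [-0.15, 0.15] around the neutral value 0.
    """
    u = ((angle_t + 20) / 40 - 0.5) * 0.3
    v = ((angle_v + 20) / 40 - 0.5) * 0.3
    return u, v


def yuv_to_rgb(y, u, v):
    """Scaled YUV -> RGB conversion, eqs. (3)-(5).

    The BT.601-style coefficients appear in the patent; the /1.5 scaling
    keeps the result inside [0, 1] for this small angle range.
    """
    r = (y + 1.4075 * v) / 1.5
    g = (y - 0.3455 * u - 0.7169 * v) / 1.5
    b = (y + 1.779 * u) / 1.5  # eq. (5), with U in the last term
    return r, g, b
```

For a flat road (both angles 0) and Y = 0.20, all three RGB channels come out near 0.13, matching the worked example later in the description.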
Further, the step 2.2) comprises: expressing lane information on the actual road using solid and dotted lines of the same type as the real road markings; meanwhile, converting the road traffic authority and the traffic prohibition and restriction information into limits on the vehicle's driving speed and modifications of its driving style, and then mapping the speed-limit information into the Y value of the YUV road surface color.
Further, when the speed-limit information is mapped to the Y value of the YUV road surface color, the change of the speed limit is mapped to the gray level of the road expression, with the following mapping relationship:
[Equation (6), reproduced only as an image in the original: the mapping from the road speed limit to the gray value Y] (6)
in the formula, Y - the gray value Y of the road surface's YUV color model;
speed_limit - the speed limit of the road section;
speed_limit_max - the maximum value of the road speed limit.
Further, the step 2.3) comprises: expressing the size characteristics of an object with graphic frames of different shapes, and expressing the speed of the object relative to the own vehicle with a fill color; the fill colors use the RGB system, with red, green and blue respectively expressing the relative speed in the lateral, longitudinal and vertical directions of the own-vehicle coordinate system.
Further, the relative speed of the object with respect to the vehicle is expressed by an RGB fill color, with the following mapping relationship between fill color and relative speed:
[Equation (7), reproduced only as an image in the original: the mapping from relative speed to the fill-color matrix] (7)
wherein, color - the matrix of internal fill colors;
speed_x - the relative speed of the vehicle in the x direction with respect to the subject vehicle;
speed_y - the relative speed of the vehicle in the y direction with respect to the subject vehicle;
speed_z - the relative speed of the vehicle in the z direction with respect to the subject vehicle;
speed_max - the maximum value of the relative speed of the vehicle with respect to the host vehicle in each direction.
Further, the step 2.4) comprises: expressing rain, snow and fog information with two independent rectangular boxes. One box expresses fog with an RGBA fill color, where RGB represents the color of the fog and the transparency expresses the current visibility. The other box represents precipitation with lines: solid lines represent rain and dotted lines represent snow, the number of lines represents the current amount of precipitation in the area, and the width and direction of the lines represent the average size of the precipitation particles (raindrops and snowflakes) and their falling direction.
Further, in the step 2.4), the fill color of the rectangle uses the RGBA color system, where the three RGB parameters express the color of the fog and the transparency A expresses the visibility; the specific mapping relationship is as follows:
rec_color=[fog_r/255*stab,fog_g/255*stab,fog_b/255*stab] (8)
line_num=round(density*10/1000000) (9)
line_width=particle_size (10)
line_orientation=wind_direction (11)
wherein, fog_r, fog_g, fog_b - the RGB color components of the fog;
stab - the visibility of the current environment divided by 1000, giving a normalized value;
rec_color - the resulting RGB color matrix;
line_num - the number of lines;
round - the rounding operation;
density - the number of particles within a certain range;
line_width - the width of a line;
particle_size - the particle size;
line_orientation - the direction of a line;
wind_direction - the particle falling direction.
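Equations (8)-(11) can be sketched compactly as follows (a hedged illustration; the function and argument names are assumptions, not from the patent):

```python
def fog_fill_color(fog_rgb, visibility_m):
    """Eq. (8): RGBA-rectangle fill color for fog, with each RGB channel
    scaled by the normalized visibility stab = visibility / 1000."""
    stab = visibility_m / 1000.0
    return [c / 255.0 * stab for c in fog_rgb]


def precipitation_lines(density, particle_size, wind_direction):
    """Eqs. (9)-(11): number, width and orientation of the precipitation
    lines drawn inside the second rectangle."""
    line_num = round(density * 10 / 1000000)
    line_width = particle_size
    line_orientation = wind_direction
    return line_num, line_width, line_orientation
```

For example, white fog at 500 m visibility fills the rectangle with [0.5, 0.5, 0.5], and two million particles in range draw 20 lines.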
Further, the step 2.5) comprises: converting the information obtained by inference into the boundary and fill geometric characteristics of the graphics from steps 2.1) to 2.4).
By describing the environmental information in the above manner, the invention brings the following beneficial effects:
(1) a uniform expression mode is provided for environment perception information from various sensors, so that information of different data types and data structures can be described simultaneously, realizing cross-scene and cross-modal information fusion;
(2) because the processed environment information is described selectively, the expression is more concise and effective, and environment information with little influence on decision behavior is omitted;
(3) the complex environment information is described completely with a uniform graphical expression method, so that subsequent decision algorithms can be developed with image feature extraction and deep-learning-based methods, and decision algorithms for different input data or different methods can be transferred across scenarios.
Drawings
FIG. 1 is a flow chart of a graphical representation method for describing complex driving environment information according to the present invention
FIG. 2 is a diagram showing the effect of road layer size information expression
FIG. 3 is a display of traffic indication information expression effect of traffic regulation layer
FIG. 4 is a diagram showing the information expression effect of an object layer
FIG. 5 is a weather layer expression effect display
FIG. 6 is a diagram showing the expression effect of the inference information layer
Detailed Description
The following will explain the present invention in a practical embodiment by referring to the drawings and examples.
A graphical expression method for describing complex driving environment information mainly comprises the following steps:
Step 1, layer the environment information. Note that what is layered is the environment information, not the physical objects in the environment: for example, the speed limit expressed by a signboard belongs to the traffic regulation layer, while the signboard itself, as an obstacle entity in the driving environment, belongs to the object layer. The final layering result is:
1.1) road layer: contains the shape of the road center line, the road width, and the tangential and radial elevation information of the road surface in the vehicle driving environment;
1.2) traffic regulation layer: contains the road traffic authority and the prohibition and restriction information made explicit by traffic signs and traffic laws in the vehicle driving environment;
1.3) object layer: contains the position, size and motion state information of the static and moving objects that have physical entities in the vehicle driving environment;
1.4) weather layer: contains the rain, fog and snow weather information appearing in the vehicle driving environment;
1.5) reasoning information layer: contains information obtained by inference in the vehicle driving environment, including the future position and attitude of a vehicle and its driving style.
Step 2, respectively carrying out graphical expression on each environment information layer classified in the step 1:
2.1) Graphically express the road layer environment information. A black solid line expresses the boundary of the road area, conveying its impassable attribute under normal (non-emergency-avoidance) driving; as shown in fig. 2, the road boundary is represented by the black solid line. The elevation information of the road is expressed with a YUV color model, which needs less storage space: the value of Y is reserved, U represents the radial angle of the road surface relative to the horizontal plane, and V represents the tangential angle of the road surface relative to the horizontal plane. Considering that road slopes and inclination angles in actual driving environments are generally small, the specific implementation is as follows:
U=((angle_t+20)/40-0.5)*0.3 (1)
V=((angle_v+20)/40-0.5)*0.3 (2)
in the formula, U - the blue chrominance U in the YUV color model;
angle_t - the radial angle of the road surface relative to the horizontal plane;
V - the red chrominance V in the YUV color model;
angle_v - the tangential angle of the road surface relative to the horizontal plane.
In actual use, conversion from the YUV to the RGB color model may be needed; since the conversion from YUV to RGB is not one-to-one, the color parameter range needs to be adjusted. The conversion relationship is given as follows:
R=(Y+1.4075V)/1.5 (3)
G=(Y-0.3455U-0.7169V)/1.5 (4)
B=(Y+1.779U)/1.5 (5)
in the formula, Y - the gray level Y in the YUV color model;
U - the blue chrominance U in the YUV color model;
V - the red chrominance V in the YUV color model;
R - R in the RGB color model, i.e., the luminance of red;
G - G in the RGB color model, i.e., the luminance of green;
B - B in the RGB color model, i.e., the luminance of blue.
Of course, the elevation information in these two directions is not enough to solve for the three luminance parameters of the actually used RGB colors; the missing Y value is described in detail in the speed-limit part below. As a concrete example, when the inclination angles of the road in both the tangential and radial directions are 0 and the speed limit is 60 km/h, combining the expression of the road gradient information, the road surface fill-color YUV array is [0.20, 0.50, 0.50] and the RGB array is [0.13, 0.13, 0.13]; when the tangential tilt angle of the road becomes 10 degrees, the fill-color YUV array becomes [0.20, 0.75, 0.50] and the RGB array becomes [0.13, 0.12, 0.22].
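The two fill-color arrays above can be checked numerically. One caveat: the published RGB values for the 10-degree case only come out if the changed angle feeds the U channel of eqs. (3)-(5), with the last term of eq. (5) read as 1.779*U; the sketch below simply follows the numbers:

```python
def uv_from_angle(angle_deg):
    # eqs. (1)-(2): both chroma channels use the same affine map of the angle
    return ((angle_deg + 20) / 40 - 0.5) * 0.3


def yuv_to_rgb(y, u, v):
    # eqs. (3)-(5), with the last term of eq. (5) read as 1.779*U
    r = (y + 1.4075 * v) / 1.5
    g = (y - 0.3455 * u - 0.7169 * v) / 1.5
    b = (y + 1.779 * u) / 1.5
    return [round(c, 2) for c in (r, g, b)]


# 60 km/h road, Y = 0.20
flat = yuv_to_rgb(0.20, uv_from_angle(0), uv_from_angle(0))    # -> [0.13, 0.13, 0.13]
tilted = yuv_to_rgb(0.20, uv_from_angle(10), uv_from_angle(0))  # -> [0.13, 0.12, 0.22]
```

Both results reproduce the RGB arrays stated in the text.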
2.2) Graphically express the traffic regulation layer. Traffic rules are mainly conveyed by road markings, traffic signal lamps (or traffic police commands) and traffic sign boards. Road markings include lane lines, stop lines and boundary lines, solid and dotted, that indicate the areas in which vehicles may drive; the concrete expression method is to express lane information on the actual road with solid and dotted lines of the same type as the real road markings. A traffic signal can be converted into a change of the speed limit at the vehicle's target road intersection, i.e., an adjustment of the speed limit of the road at the intersection and of the stop line: when the light is green, the speed limit is normal and the stop line turns white, effectively disappearing, to permit the vehicle to pass; when the light is red, the road speed limit is adjusted to 0 and the stop line turns pure black to indicate that it cannot be crossed. Traffic sign boards are generally divided into two categories, main signs and auxiliary signs. Main signs comprise warning signs, prohibition signs, indication signs, road-guide signs, tourist-zone signs, work-zone signs and notification signs. According to their degree of influence on decision behavior during driving, the signs can be divided into three categories: the first category has no mandatory effect on the driving of the vehicle; the second category has a direct and immediate influence on the motion state of the main vehicle; the third category influences possible subsequent decision actions by influencing the vehicle's driving decision style. A detailed classification list is shown in the table below.
[Table: the three-category classification of traffic signs described above, reproduced only as images in the original]
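The signal-to-speed-limit conversion described above can be sketched as follows (a minimal illustration; the state names and return format are assumptions, not from the patent):

```python
def apply_signal(light_state, normal_limit_kmh):
    """Convert a traffic-light state into (intersection speed limit,
    stop-line color): green keeps the normal limit and whitens the stop
    line so it effectively disappears; red zeroes the limit and blackens
    the stop line to mark it impassable."""
    if light_state == "green":
        return normal_limit_kmh, "white"
    if light_state == "red":
        return 0, "black"
    raise ValueError("unhandled signal state: " + light_state)
```

With a 60 km/h road, a green light leaves the limit at 60 with a white stop line, while a red light yields a limit of 0 with a black stop line.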
The change of the speed limit caused by a traffic sign of the second category can be mapped to the gray level of the road expression; the specific mapping relationship in actual application is shown in the following formula:
[Equation (6), reproduced only as an image in the original: the mapping from the road speed limit to the gray value Y] (6)
in the formula, Y - the gray value Y of the road surface's YUV color model;
speed_limit - the speed limit of the road section;
speed_limit_max - the maximum speed limit of the link, here 120 km/h.
The practical effect is shown in fig. 3. The lower road section has tangential and radial inclination angles of 0 and a speed limit of 30 km/h; combining the expression of the road gradient information, its road surface fill-color RGB array is [0.33, 0.33, 0.33]. The other road sections have tangential and radial inclination angles of 0 and a speed limit of 60 km/h, giving a road surface fill-color RGB array of [0.13, 0.13, 0.13].
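Since equation (6) is reproduced only as an image, its exact form is unavailable here. The linear fit below merely reproduces the two worked fill-color examples (30 km/h giving RGB 0.33, i.e. Y = 0.50; 60 km/h giving RGB 0.13, i.e. Y = 0.20); it is an assumption, not the published formula:

```python
def speed_limit_to_gray(speed_limit_kmh):
    """Hypothetical stand-in for eq. (6): a linear map fitted to the two
    worked examples (Y(30) = 0.50, Y(60) = 0.20). The published equation
    may differ, e.g. by normalizing with speed_limit_max = 120 km/h."""
    return 0.8 - 0.01 * speed_limit_kmh
```

Dividing these Y values by 1.5, as in the flat-road case of eqs. (3)-(5), gives the stated RGB grays of about 0.33 and 0.13.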
The change of the driving-style score caused by a traffic sign of the third category can be mapped to the gray level of the body vehicle's boundary line; the mapping relationship is the same as the formula above, and the effect is shown in fig. 6.
2.3) Graphically express the object layer, which comprises static objects and movable objects. Static objects include stationary obstacles such as road signs and buildings; movable objects include surrounding moving objects such as pedestrians and vehicles. An object is represented by a simple geometric figure conforming to its shape, and its speed relative to the body vehicle is expressed by an RGB fill color, with the following mapping relationship between fill color and relative speed:
[Equation (7), reproduced only as an image in the original: the mapping from relative speed to the fill-color matrix] (7)
wherein, color - the matrix of internal fill colors;
speed_x - the relative speed of the vehicle in the x direction with respect to the subject vehicle;
speed_y - the relative speed of the vehicle in the y direction with respect to the subject vehicle;
speed_z - the relative speed of the vehicle in the z direction with respect to the subject vehicle;
speed_max - the maximum value of the relative speed of the vehicle with respect to the host vehicle in each direction.
The concrete effect is shown in fig. 4: the rectangles outside the road boundary on the left represent stationary buildings, while the rectangles on the right represent other road vehicles moving on the road. The actual fill colors are colored; examples of the color arrays are shown in the following table.
[Table: example fill-color arrays, reproduced only as an image in the original]
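Equation (7) is likewise only available as an image. A plausible per-channel mapping consistent with the description, where each relative-speed component drives one RGB channel and zero relative speed renders mid-gray, could look like this (hypothetical, not the published formula):

```python
def relative_speed_color(speed_x, speed_y, speed_z, speed_max):
    """Hypothetical stand-in for eq. (7): clamp each relative-speed
    component to [-speed_max, speed_max] and center it at 0.5, so a
    stationary object (relative to the body vehicle) fills mid-gray."""
    def channel(s):
        s = max(-speed_max, min(speed_max, s))
        return 0.5 + s / (2 * speed_max)
    return [channel(speed_x), channel(speed_y), channel(speed_z)]
```

For example, an object closing at the maximum lateral speed saturates the red channel, while one receding at the maximum longitudinal speed zeroes the green channel.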
2.4) Graphically express the weather layer environment information. Weather affects the vehicle in many ways, including effects on driver visibility and on sensor perception. Rain and snow are classified as precipitation-type weather; their parameters include the precipitation particle size, the number of particles within a range, and the falling direction of the particles, which are expressed respectively by the line width, the number of lines, and the direction of the lines within a rectangle. For fog, the invention uses a rectangular RGBA fill color, where the three RGB parameters express the color of the fog and the transparency A expresses the visibility. The specific mapping relationship is as follows:
rec_color=[fog_r/255*stab,fog_g/255*stab,fog_b/255*stab] (8)
line_num=round(density*10/1000000) (9)
line_width=particle_size (10)
line_orientation=wind_direction (11)
wherein, fog_r, fog_g, fog_b - the RGB color components of the fog;
stab - the visibility of the current environment divided by 1000, giving a normalized value;
rec_color - the resulting RGB color matrix;
line_num - the number of lines;
round - the rounding operation;
density - the number of particles within a certain range;
line_width - the width of a line;
particle_size - the particle size;
line_orientation - the direction of a line;
wind_direction - the particle falling direction.
The expression effect is shown in fig. 5, and the correspondence between the information contained in the figure and the geometric elements is shown in the following table.
[Table: correspondence between the information in fig. 5 and the geometric elements, reproduced only as an image in the original]
2.5) Graphically express the inference information layer. The graphical expression method for this layer is to convert the information obtained by inference into the boundary and fill geometric characteristics of the graphics produced in steps 2.1) to 2.4).
In this embodiment, the inferred aggressiveness score of a vehicle's driving style is expressed by the gray level of the vehicle's rectangular frame line: as shown in the left diagram of fig. 6, the left vehicle's driving style is conservative, so its gray level is lower, while the right vehicle's style is more aggressive, so its gray level is higher. The inferred position of a vehicle 0.5 s in the future is expressed as a solid-color rectangle without a bounding frame line; the fill color of this rectangle uses the same method as the object layer to express the relative speed at that future position compared with the current time. The effect is shown in the right diagram of fig. 6.
Step 3: after all the environment information layers have been graphically expressed, store the graphics of each layer for subsequent work, stack the five layers of geometric graphics expressing the environment information in top-down covering order, and finally apply the corresponding coordinate system transformation to all the geometric graphics according to the driving speed and direction of the body vehicle, visually displaying the final graphically expressed comprehensive environment information, centered on the body vehicle and changing over time.
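Step 3 can be sketched as a tiny compositing loop plus an ego-centric coordinate transform; the layer names, the raster format, and the None-as-transparent convention are illustrative assumptions, not from the patent:

```python
import math

# Top-down covering order: later layers paint over earlier ones.
LAYER_ORDER = ["road", "traffic_rule", "object", "weather", "inference"]


def compose(layers):
    """Overlay per-layer rasters (lists of rows of pixels, None meaning
    transparent) in covering order onto a single canvas."""
    canvas = None
    for name in LAYER_ORDER:
        raster = layers[name]
        if canvas is None:
            canvas = [row[:] for row in raster]  # bottom layer: copy as-is
        else:
            for i, row in enumerate(raster):
                for j, px in enumerate(row):
                    if px is not None:
                        canvas[i][j] = px
    return canvas


def ego_transform(x, y, ego_x, ego_y, heading_rad):
    """Rotate/translate a world point into the ego-centered frame so the
    composite image stays centered on, and aligned with, the body vehicle."""
    dx, dy = x - ego_x, y - ego_y
    c, s = math.cos(-heading_rad), math.sin(-heading_rad)
    return dx * c - dy * s, dx * s + dy * c
```

Applying `ego_transform` each frame with the vehicle's current pose yields the time-varying, body-vehicle-centered view described above.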

Claims (10)

1. A graphical expression method for describing complex driving environment information is characterized by comprising the following steps:
step 1, layering environment information, comprising:
1.1) road layer: contains the shape of the road center line, the road width, and the tangential and radial elevation information of the road surface in the vehicle driving environment;
1.2) traffic regulation layer: contains the road traffic authority and the prohibition and restriction information made explicit by traffic signs and traffic laws in the vehicle driving environment;
1.3) object layer: contains the position, size and motion state information of the static and moving objects that have physical entities in the vehicle driving environment;
1.4) weather layer: contains the weather environment information present in the vehicle driving environment;
1.5) reasoning information layer: contains information obtained by inference in the vehicle driving environment, such as the future position and attitude of a vehicle and its driving style;
step 2, graphically expressing each environment information layer classified in step 1:
2.1) graphically expressing the environment information of the road layer;
2.2) graphically expressing the traffic regulation layer;
2.3) graphically expressing the object layer;
2.4) graphically expressing the environment information of the weather layer;
2.5) graphically expressing the inference information layer;
and step 3, after all the environment information layers have been graphically expressed, storing the graphics of each layer separately, stacking the graphics of the environment information layers in sequence according to the top-down covering order as required, performing the corresponding coordinate system transformation on the stacked graphics according to the driving speed and direction of the subject vehicle, and displaying the graphically expressed comprehensive environment information, centered on the subject vehicle and changing over time.
2. The graphical expression method for describing complex driving environment information according to claim 1, characterized in that said step 2.1) comprises: using a solid line to express the boundary of the road area, and using the YUV color model to express the elevation information of the road, wherein the value of Y is reserved, U expresses the radial angle of the road surface relative to the horizontal plane, and V expresses the tangential angle of the road surface relative to the horizontal plane, the specific expression being as follows:
U=((angle_t+20)/40-0.5)*0.3 (1)
V=((angle_v+20)/40-0.5)*0.3 (2)
in the formula, U - the blue chroma U in the YUV color model;
angle_t - the radial angle of the road surface relative to the horizontal plane;
V - the red chroma V in the YUV color model;
angle_v - the tangential angle of the road surface relative to the horizontal plane.
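Equations (1) and (2) can be implemented directly; the implied input range of [-20, 20] degrees follows from the offsets in the formulas:

```python
def road_elevation_uv(angle_t: float, angle_v: float):
    """Equations (1) and (2): map the radial (angle_t) and tangential
    (angle_v) road-surface angles, in degrees and assumed to lie in
    [-20, 20], to the U and V chroma components."""
    U = ((angle_t + 20) / 40 - 0.5) * 0.3  # (1)
    V = ((angle_v + 20) / 40 - 0.5) * 0.3  # (2)
    return U, V
```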
3. The graphical expression method for describing complex driving environment information according to claim 2, characterized in that in step 2.1), if in actual use the elevation information of the road expressed with the YUV color model needs to be transformed into the RGB color model, the transformation relationship is as follows:
R=(Y+1.4075V)/1.5 (3)
G=(Y-0.3455U-0.7169V)/1.5 (4)
B=(Y+1.779U)/1.5 (5)
in the formula, Y - the gray level Y in the YUV color model;
U - the blue chroma U in the YUV color model;
V - the red chroma V in the YUV color model;
R - the R in the RGB color model, i.e., the luminance of red;
G - the G in the RGB color model, i.e., the luminance of green;
B - the B in the RGB color model, i.e., the luminance of blue.
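A direct implementation of the transformation in equations (3)-(5), assuming the U term (rather than B) belongs on the right-hand side of equation (5), consistent with the standard YUV-to-RGB coefficients:

```python
def yuv_to_rgb(Y: float, U: float, V: float):
    """Equations (3)-(5): convert the YUV road-surface color to RGB.
    The division by 1.5 compresses the result into the display range."""
    R = (Y + 1.4075 * V) / 1.5                 # (3)
    G = (Y - 0.3455 * U - 0.7169 * V) / 1.5    # (4)
    B = (Y + 1.779 * U) / 1.5                  # (5), U assumed on the RHS
    return R, G, B
```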
4. The graphical expression method for describing complex driving environment information according to claim 2, characterized in that said step 2.2) comprises: using solid and dashed lines of the same type as the real road markings to express the lane information on the actual road; meanwhile, converting the road right-of-way and the traffic prohibition and restriction information into restrictions on the vehicle driving speed and modifications of the driving style, and then mapping the speed limit information into the Y value of the YUV color of the road surface.
5. The graphical expression method for describing complex driving environment information according to claim 4, characterized in that when the speed limit information is mapped into the Y value of the YUV color of the road surface, the change of the speed limit is mapped to the gray level of the road expression, the mapping relationship being as follows:
[Equation (6) - published as image FDA0003626945620000021]
in the formula, Y - the gray value Y of the YUV color model of the road surface;
speed_limit - the speed limit of the road section;
speed_limit_max - the maximum value of the road speed limit.
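Equation (6) is published only as an image, so the exact formula is not recoverable here; a hypothetical linear mapping consistent with the variable definitions above might look like:

```python
def road_gray(speed_limit: float, speed_limit_max: float) -> float:
    """Hypothetical stand-in for equation (6), which is published as an
    image: the speed limit is normalized by its maximum to give the
    gray value Y of the road surface (higher limit -> brighter road)."""
    return speed_limit / speed_limit_max
```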
6. The graphical expression method for describing complex driving environment information according to claim 2, characterized in that said step 2.3) comprises: using graphic frames of different shapes to express the size characteristic information of objects, and using fill colors to express the relative speed of an object with respect to the subject vehicle, the fill colors using the RGB color system, with red, green and blue respectively expressing the relative speed in the transverse, longitudinal and vertical directions under the subject-vehicle coordinate system.
7. The graphical expression method for describing complex driving environment information according to claim 6, characterized in that the relative speed of an object with respect to the subject vehicle is expressed by a fill color made with RGB, the mapping relationship between the fill color and the relative speed being as follows:
[Equation (7) - published as image FDA0003626945620000031]
wherein, color - the matrix of internal fill colors;
speed_x - the relative speed of the vehicle in the x-direction relative to the subject vehicle;
speed_y - the relative speed of the vehicle in the y-direction relative to the subject vehicle;
speed_z - the relative speed of the vehicle in the z-direction relative to the subject vehicle;
speed_max - the maximum value of the relative speed of the vehicle relative to the subject vehicle in each direction.
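Equation (7) is likewise published only as an image; a hypothetical stand-in consistent with the definitions above, normalizing each axis of relative speed into one RGB channel, could be:

```python
def fill_color(speed_x: float, speed_y: float, speed_z: float,
               speed_max: float):
    """Hypothetical stand-in for equation (7), which is published as an
    image: each axis of relative speed, assumed to lie in
    [-speed_max, speed_max], is normalized into [0, 1] and assigned to
    one RGB channel (red = transverse, green = longitudinal, blue = vertical)."""
    def norm(v: float) -> float:
        return min(max((v / speed_max + 1) / 2, 0.0), 1.0)
    return [norm(speed_x), norm(speed_y), norm(speed_z)]
```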
8. The graphical expression method for describing complex driving environment information according to claim 2, characterized in that said step 2.4) comprises: using two independent rectangular boxes to express rain, snow and fog weather information, wherein one box expresses fog by a fill color made with RGBA, RGB representing the color of the fog and the transparency expressing the visibility at that moment; the other box expresses precipitation by lines, wherein solid lines represent rain and dashed lines represent snow, the number of lines represents the amount of precipitation in the area at that moment, and the width and direction of the lines represent the average size of the precipitation particles, i.e. raindrops and snowflakes, and their falling direction.
9. The graphical expression method for describing complex driving environment information according to claim 8, characterized in that in step 2.4), the fill color of the rectangle is expressed with the RGBA color system, wherein the three RGB parameters express the color of the fog and the transparency A expresses the visibility, the specific mapping relationship being as follows:
rec_color=[fog_r/255*stab,fog_g/255*stab,fog_b/255*stab] (8)
line_num=round(density*10/1000000) (9)
line_width=particle_size (10)
line_orientation=wind_direction (11)
wherein, fog_r, fog_g, fog_b - the RGB color components of the fog;
stab - the current environment visibility divided by 1000, giving a normalized value;
rec_color - the RGB color matrix;
line_num - the number of lines;
round - the rounding operation;
density - the number of particles within a certain range;
line_width - the width of the lines;
particle_size - the particle size;
line_orientation - the direction of the lines;
wind_direction - the particle falling direction.
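Equations (8)-(11) can be combined into a single sketch; the parameter names follow the variable definitions above, and the units (visibility in meters) are an assumption:

```python
def weather_graphics(fog_rgb, visibility_m, density, particle_size,
                     wind_direction):
    """Equations (8)-(11): compute the fog rectangle's fill color and the
    precipitation lines' count, width, and orientation. `stab` is the
    visibility divided by 1000, per the definitions above."""
    stab = visibility_m / 1000
    fog_r, fog_g, fog_b = fog_rgb
    rec_color = [fog_r / 255 * stab, fog_g / 255 * stab, fog_b / 255 * stab]  # (8)
    line_num = round(density * 10 / 1000000)   # (9)
    line_width = particle_size                 # (10)
    line_orientation = wind_direction          # (11)
    return rec_color, line_num, line_width, line_orientation
```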
10. The graphical expression method for describing complex driving environment information according to claim 2, characterized in that said step 2.5) comprises: converting the information obtained by inference into the boundary and fill geometric features of the graphics in steps 2.1) to 2.4).
CN202210479395.2A 2022-05-05 2022-05-05 Graphical expression method for describing complex driving environment information Active CN114820971B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210479395.2A CN114820971B (en) 2022-05-05 2022-05-05 Graphical expression method for describing complex driving environment information


Publications (2)

Publication Number Publication Date
CN114820971A true CN114820971A (en) 2022-07-29
CN114820971B CN114820971B (en) 2023-06-09

Family

ID=82512024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210479395.2A Active CN114820971B (en) 2022-05-05 2022-05-05 Graphical expression method for describing complex driving environment information

Country Status (1)

Country Link
CN (1) CN114820971B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140277939A1 (en) * 2013-03-14 2014-09-18 Robert Bosch Gmbh Time and Environment Aware Graphical Displays for Driver Information and Driver Assistance Systems
CN108010360A (en) * 2017-12-27 2018-05-08 中电海康集团有限公司 A kind of automatic Pilot context aware systems based on bus or train route collaboration
CN108062864A (en) * 2016-11-09 2018-05-22 奥迪股份公司 A kind of traffic scene visualization system and method and vehicle for vehicle
CN108225364A (en) * 2018-01-04 2018-06-29 吉林大学 A kind of pilotless automobile driving task decision system and method
CN110007675A (en) * 2019-04-12 2019-07-12 北京航空航天大学 A kind of Vehicular automatic driving decision system based on driving situation map and the training set preparation method based on unmanned plane
CN111539112A (en) * 2020-04-27 2020-08-14 吉林大学 Scene modeling method for automatically driving vehicle to quickly search traffic object
CN112101120A (en) * 2020-08-18 2020-12-18 沃行科技(南京)有限公司 Map model based on automatic driving application scene and application method thereof
WO2021000800A1 (en) * 2019-06-29 2021-01-07 华为技术有限公司 Reasoning method for road drivable region and device
WO2021148113A1 (en) * 2020-01-22 2021-07-29 Automotive Artificial Intelligence (Aai) Gmbh Computing system and method for training a traffic agent in a simulation environment
CN113895464A (en) * 2021-12-07 2022-01-07 武汉理工大学 Intelligent vehicle driving map generation method and system fusing personalized driving style


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
BANSAL, M: "ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst", 15th Conference on Robotics - Science and Systems *
N. DJURIC ET AL: "Uncertainty-aware Short-term Motion Prediction of Traffic Actors for Autonomous Driving", 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 2084-2093 *
S. ULBRICH: "Graph-based context representation, environment modeling and information aggregation for automated driving", 2014 IEEE Intelligent Vehicles Symposium Proceedings, Dearborn, pages 541-547 *
SONG QI: "Research on Environment Perception Algorithms for Unmanned Vehicles Based on Multi-Source Sensor Fusion", China Master's Theses Full-text Database, Information Science and Technology, pages 035-423 *
ZHU BO ET AL: "Representation Method of Autonomous Driving Scenarios Based on Multi-channel Situation Maps", China Journal of Highway and Transport, vol. 33, no. 8, pages 204-214 *
GUAN XIN ET AL: "Research on Environment Perception Method for Intelligent Vehicle Simulation Based on Hierarchical Information Database", Automotive Engineering, vol. 37, no. 01, pages 43-48 *
HUANG WULING: "Application of LiDAR in Driverless Environment Perception", Microcontrollers & Embedded Systems, vol. 16, no. 10, pages 3-7 *

Also Published As

Publication number Publication date
CN114820971B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
US11373419B2 (en) Automatically detecting unmapped drivable road surfaces for autonomous vehicles
JP4595759B2 (en) Environment recognition device
CN110503716B (en) Method for generating motor vehicle license plate synthetic data
CN111899515B (en) Vehicle detection system based on wisdom road edge calculates gateway
CN110414418A (en) A kind of Approach for road detection of image-lidar image data Multiscale Fusion
CN108876805B (en) End-to-end unsupervised scene passable area cognition and understanding method
US20220373354A1 (en) Automatic generation of vector map for vehicle navigation
WO2023213155A1 (en) Vehicle navigation method and apparatus, computer device and storage medium
CN111651712A (en) Method and system for evaluating complexity of test scene of intelligent automobile
US20240005642A1 (en) Data Augmentation for Vehicle Control
US20240005641A1 (en) Data Augmentation for Detour Path Configuring
CN115830265A (en) Automatic driving movement obstacle segmentation method based on laser radar
CN116142233A (en) Carrier lamp classification system
CN116597690B (en) Highway test scene generation method, equipment and medium for intelligent network-connected automobile
US20220146277A1 (en) Architecture for map change detection in autonomous vehicles
CN113525357A (en) Automatic parking decision model optimization system and method
CN114820971B (en) Graphical expression method for describing complex driving environment information
CN117237919A (en) Intelligent driving sensing method for truck through multi-sensor fusion detection under cross-mode supervised learning
US20230056589A1 (en) Systems and methods for generating multilevel occupancy and occlusion grids for controlling navigation of vehicles
WO2023158706A1 (en) End-to-end processing in automated driving systems
CN115257785A (en) Automatic driving data set manufacturing method and system
CN114648549A (en) Traffic scene target detection and positioning method fusing vision and laser radar
CN115610442A (en) Composite scene for implementing autonomous vehicle
US11702011B1 (en) Data augmentation for driver monitoring
CN117685954B (en) Multi-mode semantic map construction system and method for mining area

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant