CN115618602A - Lane-level scene simulation method and system - Google Patents
- Publication number
- CN115618602A (application CN202211257883.5A)
- Authority
- CN
- China
- Prior art keywords
- lane
- vehicle
- data
- road
- laser radar
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000004088 simulation Methods 0.000 title claims abstract description 49
- 238000000034 method Methods 0.000 title claims abstract description 34
- 238000012545 processing Methods 0.000 claims abstract description 18
- 238000006243 chemical reaction Methods 0.000 claims abstract description 12
- 238000005516 engineering process Methods 0.000 claims abstract description 10
- 238000004590 computer program Methods 0.000 claims description 10
- 238000000605 extraction Methods 0.000 claims description 5
- 238000009499 grossing Methods 0.000 claims description 5
- 238000003708 edge detection Methods 0.000 claims description 4
- 230000009466 transformation Effects 0.000 claims description 4
- 238000012795 verification Methods 0.000 claims description 2
- 230000008569 process Effects 0.000 description 10
- 230000006870 function Effects 0.000 description 7
- 238000010586 diagram Methods 0.000 description 4
- 238000012360 testing method Methods 0.000 description 3
- 230000006399 behavior Effects 0.000 description 2
- 238000004891 communication Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000002955 isolation Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/4802—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
-
- G06T3/04—
-
- G06T5/70—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Hardware Design (AREA)
- Automation & Control Theory (AREA)
- Human Computer Interaction (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Computer Networks & Wireless Communication (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention provides a lane-level scene simulation method and system. The method comprises the following steps: acquiring lane image data or laser radar data collected by a vehicle in a real scene; if the acquired data is lane image data, extracting the outermost lane line pixel points in the lane image through image processing, calculating the distance between the outermost lane lines and the vehicle, obtaining the number of lanes, and judging the lane in which the vehicle is located; if the acquired data is laser radar data, obtaining the distance from the vehicle to the road edges on both sides based on the laser radar data, calculating the current road width, deriving the number of lanes for that width according to road planning standards, and judging the lane in which the vehicle is located by combining the distances to the two road edges; and performing simulation scene data conversion based on the real scene data acquired by the vehicle and the lane in which it is located, and setting the lane of the simulated vehicle. The scheme ensures the accuracy of lane judgment in crowdsourced scene data and improves the precision and reliability of scene simulation.
Description
Technical Field
The invention belongs to the field of automatic driving simulation tests, and particularly relates to a lane level scene simulation method and system.
Background
With the development of automatic driving, the demand for scene simulation tests keeps increasing, and the simulation scenes to be built are becoming more complex and diverse. The basic data of a simulation scene come from images, point clouds and the like collected by an acquisition vehicle in the real scene. In general, the lane in which the acquisition vehicle is located can be entered manually or judged by the vehicle itself; however, when the lane is determined from crowdsourced data, differences among crowdsourcing vehicle-mounted devices and human errors or omissions may make the lane recorded in the received crowdsourced scene data inaccurate. It is therefore necessary to determine the lane in which the vehicle is located with a unified algorithm on the server side or the vehicle side.
Disclosure of Invention
In view of this, embodiments of the present invention provide a lane-level scene simulation method and system, which are used to solve the problem that lanes where vehicles are located in crowd-sourced scene data are inaccurate.
In a first aspect of an embodiment of the present invention, a lane-level scene simulation method is provided, including:
acquiring lane image data or laser radar data acquired by a vehicle in a real scene;
if the collected data is lane image data, extracting outermost lane line pixel points in the lane image through an image processing technology, calculating the distance between the outermost lane line and the own vehicle, acquiring the number of lanes, and judging the lane where the own vehicle is located;
if the collected data is laser radar data, obtaining the distance from the vehicle to the edges of the two sides of the road based on the laser radar data, calculating the width of the current road, calculating the number of lanes under the width of the current road according to a road planning standard, and judging the lane where the vehicle is located by combining the distance from the vehicle to the edges of the two sides of the road;
and carrying out simulation scene data conversion based on the real scene data acquired by the vehicle and the lane where the vehicle is located, and setting the lane where the simulation vehicle is located.
In a second aspect of embodiments of the present invention, there is provided a system for lane-level scene simulation, comprising:
the data acquisition module is used for acquiring lane image data or laser radar data acquired by a vehicle in a real scene;
the lane judging module is used for extracting the outermost lane line pixel points in the lane image through an image processing technology if the acquired data is lane image data, calculating the distance between the outermost lane line and the own vehicle, acquiring the number of lanes and judging the lane where the own vehicle is located;
if the collected data is laser radar data, obtaining the distance from the vehicle to the edges of the two sides of the road based on the laser radar data, calculating the width of the current road, calculating the number of lanes under the width of the current road according to a road planning standard, and judging the lane where the vehicle is located by combining the distance from the vehicle to the edges of the two sides of the road;
and the scene simulation module is used for carrying out simulation scene data conversion based on the real scene data acquired by the vehicle and the lane where the vehicle is located, and setting the lane where the simulation vehicle is located.
In a third aspect of the embodiments of the present invention, there is provided an electronic device, including a memory, a processor, and a computer program stored in the memory and executable by the processor, where the processor executes the computer program to implement the steps of the method according to the first aspect of the embodiments of the present invention.
In a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor implements the steps of the method provided by the first aspect of the embodiments of the present invention.
In the embodiment of the invention, the lane where the vehicle is located is judged through the vehicle-mounted camera or the vehicle-mounted laser radar, the lane where the vehicle is located can be set in a simulation scene, and the authenticity and the accuracy of the simulation scene are guaranteed. The problem of inaccurate result caused by artificial setting or vehicle self-judgment in crowdsourcing data is avoided, the accuracy and the reliability of lane judgment can be improved based on a unified algorithm, and the lane-level simulation effect is guaranteed.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a lane-level scene simulation method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a system for lane-level scene simulation according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification or claims and in the accompanying drawings, are intended to cover a non-exclusive inclusion, such that a process, method or system, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements. In addition, "first" and "second" are used to distinguish different objects, and are not used to describe a specific order.
Referring to fig. 1, a flow chart of a lane-level scene simulation method according to an embodiment of the present invention is schematically illustrated, including:
s101, lane image data or laser radar data collected by a vehicle in a real scene are obtained;
the lane image data is image data acquired by a vehicle-mounted camera, and the vehicle-mounted camera is generally mounted at a vehicle head, a vehicle roof or a rearview mirror and the like, is used for acquiring images in front of the vehicle, and can comprise a road surface, other vehicles, a signboard and the like. The laser radar data are vehicle surrounding object point clouds and the like acquired by a vehicle-mounted laser radar, and are generally mounted on a vehicle roof and used for acquiring distance information of obstacles around the vehicle.
When the vehicle is provided with the vehicle-mounted camera, lane image data can be collected through the vehicle-mounted camera; when the vehicle is provided with the laser radar, the vehicle-mounted laser radar can collect laser radar data.
S102, if the collected data are lane image data, extracting outermost lane line pixel points in the lane image through an image processing technology, calculating the distance between the outermost lane lines and the vehicle, acquiring the number of lanes, and judging the lane where the vehicle is located;
and when the acquired data is a lane image, extracting the lane line pixels on the road surface and the outermost lane line pixels by using a corresponding image processing technology.
Specifically, the lane image undergoes grayscale conversion, Gaussian smoothing, edge detection and region-of-interest extraction; lane lines are then extracted through the Hough transform, and the pixels of the outermost lane lines of the road are selected.
The image is read and converted to grayscale with OpenCV, Gaussian smoothing is applied to the grayscale image, a detection threshold is set, Canny edge detection is performed on the smoothed image, and a region of interest (ROI) is selected in the image.
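As an illustration of the preprocessing chain above, the steps can be sketched in pure Python on a tiny synthetic image. This is only a sketch: an actual implementation would use OpenCV's cv2.cvtColor, cv2.GaussianBlur and cv2.Canny, and the helper names below are illustrative, not part of any library.

```python
import math

def to_gray(rgb_image):
    """Grayscale conversion with ITU-R BT.601 luminance weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def gaussian_kernel(size=3, sigma=1.0):
    """Normalized 2-D Gaussian kernel used for smoothing."""
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

def smooth(gray, kernel):
    """Convolve interior pixels with the kernel (borders left unchanged)."""
    half = len(kernel) // 2
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for y in range(half, h - half):
        for x in range(half, w - half):
            out[y][x] = sum(kernel[j][i] * gray[y + j - half][x + i - half]
                            for j in range(len(kernel))
                            for i in range(len(kernel)))
    return out

# Tiny 5x5 "road" image: uniform gray with one dark lane-marking pixel.
rgb = [[(200, 200, 200)] * 5 for _ in range(5)]
rgb[2][2] = (0, 0, 0)
gray = to_gray(rgb)
blurred = smooth(gray, gaussian_kernel())  # the dark pixel is softened
```

Smoothing before edge detection, as in the pipeline above, suppresses single-pixel noise that would otherwise produce spurious Canny edges.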
The Hough transform is a method for recognizing geometric shapes in images and can be used to extract lane lines from lane images. A straight line is expressed in polar form: y = (-cos θ/sin θ)x + (r/sin θ), which simplifies to r = x·cos θ + y·sin θ. In general, for a point (x0, y0), the family of straight lines passing through it is defined by r_θ = x0·cos θ + y0·sin θ, so each pair (r_θ, θ) represents one straight line through (x0, y0). If, for a given point (x0, y0), all the lines through it are plotted in the polar-radius/polar-angle plane, a sinusoid results. If this operation is performed on every point in the image, the curves obtained from two different points intersect in the (r, θ) plane exactly when the points lie on the same straight line. The Hough line transform tracks these curve intersections for each point in the image. If the number of curves passing through one intersection exceeds a threshold, the parameters represented by that intersection can be considered a straight line in the original image, and the line drawn back into the original image from those parameters is a lane line in the real scene.
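The voting scheme described above can be written out directly. The following is a pure-Python sketch for illustration; OpenCV's cv2.HoughLines implements the same accumulator far more efficiently, and the function name here is an assumption.

```python
import math
from collections import Counter

def hough_votes(points, theta_steps=180):
    """Accumulate votes over discretized (r, theta) cells.

    Each edge point (x0, y0) votes for every cell satisfying
    r = x0*cos(theta) + y0*sin(theta); collinear points pile
    their votes into the same cell.
    """
    votes = Counter()
    for (x0, y0) in points:
        for step in range(theta_steps):
            theta = step * math.pi / theta_steps  # theta in [0, pi)
            r = x0 * math.cos(theta) + y0 * math.sin(theta)
            votes[(round(r), step)] += 1  # 1-unit r resolution
    return votes

# Four collinear edge pixels along the line y = x; in polar form this
# line is r = 0 at theta = 135 degrees, so that cell collects all votes.
points = [(0, 0), (1, 1), (2, 2), (3, 3)]
votes = hough_votes(points)
peak_votes = votes[(0, 135)]  # with 180 steps, step 135 <-> 135 degrees
</n>```

A thresholded scan over the accumulator then yields the detected lines, mirroring the threshold test described in the text.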
The lane in which the vehicle is located is then judged from the number of lanes and the number of lane lines on each side of the vehicle.
Pixel points on the outermost lane lines are selected, and their distance to the vehicle is calculated in combination with the camera parameters so as to determine the number of lanes. The lane in which the vehicle is located is then judged from the extracted number of lane lines and the number of lanes. For example, on a one-way road with three lanes, if there are two lane lines on the right side of the vehicle and two on the left, the vehicle is in the middle lane.
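The counting rule in this example can be sketched as a small function. The function name and the convention of counting lanes 1-based from the left are assumptions for illustration only.

```python
def lane_from_line_counts(total_lanes, lines_left, lines_right):
    """Return the 1-based lane index, counted from the left.

    A road with N lanes has N + 1 lane lines, so the counts on the
    two sides must sum to N + 1; the ego lane index equals the number
    of lane lines to the vehicle's left.
    """
    if lines_left + lines_right != total_lanes + 1:
        raise ValueError("lane line counts inconsistent with lane count")
    return lines_left

# Example from the text: three lanes, two lane lines on each side of
# the vehicle -> lane 2, i.e. the middle lane.
middle = lane_from_line_counts(3, 2, 2)
```

The consistency check also catches the crowdsourcing errors the method targets: if the extracted line counts do not match the lane count, the sample can be rejected rather than mislabeled.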
S103, if the collected data are laser radar data, obtaining the distance from the vehicle to the edges of the two sides of the road based on the laser radar data, calculating the width of the current road, calculating the number of lanes under the width of the current road according to a road planning standard, and judging the lane where the vehicle is located by combining the distance from the vehicle to the edges of the two sides of the road;
the two edges of the road can be guardrails, isolation zones and the like, and can also be the edge lane lines on the two sides of the lane, the distance between the edge of the road and the vehicle can be directly measured through a laser radar, the distance between the left side and the right side is added to obtain the width of the road, and according to the design standard of common lanes, if the urban lane is generally 3.5 meters, the highway lane is generally 3.75mi, the number of lanes of the current road can be calculated.
The lane in which the vehicle is located is judged by combining the distance from the vehicle to either side of the road with the lane data. For example, if the road has three lanes in one direction and the distance from the vehicle to the right edge is 3.6 m, the vehicle is in the middle lane of the three.
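Under the standard lane widths stated above, the laser radar estimate can be sketched as follows. The function name, the 3.5 m default, and the convention of counting the ego lane from the right edge are illustrative assumptions.

```python
def lanes_from_lidar(dist_left, dist_right, lane_width=3.5):
    """Estimate (lane_count, ego_lane) from lidar edge distances.

    Road width is the sum of the distances to the two road edges;
    the lane count is the width divided by a standard lane width,
    and the ego lane is counted 1-based from the right edge.
    """
    road_width = dist_left + dist_right
    lane_count = round(road_width / lane_width)
    ego_lane = int(dist_right // lane_width) + 1
    return lane_count, ego_lane

# Example from the text: 3.6 m to the right edge on a 10.5 m wide road
# gives three lanes, with the vehicle in the 2nd lane from the right,
# i.e. the middle lane.
result = lanes_from_lidar(6.9, 3.6)
```

In practice the standard lane width would be chosen per road class (urban vs. highway), matching the design standards cited above.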
Preferably, if the vehicle simultaneously collects lane image data and laser radar data, the lane in which the vehicle is located is judged from the lane image data, and the judgment result is verified against the laser radar data. This ensures the accuracy and reliability of the result, and the image-based judgment algorithm can be optimized based on the verification outcome.
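The preferred cross-check can be expressed as a minimal sketch (names are illustrative): the camera result is kept as the primary answer, and the laser radar result either confirms it or flags the sample for review and algorithm tuning.

```python
def verify_lane(camera_lane, lidar_lane):
    """Return (lane, consistent): the camera estimate plus a lidar check.

    When the flag is False, the sample can be queued for manual review
    or used to tune the image-based judgment algorithm.
    """
    return camera_lane, camera_lane == lidar_lane
```

For example, a camera estimate of lane 2 confirmed by the lidar yields (2, True), while a lidar estimate of lane 3 yields (2, False) and marks the sample as inconsistent.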
And S104, carrying out simulation scene data conversion based on the real scene data acquired by the vehicle and the lane where the vehicle is located, and setting the lane where the simulation vehicle is located.
Other-vehicle information collected by the sensors, such as the positions, distances and speeds of other vehicles captured by the camera, or their distances and speeds measured by the laser radar, is converted into an OpenDRIVE-format road map, and vehicle behavior simulation is carried out in the OpenSCENARIO format. The position of the lane in which the vehicle is located can be confirmed through the identified lane, which improves the simulation precision.
OpenDRIVE describes the static road traffic network required by an automatic driving simulation application and provides a standard interchange format; the standard mainly describes roads and the objects on them. OpenSCENARIO is one of the standards formulated by ASAM and is dedicated to dynamic scenario planning in the field of scene simulation; it establishes standards among maps, scenarios, tools and test functions and realizes a standardized description of intelligent-driving dynamic scenes. It comprises three fields, namely RoadNetwork, Entities and Storyboard, which respectively describe the scenario's roads, the participants' parameters and the participants' behavior.
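As a sketch of this conversion step, the identified ego lane can be written into an OpenSCENARIO-style LanePosition element. The element and attribute names below follow the ASAM OpenSCENARIO convention; the concrete roadId/laneId values are illustrative only.

```python
import xml.etree.ElementTree as ET

def ego_lane_position(road_id, lane_id, s=0.0):
    """Build an OpenSCENARIO-style LanePosition element for the ego lane."""
    return ET.Element("LanePosition", {
        "roadId": str(road_id),
        "laneId": str(lane_id),  # negative ids lie right of the road reference line
        "s": str(s),             # distance along the road reference line
    })

# Middle lane identified as lane -2 on road "1", 25 m along the road.
elem = ego_lane_position("1", -2, s=25.0)
xml_text = ET.tostring(elem, encoding="unicode")
```

The element would be embedded in the Storyboard's init section to place the simulated vehicle in the lane determined by the method above.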
In the embodiment, based on a unified lane judgment algorithm, the accuracy and reliability of lane judgment in crowdsourcing scene data can be guaranteed, and the scene simulation precision is improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 2 is a schematic structural diagram of a system for lane-level scene simulation according to an embodiment of the present invention, where the system includes:
the data acquisition module 210 is configured to acquire lane image data or laser radar data acquired by a vehicle in a real scene;
the lane judging module 220 is configured to, if the acquired data is lane image data, extract an outermost lane line pixel point in the lane image through an image processing technology, calculate a distance between an outermost lane line and a host vehicle, obtain the number of lanes, and judge a lane where the host vehicle is located;
if the collected data is laser radar data, obtaining the distance from the vehicle to the edges of the two sides of the road based on the laser radar data, calculating the width of the current road, calculating the number of lanes under the width of the current road according to a road planning standard, and judging the lane where the vehicle is located by combining the distance from the vehicle to the edges of the two sides of the road;
wherein, extracting the outermost lane line pixel points in the lane image through the image processing technology comprises:
respectively carrying out grayscale conversion, Gaussian smoothing, edge detection and region-of-interest extraction on the lane image; and extracting lane lines through the Hough transform, and selecting the pixels of the lane lines on the outermost side of the road.
Further, the number of lanes is obtained, and the lane where the self-vehicle is located is judged according to the number of lanes and the number of lane lines on two sides of the self-vehicle.
Preferably, the lane determining module further includes:
and the verification module is used for judging the lane where the vehicle is located according to the lane image data when the vehicle simultaneously collects the lane image data and the laser radar data, and verifying the judgment result based on the laser radar data.
And the scene simulation module 230 is configured to perform simulation scene data conversion based on the real scene data acquired by the vehicle and the lane where the vehicle is located, and set the lane where the simulation vehicle is located.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the module described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device is used for lane-level road scene simulation. As shown in fig. 3, the electronic device 3 of this embodiment includes: a memory 310, a processor 320, and a system bus 330, the memory 310 containing an executable program 3101 stored thereon. It will be understood by those skilled in the art that the electronic device structure shown in fig. 3 does not constitute a limitation of electronic devices, which may include more or fewer components than shown, a combination of some components, or a different arrangement of components.
The following describes each component of the electronic device in detail with reference to fig. 3:
the memory 310 may be used to store software programs and modules, and the processor 320 executes various functional applications and data processing of the electronic device by operating the software programs and modules stored in the memory 310. The memory 310 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as cache data) created according to the use of the electronic device, and the like. Further, the memory 310 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The memory 310 contains an executable program 3101 of the lane-level scene simulation method. The executable program 3101 may be divided into one or more modules/units, which are stored in the memory 310 and executed by the processor 320 to realize the lane judgment of the vehicle and the like. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions; the instruction segments describe the execution process of the executable program 3101 in the electronic device 3. For example, the executable program 3101 may be divided into functional modules such as a data acquisition module, a lane judgment module, and a scene simulation module.
The processor 320 is a control center of the electronic device, connects various parts of the whole electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 310 and calling data stored in the memory 310, thereby performing overall status monitoring of the electronic device. Alternatively, processor 320 may include one or more processing units; preferably, the processor 320 may integrate an application processor, which mainly handles operating systems, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 320.
The system bus 330 is used to connect the functional units inside the computer and can transmit data information, address information and control information; it may be, for example, a PCI bus, an ISA bus or a CAN bus. The instructions of the processor 320 are transferred to the memory 310 through the bus, the memory 310 feeds data back to the processor 320, and the system bus 330 is responsible for data and instruction interaction between the processor 320 and the memory 310. Of course, other devices, such as network interfaces and display devices, may also be connected to the system bus 330.
In this embodiment of the present invention, the executable program executed by the processor 320 of the electronic device includes:
acquiring lane image data or laser radar data acquired by a vehicle in a real scene;
if the acquired data is lane image data, extracting outermost lane line pixel points in the lane image through an image processing technology, calculating the distance between the outermost lane lines and the own vehicle, acquiring the number of lanes, and judging the lane where the own vehicle is located;
if the collected data is laser radar data, obtaining the distance from the vehicle to the edges of the two sides of the road based on the laser radar data, calculating the width of the current road, calculating the number of lanes under the width of the current road according to a road planning standard, and judging the lane where the vehicle is located by combining the distance from the vehicle to the edges of the two sides of the road;
and carrying out simulation scene data conversion based on the real scene data acquired by the vehicle and the lane where the vehicle is located, and setting the lane where the simulation vehicle is located.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the system, the device and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A lane-level scene simulation method, characterized by comprising the following steps:
acquiring lane image data or laser radar data collected by a vehicle in a real scene;
if the collected data is lane image data, extracting the outermost lane line pixel points in the lane image by an image processing technique, calculating the distance between the outermost lane line and the host vehicle, obtaining the number of lanes, and determining the lane in which the host vehicle is located;
if the collected data is laser radar data, obtaining the distances from the host vehicle to the two road edges based on the laser radar data, calculating the width of the current road, calculating the number of lanes for the current road width according to a road planning standard, and determining the lane in which the host vehicle is located in combination with the distances from the host vehicle to the two road edges; and
performing simulation scene data conversion based on the real scene data collected by the vehicle and the lane in which the host vehicle is located, and setting the lane in which the simulated vehicle is located.
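The laser radar branch of claim 1 can be sketched as follows. The 3.75 m lane width is purely an illustrative assumption standing in for "a road planning standard"; the function name and lane numbering convention (1-based from the left road edge) are likewise hypothetical.

```python
ASSUMED_LANE_WIDTH_M = 3.75  # illustrative stand-in for the applicable road planning standard

def lanes_from_lidar(dist_to_left_edge_m, dist_to_right_edge_m,
                     lane_width_m=ASSUMED_LANE_WIDTH_M):
    """Estimate the lane count from lidar-derived road width and locate the host vehicle.

    Returns (number_of_lanes, host_lane_index), lanes numbered 1..n from the left edge.
    """
    # Road width is the sum of the distances to the two road edges
    road_width_m = dist_to_left_edge_m + dist_to_right_edge_m
    # Nearest whole number of standard-width lanes fitting that width
    number_of_lanes = max(1, round(road_width_m / lane_width_m))
    # The host lane follows from how many full lane widths lie to the left
    host_lane = int(dist_to_left_edge_m // lane_width_m) + 1
    return number_of_lanes, min(host_lane, number_of_lanes)
```

For example, a vehicle measured 9.0 m from the left edge and 2.0 m from the right edge would be placed in lane 3 of a 3-lane road under these assumptions.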
2. The method of claim 1, wherein the extracting the outermost lane line pixel points in the lane image by an image processing technique comprises:
performing gray level conversion, Gaussian smoothing, edge detection, and region-of-interest extraction on the lane image; and
extracting lane lines by the Hough transform, and selecting the pixel points of the lane lines on the outermost sides of the road.
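A minimal, dependency-light sketch of a claim-2-style pipeline: grayscale conversion, Gaussian smoothing, gradient-based edge detection, a region-of-interest mask, and a Hough vote for the dominant line. The thresholds, kernel width, ROI fraction, and function names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def to_gray(img_rgb):
    # Gray level conversion using ITU-R BT.601 luminance weights
    return img_rgb @ np.array([0.299, 0.587, 0.114])

def gaussian_blur(gray, sigma=1.0):
    # Separable 1-D Gaussian kernel, applied down columns then across rows
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    blurred = np.apply_along_axis(np.convolve, 0, gray, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 1, blurred, kernel, mode="same")

def edge_map(gray, thresh=30.0):
    # Gradient-magnitude edges (a simple stand-in for a Canny detector)
    gy, gx = np.gradient(gray)
    return np.hypot(gx, gy) > thresh

def roi_mask(edges, keep_bottom_frac=0.5):
    # Keep only the lower part of the image, where the road surface lies
    out = edges.copy()
    out[: int(edges.shape[0] * (1 - keep_bottom_frac))] = False
    return out

def hough_dominant_line(edges, n_theta=180):
    # Vote in (rho, theta) space and return the strongest line
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    for y, x in zip(*np.nonzero(edges)):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, thetas[t]  # line: x*cos(theta) + y*sin(theta) = rho
```

Repeating the Hough vote over all accumulator peaks (rather than only the maximum) would yield every lane line, from which the outermost two can be selected as the claim describes.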
3. The method of claim 1, wherein the obtaining the number of lanes and the determining the lane in which the host vehicle is located comprise:
determining the lane in which the host vehicle is located according to the number of lanes and the number of lane lines on each side of the host vehicle.
4. The method of claim 1, wherein the acquiring lane image data or laser radar data collected by a vehicle in a real scene further comprises:
if the vehicle collects lane image data and laser radar data simultaneously, determining the lane in which the host vehicle is located according to the lane image data, and verifying the determination result based on the laser radar data.
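The cross-check of claim 4 can be sketched as below. The policy on disagreement (keep the image-based result but flag it as unverified) is an assumption; the claim only requires that the lidar data verify the image-based judgment.

```python
def verify_lane_judgment(lane_from_image, lane_from_lidar):
    """Verify the image-based lane judgment against the lidar-based one.

    Returns (lane, verified): the image-based lane plus a flag telling
    whether the independent lidar estimate confirmed it.
    """
    verified = lane_from_image == lane_from_lidar
    # Keep the image-based result either way; the flag records the outcome
    return lane_from_image, verified
```

A caller could, for instance, discard or down-weight scene conversions built on unverified lane judgments.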
5. A lane-level scene simulation system, characterized by comprising:
a data acquisition module, configured to acquire lane image data or laser radar data collected by a vehicle in a real scene;
a lane judging module, configured to: if the collected data is lane image data, extract the outermost lane line pixel points in the lane image by an image processing technique, calculate the distance between the outermost lane line and the host vehicle, obtain the number of lanes, and determine the lane in which the host vehicle is located; and
if the collected data is laser radar data, obtain the distances from the host vehicle to the two road edges based on the laser radar data, calculate the width of the current road, calculate the number of lanes for the current road width according to a road planning standard, and determine the lane in which the host vehicle is located in combination with the distances from the host vehicle to the two road edges; and
a scene simulation module, configured to perform simulation scene data conversion based on the real scene data collected by the vehicle and the lane in which the host vehicle is located, and to set the lane in which the simulated vehicle is located.
6. The system of claim 5, wherein the extracting the outermost lane line pixel points in the lane image by an image processing technique comprises:
performing gray level conversion, Gaussian smoothing, edge detection, and region-of-interest extraction on the lane image; and
extracting lane lines by the Hough transform, and selecting the pixel points of the lane lines on the outermost sides of the road.
7. The system of claim 5, wherein the obtaining the number of lanes and the determining the lane in which the host vehicle is located comprise:
determining the lane in which the host vehicle is located according to the number of lanes and the number of lane lines on each side of the host vehicle.
8. The system of claim 5, wherein the lane judging module further comprises:
a verification module, configured to, when the vehicle collects lane image data and laser radar data simultaneously, determine the lane in which the host vehicle is located according to the lane image data, and verify the determination result based on the laser radar data.
9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the lane-level scene simulation method according to any one of claims 1 to 4.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed, implements the steps of the lane-level scene simulation method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211257883.5A CN115618602A (en) | 2022-10-12 | 2022-10-12 | Lane-level scene simulation method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115618602A true CN115618602A (en) | 2023-01-17 |
Family
ID=84862724
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211257883.5A Pending CN115618602A (en) | 2022-10-12 | 2022-10-12 | Lane-level scene simulation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115618602A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116010289A (en) * | 2023-03-27 | 2023-04-25 | 禾多科技(北京)有限公司 | Automatic driving simulation scene test method and device, electronic equipment and readable medium |
Similar Documents
Publication | Title |
---|---|
CN110163930B | Lane line generation method, device, equipment, system and readable storage medium |
CN111291676B | Lane line detection method and device based on laser radar point cloud and camera image fusion and chip |
CN109470254B | Map lane line generation method, device, system and storage medium |
CN110443225B | Virtual and real lane line identification method and device based on feature pixel statistics |
WO2018068653A1 | Point cloud data processing method and apparatus, and storage medium |
CN110598743A | Target object labeling method and device |
CN110555407B | Pavement vehicle space identification method and electronic equipment |
CN112581612A | Vehicle-mounted grid map generation method and system based on fusion of laser radar and look-around camera |
CN110298311B | Method and device for detecting surface water accumulation |
CN110657812A | Vehicle positioning method and device and vehicle |
CN113989766A | Road edge detection method and road edge detection equipment applied to vehicle |
WO2021017211A1 | Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal |
CN112329846A | Laser point cloud data high-precision marking method and system, server and medium |
CN113255444A | Training method of image recognition model, image recognition method and device |
KR20210082518A | Intersection detection, neural network training and smart driving methods, devices and devices |
CN115618602A | Lane-level scene simulation method and system |
CN115423968A | Power transmission channel optimization method based on point cloud data and live-action three-dimensional model |
CN114820679A | Image annotation method and device, electronic equipment and storage medium |
CN114240816A | Road environment sensing method and device, storage medium, electronic equipment and vehicle |
CN114820657A | Ground point cloud segmentation method, ground point cloud segmentation system, ground modeling method and medium |
CN111316324A | Automatic driving simulation system, method, equipment and storage medium |
CN111210411B | Method for detecting vanishing points in image, method for training detection model and electronic equipment |
CN117079238A | Road edge detection method, device, equipment and storage medium |
CN109598199B | Lane line generation method and device |
CN116642490A | Visual positioning navigation method based on hybrid map, robot and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||