CN116168173A - Lane line map generation method, device, electronic device and storage medium

Info

Publication number: CN116168173A
Application number: CN202310444818.1A
Authority: CN (China)
Prior art keywords: map, lane line, grid, target, attribute
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN116168173B (granted publication)
Inventors: 高海明, 华炜, 邱奇波, 张霄来, 张骞
Assignee (current and original): Zhejiang Lab

Classifications

    • G06T 17/05: Geographic models (three dimensional [3D] modelling)
    • G06T 11/40: Filling a planar surface by adding surface attributes, e.g. colour or texture (2D image generation)
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods (image analysis)
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V 20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06T 2207/20081: Training; learning (indexing scheme for image analysis or image enhancement)
    • G06T 2207/30252: Vehicle exterior; vicinity of vehicle
    • Y02T 10/40: Engine management systems (climate change mitigation technologies related to transportation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a lane line map generation method and device, an electronic device, and a storage medium. The lane line map generation method comprises the following steps: acquiring an original image captured by a vehicle-mounted camera; determining a corresponding mask image based on the original image, wherein the mask image comprises lane line attribute information; constructing a corresponding grid map based on the mask image; filling the lane line attribute information into the grid map based on the mask image and the internal and external parameters of the vehicle-mounted camera to obtain a filled grid map; and generating a local lane line map based on the filled grid map. The method thereby solves the problem in the prior art that a lane line map cannot be generated from image information of the visual space, and achieves lane line map generation from such image information.

Description

Lane line map generation method, device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of driving assistance technologies, and in particular, to a lane line map generating method, a lane line map generating device, an electronic device, and a storage medium.
Background
A lane line map can provide important prior information for intelligent driver assistance; for example, it can provide a basis for the real-time positioning and motion planning of intelligent driving vehicles. The generation of lane line maps has therefore attracted extensive attention from researchers in related fields.
Current lane line maps are mostly generated from point cloud data produced by lidar, combined with manual labeling, which entails a huge workload. Moreover, lidar sensors are expensive, and lidar intensity information is easily affected by standing water, lane line material, and wear, resulting in low map completeness. No effective solution has yet been proposed for generating a lane line map based on image information of the visual space.
How to generate a lane line map using image information of the visual space is therefore a problem to be solved.
Disclosure of Invention
In this embodiment, a lane line map generation method, device, electronic device, and storage medium are provided to solve the problem in the related art of how to generate a lane line map using image information of the visual space.
In a first aspect, in this embodiment, there is provided a lane line map generating method, including:
acquiring an original image acquired by a vehicle-mounted camera;
determining a corresponding mask image based on the original image, wherein the mask image comprises lane line attribute information;
constructing a corresponding grid map based on the mask image;
filling the lane line attribute information into the grid map based on the mask image and the internal and external parameters of the vehicle-mounted camera to obtain a filled grid map;
and generating a local lane line map based on the filled grid map.
In some of these embodiments, the constructing a corresponding grid map based on the mask image includes:
acquiring internal and external parameters of the vehicle-mounted camera;
and converting the mask image into the grid map based on the internal and external parameters of the vehicle-mounted camera.
In some embodiments, the filling the lane line attribute information into the grid map based on the mask image and the internal and external parameters of the vehicle-mounted camera to obtain a filled grid map includes:
based on the internal and external parameters of the vehicle-mounted camera, projecting the grid map to an imaging plane of the vehicle-mounted camera to obtain a distortion-free map;
acquiring a mapping relation between an undistorted image and a distorted image in the vehicle-mounted camera;
obtaining a distortion map corresponding to the grid map based on the undistorted map and the mapping relation;
and filling the lane line attribute information into the grid map based on the mask image and the distortion map to obtain a filled grid map.
In some of these embodiments, the lane-line attribute information includes a plurality of attribute categories, and after the generating a local lane-line map based on the filled grid map, the method further includes:
acquiring a plurality of local lane line maps corresponding to the vehicle-mounted camera under different poses;
determining an initial global lane line map based on a plurality of the local lane line maps;
determining the total number of observations of a target grid and attribute categories corresponding to each observation of the target grid, wherein the target grid is any grid in the initial global lane line map;
determining a target attribute category of the target grid based on the attribute observation times of the target grid in each attribute category, the total observation times and a preset threshold;
and generating a target global lane line map based on target attribute categories of all grids in the initial global lane line map.
In some embodiments, the determining the target attribute category of the target grid based on the attribute observation times of the target grid in each attribute category, the total observation times and a preset threshold value includes:
determining class probability corresponding to each attribute class of the target grid based on the attribute observation times of the target grid in each attribute class and the observation total times;
and determining the target attribute category of the target grid based on the category probabilities and the preset threshold.
In some of these embodiments, after generating the target global lane line map based on the target attribute categories of all grids in the initial global lane line map, the method further comprises:
acquiring distribution parameters of the target global lane line map;
updating the target global lane line map based on the distribution parameters, the current attribute observation times of the lane lines in the target global lane line map in each attribute category and the current total observation times of the lane lines in the target global lane line map.
In some embodiments, the updating the target global lane line map based on the distribution parameter, the current number of attribute observations of the lane lines in the target global lane line map in each attribute category, and the current total number of observations of the lane lines in the target global lane line map includes:
determining a current probability distribution function of the target global lane line map based on the distribution parameters, the current attribute observation times of the lane lines in the target global lane line map in each attribute category and the current total observation times of the lane lines in the target global lane line map;
and updating the target global lane line map based on the current probability distribution function.
In a second aspect, in this embodiment, there is provided a lane line map generating apparatus including:
the acquisition module is used for acquiring an original image acquired by the vehicle-mounted camera;
the determining module is used for determining a corresponding mask image based on the original image, wherein the mask image comprises lane line attribute information;
the construction module is used for constructing a corresponding grid map based on the mask image;
the filling module is used for filling the lane line attribute information into the grid map based on the mask image and the internal and external parameters of the vehicle-mounted camera to obtain a filled grid map;
and the generation module is used for generating a local lane line map based on the filled grid map.
In a third aspect, in this embodiment, there is provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the lane line map generating method according to the first aspect or any one of its embodiments.
In a fourth aspect, in this embodiment, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the lane line map generating method according to the first aspect or any one of its embodiments described above.
Compared with the related art, the lane line map generation method provided in this embodiment determines a corresponding mask image from the original image captured by the vehicle-mounted camera, the mask image including the attribute information of the lane lines. A corresponding grid map is then constructed from the mask image, and the lane line attribute information in the mask image is filled into the grid map according to the mask image and the internal and external parameters of the vehicle-mounted camera, so that the filled grid map contains the attribute information of the lane lines. A local lane line map is then generated from the filled grid map. The lane line map is thus generated from image information of the visual space captured by the vehicle-mounted camera, which avoids the low completeness of generated lane line maps caused by the susceptibility of lidar to weather, lane line material, and other factors.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below; other features, objects, and advantages of the application will become apparent from them.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and constitute a part of it, illustrate embodiments of the application and, together with the description, serve to explain the application without unduly limiting it. In the drawings:
fig. 1 is an application scenario schematic diagram of a lane line map generating method provided in an embodiment of the present application;
fig. 2 is a flowchart of a lane line map generating method according to an embodiment of the present application;
FIG. 3 is a schematic view of an imaging plane and a bird's eye view plane according to an embodiment of the present disclosure;
FIG. 4 is an embodiment flow chart of a lane line map generation method of the embodiments of the present application;
fig. 5 is a schematic view of a lane line map in a campus environment according to an embodiment of the present disclosure;
fig. 6 is a block diagram of a lane line map generating apparatus according to an embodiment of the present application;
fig. 7 is a schematic diagram of an internal structure of a computer device according to an embodiment of the present application.
Detailed Description
For a clearer understanding of the objects, technical solutions and advantages of the present application, the present application is described and illustrated below with reference to the accompanying drawings and examples.
Unless defined otherwise, technical or scientific terms used herein shall have the same meaning as commonly understood by those of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," "these," and the like in this application are not intended to limit quantity and may denote the singular or the plural. The terms "comprising," "including," "having," and any variations thereof, as used in this application, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (units) is not limited to the listed steps or modules (units), but may include other steps or modules (units) not listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "And/or" describes an association relationship between associated objects, meaning that there may be three relationships; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. Generally, the character "/" indicates an "or" relationship between the associated objects. The terms "first," "second," "third," and the like in this application merely distinguish similar objects and do not represent a particular ordering of the objects.
The lane line map generation method provided by the embodiments of the present application can be applied to the application scenario shown in fig. 1. Fig. 1 is a schematic diagram of the application scenario of the lane line map generation method provided in an embodiment of the present application. The terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process, and may be integrated on the server 104 or placed on a cloud or other network server. In this embodiment, the terminal 102 may be an intelligent vehicle-mounted device; it may also be any of various personal computers, notebook computers, smartphones, tablet computers, Internet of Things devices, and portable wearable devices, where the Internet of Things devices may be smart speakers, smart televisions, smart air conditioners, and so on. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In this embodiment, a lane line map generating method is provided. Fig. 2 is a flowchart of the lane line map generating method provided in this embodiment. The execution subject of the method may be an electronic device; optionally, the electronic device may be a server or a terminal device, although the application is not limited thereto. Specifically, as shown in fig. 2, the process includes the following steps:
step S201, acquiring an original image acquired by the vehicle-mounted camera.
Step S202, determining a corresponding mask image based on the original image.
The mask image comprises lane line attribute information.
Specifically, the lane line attribute information may include four categories: solid line, dashed line, stop line, and other. More specifically, the lane line attribute information may be represented by corresponding labels, for example, label 1 for a solid line, label 2 for a dashed line, label 3 for a stop line, and label 0 for other.
For example, an original image of the map region to be generated, including lane lines, is acquired by the vehicle-mounted camera.
Specifically, the vehicle-mounted camera may be a vehicle-mounted monocular camera or a vehicle-mounted binocular camera; the embodiments of the present application take a vehicle-mounted monocular camera as an example for illustration, without limitation.
Further, a mask image corresponding to the obtained original image is determined by a deep learning algorithm. Specifically, texture information of the lane lines in the original image can be obtained by a Transformer-based deep learning algorithm, and a corresponding mask image can then be obtained from this texture information, where the mask image includes the lane line attribute information.
In the embodiments of the present application, the Transformer-based deep learning algorithm is used only as an example; in practical applications, one or more of the R-CNN, SPP-Net, Fast R-CNN, and FPN algorithms, or other deep learning algorithms, may also be adopted, without limitation.
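As a non-limiting illustration of the label convention described above, the following Python sketch collapses the per-class output of a generic segmentation network into a single-channel mask of lane line attribute labels; the function name, array shapes, and label ordering are assumptions for illustration only, not part of this application:

```python
import numpy as np

# Assumed label convention from the embodiment:
# 0 = other, 1 = solid line, 2 = dashed line, 3 = stop line.
LANE_LABELS = {"other": 0, "solid": 1, "dashed": 2, "stop": 3}

def logits_to_mask(logits: np.ndarray) -> np.ndarray:
    """Collapse per-class scores of shape (H, W, 4), as produced by a
    hypothetical segmentation network, into an (H, W) mask whose entries
    are the lane line attribute labels above."""
    return np.argmax(logits, axis=-1).astype(np.uint8)
```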
Step S203, constructing a corresponding grid map based on the mask image.
Further, a corresponding grid map is constructed from the mask image, so as to obtain a grid map under the bird's-eye view.
Step S204, filling the lane line attribute information into the grid map based on the mask image and the internal and external parameters of the vehicle-mounted camera, and obtaining the filled grid map.
The position conversion relation between the mask image and the grid map is determined based on the mask image and the internal and external parameters of the vehicle-mounted camera, and lane line attribute information in the mask image is filled into corresponding positions in the grid map according to the conversion relation, so that the filled grid map is obtained.
Step S205, a local lane line map is generated based on the filled grid map.
Illustratively, the filled grid map is determined as a local lane line map.
In the above implementation, a corresponding mask image is determined from the original image captured by the vehicle-mounted camera, the mask image including the lane line attribute information. A corresponding grid map is built from the mask image, and the lane line attribute information in the mask image is filled into the grid map according to the mask image and the internal and external parameters of the vehicle-mounted camera, so that the filled grid map contains the attribute information of the lane lines. A local lane line map is then generated from the filled grid map. The lane line map is thereby generated from image information of the visual space captured by the vehicle-mounted camera, which avoids the low completeness of generated lane line maps caused by the susceptibility of lidar to weather, lane line material, and other factors. Moreover, the generation of lane line maps from visual-space image information provided by this application has simple and clear algorithm logic and can provide accurate prior information for the real-time positioning and motion planning of intelligent driving vehicles.
In some of these embodiments, constructing a corresponding grid map based on the mask image may include the steps of:
step 1: and obtaining the internal and external parameters of the vehicle-mounted camera.
Step 2: the mask image is converted into a grid map based on internal and external parameters of the vehicle-mounted camera.
Illustratively, the internal and external parameters of the vehicle-mounted camera are acquired. The position conversion relation between the mask image and the bird's-eye view can then be determined from these parameters, and the position information of the lane lines in the mask image can be converted into the bird's-eye view according to this relation, so as to obtain a grid map under the bird's-eye view.
In the above implementation, the position conversion relation between the mask image and the bird's-eye view is determined from the internal and external parameters of the vehicle-mounted camera, and the grid map under the bird's-eye view is then determined from this conversion relation together with the preset size and resolution of the grid map, thereby realizing the construction of the grid map.
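A minimal sketch of this construction step is given below, assuming a vehicle frame with x pointing forward, y to the left, and the ground plane at z = 0; the frame convention and function names are illustrative assumptions, not the patent's specification:

```python
import numpy as np

def build_grid_map(length_m: float, width_m: float, res_m: float) -> np.ndarray:
    """Allocate an empty bird's-eye-view grid map; each cell stores one
    attribute label, where 0 ('other') means no lane line observed yet."""
    rows = int(round(length_m / res_m))
    cols = int(round(width_m / res_m))
    return np.zeros((rows, cols), dtype=np.uint8)

def cell_centers(grid: np.ndarray, res_m: float) -> np.ndarray:
    """3-D coordinates of every cell center on the assumed ground plane,
    returned as an array of shape (rows, cols, 3)."""
    rows, cols = grid.shape
    xs = (np.arange(rows) + 0.5) * res_m                       # forward distance
    ys = (np.arange(cols) + 0.5) * res_m - cols * res_m / 2.0  # centered laterally
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    return np.stack([gx, gy, np.zeros_like(gx)], axis=-1)
```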
In some embodiments, filling lane line attribute information into the grid map based on the mask image and internal and external parameters of the vehicle-mounted camera to obtain a filled grid map may include the following steps:
Step 1: projecting the grid map onto the imaging plane of the vehicle-mounted camera based on the internal and external parameters of the vehicle-mounted camera, to obtain an undistorted map.
Step 2: acquiring the mapping relation between the undistorted image and the distorted image in the vehicle-mounted camera.
Step 3: obtaining a distortion map corresponding to the grid map based on the undistorted map and the mapping relation.
Step 4: filling the lane line attribute information into the grid map based on the mask image and the distortion map, to obtain a filled grid map.
An optical system images an object with some deviation from the object itself, i.e., distortion; the direct cause is that the magnification at the edge of the lens differs from that at its center. Lens distortion cannot be eliminated, but through the mapping relation between the undistorted image and the distorted image in the vehicle-mounted camera, the distorted image corresponding to an undistorted image, or the undistorted image corresponding to a distorted image, can be determined.
For example, each grid in the grid map is projected onto the imaging plane of the vehicle-mounted camera according to its internal and external parameters, so that a projected map is obtained. A map obtained in this way is free of distortion; that is, the projected map is an undistorted map.
Fig. 3 is a schematic diagram of the imaging plane and the bird's-eye view plane. As shown in fig. 3, the coordinate system of the vehicle-mounted camera is o-xyz, the imaging plane is the plane where uv lies, and the bird's-eye view plane is the plane marked by the thick line in fig. 3, i.e., the plane where the grid map lies.
Since the mask image is obtained from the original image having distortion, the mask image is distorted, and in order to accurately determine the positional correspondence between the mask image and the grid map, it is necessary to determine a distorted image corresponding to the map after projection.
Specifically, a mapping relationship between an undistorted image and a distorted image in the vehicle-mounted camera can be obtained, and a distorted image corresponding to the projected map is determined according to the undistorted map and the mapping relationship, namely, a distorted map corresponding to the grid map.
Further, since both the mask image and the distortion map are distorted, the positions of the lane lines in the mask image correspond one-to-one to positions in the distortion map, so the attribute information of the lane lines in the mask image can be mapped onto the distortion map. Since the distortion map is obtained by position conversion from the grid map, the lane line attribute information of the grid map position corresponding to each mask image position can be determined, and the lane line attribute information of each position in the mask image can be filled into the grid map, thereby obtaining the filled grid map.
In the above implementation, the grid map is projected onto the imaging plane of the vehicle-mounted camera according to its internal and external parameters to obtain the undistorted map, which lies in the same plane as the mask image. Considering the influence of camera distortion on position, and that the mask image is itself a distorted image, the distorted map corresponding to the undistorted map is then determined according to the mapping relation between the undistorted image and the distorted image in the vehicle-mounted camera, so that a distortion map in the same plane as the mask image is obtained. From the positional correspondence between the mask image and the distortion map, the lane line attribute information at the corresponding positions in the distortion map, and in turn in the grid map, can be determined, and the lane line attribute information of each position is filled into the grid map to obtain the filled grid map.
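The projection-and-fill step can be sketched with OpenCV as follows. Note that cv2.projectPoints applies the distortion coefficients directly, so the embodiment's two stages (undistorted projection, then the undistorted-to-distorted mapping) collapse into one call here; this is a sketch under that simplification, not the patent's reference implementation:

```python
import cv2
import numpy as np

def fill_grid_from_mask(grid, centers, mask, rvec, tvec, K, dist):
    """Project every grid-cell center into the distorted camera image and copy
    the lane line label of the pixel it lands on. rvec/tvec: extrinsics
    (vehicle frame to camera), K: 3x3 intrinsics, dist: (k_1, k_2, p_1, p_2, k_3).
    Cells behind the camera should additionally be filtered in practice."""
    pts = centers.reshape(-1, 3).astype(np.float64)
    img_pts, _ = cv2.projectPoints(pts, rvec, tvec, K, dist)
    uv = img_pts.reshape(grid.shape + (2,))
    u = np.round(uv[..., 0]).astype(int)
    v = np.round(uv[..., 1]).astype(int)
    h, w = mask.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)  # centers that land inside the mask
    grid[ok] = mask[v[ok], u[ok]]
    return grid
```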
In some embodiments, the obtaining the mapping relationship between the undistorted image and the distorted image in the vehicle-mounted camera may include the following steps:
step 1: and obtaining distortion parameters of the vehicle-mounted camera.
Step 2: and determining the mapping relation between the undistorted image and the distorted image according to the distortion parameters and the original image.
For example, the vehicle-mounted camera may be calibrated using a checkerboard to obtain its distortion parameters. Specifically, the distortion parameters of the vehicle-mounted camera may be (k_1, k_2, p_1, p_2, k_3), where (k_1, k_2, k_3) are radial distortion parameters and (p_1, p_2) are tangential distortion parameters.
Further, according to the distortion parameters, the undistorted position corresponding to each position in the original image is determined, so that the mapping relation between the undistorted image and the distorted image is determined.
In the implementation process, the mapping relation between the undistorted image and the distorted image is determined according to the distortion parameters of the vehicle-mounted camera and the original image, so that the position conversion between the undistorted image and the distorted image is facilitated according to the mapping relation.
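The mapping these five coefficients define follows the standard radial-tangential (Brown-Conrady) distortion model; the patent does not spell the model out, so the following sketch of the forward map, from a normalized undistorted point to its distorted position, is the conventional form rather than a quotation of the application:

```python
def distort_point(x: float, y: float,
                  k1: float, k2: float, p1: float, p2: float, k3: float):
    """Forward radial-tangential distortion of a normalized image point (x, y);
    this is the direction of the tabulated undistorted-to-distorted mapping
    used when looking up pixel positions in the original (distorted) image."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d
```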
In some of these embodiments, the lane line attribute information includes a plurality of attribute categories, and after generating the local lane line map based on the filled grid map, the method may further include the steps of:
step 1: and acquiring a plurality of local lane line maps corresponding to the vehicle-mounted camera under different poses.
Step 2: an initial global lane line map is determined based on the plurality of local lane line maps.
Step 3: and determining the total number of observations of the target grid and the attribute category corresponding to each observation of the target grid, wherein the target grid is any grid in the initial global lane line map.
Step 4: and determining the target attribute category of the target grid based on the attribute observation times, the total observation times and a preset threshold value of the target grid in each attribute category.
Step 5: and generating a target global lane line map based on the target attribute categories of all grids in the initial global lane line map.
For example, since the local lane line maps are acquired by the vehicle-mounted camera, which captures images at a set frequency (typically 20 Hz to 30 Hz, i.e., 20 to 30 images per second), 20 to 30 local lane line maps can be obtained correspondingly per second. Specifically, each local lane line map can be taken as one observation; that is, each observation completes one observation of every grid in the local lane line map.
Further, because the coverage of a single image captured by the vehicle-mounted camera is limited, the camera captures images under different poses in order to obtain a global lane line map, yielding local lane line maps corresponding to those poses.
Further, the grids of the plurality of local lane line maps are combined by position to obtain an initial global lane line map. Specifically, the size of the initial global lane line map can be determined according to the driving environment of the vehicle; as one embodiment, the global lane line map in this application has a length of L, a width of W, and a grid resolution of δa.
Since positions may be observed repeatedly across the plurality of local lane line maps, any given grid of the initial global lane line map may appear in several local lane line maps or in only one; that is, the total observation times may be the same or different from grid to grid. The lane line attribute information may include a plurality of attribute categories, specifically the four categories of solid line, dashed line, stop line, and other, and in the initial global lane line map each observation of a grid yields one corresponding attribute category.
Further, the total observation times of the target grid, i.e., the total number of times the target grid is observed across the plurality of local lane line maps, and the lane line attribute category corresponding to each observation of the target grid are determined, where the target grid is any grid in the initial global lane line map.
Further, the number of times the target grid is observed in each attribute category, i.e., its attribute observation times, is determined, and the target attribute category of the target grid is then determined from its attribute observation times, the total observation times, and a preset threshold.
Further, the target attribute categories of all grids in the initial global lane line map are determined, and the initial global lane line map with all target attribute categories determined is taken as the target global lane line map.
In the above implementation, the initial global lane line map is assembled from the local lane line maps captured under different poses of the vehicle-mounted camera, so that it includes the position information of all lane lines. Then, from the total number of times each grid in the initial global lane line map is observed, the attribute category obtained at each observation, and a preset threshold, the target attribute category of each grid is determined. The initial global grid map with the target attribute categories of all grids determined is taken as the target global grid map, thereby realizing the generation of the global grid map.
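One way to realize this statistics step is to keep a per-category observation count for every global grid and to add each local lane line map as one observation; the array layout and offset handling below are assumptions for illustration:

```python
import numpy as np

N_CLASSES = 4  # other, solid line, dashed line, stop line

def accumulate(counts: np.ndarray, local_grid: np.ndarray, r0: int, c0: int) -> np.ndarray:
    """counts has shape (rows, cols, N_CLASSES) over the global map;
    local_grid holds labels 0..3, and (r0, c0) places it inside the global
    map according to the camera pose. Every cell of the local map counts as
    one observation, including cells labeled 0 ('other')."""
    rr, cc = np.indices(local_grid.shape)
    counts[rr + r0, cc + c0, local_grid] += 1
    return counts
```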
In some embodiments, determining the target attribute category of the target grid based on the number of attribute observations, the total number of observations, and a preset threshold for the target grid in each attribute category may include the steps of:
step 1: and determining the class probability corresponding to the target grid in each attribute class based on the attribute observation times and the total observation times of the target grid in each attribute class.
Step 2: and determining the target attribute category of the target grid based on the respective probabilities and a preset threshold.
Illustratively, the class probability of the target grid for each attribute category can be determined from the attribute observation times of the target grid in that category and the total observation times; the target attribute category of the target grid c_i can then be determined from the class probabilities of the attribute categories.
Specifically, let the target grid be the i-th grid c_i in the initial global lane line map, let its observation times for the solid line, dashed line, stop line, and other attributes be denoted n_1, n_2, n_3, and n_4, respectively, and let the total observation times be n_i. The class probability of grid c_i for each attribute category is counted: the class probability for the solid line is P_1 = n_1/n_i, the class probability for the dashed line is P_2 = n_2/n_i, the class probability for the stop line is P_3 = n_3/n_i, and the class probability for other is P_4 = n_4/n_i. The maximum class probability is then determined; for example, if the class probabilities of grid c_i satisfy P_1 < P_3 < P_4 < P_2, the maximum class probability of grid c_i is P_2.
Further, the target attribute category of the target grid is determined by comparing the maximum class probability with a preset threshold.
Specifically, assume the preset threshold is P_r. If P_r ≤ P_2, the target attribute category of grid c_i is the dashed line; if P_r > P_2, grid c_i is assigned a non-lane-line attribute.
In the above implementation, the class probability of the target grid for each attribute category is determined from its attribute observation times in that category and the total observation times, and the target attribute category of the target grid is then determined from the class probabilities and the preset threshold, so that the target attribute category of the target grid is determined accurately.
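Continuing the worked example, the per-grid decision can be sketched as follows; the function name and the convention of returning label 0 for a non-lane-line grid are assumptions:

```python
import numpy as np

def fuse_cell(counts: np.ndarray, p_r: float) -> int:
    """counts[c] is the attribute observation times n_c of this grid in
    category c; returns the target attribute category, or 0 when the maximum
    class probability P_c = n_c / n_i falls below the preset threshold P_r."""
    n_i = counts.sum()
    if n_i == 0:
        return 0
    probs = counts / n_i
    best = int(np.argmax(probs))
    return best if probs[best] >= p_r else 0
```

With the example above, where P_1 < P_3 < P_4 < P_2, the sketch returns the dashed-line label exactly when P_2 ≥ P_r.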
In some of these embodiments, after generating the target global lane line map based on the target attribute categories of all the grids in the initial global lane line map, the method may further comprise the steps of:
Step 1: acquiring the distribution parameters of the target global lane line map.
Step 2: updating the target global lane line map based on the distribution parameters, the current attribute observation times of the lane lines in the target global lane line map in each attribute category, and the current total observation times of the lane lines in the target global lane line map.
For example, after the target global lane line map is generated, it may be deployed in an actual vehicle for application, and it may also be updated during application. Specifically, the method may further include: acquiring the distribution parameters of the target global lane line map, which may be preset; and then, during application of the target global lane line map, updating it according to the distribution parameters, the current attribute observation times of each grid in each attribute category, and the current total observation times.
In the above implementation, the target global lane line map is updated according to its distribution parameters and the currently accumulated attribute observation times and total observation times of each grid, so that the map can be updated in real time during actual application, improving the accuracy of the global lane line map.
In some embodiments, updating the target global lane-line map based on the distribution parameters, the current number of attribute observations of the lane-lines in the target global lane-line map in each attribute category, and the current total number of observations of the lane-lines in the target global lane-line map may include the steps of:
step 1: and determining a current probability distribution function of the target global lane line map based on the distribution parameters, the current attribute observation times of the lane lines in the target global lane line map in each attribute category and the current observation total times of the lane lines in the target global lane line map.
Step 2: and updating the target global lane line map based on the current probability distribution function.
Illustratively, the Beta distribution plays an important role in statistics as the density function of the conjugate prior of the Bernoulli and binomial distributions. Therefore, in the embodiment of the present application, a Beta distribution Beta(α, β) may be obtained from the distribution parameters α and β. During actual application, the current attribute observation times α_0 of any grid in the target global lane line map in its attribute category and the current total observation times β_0 are recorded, thereby constructing the current probability distribution function Beta(α + α_0, β + β_0) of the target global lane line map.
Further, the target global lane line map can be updated according to the current probability distribution function Beta(α + α_0, β + β_0).
In the above implementation, the current probability distribution function of the target global lane line map can be determined from the distribution parameters together with the current attribute observation times α_0 and the current total observation times β_0 of any grid, which quantifies the lane line attribute information; updating the target global lane line map according to this function realizes the real-time maintenance and updating of the map.
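A minimal sketch of this update, using the mean of the updated Beta distribution as the maintained per-grid confidence (as the embodiment of fig. 4 below also does); variable names are illustrative:

```python
def beta_posterior_mean(alpha: float, beta: float, a0: int, b0: int) -> float:
    """Mean of the updated distribution Beta(alpha + a0, beta + b0), where a0
    is the current attribute observation times of a grid in its category and
    b0 the current total observation times."""
    a = alpha + a0
    b = beta + b0
    return a / (a + b)
```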
A specific example of the lane line map generation method is also provided in this embodiment. Fig. 4 is a flowchart of this example of the lane line map generating method according to the embodiments of the present application; as shown in fig. 4, the flow includes the following steps:
step S401, obtaining a lane line mask image based on an original image acquired by the vehicle-mounted camera.
Illustratively, an original image is obtained by the vehicle-mounted monocular camera, and a mask image including lane line attribute information is obtained from the texture information of the original image through a deep learning method, completing dense pixel-level segmentation.
In recent years, Transformer-based deep learning algorithms have played an increasingly important role in visual tasks; in the embodiment of the present application, the segmentation result of the original image is obtained using a vision-based Transformer deep learning algorithm. The lane line attribute information in the mask image obtained by image segmentation comprises the four categories of solid line, dashed line, stop line, and other.
Step S402, determining a grid map based on the mask image.
A local grid map is constructed with the preset grid map size and resolution.
Specifically, in the embodiment of the present application, the mask image of the original image needs to be converted into a grid map under the bird's-eye view, and the corresponding local grid map is generated under the current vehicle body coordinate system with the preset grid map size and resolution.
Step S403, filling lane line attribute information in the grid map based on the mask image and the internal and external parameters of the vehicle-mounted camera to obtain a local lane line map.
Specifically, the center of each grid in the constructed local grid map is projected onto the imaging plane according to the internal and external parameters of the vehicle-mounted camera, so that an undistorted map in the same plane as the mask image is obtained.
Further, the mapping relation between the undistorted image and the original image is obtained. Specifically, the vehicle-mounted camera can be calibrated using a checkerboard to obtain its distortion parameters, which may be (k_1, k_2, p_1, p_2, k_3), where (k_1, k_2, k_3) are radial distortion parameters and (p_1, p_2) are tangential distortion parameters. The undistorted position corresponding to each position in the original image is then determined from the distortion parameters, thereby determining the mapping relation between the undistorted image and the original image.
Further, the undistorted map in the same plane as the mask image is position-transformed through the mapping relation between the undistorted image and the original image, so that a distorted map in the same plane as the mask image is obtained; its positions correspond one-to-one with those of the mask image, and the distorted map itself is obtained by position conversion from the grid map.
The attribute information of the lane lines in the mask image can thus be filled into the grid map, so that a local lane line map including the lane line information is obtained.
Step S404, a global lane line map is constructed based on a plurality of local lane line maps in different postures of the vehicle-mounted camera, and lane line attribute information of each grid in the global lane line map is determined.
Specifically, according to the size of the vehicle's working environment, the length of the global lane line map is set to L, its width to W, and its grid resolution to δa, and the global lane line map is constructed from a plurality of local lane line maps captured under different poses of the vehicle-mounted camera.
Further, the attribute observation times and the total observation times of each grid in each attribute category in the global lane line map are counted; the ratio of the attribute observation times in each category to the total observation times is taken as the class probability of that category, and the maximum class probability is retained.
Further, the maximum class probability is compared with the preset probability threshold; if the maximum class probability is greater than or equal to the threshold, the attribute category corresponding to it is determined as the target attribute category of the grid, thereby determining the lane line attribute information of each grid in the global lane line map.
The obtained global lane line map is then applied to the vehicle.
Step S405, updating the global lane line map based on the distribution parameters, the observation times of each grid in the global lane line map in each attribute category and the total observation times.
During application of the global lane line map, the attribute observation times and the total observation times of each grid in the global lane line map, accumulated in real time, are recorded, and the global lane line map is updated in combination with the preset distribution parameters.
Specifically, given parameters α and β, a distribution Beta(α, β) is obtained. During the update process, the attribute observation times of the current lane line grid and the total observation times are recorded as α_0 and β_0, respectively, yielding a new distribution Beta(α + α_0, β + β_0). On this basis, the attribute of the corresponding grid can be updated and maintained by calculating the mean of the new Beta distribution.
In the above implementation, a lane line mask image is obtained from the original image by a deep learning method; a local grid map is then constructed to obtain the lane line information under the bird's-eye view; a lane line statistics grid map is built by combining the real-time pose information of the current camera, the attribute of each grid is determined according to a preset probability threshold, and the global lane line map is generated; finally, during actual application, the global lane line map is updated and maintained by introducing the Beta distribution. Fig. 5 is a schematic diagram of a lane line map in a campus environment according to an embodiment of the present application.
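Tying the steps of fig. 4 together, a per-frame driver might look like the sketch below; it relies on the helper functions sketched earlier, a hypothetical segment() network call, and an assumed 40 m x 20 m local map, and is not the patent's reference implementation:

```python
def process_frame(image, rvec, tvec, K, dist, counts, r0, c0, res_m=0.1):
    """One camera frame becomes one observation in the global count tensor;
    the steps mirror S401-S404, while the Beta update of S405 runs separately
    during deployment."""
    mask = logits_to_mask(segment(image))     # S401: lane line mask (segment() is hypothetical)
    grid = build_grid_map(40.0, 20.0, res_m)  # S402: empty local BEV grid (assumed extent)
    centers = cell_centers(grid, res_m)
    local = fill_grid_from_mask(grid, centers, mask, rvec, tvec, K, dist)  # S403
    return accumulate(counts, local, r0, c0)  # S404: add one observation
```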
Although the steps in the flowcharts of the above embodiments are displayed in sequence as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be executed in other orders. Moreover, at least some of the steps in these flowcharts may comprise several sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their order of execution is also not necessarily sequential, and they may be executed in turn or alternately with at least part of the other steps or of their sub-steps or stages.
This embodiment also provides a lane line map generating device, which is used to implement the above embodiments and preferred implementations; what has already been described will not be repeated. As used below, the terms "module," "unit," "sub-unit," and the like may refer to a combination of software and/or hardware that performs a predetermined function. Although the means described in the following embodiments are preferably implemented in software, implementations in hardware, or a combination of software and hardware, are also possible and contemplated.
Fig. 6 is a block diagram of a lane line map generating apparatus according to an embodiment of the present application, and as shown in fig. 6, the apparatus includes:
the acquiring module 601 is configured to acquire an original image acquired by the vehicle-mounted camera.
The determining module 602 is configured to determine, based on the original image, a corresponding mask image, where the mask image includes lane line attribute information.
A construction module 603, configured to construct a corresponding grid map based on the mask image.
And the filling module 604 is configured to fill the lane line attribute information into the grid map based on the mask image and the internal and external parameters of the vehicle-mounted camera, so as to obtain a filled grid map.
The generating module 605 is configured to generate a local lane line map based on the filled grid map.
In some of these embodiments, the building module 603 is specifically configured to:
acquiring internal and external parameters of a vehicle-mounted camera;
the mask image is converted into a grid map based on internal and external parameters of the vehicle-mounted camera.
In some of these embodiments, the filling module 604 is specifically configured to:
projecting a grid map to an imaging plane of the vehicle-mounted camera based on internal and external parameters of the vehicle-mounted camera to obtain a distortion-free map;
acquiring a mapping relation between an undistorted image and a distorted image in a vehicle-mounted camera;
obtaining a distortion map corresponding to the grid map based on the undistorted map and the mapping relation;
and filling the lane line attribute information into the grid map based on the mask image and the distortion map to obtain a filled grid map.
In some of these embodiments, the lane line attribute information includes a plurality of attribute categories, and the generation module 605 is further configured to:
acquiring a plurality of local lane line maps corresponding to the vehicle-mounted camera under different poses;
determining an initial global lane line map based on the plurality of local lane line maps;
determining the total number of observations of a target grid and attribute categories corresponding to each observation of the target grid, wherein the target grid is any grid in an initial global lane line map;
determining a target attribute category of the target grid based on the attribute observation times, the total observation times and a preset threshold value of the target grid in each attribute category;
and generating a target global lane line map based on the target attribute categories of all grids in the initial global lane line map.
In some of these embodiments, the generation module 605 is specifically configured to:
determining class probability corresponding to the target grid in each attribute class based on the attribute observation times and the total observation times of the target grid in each attribute class;
and determining the target attribute category of the target grid based on the class probabilities and the preset threshold.
In some of these embodiments, the generation module 605 is further to:
acquiring distribution parameters of a target global lane line map;
and updating the target global lane line map based on the distribution parameters, the current attribute observation times of the lane lines in the target global lane line map in each attribute category and the current observation total times of the lane lines in the target global lane line map.
In some of these embodiments, the generation module 605 is specifically configured to: determining a current probability distribution function of the target global lane line map based on the distribution parameters, the current attribute observation times of the lane lines in the target global lane line map in each attribute category and the current observation total times of the lane lines in the target global lane line map;
and updating the target global lane line map based on the current probability distribution function.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. Fig. 7 is a schematic diagram of an internal structure of a computer device according to an embodiment of the present application. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing data acquired by the vehicle-mounted camera. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a lane line map generation method.
It will be appreciated by those skilled in the art that the structure shown in fig. 7 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, there is also provided an electronic device including a memory and a processor, the memory storing a computer program, the processor implementing the steps of the method embodiments described above when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
It should be noted that, user information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may include the steps of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM) or external cache memory, and the like. By way of illustration and not limitation, RAM may take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processing units, digital signal processors, programmable logic devices, data processing logic based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any combination of them that involves no contradiction should be considered within the scope of this description.
The above embodiments represent only a few implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A lane line map generation method, characterized by comprising:
acquiring an original image acquired by a vehicle-mounted camera;
determining a corresponding mask image based on the original image, wherein the mask image comprises lane line attribute information;
constructing a corresponding grid map based on the mask image;
filling the lane line attribute information into the grid map based on the mask image and intrinsic and extrinsic parameters of the vehicle-mounted camera to obtain a filled grid map;
and generating a local lane line map based on the filled grid map.
2. The lane line map generation method according to claim 1, wherein the constructing a corresponding grid map based on the mask image comprises:
acquiring the intrinsic and extrinsic parameters of the vehicle-mounted camera;
and converting the mask image into the grid map based on the intrinsic and extrinsic parameters of the vehicle-mounted camera.
3. The lane line map generation method according to claim 1, wherein the filling the lane line attribute information into the grid map based on the mask image and the intrinsic and extrinsic parameters of the vehicle-mounted camera to obtain a filled grid map comprises:
projecting the grid map onto the imaging plane of the vehicle-mounted camera based on the intrinsic and extrinsic parameters of the vehicle-mounted camera to obtain an undistorted map;
acquiring a mapping relation between undistorted images and distorted images of the vehicle-mounted camera;
obtaining a distorted map corresponding to the grid map based on the undistorted map and the mapping relation;
and filling the lane line attribute information into the grid map based on the mask image and the distorted map to obtain the filled grid map.
4. The lane line map generation method according to claim 1, wherein the lane line attribute information includes a plurality of attribute categories, and after the generating a local lane line map based on the filled grid map, the method further comprises:
acquiring a plurality of local lane line maps corresponding to the vehicle-mounted camera in different poses;
determining an initial global lane line map based on the plurality of local lane line maps;
determining the total number of observations of a target grid and the attribute category corresponding to each observation of the target grid, wherein the target grid is any grid in the initial global lane line map;
determining a target attribute category of the target grid based on the number of attribute observations of the target grid in each attribute category, the total number of observations, and a preset threshold;
and generating a target global lane line map based on the target attribute categories of all grids in the initial global lane line map.
5. The lane line map generation method according to claim 4, wherein the determining a target attribute category of the target grid based on the number of attribute observations of the target grid in each attribute category, the total number of observations, and a preset threshold comprises:
determining, for each attribute category, a category probability of the target grid based on the number of attribute observations of the target grid in that category and the total number of observations;
and determining the target attribute category of the target grid based on the category probabilities and the preset threshold.
6. The lane line map generation method according to claim 4, wherein after the generating a target global lane line map based on the target attribute categories of all grids in the initial global lane line map, the method further comprises:
acquiring distribution parameters of the target global lane line map;
and updating the target global lane line map based on the distribution parameters, the current number of attribute observations of the lane lines in the target global lane line map in each attribute category, and the current total number of observations of the lane lines in the target global lane line map.
7. The lane line map generation method according to claim 6, wherein the updating the target global lane line map based on the distribution parameters, the current number of attribute observations of the lane lines in the target global lane line map in each attribute category, and the current total number of observations of the lane lines in the target global lane line map comprises:
determining a current probability distribution function of the target global lane line map based on the distribution parameters, the current number of attribute observations of the lane lines in the target global lane line map in each attribute category, and the current total number of observations of the lane lines in the target global lane line map;
and updating the target global lane line map based on the current probability distribution function.
8. A lane line map generation apparatus, comprising:
the acquisition module is used for acquiring an original image acquired by the vehicle-mounted camera;
the determining module is used for determining a corresponding mask image based on the original image, wherein the mask image comprises lane line attribute information;
the construction module is used for constructing a corresponding grid map based on the mask image;
the filling module is used for filling the lane line attribute information into the grid map based on the mask image and intrinsic and extrinsic parameters of the vehicle-mounted camera to obtain a filled grid map;
and the generation module is used for generating a local lane line map based on the filled grid map.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the lane line map generation method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the lane line map generation method of any one of claims 1 to 7.
CN202310444818.1A 2023-04-24 2023-04-24 Lane line map generation method, device, electronic device and storage medium Active CN116168173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310444818.1A CN116168173B (en) 2023-04-24 2023-04-24 Lane line map generation method, device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310444818.1A CN116168173B (en) 2023-04-24 2023-04-24 Lane line map generation method, device, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN116168173A true CN116168173A (en) 2023-05-26
CN116168173B CN116168173B (en) 2023-07-18

Family

ID=86420366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310444818.1A Active CN116168173B (en) 2023-04-24 2023-04-24 Lane line map generation method, device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN116168173B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108052910A (en) * 2017-12-19 2018-05-18 深圳市保千里电子有限公司 A kind of automatic adjusting method, device and the storage medium of vehicle panoramic imaging system
CN109461126A (en) * 2018-10-16 2019-03-12 重庆金山医疗器械有限公司 A kind of image distortion correction method and system
US20220276655A1 (en) * 2019-08-21 2022-09-01 Sony Group Corporation Information processing device, information processing method, and program
US20210347378A1 (en) * 2020-05-11 2021-11-11 Amirhosein Nabatchian Method and system for generating an importance occupancy grid map
CN111683203A (en) * 2020-06-12 2020-09-18 达闼机器人有限公司 Grid map generation method and device and computer readable storage medium
CN111680673A (en) * 2020-08-14 2020-09-18 北京欣奕华科技有限公司 Method, device and equipment for detecting dynamic object in grid map
CN112862839A (en) * 2021-02-24 2021-05-28 清华大学 Method and system for enhancing robustness of semantic segmentation of map elements
CN114663852A (en) * 2022-02-21 2022-06-24 北京箩筐时空数据技术有限公司 Method and device for constructing lane line graph, electronic equipment and readable storage medium
CN114625822A (en) * 2022-03-02 2022-06-14 阿波罗智联(北京)科技有限公司 High-precision map updating method and device and electronic equipment
CN115187944A (en) * 2022-06-29 2022-10-14 合众新能源汽车有限公司 Lane line detection method and device
CN115937449A (en) * 2022-12-05 2023-04-07 北京百度网讯科技有限公司 High-precision map generation method and device, electronic equipment and storage medium
CN115937825A (en) * 2023-01-06 2023-04-07 之江实验室 Robust lane line generation method and device under BEV (beam-based attitude vector) of on-line pitch angle estimation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEI-BIN YU et al., "An Efficient Algorithm for Depression Filling and Flat-Surface Processing in Raster DEMs", IEEE Geoscience and Remote Sensing Letters, pages 2198-2202 *
GENG Yanlong, "Design and Implementation of a LiDAR-Based Thermal Imaging Inspection Robot", Electronic Journal of Master's Theses, pages 1-77 *

Also Published As

Publication number Publication date
CN116168173B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
CN108335353B (en) Three-dimensional reconstruction method, device and system of dynamic scene, server and medium
US9087408B2 (en) Systems and methods for generating depthmaps
CN108564527B (en) Panoramic image content completion and restoration method and device based on neural network
US20190340746A1 (en) Stationary object detecting method, apparatus and electronic device
CN109191554B (en) Super-resolution image reconstruction method, device, terminal and storage medium
WO2022267693A1 (en) System and method for super-resolution image processing in remote sensing
CN111161398B (en) Image generation method, device, equipment and storage medium
CN114066999A (en) Target positioning system and method based on three-dimensional modeling
CN113052761B (en) Laser point cloud map fusion method, device and computer readable storage medium
CN116071404A (en) Image registration method, device, computer equipment and storage medium
CN108074250B (en) Matching cost calculation method and device
CN116168173B (en) Lane line map generation method, device, electronic device and storage medium
CN115909255B (en) Image generation and image segmentation methods, devices, equipment, vehicle-mounted terminal and medium
CN117274992A (en) Method, device, equipment and storage medium for constructing plant three-dimensional segmentation model
Alaba et al. Multi-sensor fusion 3D object detection for autonomous driving
CN116758214A (en) Three-dimensional modeling method and device for remote sensing image, electronic equipment and storage medium
US20190156465A1 (en) Converting Imagery and Charts to Polar Projection
CN113744361B (en) Three-dimensional high-precision map construction method and device based on three-dimensional vision
CN112652056B (en) 3D information display method and device
CN114359891A (en) Three-dimensional vehicle detection method, system, device and medium
CN112419459A (en) Method, apparatus, computer device and storage medium for baked model AO mapping
JP7388595B2 (en) Image expansion device, control method, and program
CN118135484B (en) Target detection method and device and related equipment
US11776148B1 (en) Multi-view height estimation from satellite images
CN109269477A (en) A kind of vision positioning method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant