CN112308904A - Vision-based map construction method and device and vehicle-mounted terminal - Google Patents


Info

Publication number
CN112308904A
CN112308904A
Authority
CN
China
Prior art keywords
map
road image
edge feature
coordinate system
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910687295.7A
Other languages
Chinese (zh)
Inventor
徐抗
李天威
刘一龙
童哲航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chusudu Technology Co ltd
Original Assignee
Beijing Chusudu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chusudu Technology Co ltd filed Critical Beijing Chusudu Technology Co ltd
Priority to CN201910687295.7A
Publication of CN112308904A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses a vision-based map construction method and device and a vehicle-mounted terminal. The method comprises the following steps: acquiring a road image captured by a camera device; extracting an edge feature map of the road image according to a preset edge intensity; determining a positioning pose corresponding to the road image from data collected by a motion detection device, where the positioning pose is a pose in the world coordinate system of the map; determining the position information of each point of the edge feature map in the world coordinate system based on a three-dimensional reconstruction algorithm and the positioning pose; and selecting map points from the points of the edge feature map according to a preset point density, and adding the position information of each map point in the world coordinate system to the map. Applying the scheme provided by the embodiment of the invention increases the amount of effective information available for visual positioning in the map.

Description

Vision-based map construction method and device and vehicle-mounted terminal
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a vision-based map building method and device and a vehicle-mounted terminal.
Background
In the technical field of intelligent driving, vehicle positioning is an important component. Generally, the vehicle pose can be determined from a satellite positioning system while the vehicle is traveling. However, when the vehicle travels into a scene where the satellite signal is weak or absent, positioning may instead be performed visually in order to determine the positioning pose of the vehicle accurately.
Vision-based localization relies on a pre-constructed high-precision map. Most mapping schemes for high-precision maps model common markers on roads, which generally include lane lines on the ground, ground sign lines, traffic signs, light poles, and the like. When the map is built, semantic information of the markers is extracted from the road image, and the positions of the semantic information are added to the map. When the vehicle is positioned, the semantic information extracted from the road image acquired by the camera device on the vehicle is matched against the semantic information in the high-precision map.
However, this mapping scheme depends heavily on the markers on the road. When markers in the scene are rare or absent, the high-precision map can hardly provide sufficient effective information for visual positioning; that is, the amount of effective information for visual positioning in the high-precision map is insufficient.
Disclosure of Invention
The invention provides a vision-based map construction method and device and a vehicle-mounted terminal, which increase the amount of effective information available for visual positioning in a map. The specific technical solutions are as follows.
In a first aspect, an embodiment of the present invention discloses a visual-based mapping method, including:
acquiring a road image acquired by camera equipment;
extracting an edge feature map of the road image according to preset edge intensity;
determining a positioning pose corresponding to the road image according to data collected by the motion detection equipment; the positioning pose is a pose in a world coordinate system of the map;
determining the position information of each point in the edge feature map in the world coordinate system based on a three-dimensional reconstruction algorithm and the positioning pose;
and selecting map points from all points of the edge feature map according to a preset point density, and adding the position information of all the map points in the world coordinate system to the map.
Optionally, the step of extracting an edge feature map of the road image according to the preset edge intensity includes:
extracting an edge feature map of the road image based on an edge feature extraction model; the edge feature extraction model is obtained by training according to a sample road image and an edge feature map labeled according to preset edge strength; the edge feature extraction model associates the road image with a corresponding edge feature map.
Optionally, the step of determining position information of each point in the edge feature map in the world coordinate system based on the three-dimensional reconstruction algorithm and the positioning pose includes:
determining the three-dimensional coordinates of each point in the edge feature map in a camera coordinate system based on a three-dimensional reconstruction algorithm; the camera coordinate system is a three-dimensional coordinate system corresponding to the camera equipment;
and determining a transformation matrix between the camera coordinate system and the world coordinate system according to the positioning pose, and transforming the three-dimensional coordinate according to the transformation matrix to obtain the position information of each point in the edge feature map in the world coordinate system.
Optionally, the step of selecting a map point from each point of the edge feature map according to a preset point density includes:
constructing an octree cube grid in the map according to an octree algorithm with a preset voxel size;
and selecting one point from the points of the edge feature map in the octree cube grids for each octree cube grid to serve as a map point corresponding to the octree cube grids.
Optionally, after adding the position information of each map point in the world coordinate system to the map, the method further includes:
matching the road image with each historical road image, and when the matching is successful, determining the successfully matched historical road image as a loop detection image corresponding to the road image;
correcting the positioning pose of each road image between the loop detection image and the road image to obtain each corrected positioning pose; correcting the position information of the corresponding map points according to each corrected positioning pose;
after correcting the position information of the corresponding map points, for each octree cubic grid in the map that contains more than one map point, deleting surplus map points from the grid so that exactly one map point remains in it.
Optionally, the step of matching the road image with each historical road image includes:
and matching the edge characteristic graph of the road image with the edge characteristic graph of each historical road image.
Optionally, the position information of each point in the world coordinate system includes: the coordinate position of the point in the world coordinate system and the normal vector information of the point.
In a second aspect, an embodiment of the present invention discloses a vision-based map construction device, including:
an acquisition module configured to acquire a road image acquired by a camera device;
the extraction module is configured to extract an edge feature map of the road image according to preset edge intensity;
a positioning module configured to determine a positioning pose corresponding to the road image from data collected by a motion detection device; the positioning pose is a pose in a world coordinate system of the map;
a determining module configured to determine position information of each point in the edge feature map in the world coordinate system based on a three-dimensional reconstruction algorithm and the positioning pose;
and the adding module is configured to select map points from various points of the edge feature map according to preset point density and add the position information of the various map points in the world coordinate system to the map.
Optionally, the extracting module is specifically configured to:
extracting an edge feature map of the road image based on an edge feature extraction model; the edge feature extraction model is obtained by training according to a sample road image and an edge feature map labeled according to preset edge strength; the edge feature extraction model associates the road image with a corresponding edge feature map.
Optionally, the determining module is specifically configured to:
determining the three-dimensional coordinates of each point in the edge feature map in a camera coordinate system based on a three-dimensional reconstruction algorithm; the camera coordinate system is a three-dimensional coordinate system corresponding to the camera equipment;
and determining a transformation matrix between the camera coordinate system and the world coordinate system according to the positioning pose, and transforming the three-dimensional coordinate according to the transformation matrix to obtain the position information of each point in the edge feature map in the world coordinate system.
Optionally, the adding module is specifically configured to:
constructing an octree cube grid in the map according to an octree algorithm with a preset voxel size;
and selecting one point from the points of the edge feature map in the octree cube grids for each octree cube grid to serve as a map point corresponding to the octree cube grids.
Optionally, the apparatus further comprises:
a matching module configured to match the road image with each historical road image after adding position information of each map point in a world coordinate system to the map, and when matching is successful, determine the successfully matched historical road image as a loop detection image corresponding to the road image;
a correction module configured to correct the positioning pose of each road image between the loop detection image and the road image, obtaining corrected positioning poses, and to correct the position information of the corresponding map points according to each corrected positioning pose;
the deleting module is configured to delete the map points in the octree cube grids when more than one map point exists in the octree cube grids so that the rest map points in the octree cube grids are one for each octree cube grid in the map after the position information of the corresponding map point is corrected.
Optionally, when the matching module matches the road image with each historical road image, the matching module includes:
and matching the edge feature map of the road image with the edge feature maps of the historical road images.
In a third aspect, an embodiment of the present invention discloses a vehicle-mounted terminal, including: a processor, a camera device and a motion detection device; the processor includes:
the acquisition module is used for acquiring a road image acquired by the camera equipment;
the extraction module is used for extracting an edge feature map of the road image according to preset edge intensity;
the positioning module is used for determining a positioning pose corresponding to the road image according to the data acquired by the motion detection equipment; the positioning pose is a pose in a world coordinate system of the map;
the determining module is used for determining the position information of each point in the edge feature map in the world coordinate system based on a three-dimensional reconstruction algorithm and the positioning pose;
and the adding module is used for selecting map points from all the points of the edge feature map according to the preset point density and adding the position information of all the map points in the world coordinate system to the map.
Optionally, the extracting module is specifically configured to:
extracting an edge feature map of the road image based on an edge feature extraction model; the edge feature extraction model is obtained by training according to a sample road image and an edge feature map labeled according to preset edge strength; the edge feature extraction model associates the road image with a corresponding edge feature map.
Optionally, the determining module is specifically configured to:
determining the three-dimensional coordinates of each point in the edge feature map in a camera coordinate system based on a three-dimensional reconstruction algorithm; the camera coordinate system is a three-dimensional coordinate system corresponding to the camera equipment;
and determining a transformation matrix between the camera coordinate system and the world coordinate system according to the positioning pose, and transforming the three-dimensional coordinate according to the transformation matrix to obtain the position information of each point in the edge feature map in the world coordinate system.
Optionally, the adding module is specifically configured to:
constructing an octree cube grid in the map according to an octree algorithm with a preset voxel size;
and selecting one point from the points of the edge feature map in the octree cube grids for each octree cube grid to serve as a map point corresponding to the octree cube grids.
Optionally, the processor further includes:
the matching module is used for matching the road image with each historical road image after the position information of each map point in the world coordinate system is added to the map, and, when the matching is successful, determining the successfully matched historical road image as the loop detection image corresponding to the road image;
the correction module is used for correcting the positioning pose of each road image between the loop detection image and the road image to obtain each corrected positioning pose; correcting the position information of the corresponding map points according to each corrected positioning pose;
and the deleting module is used for, after the position information of the corresponding map points is corrected, deleting surplus map points from each octree cubic grid in the map that contains more than one map point, so that exactly one map point remains in each such grid.
Optionally, when the matching module matches the road image with each historical road image, the matching module includes:
and matching the edge feature map of the road image with the edge feature maps of the historical road images.
As can be seen from the above, the map construction method and device and the vehicle-mounted terminal provided in the embodiments of the present invention extract the edge feature map of the road image according to a preset edge strength, determine the positioning pose corresponding to the road image from the data acquired by the motion detection device, determine the position of each point of the edge feature map in the world coordinate system based on a three-dimensional reconstruction algorithm and the positioning pose, select map points from the points of the edge feature map according to a preset point density, and add the position of each map point in the world coordinate system to the map. Because the map points are selected from the edge feature map of the road image, which captures the structural features of the image, the resulting map information is richer and more noise-resistant; even when markers in the scene are sparse or absent, enough effective map points can still be extracted. Therefore, the embodiments of the present invention can increase the amount of effective information for visual positioning in the map. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
The innovation points of the embodiment of the invention comprise:
1. and extracting a structured edge feature map from the road image, and extracting point cloud data from the edge feature map for map construction, so that denser map information can be obtained, and the effective information content in the map is increased.
2. And maintaining the density of the map points by adopting the octree, so that the selected map points can meet the preset point density requirement.
3. According to the three-dimensional reconstruction, the coordinates of each point in the edge feature map in the camera coordinate system are determined, the coordinates in the camera coordinate system are converted into the world coordinate system, the position of each point in the edge feature map in the world coordinate system is obtained, and the position information of each point in the edge feature map can be determined more accurately.
4. The position information of the map points is corrected through loop detection, repeated map points are deleted, accumulated errors can be eliminated, and the accuracy of the map point data in the map is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention. For a person skilled in the art, without inventive effort, further figures can be obtained from these figures.
FIG. 1 is a schematic flow chart of a vision-based mapping method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a vision-based mapping method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a vision-based mapping apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a vision-based map building method, a vision-based map building device and a vehicle-mounted terminal, which can increase the effective information amount for visual positioning in a map. The following provides a detailed description of embodiments of the invention.
Fig. 1 is a schematic flow chart of a vision-based mapping method according to an embodiment of the present invention. The method is applied to an electronic device. The electronic device may be a general computer, a server, or an intelligent terminal device, or may be a vehicle-mounted computer or a vehicle-mounted terminal such as an Industrial Personal Computer (IPC). In this embodiment, the vehicle-mounted terminal may be installed in a vehicle, where vehicle refers to an intelligent vehicle. The method specifically comprises the following steps.
S110: and acquiring a road image acquired by the camera equipment.
Wherein, the camera device can collect road images according to a preset frequency. The road image may comprise image data of road markers or any other object within the image acquisition range of the camera device.
In this embodiment, the location where the road image is located may be outdoors or may be a parking lot. The camera device may be provided in a vehicle, or in a mobile smart product such as a robot. In the following, the camera is provided in the vehicle as an example, but this does not mean that the present embodiment is limited to the vehicle.
The road image may be an image of the surroundings of the vehicle captured by the camera device while the vehicle is traveling on various roads. The road may be any place where the vehicle can travel, such as an urban road, a rural road, a mountain road, a parking lot road, and the like, and the image acquired during entering the parking space may also be included in the road image.
S120: and extracting an edge feature map of the road image according to the preset edge intensity.
The edge in the image refers to a set of pixel points with step changes in the gray levels of surrounding pixels. The gray values of the pixels at the two sides of the edge point have obvious difference. The edge intensity may be understood as the magnitude of the edge point gradient. The edge feature map may be understood as data containing edge features in accordance with the road image size. The edge feature map includes position information of edge lines in the road image. The edge lines are composed of edge points.
The extracted edge feature map is position information expressed in an image coordinate system in which the road image is located. The edge points in the edge feature map are positions in the image coordinate system.
Extracting the edge feature map of the road image can be understood as performing feature extraction on the road image to obtain the edge feature map. Specifically, the preset edge intensity may be used as a global threshold, and edges whose intensity exceeds this threshold are extracted from the road image. Alternatively, edges may be extracted by their strength relative to a local region: for example, for a pillar with a ridge line, the gradient of the ridge line may not be large over the whole road image, but within the pillar region the ridge line has large relative edge strength, so this edge can also be extracted. The two schemes can also be combined.
The edge feature map extracted according to the preset edge strength embodies the structural features of the road image. Specifically, a Canny operator, a Sobel operator, or a LoG (Laplacian of Gaussian) operator may be adopted to extract the edge feature map of the road image.
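As an illustrative sketch (not the extractor used in this embodiment), keeping only pixels whose edge strength exceeds the preset threshold can be shown with a hand-rolled Sobel operator; the function name and the toy image below are invented for this example, and in practice a library routine such as OpenCV's `cv2.Canny` would normally be used:

```python
import numpy as np

def sobel_edge_map(image, strength_threshold):
    """Binary edge feature map: keep pixels whose gradient magnitude
    (edge strength) exceeds the preset threshold."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    padded = np.pad(image.astype(float), 1, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    magnitude = np.hypot(gx, gy)           # per-pixel edge strength
    return magnitude > strength_threshold  # binary edge feature map

# A vertical step edge: left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = sobel_edge_map(img, strength_threshold=100.0)
```

Only the two pixel columns straddling the brightness step exceed the threshold; uniform regions produce zero gradient and are excluded from the edge feature map.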
In another embodiment, the step may further include: and extracting an edge feature map of the road image based on the edge feature extraction model.
The edge feature extraction model is obtained by training according to the sample road image and an edge feature graph labeled according to preset edge strength. The edge feature extraction model associates the road image with a corresponding edge feature map.
Extracting the edge feature map of the road image based on the edge feature extraction model may include: inputting the road image into the edge feature extraction model and obtaining the edge feature map of the road image output by the model.
In the training stage, a plurality of sample road images and labeled edge features can be obtained, and the sample road images are input into an edge feature extraction model; the edge feature extraction model extracts feature vectors of the sample road image according to the model parameters and regresses the feature vectors to obtain reference edge features of the sample road image; comparing the reference edge features with the marked edge features to obtain difference quantities; when the difference is larger than a preset difference threshold value, returning to the step of inputting the sample road image into the edge feature extraction model; and when the difference is not greater than a preset difference threshold value, determining that the training of the edge feature extraction model is finished.
Wherein the marked edge features reflect a preset edge strength. That is, the edge features in the sample road image are labeled according to the preset edge strength as a standard. The edge feature extraction model obtained by training by adopting the machine learning method can extract more accurate edge features. The edge feature extraction model may also be referred to as an edge detector.
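The training loop described above (extract reference edge features, compare with the labeled edge features, repeat while the difference exceeds the preset threshold) can be illustrated schematically. The per-pixel linear "model", learning rate, and toy data below are invented stand-ins; a real edge feature extraction model would be a convolutional network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in model: one weight per pixel of a 4-pixel "image".
weights = rng.normal(size=4)

def model(image_vec):
    """Return 'reference edge features' for the sample road image."""
    return weights * image_vec

sample_image = np.array([1.0, 2.0, 3.0, 4.0])
labeled_edges = np.array([0.0, 1.0, 1.0, 0.0])  # labeled per preset edge strength

diff_threshold = 1e-3  # preset difference threshold
lr = 0.05
for step in range(10_000):
    reference = model(sample_image)
    diff = np.abs(reference - labeled_edges).mean()  # difference quantity
    if diff <= diff_threshold:
        break  # training of the model is finished
    # Otherwise update the model parameters and try again (MSE gradient step).
    weights -= lr * 2.0 * (reference - labeled_edges) * sample_image / 4.0
```

The loop mirrors the comparison step in the paragraph above: training stops only once the difference between reference and labeled edge features drops below the preset threshold.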
S130: and determining a positioning pose corresponding to the road image according to the data acquired by the motion detection equipment.
The positioning pose is the pose in the world coordinate system of the map. The motion detection device may include sensors such as an Inertial Measurement Unit (IMU) and/or a wheel speed meter. The acquisition time of the data collected by the motion detection device is associated with that of the road image collected by the camera device; for example, the two acquisition times may be identical or differ only slightly.
Determining the positioning pose corresponding to the road image from data acquired by the motion detection device may specifically include: acquiring the previous positioning pose, and determining the positioning pose corresponding to the road image from the previous positioning pose and the data acquired by the motion detection device. The previous positioning pose may be the positioning pose determined at the previous moment.
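A minimal dead-reckoning sketch of this update, assuming a simplified planar motion model with a wheel-speed measurement and an IMU yaw rate (the embodiment does not prescribe a particular motion model, so this is only illustrative):

```python
import math

def propagate_pose(prev_pose, v, yaw_rate, dt):
    """Advance the previous positioning pose (x, y, yaw) using wheel-speed
    (v) and IMU yaw-rate data collected over the interval dt."""
    x, y, yaw = prev_pose
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += yaw_rate * dt
    return (x, y, yaw)

# Previous positioning pose at the origin, driving straight at 10 m/s.
pose = propagate_pose((0.0, 0.0, 0.0), v=10.0, yaw_rate=0.0, dt=0.1)
```

Each new positioning pose is obtained from the previous one plus the motion increment, which is why accumulated errors arise and why the GPS-assisted variant below is mentioned.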
In another embodiment, the step may further include: and determining the positioning pose of the vehicle corresponding to the road image according to the data acquired by the motion detection equipment and the matching result of the road characteristics in the road image and the road characteristics in the preset map. In this embodiment, the road features in the road image are matched with the road features in the preset map, which corresponds to another vision-based positioning method.
In another embodiment, the motion detection device may further include a Global Positioning System (GPS). When the motion detection equipment comprises the GPS, the accumulated errors in the positioning process according to the IMU, the wheel speed meter and the like can be eliminated as far as possible, and the accuracy of the positioning pose is improved.
The positioning pose determined in this step is a positioning pose for associating the feature points in the edge feature map with the position information in the map, and is a relatively accurate positioning pose of the vehicle.
S140: and determining the position information of each point in the edge characteristic graph in the world coordinate system based on the three-dimensional reconstruction algorithm and the positioning pose.
The position information of each point in the world coordinate system comprises the coordinate position of the point and its normal vector information. The normal vector of a point is the normal vector of the plane in which the point lies. In the world coordinate system, the coordinate position of a point can be represented by three coordinates (a, b, c), and its normal vector by three parameters (A, B, C). The position information of each point therefore contains 6 dimensions; this representation is equivalent to representing each point by the plane A(x - a) + B(y - b) + C(z - c) = 0.
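The two ingredients of this step, transforming points from the camera coordinate system into the world coordinate system with the pose-derived transformation matrix and storing each map point as a coordinate plus a normal vector, can be sketched as follows; the 4x4 pose matrix `T_wc` is an invented example value:

```python
import numpy as np

def camera_to_world(T_wc, p_cam):
    """Transform a 3-D point from camera coordinates to world coordinates
    using a 4x4 homogeneous transformation matrix derived from the pose."""
    p_h = np.append(p_cam, 1.0)  # homogeneous coordinates
    return (T_wc @ p_h)[:3]

def on_plane(point, coord, normal, tol=1e-9):
    """Check A(x - a) + B(y - b) + C(z - c) = 0 for a map point stored as
    coordinate (a, b, c) plus normal vector (A, B, C)."""
    return abs(np.dot(normal, point - coord)) <= tol

# Example pose: camera at (1, 0, 0) in the world, rotated 90 deg about z.
T_wc = np.array([[0.0, -1.0, 0.0, 1.0],
                 [1.0,  0.0, 0.0, 0.0],
                 [0.0,  0.0, 1.0, 0.0],
                 [0.0,  0.0, 0.0, 1.0]])
p_world = camera_to_world(T_wc, np.array([1.0, 0.0, 2.0]))  # -> (1, 1, 2)
```

The 6-dimensional map point is then the pair (coordinate, normal), and `on_plane` expresses the plane equation from the paragraph above.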
S150: map points are selected from all points of the edge feature map according to the preset point density, and the position information of all the map points in the world coordinate system is added to the map.
Since the number of points in the edge feature map is very large, using all of them as map points would complicate processing without effectively improving accuracy. To reduce processing complexity, this step selects map points from the points of the edge feature map according to a preset point density for map construction.
The position information of each map point in the world coordinate system is added to the map, and it can be understood that the correspondence relationship between each map point and the position information is added to the map.
As can be seen from the above, in the present embodiment, location information of map points is added to a map according to selected map points in an edge feature map obtained from a road image, the edge feature map of the road image can extract structural features in the image, the structural features are richer, and the map points can still have a stable appearance, i.e., are more robust, under the conditions of a long time, illumination change, seasonal change, or the like, and can be extracted effectively even if markers in a scene are sparse and even no markers exist. Therefore, the present embodiment can increase the effective amount of information for visual positioning in the map.
A further difference between the present embodiment and conventional feature detection is that conventional approaches must explicitly detect semantic features of man-made objects such as lane lines, sidewalks, traffic signs, and street lamp posts. If these semantic features are absent from the scene, mapping may fail, or new feature classes must be defined and detected. The edge features extracted in this embodiment are more general, covering both man-made and natural objects in the scene, and are therefore more adaptable.
In another embodiment of the present invention, based on the embodiment shown in fig. 1, step S150 of selecting map points from the points of the edge feature map according to the preset point density includes the following steps 1a and 2a.
Step 1a: constructing octree cube grids in the map according to an octree algorithm with a preset voxel size.
The preset voxel size may be, but is not limited to, 2 cm. When the preset voxel size is 2 cm, each octree cube grid constructed in the map measures 2 cm × 2 cm × 2 cm.
In this embodiment, an octree algorithm may be used to maintain octree cube grids in the map, where each octree cube grid corresponds to a map point.
Step 2a: for each octree cube grid, selecting one of the points of the edge feature map that fall within the grid as the map point corresponding to that grid.
In a specific implementation, for each octree cube grid in the map, it is judged whether any points of the edge feature map fall within the grid. If so, one of those points is selected as the map point corresponding to the grid; if not, the next octree cube grid is examined.
In summary, the density of the map points in the map is maintained by using the octree algorithm, so that the selected map points can meet the preset point density requirement.
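The selection in steps 1a and 2a can be sketched with a flat voxel hash standing in for a true octree (a real octree subdivides space hierarchically; this simplification keeps only the "one point per cell" behaviour). The 2 cm default mirrors the example voxel size above; everything else is illustrative.

```python
import numpy as np

def select_map_points(points, voxel_size=0.02):
    """Keep at most one point per voxel cell (first point seen wins).

    A flat stand-in for the octree grid described in steps 1a/2a.
    """
    selected = {}
    for p in points:
        # Integer cell index of the voxel containing the point.
        cell = tuple(np.floor(np.asarray(p[:3]) / voxel_size).astype(int))
        if cell not in selected:
            selected[cell] = p
    return list(selected.values())

pts = [(0.001, 0.0, 0.0), (0.005, 0.0, 0.0), (0.05, 0.0, 0.0)]
print(len(select_map_points(pts, 0.02)))  # first two share one 2 cm cell -> 2
```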
In another embodiment of the present invention, based on the embodiment shown in fig. 1, step S140 of determining the position information of each point in the edge feature map in the world coordinate system based on the three-dimensional reconstruction algorithm and the positioning pose includes the following steps 1b and 2b.
Step 1b: determining the three-dimensional coordinates of each point in the edge feature map in the camera coordinate system based on a three-dimensional reconstruction algorithm.
The camera coordinate system is a three-dimensional coordinate system corresponding to the camera equipment.
The three-dimensional coordinates of each point in the edge feature map in the camera coordinate system can be determined in two ways. Based on road images captured by at least two camera devices at the same moment, the three-dimensional coordinates of each pixel in the road image can be computed from the pixel correspondences between the different road images, the distance between the camera devices, and the camera parameters of the camera devices. Alternatively, based on multiple road images captured in sequence by a single camera device, the three-dimensional coordinates of points in the road image can be computed from the vehicle's motion data during the capture of those images. Once the three-dimensional coordinates of the pixels in the road image are known, the three-dimensional coordinates of each point in the edge feature map follow from the correspondence between the road image and the points in the edge feature map.
In another embodiment, a patch-based matching algorithm (PatchMatch) may also be used to determine the three-dimensional coordinates of each point in the edge feature map in the camera coordinate system, i.e., to perform three-dimensional reconstruction of each point.
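For the two-camera case described above, a minimal triangulation sketch for a rectified stereo pair follows (rectification is an assumption on my part; the patent does not specify it, and the intrinsics and baseline below are purely hypothetical numbers):

```python
# For a rectified stereo pair, the depth of a matched pixel follows from
# its disparity: Z = fx * B / d, where fx is the focal length in pixels,
# B the baseline between the two cameras, and d the horizontal disparity.
def triangulate_rectified(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a matched pixel (u, v) into the camera coordinate system."""
    z = fx * baseline / disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Hypothetical intrinsics: 700 px focal length, principal point (640, 360),
# 0.5 m baseline; a 35 px disparity then corresponds to 10 m depth.
print(triangulate_rectified(u=740, v=400, disparity=35.0,
                            fx=700.0, fy=700.0, cx=640.0, cy=360.0,
                            baseline=0.5))
```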
Step 2b: determining a transformation matrix between the camera coordinate system and the world coordinate system according to the positioning pose, and transforming the three-dimensional coordinates according to the transformation matrix to obtain the position information of each point in the edge feature map in the world coordinate system.
The positioning pose represents the pose of the vehicle in the world coordinate system. Since the relative position between the camera device mounted on the vehicle and the vehicle itself is fixed, the transformation matrix between the camera coordinate system and the world coordinate system can be determined from the positioning pose.
In summary, the present embodiment provides a specific implementation for determining the position information of each point in the edge feature map in the world coordinate system: the three-dimensional coordinates of each point are determined in the camera coordinate system and then transformed into the world coordinate system via the transformation matrix, so the position information of each point in the edge feature map can be determined accurately.
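Step 2b can be sketched as applying a single 4×4 rigid transform; `R_wc` and `t_wc` are hypothetical names for the camera-to-world rotation and translation obtained from the positioning pose and the fixed camera-to-vehicle extrinsics.

```python
import numpy as np

def camera_to_world(points_cam, R_wc, t_wc):
    """Map points from the camera frame to the world frame:
    p_world = R_wc @ p_cam + t_wc, applied via one 4x4 matrix.
    (Normal vectors would rotate by R_wc but not translate.)"""
    T = np.eye(4)
    T[:3, :3] = R_wc
    T[:3, 3] = t_wc
    pts = np.asarray(points_cam, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    return (homo @ T.T)[:, :3]

# Purely illustrative pose: a 90-degree yaw plus a translation of (10, 0, 0).
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([10.0, 0.0, 0.0])
print(camera_to_world([[1.0, 0.0, 0.0]], R, t))  # -> [[10., 1., 0.]]
```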
In another embodiment of the present invention, based on the above embodiment, after adding the position information of the map points in the world coordinate system to the map, the method may further include the following steps 1c to 3c.
Step 1c: matching the road image against each historical road image; when a match succeeds, determining the matched historical road image as the loop detection image corresponding to the road image.
Each historical road image is a road image from which map points have previously been constructed.
While the vehicle is driving, the road images collected by the camera device, the map points determined from those images, and the positioning poses corresponding to the images can be stored, and the stored road images serve as the historical road images.
Each historical road image can be a road image collected within a preset time length.
Matching the road image against each historical road image can be done directly on the images themselves. Alternatively, the edge feature map of the road image can be matched against the edge feature maps of the historical road images; when a match succeeds, the historical road image whose edge feature map matched is the loop detection image. Matching on edge feature maps improves matching efficiency.
When the matching succeeds, the vehicle is considered to have returned to a previously mapped position, i.e., the location corresponding to the road image and the location corresponding to the loop detection image are the same.
Step 2c: correcting the positioning pose of each road image between the loop detection image and the current road image to obtain corrected positioning poses, and correcting the position information of the corresponding map points according to each corrected positioning pose.
When the vehicle returns to the same place while moving, its driving track forms a closed loop. The loop detection of this embodiment corrects the vehicle poses estimated from the motion detection device. After the positioning pose corresponding to each road image is corrected, the position information of the corresponding map points can be corrected accordingly, following step S140. Because the positioning poses determined from motion detection data accumulate error, loop detection reduces that accumulated error as much as possible, improving the accuracy of the positioning poses and hence of the map points.
Step 3c: after correcting the position information of the corresponding map points, for each octree cube grid in the map, when more than one map point exists in the grid, deleting map points from the grid so that only one remains.
When deleting map points from an octree cube grid, a map point may be deleted at random, or the map point closest to the center of the grid may be kept and the others deleted.
After the position information of the corresponding map points is corrected, the change in position may leave more than one map point in a single octree cube grid. This step deletes the duplicated map points within each octree cube grid and maintains the map point density.
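The deletion rule that keeps the point nearest the grid centre can be sketched as follows, again using a flat voxel grid as an illustrative stand-in for the octree:

```python
import numpy as np

def dedup_voxels(points, voxel_size=0.02):
    """After loop-closure correction several map points may land in one
    voxel; keep only the point nearest its voxel centre (one of the two
    options described above)."""
    best = {}
    for p in points:
        p = np.asarray(p, dtype=float)
        cell = np.floor(p / voxel_size)
        centre = (cell + 0.5) * voxel_size       # geometric centre of the voxel
        d = np.linalg.norm(p - centre)
        key = tuple(cell.astype(int))
        if key not in best or d < best[key][0]:  # keep the closer point
            best[key] = (d, p)
    return [p for _, p in best.values()]

pts = [(0.004, 0.0, 0.0), (0.011, 0.0, 0.0)]  # both fall in the first 2 cm voxel
out = dedup_voxels(pts)
print(len(out), out[0])  # the point nearer the voxel centre survives
```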
In summary, in the embodiment, the positions of the map points that have been constructed are corrected through loop detection, and repeated map points are deleted, so that accumulated errors can be eliminated, and the accuracy of the map points in the map can be improved.
Fig. 2 is a schematic frame diagram of a vision-based mapping method according to an embodiment of the present invention. The framework includes a front end, a middle end, and a back end. Camera devices at the front, rear, left, and right of the vehicle capture multiple road images. The front end receives data from the motion detection device, which includes sensors common on intelligent vehicles such as an IMU or a wheel speed meter; it estimates the pose by multi-sensor fusion and outputs the positioning pose of the current road image frame to the MVS mapping module.
The middle end comprises an edge detector and an MVS (Multi-View Stereo) mapping module. After determining the edge feature map of the road image, the edge detector inputs the edge feature map into the MVS mapping module.
After obtaining the multiple road images, the positioning pose, and the edge feature maps, the MVS mapping module performs three-dimensional reconstruction of the points in the edge feature maps to obtain the coordinates and normal vectors of those points in the world coordinate system, and inputs them into the map manager.
The map manager has at least two functions. One is to maintain the density of map points using the octree algorithm, selecting map points from the information input by the MVS mapping module and adding them to the high-precision map. The other is to correct and filter points already added to the high-precision map according to the matched images output by the loop detection module.
The back end performs loop detection on the multiple road images against the historical road images and sends the detected matched images to the map manager.
Fig. 3 is a schematic structural diagram of a vision-based drawing device according to an embodiment of the present invention. The apparatus corresponds to the method embodiment shown in fig. 1. The device can be applied to electronic equipment. The device includes:
an acquisition module 310 configured to acquire a road image captured by a camera device;
an extraction module 320 configured to extract an edge feature map of the road image according to a preset edge intensity;
a positioning module 330 configured to determine a positioning pose corresponding to the road image from data collected by the motion detection device; the positioning pose is a pose in a world coordinate system of the map;
a determining module 340 configured to determine position information of each point in the edge feature map in a world coordinate system based on a three-dimensional reconstruction algorithm and the positioning pose;
and the adding module 350 is configured to select map points from the points of the edge feature map according to the preset point density, and add the position information of the map points in the world coordinate system to the map.
In another embodiment of the present invention, based on the embodiment shown in fig. 3, the extracting module 320 is specifically configured to:
extracting an edge feature map of the road image based on the edge feature extraction model; the edge feature extraction model is obtained by training according to the sample road image and an edge feature map labeled according to preset edge strength; the edge feature extraction model associates the road image with a corresponding edge feature map.
In another embodiment of the present invention, based on the embodiment shown in fig. 3, the determining module 340 is specifically configured to:
determining the three-dimensional coordinates of each point in the edge feature map in a camera coordinate system based on a three-dimensional reconstruction algorithm; the camera coordinate system is a three-dimensional coordinate system corresponding to the camera equipment;
and determining a transformation matrix between the camera coordinate system and the world coordinate system according to the positioning pose, and transforming the three-dimensional coordinates according to the transformation matrix to obtain the position information of each point in the edge feature map in the world coordinate system.
In another embodiment of the present invention, based on the embodiment shown in fig. 3, the adding module 350 is specifically configured to:
constructing an octree cube grid in a map according to an octree algorithm with a preset voxel size;
and selecting one point from the points of the edge feature map in the octree cube grids as a map point corresponding to the octree cube grids for each octree cube grid.
In another embodiment of the present invention, based on the embodiment shown in fig. 3, the apparatus further includes:
a matching module (not shown in the figure) configured to match the road image with each historical road image after adding the position information of each map point in the world coordinate system to the map, and when the matching is successful, determine the successfully matched historical road image as a loop detection image corresponding to the road image;
a correction module (not shown in the figure) configured to correct the positioning pose of each road image from the loop detection image to the road image, resulting in each corrected positioning pose; correcting the position information of the corresponding map points according to each corrected positioning pose;
a deleting module (not shown in the figure) configured to delete, for each octree cube grid in the map, a map point in the octree cube grid when there is more than one map point in the octree cube grid after correcting the position information of the corresponding map point, so that the remaining map points in the octree cube grid are one.
In another embodiment of the present invention, based on the embodiment shown in fig. 3, the matching module, when matching the road image with each historical road image, includes:
and matching the edge feature map of the road image with the edge feature maps of the historical road images.
In another embodiment of the present invention, based on the embodiment shown in fig. 3, the position information of each point in the world coordinate system includes: the coordinate position of each point in the world coordinate system and the normal vector information of the point.
The above device embodiment corresponds to the method embodiment and has the same technical effects; for a detailed description, refer to the method embodiment section, which is not repeated here.
Fig. 4 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention. The vehicle-mounted terminal comprises: a processor 410, a camera device 420, and a motion detection device 430; the processor 410 includes (the following modules are not shown):
an acquisition module, configured to acquire a road image acquired by the camera device 420;
the extraction module is used for extracting an edge characteristic map of the road image according to preset edge intensity;
the positioning module is used for determining a positioning pose corresponding to the road image according to the data collected by the motion detection equipment 430; the positioning pose is a pose in a world coordinate system of the map;
the determining module is used for determining the position information of each point in the edge characteristic graph in a world coordinate system based on a three-dimensional reconstruction algorithm and a positioning pose;
and the adding module is used for selecting map points from all the points of the edge feature map according to the preset point density and adding the position information of all the map points in the world coordinate system to the map.
In another embodiment of the present invention, based on the embodiment shown in fig. 4, the extraction module is specifically configured to:
extracting an edge feature map of the road image based on the edge feature extraction model; the edge feature extraction model is obtained by training according to the sample road image and an edge feature map labeled according to preset edge strength; the edge feature extraction model associates the road image with a corresponding edge feature map.
In another embodiment of the present invention, based on the embodiment shown in fig. 4, the determining module is specifically configured to:
determining the three-dimensional coordinates of each point in the edge feature map in a camera coordinate system based on a three-dimensional reconstruction algorithm; wherein, the camera coordinate system is a three-dimensional coordinate system corresponding to the camera device 420;
and determining a transformation matrix between the camera coordinate system and the world coordinate system according to the positioning pose, and transforming the three-dimensional coordinates according to the transformation matrix to obtain the position information of each point in the edge feature map in the world coordinate system.
In another embodiment of the present invention, based on the embodiment shown in fig. 4, the adding module is specifically configured to:
constructing an octree cube grid in a map according to an octree algorithm with a preset voxel size;
and selecting one point from the points of the edge feature map in the octree cube grids as a map point corresponding to the octree cube grids for each octree cube grid.
In another embodiment of the present invention, based on the embodiment shown in fig. 4, the apparatus further includes:
a matching module (not shown in the figure) for matching the road image with each historical road image after adding the position information of each map point in the world coordinate system to the map, and determining the successfully matched historical road image as a loop detection image corresponding to the road image when the matching is successful;
a correction module (not shown in the figure) for correcting the positioning pose of each road image from the loop detection image to the road image to obtain each corrected positioning pose; correcting the position information of the corresponding map points according to each corrected positioning pose;
and a deleting module (not shown in the figure) for deleting the map point in the octree cubic grid when more than one map point exists in the octree cubic grid, so that the rest map points in the octree cubic grid are one, aiming at each octree cubic grid in the map after the position information of the corresponding map point is corrected.
In another embodiment of the present invention, based on the embodiment shown in fig. 4, the matching module, when matching the road image with each historical road image, includes:
and matching the edge feature map of the road image with the edge feature maps of the historical road images.
In another embodiment of the present invention, based on the embodiment shown in fig. 4, the position information of each point in the world coordinate system includes: the coordinate position of each point in the world coordinate system and the normal vector information of the point.
The terminal embodiment and the method embodiment shown in fig. 1 are embodiments based on the same inventive concept, and the relevant points can be referred to each other. The terminal embodiment corresponds to the method embodiment, and has the same technical effect as the method embodiment, and for the specific description, reference is made to the method embodiment.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A vision-based mapping method is characterized by comprising the following steps:
acquiring a road image acquired by camera equipment;
extracting an edge feature map of the road image according to preset edge intensity;
determining a positioning pose corresponding to the road image according to data collected by the motion detection equipment; the positioning pose is a pose in a world coordinate system of the map;
determining the position information of each point in the edge feature map in the world coordinate system based on a three-dimensional reconstruction algorithm and the positioning pose;
and selecting map points from all points of the edge feature map according to a preset point density, and adding the position information of all the map points in the world coordinate system to the map.
2. The method as claimed in claim 1, wherein the step of extracting the edge feature map of the road image according to the preset edge intensity comprises:
extracting an edge feature map of the road image based on an edge feature extraction model; the edge feature extraction model is obtained by training according to a sample road image and an edge feature map labeled according to preset edge strength; the edge feature extraction model associates the road image with a corresponding edge feature map.
3. The method according to claim 1, wherein the step of determining the position information of each point in the edge feature map in the world coordinate system based on the three-dimensional reconstruction algorithm and the positioning pose comprises:
determining the three-dimensional coordinates of each point in the edge feature map in a camera coordinate system based on a three-dimensional reconstruction algorithm; the camera coordinate system is a three-dimensional coordinate system corresponding to the camera equipment;
and determining a transformation matrix between the camera coordinate system and the world coordinate system according to the positioning pose, and transforming the three-dimensional coordinate according to the transformation matrix to obtain the position information of each point in the edge feature map in the world coordinate system.
4. The method of claim 1, wherein the step of selecting map points from the points of the edge feature map according to a predetermined point density comprises:
constructing an octree cube grid in the map according to an octree algorithm with a preset voxel size;
for each octree cube grid, selecting one of the points of the edge feature map that are in the octree cube grid as a map point corresponding to the octree cube grid.
5. The method of claim 4, after adding location information of the respective map points in the world coordinate system to the map, further comprising:
matching the road image with each historical road image, and when the matching is successful, determining the successfully matched historical road image as a loop detection image corresponding to the road image;
correcting the positioning pose of each road image between the loop detection image and the road image to obtain each corrected positioning pose; correcting the position information of the corresponding map points according to each corrected positioning pose;
after correcting the position information of the corresponding map point, deleting the map point in the octree cubic grid when more than one map point exists in the octree cubic grid for each octree cubic grid in the map, so that the rest map points in the octree cubic grid are one.
6. The method of claim 5, wherein the step of matching the road image with each historical road image comprises:
and matching the edge feature map of the road image with the edge feature map of each historical road image.
7. The method of any one of claims 1 to 6, wherein the position information of each point in the world coordinate system comprises: and the coordinate position of each point in the world coordinate system and the normal vector information of the point.
8. A vision-based mapping apparatus, comprising:
an acquisition module configured to acquire a road image acquired by a camera device;
the extraction module is configured to extract an edge feature map of the road image according to preset edge intensity;
a positioning module configured to determine a positioning pose corresponding to the road image from data collected by a motion detection device; the positioning pose is a pose in a world coordinate system of the map;
a determining module configured to determine position information of each point in the edge feature map in the world coordinate system based on a three-dimensional reconstruction algorithm and the positioning pose;
and the adding module is configured to select map points from various points of the edge feature map according to preset point density and add the position information of the various map points in the world coordinate system to the map.
9. The apparatus of claim 8, wherein the extraction module is specifically configured to:
extracting an edge feature map of the road image based on an edge feature extraction model; the edge feature extraction model is obtained by training according to a sample road image and an edge feature map labeled according to preset edge strength; the edge feature extraction model associates the road image with a corresponding edge feature map.
10. A vehicle-mounted terminal characterized by comprising: a processor, a camera device and a motion detection device; the processor includes:
the acquisition module is used for acquiring a road image acquired by the camera equipment;
the extraction module is used for extracting an edge feature map of the road image according to preset edge intensity;
the positioning module is used for determining a positioning pose corresponding to the road image according to the data acquired by the motion detection equipment; the positioning pose is a pose in a world coordinate system of the map;
the determining module is used for determining the position information of each point in the edge feature map in the world coordinate system based on a three-dimensional reconstruction algorithm and the positioning pose;
and the adding module is used for selecting map points from all the points of the edge feature map according to the preset point density and adding the position information of all the map points in the world coordinate system to the map.
CN201910687295.7A 2019-07-29 2019-07-29 Vision-based drawing construction method and device and vehicle-mounted terminal Pending CN112308904A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910687295.7A CN112308904A (en) 2019-07-29 2019-07-29 Vision-based drawing construction method and device and vehicle-mounted terminal


Publications (1)

Publication Number Publication Date
CN112308904A true CN112308904A (en) 2021-02-02

Family

ID=74329417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910687295.7A Pending CN112308904A (en) 2019-07-29 2019-07-29 Vision-based drawing construction method and device and vehicle-mounted terminal

Country Status (1)

Country Link
CN (1) CN112308904A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115390116A (en) * 2022-09-09 2022-11-25 紫清智行科技(北京)有限公司 Dynamic mapping method and device based on roadside image recognition and satellite image

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008065368A (en) * 2006-09-04 2008-03-21 Kyushu Institute Of Technology System for recognizing position and posture of an object using stereoscopic images, method of recognizing position and posture of an object, and program for executing the method
CN104833370A (en) * 2014-02-08 2015-08-12 Honda Motor Co., Ltd. System and method for mapping, localization and pose correction
CN107169893A (en) * 2017-04-28 2017-09-15 Shenzhen Digital City Engineering Research Center Cadastral property volume coding method based on uniform vertical subdivision and refinement
CN107704821A (en) * 2017-09-29 2018-02-16 Hebei University of Technology Vehicle pose calculation method for curved roads
CN107741234A (en) * 2017-10-11 2018-02-27 Shenzhen Yongyida Robot Co., Ltd. Vision-based offline map construction and localization method
CN108225341A (en) * 2016-12-14 2018-06-29 LeEco Automobile (Beijing) Co., Ltd. Vehicle positioning method
CN108537876A (en) * 2018-03-05 2018-09-14 Tsinghua-Berkeley Shenzhen Institute Preparation Office Three-dimensional reconstruction method, apparatus and device based on depth camera, and storage medium
CN108682038A (en) * 2018-04-27 2018-10-19 Tencent Technology (Shenzhen) Co., Ltd. Pose determination method, apparatus and storage medium
CN108898630A (en) * 2018-06-27 2018-11-27 Tsinghua-Berkeley Shenzhen Institute Preparation Office Three-dimensional reconstruction method, apparatus, device and storage medium
CN109084732A (en) * 2018-06-29 2018-12-25 Beijing Megvii Technology Co., Ltd. Positioning and navigation method, apparatus and processing device
CN109115232A (en) * 2017-06-22 2019-01-01 Huawei Technologies Co., Ltd. Navigation method and apparatus
CN109461208A (en) * 2018-11-15 2019-03-12 NetEase (Hangzhou) Network Co., Ltd. Three-dimensional map processing method, apparatus, medium and computing device
WO2019127347A1 (en) * 2017-12-29 2019-07-04 Shenzhen Qianhai CloudMinds Intelligent Technology Co., Ltd. Three-dimensional mapping method, apparatus and system, cloud platform, electronic device, and computer program product
CN110044354A (en) * 2019-03-28 2019-07-23 Southeast University Binocular vision indoor positioning and mapping method and device
US20190333269A1 (en) * 2017-01-19 2019-10-31 Panasonic Intellectual Property Corporation Of America Three-dimensional reconstruction method, three-dimensional reconstruction apparatus, and generation method for generating three-dimensional model
US20200300637A1 (en) * 2016-03-28 2020-09-24 Sri International Collaborative navigation and mapping
US20210199437A1 (en) * 2016-01-08 2021-07-01 Intelligent Technologies International, Inc. Vehicular component control using maps

Similar Documents

Publication Publication Date Title
CN108369420B (en) Apparatus and method for autonomous positioning
CN110148196B (en) Image processing method and device and related equipment
CN111220993B (en) Target scene positioning method and device, computer equipment and storage medium
CN110648389A (en) 3D reconstruction method and system for city street view based on cooperation of unmanned aerial vehicle and edge vehicle
JP5435306B2 (en) Image processing system and positioning system
CN112308913B (en) Vehicle positioning method and device based on vision and vehicle-mounted terminal
CN103389103A (en) Geographical environmental characteristic map construction and navigation method based on data mining
CN111209780A (en) Lane line attribute detection method and device, electronic device and readable storage medium
CN111190199B (en) Positioning method, positioning device, computer equipment and readable storage medium
CN108428254A Construction method and device of a three-dimensional map
CN112270272B (en) Method and system for extracting road intersections in high-precision map making
KR20190080009A (en) Automatic drawing method using lane information
CN111008660A (en) Semantic map generation method, device and system, storage medium and electronic equipment
CN106446785A (en) Passable road detection method based on binocular vision
Diaz-Ruiz et al. Ithaca365: Dataset and driving perception under repeated and challenging weather conditions
CN112749584B (en) Vehicle positioning method based on image detection and vehicle-mounted terminal
CN117576652B (en) Road object identification method and device, storage medium and electronic equipment
CN110210384B (en) Road global information real-time extraction and representation system
CN112418081B (en) Method and system for quickly surveying traffic accidents by air-ground combination
KR101451946B1 System for estimating waste reclamation volume based on a geographic information system
CN112308904A (en) Vision-based drawing construction method and device and vehicle-mounted terminal
CN112488010A (en) High-precision target extraction method and system based on unmanned aerial vehicle point cloud data
CN112651991A (en) Visual positioning method, device and computer system
CN111754388A Map construction method and vehicle-mounted terminal
CN112507887B (en) Intersection sign extracting and associating method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination