CN114863347A - Map checking method, device and equipment - Google Patents

Map checking method, device and equipment

Info

Publication number
CN114863347A
CN114863347A (application CN202210638735.1A)
Authority
CN
China
Prior art keywords
map
frame
image
data
video image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210638735.1A
Other languages
Chinese (zh)
Inventor
苏春龙
陈小龙
朱磊
贾双成
张现法
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd filed Critical Zhidao Network Technology Beijing Co Ltd
Priority to CN202210638735.1A
Publication of CN114863347A
Legal status: Pending

Classifications

    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06F 16/29: Geographical information databases
    • G06F 16/51: Indexing; Data structures therefor; Storage structures
    • G06F 16/583: Retrieval characterised by using metadata automatically derived from the content
    • G06F 16/587: Retrieval characterised by using metadata, using geographical or spatial information, e.g. location
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Computational Linguistics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application relates to a map checking method, device and equipment. The method comprises: acquiring map data and video data corresponding to the roads in the map data; generating an image selection box on each frame of video image in the video data; converting the corner points of the image selection box on the video image from pixel coordinates to geographic coordinates, to generate a map selection box corresponding to the geographic coordinates; and acquiring the map elements of the map data located inside the map selection box and mapping and displaying them in the corresponding video image, so as to verify the map elements of the map against the video image. The scheme provided by the application can verify a generated map and check whether map elements are missing, which helps improve map-making precision and map quality-inspection efficiency.

Description

Map checking method, device and equipment
Technical Field
The present application relates to the field of navigation technologies, and in particular, to a map checking method, apparatus and device.
Background
In the related art, a high-precision map can be created from video data. High-precision maps are an essential infrastructure of the autonomous-driving era, and map elements (for example, driving-direction markings on the road surface), as the basic building blocks of a high-precision map, are critical to its precision.
However, for a high-precision map that has already been produced, the related art offers no reliable verification scheme for checking whether the map suffers from missing map elements.
Disclosure of Invention
In order to solve, or at least partially solve, the problems in the related art, the application provides a map checking method, device and equipment that can verify a generated map and check whether map elements are missing, which helps improve map-making precision and map quality-inspection efficiency.
A first aspect of the present application provides a map checking method, including:
acquiring map data and video data corresponding to roads in the map data;
generating an image selection box on each frame of video image in the video data;
converting the corner points of the image selection box on the video image from pixel coordinates to geographic coordinates, to generate a map selection box corresponding to the geographic coordinates;
and acquiring the map elements of the map data located inside the map selection box, and mapping and displaying them in the corresponding video image, so as to verify the map elements of the map against the video image.
In one embodiment, the pixel coordinates of the corner points of the image selection box are the same on video images of different frames.
In one embodiment, the image selection box covers a set area of the road in the video image.
In one embodiment, the converting of the corner points of the image selection box on the video image from pixel coordinates to geographic coordinates to generate a map selection box corresponding to the geographic coordinates includes:
converting, according to the principle of simultaneous localization and mapping (SLAM), the corner points of the image selection box on the video image from pixel coordinates to geographic coordinates, to generate a map selection box corresponding to the geographic coordinates.
In one embodiment, the acquiring of the map elements of the map data inside the map selection box and the mapping and displaying of them in the corresponding video image includes:
acquiring the map elements of the map data located inside the map selection box;
and converting the corner points of the bounding box of each map element from geographic coordinates to pixel coordinates, and generating, on the corresponding video image, a check box mapped from the bounding box.
In one embodiment, the acquiring of the map elements of the map data inside the map selection box includes:
establishing a spatial index between all map elements in the map data and the different map selection boxes;
and obtaining, according to the spatial index, the map elements of the map data inside the map selection box.
A second aspect of the present application provides a map checking apparatus, including:
the first acquisition module is used for acquiring map data and video data corresponding to roads in the map data;
the generating module is used for generating an image selection box on each frame of video image in the video data acquired by the first acquisition module;
the conversion module is used for converting the corner points of the image selection box generated by the generating module on the video image from pixel coordinates to geographic coordinates, generating a map selection box corresponding to the geographic coordinates;
the second acquisition module is used for acquiring the map elements of the map data inside the map selection box generated by the conversion module;
and the mapping module is used for mapping and displaying the map elements acquired by the second acquisition module in the corresponding video images.
In an embodiment, the mapping module is further configured to convert the corner points of the bounding box of each map element acquired by the second acquisition module from geographic coordinates to pixel coordinates, and to generate, on the corresponding video image, a check box mapped from the bounding box.
A third aspect of the present application provides an electronic device comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method as described above.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon executable code, which, when executed by a processor of an electronic device, causes the processor to perform the method as described above.
The technical scheme provided by the application can comprise the following beneficial effects:
according to the method, the map data and the video data corresponding to roads in the map data are obtained, the image selecting frame is generated on each frame of video image in the video data, the corner points of the image selecting frame on the video image are converted from the pixel coordinates to the geographic coordinates, the map selecting frame corresponding to the geographic coordinates is generated, further, the map elements in the map selecting frame in the map data can be obtained and are mapped and displayed in the corresponding video image, and therefore the map elements in the map can be verified through the video image. Therefore, the generated map can be verified, whether the map elements are missing or not can be checked, the map manufacturing precision can be improved, and the map quality inspection efficiency can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
Fig. 1 is a schematic flowchart of a map verification method according to an embodiment of the present application;
fig. 2 is another schematic flowchart of a map checking method according to an embodiment of the present application;
FIG. 3 is a schematic representation of video data according to an embodiment of the present application;
FIG. 4 is a schematic representation of map data according to an embodiment of the present application;
FIG. 5 is another schematic representation of video data according to an embodiment of the present application;
FIG. 6 is another schematic representation of map data according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a map checking apparatus according to an embodiment of the present application;
fig. 8 is another schematic structural diagram of a map verification apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While embodiments of the present application are illustrated in the accompanying drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In the related art, for a high-precision map which is already manufactured, a reliable verification scheme is not available for checking whether the high-precision map has the problem of map element missing.
In view of the above problems, embodiments of the present application provide a map verification method, which can verify a generated map, check whether there is a map element missing problem, and is beneficial to improving map manufacturing accuracy and improving map quality inspection efficiency.
The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a map verification method according to an embodiment of the present application.
Referring to fig. 1, the method includes:
step S101, map data and video data corresponding to roads in the map data are obtained.
The map data may be high-precision map data, and the video data may correspond to roads in the high-precision map data. The video data may be video data collected by a photographing device (e.g., an in-vehicle camera, a vehicle recorder, etc.) mounted on the vehicle during driving of the vehicle. The video data captures images of ground roads, and the road images in the video data correspond to the roads in the map data. The video data may comprise temporally successive frames of video images.
Step S102, generating an image selection box on each frame of video image in the video data.
In this step, an image selection box may be generated on each frame of video image in the video data; the image selection box may be a circular box, an elliptical box, or a polygonal box (e.g., a rectangle, a trapezoid, etc.).
Step S103, converting the corner points of the image selection box on the video image from pixel coordinates to geographic coordinates, to generate a map selection box corresponding to the geographic coordinates.
In this step, the corner points of the image selection box on the video image may be converted from pixel coordinates to geographic coordinates according to the principle of simultaneous localization and mapping, so as to generate a map selection box corresponding to the geographic coordinates.
Here, simultaneous localization and mapping is abbreviated as SLAM (Simultaneous Localization And Mapping).
When the image selection box is a circular or elliptical box, the corner points may be all of the boundary points of the box, and the box can be determined from the positions of those points. When the image selection box is a polygonal box, the corner points may be the vertices (also called inflection points) of the box, and the box can be determined from the positions of those vertices.
Step S104, acquiring the map elements of the map data located inside the map selection box, and mapping and displaying them in the corresponding video image, so as to verify the map elements of the map against the video image.
The map element may be a ground element, and the ground element may be a ground indicator, such as a straight-ahead arrow, a left-turn arrow, a right-turn arrow, a U-turn arrow, or the like.
In this step, after the map elements inside the map selection box are acquired, the frame of video image corresponding to that map selection box can be matched, so that the map elements can be mapped and displayed in the corresponding video image. It can be understood that the video image shows the actual situation on the road; displaying the map elements generated in the map data on top of the video image makes comparison convenient, so that one can verify whether map elements are missing from the map and whether the generated positions of the map elements are correct.
As this embodiment shows, the method provided in the embodiment of the present application acquires the map data and the video data corresponding to the roads in the map data, generates an image selection box on each frame of video image in the video data, and converts the corner points of the image selection box on the video image from pixel coordinates to geographic coordinates to generate a map selection box corresponding to the geographic coordinates; the map elements of the map data inside the map selection box can then be acquired and mapped onto the corresponding video image for display, so that the map elements of the map can be verified against the video image. The generated map can thus be verified and checked for missing map elements, which helps improve map-making precision and map quality-inspection efficiency.
Fig. 2 is another schematic flow chart of a map verification method according to an embodiment of the present application. Fig. 2 depicts the solution of the present application in more detail with respect to fig. 1.
Referring to fig. 2, the method includes:
step S201, obtaining map data and video data corresponding to roads in the map data.
The map data may be high-precision map data, and the video data may correspond to roads in the high-precision map data. The video data may be video data collected by a shooting device (such as an on-board camera, a vehicle recorder, and the like) installed on the vehicle during driving of the vehicle. The video data captures images of ground roads, and the road images in the video data correspond to the roads in the map data. The video data may comprise a plurality of frames of video images that are temporally consecutive.
Referring to fig. 3 and fig. 4 together: fig. 3 is a schematic representation of video data according to an embodiment of the present application. Fig. 3 shows one frame of video image in the video data; each frame of video image in the video data carries a timestamp (see the top of fig. 3), and the video data captures the ground road, that is, each frame of video image is an image containing the ground road.
Fig. 4 is a schematic representation of map data according to an embodiment of the present application; fig. 4 shows lane lines, vehicle positioning track points, and a map element (the box indicated by the arrow in fig. 4) in the map data. It should be noted that, in the embodiment of the present application, fig. 4 is a picture of the map data displayed with QGIS (Quantum GIS), a user-friendly, cross-platform, open-source desktop geographic information system. It should be further noted that the video data shown in fig. 3 was collected by a shooting device mounted on a vehicle while the vehicle was driving, and that a positioning device on the vehicle collects the position of the vehicle at a preset frequency to obtain vehicle positioning data; the vehicle positioning track points in fig. 4 are obtained from this vehicle positioning data and may be RTK (Real-Time Kinematic) track-point coordinates.
Step S202, an image culling box is generated on each frame of video image in the video data.
In this step, an image selection box may be generated on each frame of video image in the video data; the image selection box may be a circular box, an elliptical box, or a polygonal box (e.g., a rectangle, a trapezoid, etc.).
Preferably, in the embodiment of the present application, the image selection box covers a set area of the road in the video image. In one embodiment, the image selection box is formed as a trapezoidal box by selecting two lane-line feature points, one near and one far, on the lane lines on both sides of the road in the video image. In other words, two lane-line feature points are selected front and rear on the lane line on one side of the road, two more are selected front and rear on the lane line on the other side, and these four lane-line feature points form a trapezoidal image selection box. The road covered by the image selection box may be the road ahead in the direction of travel, i.e. not the oncoming road. It can be understood that the set area can be adjusted as needed to achieve different coverage of the road in the video image.
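As a concrete illustration of the trapezoidal selection box described above, the following Python sketch builds a fixed trapezoid from four lane-line feature points. The fractional positions (10%, 90%, 65%, 35% of the frame width; 95% and 60% of the height) are hypothetical placeholders for illustration only, not values taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class SelectionBox:
    # Corner points in pixel coordinates (u, v), ordered
    # near-left, near-right, far-right, far-left.
    corners: list

def make_image_selection_box(width: int, height: int) -> SelectionBox:
    """Build a fixed trapezoidal image selection box covering the road
    ahead. The same pixel corners are reused on every video frame,
    since all frames of one video share the same resolution and the
    camera is rigidly mounted."""
    near_y = height * 95 // 100   # bottom edge of the road region (assumed)
    far_y = height * 60 // 100    # horizon-side edge (assumed)
    return SelectionBox(corners=[
        (width * 10 // 100, near_y),  # near-left lane-line point
        (width * 90 // 100, near_y),  # near-right lane-line point
        (width * 65 // 100, far_y),   # far-right lane-line point
        (width * 35 // 100, far_y),   # far-left lane-line point
    ])

box = make_image_selection_box(1920, 1080)
```

In a real pipeline the four feature points would come from lane-line detection rather than fixed fractions; the fixed version matches the patent's observation that the box's pixel coordinates are identical on every frame.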
Referring to fig. 5, fig. 5 is another schematic representation of video data according to an embodiment of the present application; fig. 5 shows the image selection box (i.e., the trapezoidal box in fig. 5) on the video data. The difference between fig. 5 and fig. 3 is that the image selection box is not shown in fig. 3.
It can be understood that, for the same video data, the length and width of each frame of video image are the same, i.e. video images of different frames share the same dimensions. In one embodiment, the pixel coordinates of the corner points of the image selection box are the same on video images of different frames. That is, the shape and size of the image selection box are constant in every frame, and so is its position. As shown in fig. 5, the corner points of the image selection box, i.e. the four vertices of the trapezoidal box in fig. 5, have the same pixel coordinates on video images of different frames. In other words, the position of the trapezoidal image selection box formed by the four vertices is constant across video images of different frames.
Step S203, converting the corner points of the image selection box on the video image from pixel coordinates to geographic coordinates, to generate a map selection box corresponding to the geographic coordinates.
In one embodiment, the corner points of the image selection box on the video image may be converted from pixel coordinates to geographic coordinates according to the principle of simultaneous localization and mapping, so as to generate a corresponding map selection box. Here, simultaneous localization and mapping is abbreviated as SLAM (Simultaneous Localization And Mapping).
As shown in fig. 5, in the embodiment of the present application the image selection box is a trapezoidal box, and its corner points are the four vertices of the trapezoid. The pixel coordinates of the four vertices can be converted into geographic coordinates (e.g., longitude and latitude) according to the SLAM principle. In one embodiment, the geographic coordinates of the four vertices of the trapezoidal box, i.e. of the corner points of the image selection box, can be determined with SLAM techniques from the timestamps of the preceding and following frames of video image and the geographic coordinates of the positioning track points of the vehicle that shot the video. In other embodiments, the map data may be queried for the geographic coordinates of a landmark in the video image (e.g., a particular guideboard or street lamp), and the conversion of the corner points from pixel coordinates to geographic coordinates is then realized in combination with SLAM. The specific process of converting the pixel coordinates of a point in an image into geographic coordinates with SLAM can be found in the related art and is not repeated here.
It is understood that, in the embodiment of the present application, the image selection box covers a set area of the road in the video image. The image selection box may be a trapezoidal box formed by selecting two lane-line feature points, near and far, on the lane lines on both sides of the road. Thus, after the corner points of the image selection box are converted from pixel coordinates to geographic coordinates, the resulting map selection box will be rectangular.
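The patent defers the details of the pixel-to-geographic conversion to the SLAM literature. As a simplified stand-in, the sketch below assumes the road is planar and that a 3x3 ground-plane homography H has already been calibrated (e.g., from SLAM poses or surveyed landmarks, both hypothetical here); it maps the selection-box corners into a local metric ground frame, from which a further geodetic transform (omitted) would give longitude and latitude.

```python
def pixel_to_ground(H, uv):
    """Apply an assumed, pre-calibrated 3x3 ground-plane homography H
    (nested lists) to one pixel (u, v). The result is (x, y) in a
    local metric ground frame, not yet longitude/latitude."""
    u, v = uv
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)

def image_box_to_map_box(H, corners):
    """Convert all four corners of the image selection box, yielding
    the map selection box for this frame."""
    return [pixel_to_ground(H, c) for c in corners]
```

Because the camera pose changes as the vehicle moves, the effective homography differs per frame; this is why identical pixel corners yield a different map selection box for every frame, as the description notes below.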
Referring to fig. 6, fig. 6 is another schematic representation of map data according to an embodiment of the present application; fig. 6 shows the map selection box (i.e., the rectangular box in fig. 6) in the map data. The difference between fig. 6 and fig. 4 is that the map selection box is not shown in fig. 4. It can be understood that although the pixel coordinates of the corner points of the image selection box are the same in video images of different frames, after coordinate conversion the corresponding geographic coordinates differ from frame to frame. That is, the image selection boxes in different frames correspond to different map selection boxes, and the image selection box in each frame of video image corresponds to one map selection box.
Step S204, obtaining the map elements of the map data located inside the map selection box.
The map element may be a ground element, and the ground element may be a ground indicator in the map, such as a straight-ahead arrow, a left-turn arrow, a right-turn arrow, a U-turn arrow, and the like.
In one embodiment, obtaining the map elements of the map data inside the map selection box comprises:
Step A: establishing a spatial index between all map elements in the map data and the different map selection boxes.
In this step, the map elements and the map selection boxes may be matched according to the geographic coordinates (i.e., longitude and latitude) of the map elements in the map and the geographic coordinates of the map selection boxes, so as to establish the spatial index. In this way, for any map selection box, the map elements located inside it can be determined quickly. The spatial index is a correspondence that records which map elements belong to which map selection boxes. For example, the map selection box in fig. 6 corresponds to the image selection box in the video image shown in fig. 5, and the map selection box of fig. 6 contains two map elements (i.e., two left-turn arrows).
Step B: obtaining, according to the spatial index, the map elements of the map data inside the map selection box.
In this step, according to the spatial index established in step A, the map elements inside the map selection box in the map data can be quickly acquired, that is, quickly determined.
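Steps A and B can be sketched minimally as follows, using plain axis-aligned bounding boxes in geographic coordinates. The dict layout and the centroid-containment test are illustrative assumptions, not the patent's exact matching rule; a production system would more likely use a proper spatial index such as an R-tree.

```python
def build_spatial_index(elements, selection_boxes):
    """Step A (simplified): map each map selection box id to the ids
    of the map elements whose centroid falls inside it. Both inputs
    are dicts of id -> (min_x, min_y, max_x, max_y) bounding boxes
    in geographic coordinates."""
    index = {}
    for box_id, (bx0, by0, bx1, by1) in selection_boxes.items():
        hits = []
        for elem_id, (ex0, ey0, ex1, ey1) in elements.items():
            cx, cy = (ex0 + ex1) / 2, (ey0 + ey1) / 2  # element centroid
            if bx0 <= cx <= bx1 and by0 <= cy <= by1:
                hits.append(elem_id)
        index[box_id] = hits
    return index

def elements_in_box(index, box_id):
    """Step B: answer queries from the pre-built index instead of
    re-scanning every map element."""
    return index.get(box_id, [])
```

Building the index once and querying it per frame is what makes the lookup fast when the video contains thousands of frames, each with its own map selection box.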
Step S205, converting the corner points of the bounding box of each map element from geographic coordinates to pixel coordinates, and generating, on the corresponding video image, a check box mapped from the bounding box.
The bounding box of a map element may be a rectangular box surrounding the element, and its corner points may be the four vertices (also called inflection points) of that rectangle. The video image corresponding to the map selection box that surrounds the map element is first determined; the corner points of the element's bounding box can then be converted from geographic coordinates to pixel coordinates according to the SLAM principle, yielding the check box.
For example, for the map elements in the map selection box shown in fig. 6, coordinate conversion of their bounding boxes yields the check box (i.e., the rectangular box surrounding the two right-turn arrows inside the image selection box in fig. 5).
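Continuing the simplified homography stand-in from above, the reverse mapping below projects the geographic bounding box of a map element back into the frame to obtain the check box. The 3x3 inverse homography Hinv is an assumption; in practice it would come from the same per-frame calibration used for the forward pixel-to-ground conversion.

```python
def ground_to_pixel(Hinv, xy):
    """Project a ground-plane point (x, y) back into the image using
    the assumed 3x3 inverse homography Hinv (nested lists), i.e. the
    reverse of the pixel-to-ground conversion."""
    x, y = xy
    u = Hinv[0][0] * x + Hinv[0][1] * y + Hinv[0][2]
    v = Hinv[1][0] * x + Hinv[1][1] * y + Hinv[1][2]
    w = Hinv[2][0] * x + Hinv[2][1] * y + Hinv[2][2]
    return (u / w, v / w)

def check_box(Hinv, bbox_geo):
    """Project the four corners of a map element's geographic bounding
    box (min_x, min_y, max_x, max_y) into the frame, giving the
    on-image check box to draw over the video image."""
    x0, y0, x1, y1 = bbox_geo
    return [ground_to_pixel(Hinv, c)
            for c in [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]]
```

The returned pixel quadrilateral is what gets drawn over the frame so that a reviewer can compare it against the road markings actually visible in the image.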
It can be understood that the video image shows the actual situation on the road; displaying the map elements generated in the map data on top of the video image makes comparison convenient, so that one can verify whether map elements are missing from the map and whether the generated positions of the map elements are correct.
When the video data is played frame by frame, the map checking method provided by the application maps and displays the map elements (such as ground elements) of the map (such as a high-precision map) onto the corresponding video image in real time, so that the map elements of the map can be verified. As shown in fig. 5, by observing whether the right-turn arrows in the video image correspond to the check box, the user can determine whether the map elements generated in the map are accurate, whether any map elements are missing, whether the generated positions contain errors, and so on.
As this embodiment shows, the method provided by the embodiments of the present application can verify a generated map and check for missing map elements, which helps improve map production accuracy and map quality inspection efficiency. Furthermore, the method can be applied to improving the production accuracy, and the production and quality inspection efficiency, of high-precision maps for automatic driving.
Corresponding to the foregoing method embodiments, the present application further provides embodiments of a map verification apparatus and an electronic device.
Fig. 7 is a schematic structural diagram of a map verification apparatus according to an embodiment of the present application.
Referring to fig. 7, a map verification apparatus 70 includes: a first obtaining module 710, a generating module 720, a converting module 730, a second obtaining module 740, and a mapping module 750.
The first obtaining module 710 is configured to obtain map data and video data corresponding to roads in the map data.
The generating module 720 is configured to generate an image selection frame on each frame of video image in the video data acquired by the first obtaining module 710.
The converting module 730 is configured to convert the corner points of the image selection frame generated by the generating module 720 on the video image from pixel coordinates to geographic coordinates, and to generate a map selection frame corresponding to the geographic coordinates.
The second obtaining module 740 is configured to acquire, in the map data, the map elements within the map selection frame generated by the converting module 730.
The mapping module 750 is configured to map and display the map elements acquired by the second obtaining module 740 in the corresponding video image.
As can be seen from this embodiment, the apparatus 70 provided in this embodiment of the present application can verify a generated map and check for problems such as missing map elements, which helps improve map production accuracy and map quality inspection efficiency.
Fig. 8 is another schematic structural diagram of a map verification apparatus according to an embodiment of the present application.
Referring to fig. 8, the map verification apparatus 70 includes: a first obtaining module 710, a generating module 720, a converting module 730, a second obtaining module 740, and a mapping module 750.
The functions of the first obtaining module 710, the generating module 720, and the converting module 730 are as described with reference to fig. 7 and are not repeated here.
The mapping module 750 is further configured to convert the corner points of the bounding box of the map element acquired by the second obtaining module 740 from geographic coordinates to pixel coordinates, and to generate, on the corresponding video image, a check frame mapped from the bounding box.
The second obtaining module 740 includes an indexing sub-module 741 and an obtaining sub-module 742.
The indexing sub-module 741 is configured to establish a spatial index between all map elements in the map data and the different map selection frames.
The obtaining sub-module 742 is configured to obtain, according to the spatial index, the map elements within the map selection frame in the map data.
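One possible realization of such a spatial index is a uniform grid that buckets map-element bounding boxes by cell, so that each map selection frame only needs to test the elements in the cells it overlaps. The grid structure, cell size, and class names below are assumptions for illustration; the application does not prescribe a particular index type:

```python
from collections import defaultdict

class GridIndex:
    """A simple uniform-grid spatial index over map-element bounding
    boxes; one hypothetical way to realize the spatial index between
    map elements and map selection frames."""

    def __init__(self, cell=10.0):
        self.cell = cell
        self.grid = defaultdict(list)  # (cx, cy) -> [(id, box), ...]

    def _cells(self, box):
        """Yield the grid cells overlapped by box (x0, y0, x1, y1)."""
        x0, y0, x1, y1 = box
        for cx in range(int(x0 // self.cell), int(x1 // self.cell) + 1):
            for cy in range(int(y0 // self.cell), int(y1 // self.cell) + 1):
                yield (cx, cy)

    def insert(self, element_id, box):
        for c in self._cells(box):
            self.grid[c].append((element_id, box))

    def query(self, box):
        """Return ids of elements whose boxes intersect the query box."""
        x0, y0, x1, y1 = box
        hits = set()
        for c in self._cells(box):
            for eid, (ex0, ey0, ex1, ey1) in self.grid[c]:
                if ex0 <= x1 and x0 <= ex1 and ey0 <= y1 and y0 <= ey1:
                    hits.add(eid)
        return hits
```

A query then only scans candidate cells rather than all map elements, which is the point of building the index once and reusing it for every selection frame.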
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 9 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Referring to fig. 9, an electronic device 900 includes a memory 910 and a processor 920.
The processor 920 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 910 may include various types of storage units, such as system memory, read-only memory (ROM), and permanent storage. The ROM may store static data or instructions needed by the processor 920 or other modules of the computer. The permanent storage may be a readable and writable storage device, and may be a non-volatile device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, the permanent storage is a mass storage device (e.g., a magnetic or optical disk, or flash memory); in other embodiments, it may be a removable storage device (e.g., a floppy disk or an optical drive). The system memory may be a readable and writable volatile memory, such as dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. In addition, the memory 910 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (e.g., DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) as well as magnetic and/or optical disks. In some embodiments, the memory 910 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM or dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density disc, a flash memory card (e.g., an SD card, a mini SD card, or a Micro-SD card), or a magnetic floppy disk. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted wirelessly or by wire.
The memory 910 has stored thereon executable code that, when processed by the processor 920, may cause the processor 920 to perform some or all of the methods described above.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a computer-readable storage medium (or non-transitory machine-readable storage medium or machine-readable storage medium) having executable code (or a computer program or computer instruction code) stored thereon, which, when executed by a processor of an electronic device (or server, etc.), causes the processor to perform part or all of the various steps of the above-described method according to the present application.
While embodiments of the present application have been described above, the foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments and their practical application, or technical improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A map verification method, comprising:
acquiring map data and video data corresponding to roads in the map data;
generating an image selection frame on each frame of video image in the video data;
converting the corner points of the image selection frame on the video image from pixel coordinates to geographic coordinates to generate a map selection frame corresponding to the geographic coordinates;
and acquiring the map elements within the map selection frame in the map data, and mapping and displaying the map elements in the corresponding video image, so as to verify the map elements in the map by using the video image.
2. The method of claim 1, wherein:
the pixel coordinates of the corner points of the image selection frame are the same on the video images of different frames.
3. The method of claim 1, wherein:
the image selection frame covers a set area of the road in the video image.
4. The method according to claim 1, wherein the converting the corner points of the image selection frame on the video image from pixel coordinates to geographic coordinates to generate a map selection frame corresponding to the geographic coordinates comprises:
converting the corner points of the image selection frame on the video image from pixel coordinates to geographic coordinates according to the simultaneous localization and mapping (SLAM) technique, to generate the map selection frame corresponding to the geographic coordinates.
5. The method of claim 1, wherein the acquiring the map elements within the map selection frame in the map data and the mapping and displaying of the map elements in the corresponding video image comprises:
acquiring the map elements within the map selection frame in the map data;
and converting the corner points of the bounding box of the map element from geographic coordinates to pixel coordinates, and generating, on the corresponding video image, a check frame mapped from the bounding box.
6. The method according to claim 1 or 5, wherein the acquiring the map elements within the map selection frame in the map data comprises:
establishing a spatial index between all map elements in the map data and different map selection frames;
and obtaining, according to the spatial index, the map elements within the map selection frame in the map data.
7. A map verification apparatus, comprising:
a first obtaining module, configured to acquire map data and video data corresponding to roads in the map data;
a generating module, configured to generate an image selection frame on each frame of video image in the video data acquired by the first obtaining module;
a converting module, configured to convert the corner points of the image selection frame generated by the generating module on the video image from pixel coordinates to geographic coordinates, to generate a map selection frame corresponding to the geographic coordinates;
a second obtaining module, configured to acquire, in the map data, the map elements within the map selection frame generated by the converting module;
and a mapping module, configured to map and display the map elements acquired by the second obtaining module in the corresponding video image.
8. The apparatus of claim 7, wherein:
the mapping module is further configured to convert the corner points of the bounding box of the map element acquired by the second obtaining module from geographic coordinates to pixel coordinates, and to generate, on the corresponding video image, a check frame mapped from the bounding box.
9. An electronic device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any one of claims 1-6.
10. A computer-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any of claims 1-6.
CN202210638735.1A 2022-06-08 2022-06-08 Map checking method, device and equipment Pending CN114863347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210638735.1A CN114863347A (en) 2022-06-08 2022-06-08 Map checking method, device and equipment


Publications (1)

Publication Number Publication Date
CN114863347A true CN114863347A (en) 2022-08-05

Family

ID=82624893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210638735.1A Pending CN114863347A (en) 2022-06-08 2022-06-08 Map checking method, device and equipment

Country Status (1)

Country Link
CN (1) CN114863347A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115410219A (en) * 2022-09-02 2022-11-29 自然资源部地图技术审查中心 Map element model construction method and device for identifying problem map



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination