CN111932627B - Marker drawing method and system - Google Patents


Info

Publication number
CN111932627B
CN111932627B (application CN202010965481.5A)
Authority
CN
China
Prior art keywords
image
calculating
coordinates
camera
relative
Prior art date
Legal status
Active
Application number
CN202010965481.5A
Other languages
Chinese (zh)
Other versions
CN111932627A (en)
Inventor
单国航
李倩
贾双成
李成军
Current Assignee
Mushroom Car Union Information Technology Co Ltd
Original Assignee
Mushroom Car Union Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Mushroom Car Union Information Technology Co Ltd
Priority to CN202010965481.5A
Publication of CN111932627A
Application granted
Publication of CN111932627B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/582 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs

Abstract

The application relates to a marker drawing method and system. The method comprises the following steps: acquiring two images containing the same marker; recognizing the images to obtain the pixel coordinates of the marker's corresponding feature points in each of the two images; calculating the rotation matrix and translation matrix of the second image relative to the first image; and computing, from those pixel coordinates and the rotation and translation matrices, the coordinates of the marker's feature points in a world coordinate system referenced to the camera. With the scheme provided by the application, the markers required by a high-precision map can be drawn from images captured by a dashcam (driving recorder).

Description

Marker drawing method and system
Technical Field
The application relates to the technical field of navigation, in particular to a marker drawing method and system.
Background
With the development of space and information technology, the unified management and intelligent interaction of urban infrastructure are gradually entering public view. As an information-bearing carrier of urban geographic entities, the guideboard has a place-name guidance function; as infrastructure distributed at urban road intersections, it occupies a specific position in space and is therefore a good carrier for the urban Internet of Things.
The current practice in the prior art is to build high-precision maps from three-dimensional point clouds. This, however, requires dedicated surveying vehicles, so the method cannot be widely deployed and the mapping scale is hard to grow. As a result, when roads change, the data of the high-precision map is often not updated in time; and, limited by the number of surveying vehicles and professional surveying teams, the overall efficiency of high-precision mapping is low.
Disclosure of Invention
The application provides a marker drawing method and system that can draw the markers required by a high-precision map from acquired two-dimensional images.
A marker drawing method comprises: acquiring two images containing the same marker; recognizing the images to obtain the pixel coordinates of the marker's corresponding feature points in each of the two images; calculating the rotation matrix and translation matrix of the second image relative to the first image; and computing, from those pixel coordinates and the rotation and translation matrices, the coordinates of the marker's feature points in a world coordinate system referenced to the camera.
In the above method, calculating the rotation matrix and translation matrix of the second image relative to the first image specifically comprises: selecting the image pixel coordinates of at least eight pairs of corresponding pixel points in the first and second images; and computing the rotation matrix and translation matrix of the second image relative to the first image from the eight pairs' pixel coordinates using the eight-point method.
Alternatively, it comprises: selecting the image pixel coordinates of at least five pairs of corresponding pixel points in the first and second images; and computing the rotation matrix and translation matrix of the second image relative to the first image from the five pairs' pixel coordinates using the five-point method.
In the method, the marker is a guideboard and its feature points are the vertices of the guideboard; computing the world-coordinate-system coordinates of each vertex yields the three-dimensional coordinates of the guideboard required for high-precision mapping.
The method further comprises: acquiring the geographic coordinates of the camera at the moments when the two images were taken; and computing the geographic coordinates of the marker from the camera's geographic coordinates and the coordinates of the marker's feature points relative to the camera.
A marker drawing system comprises: a cache unit for acquiring two images containing the same marker; an image recognition unit for recognizing the images acquired by the cache unit and obtaining the pixel coordinates of the marker's corresponding feature points in each of the two images; a calculation unit for calculating the rotation matrix and translation matrix of the second image relative to the first image; and a first processor unit for computing, from the feature-point pixel coordinates obtained by the image recognition unit and the rotation and translation matrices obtained by the calculation unit, the coordinates of the marker's feature points in a world coordinate system referenced to the camera.
In the above system, the calculation unit is specifically configured to: identify at least eight pairs of corresponding pixel points in the first and second images together with their image pixel coordinates; and compute the rotation matrix and translation matrix of the second image relative to the first image from the eight pairs' pixel coordinates using the eight-point method.
Alternatively, the calculation unit is specifically configured to: identify at least five pairs of corresponding pixel points in the first and second images together with their image pixel coordinates; and compute the rotation matrix and translation matrix of the second image relative to the first image from the five pairs' pixel coordinates using the five-point method.
In the system, the image recognition unit is specifically configured to recognize a guideboard in an image and the image pixel coordinates of its feature points, the feature points being the vertices of the guideboard.
A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the above-described method.
The technical solution provided by the application can have the following beneficial effects:
Markers required by a high-precision map can be drawn from acquired two-dimensional images, so they can be drawn from images captured by a dashcam. Moreover, when drawing markers from two-dimensional images captured by a monocular camera such as a dashcam, the camera's rotation matrix and translation matrix between the two shots are recovered from corresponding points in the two images, so the spatial coordinates of objects computed with the two images as input are accurate. This guarantees the calculation precision and makes dashcam imagery usable for drawing the markers a high-precision map requires.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application, as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
FIG. 1 is a schematic flow chart of a marker drawing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an image containing a marker, used by the marker drawing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the translation-matrix and rotation-matrix algorithm of the marker drawing method according to an embodiment of the present application;
fig. 4 is a schematic diagram of computing world-coordinate-system coordinates from an image in the marker drawing method according to an embodiment of the present application.
Detailed Description
Preferred embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
During high-precision map data acquisition, a surveying vehicle travels along a road and collects map data of surrounding buildings, traffic signs, and other markers that need to appear on the map. The three-dimensional laser scanning used in the prior art can rapidly scan the measured objects and directly obtain high-precision point cloud data, enabling fast vectorization of the real world.
The embodiment of the application provides a method that obtains images of a scene by photographing or filming and then computes, from those images, the data required for drawing a high-precision map. During actual image acquisition, however, the camera's shooting angle changes constantly due to road undulation, changes of road direction, and the changing driving environment, and a naive photographic survey in such an environment cannot obtain accurate data. Therefore, when deriving high-precision map data from camera images, the influence of changes in the camera's angle and position while the vehicle travels must be taken into account when computing a marker's spatial position from the captured images.
In high-precision mapping, map data of buildings, roadside enforcement cameras, and traffic signals and signs, for example traffic lights, signboards, and lane lines, are all required when drawing high-precision maps. In the embodiments of the present application, a marker is any object that needs to be drawn in a high-precision map, not limited to the above examples. Fig. 1 is a schematic flow chart of a marker drawing method according to an embodiment of the present application. Through embodiments such as the one shown in fig. 1, the world-coordinate-system coordinates, i.e. the spatial position, of a marker can be obtained accurately from two-dimensional images and then used for drawing a high-precision map.
Referring to fig. 1, a marker mapping method includes:
step 11, two images containing the same marker are acquired.
In the embodiments of the application, a camera device installed on the vehicle, such as a dashcam, records video while the vehicle is driving. As the vehicle travels along a road and approaches a guideboard, the camera obtains multiple guideboard images at different times, i.e., multiple frames of the video.
As shown in fig. 2, the image was captured as the vehicle passed the guideboard; a rectangular guideboard appears on the right side of the photographed image. As the surveying vehicle approaches the guideboard, the monocular camera on the vehicle terminal captures guideboard images of the current viewing angle at a preset acquisition frequency. Because the vehicle keeps moving while the images are being acquired, each frame is captured from a different angle, so the monocular camera obtains guideboard images from different viewpoints.
The geographic coordinates at the moment each guideboard image was obtained are recorded: the surveying vehicle continuously logs its geographic coordinates while acquiring images, and the coordinates for the two selected images are looked up by their timestamps.
Step 12: recognize the images to obtain the pixel coordinates of the marker's corresponding feature points in each of the two images.
Referring to fig. 2, the two images containing the same marker are recognized, and the pixel coordinates of the marker's corresponding feature points in each of the two images are obtained. Suppose the vehicle takes 2 seconds to travel from far to near past the guideboard; within that 2-second span of video, any two frames containing the guideboard are selected. Taking the 4 vertices of the guideboard as feature points, their pixel coordinates in the first and second images are obtained according to a preset rule.
Image pixel coordinates describe the position, on the digital image, of an imaged point of the photographed object; they form the coordinate system of the data read from the camera, with the pixel as the unit. Coordinates are written (u, v), with the top-left vertex of the image plane as the origin and the u- and v-axes parallel to the x- and y-axes of the image coordinate system. A digital camera first forms a standard electrical signal and then converts it to a digital image by analog-to-digital conversion. Each image is stored as an M x N array, where the value of each element of the M-row, N-column image represents the gray level of that image point. Each element is called a pixel, and the pixel coordinate system is the image coordinate system measured in pixels.
The embodiments of the present application do not limit which feature points of the guideboard are selected in the image; any recognizable feature points of the guideboard may be used. For example, four vertices may be identified on a square guideboard, three on a triangular one, and, for a circular guideboard, the two endpoints of its horizontal diameter and the two endpoints of its vertical diameter. The guideboard of this embodiment is therefore not limited to the square or rectangular guideboard shown in fig. 2 but also includes, for example, triangular and circular guideboards.
Step 13: calculate the rotation matrix and translation matrix of the second image relative to the first image.
Using image recognition, at least eight pairs of corresponding pixel points and their image pixel coordinates are identified in the first and second images. Imaging points of the same physical object in the first and second images correspond to each other. Subject to this correspondence, the choice of points is unrestricted: they may be recognizable points on buildings or other objects in the image, and may include the four vertices of the guideboard of this embodiment.
The image pixel coordinates of the eight pairs of points in the first and second images are selected, and the rotation matrix and translation matrix of the second image relative to the first image are computed by the eight-point method.
Referring to fig. 3, the two images of the same guideboard are taken at different positions, and pixel points corresponding to the same object in the two images satisfy the epipolar constraint. Let P be a vertex of a real object, such as the guideboard, in the world coordinate system; let

$$O_1,\; O_2$$

be the optical centre positions of the monocular camera for the first and second images, and $I_1$, $I_2$ the first and second images. Let

$$p_1,\; p_2$$

be the projections of P onto the first image $I_1$ and the second image $I_2$, and $e_1$, $e_2$ the epipoles. According to the epipolar constraint:

$$x_2^{\mathsf T}\,(t^{\wedge}R)\,x_1 = 0$$

obtaining:

$$x_2^{\mathsf T} E\, x_1 = 0$$

wherein:

$$E = t^{\wedge} R$$

E is the 3 x 3 essential matrix, t is the translation matrix, R is the rotation matrix, $t^{\wedge}$ is the skew-symmetric matrix of t, and $^{\mathsf T}$ denotes matrix transpose.
Finding E by the eight-point method: write

$$x_1 = (u_1,\, v_1,\, 1)^{\mathsf T}, \qquad x_2 = (u_2,\, v_2,\, 1)^{\mathsf T}$$

wherein $(u_1, v_1)$ are the image pixel coordinates of $p_1$ and $(u_2, v_2)$ are the image pixel coordinates of $p_2$.
Obtaining:

$$(u_2 u_1,\; u_2 v_1,\; u_2,\; v_2 u_1,\; v_2 v_1,\; v_2,\; u_1,\; v_1,\; 1)\cdot e = 0$$

wherein:

$$e = (e_1, e_2, e_3, e_4, e_5, e_6, e_7, e_8, e_9)^{\mathsf T}$$

is the essential matrix E written out as a vector. Using the same representation for the other point pairs and putting all the resulting equations together gives a linear system, with $(u^i, v^i)$ denoting the i-th matched point pair:

$$\begin{pmatrix} u_2^1 u_1^1 & u_2^1 v_1^1 & u_2^1 & v_2^1 u_1^1 & v_2^1 v_1^1 & v_2^1 & u_1^1 & v_1^1 & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ u_2^8 u_1^8 & u_2^8 v_1^8 & u_2^8 & v_2^8 u_1^8 & v_2^8 v_1^8 & v_2^8 & u_1^8 & v_1^8 & 1 \end{pmatrix}\, e = 0$$

The essential matrix E is obtained by solving this linear system.
Performing singular value decomposition on E yields 4 candidate combinations of t and R. Only one of the 4 results gives positive depth values, and the combination of t and R with positive depths is the translation matrix and rotation matrix of the second image relative to the first image.
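As a concrete illustration of the eight-point estimation and the cheirality-based decomposition just described, the following is a minimal numpy sketch. It is not part of the patent: all names and the synthetic scene are our own, and the point coordinates are assumed to be normalized image coordinates (pixel coordinates premultiplied by the inverse intrinsic matrix).

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Estimate E from >= 8 corresponding normalized points (N, 2) each."""
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    # Each row is (u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1), as above.
    A = np.column_stack([u2 * u1, u2 * v1, u2,
                         v2 * u1, v2 * v1, v2,
                         u1, v1, np.ones_like(u1)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)          # null vector of the linear system
    U, _, Vt = np.linalg.svd(E)       # project onto the essential manifold
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def triangulate(R, t, x1, x2):
    """Linear triangulation with P1 = [I|0], P2 = [R|t]."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    pts = []
    for (a, b), (c, d) in zip(x1, x2):
        M = np.vstack([a * P1[2] - P1[0], b * P1[2] - P1[1],
                       c * P2[2] - P2[0], d * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(M)
        X = Vt[-1]
        pts.append(X[:3] / X[3])
    return np.array(pts)

def decompose_essential(E, x1, x2):
    """Return the one of the 4 (R, t) candidates with positive depths."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    best, best_count = None, -1
    for R in (U @ W @ Vt, U @ W.T @ Vt):
        for t in (U[:, 2], -U[:, 2]):
            X = triangulate(R, t, x1, x2)
            z1 = X[:, 2]                    # depth in camera 1
            z2 = (X @ R.T + t)[:, 2]        # depth in camera 2
            count = int(np.sum((z1 > 0) & (z2 > 0)))
            if count > best_count:
                best, best_count = (R, t), count
    return best

# Synthetic check with a known relative pose (hypothetical values).
rng = np.random.default_rng(0)
P_w = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], (12, 3))
th = 0.1
R_true = np.array([[np.cos(th), 0.0, np.sin(th)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(th), 0.0, np.cos(th)]])
t_true = np.array([-1.0, 0.05, 0.1])
x1 = P_w[:, :2] / P_w[:, 2:]                # camera 1 at the origin
P_c2 = P_w @ R_true.T + t_true
x2 = P_c2[:, :2] / P_c2[:, 2:]
E = eight_point_essential(x1, x2)
R_est, t_est = decompose_essential(E, x1, x2)
```

Note that the translation is recoverable only up to scale, and that a production pipeline would wrap this linear solve in a robust estimator such as RANSAC, since real dashcam correspondences contain outliers.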
Step 14: using the rotation matrix and translation matrix, compute from the image pixel coordinates the coordinates of the marker's feature points in a world coordinate system referenced to the camera.
The camera is placed in three-dimensional space; the world coordinate system is the reference frame that describes the camera's position, and the camera's position in turn describes the position of any other object in this three-dimensional environment. Let P be a point in the real world whose location in the world coordinate system is

$$(X_W,\; Y_W,\; Z_W)$$

In this embodiment, P is the real position of a point of the guideboard. The camera coordinate system takes the camera's optical centre as its origin; its z-axis coincides with the optical axis, i.e. points out of the front of the camera, and the positive x- and y-directions are parallel to the corresponding axes of the image coordinate system. As can be seen from fig. 4, f, the focal length of the camera, is the distance between the camera-coordinate-system origin $O_c$ and the origin o of the image physical coordinate system.
o-xy is the image physical coordinate system, also called the image plane coordinate system. It expresses pixel positions in physical units, with the coordinate origin at the intersection of the camera's optical axis with the image plane, i.e. the principal point at the centre of the image. The o-xy system is measured in millimetres (mm), matching the physical dimensions of the camera's internal CCD sensor. The final photograph, however, is measured in pixels, e.g. 640 x 480, so a further conversion from image physical coordinates to image pixel coordinates is required.
The image pixel coordinate system uv is shown in fig. 4. Its unit is the pixel and its origin is the top-left corner of the image. The conversion between image physical coordinates and image pixel coordinates is the relation between millimetres and pixel counts, i.e. pixels per millimetre. For example, if the camera's CCD sensor measures 8 mm x 6 mm, the image is 640 x 480 pixels, and dx, dy denote the physical size of one pixel along the two axes of the image pixel coordinate system, then

$$dx = \frac{8\ \mathrm{mm}}{640} = \frac{1}{80}\ \mathrm{mm}, \qquad dy = \frac{6\ \mathrm{mm}}{480} = \frac{1}{80}\ \mathrm{mm}$$
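The pixel-size arithmetic above can be checked in a few lines. This sketch uses the sensor dimensions quoted in the example; the centred principal point is an assumption, as is the helper name.

```python
# Physical pixel size from the example above:
# an 8 mm x 6 mm CCD imaged onto 640 x 480 pixels.
sensor_w_mm, sensor_h_mm = 8.0, 6.0
width_px, height_px = 640, 480

dx = sensor_w_mm / width_px    # mm per pixel horizontally -> 1/80 mm
dy = sensor_h_mm / height_px   # mm per pixel vertically   -> 1/80 mm

# Converting image physical coordinates (x, y) in mm to pixel coordinates
# (u, v), with the principal point assumed at the image centre:
u0, v0 = width_px / 2, height_px / 2

def physical_to_pixel(x_mm, y_mm):
    return x_mm / dx + u0, y_mm / dy + v0
```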
A point

$$(X_W,\; Y_W,\; Z_W)$$

in the world coordinate system images at point p, whose coordinates are

$$(x,\; y)$$

in the image physical coordinate system and

$$(u,\; v)$$

in the image pixel coordinate system. From these conversion relations, the world coordinates of point P relative to the camera position are computed from P's pixel coordinates in the image. According to the conversion formula below, P lies on a ray that starts at the camera and whose direction relative to the camera is determined:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & \gamma & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^{\mathsf T} & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix}$$

where $Z_c$ is the depth value; u and v are P's coordinates in the pixel coordinate system; $\gamma$ is the camera's distortion (skew) parameter; dx and dy indicate how many length units one pixel occupies in the x- and y-directions respectively; $u_0$, $v_0$ are the horizontal and vertical offsets, in pixels, between the image centre and the origin of the pixel coordinate system; f is the camera focal length; R is the rotation matrix and t the translation matrix, present because the second image is rotated and translated relative to the first; and $(X_W, Y_W, Z_W)$ are the coordinates of point P in the world coordinate system.
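The claim that P lies on a ray fixed by its pixel coordinates can be sketched as follows. The intrinsic values are hypothetical, and lens distortion and skew are ignored, whereas a real dashcam image would first need undistortion.

```python
import numpy as np

# Assumed pinhole intrinsics: focal length in pixels (f/dx = f/dy) and
# principal point (u0, v0); zero skew, no distortion.
fx = fy = 800.0
u0, v0 = 320.0, 240.0
K = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])

def pixel_to_ray(u, v):
    """Unit direction of the ray from the optical centre through pixel
    (u, v), in the camera coordinate system: the formula above with the
    depth Z_c left unknown."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)

# Any point Z_c * K^-1 (u, v, 1)^T along the ray projects back to the
# same pixel, whatever depth is chosen:
ray = pixel_to_ray(400.0, 300.0)
point = 5.0 * ray                     # arbitrary depth along the ray
proj = K @ point
u_back, v_back = proj[0] / proj[2], proj[1] / proj[2]
```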
By the method above, two sets of coordinates of the feature points in the first and second images, relative to the camera, are obtained. Assuming the first image was taken at point A and the second at point B, the two results for P are

$$P_A \quad \text{and} \quad P_B$$

The coordinates $P_A$ and $P_B$ obtained in this way lie, respectively, on a straight line starting at the camera when the vehicle is at point A and on a straight line starting at the camera when the vehicle is at point B. The intersection of the two straight lines is the world-coordinate-system coordinate of point P relative to the camera.
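The intersection step can be sketched numerically. With noisy data the two rays are generally skew and do not meet exactly, so a common choice, assumed here, is the least-squares point closest to both rays; all names and values are illustrative.

```python
import numpy as np

def closest_point_to_rays(origins, directions):
    """Least-squares point nearest to a set of rays (origin + s * dir).
    With noiseless data from two cameras this is exactly the intersection
    described above; with noise it is the closest compromise point."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += M
        b += M @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)

# Synthetic check: a guideboard corner P seen from camera positions A and B.
P_true = np.array([2.0, 1.5, 10.0])
cam_A = np.array([0.0, 0.0, 0.0])
cam_B = np.array([1.0, 0.0, 0.5])
P_est = closest_point_to_rays([cam_A, cam_B],
                              [P_true - cam_A, P_true - cam_B])
```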
In the above embodiment, the rotation matrix and translation matrix of the second image relative to the first were obtained by the eight-point method. Following existing algorithms, 5 pairs of corresponding pixel points may instead be selected from the first and second images and the rotation and translation matrices computed by the five-point method, which is not repeated here. The embodiments of the present application do not exclude other calculation methods capable of producing the required rotation and translation matrices.
The above embodiment obtains the rotation and translation matrices of the second image relative to the first; obtaining those of the first image relative to the second is an equivalent mathematical transformation and likewise achieves the object of the present invention.
Using the embodiment method described above, the world-coordinate-system coordinates of point P relative to the camera are obtained. They may be expressed in the world coordinate system of the camera at the moment the first image was taken, or of the camera at the moment the second image was taken.
Starting from P's coordinates relative to the camera, the camera's external parameters, i.e. a translation matrix of the camera relative to the vehicle-mounted GPS device and a rotation matrix describing the camera's pitch and similar information, are further obtained, and the three-dimensional spatial position of P, including its geographic coordinates and its height above the ground, is computed.
According to the above embodiment, once the three-dimensional spatial position of each of the guideboard's 4 vertices has been obtained by this method, the three-dimensional shape of the guideboard for drawing a high-precision map is available. In the same way, geographic coordinates and three-dimensional shape information can be obtained for any marker that needs to be drawn in the high-precision map.
The application also provides a marker drawing system that employs the above method. The system comprises:
a cache unit for acquiring two images containing the same marker;
an image recognition unit for recognizing the images acquired by the cache unit and obtaining the pixel coordinates of the marker's corresponding feature points in each of the two images;
a calculation unit for calculating the rotation matrix and translation matrix of the second image relative to the first image;
and a first processor unit for computing, from the pixel coordinates obtained by the image recognition unit and the rotation and translation matrices obtained by the calculation unit, the coordinates of the marker's feature points in a world coordinate system referenced to the camera.
The calculation unit computes the rotation matrix and translation matrix of the second image relative to the first image by: selecting the image pixel coordinates of at least eight pairs of corresponding pixel points in the two images and applying the eight-point method; or selecting the image pixel coordinates of at least five pairs of corresponding pixel points and applying the five-point method.
The present application also provides a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method described above.
The aspects of the present application have been described in detail above with reference to the accompanying drawings. Each of the above embodiments has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments. Those skilled in the art will appreciate that the acts and modules referred to in the specification are not necessarily required by the present application. It will further be understood that the steps of the methods of the embodiments may be reordered, combined, or deleted according to actual needs, and that the modules of the devices of the embodiments may likewise be combined, divided, or deleted according to actual needs.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present application.
Having described the embodiments of the present application, the foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technology found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (7)

1. A marker drawing method, comprising:
acquiring, from video of a vehicle event data recorder, two images that contain the same marker viewed at different angles;
recognizing the images to obtain the image pixel coordinates of corresponding feature points of the marker in the two images, wherein the marker is a guideboard and the feature points of the marker are the vertices of the guideboard;
calculating a rotation matrix and a translation matrix of the second image relative to the first image;
calculating, from the image pixel coordinates and using the rotation matrix and the translation matrix, the world coordinate system coordinates of each vertex of the guideboard relative to the camera;
acquiring the geographic coordinate information of the camera at the time each of the two images was captured;
and calculating the geographic coordinate information of the guideboard from the geographic coordinate information of the camera and the world coordinate system coordinates of each vertex of the guideboard relative to the camera.
2. The method according to claim 1, wherein calculating the rotation matrix and the translation matrix of the second image relative to the first image specifically comprises:
selecting the image pixel coordinates of at least eight pairs of corresponding pixel points in the first image and the second image;
and calculating the rotation matrix and the translation matrix of the second image relative to the first image from the image pixel coordinates of the eight pairs of pixel points by the eight-point method.
3. The method according to claim 1, wherein calculating the rotation matrix and the translation matrix of the second image relative to the first image specifically comprises:
selecting the image pixel coordinates of at least five pairs of corresponding pixel points in the first image and the second image;
and calculating the rotation matrix and the translation matrix of the second image relative to the first image from the image pixel coordinates of the five pairs of pixel points by the five-point method.
4. A marker drawing system, comprising:
a buffer unit for acquiring, from video of a vehicle event data recorder, two images that contain the same marker viewed at different angles;
an image recognition unit for recognizing the images acquired by the buffer unit and obtaining the image pixel coordinates of corresponding feature points of the marker in the two images, wherein the marker is a guideboard and the feature points of the marker are the vertices of the guideboard;
a calculation unit for calculating a rotation matrix and a translation matrix of the second image relative to the first image;
and a first processor unit for calculating, from the image pixel coordinates obtained by the image recognition unit and using the rotation matrix and the translation matrix obtained by the calculation unit, the world coordinate system coordinates of each vertex of the guideboard relative to the camera; acquiring the geographic coordinate information of the camera at the time each of the two images was captured; and calculating the geographic coordinate information of the guideboard from the geographic coordinate information of the camera and the world coordinate system coordinates of each vertex of the guideboard relative to the camera.
5. The system according to claim 4, wherein the calculation unit is specifically configured to:
select the image pixel coordinates of at least eight pairs of corresponding pixel points in the first image and the second image;
and calculate the rotation matrix and the translation matrix of the second image relative to the first image from the image pixel coordinates of the eight pairs of pixel points by the eight-point method.
6. The system according to claim 4, wherein the calculation unit is specifically configured to:
select the image pixel coordinates of at least five pairs of corresponding pixel points in the first image and the second image;
and calculate the rotation matrix and the translation matrix of the second image relative to the first image from the image pixel coordinates of the five pairs of pixel points by the five-point method.
7. A non-transitory machine-readable storage medium having executable code stored thereon, wherein the executable code, when executed by a processor of an electronic device, causes the processor to perform the method of any one of claims 1 to 3.
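The final step of the claimed method, converting the camera's geographic position and the vertex's camera-relative coordinates into the marker's geographic coordinates, can be sketched as follows. This is an illustrative flat-earth approximation, not the patented implementation; it assumes the rotation from the camera frame to the local east-north-up (ENU) frame is known (for example from the vehicle heading), and all names are illustrative:

```python
import math
import numpy as np

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def marker_geographic(cam_lat, cam_lon, offset_cam, R_cam_to_enu):
    """Approximate geographic coordinates of a marker vertex.

    cam_lat, cam_lon : camera position in degrees
    offset_cam       : (x, y, z) of the vertex in the camera frame, metres
    R_cam_to_enu     : 3x3 rotation from the camera frame to the local
                       east-north-up frame (assumed known)
    Uses a flat-earth approximation, adequate for offsets of tens of
    metres such as a guideboard seen from the road.
    """
    east, north, _up = R_cam_to_enu @ np.asarray(offset_cam)
    # Metres of northing -> degrees of latitude; metres of easting ->
    # degrees of longitude, shrunk by cos(latitude).
    dlat = math.degrees(north / EARTH_RADIUS_M)
    dlon = math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(cam_lat))))
    return cam_lat + dlat, cam_lon + dlon
```

Applied to each vertex in turn, this yields the geographic coordinate information of the guideboard from the camera's geographic coordinates and the camera-relative vertex coordinates.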
CN202010965481.5A 2020-09-15 2020-09-15 Marker drawing method and system Active CN111932627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010965481.5A CN111932627B (en) 2020-09-15 2020-09-15 Marker drawing method and system

Publications (2)

Publication Number Publication Date
CN111932627A CN111932627A (en) 2020-11-13
CN111932627B true CN111932627B (en) 2021-01-05

Family

ID=73333510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010965481.5A Active CN111932627B (en) 2020-09-15 2020-09-15 Marker drawing method and system

Country Status (1)

Country Link
CN (1) CN111932627B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112118537B (en) * 2020-11-19 2021-02-19 蘑菇车联信息科技有限公司 Method and related device for estimating movement track by using picture
CN112668505A (en) 2020-12-30 2021-04-16 北京百度网讯科技有限公司 Three-dimensional perception information acquisition method of external parameters based on road side camera and road side equipment
CN113139031B (en) * 2021-05-18 2023-11-03 智道网联科技(北京)有限公司 Method and related device for generating traffic sign for automatic driving

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824278A (en) * 2013-12-10 2014-05-28 清华大学 Monitoring camera calibration method and system
CN106447766A (en) * 2016-09-28 2017-02-22 成都通甲优博科技有限责任公司 Scene reconstruction method and apparatus based on mobile device monocular camera
CN106651953A (en) * 2016-12-30 2017-05-10 山东大学 Vehicle position and gesture estimation method based on traffic sign
CN107239748A (en) * 2017-05-16 2017-10-10 南京邮电大学 Robot target identification and localization method based on gridiron pattern calibration technique
CN110148177A (en) * 2018-02-11 2019-08-20 百度在线网络技术(北京)有限公司 For determining the method, apparatus of the attitude angle of camera, calculating equipment, computer readable storage medium and acquisition entity
CN111563936A (en) * 2020-04-08 2020-08-21 蘑菇车联信息科技有限公司 Camera external parameter automatic calibration method and automobile data recorder

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101830249B1 (en) * 2014-03-20 2018-03-29 한국전자통신연구원 Position recognition apparatus and method of mobile object

Also Published As

Publication number Publication date
CN111932627A (en) 2020-11-13

Similar Documents

Publication Publication Date Title
CN111932627B (en) Marker drawing method and system
CN110148169B (en) Vehicle target three-dimensional information acquisition method based on PTZ (pan/tilt/zoom) pan-tilt camera
CN108805934B (en) External parameter calibration method and device for vehicle-mounted camera
CA2395257C (en) Any aspect passive volumetric image processing method
JP4232167B1 (en) Object identification device, object identification method, and object identification program
JP4284644B2 (en) 3D model construction system and 3D model construction program
KR102200299B1 (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
CN111261016B (en) Road map construction method and device and electronic equipment
JP4978615B2 (en) Target identification device
CN112444242A (en) Pose optimization method and device
CN111830953A (en) Vehicle self-positioning method, device and system
CN109920009B (en) Control point detection and management method and device based on two-dimensional code identification
CN111930877B (en) Map guideboard generation method and electronic equipment
CN114820769A (en) Vehicle positioning method and device, computer equipment, storage medium and vehicle
CN114119682A (en) Laser point cloud and image registration method and registration system
CN111724432B (en) Object three-dimensional detection method and device
KR102249381B1 (en) System for generating spatial information of mobile device using 3D image information and method therefor
CN112257668A (en) Main and auxiliary road judging method and device, electronic equipment and storage medium
CN114140533A (en) Method and device for calibrating external parameters of camera
CN116823966A (en) Internal reference calibration method and device for camera, computer equipment and storage medium
CN111243021A (en) Vehicle-mounted visual positioning method and system based on multiple combined cameras and storage medium
CN108090930A (en) Barrier vision detection system and method based on binocular solid camera
CN115205382A (en) Target positioning method and device
CN115690138A (en) Road boundary extraction and vectorization method fusing vehicle-mounted image and point cloud
WO2022133986A1 (en) Accuracy estimation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant