CN111932627A - Marker drawing method and system - Google Patents
- Publication number
- CN111932627A (application CN202010965481.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- coordinates
- marker
- calculating
- guideboard
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Geometry (AREA)
- Databases & Information Systems (AREA)
- Remote Sensing (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Computer Graphics (AREA)
- Image Analysis (AREA)
Abstract
The application relates to a marker drawing method and system. The method comprises the following steps: acquiring two images containing the same marker; recognizing the images to obtain the image pixel coordinates of the marker's corresponding feature points in each of the two images; calculating the rotation matrix and translation matrix of the second image relative to the first image; and computing, from the image pixel coordinates and using the rotation matrix and translation matrix, the coordinates of the marker's feature points in a world coordinate system relative to the camera. With the scheme provided by the application, the markers required by a high-precision map can be drawn from images captured by an automobile data recorder (dashcam).
Description
Technical Field
The application relates to the technical field of navigation, in particular to a marker drawing method and system.
Background
With the development of space technology and information technology, unified management and intelligent interaction of urban infrastructure have gradually entered the public field of vision. As an information-bearing carrier of urban geographic entities, the guideboard serves a place-name guidance function; as infrastructure distributed at urban road intersections, it is spatially specific and is a good carrier for a city's basic Internet of Things.
Current practice in the prior art is to map high-precision maps with the three-dimensional point-cloud method. This method, however, requires a special mapping vehicle, so it cannot be popularized and the mapping scale is difficult to increase. As a result, under the existing environment, when a road changes, the data of the high-precision map is often not updated in time; and, limited by the number of mapping vehicles and professional mapping teams, the overall efficiency of high-precision mapping is low.
Disclosure of Invention
The application provides a marker drawing method and system that can draw the markers required by a high-precision map from acquired two-dimensional images.
A marker mapping method comprising: acquiring two images containing the same marker; recognizing the images to obtain the image pixel coordinates of the marker's corresponding feature points in each of the two images; calculating the rotation matrix and translation matrix of the second image relative to the first image; and computing, from the image pixel coordinates and using the rotation matrix and translation matrix, the coordinates of the marker's feature points in a world coordinate system relative to the camera.
In the above method, calculating the rotation matrix and translation matrix of the second image relative to the first image specifically comprises: selecting the image pixel coordinates of at least eight pairs of corresponding pixel points in the first image and the second image; and calculating the rotation matrix and translation matrix of the second image relative to the first image from the image pixel coordinates of the eight pairs of pixel points using the eight-point method.
Alternatively, calculating the rotation matrix and translation matrix of the second image relative to the first image specifically comprises: selecting the image pixel coordinates of at least five pairs of corresponding pixel points in the first image and the second image; and calculating the rotation matrix and translation matrix of the second image relative to the first image from the image pixel coordinates of the five pairs of pixel points using the five-point method.
In the method, the marker is a guideboard, and the feature points of the marker are the vertexes of the guideboard; computing the world-coordinate position of each vertex of the guideboard yields the three-dimensional coordinates of the guideboard required for high-precision mapping.
The method further comprises: acquiring the geographic coordinate information of the camera at the moment each of the two images was captured; and calculating the geographic coordinate information of the marker from the geographic coordinates of the camera and the world coordinates of the marker's feature points relative to the camera.
A marker mapping system, comprising: a cache unit for acquiring two images containing the same marker; an image recognition unit for recognizing the images acquired by the cache unit and obtaining the image pixel coordinates of the marker's corresponding feature points in each of the two images; a calculation unit for calculating the rotation matrix and translation matrix of the second image relative to the first image; and a first processor unit for computing, from the feature-point pixel coordinates obtained by the image recognition unit and using the rotation matrix and translation matrix obtained by the calculation unit, the coordinates of the marker's feature points in a world coordinate system relative to the camera.
In the above system, the calculation unit is specifically configured to: identify and select at least eight pairs of corresponding pixel points in the first image and the second image and obtain their image pixel coordinates; and calculate the rotation matrix and translation matrix of the second image relative to the first image from the image pixel coordinates of the eight pairs of pixel points using the eight-point method.
Alternatively, the calculation unit is specifically configured to: identify and select at least five pairs of corresponding pixel points in the first image and the second image and obtain their image pixel coordinates; and calculate the rotation matrix and translation matrix of the second image relative to the first image from the image pixel coordinates of the five pairs of pixel points using the five-point method.
In the system, the image recognition unit is specifically configured to recognize a guideboard in an image and to recognize the image pixel coordinates of the guideboard's feature points, where the feature points of the guideboard are its vertexes.
A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the above-described method.
The technical scheme provided by the application can comprise the following beneficial effects:
the markers required by a high-precision map can be drawn from acquired two-dimensional images, so they can be drawn from images captured by an automobile data recorder. Moreover, when drawing a marker from two-dimensional images acquired by a monocular camera such as a dashcam, the rotation matrix and translation matrix between the two camera poses are recovered from the corresponding points in the two images. The spatial coordinates of an object computed with the two images as input are therefore highly accurate, the calculation precision is guaranteed, and dashcam images can be applied to drawing the markers required by a high-precision map.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application, as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
FIG. 1 is a schematic flow chart diagram illustrating a method for mapping a marker according to an embodiment of the present application;
fig. 2 is a schematic diagram of an image containing a marker, for the marker drawing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a translation matrix and a rotation matrix algorithm of a marker mapping method according to an embodiment of the present application;
fig. 4 is a schematic diagram of calculating world-coordinate-system coordinates from an image, for the marker mapping method according to an embodiment of the present application.
Detailed Description
Preferred embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In the high-precision map data acquisition process, a mapping vehicle travels along a road and collects map data of surrounding buildings, traffic signboards, and other markers that need to be drawn on the map. The three-dimensional laser scanning method adopted in the prior art can rapidly scan a measured object and directly obtain high-precision point-cloud data, enabling rapid vectorization of the real world.
The embodiment of the application provides a method of obtaining images of a scene by photographing or video capture and computing from those images the data required for drawing a high-precision map. In actual image acquisition, however, the shooting angle of the camera changes constantly due to factors such as road undulation, changes in road direction, and changes in the driving environment, and naive photographic surveying in such an environment cannot yield accurate data. Therefore, when obtaining high-precision map data from camera images, the influence of changes in the camera's angle and position during driving on the computation of a marker's spatial position must be taken into account.
In high-precision mapping, map data of buildings, roadside electronic photographing equipment, and traffic signals and signs — for example, traffic lights, signboards, and lane lines — are all required when drawing high-precision maps. In the embodiments of the present application, a marker refers to any object that needs to be drawn in a high-precision map and is not limited to the above examples. Fig. 1 is a schematic flow chart of a marker mapping method according to an embodiment of the present application. With embodiments such as the one shown in fig. 1, the world-coordinate position or spatial position information of a marker can be accurately obtained from two-dimensional images and used for drawing a high-precision map.
Referring to fig. 1, the marker mapping method includes:
Step 11: acquiring two images containing the same marker.
In the embodiment of the application, a camera device installed on a vehicle, such as a driving recorder, captures video while the vehicle is driving. As the vehicle travels along a road and approaches a guideboard, the camera device on the vehicle obtains several images of the guideboard at different times, i.e., several frames of the video.
As shown in fig. 2, the image is acquired as the vehicle passes the guideboard; a rectangular guideboard appears on the right side of the captured image. As the mapping vehicle travels along the road and approaches the guideboard, the monocular camera on the vehicle terminal captures guideboard images at a preset acquisition frequency. Because the mapping vehicle keeps moving while the images are captured, each frame is acquired by the monocular camera from a different angle, so guideboard images at different angles are obtained.
The geographic coordinate information at the moment each guideboard image is obtained is also recorded. While acquiring the guideboard images, the mapping vehicle keeps driving and continuously records its geographic coordinates, so the geographic coordinates corresponding to the two selected images can be looked up by their capture times.
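The capture-time lookup described above can be sketched as a simple interpolation over the recorded track. This is a minimal illustration, not the patent's implementation; the track structure and sample values are hypothetical:

```python
import numpy as np

def gps_at(track_t, track_lat, track_lon, t_image):
    """Interpolate the recorded GPS track at an image's capture time."""
    lat = np.interp(t_image, track_t, track_lat)
    lon = np.interp(t_image, track_t, track_lon)
    return lat, lon

# Hypothetical 1 Hz track recorded while the mapping vehicle approaches a guideboard.
t = np.array([0.0, 1.0, 2.0, 3.0])
lat = np.array([31.2300, 31.2301, 31.2302, 31.2303])
lon = np.array([121.4700, 121.4702, 121.4704, 121.4706])

# Geographic coordinates for a frame captured at t = 1.5 s.
lat_img, lon_img = gps_at(t, lat, lon, 1.5)
```

Linear interpolation assumes the vehicle moves roughly uniformly between GPS fixes, which is reasonable at typical 1 Hz logging rates.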
Step 12: recognizing the images to obtain the image pixel coordinates of the marker's corresponding feature points in each of the two images.
Referring to fig. 2, the two images containing the same marker are recognized, and the image pixel coordinates of the marker's corresponding feature points in each of the two images are obtained. Suppose a 2-second period elapses while the vehicle travels toward the guideboard from far to near; any two frames containing the guideboard are taken from that 2-second segment of the video. Taking the 4 vertexes of the guideboard as feature points, the image pixel coordinates of the 4 vertexes in the first image and in the second image are obtained according to a preset rule.
Image pixel coordinates describe the position of an imaged object's image point on a digital image; they are the coordinate system in which information read from the camera is expressed, and their unit is the pixel. Taking the vertex at the upper-left corner of the image plane as the coordinate origin, with the u and v axes parallel to the x and y axes of the image coordinate system, a coordinate value is written (u, v). An image captured by a digital camera is first formed as a standard electrical signal and then converted into a digital image by analog-to-digital conversion. Each image is stored as an M × N array, where the value of each element in the M-row, N-column image represents the gray level of that image point. Each element is called a pixel, and the pixel coordinate system is the image coordinate system with the pixel as its unit.
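The M × N storage form and the (u, v) convention described above can be illustrated in a few lines (the dimensions and values are illustrative only):

```python
import numpy as np

# A grayscale image stored as an M-row by N-column array; each element
# is the gray level of one pixel.
M, N = 480, 640
img = np.zeros((M, N), dtype=np.uint8)

# Pixel coordinates (u, v): u counts columns from the left edge,
# v counts rows from the top edge, origin at the upper-left corner.
u, v = 100, 50
img[v, u] = 255  # note: array indexing is [row, column] = [v, u]
```

The row/column order of array indexing being the reverse of the (u, v) order is a common source of bugs when reading feature-point coordinates out of an image.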
The embodiment of the present application does not limit which feature points of the guideboard are selected in the image; they may be any recognizable feature points of the guideboard. For example, four vertexes may be recognized for a square guideboard, three vertexes for a triangular one, and the two horizontal and two vertical extreme points for a circular one. The guideboard of this embodiment is therefore not limited to the square or rectangular guideboard shown in fig. 2 and includes, for example, guideboards of triangular or circular shape.
Step 13: calculating the rotation matrix and translation matrix of the second image relative to the first image.
Using image recognition, at least eight pairs of corresponding pixel points and their image pixel coordinates are identified in the first image and the second image. The imaging points of the same physical object in the first image and the second image are in correspondence. As long as this correspondence is satisfied, the choice of pixel points is unrestricted: they may be recognizable pixel points of buildings or other objects in the image, and may include the four vertexes of the guideboard of this embodiment.
The image pixel coordinates of the eight pairs of pixel points in the first image and the second image are selected, and the rotation matrix and translation matrix of the second image relative to the first image are calculated by the eight-point method.
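The eight-point construction — one linear equation per matched pair, stacked and solved for the essential matrix — can be sketched in NumPy as follows. This is a sketch, not the patent's implementation, and it assumes the pixel coordinates have already been normalized by the camera intrinsics (with raw pixel coordinates the same construction yields the fundamental matrix instead):

```python
import numpy as np

def estimate_essential(x1, x2):
    """Eight-point method: estimate the essential matrix E from >= 8 pairs
    of corresponding normalized image coordinates.
    x1, x2: (N, 2) arrays of points in the first and second image."""
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    # One row per point pair: the expanded epipolar constraint x2^T E x1 = 0.
    A = np.column_stack([u2 * u1, u2 * v1, u2,
                         v2 * u1, v2 * v1, v2,
                         u1, v1, np.ones(len(x1))])
    # The stacked equations A e = 0 are solved (up to scale) by the right
    # singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential-matrix manifold: two equal singular
    # values and one zero.
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```

The function recovers E only up to scale, which is inherent to the method: monocular two-view geometry fixes the translation direction but not its magnitude.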
Referring to fig. 3, two images of the same guideboard are captured at different positions, and the pixel points corresponding to the same object in the two images satisfy the epipolar constraint. Here P is a vertex of a real object, such as the guideboard, in the world coordinate system; O1 and O2 are the optical-center positions of the monocular camera for the first image and the second image, respectively; I1 and I2 denote the first image and the second image; p1 and p2 are the projections of P onto the first image I1 and the second image I2; and e1 and e2 are the epipoles. The epipolar constraint is:

x2^T E x1 = 0

where x1 and x2 are the homogeneous coordinates of p1 and p2, and

E = [t]x R

where E is the 3 × 3 essential matrix, t is the translation matrix, R is the rotation matrix, [t]x denotes the skew-symmetric matrix built from t, and ^T denotes the transpose of a matrix.

E is found by the eight-point method. Let (u1, v1) be the image pixel coordinates of p1 and (u2, v2) the image pixel coordinates of p2. Writing the entries of E as

E = [E1 E2 E3; E4 E5 E6; E7 E8 E9]

and expanding the epipolar constraint for one pair of points gives:

u2·u1·E1 + u2·v1·E2 + u2·E3 + v2·u1·E4 + v2·v1·E5 + v2·E6 + u1·E7 + v1·E8 + E9 = 0

The same expression is written for the other point pairs, with (ui, vi) denoting the i-th matched pair, and all the resulting equations are stacked into one linear system. The essential matrix E is obtained, up to scale, by solving this linear system.
A singular value decomposition of E yields 4 candidate combinations of t and R. Only one of the 4 results gives a positive depth value, and the combination of t and R with positive depth is the translation matrix and rotation matrix of the second image relative to the first image.
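The decomposition step above — four candidate (t, R) combinations, of which only the one giving positive depth is kept — can be sketched as follows. This is a textbook-style sketch under the usual E = U diag(1,1,0) Vᵀ factorization, not code from the patent; the helper names are illustrative:

```python
import numpy as np

def triangulate_depths(R, t, x1, x2):
    """Depths of one point in camera 1 and camera 2 by linear triangulation.
    With P2 = R P1 + t, P1 = z1*a, P2 = z2*b, solve z1*(R a) - z2*b = -t."""
    a = np.array([x1[0], x1[1], 1.0])  # viewing ray in camera 1
    b = np.array([x2[0], x2[1], 1.0])  # viewing ray in camera 2
    A = np.column_stack([R @ a, -b])
    z1, z2 = np.linalg.lstsq(A, -t, rcond=None)[0]
    return z1, z2

def decompose_essential(E, x1, x2):
    """Split E into its four candidate (R, t) pairs and keep the one that
    places a triangulated point at positive depth in both cameras.
    x1, x2: one pair of corresponding normalized image coordinates."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (determinant +1).
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    candidates = [(U @ W @ Vt, U[:, 2]), (U @ W @ Vt, -U[:, 2]),
                  (U @ W.T @ Vt, U[:, 2]), (U @ W.T @ Vt, -U[:, 2])]
    for R, t in candidates:
        z1, z2 = triangulate_depths(R, t, x1, x2)
        if z1 > 0 and z2 > 0:
            return R, t
    return None
```

Because E is only known up to scale, the recovered t is a unit vector: the direction of the camera's motion, not its metric length.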
Step 14: computing, from the image pixel coordinates and using the rotation matrix and translation matrix, the coordinates of the marker's feature points in a world coordinate system relative to the camera.
The camera is placed in three-dimensional space, and the world coordinate system is the reference coordinate system describing the position of the camera; the position of the camera is in turn used to describe the position of any other object placed in this three-dimensional environment. Let P be a point in the real world whose location in the world coordinate system is (XW, YW, ZW); in the embodiment of the application, P is the real position of a vertex of the guideboard.
The camera coordinate system takes the optical center of the camera as its origin; its z axis coincides with the optical axis, i.e., points out of the front of the camera, and the positive directions of its x and y axes are parallel to those of the image coordinate system. As can be seen in fig. 4, f is the focal length of the camera: the distance from the origin of the camera coordinate system to the origin o of the image physical coordinate system.
o-xy is the image physical coordinate system, also called the plane coordinate system. It expresses pixel positions in physical units, with its coordinate origin at the intersection of the camera's optical axis and the image plane, i.e., the optical center projects to the central point of the image. The o-xy coordinate system is measured in millimeters (mm), matching the physical size of the camera's internal CCD sensor. The final image, however, is measured in pixels, e.g., 640 × 480, so a further conversion from image physical coordinates to image pixel coordinates is required.
The image pixel coordinate system uv is shown in fig. 4. It uses the pixel as its unit, with the coordinate origin at the upper-left corner of the image. The conversion between image physical coordinates and image pixel coordinates is the relation between millimeters and pixels, i.e., pixels per millimeter. For example, if the camera CCD sensor is 8 mm × 6 mm and the image size is 640 × 480 pixels, then the physical size of each pixel in the image pixel coordinate system is 8/640 = 1/80 mm.
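The millimeter-to-pixel conversion in the example above (8 mm × 6 mm sensor, 640 × 480 image) can be written out directly. The principal point is placed at the image center, as the text assumes, and the sign convention for the y axis (pointing the same way as v, i.e., downward) is an assumption of this sketch:

```python
import numpy as np

# Sensor and image dimensions from the example in the text.
sensor_mm = (8.0, 6.0)
image_px = (640, 480)

dx = sensor_mm[0] / image_px[0]  # physical width of one pixel: 1/80 mm
dy = sensor_mm[1] / image_px[1]  # physical height of one pixel: 1/80 mm

# Principal point (u0, v0): the image center, in pixels.
u0, v0 = image_px[0] / 2, image_px[1] / 2

def physical_to_pixel(x_mm, y_mm):
    """Convert image physical coordinates (mm, origin at the principal
    point) to image pixel coordinates (origin at the upper-left corner).
    Assumes the physical y axis points in the same direction as v."""
    return x_mm / dx + u0, y_mm / dy + v0
```

These dx, dy, u0, v0 values are exactly the entries of the intrinsic matrix K used in the projection formula below the figure.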
The point (XW, YW, ZW) in the world coordinate system images to the point p; the coordinates of p in the image physical coordinate system are (x, y), and its coordinates in the image pixel coordinate system are (u, v).
Based on the above conversion relations, the world coordinates of point P relative to the camera position are computed from the pixel coordinates of its image point p. According to the following projection formula, P lies on a straight line starting at the camera and pointing in a direction determined relative to the camera:

Zc [u, v, 1]^T = K [R | t] [XW, YW, ZW, 1]^T, with K = [f/dx, s, u0; 0, f/dy, v0; 0, 0, 1]

where Zc is the depth value; u and v are the pixel coordinates of p in the pixel coordinate system; s is a camera distortion (skew) parameter; dx and dy indicate how many length units one pixel occupies in the x direction and the y direction, respectively; u0 and v0 are the numbers of pixels, in the horizontal and vertical directions, between the center pixel coordinates of the image and the origin pixel coordinates of the image; f is the camera focal length; R is the rotation matrix and t the translation matrix, since the second image is rotated and translated relative to the first; and XW, YW, ZW are the coordinates of point P in the world coordinate system.
Following the above method, with the first image taken at point A and the second image taken at point B, the coordinates of the feature points relative to the camera's two world coordinate systems are obtained. The P-point coordinates obtained from the first image constrain P to a straight line starting at the camera when the vehicle is at point A, and those obtained from the second image constrain P to a straight line starting at the camera when the vehicle is at point B. The intersection point of the two straight lines is the world-coordinate position of P relative to the camera.
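The intersection of the two straight lines described above can be computed as the closest point between two rays. This is a minimal sketch: rays obtained from real, noisy images rarely intersect exactly, so the midpoint of their closest approach is used:

```python
import numpy as np

def intersect_rays(o1, d1, o2, d2):
    """Closest point between ray o1 + s*d1 and ray o2 + r*d2.
    Returns the midpoint of the shortest segment joining the two rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for s, r minimizing |(o1 + s*d1) - (o2 + r*d2)| in least squares.
    A = np.column_stack([d1, -d2])
    s, r = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    p1 = o1 + s * d1
    p2 = o2 + r * d2
    return (p1 + p2) / 2
```

Here o1 and o2 would be the camera positions at points A and B, and d1, d2 the back-projected viewing directions of the same guideboard vertex in the two images.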
In the above embodiment, the rotation matrix and translation matrix of the second image relative to the first image are obtained by the eight-point method. Alternatively, following existing algorithms, 5 pairs of corresponding pixel points can be selected from the first image and the second image and the five-point method used to calculate the rotation matrix and translation matrix, which is not described again here. The embodiments of the present application do not exclude other calculation methods that yield the required rotation matrix and translation matrix.
In the above embodiment, the rotation matrix and translation matrix of the second image relative to the first image are obtained; the rotation matrix and translation matrix of the first image relative to the second image may equally be used, since the two are related by a mathematical transformation, and either achieves the object of the present invention.
Using the method of the embodiment above, the world-coordinate position of point P relative to the camera is obtained. The coordinates may be expressed in the world coordinate system of the camera when the first image was taken, or in that of the camera when the second image was taken.
On the basis of the world coordinates of point P relative to the camera, the external parameters of the camera — a translation matrix of the camera relative to the vehicle-mounted GPS device, and a rotation matrix describing information such as the camera's pitch angle — are further applied to compute the three-dimensional spatial position of P, including its geographic coordinate information and its height above the ground.
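Applying the external parameters is a single rigid transform: a rotation for the camera's mounting attitude (pitch, etc.) and a translation from the camera to the vehicle's GPS device. The sketch below illustrates this; the extrinsic values and axis conventions are hypothetical, not from the patent:

```python
import numpy as np

def camera_to_vehicle(p_cam, R_ext, t_ext):
    """Rigid transform of a point from the camera frame to the
    vehicle/GPS-device frame: p_vehicle = R_ext @ p_cam + t_ext."""
    return R_ext @ p_cam + t_ext

# Hypothetical extrinsics: camera pitched down 5 degrees (rotation about
# the x axis), mounted 1.2 m forward of and 0.3 m above the GPS antenna.
pitch = np.deg2rad(-5.0)
R_ext = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(pitch), -np.sin(pitch)],
                  [0.0, np.sin(pitch), np.cos(pitch)]])
t_ext = np.array([0.0, -0.3, 1.2])

# A guideboard vertex 20 m straight ahead of the camera.
p_vehicle = camera_to_vehicle(np.array([0.0, 0.0, 20.0]), R_ext, t_ext)
```

From the vehicle frame, the point's geographic coordinates follow by combining the vehicle's recorded GPS position and heading at the capture time.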
According to the above embodiment, once the three-dimensional spatial position of each of the 4 vertexes of the guideboard has been obtained by the above method, the three-dimensional shape of the guideboard for drawing a high-precision map is available. In the same way, the geographic coordinate information and three-dimensional shape information of any marker that needs to be drawn in the high-precision map can be obtained by this method.
The application also provides a marker drawing system that adopts the above method. The system comprises:
and the buffer unit is used for acquiring two images containing the same identifier.
And the image identification unit is used for identifying the images acquired by the cache unit and acquiring the image pixel coordinates of the characteristic points of the markers corresponding to the two images respectively.
And the calculation unit is used for calculating a rotation matrix and a translation matrix of the second image relative to the first image.
And the first processor unit is used for calculating the coordinates of the image pixels acquired by the image identification unit by using the rotation matrix and the translation matrix acquired by the calculation unit to acquire the coordinates of the feature point of the marker relative to the world coordinate system of the camera.
The calculation unit calculates the rotation matrix and translation matrix of the second image relative to the first image either by selecting the image pixel coordinates of at least eight pairs of corresponding pixel points in the first image and the second image and applying the eight-point method, or by selecting the image pixel coordinates of at least five pairs of corresponding pixel points and applying the five-point method.
The present application also provides a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method described above.
The aspects of the present application have been described in detail hereinabove with reference to the accompanying drawings. In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. Those skilled in the art should also appreciate that the acts and modules referred to in the specification are not necessarily required in the present application. In addition, it can be understood that the steps in the method of the embodiment of the present application may be sequentially adjusted, combined, and deleted according to actual needs, and the modules in the device of the embodiment of the present application may be combined, divided, and deleted according to actual needs.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present application.
Having described embodiments of the present application, the foregoing description is intended to be exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
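Once the rotation matrix and the translation matrix between the two images are known, the remaining step of recovering each feature point's coordinates in the camera's world coordinate system is classically done by linear (DLT) triangulation. The sketch below assumes normalized image coordinates and places camera 1 at the world origin; the names are illustrative, not taken from the patent.

```python
import numpy as np

def triangulate_point(x1, x2, R, t):
    """Linear (DLT) triangulation of one point observed in two views.
    Camera 1 sits at the origin of the world frame; camera 2 has the
    relative pose (R, t). x1, x2 are normalized image coordinates (x/z, y/z)."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])  # projection matrix [I | 0]
    P2 = np.hstack([R, t.reshape(3, 1)])           # projection matrix [R | t]
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Null vector of A (smallest singular value) is the homogeneous solution.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # Euclidean coordinates in camera 1's frame
```

For a guideboard, running this once per detected vertex yields each vertex's position relative to the camera, which can then be combined with the camera's geographic coordinates to obtain the marker's geographic position.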
Claims (10)
1. A marker mapping method, comprising:
acquiring two images containing the same marker;
identifying the images to obtain the image pixel coordinates of corresponding feature points of the marker in each of the two images;
calculating a rotation matrix and a translation matrix of the second image relative to the first image;
and calculating, from the image pixel coordinates and using the rotation matrix and the translation matrix, the coordinates of the feature points of the marker in the world coordinate system of the camera.
2. The method according to claim 1, wherein calculating the rotation matrix and the translation matrix of the second image relative to the first image specifically comprises:
selecting the image pixel coordinates of at least eight pairs of corresponding pixel points in the first image and the second image;
and calculating the rotation matrix and the translation matrix of the second image relative to the first image from the image pixel coordinates of the eight pairs of pixel points by the eight-point method.
3. The method according to claim 1, wherein calculating the rotation matrix and the translation matrix of the second image relative to the first image specifically comprises:
selecting the image pixel coordinates of at least five pairs of corresponding pixel points in the first image and the second image;
and calculating the rotation matrix and the translation matrix of the second image relative to the first image from the image pixel coordinates of the five pairs of pixel points by the five-point method.
4. The method according to any one of claims 1 to 3, wherein
the marker is a guideboard, and the feature points of the marker are the vertices of the guideboard;
and the world coordinate system coordinates of each vertex of the guideboard are calculated respectively.
5. The method of claim 4, further comprising:
acquiring geographic coordinate information of the camera at the time each of the two images was captured;
and calculating the geographic coordinate information of the marker from the geographic coordinate information of the camera and the coordinates of the feature points of the marker in the world coordinate system relative to the camera.
6. A marker mapping system, comprising:
the cache unit is used for acquiring two images containing the same marker;
the image identification unit is used for identifying the images acquired by the cache unit to obtain the image pixel coordinates of corresponding feature points of the marker in each of the two images;
the calculation unit is used for calculating a rotation matrix and a translation matrix of the second image relative to the first image;
and the first processor unit is used for calculating, using the rotation matrix and the translation matrix obtained by the calculation unit, the image pixel coordinates obtained by the image identification unit, so as to obtain the coordinates of the feature points of the marker in the world coordinate system of the camera.
7. The system according to claim 6, wherein the computing unit is specifically configured to:
select the image pixel coordinates of at least eight pairs of corresponding pixel points in the first image and the second image;
and calculate the rotation matrix and the translation matrix of the second image relative to the first image from the image pixel coordinates of the eight pairs of pixel points by the eight-point method.
8. The system according to claim 6, wherein the computing unit is specifically configured to:
select the image pixel coordinates of at least five pairs of corresponding pixel points in the first image and the second image;
and calculate the rotation matrix and the translation matrix of the second image relative to the first image from the image pixel coordinates of the five pairs of pixel points by the five-point method.
9. The system according to any one of claims 6 to 8, wherein
the image identification unit is specifically configured to identify a guideboard in an image and to identify the image pixel coordinates of the feature points of the guideboard, wherein the feature points of the guideboard are the vertices of the guideboard.
10. A non-transitory machine-readable storage medium having executable code stored thereon, wherein the executable code, when executed by a processor of an electronic device, causes the processor to perform the method of one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010965481.5A CN111932627B (en) | 2020-09-15 | 2020-09-15 | Marker drawing method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111932627A true CN111932627A (en) | 2020-11-13 |
CN111932627B CN111932627B (en) | 2021-01-05 |
Family
ID=73333510
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010965481.5A Active CN111932627B (en) | 2020-09-15 | 2020-09-15 | Marker drawing method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111932627B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103824278A (en) * | 2013-12-10 | 2014-05-28 | 清华大学 | Monitoring camera calibration method and system |
US20150269734A1 (en) * | 2014-03-20 | 2015-09-24 | Electronics And Telecommunications Research Institute | Apparatus and method for recognizing location of object |
CN106447766A (en) * | 2016-09-28 | 2017-02-22 | 成都通甲优博科技有限责任公司 | Scene reconstruction method and apparatus based on mobile device monocular camera |
CN106651953A (en) * | 2016-12-30 | 2017-05-10 | 山东大学 | Vehicle position and gesture estimation method based on traffic sign |
CN107239748A (en) * | 2017-05-16 | 2017-10-10 | 南京邮电大学 | Robot target identification and localization method based on gridiron pattern calibration technique |
CN110148177A (en) * | 2018-02-11 | 2019-08-20 | 百度在线网络技术(北京)有限公司 | For determining the method, apparatus of the attitude angle of camera, calculating equipment, computer readable storage medium and acquisition entity |
CN111563936A (en) * | 2020-04-08 | 2020-08-21 | 蘑菇车联信息科技有限公司 | Camera external parameter automatic calibration method and automobile data recorder |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112118537A (en) * | 2020-11-19 | 2020-12-22 | 蘑菇车联信息科技有限公司 | Method and related device for estimating movement track by using picture |
CN112118537B (en) * | 2020-11-19 | 2021-02-19 | 蘑菇车联信息科技有限公司 | Method and related device for estimating movement track by using picture |
CN112668505A (en) * | 2020-12-30 | 2021-04-16 | 北京百度网讯科技有限公司 | Three-dimensional perception information acquisition method of external parameters based on road side camera and road side equipment |
US11893884B2 (en) | 2020-12-30 | 2024-02-06 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Method for acquiring three-dimensional perception information based on external parameters of roadside camera, and roadside device |
CN113139031A (en) * | 2021-05-18 | 2021-07-20 | 智道网联科技(北京)有限公司 | Method for generating traffic sign for automatic driving and related device |
CN113139031B (en) * | 2021-05-18 | 2023-11-03 | 智道网联科技(北京)有限公司 | Method and related device for generating traffic sign for automatic driving |
CN114419594A (en) * | 2022-01-17 | 2022-04-29 | 智道网联科技(北京)有限公司 | Method and device for identifying intelligent traffic guideboard |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant