CN112017238A - Method and device for determining spatial position information of linear object - Google Patents

Info

Publication number: CN112017238A
Application number: CN201910460562.7A
Authority: CN (China)
Prior art keywords: image, line, pixel point, target, point
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 钟礼山, 穆北鹏
Current assignee: Beijing Chusudu Technology Co., Ltd.
Original assignee: Beijing Chusudu Technology Co., Ltd.
Application filed by Beijing Chusudu Technology Co., Ltd.
Priority application: CN201910460562.7A
Publication: CN112017238A

Classifications

    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G01C 11/08 — Interpretation of pictures by comparison of two or more pictures of the same area, the pictures not being supported in the same relative position as when they were taken
    • G01C 21/28 — Navigation specially adapted for navigation in a road network, with correlation of data from several navigational instruments
    • G06V 20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 — Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
    • G06T 2207/10016 — Video; image sequence
    • G06T 2207/30252 — Vehicle exterior; vicinity of vehicle

Abstract

The embodiment of the invention discloses a method and a device for determining spatial position information of a linear object. The method comprises the following steps: acquiring a first image and a second image of a road on which a vehicle is driving; acquiring a first projection matrix and a second projection matrix; extracting, for each linear object in the first image, a corresponding first characteristic line; determining a second characteristic line in the second image that matches the first characteristic line; selecting a plurality of pixel points from the first characteristic line; for each pixel point, determining the target projection line, in the second image, of the target ray passing through the first optical center and the pixel point, and determining the intersection point of the second characteristic line and the target projection line as the target pixel point matched with the pixel point; establishing a coordinate equation according to the coordinates of the pixel point and the target pixel point together with the first projection matrix and the second projection matrix, and solving the equation to obtain the spatial coordinates of the point corresponding to the pixel point; and taking the spatial coordinates of the points corresponding to the plurality of pixel points as the spatial position information of the linear object. Applying the scheme of the embodiment of the invention improves applicability.

Description

Method and device for determining spatial position information of linear object
Technical Field
The invention relates to the technical field of high-precision maps, in particular to a method and a device for determining spatial position information of a linear object.
Background
When a high-precision map is used for vehicle positioning, positioning needs to be performed based on spatial position information of a linear object in an image acquired by an acquisition device on a vehicle, wherein the linear object can be a lane line, a road edge line, a light pole, a telegraph pole and the like.
At present, the spatial position information of a linear object is generally determined by triangulation. In the existing triangulation method, with the internal and external parameter matrices of two cameras known, matched end points of the same linear object are found in the two images acquired by the two cameras, and the spatial position information of the linear object is determined by triangulating the two end points separately. This method is therefore only suitable for linear objects in the form of straight line segments with distinct end points, such as lamp poles and telegraph poles, and places strict requirements on the shape of the linear object.
It can be seen that the applicability of the above approach is limited.
Disclosure of Invention
The invention provides a method and a device for determining spatial position information of a linear object, which are used for improving applicability. The specific technical scheme is as follows.
In a first aspect, an embodiment of the present invention provides a method for determining spatial position information of a linear object, where the method includes:
acquiring a first image and a second image of a vehicle driving road, wherein the average pixel distance between a linear object in the first image and a corresponding linear object in the second image is within a preset range, the first image and the second image are two images acquired by two acquisition devices at the same time, and the distance between the optical centers of the two acquisition devices is smaller than a preset threshold value, or the first image and the second image are two images acquired by the same acquisition device at two times with a preset time interval;
acquiring a first projection matrix representing a projection relation between an image coordinate system of a first image and an equipment coordinate system of acquisition equipment for acquiring the first image, and acquiring a second projection matrix representing a projection relation between an image coordinate system of a second image and an equipment coordinate system of acquisition equipment for acquiring the second image;
for each linear object in the first image, extracting a first characteristic line corresponding to the linear object;
determining a second characteristic line matched with the first characteristic line in the second image;
selecting a plurality of pixel points from the first characteristic line according to a preset selection rule;
for each pixel point, determining a target ray passing through a first optical center and the pixel point, determining a target projection line of the target ray in the second image, determining an intersection point between the second characteristic line and the target projection line, taking the intersection point as a target pixel point matched with the pixel point, establishing a coordinate equation of a corresponding point of the pixel point in a world coordinate system according to the coordinate of the pixel point, the coordinate of the target pixel point, the first projection matrix and the second projection matrix, and solving the coordinate equation to obtain a spatial coordinate of the corresponding point of the pixel point in the world coordinate system, wherein the first optical center is the optical center of a collecting device for collecting the first image, and the corresponding point is on the target ray;
and taking the space coordinates of the corresponding points of the plurality of pixel points as the space position information of the linear object.
Optionally, the step of determining a second feature line in the second image, where the second feature line matches the first feature line, includes:
for each linear object in the second image, extracting a reference line corresponding to the linear object;
calculating an average pixel distance between the first feature line and each reference line;
and when the minimum value in the average pixel distance is smaller than a preset distance threshold value, taking a reference line corresponding to the minimum value in the second image as a second characteristic line matched with the first characteristic line.
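For illustration only (not part of the patent text), the matching rule above can be sketched in Python. The function name `match_feature_line` and the assumption that the first characteristic line and every reference line are sampled at the same number of corresponding preset positions are hypothetical:

```python
import numpy as np

def match_feature_line(first_line, reference_lines, distance_threshold):
    """Return the index of the reference line whose average pixel distance
    to the first characteristic line is smallest, provided that minimum is
    below the threshold; otherwise return None for the index.

    first_line: (N, 2) array of pixel coordinates sampled on the line.
    reference_lines: list of (N, 2) arrays sampled at corresponding positions.
    """
    best_idx, best_dist = None, float("inf")
    for idx, ref in enumerate(reference_lines):
        # Average distance between corresponding sample points.
        d = np.linalg.norm(first_line - ref, axis=1).mean()
        if d < best_dist:
            best_idx, best_dist = idx, d
    if best_dist < distance_threshold:
        return best_idx, best_dist
    return None, best_dist
```

Returning the distance alongside the index lets the caller log how close the best candidate was even when no match passes the threshold.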
Optionally, the step of determining a target projection line of the target ray in the second image includes:
determining a connecting line between the first optical center and the second optical center, wherein the second optical center is an optical center of an acquisition device for acquiring the second image;
determining a target plane passing through the connecting line and the target ray;
determining an intersection of the target plane and the second image;
and taking the intersection line as a target projection line of the target ray in the second image.
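The construction above — the plane through the two optical centers and the target ray, intersected with the second image — is the classical epipolar-line construction. As an illustrative sketch only (not the patent's stated implementation), and assuming calibrated cameras with known relative pose, where `R` and `t` map first-camera coordinates into the second camera's frame, the target projection line can be obtained equivalently through the fundamental matrix:

```python
import numpy as np

def skew(v):
    """Skew-symmetric cross-product matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def target_projection_line(pixel, K1, K2, R, t):
    """Homogeneous line (a, b, c) with a*u + b*v + c = 0 in the second
    image, on which the match of `pixel` from the first image must lie."""
    E = skew(t) @ R                                  # essential matrix
    F = np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)  # fundamental matrix
    x = np.array([pixel[0], pixel[1], 1.0])
    return F @ x
```

With a purely horizontal baseline and identical intrinsics (rectified stereo), the returned line is horizontal at the same image row as the input pixel, as expected.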
Optionally, the step of establishing a coordinate equation of a corresponding point of the pixel point in a world coordinate system according to the coordinate of the pixel point, the coordinate of the target pixel point, the first projection matrix and the second projection matrix, and solving the coordinate equation to obtain a spatial coordinate of the corresponding point of the pixel point in the world coordinate system includes:
establishing a first space coordinate equation of a corresponding point of the pixel point in a world coordinate system according to the coordinate of the pixel point and the first projection matrix;
establishing a second space coordinate equation of a corresponding point of the target pixel point in a world coordinate system according to the coordinate of the target pixel point and the second projection matrix;
the first space coordinate equation and the second space coordinate equation are combined to obtain a linear equation set of the corresponding point of the pixel point under a world coordinate system;
and calculating the solution of the linear equation set according to a least square method to obtain the spatial coordinates of the corresponding point of the pixel point in the world coordinate system.
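As a minimal Python sketch (not part of the patent text) of the combined linear system, the stacked equations from the two projection matrices can be solved for the world-space point; here the homogeneous system is solved via SVD, a standard equivalent of the least-squares formulation, and the function name is hypothetical:

```python
import numpy as np

def triangulate(p1, p2, M1, M2):
    """Recover the world coordinates of the point that projects to pixel
    p1 under 3x4 projection matrix M1 and to pixel p2 under M2.

    Each projection equation s*(u, v, 1) = M X contributes two linear
    constraints on the homogeneous point X; the four rows are stacked
    and solved in the least-squares sense.
    """
    u1, v1 = p1
    u2, v2 = p2
    A = np.array([
        u1 * M1[2] - M1[0],
        v1 * M1[2] - M1[1],
        u2 * M2[2] - M2[0],
        v2 * M2[2] - M2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

In practice this is run once per selected pixel point, producing the set of spatial coordinates used as the linear object's spatial position information.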
Optionally, the step of obtaining a first projection matrix representing a projection relationship between an image coordinate system of the first image and a device coordinate system of a capturing device capturing the first image includes:
acquiring an internal parameter matrix and an external parameter matrix of acquisition equipment for acquiring the first image;
and calculating a matrix product of the internal reference matrix and the external reference matrix to obtain a first projection matrix representing the projection relation between the image coordinate system of the first image and the equipment coordinate system of the acquisition equipment for acquiring the first image.
In a second aspect, an embodiment of the present invention provides an apparatus for determining spatial position information of a linear object, where the apparatus includes:
the image acquisition module is used for acquiring a first image and a second image of a vehicle driving road, wherein the average pixel distance between a linear object in the first image and a corresponding linear object in the second image is within a preset range, the first image and the second image are two images acquired by two acquisition devices at the same time, and the distance between the optical centers of the two acquisition devices is smaller than a preset threshold value, or the first image and the second image are two images acquired by the same acquisition device at two times at intervals of a preset time period respectively;
the device comprises a matrix acquisition module, a first image acquisition module and a second image acquisition module, wherein the matrix acquisition module is used for acquiring a first projection matrix representing the projection relationship between an image coordinate system of a first image and an equipment coordinate system of acquisition equipment for acquiring the first image and a second projection matrix representing the projection relationship between an image coordinate system of a second image and an equipment coordinate system of acquisition equipment for acquiring the second image;
the extraction module is used for extracting a first characteristic line corresponding to each linear object in the first image;
a second feature line determining module, configured to determine a second feature line in the second image, where the second feature line matches the first feature line;
the selecting module is used for selecting a plurality of pixel points from the first characteristic line according to a preset selecting rule;
the space coordinate determination module is used for determining a target ray passing through a first optical center and the pixel point aiming at each pixel point, determining a target projection line of the target ray in the second image, determining an intersection point between the second characteristic line and the target projection line, taking the intersection point as a target pixel point matched with the pixel point, establishing a coordinate equation of a corresponding point of the pixel point under a world coordinate system according to the coordinate of the pixel point, the coordinate of the target pixel point, the first projection matrix and the second projection matrix, and solving the coordinate equation to obtain a space coordinate of the corresponding point of the pixel point under the world coordinate system, wherein the first optical center is the optical center of a collecting device for collecting the first image, and the corresponding point is on the target ray;
and the spatial position information determining module is used for taking the spatial coordinates of the corresponding points of the plurality of pixel points as the spatial position information of the linear object.
Optionally, the second characteristic line determining module includes:
the extraction sub-module is used for extracting a reference line corresponding to each linear object in the second image;
the average pixel distance calculation submodule is used for calculating the average pixel distance between the first characteristic line and each reference line;
and the determining submodule is used for taking a reference line corresponding to the minimum value in the second image as a second characteristic line matched with the first characteristic line when the minimum value in the average pixel distance is smaller than a preset distance threshold value.
Optionally, the spatial coordinate determination module is specifically configured to:
determining a connecting line between the first optical center and a second optical center, wherein the second optical center is an optical center of an acquisition device for acquiring the second image;
determining a target plane passing through the connecting line and the target ray;
determining an intersection of the target plane and the second image;
and taking the intersection line as a target projection line of the target ray in the second image.
Optionally, the spatial coordinate determination module is specifically configured to:
establishing a first space coordinate equation of a corresponding point of the pixel point in a world coordinate system according to the coordinate of the pixel point and the first projection matrix;
establishing a second space coordinate equation of a corresponding point of the target pixel point in a world coordinate system according to the coordinate of the target pixel point and the second projection matrix;
the first space coordinate equation and the second space coordinate equation are combined to obtain a linear equation set of the corresponding point of the pixel point under a world coordinate system;
and calculating the solution of the linear equation set according to a least square method to obtain the spatial coordinates of the corresponding point of the pixel point in the world coordinate system.
Optionally, the matrix obtaining module is specifically configured to:
acquiring an internal parameter matrix and an external parameter matrix of acquisition equipment for acquiring the first image;
and calculating a matrix product of the internal reference matrix and the external reference matrix to obtain a first projection matrix representing the projection relation between the image coordinate system of the first image and the equipment coordinate system of the acquisition equipment for acquiring the first image.
As can be seen from the above, after the matched first characteristic line and second characteristic line in the two images are determined, the present embodiment can determine the coordinates of a plurality of matched pixel-point pairs on the two characteristic lines according to the constraint relationship between the two images, and can then determine the spatial position information of the linear object from those coordinates together with the first projection matrix and the second projection matrix. Determining the spatial position information in this way involves neither the shape of the linear object nor its position in space. The method of the embodiment therefore places no requirement on the shape of the linear object and can be applied to linear objects of any shape; likewise, it places no requirement on the position of the linear object in space and can be applied both to linear objects above the road and to linear objects on the road surface, thereby improving applicability. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
The innovation points of the embodiment of the invention comprise:
1. after a first characteristic line and a second characteristic line which are matched in two images are determined, the coordinates of a plurality of pixel point pairs matched on the first characteristic line and the second characteristic line are determined according to the constraint relation between the two images, and then the space position information of the linear object can be determined according to the coordinates of the pixel point pairs and the first projection matrix and the second projection matrix.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention. For a person skilled in the art, without inventive effort, further figures can be obtained from these figures.
Fig. 1 is a schematic flowchart of a method for determining spatial position information of a linear object according to an embodiment of the present invention;
Fig. 2(A) to 2(C) are schematic views of preset positions on a first characteristic line;
FIG. 3 is a schematic diagram of a position relationship between a first image and a second image according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus for determining spatial position information of a linear object according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a method and a device for determining spatial position information of a linear object, which are applicable to linear objects of any shape, including linear objects on the road surface, thereby improving applicability. The following provides a detailed description of embodiments of the invention.
Fig. 1 is a schematic flow chart of a method for determining spatial position information of a linear object according to an embodiment of the present invention. The method is applied to an electronic device which can be installed on a vehicle, and specifically comprises the following steps S110 to S170.
S110: a first image and a second image of a vehicle travel path are acquired.
Linear objects, such as lane lines, road edge lines, light poles and telegraph poles, are present with high probability in the images acquired in real time by the acquisition device on the vehicle. The electronic device on the vehicle can therefore determine the specific position of the vehicle in space, i.e. locate the vehicle, based on the spatial position information of the linear objects in the captured image.
For example, the electronic device on the vehicle may be a central processing unit or a control chip, and the acquisition device may be a camera or an unmanned vehicle sensor, and the embodiment of the present invention is not limited in this respect.
In order that the specific position of the vehicle in space can be determined based on the spatial position information of the linear object in the captured image, it is necessary to acquire a first image and a second image of the road on which the vehicle is traveling, and the average pixel distance of the linear object in the first image and the corresponding linear object in the second image is within a preset range. When the acquisition equipment on the vehicle acquires the image, the running state of the vehicle can be a ready-to-start state or a running state.
The first image and the second image may be two images acquired by two acquiring devices at the same time, and a distance between optical centers of the two acquiring devices is smaller than a preset threshold, or the first image and the second image may be two images acquired by the same acquiring device at two times at an interval of a preset time period.
Whether two acquisition devices or a single acquisition device is arranged on the vehicle, the acquisition devices are calibrated in advance.
S120: a first projection matrix representing a projection relationship between an image coordinate system of the first image and an apparatus coordinate system of an acquisition apparatus that acquires the first image, and a second projection matrix representing a projection relationship between an image coordinate system of the second image and an apparatus coordinate system of an acquisition apparatus that acquires the second image are obtained.
After the first image and the second image are acquired, a matching pair of characteristic lines needs to be found from them. To do so, it is necessary to acquire a first projection matrix representing the projection relationship between the image coordinate system of the first image and the device coordinate system of the acquisition device that acquires the first image, and a second projection matrix representing the projection relationship between the image coordinate system of the second image and the device coordinate system of the acquisition device that acquires the second image.
The device coordinate system represents the position of an object in three-dimensional space, while the image coordinate system represents the pixel position of the object in the two-dimensional image; the internal reference matrix of the acquisition device performs the linear transformation between these two coordinate systems. Since the acquisition device has a position and a posture in space, it also has an external reference matrix, which describes the position and orientation of the acquisition device in the world coordinate system.
After the calibration of the acquisition equipment is completed, the internal parameter matrix and the external parameter matrix of the acquisition equipment can be obtained.
The step of obtaining a first projection matrix representing a projection relationship between an image coordinate system of the first image and an apparatus coordinate system of an acquisition apparatus acquiring the first image may specifically include:
acquiring an internal reference matrix and an external reference matrix of acquisition equipment for acquiring a first image;
and calculating a matrix product of the internal reference matrix and the external reference matrix to obtain a first projection matrix representing the projection relation between the image coordinate system of the first image and the equipment coordinate system of the acquisition equipment for acquiring the first image.
The projection matrix is typically:

$$M = K[R \mid t] = \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \end{bmatrix}$$

where $M$ is the projection matrix, $K$ is the internal reference matrix, $[R \mid t]$ is the external reference matrix, and $m_{ij}$ is the element in the $i$-th row and $j$-th column of the projection matrix.

The internal reference matrix of the acquisition device is generally:

$$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$

where $K$ is the internal reference matrix, $f_x$ and $f_y$ are the focal lengths of the acquisition device in the x-direction and y-direction respectively, and $c_x$ and $c_y$ are the coordinates of the principal point of the acquisition device in the x-direction and y-direction respectively, the principal point being the intersection of the optical axis of the acquisition device with the imaging plane.

The external reference matrix of the acquisition device is generally:

$$[R \mid t] = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}$$

where $[R \mid t]$ is the external reference matrix, $r_{11}$ through $r_{33}$ are the elements characterizing the rotational relationship, and $t_1$ through $t_3$ are the elements characterizing the translational relationship.
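As an illustrative sketch (not part of the patent text), the matrix product of the internal and external reference matrices described above can be computed directly:

```python
import numpy as np

def projection_matrix(K, R, t):
    """Return the 3x4 projection matrix M = K [R | t], where K is the
    3x3 internal reference matrix, R the 3x3 rotation block and t the
    translation 3-vector of the external reference matrix."""
    Rt = np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])
    return K @ Rt
```

Projecting a homogeneous world point through the resulting matrix and dividing by the last coordinate yields the pixel coordinates.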
S130: for each linear object in the first image, a first characteristic line corresponding to the linear object is extracted.
Since the number of the linear objects in the first image may be one or more, it is necessary to extract, for each linear object in the first image, a first feature line corresponding to the linear object.
The method for extracting the characteristic line may be to extract a preset number of pixel points on the linear object to form a first characteristic line, or may also be to extract a plurality of pixel points on a preset position on the linear object to form a first characteristic line, which is not limited in this respect.
S140: and determining a second characteristic line matched with the first characteristic line in the second image.
After the first feature line is extracted, a second feature line matching the first feature line needs to be determined in the second image.
Step S140 may include:
for each linear object in the second image, extracting a reference line corresponding to the linear object;
calculating an average pixel distance between the first characteristic line and each reference line;
and when the minimum value in the average pixel distance is smaller than a preset distance threshold value, taking a reference line corresponding to the minimum value in the second image as a second characteristic line matched with the first characteristic line.
Since the number of the linear objects in the second image may be one or more, it is necessary to extract a reference line corresponding to each linear object in the second image.
Since the pixel coordinates of the same linear object in the two images do not differ much, the second feature line matching the first feature line can be determined from the reference lines by calculating the average pixel distance between the first feature line and each reference line.
Since the lines are composed of points, the way to calculate the average pixel distance between the first feature line and each reference line may be: determining feature points at preset positions on the first feature line, determining reference points corresponding to the feature points on the reference line, calculating the distance between the feature points and the corresponding reference points for each feature point, calculating the average value of the obtained distances, and taking the average value as the distance between the first feature line and the reference line.
The preset position here may be determined based on the shape of the first characteristic line, and when the shape is a straight line segment or a curved line segment, the preset position may be the positions of both end points of the line segment, or may be the positions of both end points and the midpoint of the line segment.
For example: as shown in fig. 2(a), the shape of the first characteristic line L is a straight line segment, and the preset positions may be the positions of points A, B and C, where A and C are the end point positions of the line segment and B is the midpoint position of the line segment;
as shown in fig. 2(b), the shape of the first characteristic line M is a curved line segment, and the preset positions may be the positions of points D, E and F, where D and F are the end point positions of the line segment and E is the midpoint position of the line segment.
When the shape is formed by connecting the straight line segment and the curve segment, the first characteristic line can be decomposed into the straight line segment and the curve segment, and the preset positions can be positions of two end points of the first characteristic line, a middle point position of the straight line segment, a connecting point of the straight line segment and the curve segment, and a middle point position of the curve segment.
For example: as shown in fig. 2(C), the shape of the first characteristic line N is a shape formed by alternately and sequentially connecting a straight line segment and a curved line segment, and the preset positions may be positions of points G, H, I, J and K, where G is a midpoint position of the straight line segment, H is a position of a connection point of the straight line segment and the curved line segment, I is a midpoint position of the curved line segment, and J and K are positions of two end points of the first characteristic line.
Illustratively, the distance between a feature point and a corresponding reference point is calculated by the following formula:
$$\mathrm{dist} = \sqrt{(x - x')^2 + (y - y')^2}$$
where dist is the distance between a feature point and a corresponding reference point, x and y are the coordinates of the feature point, and x 'and y' are the coordinates of the reference point.
After the average pixel distance between the first feature line and each reference line is calculated, the minimum value in the average pixel distances is determined, if the minimum value is smaller than a preset distance threshold value, it is indicated that the first feature line is similar to the reference line corresponding to the minimum value, and at this time, the reference line corresponding to the minimum value in the second image may be used as the second feature line matched with the first feature line.
If the minimum value is not smaller than the preset distance threshold, the first characteristic line is not similar to the reference line corresponding to the minimum value, and it is determined that no second characteristic line matching the first characteristic line exists in the second image.
Thus, a second feature line matching the first feature line is selected from the plurality of reference lines by calculating an average pixel distance between the first feature line and each of the reference lines.
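The first implementation above can be sketched as follows; a minimal example assuming each line is represented by feature points sampled at the same preset positions (all point coordinates and the threshold value are hypothetical):

```python
import numpy as np

def average_pixel_distance(feature_pts, reference_pts):
    """Mean Euclidean distance dist = sqrt((x-x')^2 + (y-y')^2)
    over corresponding feature/reference point pairs."""
    feature_pts = np.asarray(feature_pts, dtype=float)
    reference_pts = np.asarray(reference_pts, dtype=float)
    return np.linalg.norm(feature_pts - reference_pts, axis=1).mean()

def match_feature_line(feature_pts, reference_lines, distance_threshold):
    """Return the index of the matching reference line, or None when the
    minimum average pixel distance is not below the threshold."""
    distances = [average_pixel_distance(feature_pts, ref) for ref in reference_lines]
    best = int(np.argmin(distances))
    return best if distances[best] < distance_threshold else None

# First feature line sampled at its two end points and midpoint (hypothetical).
first_line = [(10, 20), (30, 40), (50, 60)]
refs = [[(100, 200), (120, 220), (140, 240)],   # far away in pixel space
        [(12, 21), (31, 39), (49, 62)]]         # close in pixel space
idx = match_feature_line(first_line, refs, distance_threshold=5.0)
print(idx)  # → 1
```

Returning None when the minimum distance is not below the threshold mirrors the case above where no matching second characteristic line exists in the second image.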
In another implementation, step S140 may include:
for each linear object in the second image, extracting a reference line corresponding to the linear object;
determining a corresponding projection line of the first characteristic line in the second image through an optical flow method;
calculating an average pixel distance between the projection line and each reference line;
and when the minimum value in the average pixel distance is smaller than a preset distance threshold value, taking a reference line corresponding to the minimum value in the second image as a second characteristic line matched with the first characteristic line.
Since the number of the linear objects in the second image may be one or more, it is necessary to extract a reference line corresponding to each linear object in the second image.
Since the optical flow method can be used for target tracking of the target in the image, after extracting the reference lines in the second image, the projection line corresponding to the first feature line in the second image can be determined by the optical flow method, and then the average pixel distance between the projection line and each reference line is calculated. The manner of calculating the average pixel distance between the projection line and each reference line may refer to the manner of calculating the average pixel distance between the first characteristic line and each reference line, which is not described herein again.
After the average pixel distance between the projection line and each reference line is calculated, the minimum value in the average pixel distances is determined, if the minimum value is smaller than a preset distance threshold value, it is indicated that the first feature line is similar to the reference line corresponding to the minimum value, and at this time, the reference line corresponding to the minimum value in the second image can be used as the second feature line matched with the first feature line.
If the minimum value is not smaller than the preset distance threshold, the first characteristic line is not similar to the reference line corresponding to the minimum value, and it is determined that no second characteristic line matching the first characteristic line exists in the second image.
Thus, the second characteristic line matching the first characteristic line is selected from the plurality of reference lines by calculating the average pixel distance between the projection line corresponding to the first characteristic line in the second image and each reference line.
Since there is a certain transformation relationship between the two images, the second characteristic line matching the first characteristic line can also be determined in the second image according to the transformation matrix between the first projection matrix and the second projection matrix.
In another implementation, step S140 may include:
for each linear object in the second image, extracting a reference line corresponding to the linear object;
determining a projection point corresponding to the feature point of the preset position on the first feature line in the second image through a conversion matrix between the first projection matrix and the second projection matrix;
calculating the distance between a projection line formed by the projection points corresponding to the feature points at the preset positions and the reference line;
and taking the reference line with the distance smaller than the preset threshold value as a second characteristic line matched with the first characteristic line.
Since the number of the linear objects in the second image may be one or more, it is necessary to extract a reference line corresponding to each linear object in the second image.
Since the line is composed of points, it is necessary to determine the corresponding projection point of the feature point at the preset position on the first feature line in the second image through the transformation matrix between the first projection matrix and the second projection matrix, where the manner of determining the preset position may refer to the above corresponding description.
Because the positions of the matched pair of characteristic lines are generally close in the image, after the projection points corresponding to the characteristic points at the preset positions are obtained, the characteristic line matched with the first characteristic line can be determined from the plurality of reference lines in a mode of calculating the distance between the projection line formed by the projection points corresponding to the characteristic points at the preset positions and the reference lines.
The way of calculating the distance between the projection line and the reference line may be: and determining reference points corresponding to the projection points on the reference line, calculating the distance between the projection point and the corresponding reference point for each projection point, calculating the average value of the obtained distances, and taking the average value as the distance between the projection line and the reference line.
Illustratively, the distance between the projected point and the corresponding reference point is calculated by the following formula:
$$\mathrm{dist} = \sqrt{(x - x')^2 + (y - y')^2}$$
where dist is the distance between the projected point and the corresponding reference point, x and y are the coordinates of the projected point, and x 'and y' are the coordinates of the reference point.
After the distance between the projection line formed by the projection point corresponding to the feature point at the preset position and the reference line is calculated, if the distance is smaller than the preset threshold, it is indicated that the first feature line is relatively similar to the reference line, and at this time, the reference line with the distance smaller than the preset threshold can be used as a second feature line matched with the first feature line.
If no distance is smaller than the preset threshold, the first characteristic line is not similar to any of the reference lines, and it is determined that no second characteristic line matching the first characteristic line exists in the second image.
Thus, the second characteristic line matching the first characteristic line is selected from the plurality of reference lines by calculating the distance between the projection line and the reference line.
S150: and selecting a plurality of pixel points from the first characteristic line according to a preset selection rule.
Since a line is composed of many points, once the positions of the points constituting the line are determined, the position of the line is also determined; therefore, a plurality of pixel points need to be selected from the first characteristic line according to a preset selection rule.
There are multiple ways to select a plurality of pixel points from the first characteristic line according to the preset selection rule, for example: selecting a preset number of pixel points from the first characteristic line, or selecting a plurality of pixel points at preset positions on the first characteristic line.
The preset position here may be determined based on the shape of the first characteristic line, and when the shape is a straight line segment or a curved line segment, the preset position may be the positions of both end points of the line segment, or may be the positions of both end points and the midpoint of the line segment.
When the shape is formed by connecting the straight line segment and the curve segment, the first characteristic line can be decomposed into the straight line segment and the curve segment, and the preset positions can be positions of two end points of the first characteristic line, a middle point position of the straight line segment, a connecting point of the straight line segment and the curve segment, and a middle point position of the curve segment.
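The selection rule above can be sketched as follows, assuming the feature line is given as an ordered sequence of pixel coordinates and using the "two end points plus midpoint" rule for a straight or curved segment (the coordinates are hypothetical):

```python
import numpy as np

def select_points(line_pts, rule="endpoints_and_midpoint"):
    """Select pixel points at preset positions from a feature line given as
    an ordered array of pixel coordinates: both end points plus the midpoint."""
    pts = np.asarray(line_pts)
    if rule == "endpoints_and_midpoint":
        mid = len(pts) // 2
        return pts[[0, mid, -1]]
    raise ValueError(f"unknown selection rule: {rule}")

# A hypothetical feature line traced as five ordered pixel coordinates.
line = [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)]
print(select_points(line).tolist())  # → [[0, 0], [2, 4], [4, 8]]
```

For a line composed of alternating straight and curved segments, the same idea extends to selecting the segment midpoints and connection points as well.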
S160: and aiming at each pixel point, determining a target ray passing through the first optical center and the pixel point, determining a target projection line of the target ray in the second image, determining an intersection point between the second characteristic line and the target projection line, taking the intersection point as a target pixel point matched with the pixel point, establishing a coordinate equation of a corresponding point of the pixel point under a world coordinate system according to the coordinate of the pixel point, the coordinate of the target pixel point, the first projection matrix and the second projection matrix, and solving the coordinate equation to obtain the space coordinate of the corresponding point of the pixel point under the world coordinate system.
For each pixel point, a target ray passing through the first optical center and the pixel point is determined. Since any spatial point on the target ray is mapped to this same pixel point through the pinhole imaging model, the spatial coordinate of the corresponding point of the pixel point in the world coordinate system cannot be uniquely determined from this pixel point alone, and therefore needs to be uniquely determined from two matched pixel points.
The method for determining the target pixel point matched with the pixel point in the second image may be as follows:
and determining a target projection line of the target ray in the second image, determining an intersection point between the second characteristic line and the target projection line, and taking the intersection point as a target pixel point matched with the pixel point.
After the target ray passing through the first optical center and the pixel point is determined, based on the projective imaging relationship, the corresponding point of the pixel point in the world coordinate system must lie on the target ray.
The target projection line of the target ray in the second image is then determined, so that the projection of the corresponding point of the pixel point in the world coordinate system must lie on the target projection line. Moreover, because the first characteristic line matches the second characteristic line, the projection of the corresponding point in the second image, namely the target pixel point, must also lie on the second characteristic line. Therefore, the intersection point between the second characteristic line and the target projection line can be determined as the target pixel point matched with the pixel point.
The step of determining the target projection line of the target ray in the second image may include:
determining a connecting line between the first optical center and the second optical center;
determining a target plane passing through the connecting line and the target ray;
determining an intersection line of the target plane and the second image;
and taking the intersection line as a target projection line of the target ray in the second image.
In order to determine the target projection line of the target ray in the second image, a connecting line between the first optical center and a second optical center is first determined, where the second optical center is the optical center of the acquisition device acquiring the second image. A target plane passing through the connecting line and the target ray is then determined. Since the target plane is the plane in which the first optical center, the second optical center and the target ray all lie, it necessarily has an intersection line with the second image; this intersection line is determined and taken as the target projection line of the target ray in the second image.
For ease of understanding, the following description is made by way of a specific example with reference to fig. 3, which shows the positional relationship between the first image and the second image. In fig. 3, C1 is the first optical center of the acquisition device acquiring the first image, C2 is the second optical center of the acquisition device acquiring the second image, l1 is the first characteristic line, l2 is the second characteristic line, x1 is a pixel point on the first characteristic line, x2 is the target pixel point on the second characteristic line matched with x1, C1x1 is the target ray passing through the first optical center and the pixel point x1, el2 is the target projection line of the target ray C1x1 in the second image, C1C2 is the connecting line between the first optical center and the second optical center, e1 is the intersection point of C1C2 with the first image, e2 is the intersection point of C1C2 with the second image, and P is the corresponding point of x1 in the world coordinate system, which is also the corresponding point of x2 in the world coordinate system.
As can be seen from fig. 3, the point P must lie on the target ray C1x1. Since el2 is the target projection line of the target ray C1x1 in the second image, the projection of the corresponding point P of x1 in the world coordinate system must lie on the target projection line el2. Moreover, since the first characteristic line l1 matches the second characteristic line l2, the projection of P in the second image, namely the target pixel point x2, must lie on the second characteristic line l2. Therefore, the intersection point x2 between the second characteristic line l2 and the target projection line el2 is the target pixel point matched with the pixel point x1.
Therefore, the target projection line of the target ray in the second image is obtained by determining a connecting line between the first optical center and the second optical center, then determining a target plane based on the connecting line and the target ray, and finally determining an intersecting line of the target plane and the second image.
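The construction above (connecting line, target plane, intersection line with the second image) is equivalent to computing the epipolar line of the pixel point in the second image. A sketch assuming the first camera sits at the world origin with projection matrix P1 = K1·[I | 0]; all numeric values are hypothetical:

```python
import numpy as np

def target_projection_line(P2, K1, x1):
    # Back-project pixel x1 to the target ray through the first optical
    # centre (assumed at the world origin, so P1 = K1 [I | 0]), project two
    # points of the ray with P2, and join their images: the homogeneous
    # cross product (a, b, c) is the target projection line a*u + b*v + c = 0.
    x1_h = np.array([x1[0], x1[1], 1.0])
    d = np.linalg.inv(K1) @ x1_h              # ray direction in the world frame
    centre = np.array([0.0, 0.0, 0.0, 1.0])   # first optical centre C1
    on_ray = np.append(d, 1.0)                # a second point on the target ray
    u0 = P2 @ centre                          # image of C1 (the epipole e2)
    u1 = P2 @ on_ray
    return np.cross(u0, u1)

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P2 = K @ np.hstack([np.eye(3), [[-0.5], [0.0], [0.0]]])  # second camera shifted along x

line = target_projection_line(P2, K, (320.0, 240.0))
# Any world point on the target ray must project onto this line.
X = np.array([0.0, 0.0, 5.0, 1.0])            # a point on the ray through (320, 240)
u = P2 @ X
print(abs(line @ u) < 1e-6)  # → True
```

The intersection of this homogeneous line with the second characteristic line then yields the target pixel point, as in fig. 3.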
After the target pixel point matched with the pixel point is determined, the spatial coordinate of the corresponding point of the pixel point under the world coordinate system can be uniquely determined according to the two matched pixel points, and the spatial coordinate specifically can be as follows:
and establishing a coordinate equation of a corresponding point of the pixel point in a world coordinate system according to the coordinate of the pixel point, the coordinate of the target pixel point, the first projection matrix and the second projection matrix, and solving the coordinate equation to obtain a spatial coordinate of the corresponding point of the pixel point in the world coordinate system, wherein the corresponding point is on the target ray.
The step of establishing a coordinate equation of a corresponding point of the pixel point in the world coordinate system according to the coordinate of the pixel point, the coordinate of the target pixel point, the first projection matrix and the second projection matrix, and solving the coordinate equation to obtain a spatial coordinate of the corresponding point of the pixel point in the world coordinate system may include:
establishing a first space coordinate equation of a corresponding point of the pixel point in a world coordinate system according to the coordinate of the pixel point and the first projection matrix;
establishing a second space coordinate equation of a corresponding point of the target pixel point in the world coordinate system according to the coordinate of the target pixel point and the second projection matrix;
combining a first space coordinate equation and a second space coordinate equation to obtain a linear equation set of a corresponding point of the pixel point under a world coordinate system;
and calculating the solution of the linear equation set according to a least square method to obtain the spatial coordinates of the corresponding point of the pixel point in the world coordinate system.
Because the pixel point and the target pixel point are matched pixel points in the first image and the second image, the corresponding point of the pixel point in the world coordinate system is also the corresponding point of the target pixel point in the world coordinate system.
In order to calculate the spatial coordinates of the corresponding point of the pixel point in the world coordinate system, a first spatial coordinate equation of the corresponding point of the pixel point in the world coordinate system can be established according to the coordinates of the pixel point and the first projection matrix, and a second spatial coordinate equation of the corresponding point of the pixel point in the world coordinate system can be established according to the coordinates of the target pixel point and the second projection matrix.
Exemplarily, the first spatial coordinate equation of the corresponding point of the pixel point in the world coordinate system, established according to the coordinate of the pixel point and the first projection matrix, may be:

$$Z_1 \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = \begin{bmatrix} p^{1}_{11} & p^{1}_{12} & p^{1}_{13} & p^{1}_{14} \\ p^{1}_{21} & p^{1}_{22} & p^{1}_{23} & p^{1}_{24} \\ p^{1}_{31} & p^{1}_{32} & p^{1}_{33} & p^{1}_{34} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$$

where $(u_1, v_1)$ is the coordinate of the pixel point, $p^{1}_{ij}$ is the element in the $i$th row and $j$th column of the first projection matrix, $(X, Y, Z)$ is the spatial coordinate of the corresponding point of the pixel point in the world coordinate system, and $Z_1$ is the depth information of the pixel point.
Establishing the second spatial coordinate equation of the corresponding point of the target pixel point in the world coordinate system according to the coordinate of the target pixel point and the second projection matrix may be:

$$Z_2 \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = \begin{bmatrix} p^{2}_{11} & p^{2}_{12} & p^{2}_{13} & p^{2}_{14} \\ p^{2}_{21} & p^{2}_{22} & p^{2}_{23} & p^{2}_{24} \\ p^{2}_{31} & p^{2}_{32} & p^{2}_{33} & p^{2}_{34} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$$

where $(u_2, v_2)$ is the coordinate of the target pixel point, $p^{2}_{ij}$ is the element in the $i$th row and $j$th column of the second projection matrix, $(X, Y, Z)$ is the spatial coordinate of the corresponding point of the pixel point in the world coordinate system, and $Z_2$ is the depth information of the target pixel point.
After the first space coordinate equation and the second space coordinate equation are obtained, the first space coordinate equation and the second space coordinate equation are combined to obtain a linear equation set of the corresponding point of the pixel point under the world coordinate system, and the solution of the linear equation set is calculated according to a least square method to obtain the space coordinate of the corresponding point of the pixel point under the world coordinate system. Meanwhile, the solution of the linear equation set also comprises the depth information of the pixel point and the depth information of the target pixel point.
Therefore, the space coordinate of the corresponding point of the pixel point under the world coordinate system is solved by establishing a space coordinate equation through the matched coordinates of the pixel point and the target pixel point and the two projection matrixes.
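The two spatial coordinate equations can be combined and solved by least squares as described; a minimal sketch, assuming hypothetical projection matrices and an ideal noise-free pixel pair:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Combine the two projection equations Z_i [u_i, v_i, 1]^T = P_i [X, Y, Z, 1]^T
    into a linear system in (X, Y, Z) by eliminating the depths Z_1, Z_2,
    then solve it by least squares, as in step S160."""
    rows, rhs = [], []
    for P, (u, v) in ((P1, x1), (P2, x2)):
        rows.append(u * P[2, :3] - P[0, :3])
        rhs.append(P[0, 3] - u * P[2, 3])
        rows.append(v * P[2, :3] - P[1, :3])
        rhs.append(P[1, 3] - v * P[2, 3])
    X, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return X

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-0.5], [0.0], [0.0]]])  # second camera shifted along x

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

Xw = np.array([0.2, -0.1, 4.0])                  # hypothetical ground-truth world point
X_est = triangulate(P1, P2, project(P1, Xw), project(P2, Xw))
print(np.allclose(X_est, Xw))  # → True
```

With noise-free matched pixels the four equations are consistent and the least-squares solution recovers the world point exactly; with real detections it returns the best-fit point.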
S170: and taking the space coordinates of the corresponding points of the plurality of pixel points as the space position information of the linear object.
After the spatial coordinates of the corresponding points of the plurality of pixel points are obtained, the spatial coordinates of the corresponding points of the plurality of pixel points can be used as the spatial position information of the linear object.
As can be seen from the above, after determining the matched first characteristic line and second characteristic line in the two images, this embodiment can determine the coordinates of the plurality of matched pixel point pairs on the first characteristic line and the second characteristic line according to the constraint relationship between the two images, and can further determine the spatial position information of the linear object according to the coordinates of the plurality of pixel point pairs together with the first projection matrix and the second projection matrix. It can be seen that determining the spatial position information in this way does not involve the shape of the linear object; therefore, the method of this embodiment imposes no requirement on the shape of the linear object and can be applied to determining the spatial position information of a linear object of any shape, which improves its applicability. Likewise, determining the spatial position information does not involve the position of the linear object in space; therefore, the method of the embodiment of the invention imposes no requirement on that position either, and can be applied to determining the spatial position information of linear objects both above the road and on the road surface, further improving its applicability.
Fig. 4 is a schematic structural diagram of an apparatus for determining spatial position information of a linear object according to an embodiment of the present invention. The apparatus may include:
the image acquisition module 410 is configured to acquire a first image and a second image of a driving road of a vehicle, where an average pixel distance between a linear object in the first image and a corresponding linear object in the second image is within a preset range, the first image and the second image are two images acquired by two acquiring devices at the same time, and a distance between optical centers of the two acquiring devices is smaller than a preset threshold, or the first image and the second image are two images acquired by the same acquiring device at two times at an interval of a preset time period;
a matrix obtaining module 420, configured to obtain a first projection matrix representing a projection relationship between an image coordinate system of a first image and an apparatus coordinate system of an acquisition apparatus that acquires the first image, and a second projection matrix representing a projection relationship between an image coordinate system of a second image and an apparatus coordinate system of an acquisition apparatus that acquires the second image;
an extracting module 430, configured to, for each linear object in the first image, extract a first feature line corresponding to the linear object;
a second feature line determining module 440, configured to determine a second feature line in the second image, where the second feature line matches the first feature line;
a selecting module 450, configured to select a plurality of pixel points from the first feature line according to a preset selecting rule;
a space coordinate determining module 460, configured to determine, for each pixel point, a target ray passing through a first optical center and the pixel point, determine a target projection line of the target ray in the second image, determine an intersection point between the second characteristic line and the target projection line, use the intersection point as a target pixel point matched with the pixel point, establish a coordinate equation of a corresponding point of the pixel point in a world coordinate system according to a coordinate of the pixel point, the coordinate of the target pixel point, the first projection matrix and the second projection matrix, and solve the coordinate equation to obtain a space coordinate of the corresponding point of the pixel point in the world coordinate system, where the first optical center is an optical center of a collecting device collecting the first image, and the corresponding point is on the target ray;
a spatial location information determining module 470, configured to use the spatial coordinates of the corresponding points of the plurality of pixel points as the spatial location information of the linear object.
As can be seen from the above, after determining the matched first characteristic line and second characteristic line in the two images, this embodiment can determine the coordinates of the plurality of matched pixel point pairs on the first characteristic line and the second characteristic line according to the constraint relationship between the two images, and can further determine the spatial position information of the linear object according to the coordinates of the plurality of pixel point pairs together with the first projection matrix and the second projection matrix. It can be seen that determining the spatial position information in this way does not involve the shape of the linear object; therefore, the apparatus of this embodiment imposes no requirement on the shape of the linear object and can be applied to determining the spatial position information of a linear object of any shape, which improves its applicability. Likewise, determining the spatial position information does not involve the position of the linear object in space; therefore, the apparatus of the embodiment of the invention imposes no requirement on that position either, and can be applied to determining the spatial position information of linear objects both above the road and on the road surface, further improving its applicability.
In another embodiment of the present invention, the second characteristic line determining module 440 may include:
the extraction sub-module is used for extracting a reference line corresponding to each linear object in the second image;
the average pixel distance calculation submodule is used for calculating the average pixel distance between the first characteristic line and each reference line;
and the determining submodule is used for taking a reference line corresponding to the minimum value in the second image as a second characteristic line matched with the first characteristic line when the minimum value in the average pixel distance is smaller than a preset distance threshold value.
In another embodiment of the present invention, the spatial coordinate determination module 460 may be specifically configured to:
determining a connecting line between the first optical center and a second optical center, wherein the second optical center is an optical center of an acquisition device for acquiring the second image;
determining a target plane passing through the connecting line and the target ray;
determining an intersection of the target plane and the second image;
and taking the intersection line as a target projection line of the target ray in the second image.
In another embodiment of the present invention, the spatial coordinate determination module 460 may be specifically configured to:
establishing a first space coordinate equation of a corresponding point of the pixel point in a world coordinate system according to the coordinate of the pixel point and the first projection matrix;
establishing a second space coordinate equation of a corresponding point of the target pixel point in a world coordinate system according to the coordinate of the target pixel point and the second projection matrix;
the first space coordinate equation and the second space coordinate equation are combined to obtain a linear equation set of the corresponding point of the pixel point under a world coordinate system;
and calculating the solution of the linear equation set according to a least square method to obtain the spatial coordinates of the corresponding point of the pixel point in the world coordinate system.
In another embodiment of the present invention, the matrix obtaining module 420 may be specifically configured to:
acquiring an intrinsic parameter matrix and an extrinsic parameter matrix of the acquisition device that acquires the first image;
and calculating the matrix product of the intrinsic parameter matrix and the extrinsic parameter matrix to obtain a first projection matrix representing the projection relationship between the image coordinate system of the first image and the device coordinate system of that acquisition device.
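The projection matrix is thus a single matrix product. A minimal numeric sketch (the intrinsic and extrinsic values below are illustrative, not from the patent):

```python
import numpy as np

# Intrinsic matrix K: focal lengths fx, fy and principal point cx, cy.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
# Extrinsic matrix [R | t]: identity rotation, small lateral offset.
R = np.eye(3)
t = np.array([[0.1], [0.0], [0.0]])
Rt = np.hstack([R, t])          # 3x4
# Projection matrix: matrix product of intrinsics and extrinsics.
P = K @ Rt                      # 3x4, maps world points to image pixels
```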
The above device embodiment corresponds to the method embodiment and has the same technical effect; for a detailed description, refer to the method embodiment section, which is not repeated here.
Those of ordinary skill in the art will understand that the figures are merely schematic representations of one embodiment, and that the blocks or flows in the figures are not necessarily required for practicing the present invention.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that the above examples are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for determining spatial position information of a linear object, comprising:
acquiring a first image and a second image of a road on which a vehicle is driving, wherein the average pixel distance between a linear object in the first image and the corresponding linear object in the second image is within a preset range, and wherein either the first image and the second image are two images acquired at the same time by two acquisition devices whose optical centers are separated by less than a preset threshold, or the first image and the second image are two images acquired by the same acquisition device at two times separated by a preset time interval;
acquiring a first projection matrix representing a projection relation between an image coordinate system of a first image and an equipment coordinate system of acquisition equipment for acquiring the first image, and acquiring a second projection matrix representing a projection relation between an image coordinate system of a second image and an equipment coordinate system of acquisition equipment for acquiring the second image;
for each linear object in the first image, extracting a first characteristic line corresponding to the linear object;
determining a second characteristic line matched with the first characteristic line in the second image;
selecting a plurality of pixel points from the first characteristic line according to a preset selection rule;
for each pixel point: determining a target ray passing through a first optical center and the pixel point, determining a target projection line of the target ray in the second image, determining an intersection point between the second characteristic line and the target projection line, taking the intersection point as a target pixel point matched with the pixel point, establishing a coordinate equation of a corresponding point of the pixel point in a world coordinate system according to the coordinate of the pixel point, the coordinate of the target pixel point, the first projection matrix and the second projection matrix, and solving the coordinate equation to obtain a spatial coordinate of the corresponding point of the pixel point in the world coordinate system, wherein the first optical center is the optical center of the acquisition device that acquires the first image, and the corresponding point lies on the target ray;
and taking the space coordinates of the corresponding points of the plurality of pixel points as the space position information of the linear object.
2. The method of claim 1, wherein the step of determining a second feature line in the second image that matches the first feature line comprises:
for each linear object in the second image, extracting a reference line corresponding to the linear object;
calculating an average pixel distance between the first feature line and each reference line;
and when the minimum of the average pixel distances is smaller than a preset distance threshold, taking the reference line in the second image corresponding to that minimum as the second characteristic line matched with the first characteristic line.
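The matching rule of claim 2 can be sketched as follows (illustrative and not part of the patent; numpy assumed, feature lines represented as N x 2 arrays of pixel coordinates, and "pixel distance" taken point-to-nearest-point):

```python
import numpy as np

def average_pixel_distance(line_a, line_b):
    """Mean distance from each sampled point of line_a to the
    nearest sampled point of line_b (both N x 2 pixel arrays)."""
    d = np.linalg.norm(line_a[:, None, :] - line_b[None, :, :], axis=2)
    return d.min(axis=1).mean()

def match_feature_line(first_line, reference_lines, dist_threshold):
    """Index of the reference line with the smallest average pixel
    distance, or None if even the minimum exceeds the threshold."""
    dists = [average_pixel_distance(first_line, r) for r in reference_lines]
    best = int(np.argmin(dists))
    return best if dists[best] < dist_threshold else None
```

The threshold test rejects the case where every reference line is far from the first feature line, i.e. no matching linear object appears in the second image.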
3. The method of claim 1, wherein the step of determining an object projection line of the object ray in the second image comprises:
determining a connecting line between the first optical center and a second optical center, wherein the second optical center is an optical center of an acquisition device for acquiring the second image;
determining a target plane passing through the connecting line and the target ray;
determining an intersection line of the target plane and the second image;
and taking the intersection line as a target projection line of the target ray in the second image.
4. The method of claim 1, wherein the step of establishing a coordinate equation of a corresponding point of the pixel point in the world coordinate system according to the coordinate of the pixel point, the coordinate of the target pixel point, the first projection matrix and the second projection matrix, and solving the coordinate equation to obtain a spatial coordinate of the corresponding point of the pixel point in the world coordinate system comprises:
establishing a first space coordinate equation of a corresponding point of the pixel point in a world coordinate system according to the coordinate of the pixel point and the first projection matrix;
establishing a second space coordinate equation of a corresponding point of the target pixel point in a world coordinate system according to the coordinate of the target pixel point and the second projection matrix;
combining the first space coordinate equation and the second space coordinate equation to obtain a system of linear equations for the corresponding point of the pixel point in the world coordinate system;
and solving the system of linear equations by the least square method to obtain the spatial coordinates of the corresponding point of the pixel point in the world coordinate system.
5. The method of claim 1, wherein the step of obtaining a first projection matrix characterizing a projection relationship between an image coordinate system of a first image and a device coordinate system of an acquisition device acquiring the first image comprises:
acquiring an intrinsic parameter matrix and an extrinsic parameter matrix of the acquisition device that acquires the first image;
and calculating the matrix product of the intrinsic parameter matrix and the extrinsic parameter matrix to obtain a first projection matrix representing the projection relationship between the image coordinate system of the first image and the device coordinate system of that acquisition device.
6. An apparatus for determining spatial position information of a linear object, comprising:
the image acquisition module is used for acquiring a first image and a second image of a road on which a vehicle is driving, wherein the average pixel distance between a linear object in the first image and the corresponding linear object in the second image is within a preset range, and wherein either the first image and the second image are two images acquired at the same time by two acquisition devices whose optical centers are separated by less than a preset threshold, or the first image and the second image are two images acquired by the same acquisition device at two times separated by a preset time interval;
the device comprises a matrix acquisition module, a first image acquisition module and a second image acquisition module, wherein the matrix acquisition module is used for acquiring a first projection matrix representing the projection relationship between an image coordinate system of a first image and an equipment coordinate system of acquisition equipment for acquiring the first image and a second projection matrix representing the projection relationship between an image coordinate system of a second image and an equipment coordinate system of acquisition equipment for acquiring the second image;
the extraction module is used for extracting a first characteristic line corresponding to each linear object in the first image;
a second feature line determining module, configured to determine a second feature line in the second image, where the second feature line matches the first feature line;
the selecting module is used for selecting a plurality of pixel points from the first characteristic line according to a preset selecting rule;
the spatial coordinate determination module is used for, for each pixel point: determining a target ray passing through a first optical center and the pixel point, determining a target projection line of the target ray in the second image, determining an intersection point between the second characteristic line and the target projection line, taking the intersection point as a target pixel point matched with the pixel point, establishing a coordinate equation of a corresponding point of the pixel point in a world coordinate system according to the coordinate of the pixel point, the coordinate of the target pixel point, the first projection matrix and the second projection matrix, and solving the coordinate equation to obtain a spatial coordinate of the corresponding point of the pixel point in the world coordinate system, wherein the first optical center is the optical center of the acquisition device that acquires the first image, and the corresponding point lies on the target ray;
and the spatial position information determining module is used for taking the spatial coordinates of the corresponding points of the plurality of pixel points as the spatial position information of the linear object.
7. The apparatus of claim 6, wherein the second feature line determining module comprises:
the extraction sub-module is used for extracting a reference line corresponding to each linear object in the second image;
the average pixel distance calculation submodule is used for calculating the average pixel distance between the first characteristic line and each reference line;
and the determining submodule is used for taking, when the minimum of the average pixel distances is smaller than a preset distance threshold, the reference line in the second image corresponding to that minimum as the second feature line matched with the first feature line.
8. The apparatus of claim 6, wherein the spatial coordinate determination module is specifically configured to:
determining a connecting line between the first optical center and a second optical center, wherein the second optical center is an optical center of an acquisition device for acquiring the second image;
determining a target plane passing through the connecting line and the target ray;
determining an intersection line of the target plane and the second image;
and taking the intersection line as a target projection line of the target ray in the second image.
9. The apparatus of claim 6, wherein the spatial coordinate determination module is specifically configured to:
establishing a first space coordinate equation of a corresponding point of the pixel point in a world coordinate system according to the coordinate of the pixel point and the first projection matrix;
establishing a second space coordinate equation of a corresponding point of the target pixel point in a world coordinate system according to the coordinate of the target pixel point and the second projection matrix;
combining the first space coordinate equation and the second space coordinate equation to obtain a system of linear equations for the corresponding point of the pixel point in the world coordinate system;
and solving the system of linear equations by the least square method to obtain the spatial coordinates of the corresponding point of the pixel point in the world coordinate system.
10. The apparatus of claim 6, wherein the matrix acquisition module is specifically configured to:
acquiring an intrinsic parameter matrix and an extrinsic parameter matrix of the acquisition device that acquires the first image;
and calculating the matrix product of the intrinsic parameter matrix and the extrinsic parameter matrix to obtain a first projection matrix representing the projection relationship between the image coordinate system of the first image and the device coordinate system of that acquisition device.
CN201910460562.7A 2019-05-30 2019-05-30 Method and device for determining spatial position information of linear object Pending CN112017238A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910460562.7A CN112017238A (en) 2019-05-30 2019-05-30 Method and device for determining spatial position information of linear object


Publications (1)

Publication Number Publication Date
CN112017238A 2020-12-01

Family

ID=73501515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910460562.7A Pending CN112017238A (en) 2019-05-30 2019-05-30 Method and device for determining spatial position information of linear object

Country Status (1)

Country Link
CN (1) CN112017238A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101577004A (en) * 2009-06-25 2009-11-11 青岛海信数字多媒体技术国家重点实验室有限公司 Rectification method for polar lines, appliance and system thereof
CN102968634A (en) * 2012-11-23 2013-03-13 南京大学 Method for extracting parking lot structure under main direction restriction
CN106558038A (en) * 2015-09-18 2017-04-05 中国人民解放军国防科学技术大学 A kind of detection of sea-level and device
WO2017095259A1 (en) * 2015-12-04 2017-06-08 Андрей Владимирович КЛИМОВ Method for monitoring linear dimensions of three-dimensional entities
CN107608541A (en) * 2017-10-17 2018-01-19 宁波视睿迪光电有限公司 Three-dimensional attitude positioning method, device and electronic equipment
CN108090456A (en) * 2017-12-27 2018-05-29 北京初速度科技有限公司 A kind of Lane detection method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
叶运生 (Ye Yunsheng), "Research on Monocular Vision Vehicle Detection and Tracking Based on Deep Learning", China Master's Theses Full-text Database, Engineering Science and Technology II, 15 January 2019 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112595335A (en) * 2021-01-15 2021-04-02 智道网联科技(北京)有限公司 Method for generating intelligent traffic stop line and related device
CN112861685A (en) * 2021-01-29 2021-05-28 亿景智联(北京)科技有限公司 Method for extracting regional geographic information in map image based on ray outbreak thought
CN115631308A (en) * 2022-12-15 2023-01-20 北京集度科技有限公司 Artificial rod reconstruction method, device, vehicle and medium
CN116558504A (en) * 2023-07-11 2023-08-08 之江实验室 Monocular vision positioning method and device
CN116558504B (en) * 2023-07-11 2023-09-29 之江实验室 Monocular vision positioning method and device

Similar Documents

Publication Publication Date Title
CN110567469B (en) Visual positioning method and device, electronic equipment and system
CN112017238A (en) Method and device for determining spatial position information of linear object
CN109931939B (en) Vehicle positioning method, device, equipment and computer readable storage medium
US10909395B2 (en) Object detection apparatus
JP2020525809A (en) System and method for updating high resolution maps based on binocular images
CN111462200A (en) Cross-video pedestrian positioning and tracking method, system and equipment
WO2020000137A1 (en) Integrated sensor calibration in natural scenes
WO2020228694A1 (en) Camera pose information detection method and apparatus, and corresponding intelligent driving device
US20080247602A1 (en) System and Method for Providing Mobile Range Sensing
KR102103944B1 (en) Distance and position estimation method of autonomous vehicle using mono camera
Gerke Using horizontal and vertical building structure to constrain indirect sensor orientation
CN112556685B (en) Navigation route display method and device, storage medium and electronic equipment
CN112819903A (en) Camera and laser radar combined calibration method based on L-shaped calibration plate
CN110889873A (en) Target positioning method and device, electronic equipment and storage medium
CN109685855A (en) A kind of camera calibration optimization method under road cloud monitor supervision platform
Cvišić et al. Recalibrating the KITTI dataset camera setup for improved odometry accuracy
CN113050074B (en) Camera and laser radar calibration system and calibration method in unmanned environment perception
JP2017181476A (en) Vehicle location detection device, vehicle location detection method and vehicle location detection-purpose computer program
CN111932627B (en) Marker drawing method and system
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
Liu et al. Semalign: Annotation-free camera-lidar calibration with semantic alignment loss
Junejo et al. Autoconfiguration of a dynamic nonoverlapping camera network
KR20230003803A (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
KR102195040B1 (en) Method for collecting road signs information using MMS and mono camera
CN111598956A (en) Calibration method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination