CN116934870A - Homography matrix determination method and device and vehicle - Google Patents

Homography matrix determination method and device and vehicle

Info

Publication number
CN116934870A
Authority
CN
China
Prior art keywords
feature point
camera
target
calibration image
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310899568.0A
Other languages
Chinese (zh)
Inventor
陈吕劼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202310899568.0A
Publication of CN116934870A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/16 - Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 - Matching configurations of points or features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computational Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Algebra (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a method and a device for determining a homography matrix, and a vehicle, and belongs to the technical field of automatic driving. The method comprises the following steps: acquiring a first calibration image of a first camera and a second calibration image of a second camera; performing feature point matching on the first calibration image and the second calibration image to obtain matched target feature point pairs; and determining a homography matrix between the first camera and the second camera based on the matched target feature point pairs. In this way, feature point matching is performed on the first calibration image of the first camera and the second calibration image of the second camera to obtain matched target feature point pairs, which ensures the richness and identification degree of the features and improves the accuracy of feature point matching. A homography matrix between the first camera and the second camera is then determined based on the target feature point pairs, so that the camera homography matrix is not influenced by environmental factors and later maintenance costs are reduced.

Description

Homography matrix determination method and device and vehicle
Technical Field
The disclosure relates to the technical field of automatic driving, and in particular relates to a method and a device for determining a homography matrix and a vehicle.
Background
The existing automatic driving function relies on a plurality of cameras with different focal lengths mounted at the front of a vehicle to perceive the environment at different distances, and this environment perception depends on the homography matrices between the cameras. However, as the vehicle is used, factors such as vibration and temperature variation cause the homography matrices between the cameras to no longer suit the current conditions, which leads to problems with the environment sensing function.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a method, an apparatus, a vehicle, and a computer-readable storage medium for determining a homography matrix, so as to solve the problem that the homography matrix between cameras is no longer applicable due to factors such as vibration and temperature variation. The technical scheme of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a method for determining a homography matrix, including: acquiring a first calibration image of a first camera and a second calibration image of a second camera; performing feature point matching on the first calibration image and the second calibration image to obtain a matched target feature point pair; determining a homography matrix between the first camera and the second camera based on the matched target feature point pairs; the feature point matching of the first calibration image and the second calibration image comprises the following steps: extracting features of the first calibration image and the second calibration image to obtain a first feature point and a second feature point; performing depth estimation on the first calibration image and the second calibration image to obtain a depth value of each feature point; determining a first candidate feature point and a second candidate feature point for matching from the first feature point and the second feature point based on the depth value; and carrying out feature point matching on the first candidate feature point and the second candidate feature point to obtain the target feature point pair.
In one embodiment of the present disclosure, the determining, based on the depth value, a first candidate feature point and a second candidate feature point for matching from the first feature point and the second feature point includes: acquiring a plurality of depth intervals based on the depth values; acquiring the number of feature points in each depth interval, and determining a target depth interval with the maximum number; and determining the first candidate feature point and the second candidate feature point from the feature points in the target depth interval.
In an embodiment of the present disclosure, the performing feature point matching on the first candidate feature point and the second candidate feature point to obtain the target feature point pair includes: performing feature point matching on the first candidate feature point and the second candidate feature point to obtain matched candidate feature point pairs; and screening the candidate feature point pairs to obtain the target feature point pairs.
In one embodiment of the present disclosure, the screening the candidate feature point pair to obtain the target feature point pair includes: for each candidate feature point pair, acquiring a first coordinate of the first candidate feature point in the candidate feature point pair and a second coordinate of the second candidate feature point, and acquiring a difference value between the first coordinate and the second coordinate; and eliminating candidate feature point pairs with the difference value being greater than or equal to a set threshold value to obtain the target feature point pair.
In one embodiment of the present disclosure, the feature extracting the first calibration image and the second calibration image to obtain a first feature point and a second feature point includes: respectively acquiring, from the first calibration image and the second calibration image, a first image area and a second image area corresponding to the overlapping field of view of the first camera and the second camera; and performing feature extraction on the first image area to obtain the first feature point, and performing feature extraction on the second image area to obtain the second feature point.
In one embodiment of the disclosure, after determining the homography matrix between the first camera and the second camera based on the matched target feature point pairs, the method further includes: carrying out lane line detection through the first camera and the second camera to obtain respective detection lane lines; based on the homography matrix, projecting detection lane lines of the remaining cameras under an image coordinate system of the target camera to obtain projected detection lane lines corresponding to the remaining cameras; wherein the target camera is one of the first camera and the second camera, and the remaining cameras are the other of the first camera and the second camera; and determining a verification result of the homography matrix based on the target detection lane line detected by the target camera and the projected detection lane line.
In one embodiment of the present disclosure, the determining the verification result of the homography matrix based on the target detection lane line detected by the target camera and the post-projection detection lane line includes: aligning the projected detection lane line with the target detection lane line to obtain a lane line pair; and aiming at each lane line pair, acquiring the detection translation distance and the parallelism between the detection lane line and the target detection lane line after the projection of the lane line pair, and taking the detection translation distance and the parallelism as the verification result.
According to a second aspect of the embodiments of the present disclosure, there is provided a determining apparatus of a homography matrix, including: the acquisition module is used for acquiring a first calibration image of the first camera and a second calibration image of the second camera; the matching module is used for performing feature point matching on the first calibration image and the second calibration image to obtain matched target feature point pairs; the determining module is used for determining a homography matrix between the first camera and the second camera based on the matched target feature point pairs; wherein the matching module further includes: the feature extraction unit is used for extracting features of the first calibration image and the second calibration image to obtain a first feature point and a second feature point; the depth estimation unit is used for performing depth estimation on the first calibration image and the second calibration image to obtain a depth value of each feature point; a feature point determining unit configured to determine a first candidate feature point and a second candidate feature point for matching from the first feature point and the second feature point based on the depth value; and the feature point matching unit is used for performing feature point matching on the first candidate feature point and the second candidate feature point to obtain the target feature point pair.
According to a third aspect of embodiments of the present disclosure, there is provided a vehicle comprising a processor; a memory for storing processor-executable instructions; wherein the processor is configured to implement the steps of the method according to the first aspect of the embodiments of the present disclosure.
According to a fourth aspect of the disclosed embodiments there is provided a computer readable storage medium having stored thereon computer program instructions which when executed by a vehicle implement the steps of the method of the first aspect of the disclosed embodiments.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program, characterized in that the computer program, when executed by a vehicle, implements the steps of the method according to the first aspect of embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects: the method comprises the steps of obtaining a first calibration image of a first camera and a second calibration image of a second camera based on a conventional road surface scene, and carrying out feature point matching on the first calibration image and the second calibration image to obtain a matched target feature point pair, so that the feature richness and the feature identification degree are ensured, and the feature point matching accuracy is improved. And further, a homography matrix between the first camera and the second camera is determined based on the target feature point pairs, so that the homography matrix of the camera is not influenced by environmental factors, and the later maintenance cost is reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart illustrating a method of determining a homography matrix, according to some embodiments of the present disclosure.
Fig. 2 is a flow chart illustrating another method of determining a homography matrix, according to some embodiments of the present disclosure.
Fig. 3 is a flow chart illustrating another method of determining a homography matrix, according to some embodiments of the present disclosure.
Fig. 4 is a flow chart illustrating another method of determining a homography matrix, according to some embodiments of the present disclosure.
Fig. 5 is a flow chart illustrating another method of determining a homography matrix, according to some embodiments of the present disclosure.
Fig. 6 is a block diagram of a configuration of a homography matrix determination apparatus, shown according to some embodiments of the present disclosure.
Fig. 7 is a functional block diagram schematic of a vehicle, shown according to some embodiments of the present disclosure.
Detailed Description
Reference will now be made in detail to some embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. Various changes, modifications, and equivalents of the methods, devices, and/or systems described herein will become apparent after an understanding of the present disclosure. For example, the order of operations described herein is merely an example and is not limited to those set forth herein, but may be altered as will become apparent after an understanding of the disclosure, except where necessary to perform the operations in a particular order. In addition, descriptions of features known in the art may be omitted for the sake of clarity and conciseness.
The implementations described below in some examples of the disclosure are not representative of all implementations consistent with the disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
FIG. 1 is a flow chart illustrating a method of determining a homography matrix according to some embodiments of the present disclosure. As shown in FIG. 1, the method includes, but is not limited to, the following steps:
S101, acquiring a first calibration image of a first camera and a second calibration image of a second camera.
It should be noted that, in the embodiment of the present disclosure, the execution body of the method for determining the homography matrix is an electronic device, and the electronic device may be a vehicle control system or a vehicle-mounted terminal with a capability of calculating the homography matrix. The method for determining a homography matrix according to the embodiments of the present disclosure may be performed by the apparatus for determining a homography matrix according to the embodiments of the present disclosure, and the apparatus for determining a homography matrix according to the embodiments of the present disclosure may be configured in any electronic device to perform the method for determining a homography matrix according to the embodiments of the present disclosure. In the present disclosure, the method of determining the homography matrix may be applied to image perception of multiple cameras on an autonomous vehicle.
In some implementations, multiple cameras may be mounted on the vehicle for capturing image information. The first camera and the second camera may be determined from the plurality of cameras according to their focal lengths: the camera with the larger focal length is the first camera, and the remaining one or more cameras are the second camera. It can be appreciated that a camera with a larger focal length can capture more distant parts of a scene, enabling depth perception of the surrounding environment and thereby providing a technical basis for automatic driving.
For example, three front-view cameras of wide angle, normal focal length, tele, etc. may be mounted on the vehicle. And if the focal length of the wide-angle camera is larger than the focal lengths of the normal focal length camera and the long-focus camera, the wide-angle camera is a first camera, and the normal focal length camera and the long-focus camera are second cameras.
In some implementations, in order to expand the application range of the homography matrix, a first camera and a second camera may be used to respectively photograph a conventional road scene, so as to obtain a first calibration image of the first camera and a second calibration image of the second camera. It is understood that the conventional road scene refers to scene information on urban or rural roads, including information on vehicle driving conditions, traffic lights, lane lines, road signs, and the like.
S102, performing feature point matching on the first calibration image and the second calibration image to obtain a matched target feature point pair.
In some implementations, the first calibration image and the second calibration image are feature point matched based on the feature points by extracting first feature points of the first calibration image and second feature points of the second calibration image, so as to obtain matched target feature point pairs. Optionally, the first feature point and the second feature point may be matched based on a proximity principle, so as to obtain a matched target feature point pair. The target feature point pair comprises a first feature point and a second feature point which are successfully matched.
Alternatively, feature point extraction may be performed on the first calibration image and the second calibration image using Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Oriented FAST and Rotated BRIEF (ORB), or similar methods; the feature point extraction method is not limited by the present disclosure.
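By way of illustration only, the following Python sketch shows one way this step could be realized with OpenCV; the ORB detector, the brute-force matcher and the image file names are example choices and are not mandated by the present disclosure.

import cv2

# Load the two calibration images (file names are placeholders for illustration only).
img1 = cv2.imread("first_calibration.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("second_calibration.png", cv2.IMREAD_GRAYSCALE)

# Extract feature points and descriptors with ORB (SIFT or SURF could be used instead).
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (suited to ORB's binary descriptors);
# cross-checking keeps only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Each match yields one candidate target feature point pair (p1, p2).
point_pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]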
S103, determining a homography matrix between the first camera and the second camera based on the matched target feature point pairs.
In some implementations, each matched target feature point pair may be expressed as coordinates in a rectangular coordinate system, such as (x1, y1) and (x2, y2), where x and y respectively denote the coordinates of the feature point in its calibration image. Alternatively, the homography matrix between the first camera and the second camera may be obtained by constructing a system of linear equations and solving it using a least squares method. The homography matrix is a 3×3 matrix.
By way of example, the homography matrix H has the following form:

H = | h00  h01  h02 |
    | h10  h11  h12 |
    | h20  h21  h22 |
Optionally, a homography matrix between the first camera and the second camera may also be calculated using a random sample consensus (Random Sample Consensus, RANSAC) algorithm. Because the homography matrix operates on homogeneous coordinates such as (x, y, 1), the element h22 of the homography matrix can be set to 1, leaving 8 unknowns in the homography matrix. 4 feature point pairs are selected from the target feature point pairs, and the unknowns in the homography matrix are solved using a least squares method, thereby obtaining the homography matrix between the first camera and the second camera.
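As a minimal sketch of this computation, assuming the matched pairs from the earlier sketch are available as pixel coordinates, OpenCV's findHomography with RANSAC can be used as one possible solver; it is shown here for illustration and is not required by the disclosure.

import cv2
import numpy as np

# point_pairs: list of ((x1, y1), (x2, y2)) matched target feature point pairs.
pts1 = np.float32([p1 for p1, p2 in point_pairs])  # points in the first calibration image
pts2 = np.float32([p2 for p1, p2 in point_pairs])  # points in the second calibration image

# Estimate the 3x3 homography that maps second-camera points onto the first camera's image.
# RANSAC rejects mismatched pairs; at least 4 pairs are required.
H, inlier_mask = cv2.findHomography(pts2, pts1, cv2.RANSAC, ransacReprojThreshold=3.0)
print(H)  # 3x3 matrix with the bottom-right element normalized to 1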
It will be appreciated that after the homography matrix between the first camera and the second camera is obtained, additional pairs of target feature points may also be used to verify the accuracy of the homography matrix. Optionally, a second feature point in the target feature point pair may be projected onto the image coordinate system of the first camera through matrix operation of the homography matrix, and compared with the first feature point in the target feature point pair, to determine whether there is a deviation between the projected feature point and the first feature point, so as to verify accuracy of the homography matrix.
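One possible form of this check is sketched below, assuming H, pts1 and pts2 from the previous sketch; the pixel tolerance and inlier ratio are illustrative values only.

import cv2
import numpy as np

# Project the second feature points into the first camera's image plane with H.
projected = cv2.perspectiveTransform(pts2.reshape(-1, 1, 2), H).reshape(-1, 2)

# Deviation between the projected points and the matched first feature points.
errors = np.linalg.norm(projected - pts1, axis=1)
print("mean reprojection error (px):", errors.mean())

# Illustrative acceptance rule: most pairs should fall within a few pixels.
homography_accurate = np.mean(errors < 3.0) > 0.9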
It should be noted that, the method for determining the homography matrix disclosed by the disclosure can be applied to automatic driving of a vehicle, the vehicle can sense the current environment based on the homography matrix among cameras, the visual sensing capability of an automatic driving system is enhanced, the automatic driving system is helped to accurately detect and identify other vehicles, pedestrians and other obstacles, the decision accuracy of the automatic driving system is improved, safe and reliable automatic driving is realized, and development and application of an automatic driving technology are helped to be promoted.
In the method for determining the homography matrix, the first calibration image of the first camera and the second calibration image of the second camera are obtained based on the conventional road surface scene, feature point matching is carried out on the first calibration image and the second calibration image, a matched target feature point pair is obtained, the richness and the identification degree of features are guaranteed, and the accuracy of feature point matching is improved. And further, a homography matrix between the first camera and the second camera is determined based on the target feature point pairs, so that the homography matrix of the camera is not influenced by environmental factors, and the later maintenance cost is reduced.
FIG. 2 is a flow chart illustrating another method of determining a homography matrix according to some embodiments of the present disclosure. As shown in FIG. 2, the method includes, but is not limited to, the following steps:
S201, a first calibration image of a first camera and a second calibration image of a second camera are obtained.
In the embodiment of the present disclosure, the implementation manner of step S201 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S202, extracting features of the first calibration image and the second calibration image to obtain a first feature point and a second feature point.
In some implementations, the first image region and the second image region may be determined from the first calibration image and the second calibration image according to the fields of view of the first camera and the second camera, taking the region where the fields of view overlap; feature extraction is then performed on the first image region and the second image region to obtain the first feature point and the second feature point.
Optionally, a first image area and a second image area corresponding to the overlapping field of view of the first camera and the second camera are respectively acquired from the first calibration image and the second calibration image. Further, using a feature extraction method such as SIFT, SURF or ORB, feature extraction is performed on the first image area to obtain first feature points, and feature extraction is performed on the second image area to obtain second feature points.
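A short sketch of this variant is given below; the pixel bounds of the overlapping region are placeholder values that would in practice be derived from the two cameras' fields of view, and the keypoint coordinates are shifted back into the full-image coordinate system.

import cv2

# img1: first calibration image, loaded as in the earlier sketch.
x0, y0, x1, y1 = 400, 200, 1500, 900  # assumed bounds of the overlapping field of view (pixels)
roi1 = img1[y0:y1, x0:x1]

orb = cv2.ORB_create(nfeatures=2000)
kp_roi, des_roi = orb.detectAndCompute(roi1, None)

# Shift keypoint coordinates from ROI coordinates back to full-image coordinates.
first_feature_points = [(kp.pt[0] + x0, kp.pt[1] + y0) for kp in kp_roi]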
In some implementations, global feature points of the first image area and the second image area are extracted through a feature extraction method such as SIFT, SURF, ORB, and on the basis of the global feature points, targets such as vehicles, traffic lights and road signs in the first image area and the second image area are identified, and corner points of a detection frame of the targets are also used as feature points to obtain the first feature points and the second feature points.
And S203, performing depth estimation on the first calibration image and the second calibration image to obtain a depth value of each feature point.
In some implementations, depth estimation may be performed on feature points in the first calibration image and the second calibration image based on a pre-trained depth estimation model, to obtain a depth value for each feature point. It is understood that the depth value is the distance between the camera and the feature point.
Alternatively, depth estimation may be performed on the first calibration image and the second calibration image using stereoscopic depth estimation, depth estimation based on a structured-light or time-of-flight camera, or deep-learning-based depth estimation; the depth estimation method is not limited by the present disclosure.
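The following sketch shows how a per-pixel depth map could be sampled at each feature point; the depth map here is a random placeholder standing in for the output of a pre-trained depth estimation model, which is outside the scope of this illustration.

import numpy as np

# Placeholder depth map (meters) standing in for a real monocular depth estimate of img1.
depth_map1 = np.random.uniform(0.5, 10.0, size=img1.shape[:2])

# first_feature_points: (x, y) coordinates of the first feature points (see earlier sketch).
depths1 = np.array([depth_map1[int(round(y)), int(round(x))]
                    for (x, y) in first_feature_points])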
And S204, determining a first candidate feature point and a second candidate feature point for matching from the first feature point and the second feature point based on the depth value.
In some implementations, to ensure that the extracted feature points lie in the same plane, the depth values may be partitioned to determine the feature points within different depth intervals, with feature points in the same depth interval regarded as coplanar. Feature point matching is then performed on the feature points in the depth interval that contains a large number of feature points.
In some implementations, multiple depth intervals may be acquired based on the depth values. Alternatively, the depth interval may be divided according to application requirements, for example, the depth value may be divided into three depth intervals: shallow, medium and deep, the depth range of the shallow interval is 0-1 meter, the depth range of the middle interval is 1-3 meters, and the depth range of the deep interval is more than 3 meters. Further, the number of feature points in each depth interval is obtained, and the target depth interval with the largest number is determined. Alternatively, the number of feature points in each depth interval may be counted, and the number of feature points may be compared to obtain the depth interval with the largest number of feature points as the target depth interval. From the feature points within the target depth interval, a first candidate feature point and a second candidate feature point are determined.
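Continuing the assumptions of the previous sketch, the depth values can be binned and only the feature points in the most populated interval kept as candidates; the bin edges below follow the shallow/medium/deep example above and are illustrative.

import numpy as np

# depths1: per-feature-point depth values in meters (see the previous sketch).
bin_edges = np.array([0.0, 1.0, 3.0, np.inf])    # shallow, medium, deep (example split)
bin_index = np.digitize(depths1, bin_edges) - 1  # depth interval index of each feature point

# Target depth interval: the one containing the largest number of feature points.
counts = np.bincount(bin_index, minlength=len(bin_edges) - 1)
target_interval = counts.argmax()

# Keep only the feature points inside the target depth interval as candidate feature points.
candidate_mask = bin_index == target_interval
candidate_depths = depths1[candidate_mask]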
And S205, performing feature point matching on the first candidate feature point and the second candidate feature point to obtain a target feature point pair.
In some implementations, feature point matching may be performed on the first candidate feature point and the second candidate feature point based on a matching method such as brute-force matching (Brute Force Matcher, BFMatcher) to obtain matched candidate feature point pairs.
Alternatively, the first candidate feature point and the second candidate feature point may be matched according to the proximity principle based on the coordinate values by acquiring the coordinates of the first candidate feature point and the second candidate feature point, so as to obtain a matched target feature point pair. The target feature point pair comprises a first candidate feature point and a second candidate feature point which are successfully matched.
S206, determining a homography matrix between the first camera and the second camera based on the matched target feature point pairs.
In the embodiment of the present disclosure, the implementation manner of step S206 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
In the method for determining the homography matrix, the richness and the identification degree of the features are ensured by acquiring the first calibration image of the first camera and the second calibration image of the second camera. And then, the first calibration image and the second calibration image are subjected to feature point matching based on the depth value, a matched target feature point pair is obtained, the feature points are ensured to be in the same plane, and the accuracy of feature point matching is improved. And determining a homography matrix between the first camera and the second camera based on the target feature point pairs, so that the homography matrix of the cameras is not influenced by environmental factors, and the later maintenance cost is reduced.
FIG. 3 is a flow chart illustrating another method of determining a homography matrix according to some embodiments of the present disclosure. As shown in FIG. 3, the method includes, but is not limited to, the following steps:
S301, acquiring a first calibration image of a first camera and a second calibration image of a second camera.
In the embodiment of the present disclosure, the implementation manner of step S301 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S302, extracting features of the first calibration image and the second calibration image to obtain a first feature point and a second feature point.
In the embodiment of the present disclosure, the implementation manner of step S302 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S303, performing depth estimation on the first calibration image and the second calibration image to obtain a depth value of each feature point.
In the embodiment of the present disclosure, the implementation manner of step S303 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
And S304, determining a first candidate feature point and a second candidate feature point for matching from the first feature point and the second feature point based on the depth value.
In the embodiment of the present disclosure, the implementation manner of step S304 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
And S305, performing feature point matching on the first candidate feature point and the second candidate feature point to obtain matched candidate feature point pairs.
In some implementations, feature point matching may be performed on the first candidate feature point and the second candidate feature point based on a matching method such as brute force matching BFMatcher, to obtain a matched candidate feature point pair.
It can be understood that, in order to improve the success rate of feature point matching, by setting the range of feature points, the first feature point and the second feature point within the set range are matched, and a matched candidate feature point pair is obtained. Optionally, the region with rich features on the plane calibration plate can be used as a set range, and feature point matching can be performed on the first feature point and the second feature point in the set range.
Alternatively, coordinates of the first feature point and the second feature point are acquired. And judging whether the first characteristic point and the second characteristic point are in a set range or not based on the coordinates, determining and filtering the characteristic points which are not in the set range, and deleting the characteristic points which are not in the set range to obtain a first target characteristic point and a second target characteristic point, namely a first candidate characteristic point and a second candidate characteristic point. And then, carrying out feature point matching on the first candidate feature point and the second candidate feature point to obtain matched candidate feature point pairs.
S306, screening the candidate feature point pairs to obtain target feature point pairs.
In some implementations, since the first camera and the second camera are mounted very close to each other, the coordinates of correctly matched feature points are also very close, so the candidate feature point pairs can be filtered and the accurately matched feature point pairs kept as the target feature point pairs.
Optionally, for each candidate feature point pair, a first coordinate of a first candidate feature point in the candidate feature point pair and a second coordinate of a second candidate feature point are obtained, and a difference value between the first coordinate and the second coordinate is obtained. And setting a threshold value of the coordinate difference value, and if the difference value between the first coordinate and the second coordinate of the candidate feature point pair is smaller than the set threshold value, reserving the candidate feature point pair. And eliminating candidate feature point pairs with the difference value being greater than or equal to a set threshold value to obtain target feature point pairs.
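A sketch of this screening rule follows; the Euclidean distance is used here as one way to measure the coordinate difference, and the 50-pixel threshold and sample pairs are illustrative values rather than values specified by the disclosure.

import numpy as np

# candidate_pairs: list of ((x1, y1), (x2, y2)) matched candidate feature point pairs
# (placeholder values for illustration only).
candidate_pairs = [((620.0, 410.0), (633.0, 405.0)), ((800.0, 300.0), (1210.0, 640.0))]
threshold = 50.0  # set threshold on the coordinate difference, in pixels (example value)

target_pairs = []
for (x1, y1), (x2, y2) in candidate_pairs:
    # Keep the pair only if the difference between the two coordinates is below the threshold.
    if np.hypot(x1 - x2, y1 - y2) < threshold:
        target_pairs.append(((x1, y1), (x2, y2)))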
S307, determining a homography matrix between the first camera and the second camera based on the matched target feature point pairs.
In the embodiment of the present disclosure, the implementation manner of step S307 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
In the method for determining the homography matrix, the richness and the identification degree of the features are ensured by acquiring the first calibration image of the first camera and the second calibration image of the second camera. And then, the first calibration image and the second calibration image are subjected to feature point matching based on the depth value, and the matched feature points are screened to obtain matched target feature point pairs, so that the feature points are ensured to be in the same plane, and the accuracy of feature point matching is improved. And determining a homography matrix between the first camera and the second camera based on the target feature point pairs, so that the homography matrix of the cameras is not influenced by environmental factors, and the later maintenance cost is reduced.
FIG. 4 is a flow chart illustrating another method of determining a homography matrix according to some embodiments of the present disclosure. As shown in FIG. 4, the method includes, but is not limited to, the following steps:
S401, acquiring a first calibration image of a first camera and a second calibration image of a second camera.
In the embodiment of the present disclosure, the implementation manner of step S401 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
And S402, performing feature point matching on the first calibration image and the second calibration image to obtain a matched target feature point pair.
In the embodiment of the present disclosure, the implementation manner of step S402 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S403, determining a homography matrix between the first camera and the second camera based on the matched target feature point pairs.
In the embodiment of the present disclosure, the implementation manner of step S403 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S404, detecting lane lines by the first camera and the second camera to obtain respective detected lane lines.
In some implementations, to ensure accuracy of the homography matrix between the first camera and the second camera, the homography matrix may be verified based on the lane lines. Optionally, image information of the current environment can be collected through the first camera and the second camera, detection and identification of lane lines are performed in the images, and the respective detection lane lines of the first camera and the second camera are identified. Wherein the detected lane line is a line segment.
S405, based on the homography matrix, the detection lane lines of the remaining cameras are projected into the image coordinate system of the target camera to obtain the projected detection lane lines corresponding to the remaining cameras.
It will be appreciated that the second camera includes one or more cameras, and that the target camera is one of the first and second cameras and the remaining cameras are the other of the first and second cameras.
In some implementations, the lane line detected by the target camera is the target detection lane line; for the detection lane lines of the remaining cameras, coordinate transformation is performed using the homography matrix to project them onto the image coordinate system of the target camera, obtaining the projected detection lane lines, so that the target detection lane line and the projected detection lane lines can be compared in the same coordinate system.
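The projection itself can be sketched as follows, assuming the lane line detected by a remaining camera has been sampled into an N x 2 array of pixel coordinates and that H maps that camera's coordinates into the target camera's image coordinate system.

import cv2
import numpy as np

# lane_points: pixel coordinates sampled on a lane line detected by a remaining camera
# (placeholder values for illustration only).
lane_points = np.float32([[640, 700], [655, 600], [668, 500]])

# H: 3x3 homography from the remaining camera to the target camera (see earlier sketches).
projected_lane = cv2.perspectiveTransform(lane_points.reshape(-1, 1, 2), H).reshape(-1, 2)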
S406, determining a verification result of the homography matrix based on the target detection lane line detected by the target camera and the projected detection lane line.
In some implementations, the lane line pair may be obtained by aligning the post-projection detected lane line with the target detected lane line, or alternatively, the two detected lane lines may be aligned in pixel coordinates based on coordinate points of the post-projection detected lane line and the target detected lane line, to obtain the lane line pair. It can be understood that each lane line pair includes a target detection lane line and a post-projection detection lane line, and the position deviation of the two detection lane lines in the lane line pair is smaller, that is, the two detection lane lines in the lane line pair substantially correspond to the same lane line in reality.
Further, the detection translation distance and the parallelism between the detection lane line and the target detection lane line after the lane line centering projection are obtained, so that the verification result of the homography matrix is obtained.
Alternatively, 2 sets of coordinate points may be obtained by sampling pairs of lane lines. For example, the pair of lane lines may be sampled based on an interval of 3 meters or 5 meters or 10 meters, and 20 coordinate points of the target detection lane line and 20 coordinate points of the post-projection detection lane line are acquired. And respectively calculating the difference values between the 2 groups of coordinate points to determine the detection translation distance between the target detection lane line and the projected detection lane line. And a translation distance threshold can be set, and if the detected translation distance is smaller than or equal to the translation distance threshold, the homography matrix is determined to be high in accuracy, and the camera does not generate distortion.
Alternatively, the parallelism of the two detection lane lines can be determined by calculating the vertical separation distance between the target detection lane line and the projected detection lane line. Optionally, the target detection lane line and the projected detection lane line may be divided into a plurality of segments, the vertical separation distance between the two lane lines within each segment may be calculated, and the variance of the plurality of separation distances may be computed. If the variance of the separation distances is less than or equal to a variance threshold, the parallelism between the target detection lane line and the projected detection lane line is determined to be good, the homography matrix can be determined to have high accuracy, and the camera is not distorted.
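Both checks can be sketched together as follows, assuming the target detection lane line and the projected detection lane line have been sampled at the same N longitudinal positions; the sample points and acceptance thresholds are illustrative values, not values specified by the disclosure.

import numpy as np

# target_pts, projected_pts: N x 2 arrays of corresponding sampled points on the target
# detection lane line and the projected detection lane line (placeholder values).
target_pts = np.float32([[640, 700], [652, 600], [663, 500], [673, 400]])
projected_pts = np.float32([[643, 701], [656, 600], [666, 499], [677, 401]])

# Point-to-point separation between the two lines at each sampled position.
separations = np.linalg.norm(projected_pts - target_pts, axis=1)

# Detection translation distance: mean separation between the two lane lines.
translation_distance = separations.mean()

# Parallelism: variance of the separations; a small variance means the two lines
# remain a near-constant distance apart, i.e. they are close to parallel.
parallelism_variance = separations.var()

# Illustrative acceptance rule for the verification result.
homography_ok = translation_distance <= 5.0 and parallelism_variance <= 2.0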
In some implementations, the homography matrix with high accuracy after verification can be stored in the memory, so that the homography matrix can be directly read from the memory when the vehicle performs environment sensing fusion, and the computing resource of the vehicle is saved.
In the method for determining the homography matrix, the first calibration image of the first camera and the second calibration image of the second camera are obtained based on the conventional road surface scene, feature point matching is carried out on the first calibration image and the second calibration image, a matched target feature point pair is obtained, the richness and the identification degree of features are guaranteed, and the accuracy of feature point matching is improved. And further, a homography matrix between the first camera and the second camera is determined based on the target feature point pairs, so that the homography matrix of the camera is not influenced by environmental factors, and the later maintenance cost is reduced. Further, the effectiveness and the correctness of the homography matrix can be improved by verifying the homography matrix through the lane lines.
FIG. 5 is a flow chart illustrating another method of determining a homography matrix according to some embodiments of the present disclosure. As shown in FIG. 5, the method includes, but is not limited to, the following steps:
S501, acquiring a first calibration image of a first camera and a second calibration image of a second camera.
S502, extracting features of the first calibration image and the second calibration image to obtain a first feature point and a second feature point.
And S503, performing depth estimation on the first calibration image and the second calibration image to obtain a depth value of each feature point.
S504, based on the depth value, a first candidate feature point and a second candidate feature point for matching are determined from the first feature point and the second feature point.
And S505, performing feature point matching on the first candidate feature point and the second candidate feature point to obtain a target feature point pair.
S506, determining a homography matrix between the first camera and the second camera based on the matched target feature point pairs.
S507, detecting lane lines by the first camera and the second camera to obtain respective detected lane lines.
And S508, based on the homography matrix, projecting the detection lane lines of the remaining cameras into the image coordinate system of the target camera to obtain the projected detection lane lines corresponding to the remaining cameras.
S509, determining a verification result of the homography matrix based on the target detection lane line detected by the target camera and the projected detection lane line.
In the method for determining the homography matrix, the first calibration image of the first camera and the second calibration image of the second camera are obtained based on the conventional road surface scene, so that the richness and the identification degree of the features are ensured. And then, the first calibration image and the second calibration image are subjected to feature point matching based on the depth value, a matched target feature point pair is obtained, the feature points are ensured to be in the same plane, and the accuracy of feature point matching is improved. And further, a homography matrix between the first camera and the second camera is determined based on the target feature point pairs, so that the homography matrix of the camera is not influenced by environmental factors, and the later maintenance cost is reduced. Further, the effectiveness and the correctness of the homography matrix can be improved by verifying the homography matrix through the lane lines.
Fig. 6 is a block diagram 600 illustrating a configuration of a homography matrix determination apparatus, according to some embodiments of the present disclosure. Referring to fig. 6, the apparatus includes an acquisition module 601, a matching module 602, and a determination module 603.
The acquiring module 601 is configured to acquire a first calibration image of a first camera and a second calibration image of a second camera.
And the matching module 602 is configured to perform feature point matching on the first calibration image and the second calibration image, so as to obtain a matched target feature point pair.
A determining module 603, configured to determine a homography matrix between the first camera and the second camera based on the matched target feature point pair.
In one embodiment of the present disclosure, the matching module 602 is further configured to: extracting features of the first calibration image and the second calibration image to obtain a first feature point and a second feature point; performing depth estimation on the first calibration image and the second calibration image to obtain a depth value of each feature point; determining a first candidate feature point and a second candidate feature point for matching from the first feature point and the second feature point based on the depth value; and carrying out feature point matching on the first candidate feature point and the second candidate feature point to obtain the target feature point pair.
In one embodiment of the present disclosure, the matching module 602 is further configured to: acquiring a plurality of depth intervals based on the depth values; acquiring the number of feature points in each depth interval, and determining a target depth interval with the maximum number; and determining the first candidate feature point and the second candidate feature point from the feature points in the target depth interval.
In one embodiment of the present disclosure, the matching module 602 is further configured to: performing feature point matching on the first candidate feature point and the second candidate feature point to obtain matched candidate feature point pairs; and screening the candidate feature point pairs to obtain the target feature point pairs.
In one embodiment of the present disclosure, the matching module 602 is further configured to: for each candidate feature point pair, acquiring a first coordinate of the first candidate feature point in the candidate feature point pair and a second coordinate of the second candidate feature point, and acquiring a difference value between the first coordinate and the second coordinate; and eliminating candidate feature point pairs with the difference value being greater than or equal to a set threshold value to obtain the target feature point pair.
In one embodiment of the present disclosure, the matching module 602 is further configured to: respectively acquiring a first image area and a second image area, which are overlapped with the visible angles of the first camera and the second camera, from the first calibration image and the second calibration image; and carrying out feature extraction on the first image area to obtain a first feature point, and carrying out feature extraction on the second image area to obtain a second feature point.
In one embodiment of the present disclosure, the determining module 603 is further configured to: carrying out lane line detection through the first camera and the second camera to obtain respective detection lane lines; based on the homography matrix, projecting detection lane lines of the remaining cameras under an image coordinate system of the target camera to obtain projected detection lane lines corresponding to the remaining cameras; wherein the target camera is one of the first camera and the second camera, and the remaining cameras are the other of the first camera and the second camera; and determining a verification result of the homography matrix based on the target detection lane line detected by the target camera and the projected detection lane line.
In one embodiment of the present disclosure, the determining module 603 is further configured to: aligning the projected detection lane line with the target detection lane line to obtain a lane line pair; and aiming at each lane line pair, acquiring the detection translation distance and the parallelism between the detection lane line and the target detection lane line after the projection of the lane line pair, and taking the detection translation distance and the parallelism as the verification result.
In the homography matrix determining device disclosed by the invention, the first calibration image of the first camera and the second calibration image of the second camera are obtained based on the conventional road surface scene, and the characteristic points of the first calibration image and the second calibration image are matched to obtain the matched target characteristic point pairs, so that the richness and the identification degree of the characteristics are ensured, and the accuracy of characteristic point matching is improved. And further, a homography matrix between the first camera and the second camera is determined based on the target feature point pairs, so that the homography matrix of the camera is not influenced by environmental factors, and the later maintenance cost is reduced.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
Fig. 7 is a block diagram of a vehicle 700 shown in accordance with some embodiments of the present disclosure. For example, vehicle 700 may be a hybrid vehicle, but may also be a non-hybrid vehicle, an electric vehicle, a fuel cell vehicle, or other type of vehicle. The vehicle 700 may be an autonomous vehicle, a semi-autonomous vehicle, or a non-autonomous vehicle.
Referring to fig. 7, a vehicle 700 may include various subsystems, such as an infotainment system 701, a perception system 702, a decision control system 703, a drive system 704, and a computing platform 705. Vehicle 700 may also include more or fewer subsystems, and each subsystem may include multiple components. In addition, interconnections between each subsystem and between each component of the vehicle 700 may be achieved by wired or wireless means.
In some embodiments, the infotainment system 701 may include a communication system, an entertainment system, a navigation system, and the like.
The perception system 702 may include several types of sensors for sensing information about the environment surrounding the vehicle 700. For example, the sensing system 702 may include a global positioning system (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU), a lidar, a millimeter wave radar, an ultrasonic radar, and a camera device.
Decision control system 703 may include a computing system, a vehicle controller, a steering system, a throttle, and a braking system.
The drive system 704 may include components that provide powered movement of the vehicle 700. In one embodiment, the drive system 704 may include an engine, an energy source, a transmission, and wheels. The engine may be one or a combination of an internal combustion engine, an electric motor, an air compression engine. The engine is capable of converting energy provided by the energy source into mechanical energy.
Some or all of the functions of the vehicle 700 are controlled by the computing platform 705. The computing platform 705 may include at least one processor 751 and a memory 752; the processor 751 may execute instructions 753 stored in the memory 752.
The processor 751 may be any conventional processor, such as a commercially available CPU. The processor may also include, for example, an image processor (Graphics Processing Unit, GPU), a field programmable gate array (Field Programmable Gate Array, FPGA), a system on chip (System On Chip, SOC), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or a combination thereof.
The memory 752 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
In addition to instructions 753, memory 752 may also store data such as road maps, route information, vehicle location, direction, speed, etc. The data stored by memory 752 may be used by computing platform 705.
In an embodiment of the present disclosure, the processor 751 may execute instructions 753 to perform all or part of the steps of the homography matrix determination method described above.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of determining a homography matrix provided by the present disclosure.
Furthermore, the word "exemplary" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as advantageous over other aspects or designs. Rather, the use of the word exemplary is intended to present concepts in a concrete fashion. As used herein, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from context, "X applies A or B" is intended to mean any of the natural inclusive permutations. That is, if X applies A; X applies B; or X applies both A and B, then "X applies A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this disclosure and the appended claims are generally understood to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon reading and understanding this specification and the annexed drawings. The present disclosure includes all such modifications and alterations and is limited only by the scope of the claims. In particular regard to the various functions performed by the above-described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component that performs the specified function of the described component (i.e., that is functionally equivalent), even if it is not structurally equivalent to the disclosed structure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such a feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms "includes," "including," "has," "having," or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
In the foregoing detailed description, reference is made to the accompanying drawings, in which specific aspects in which the disclosure may be practiced are shown by way of illustration. In this regard, terms that refer to directions or positional relationships, such as "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", and "circumferential", may be used with reference to the orientations of the depicted figures. Because components of the described devices can be positioned in a number of different orientations, the directional terminology is used for purposes of illustration and is in no way limiting. It is to be understood that other aspects may be utilized and structural or logical changes may be made without departing from the concepts of the present disclosure. The foregoing detailed description is, therefore, not to be taken in a limiting sense.
It should be understood that features of the various embodiments described herein may be combined with one another, unless specifically indicated otherwise. As used herein, the term "and/or" includes any one of the associated listed items and any combination of any two or more of them; similarly, "at least one of" includes any one of the associated listed items and any combination of any two or more of them.
It should be understood that the terms "coupled," "attached," "mounted," "connected," "secured," and the like, as used in the embodiments of the present disclosure, are to be construed broadly unless otherwise specifically indicated and defined. For example, a connection may be a fixed connection, a detachable connection, or an integral formation; it may be a mechanical connection, an electrical connection, or a communication connection; and it may be a direct connection, an indirect connection through an intermediary, an internal communication between two elements, or an interaction between two elements. The specific meanings of these terms in the present disclosure will be understood by those of ordinary skill in the art as the case may be.
Furthermore, the word "on," as used in reference to a component, element, or material layer formed on or located on a surface, may mean that the component, element, or material layer is positioned (e.g., placed, formed, deposited, etc.) "indirectly" on the surface, with one or more additional components, elements, or layers disposed between the surface and the component, element, or material layer. However, the word "on" may also have the particular meaning that the component, element, or material layer is positioned (e.g., placed, formed, deposited, etc.) "directly" on the surface, i.e., in direct contact with the surface.
Although terms such as "first," "second," and "third" may be used herein to describe various components, parts, regions, layers, or sections, these components, parts, regions, layers, or sections are not limited by these terms. Rather, these terms are only used to distinguish one component, part, region, layer, or section from another. Thus, a first component, part, region, layer, or section discussed in the examples described herein could also be termed a second component, part, region, layer, or section without departing from the teachings of the examples. In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In the description herein, "plurality" means at least two, e.g., two, three, etc., unless specifically defined otherwise.
It will be understood that spatially relative terms, such as "above," "upper," "below," and "lower," among others, are used herein to describe one element's relationship to another element as illustrated in the figures. Such spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "above" or "upper" relative to another element would then be oriented "below" or "lower" relative to that element. Thus, the term "above" encompasses both an orientation above and an orientation below, depending on the spatial orientation of the device. The device may have other orientations (e.g., rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein should be interpreted accordingly.

Claims (16)

1. A method for determining a homography matrix, the method comprising:
acquiring a first calibration image of a first camera and a second calibration image of a second camera;
performing feature point matching on the first calibration image and the second calibration image to obtain a matched target feature point pair;
determining a homography matrix between the first camera and the second camera based on the matched target feature point pairs; wherein,
the feature point matching of the first calibration image and the second calibration image comprises the following steps:
extracting features of the first calibration image and the second calibration image to obtain a first feature point and a second feature point;
performing depth estimation on the first calibration image and the second calibration image to obtain a depth value of each feature point;
determining a first candidate feature point and a second candidate feature point for matching from the first feature point and the second feature point based on the depth value;
and carrying out feature point matching on the first candidate feature point and the second candidate feature point to obtain the target feature point pair.
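By way of illustration only and not as part of the claims, the following Python sketch shows one possible realization of the matching pipeline recited in claim 1 using OpenCV. The ORB detector, the estimate_depth depth-map callback, the select_candidates policy (one example is sketched after claim 2), and all parameter values are assumptions of this sketch rather than features of the claimed method.

import cv2
import numpy as np

def match_feature_points(img1, img2, estimate_depth, select_candidates):
    # Feature extraction on the first and second calibration images.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Depth value of each feature point, read from per-image depth maps produced
    # by a monocular depth estimator (hypothetical callback).
    depth1, depth2 = estimate_depth(img1), estimate_depth(img2)
    d1 = np.array([depth1[int(k.pt[1]), int(k.pt[0])] for k in kp1])
    d2 = np.array([depth2[int(k.pt[1]), int(k.pt[0])] for k in kp2])
    # Candidate feature points chosen on the basis of the depth values.
    idx1, idx2 = select_candidates(d1), select_candidates(d2)
    # Feature point matching restricted to the candidates.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1[idx1], des2[idx2])
    pts1 = np.float32([kp1[int(idx1[m.queryIdx])].pt for m in matches])
    pts2 = np.float32([kp2[int(idx2[m.trainIdx])].pt for m in matches])
    return pts1, pts2

def homography_between_cameras(pts1, pts2):
    # Homography from the matched target feature point pairs; RANSAC with a
    # 3-pixel reprojection threshold is an illustrative choice.
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    return H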
2. The method of claim 1, wherein the determining a first candidate feature point and a second candidate feature point for matching from the first feature point and the second feature point based on the depth value comprises:
acquiring a plurality of depth intervals based on the depth values;
acquiring the number of feature points in each depth interval, and determining a target depth interval with the maximum number;
and determining the first candidate feature point and the second candidate feature point from the feature points in the target depth interval.
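A minimal sketch, assuming evenly spaced depth intervals over the observed depth range, of the candidate-selection policy of claim 2; the number of intervals is an illustrative assumption.

import numpy as np

def select_candidates(depths, num_bins=10):
    # Divide the observed depth range into a plurality of depth intervals.
    edges = np.linspace(depths.min(), depths.max(), num_bins + 1)
    # Count the feature points in each interval; the interval with the maximum
    # count is the target depth interval.
    counts, _ = np.histogram(depths, bins=edges)
    target = int(np.argmax(counts))
    lo, hi = edges[target], edges[target + 1]
    # Indices of the feature points whose depth falls inside the target interval.
    return np.where((depths >= lo) & (depths <= hi))[0]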
3. The method according to claim 1, wherein the performing feature point matching on the first candidate feature point and the second candidate feature point to obtain the target feature point pair includes:
performing feature point matching on the first candidate feature point and the second candidate feature point to obtain matched candidate feature point pairs;
and screening the candidate feature point pairs to obtain the target feature point pairs.
4. A method according to claim 3, wherein said screening said candidate feature point pairs to obtain said target feature point pairs comprises:
for each candidate feature point pair, acquiring a first coordinate of the first candidate feature point in the candidate feature point pair and a second coordinate of the second candidate feature point, and acquiring a difference value between the first coordinate and the second coordinate;
and eliminating candidate feature point pairs with the difference value being greater than or equal to a set threshold value to obtain the target feature point pair.
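A minimal sketch of the screening step of claims 3 and 4, assuming the difference value is measured as the Euclidean distance between the pixel coordinates of the two candidate feature points of a pair; both the distance measure and the threshold value are illustrative assumptions.

import numpy as np

def screen_pairs(pts1, pts2, threshold=50.0):
    # Difference value between the first coordinate and the second coordinate
    # of each candidate feature point pair.
    diff = np.linalg.norm(pts1 - pts2, axis=1)
    # Eliminate pairs whose difference is greater than or equal to the threshold.
    keep = diff < threshold
    return pts1[keep], pts2[keep]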
5. The method according to claim 1, wherein the feature extracting the first calibration image and the second calibration image to obtain a first feature point and a second feature point includes:
respectively acquiring, from the first calibration image and the second calibration image, a first image area and a second image area in which the visible angles of the first camera and the second camera overlap;
and carrying out feature extraction on the first image area to obtain a first feature point, and carrying out feature extraction on the second image area to obtain a second feature point.
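A minimal sketch of the region selection described in claim 5, assuming the image area in which the two cameras' visible angles overlap is already known as a rectangular region of interest (for example, derived offline from the cameras' mounting geometry); feature extraction would then be run on the cropped areas only.

def crop_overlap(image, roi):
    # roi = (x, y, width, height): image area where the visible angles of the
    # first camera and the second camera overlap (assumed known here).
    x, y, w, h = roi
    return image[y:y + h, x:x + w]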
6. The method of claim 1, wherein, after determining the homography matrix between the first camera and the second camera based on the matched target feature point pairs, the method further comprises:
carrying out lane line detection through the first camera and the second camera to obtain respective detection lane lines;
based on the homography matrix, projecting detection lane lines of the remaining cameras under an image coordinate system of the target camera to obtain projected detection lane lines corresponding to the remaining cameras; wherein the target camera is one of the first camera and the second camera, and the remaining cameras are the other of the first camera and the second camera;
and determining a verification result of the homography matrix based on the target detection lane line detected by the target camera and the projected detection lane line.
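A minimal sketch of the projection step of claim 6, assuming a detection lane line is available as an array of pixel coordinates; OpenCV's perspective transform is one way to map it into the target camera's image coordinate system using the homography matrix.

import cv2
import numpy as np

def project_lane_line(lane_points, homography):
    # lane_points: N x 2 pixel coordinates of a detection lane line from the
    # remaining camera; the result is the projected detection lane line in the
    # target camera's image coordinate system.
    pts = lane_points.reshape(-1, 1, 2).astype(np.float32)
    projected = cv2.perspectiveTransform(pts, homography)
    return projected.reshape(-1, 2)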
7. The method of claim 6, wherein the determining the verification result of the homography matrix based on the target detection lane line detected by the target camera and the projected detection lane line comprises:
aligning the projected detection lane line with the target detection lane line to obtain a lane line pair;
and, for each lane line pair, acquiring the translation distance and the parallelism between the projected detection lane line and the target detection lane line of the lane line pair, and taking the translation distance and the parallelism as the verification result.
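A minimal sketch of the verification measures of claim 7, assuming each lane line is represented by sampled pixel points: a least-squares line fit gives a direction for the parallelism check and a centroid offset for the translation distance. The particular formulas are illustrative assumptions, not the claimed definitions of these quantities.

import numpy as np

def fit_line(points):
    # Least-squares line fit: principal direction and centroid of the points.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[0], centroid

def verify_lane_line_pair(projected_points, target_points):
    d1, c1 = fit_line(projected_points)
    d2, c2 = fit_line(target_points)
    # Parallelism: angle in degrees between the two fitted line directions.
    parallelism = np.degrees(np.arccos(np.clip(abs(np.dot(d1, d2)), 0.0, 1.0)))
    # Translation distance: perpendicular offset of one line's centroid from the other line.
    normal = np.array([-d2[1], d2[0]])
    translation_distance = abs(np.dot(c1 - c2, normal))
    return translation_distance, parallelism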
8. A homography matrix determining apparatus, the apparatus comprising:
the acquisition module is used for acquiring a first calibration image of the first camera and a second calibration image of the second camera;
the matching module is used for matching the characteristic points of the first calibration image and the second calibration image to obtain matched target characteristic point pairs;
the determining module is used for determining a homography matrix between the first camera and the second camera based on the matched target feature point pairs; wherein,
the matching module further comprises:
the feature extraction unit is used for extracting features of the first calibration image and the second calibration image to obtain a first feature point and a second feature point;
the depth estimation unit is used for carrying out depth estimation on the first calibration image and the second calibration image to obtain a depth value of each feature point;
a feature point determining unit configured to determine a first candidate feature point and a second candidate feature point for matching from the first feature point and the second feature point based on the depth value;
and the feature point matching unit is used for carrying out feature point matching on the first candidate feature point and the second candidate feature point to obtain the target feature point pair.
9. The apparatus of claim 8, wherein the matching module is further configured to:
acquiring a plurality of depth intervals based on the depth values;
acquiring the number of feature points in each depth interval, and determining a target depth interval with the maximum number;
and determining the first candidate feature point and the second candidate feature point from the feature points in the target depth interval.
10. The apparatus of claim 8, wherein the matching module is further configured to:
performing feature point matching on the first candidate feature point and the second candidate feature point to obtain matched candidate feature point pairs;
and screening the candidate feature point pairs to obtain the target feature point pairs.
11. The apparatus of claim 10, wherein the matching module is further configured to:
for each candidate feature point pair, acquiring a first coordinate of the first candidate feature point in the candidate feature point pair and a second coordinate of the second candidate feature point, and acquiring a difference value between the first coordinate and the second coordinate;
and eliminating candidate feature point pairs with the difference value being greater than or equal to a set threshold value to obtain the target feature point pair.
12. The apparatus of claim 8, wherein the matching module is further configured to:
respectively acquiring, from the first calibration image and the second calibration image, a first image area and a second image area in which the visible angles of the first camera and the second camera overlap;
and carrying out feature extraction on the first image area to obtain a first feature point, and carrying out feature extraction on the second image area to obtain a second feature point.
13. The apparatus of claim 8, wherein the determining module is further configured to:
carrying out lane line detection through the first camera and the second camera to obtain respective detection lane lines;
based on the homography matrix, projecting detection lane lines of the remaining cameras under an image coordinate system of the target camera to obtain projected detection lane lines corresponding to the remaining cameras; wherein the target camera is one of the first camera and the second camera, and the remaining cameras are the other of the first camera and the second camera;
and determining a verification result of the homography matrix based on the target detection lane line detected by the target camera and the projected detection lane line.
14. The apparatus of claim 13, wherein the determining module is further configured to:
aligning the projected detection lane line with the target detection lane line to obtain a lane line pair;
and, for each lane line pair, acquiring the translation distance and the parallelism between the projected detection lane line and the target detection lane line of the lane line pair, and taking the translation distance and the parallelism as the verification result.
15. A vehicle, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
perform the steps of the method of any one of claims 1-7.
16. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the method of any of claims 1-7.
CN202310899568.0A 2023-07-21 2023-07-21 Homography matrix determination method and device and vehicle Pending CN116934870A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310899568.0A CN116934870A (en) 2023-07-21 2023-07-21 Homography matrix determination method and device and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310899568.0A CN116934870A (en) 2023-07-21 2023-07-21 Homography matrix determination method and device and vehicle

Publications (1)

Publication Number Publication Date
CN116934870A true CN116934870A (en) 2023-10-24

Family

ID=88393790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310899568.0A Pending CN116934870A (en) 2023-07-21 2023-07-21 Homography matrix determination method and device and vehicle

Country Status (1)

Country Link
CN (1) CN116934870A (en)


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination