CN117523010B - Method and device for determining camera pose of vehicle, computer equipment and storage medium


Info

Publication number
CN117523010B
Authority
CN
China
Prior art keywords
pose
vehicle
image
initial
camera
Prior art date
Legal status
Active
Application number
CN202410016859.5A
Other languages
Chinese (zh)
Other versions
CN117523010A (en)
Inventor
裴朝科
周涤非
Current Assignee
Shenzhen Ouye Semiconductor Co., Ltd.
Original Assignee
Shenzhen Ouye Semiconductor Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Ouye Semiconductor Co., Ltd.
Priority to CN202410016859.5A
Publication of CN117523010A
Application granted
Publication of CN117523010B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20164 Salient point detection; Corner detection
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The present application relates to a method, an apparatus, a computer device, a storage medium and a computer program product for determining the camera pose of a vehicle. The method comprises the following steps: acquiring a body image of the vehicle through a camera carried by the vehicle; detecting image feature points of the vehicle in the vehicle body image; performing spatial conversion on the image feature points according to the initial pose of the camera to obtain spatial feature points of the vehicle; and correcting the initial pose based on the feature deviation of the spatial feature points under the initial pose to obtain a corrected pose. The method does not depend on the external environment outside the vehicle and has the advantage of high accuracy.

Description

Method and device for determining camera pose of vehicle, computer equipment and storage medium
Technical Field
The present invention relates to the field of image processing of vehicles, and in particular, to a method, an apparatus, a computer device, a storage medium, and a computer program product for determining a camera pose of a vehicle.
Background
For online calibration of a camera, a common practice is to estimate the extrinsic parameters of the camera by assuming the presence of certain regular patterns (such as ground lane lines), known objects, or the vanishing point of the horizon at infinity, so as to further determine the pose of the camera.
These methods depend heavily on the external environment. Lane lines, for example, may be curved or straight, solid or broken, and differ in shape and thickness, so the result is strongly correlated with road conditions; likewise, the vanishing point of the horizon at infinity is strongly affected by the vehicle ascending or descending slopes, turning, and the like.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a camera pose determination method, apparatus, computer device, computer-readable storage medium, and computer program product for a vehicle that do not depend on the external environment outside the vehicle and have the advantage of high accuracy.
In a first aspect, the present application provides a method for determining a camera pose of a vehicle, the method comprising:
acquiring a body image of a vehicle through a camera carried by the vehicle;
detecting image feature points of the vehicle in the vehicle body image;
according to the initial pose of the camera, performing space conversion on the image feature points to obtain the space feature points of the vehicle;
and correcting the initial pose based on the characteristic deviation of the spatial characteristic points under the initial pose to obtain a corrected pose.
In one embodiment, the detecting, in the vehicle body image, image feature points of the vehicle includes:
performing corner detection on a preset area of the vehicle body image to obtain corner points of the vehicle;
wherein the preset area comprises at least a partial contour of the vehicle.
In one embodiment, the detecting the corner of the preset area of the vehicle body image to obtain the corner of the vehicle includes:
determining a pixel gradient in the preset region;
and determining the change value of the pixel gradient in the subarea of the preset area, and detecting the pixel points meeting the corner response condition according to the change value to obtain the corner point of the vehicle.
In one embodiment, the method further comprises:
detecting an image feature difference between the image feature points and initial image feature points, wherein the initial pose corresponds to the initial image feature points;
if the image feature difference meets the pose adjustment condition, executing the step of correcting the initial pose;
and if the image feature difference does not meet the pose adjustment condition, determining the current pose of the camera based on the initial pose.
In one embodiment, the correcting the initial pose based on the feature deviation of the spatial feature points under the initial pose to obtain a corrected pose includes:
determining a spatial feature difference between initial spatial feature points and the spatial feature points, wherein the initial pose corresponds to the initial spatial feature points;
and correcting the initial pose according to the spatial feature difference to obtain the corrected pose of the camera.
In one embodiment, the determining the spatial feature difference between the initial spatial feature points and the spatial feature points includes:
performing model fitting on the spatial feature points to obtain a point fitting model;
filtering outliers from the spatial feature points according to the deviation between the spatial feature points and the point fitting model, to obtain filtered spatial feature points;
and determining the spatial feature difference between the filtered spatial feature points and the initial spatial feature points.
The correcting the initial pose according to the spatial feature difference to obtain the corrected pose of the camera includes:
adjusting the initial pose according to the spatial feature difference to obtain an adjusted pose and a corresponding adjusted feature deviation, repeating the adjustment until the adjusted feature deviation satisfies a deviation minimization condition, and determining the corrected pose from the adjusted pose.
In one embodiment, before the spatial conversion is performed on the image feature points according to the initial pose of the camera to obtain the spatial feature points of the vehicle, the method further includes:
performing image acquisition and calibration point detection on a calibration plate with the camera of the vehicle to obtain image calibration points;
acquiring the position and size of the calibration plate, and determining spatial calibration points corresponding to the image calibration points based on the position and size of the calibration plate;
and determining the initial pose according to the correspondence between the image calibration points and the spatial calibration points.
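For illustration only (this is a hedged sketch, not the patent's own implementation), the initial pose can be recovered from such calibration-point correspondences via a plane-to-image homography, assuming a pinhole camera with a known intrinsic matrix K and a flat calibration plate lying on the plane Z = 0. All numeric values below are hypothetical.

```python
import numpy as np

def pose_from_plane(K, world_xy, image_uv):
    """Recover the camera pose (R, t) from >= 4 points on a planar
    calibration plate (plate plane Z = 0) and their image projections,
    assuming a pinhole camera with known intrinsics K."""
    # Estimate the plate-to-image homography H by the DLT method
    A = []
    for (X, Y), (u, v) in zip(world_xy, image_uv):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)

    # Decompose H = K [r1 r2 t] (up to scale) into rotation and translation
    B = np.linalg.inv(K) @ H
    if B[2, 2] < 0:              # choose the scale that puts the plate in front
        B = -B
    s = np.linalg.norm(B[:, 0])
    r1, r2, t = B[:, 0] / s, B[:, 1] / s, B[:, 2] / s
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    return R, t

# Hypothetical setup: camera 2 m in front of the plate, no rotation
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
world_xy = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.3)]
image_uv = [(250 * X + 320, 250 * Y + 240) for X, Y in world_xy]
R, t = pose_from_plane(K, world_xy, image_uv)
```

With noise-free synthetic correspondences the decomposition returns the exact pose; in practice the recovered R would additionally be re-orthogonalized (e.g. via SVD) before use.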
In a second aspect, the present application further provides a camera pose determining apparatus of a vehicle. The device comprises:
the image acquisition module is used for acquiring a body image of the vehicle through a camera carried by the vehicle;
an image detection module for detecting image feature points of the vehicle in the vehicle body image;
the space conversion module is used for performing space conversion on the image characteristic points according to the initial pose of the camera to obtain the space characteristic points of the vehicle;
and the pose determining module is used for correcting the initial pose based on the characteristic deviation of the spatial characteristic points under the initial pose to obtain the corrected pose.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the camera pose determination method of the vehicle in any of the embodiments described above.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the camera pose determination method of the vehicle in any of the embodiments described above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the camera pose determination method of the vehicle in any of the embodiments described above.
In the camera pose determination method, apparatus, computer device, storage medium and computer program product of the vehicle, the image feature points of the vehicle itself are fixed, so the detection speed of the image feature points is high. Moreover, relative to the external environment, the image feature points of the vehicle are single and stable, so the reliability of the corrected pose is relatively high. In addition, as long as the camera can observe the vehicle, the scheme can be executed to determine the corrected pose with little influence from the external environment, so the corrected pose has high stability. In a scenario using an intelligent electronic rearview mirror, the pose of the camera can thus be corrected quickly and accurately.
Drawings
FIG. 1 is an application environment diagram of a method for determining camera pose of a vehicle in one embodiment;
FIG. 2 is a flow chart of a method of determining camera pose of a vehicle in one embodiment;
FIG. 3 is a schematic diagram showing the effect of corner detection in one embodiment;
FIG. 4 is a schematic diagram showing the effect of image calibration point and spatial calibration point detection in one embodiment;
FIG. 5 is a schematic diagram showing the specific effects of image calibration points and spatial calibration point detection in one embodiment;
FIG. 6 is a schematic diagram showing the specific effects of corner detection in one embodiment;
FIG. 7 is a block diagram showing a configuration of a camera pose determining apparatus of a vehicle in one embodiment;
FIG. 8 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The camera pose determination method of the vehicle provided by the embodiments of the present application can be applied to the application environment shown in FIG. 1. The terminal 102 is a computer device used for correcting the pose of at least one camera carried by the vehicle; the camera may be any camera capable of capturing the vehicle body. The terminal 102 may be, but is not limited to, a personal computer or an Internet of Things device such as an intelligent in-vehicle device.
According to the initial pose of the camera and the ground-contact point of a target, the lateral and longitudinal distances between the vehicle and other targets such as vehicles or pedestrians can be estimated; here, other vehicles are vehicles that do not carry the above camera, including but not limited to motor vehicles and non-motor vehicles. During actual use, the true pose of the camera may drift away from the initial pose. The change may be gradual, for example caused by external forces acting on the camera over long periods of driving; it may also be abrupt, for example caused by an external impact or improper operation that requires the mirror to be temporarily reinstalled.
In a scenario such as an electronic rearview mirror, the camera is constrained by its environment, and recalibrating the pose in time with a calibration tool is often impractical. The present solution corrects the initial pose without using a calibration tool and thereby ensures the accuracy of distance measurement, so that the terminal or the intelligent system of the vehicle can work normally.
The initial pose of the camera could instead be corrected by estimating the camera extrinsics from certain regular patterns (such as ground lane lines), objects, or the vanishing point of the horizon at infinity. That approach has two defects. First, it depends heavily on the external environment: lane lines may be curved or straight, solid or broken, and differ in shape and thickness, so the result is strongly correlated with road conditions, and the vanishing point of the horizon at infinity is strongly affected by the vehicle ascending or descending slopes, turning, and the like. Second, a dedicated detection algorithm that is complex and computationally expensive is needed to detect targets and feature points in the captured image, and the consistency of its detection results is difficult to guarantee, so the accuracy is poor.
In one embodiment, as shown in fig. 2, there is provided a camera pose determining method of a vehicle, which is applied to the terminal 102 in fig. 1, including the steps of:
step 202, acquiring a body image of a vehicle through a camera carried by the vehicle.
The camera carried by the vehicle is arranged on the vehicle and can capture images of the vehicle on which it is located. Optionally, the camera may be arranged on a rearview mirror of the vehicle, on a reversing radar of the vehicle, or at another location.
The vehicle body image is acquired by the camera and contains at least part of the structure of the vehicle. Optionally, the direction in which the camera captures images matches the source direction of the light reflected by the rearview mirror, so as to realize the function of the rearview mirror. Optionally, the camera carried by the vehicle is an electronic rearview mirror of the vehicle.
The vehicle body image is obtained by capturing an image of at least part of the structure of the vehicle. Optionally, the vehicle body image contains intersecting lines, and the intersecting lines are detected to obtain image feature points in the vehicle body image. Optionally, in the body image, a partial region of the vehicle may be covered by the contour line of a door or a door handle. Optionally, in the case where the camera is an electronic rearview mirror, the vehicle body image contains not only a partial area of the vehicle but also environmental targets; environmental targets include, but are not limited to, pedestrians, motor vehicles and non-motor vehicles.
Optionally, the camera is fixed on the vehicle, and the image collected by the fixed camera covers a certain preset area of the vehicle on which the camera itself is located. In this way, while the orientation of the camera is fixed its pose is unchanged, so image acquisition can focus on the fixed area and the corrected pose is more accurate.
In an alternative embodiment, capturing the body image of the vehicle through the camera carried by the vehicle includes: when a control instruction for starting the vehicle is detected, controlling the camera carried by the vehicle to capture an image along the boundary direction of the vehicle where the camera is located, obtaining the body image of the vehicle.
In another alternative embodiment, capturing the body image of the vehicle through the camera carried by the vehicle includes: detecting vibration at the position of the camera; and if the vibration amplitude at the position of the camera is detected to be larger than a preset vibration amplitude, controlling the camera carried by the vehicle to capture an image toward a partial area of the vehicle where the camera is located, obtaining the body image of the vehicle. In this way, the vibration amplitude of the camera is detected by a sensor such as a vibration sensor, and the conditions under which the camera needs pose correction can be accurately perceived, so that the pose can be corrected in time.
Step 204, detecting image feature points of the vehicle in the vehicle body image.
The image feature points are feature points obtained by detecting features of the vehicle in the vehicle body image, and can reflect the structural information of the vehicle itself. Optionally, the image feature points may include key-location feature points of the vehicle, texture feature points of the vehicle, or corner points of the vehicle.
Optionally, key-location feature points or texture feature points may be determined by local feature descriptors (e.g., SIFT, SURF or ORB). Key-location feature points correspond to vehicle structures such as lights, handles and windows, while texture feature points are texture information on the vehicle surface that can serve as feature points, including but not limited to the painting, logos and patterns of the vehicle body.
Step 206, performing spatial conversion on the image feature points according to the initial pose of the camera to obtain the spatial feature points of the vehicle.
The initial pose is used to represent the initial conversion relationship between the dimension of the image and the dimension of the vehicle. The initial pose is a known pose and may be stored in the terminal 102 or in another computer device of the vehicle. Optionally, the dimension of the image is a camera coordinate system, and the dimension of the vehicle is a spatial coordinate system; such a spatial coordinate system includes, but is not limited to, a world coordinate system, a vehicle body coordinate system, or a virtual coordinate system. Optionally, the initial pose is a preset pose; it may be a pose measured during production of the terminal, or a pose stored long-term after the terminal 102 is produced. Optionally, the initial pose of the camera is characterized by a rotation matrix and a translation vector.
The spatial feature points are feature points corresponding to the image feature points in the spatial dimension of the vehicle. Optionally, the spatial feature point is a coordinate point or a coordinate region of a certain spatial coordinate system in the dimension where the vehicle is located; wherein such a spatial coordinate system includes, but is not limited to, a world coordinate system, a vehicle body coordinate system, or a virtual coordinate system.
In an alternative embodiment, according to the initial pose of the camera, the spatial transformation is performed on the image feature points to obtain the spatial feature points of the vehicle, including: and mapping the image feature points into a world coordinate system according to the initial pose of the camera to obtain the spatial feature points of the vehicle.
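One possible form of this mapping is sketched below, under the assumption that the mapped feature points lie on a known plane (here the world ground plane Z = 0) and that the intrinsic matrix K is known; the numeric camera setup is purely illustrative and not taken from the patent.

```python
import numpy as np

def image_to_world_ground(uv, K, R, t):
    """Map an image feature point onto the world ground plane Z = 0.

    Assumes a pinhole camera with intrinsics K and an initial pose
    (R, t) mapping world points to camera points: X_c = R @ X_w + t.
    """
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])  # viewing ray, camera frame
    ray_world = R.T @ ray_cam                                   # rotate ray into world frame
    cam_center = -R.T @ t                                       # camera center in world frame
    s = -cam_center[2] / ray_world[2]                           # intersect the ray with Z = 0
    return cam_center + s * ray_world

# Illustrative setup: camera 1.5 m above the ground, looking straight down
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = np.diag([1.0, -1.0, -1.0])          # camera z-axis points toward the ground
t = -R @ np.array([0.0, 0.0, 1.5])      # camera center at (0, 0, 1.5) in the world
p = image_to_world_ground((320, 240), K, R, t)   # principal point maps to the origin
```

The same formula applied with a slightly wrong (R, t) displaces the resulting spatial feature points, which is exactly the feature deviation the subsequent correction step exploits.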
And step 208, correcting the initial pose based on the characteristic deviation of the spatial characteristic points under the initial pose, and obtaining the corrected pose.
The characteristic deviation is a deviation value generated by the spatial characteristic point under the initial pose. The characteristic deviation is used for representing the current deviation degree of the initial pose under the spatial dimension of the vehicle. Optionally, the feature deviation is a feature deviation between spatial feature points of different images in the initial pose; optionally, the feature deviation is a position deviation between the spatial feature point and a preset position in the initial pose. Alternatively, in the case where the image feature point, the spatial feature point, and the preset position are all represented by coordinates, the position deviation may be a coordinate deviation or an angle deviation, and the position deviation may also be a pixel deviation.
The corrected pose is the correction result of the initial pose; optionally, the corrected pose is a pose obtained by at least once adjusting the initial pose, and may also be the initial pose. Optionally, if the feature deviation of the spatial feature points under the initial pose is smaller than a preset value, determining the current pose according to the initial pose; if the characteristic deviation of the spatial characteristic points under the initial pose is larger than a preset value, correcting the initial pose according to the characteristic deviation to obtain the corrected pose.
In an alternative embodiment, correcting the initial pose based on the feature deviation of the spatial feature points under the initial pose to obtain the corrected pose includes: extracting a plurality of pairs of key points from different spatial feature points; filtering out key points whose matching degree is smaller than a preset matching degree to obtain target matching point pairs; calculating a transformation relation matrix between the different spatial feature points based on the target matching point pairs; and determining the deflection angle of the camera transformation based on the transformation relation matrix. The different spatial feature points are obtained by converting, based on the initial pose, different image feature points in different vehicle images of the same vehicle; the different vehicle images are captured from different angles, and the angle difference between the angles is a preset angle. By solving the transformation relation matrix from at least two pictures taken before and after the camera angle changes, the deflection angle of the camera transformation is resolved, and the initial pose of the camera can then be corrected according to the deflection angle, improving calibration efficiency.
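As a self-contained sketch of this step (synthetic data, not the patent's implementation), the transformation relation matrix between matched 2-D spatial feature points can be fitted with an SVD-based (Kabsch) method, and the deflection angle read off the resulting rotation:

```python
import numpy as np

def deflection_angle(pts_a, pts_b):
    """Estimate the in-plane deflection angle (radians) from matched
    2-D spatial feature points observed before/after the pose change.

    Fits the rotation in pts_b ~= pts_a @ R.T via SVD (Kabsch method).
    """
    a = pts_a - pts_a.mean(axis=0)
    b = pts_b - pts_b.mean(axis=0)
    U, _, Vt = np.linalg.svd(a.T @ b)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:     # enforce a proper rotation (det = +1)
        Vt[-1] *= -1
        R = (U @ Vt).T
    return np.arctan2(R[1, 0], R[0, 0])

# Synthetic target matching point pairs rotated by a known 5-degree deflection
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(8, 2))
theta = np.deg2rad(5.0)
Rt = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pts_rot = pts @ Rt.T
angle = deflection_angle(pts, pts_rot)
```

Subtracting the centroids before the fit makes the estimate insensitive to any translation between the two point sets, so only the rotational deflection remains.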
In the above method for determining the camera pose of a vehicle, a body image of the vehicle is acquired through a camera carried by the vehicle, and image feature points of the vehicle are detected in the body image. Since the image feature points of the vehicle itself are fixed, they can be detected quickly. Moreover, relative to the external environment, the image feature points of the vehicle are single and stable, so the reliability of the corrected pose is relatively high. In addition, as long as the camera can observe the body of the vehicle, the scheme can be executed to determine the corrected pose with little influence from the external environment, so the corrected pose has high stability. On this basis, the image feature points are spatially converted according to the initial pose of the camera to obtain the spatial feature points of the vehicle, so that the mapping effect of the initial pose is reflected by the spatial feature points; finally, the initial pose is corrected based on the feature deviation of the spatial feature points under the initial pose to obtain the corrected pose. Thus, in a scenario using an intelligent electronic rearview mirror, the pose of the camera can be corrected quickly and accurately.
In other words, the method calibrates the camera pose online using the appearance features of the vehicle body itself and depends little on the external environment. The data used by the algorithm come only from features of the vehicle itself, which improves detection speed, reliability and stability, effectively remedies the shortcomings of the general schemes, and makes online camera calibration simple and effective to implement in an intelligent electronic rearview mirror scenario.
In an alternative embodiment, in a vehicle body image, detecting an image feature point of a vehicle includes: detecting corner points of a preset area of the vehicle body image to obtain corner points of the vehicle; wherein the predetermined area comprises at least a partial contour of the vehicle.
The preset area is the area of the vehicle itself in the vehicle body image; it includes at least part of the contour of the vehicle, on which the present solution performs corner detection. Optionally, the preset area may be determined from a preset coordinate set or coordinate boundary line; alternatively, the vehicle structure in the body image may be determined first, and the area where the vehicle structure is located is then taken as the preset area. Vehicle structures include, but are not limited to, lights, handles, and windows. The contour of the vehicle consists of intersecting lines, and the corner points are corner points on this contour. Optionally, the contour of the vehicle may consist of a contour line on a door. Optionally, the corner points of the vehicle may be detected by algorithms such as Harris or Shi-Tomasi.
In an optional implementation, performing corner detection on the preset area of the vehicle body image to obtain the corner points of the vehicle includes: performing edge detection on the preset area of the vehicle body image to obtain the contour line of the vehicle; and determining the corner points of the vehicle on the contour line based on the degree of variation of the pixel values. Illustratively, the contour of the vehicle is the gray polyline on the largest body structure in FIG. 3, with the corner points located at the corners of the gray polyline.
In this embodiment, the image feature points are corner points obtained by detecting a preset area of the vehicle body image, and because the corner points can more accurately represent the contour of the vehicle, further obtain spatial feature points with higher precision, so that feature deviation of the spatial feature points under the initial pose is more accurate, and further obtain a more accurate corrected pose.
In an optional implementation manner, performing corner detection on a preset area of a vehicle body image to obtain a corner of a vehicle, including: determining a pixel gradient in a preset region; and determining a change value of the pixel gradient in a sub-region of the preset region, and detecting pixel points meeting corner response conditions according to the change value to obtain corner points of the vehicle.
The pixel gradient is used to characterize the degree of change of the pixel values in at least one direction. Optionally, the vehicle body image may first be converted into a grayscale image, and the pixel gradient of the grayscale image in the preset area is then calculated, so that the pixel gradient is represented more accurately. Optionally, the pixels in the preset area may be processed by an operator such as Sobel or Prewitt to obtain the pixel gradient.
The sub-region is a local area, within the preset area of the body image, in which corner points are detected based on the change value of the pixel gradient. Optionally, the sub-region is a window region sliding along a certain direction over the preset area; the pixels in the window region form adjacent pixels, and the corner points of the vehicle can be determined more accurately through the change value of the pixel gradient among adjacent pixels. The sub-regions are used both to determine the change value of the pixel gradient and to derive the corner points of the vehicle.
The change value of the pixel gradient is a characteristic value of the variation of the pixel gradient, and is used to represent the degree of change of the pixel gradient so that the corner points of the vehicle can be determined more finely. Optionally, the pixel gradient of the preset area may be processed by an operator such as Sobel or Prewitt to obtain the change value of the pixel gradient. Optionally, the change value of the pixel gradient is the covariance matrix of the pixel gradient within the sub-region.
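Such a gradient covariance matrix can be illustrated as follows; this minimal sketch uses central differences in place of a true Sobel or Prewitt operator, and the image is a synthetic step edge rather than a real body image.

```python
import numpy as np

def gradient_covariance(gray, y, x, win=1):
    """Gradient covariance (structure tensor) of the sub-region around (y, x).

    Returns the 2x2 matrix M = [[sum Ix^2, sum IxIy], [sum IxIy, sum Iy^2]],
    which summarises how the pixel gradient varies inside the window.
    """
    Iy, Ix = np.gradient(gray.astype(float))   # central differences, axis 0 then 1
    ys = slice(y - win, y + win + 1)
    xs = slice(x - win, x + win + 1)
    gx, gy = Ix[ys, xs], Iy[ys, xs]
    return np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                     [np.sum(gx * gy), np.sum(gy * gy)]])

edge = np.zeros((7, 7))
edge[:, 4:] = 1.0                  # synthetic vertical step edge
M = gradient_covariance(edge, 3, 3)
# One large and one zero eigenvalue: an edge, not a corner
```

At a true corner both eigenvalues of M are large; at an edge like this one only a single eigenvalue is, which is what the corner response conditions below test.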
The corner response condition is an index set for the change value. Optionally, within the sub-region, the change values of the pixel gradient at the individual pixel points may be compared to obtain a comparison result, and the corner points among the pixel points are determined according to the comparison result.
In an exemplary embodiment, detecting pixel points that satisfy the corner response condition according to the change value to obtain the corner points of the vehicle includes: inputting the change value of the pixel gradient into a Harris response function and performing non-maximum suppression to obtain local maxima; the pixel locations of the local maxima are the corner points of the vehicle.
In another exemplary embodiment, detecting pixel points that satisfy the corner response condition according to the change value to obtain the corner points of the vehicle includes: inputting the change value of the pixel gradient into the Shi-Tomasi response function to obtain pixel characteristic values within the sub-region, where a pixel characteristic value represents the gradient change of the sub-region; determining the minimum of the pixel characteristic values; and, if the minimum of the pixel characteristic values is larger than a preset corner threshold, determining the position of that minimum as a corner point of the vehicle.
In this embodiment, the change value of the pixel gradient is determined on top of the pixel gradient, which itself represents the degree of change of the pixel values; the change value therefore reflects the characteristics of a corner point more finely. The sub-regions into which the preset area is divided are the areas in which corner detection is performed, and detecting corner points within the sub-regions by means of the change value of the pixel gradient allows the corner points of the vehicle to be determined more precisely.
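The Harris variant of the detection described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the claimed implementation: central-difference pixel gradients, a 3x3 box window standing in for the sub-region, and the standard Harris response det(M) - k*trace(M)^2 computed from the gradient covariance terms.

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    # Pixel gradients: degree of change of pixel values along y and x.
    Iy, Ix = np.gradient(img.astype(float))
    # Gradient products entering the covariance (structure) matrix.
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box_sum(a):
        # Sum over a sliding win x win window (the "sub-region").
        pad = win // 2
        ap = np.pad(a, pad)
        out = np.zeros_like(a)
        for dy in range(win):
            for dx in range(win):
                out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)
    # Harris response per pixel: det(M) - k * trace(M)^2.
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# Synthetic 20x20 image containing a bright 10x10 square: the square's
# corners respond strongly, its edges weakly, and flat areas not at all.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

In OpenCV this corresponds to `cv2.cornerHarris` (Harris) or `cv2.goodFeaturesToTrack` (Shi-Tomasi), which the text cites as typical detectors.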
In an alternative embodiment, the method further comprises: detecting image feature differences between the image feature points and the initial image feature points; the initial pose corresponds to the initial image feature point; if the image characteristic difference accords with the pose adjustment condition, executing the initial pose correction step; if the image characteristic difference does not meet the pose adjustment condition, determining the current pose of the camera based on the initial pose.
The initial image feature points are the image feature points known under the initial pose. Optionally, both the initial image feature points and the image feature points may be pixel locations, which may take the form of pixel coordinates. Optionally, after the initial pose is determined and while the initial pose still meets a pose accuracy condition, the camera collects an image and feature points are extracted from it to obtain the initial image feature points. Optionally, the pose accuracy condition may be that, counting from the moment the initial pose of the camera is determined, the current time point lies within a certain time period or certain events are satisfied. For example: after the initial pose is determined, image acquisition and feature point extraction are performed through the camera carried by the vehicle, so that the initial image feature points are obtained.
The image feature difference is the feature difference between the image feature points and the initial image feature points and is used to represent the degree of feature deviation of the vehicle image at the image level. That the initial pose corresponds to the initial image feature points means that the initial image feature points are set for the initial pose, so that whether the initial pose needs to be adjusted can be judged directly at the image level. The image level may refer to a two-dimensional image, or to a multi-dimensional image whose extra dimensions do not represent spatial dimensions; for example, gradient information may be used as a third dimension on top of the two dimensions of the planar coordinate system, yielding an image feature difference at a three-dimensional image level.
Optionally, the image feature points and the initial image feature points each comprise a plurality of feature points. The image feature difference can then be determined from a plurality of single-feature-point deviations between the image feature points and the initial image feature points, where a single-feature-point deviation is the deviation between one image feature point and one initial image feature point and includes, but is not limited to, a gradient difference or a coordinate difference of the image. In this case, the image feature difference may be the average of the single-feature-point deviations, or a matrix formed by them.
Alternatively, the image feature differences may be gradient differences or coordinate differences; for example: the image feature difference may be a coordinate difference between the image feature point and the initial image feature point, or may be a coordinate variance between the image feature point and the initial image feature point.
The pose adjustment condition is an image-level condition set for the image feature difference. Optionally, if the image feature difference is smaller than a deviation threshold, the image feature difference does not meet the pose adjustment condition; if the image feature difference is larger than the deviation threshold, the image feature difference meets the pose adjustment condition. The image feature difference here may be a gradient difference or a coordinate difference.
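A minimal sketch of such a threshold check, under the assumption that the image feature difference is the mean pixel-coordinate deviation between matched feature points; the function name and the 2-pixel threshold are illustrative choices, not values from the specification.

```python
import numpy as np

def meets_pose_adjustment_condition(feature_pts, initial_pts, threshold=2.0):
    """Image-level check: compare the mean pixel-coordinate deviation
    between current and initial image feature points to a threshold."""
    d = np.linalg.norm(np.asarray(feature_pts, float)
                       - np.asarray(initial_pts, float), axis=1)
    return float(d.mean()) > threshold

# Identical points: no correction needed, the current pose is kept.
same = meets_pose_adjustment_condition([[10, 10], [20, 20]],
                                       [[10, 10], [20, 20]])
# Each point shifted by a (3, 4) pixel offset, i.e. 5 px of deviation:
# the pose adjustment condition is met and correction is triggered.
shifted = meets_pose_adjustment_condition([[13, 14], [23, 24]],
                                          [[10, 10], [20, 20]])
```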
Optionally, performing the initial pose correction step may refer to the terminal performing steps 206 and 208, in any of the embodiments of those steps. Specifically, the step of correcting the initial pose includes: performing spatial conversion on the image feature points according to the initial pose of the camera to obtain the spatial feature points of the vehicle; and correcting the initial pose based on the feature deviation of the spatial feature points under the initial pose to obtain the corrected pose.
The current pose is the pose used when no correction is applied to the initial pose. Optionally, the current pose may be the initial pose itself, or the result of mapping the initial pose.
In an alternative embodiment, determining the current pose of the camera based on the initial pose comprises: the initial pose is determined as the current pose of the camera. Therefore, the initial pose is not adjusted, and is directly taken as the current pose of the camera, so that the data volume required to be processed is relatively low.
In an alternative embodiment, determining the current pose of the camera based on the initial pose comprises: and mapping the initial pose into the current pose of the camera according to the mapping parameters of the initial pose under the model of the vehicle. Therefore, the initial pose is unified, mapping is carried out according to the mapping parameters under the model of the vehicle, and the calibration efficiency of the initial pose is relatively high.
In this embodiment, the initial pose has corresponding initial image feature points, and the image feature difference between the image feature points and the initial image feature points makes it possible to decide efficiently, at the pixel level of the image, whether the initial pose needs to be corrected. If no correction is needed, the pose of the camera can be determined from the initial pose alone, without performing spatial conversion on the image feature points, which is efficient. If correction is needed, then since the initial image feature points are existing data and the image feature points are the data used to perform the initial pose correction, only the comparison of the image feature differences is performed in addition, so the initial pose is corrected with relatively high efficiency.
In an alternative embodiment, correcting the initial pose based on the feature deviation of the spatial feature points under the initial pose, to obtain a corrected pose, includes: determining a spatial feature difference between the initial spatial feature point and the spatial feature point; the initial pose corresponds to the initial spatial feature point; and correcting the initial pose according to the space characteristic difference to obtain the corrected pose of the camera.
The initial spatial feature points are the spatial feature points known under the initial pose. Optionally, both the initial spatial feature points and the spatial feature points may be spatial locations, which may take the form of spatial coordinates.
Optionally, the initial spatial feature points are obtained by performing spatial conversion on the initial image feature points based on the initial pose. When the initial pose has been determined and still meets the pose accuracy condition, the camera performs image acquisition and feature point extraction, so that the initial image feature points are obtained.
Optionally, the pose accuracy condition may be that, counting from the moment the initial pose of the camera is determined, the current time point lies within a certain time period or certain events are satisfied. For example: after the initial pose is determined, image acquisition and feature point extraction are carried out through the camera carried by the vehicle to obtain the initial image feature points, and the initial image feature points thus obtained are spatially converted according to the initial pose of the camera to obtain the initial spatial feature points of the vehicle.
The spatial feature difference is the feature difference between the initial spatial feature points and the spatial feature points in the spatial dimensions of the vehicle, and is used to characterize the feature deviation of the vehicle image at the spatial level. That the initial pose corresponds to the initial spatial feature points means that initial spatial feature points are set for the initial pose, so that the initial pose can be corrected more finely through the spatial dimensions. The spatial level may be a three-dimensional level representing the spatial dimensions; for example, gradient information may be used as a fourth dimension on top of the length, width and height dimensions provided by the spatial coordinate system, yielding a spatial feature difference at a four-dimensional spatial level.
Optionally, the spatial feature points and the initial spatial feature points each comprise a plurality of feature points. The spatial feature difference can then be determined from a plurality of single-feature-point spatial deviations between the spatial feature points and the initial spatial feature points, where a single-feature-point spatial deviation is the deviation between one spatial feature point and one initial spatial feature point and includes, but is not limited to, a gradient difference or a spatial coordinate difference. In this case, the spatial feature difference may be the average of the single-feature-point deviations, or a matrix formed by them.
Alternatively, the spatial feature difference may be a gradient difference or a coordinate difference; for example: the spatial feature difference may be a coordinate difference between the spatial feature point and the initial spatial feature point, or may be a coordinate variance between the spatial feature point and the initial spatial feature point.
In this embodiment, the initial pose corresponds to the initial spatial feature points. By determining the spatial feature difference between the initial spatial feature points and the spatial feature points, the degree of deviation between the actual pose of the camera and the initial pose can be reflected more finely in the dimensions in which the vehicle is located, so that correcting the initial pose according to the spatial feature difference yields a corrected pose of higher accuracy.
In an alternative embodiment, determining the spatial feature difference between the initial spatial feature point and the spatial feature point comprises: performing model fitting according to the space feature points to obtain a filter point fitting model; according to the deviation between the spatial feature points and the filter point fitting model, carrying out outlier filtering on the spatial feature points to obtain filtered spatial feature points; and determining the spatial feature difference between the filtered spatial feature points and the initial spatial feature points.
Correspondingly, correcting the initial pose according to the spatial feature difference to obtain the corrected pose of the camera includes: adjusting the initial pose according to the spatial feature difference to obtain an adjusted pose and a corresponding adjusted feature deviation, and, once the adjusted feature deviation meets the deviation minimization condition, determining the corrected pose according to the adjusted pose.
The filter point fitting model is a feature point distribution model obtained by performing model fitting on the spatial feature points. It is used to estimate the distribution positions of the spatial feature points under normal conditions, so that abnormal points can be filtered out of the spatial feature points and the spatial feature difference obtained more accurately. Optionally, the filter point fitting model may be obtained by performing linear or nonlinear fitting on the spatial feature points.
The filtered spatial feature points are the spatial feature points whose deviation from the filter point fitting model meets the normal conditions. Because the filter point fitting model is obtained by fitting, some spatial feature points are abnormal points that do not agree with the model and deviate from it excessively; determining the spatial feature difference from the filtered spatial feature points and the initial spatial feature points therefore makes the spatial feature difference more accurate.
The adjusted pose is the pose obtained by adjusting the initial pose. The adjusted feature deviation is the deviation value between the spatial feature points and the initial spatial feature points under the adjusted pose; it represents the current degree of deviation of the current adjusted pose in the spatial dimensions of the vehicle and is determined based on the current adjusted pose. Optionally, the adjusted feature deviation may be an aggregate of the deviations of the adjusted spatial feature points from the corresponding initial feature points, such as a sum of squared differences or of ratios.
In a specific embodiment, adjusting the initial pose according to the spatial feature difference to obtain the adjusted pose and the corresponding adjusted feature deviation, and determining the corrected pose according to the adjusted pose once the adjusted feature deviation meets the deviation minimization condition, includes: adjusting the initial pose according to the spatial feature difference to obtain the adjusted pose; determining, according to the adjusted pose, the adjusted spatial feature points obtained by spatially converting the image feature points, together with the adjusted feature deviation of those adjusted spatial feature points; and, when the adjusted feature deviation meets the deviation minimization condition, determining the corrected pose according to the adjusted pose.
The adjusted spatial feature points are obtained by performing spatial conversion on the image feature points according to the adjusted pose of the camera, and belong to the result of adjusting the spatial feature points of the vehicle. The adjusted feature deviation is a feature deviation between the adjusted spatial feature point and the initial spatial feature point.
Specifically, before the adjusted feature deviation meets the deviation minimization condition, the initial pose is adjusted successively, yielding an adjusted pose each time; according to each adjusted pose, the image feature points are spatially converted to obtain the adjusted spatial feature points and the adjusted feature deviation for that adjustment; and the adjusted feature deviation obtained in each adjustment is used to judge whether the adjusted pose meets the deviation minimization condition. Once the adjusted feature deviation meets the deviation minimization condition, the corrected pose can be determined from the adjusted pose, without determining the adjusted spatial feature points and their adjusted feature deviation again.
The deviation minimization condition is a minimization condition set for the adjusted feature deviation. When the adjusted feature deviation is smaller than a certain threshold, it can be determined that the adjusted feature deviation meets the deviation minimization condition; at that point, the adjusted pose from which the adjusted feature deviation was determined may be used as the corrected pose, or may be mapped or otherwise processed to obtain the corrected pose. That the adjusted feature deviation meets the deviation minimization condition means the pose may have gone through several rounds of model fitting, with the filtered spatial feature points continually adjusted, completing the solving process of the fitted model.
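The adjust-until-minimized loop can be sketched as follows. This is a simplified illustration, not the patented method: the rotation is held at the identity so only the translation is adjusted, the intrinsic matrix K is an assumed value, and a Gauss-Newton step with a numeric Jacobian plays the role of the pose adjustment; the loop stops when the step size (a proxy for the adjusted feature deviation ceasing to shrink) falls below a tolerance.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])        # assumed pinhole intrinsics

def project(X, t):
    # Project 3D points under translation t (rotation fixed at identity
    # to keep the sketch short; full refinement would adjust R as well).
    cam = X + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:]

def refine_translation(X, observed, t_init, iters=20, eps=1e-6):
    """Adjust the pose until the reprojection deviation is minimized,
    via Gauss-Newton with a forward-difference Jacobian."""
    t = np.asarray(t_init, dtype=float)
    for _ in range(iters):
        r = (project(X, t) - observed).ravel()   # adjusted feature deviation
        J = np.empty((r.size, 3))
        for k in range(3):                       # numeric Jacobian w.r.t. t
            dt = np.zeros(3)
            dt[k] = eps
            J[:, k] = ((project(X, t + dt) - observed).ravel() - r) / eps
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        t = t + step
        if np.linalg.norm(step) < 1e-9:          # deviation minimized: stop
            break
    return t

# Synthetic check: recover a known translation from its own projections.
X = np.array([[x * 0.2, y * 0.2, 0.0] for y in range(2) for x in range(3)])
t_true = np.array([0.1, -0.05, 1.2])
observed = project(X, t_true)
t_hat = refine_translation(X, observed, t_init=[0.0, 0.0, 1.0])
```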
Optionally, performing model fitting according to the spatial feature points to obtain the filter point fitting model includes: randomly selecting samples from the spatial feature points to obtain spatial feature point samples; and performing model fitting according to the spatial feature point samples to obtain the filter point fitting model.
Correspondingly, according to the deviation between the spatial feature points and the filter point fitting model, carrying out outlier filtering on the spatial feature points to obtain filtered spatial feature points, wherein the method comprises the following steps: and calculating the position deviation between all the spatial feature points and the filter point fitting model, and determining the filtered spatial feature points by using the spatial feature points with the position deviation smaller than the set threshold value.
Optionally, samples may be randomly selected multiple times and multiple model fits performed; the target filter point fitting model that retains the most filtered spatial feature points is then selected from the resulting filter point fitting models, and its filtered spatial feature points are determined, giving the processing higher precision. Optionally, the process of obtaining the spatial feature differences and the corrected pose may be performed based on the RANSAC algorithm.
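A minimal RANSAC-style sketch of the sample, fit, and filter loop, under the simplifying assumption that the filter point fitting model is a plane through the 3D spatial feature points; the tolerance and iteration count are illustrative, not values from the specification.

```python
import numpy as np

def ransac_filter(points, n_iters=200, tol=0.05, seed=0):
    """RANSAC-style outlier filtering of 3D spatial feature points
    against a plane fitted from random 3-point samples."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_mask = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        # Plane normal from two edge vectors of the sampled triplet.
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                    # degenerate (collinear) sample
            continue
        normal /= norm
        # Point-to-plane distances; inliers fall within the tolerance.
        dist = np.abs((pts - sample[0]) @ normal)
        mask = dist < tol
        if mask.sum() > best_mask.sum():   # keep the model with most inliers
            best_mask = mask
    return pts[best_mask]

# 20 points on the ground plane z = 0 plus 3 gross outliers at z = 5:
# the filtered set should keep only the planar points.
inliers = [[i, j, 0.0] for i in range(4) for j in range(5)]
outliers = [[1.0, 1.0, 5.0], [2.0, 3.0, 5.0], [0.0, 4.0, 5.0]]
filtered = ransac_filter(inliers + outliers)
```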
In one possible embodiment, the adjusting the initial pose according to the spatial feature difference to obtain the adjusted pose includes: and inputting the space characteristic difference into a solvePnP function, and adjusting the initial pose through the solvePnP function to obtain the adjusted pose.
Optionally, determining the corrected pose according to the adjusted pose includes: taking the adjusted pose as the corrected pose; or mapping the adjusted pose to obtain the corrected pose.
In this embodiment, the filter point fitting model is obtained by performing model fitting on the spatial feature points and is then used for filtering, so the filtered spatial feature points are more robust, the correction of the initial pose is less susceptible to interference from the external environment, and the correction accuracy is higher. On this basis, the initial pose is adjusted according to the spatial feature difference until the adjusted feature deviation of the adjusted pose meets the minimization condition; since the adjusted feature deviation is then relatively small, the corrected pose has higher accuracy.
In one embodiment, before the image feature points are spatially converted according to the initial pose of the camera to obtain the spatial feature points of the vehicle, the method further includes: performing image acquisition and calibration point detection on a calibration plate through the camera of the vehicle to obtain image calibration points; acquiring the position and the size of the calibration plate, and determining the spatial calibration points corresponding to the image calibration points based on that position and size; and determining the initial pose according to the correspondence between the image calibration points and the spatial calibration points.
The calibration plate is a calibration component containing calibration points. Alternatively, the calibration plate is a checkerboard of known size, on which calibration points of known positions are arranged, and by taking images of the calibration plate at different angles and positions, corner points or other types of feature points on the calibration plate can be extracted from the images.
As shown in fig. 4, the image acquired by the camera includes the calibration plate, the calibration points on the calibration plate, and the connecting lines between the calibration points. Specifically, the calibration plate with 2x3 dots placed on the ground is a halcon calibration plate. An image recognition program detects the centres of the black circular spots, associates the image coordinates of the circle centres with their spatial coordinates through the known size information of the calibration plate, and solves via PnP to obtain the initial extrinsic parameters of the camera, namely the initial pose. The application scene of this process is shown in fig. 5.
Optionally, everything may appear settled once the camera finishes the initial calibration, but in practical use, even with the best fastening measures, small displacements or rotations of the camera cannot be avoided, so deviations appear in the estimation of target positions: an error of a hair's breadth can lead to a deviation of a thousand miles. In a vehicle-mounted environment, a common solution is to locate external objects of interest with unique characteristics, such as the vanishing point of the horizon or relatively regular lane lines, by means of dedicated object detection and/or image recognition, and to continuously correct the pose of the camera according to the characteristics of those objects.
The image calibration points are the feature points of the calibration plate in the image. Since the position and size of the calibration plate are known, and the positions of the calibration points inside the calibration plate are also known, the spatial calibration point corresponding to each image calibration point can be determined directly based on that position and size.
For each corner point on the calibration plate, its coordinates in a three-dimensional coordinate system are set. In general, this can be accomplished by assuming that the calibration plate lies on a plane, selecting one fixed spatial point on the plane as the origin (0, 0, 0), and then setting the spatial calibration points of the other image calibration points using the size of the calibration plate. Meanwhile, image acquisition and corner detection are performed on the calibration plate, so that the image calibration points are obtained as distinct two-dimensional coordinates.
In an alternative embodiment, determining the initial pose according to the correspondence between the image calibration points and the spatial calibration points includes: substituting the image calibration points and the space calibration points with corresponding relations into a PnP solving method to solve the initial pose of the camera, for example, solving a 3D to 2D rotation matrix and translation vectors.
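The correspondence that the PnP solve inverts can be illustrated with the forward pinhole model u ~ K(RX + t). The intrinsics, grid spacing, and pose below are assumed values for illustration only; in practice a routine such as OpenCV's `cv2.solvePnP` recovers R and t from exactly these 2D-3D pairs.

```python
import numpy as np

# Assumed intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points_3d, R, t):
    # Pinhole model u ~ K (R X + t): world point -> pixel coordinates.
    cam = points_3d @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:]

# A 2x3 grid of calibration points on the ground plane (z = 0), spaced
# 0.1 m apart, loosely mirroring the dot layout described in the text.
plate = np.array([[x * 0.1, y * 0.1, 0.0] for y in range(2) for x in range(3)])
R0 = np.eye(3)                       # camera axes aligned with the world
t0 = np.array([0.0, 0.0, 1.0])       # plate one metre in front of the camera
image_points = project(plate, R0, t0)
```

The pairs `(plate[i], image_points[i])` are exactly the space calibration point / image calibration point correspondences that the initial pose is solved from.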
In the embodiment, the camera of the vehicle is used for carrying out image acquisition and calibration point detection on the calibration plate to obtain an image calibration point, so that the coordinate point of the image hierarchy can be directly determined; the position and the size of the calibration plate are obtained, the space calibration point is determined based on the position and the size of the calibration plate, and the coordinate point of the space hierarchy can be directly determined; and then determining the initial pose according to the corresponding relation between the image calibration point and the space calibration point. Thus, the initial pose accuracy is relatively good.
In a specific embodiment, first, the initial pose of the camera is calibrated using a calibration plate. Next, feature points of a delimited area (the body of the own vehicle) are detected using a corner detection technique; typically, the pixel positions of the image feature points, located where the X marks in fig. 6 are, may be extracted with a Harris or Shi-Tomasi corner detector. The pixel locations of the image feature points may be 2D coordinates, which characterize the original image feature points. Finally, according to the initial pose, the 3D coordinates of the pixel positions of the feature points of the body of the vehicle are back-calculated; these 3D coordinates characterize the initial spatial feature points and are recorded in a file as a 2D+3D coordinate set X corresponding to the initial pose, so that they are not lost on power-off.
Illustratively, calibrating the camera pose using a calibration plate refers to: first, detecting the circle centres of the circular blocks of the 2x3 calibration plate using a spot (blob) detection technique from computer vision; then, determining the spatial 3D coordinates (spatial three-dimensional coordinates) of the circular blocks of the calibration plate according to the placement position and the size of the calibration plate; and finally, solving the camera pose (the 3D-to-2D rotation matrix R and translation vector t) using the classical PnP solving method of computer vision, where 2D denotes the two-dimensional coordinates of the image plane.
When the pose of the camera is slightly offset, this reflects a situation in which the camera and the motion of the vehicle body are not synchronized; the initial pose is then corrected according to steps 202, 204, 206 and 208 and the corresponding embodiments to obtain the corrected pose, supporting subsequent applications such as target ranging. That the camera pose is slightly offset means that the image feature difference meets the pose adjustment condition, for example that the image feature difference, represented as a pixel distance difference, is too large.
Specifically, the initial pose, the corrected pose, or the current pose may be characterized in vector, matrix, or dataset form. Optionally, any pose can be characterized as: R = [[a11, a12, a13], [a21, a22, a23], [a31, a32, a33]]; t = [t1, t2, t3]; where R is the rotation matrix and t is the translation vector. (a11, a12, a13), the three elements of the first row, represent the new direction of the X-axis after rotation; (a21, a22, a23), the three elements of the second row, represent the new direction of the Y-axis after rotation; (a31, a32, a33), the three elements of the third row, represent the new direction of the Z-axis after rotation; and the three elements of t represent the translation along the X, Y and Z axes, respectively.
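A small numpy illustration of a pose in this R/t layout, using an assumed 90-degree rotation about the Z axis; applying the pose maps a world point into the camera frame.

```python
import numpy as np

# A 90-degree rotation about the Z axis plus a translation, in the
# R (3x3 rotation matrix) / t (translation vector) layout above.
theta = np.deg2rad(90.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 2.0, 3.0])

# Applying the pose: a point on the world X axis is rotated onto the
# Y axis and then translated by t.
X_world = np.array([1.0, 0.0, 0.0])
X_cam = R @ X_world + t
```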
Optionally, an image feature point and the spatial feature point converted from it can be characterized as: Pn = [[un, vn], [xn, yn, zn]]; where n is the index, [un, vn] is the image feature point and [xn, yn, zn] is the spatial feature point. It is understood that the initial image feature points and the image feature points may be characterized in the same way, as may the initial spatial feature points and the spatial feature points.
Based on the above, the invention exploits the special application environment in which an electronic rearview mirror can observe the body of the own vehicle. With this method, after an initial calibration with the calibration plate, the calibration plate can be dispensed with, and the position and attitude of the camera are accurately computed in real time using the body information of the vehicle, effectively guaranteeing the computation precision of subsequent applications. In scenes where an intelligent rearview mirror or similar device can capture the outline of the body of the vehicle, the invention can support automatic calibration of the camera pose once the model of the vehicle is input, without any offline initialization calibration using a calibration plate.
Specifically, various image feature point detection algorithms can detect the feature point information of the vehicle body to obtain the image feature points, ensuring correction efficiency. The algorithm that back-calculates the spatial positions of the vehicle-body feature points from the corresponding calibration parameters has wide applicability and high efficiency, as does the algorithm that forward-calculates the camera parameters from the spatial positions of the vehicle-body feature points. Dependence on the external environment is low: test accuracy is unaffected by changes in weather, illumination, lane line type, lane line width, and other external conditions.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in their order of execution and may be performed in other orders. Moreover, at least some of the steps in those flowcharts may include several sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, an embodiment of the present application further provides a camera pose determining apparatus of a vehicle for implementing the above camera pose determining method of a vehicle. The implementation of the solution provided by the apparatus is similar to that described in the method above, so for specific limitations in the one or more embodiments of the camera pose determining apparatus of a vehicle provided below, reference may be made to the limitations of the camera pose determining method of a vehicle above, which are not repeated here.
In one embodiment, as shown in fig. 7, there is provided a camera pose determining apparatus of a vehicle, including:
an image acquisition module 702, configured to acquire a body image of a vehicle through a camera carried by the vehicle;
an image detection module 704, configured to detect an image feature point of the vehicle in the vehicle body image;
the space conversion module 706 is configured to spatially convert the image feature points according to the initial pose of the camera, so as to obtain spatial feature points of the vehicle;
the pose determining module 708 is configured to correct the initial pose based on the feature deviation of the spatial feature point under the initial pose, and obtain a corrected pose.
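The flow through these four modules can be sketched as a projection round trip: an image feature point, combined with the current camera pose and the fact that body feature points lie on a known body surface, determines a spatial feature point. The sketch below assumes a pinhole model with illustrative intrinsics and a body plane at Z = 0; none of the numeric values come from the patent.

```python
import numpy as np

# Minimal sketch of the module pipeline: an image feature point is converted,
# under the current camera pose, into a spatial feature point on the (known)
# vehicle body surface. Intrinsics K, the pose, and the body plane Z = 0 are
# illustrative assumptions, not values from the patent.

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(K, R, t, X):
    """Project a 3-D body point X into pixel coordinates under extrinsics (R, t)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def back_project_to_plane(K, R, t, uv, plane_z):
    """Spatial conversion: intersect the viewing ray of pixel uv with the
    known body plane Z = plane_z, expressed in body coordinates."""
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    origin = -R.T @ t              # camera centre in body coordinates
    direction = R.T @ ray_cam      # ray direction in body coordinates
    s = (plane_z - origin[2]) / direction[2]
    return origin + s * direction

R = np.eye(3)                        # assumed camera orientation
t = np.array([0.0, 0.0, 2.0])        # assumed camera position
X_body = np.array([0.5, -0.3, 0.0])  # a body feature point on the plane Z = 0
uv = project(K, R, t, X_body)        # the "image feature point"
X_rec = back_project_to_plane(K, R, t, uv, plane_z=0.0)  # its spatial feature point
```

When the assumed pose matches the true pose, the round trip recovers the body point exactly; under a wrong pose the recovered spatial point deviates from the known body geometry, and that deviation is what drives the correction.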
In one embodiment, the image detection module 704 is configured to:
detecting corner points of a preset area of the vehicle body image to obtain corner points of the vehicle;
wherein the predetermined area comprises at least a partial contour of the vehicle.
In one embodiment, the image detection module 704 is configured to:
determining a pixel gradient in the preset region;
and determining the change value of the pixel gradient in the subarea of the preset area, and detecting the pixel points meeting the corner response condition according to the change value to obtain the corner point of the vehicle.
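The gradient-and-response scheme described here matches the structure of Harris-style corner detection. The sketch below is a minimal illustration of that idea, not the patent's exact detector; the 3x3 sub-region, the constant k, and the response threshold are assumptions.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Corner response from pixel gradients: structure-tensor entries are
    summed over a 3x3 sub-region, then det - k * trace^2 is evaluated."""
    gy, gx = np.gradient(img.astype(float))

    def box3(a):
        # 3x3 box sum (the "sub-region" accumulation), zero-padded at borders
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    sxx, syy, sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

# A synthetic L-shaped bright patch: only pixels near its inner corner should
# satisfy the corner response condition (threshold 0.1 is an assumed value).
img = np.zeros((9, 9))
img[4:, 4:] = 1.0
corners = np.argwhere(harris_response(img) > 0.1)
```

Along a pure edge one eigenvalue of the structure tensor is near zero, so the response is negative; only where the gradient changes in both directions within the sub-region does the response exceed the threshold.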
In one embodiment, the pose determination module 708 is configured to:
detecting an image feature difference between the image feature point and an initial image feature point; the initial pose corresponds to the initial image feature point;
if the image characteristic difference accords with the pose adjustment condition, executing the initial pose correction step;
and if the image characteristic difference does not meet the pose adjustment condition, determining the current pose of the camera based on the initial pose.
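This gating can be expressed as a simple drift check: re-estimate the pose only when the image feature points have moved by more than some tolerance relative to the points recorded at the initial pose. The mean-pixel-drift metric and the 2-pixel threshold below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def should_correct(curr_pts, init_pts, threshold_px=2.0):
    """Pose-adjustment condition: mean pixel drift of the current image
    feature points relative to the initial image feature points
    (the threshold value is an assumed example)."""
    drift = np.linalg.norm(curr_pts - init_pts, axis=1).mean()
    return drift > threshold_px

init_pts = np.array([[100.0, 50.0], [220.0, 80.0]])
keep_initial = not should_correct(init_pts + 0.5, init_pts)            # small jitter
needs_update = should_correct(init_pts + np.array([5.0, 0.0]), init_pts)  # real drift
```

Skipping the correction when the drift is below the tolerance avoids re-running the optimisation on every frame.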
In one embodiment, the pose determination module 708 is configured to:
determining a spatial feature difference between an initial spatial feature point and the spatial feature point; the initial pose corresponds to the initial spatial feature point;
and correcting the initial pose according to the space characteristic difference to obtain the corrected pose of the camera.
In one embodiment, the pose determination module 708 is configured to:
performing model fitting according to the space feature points to obtain a filter point fitting model;
according to the deviation between the spatial feature points and the filter point fitting model, carrying out outlier filtering on the spatial feature points to obtain filtered spatial feature points;
determining the spatial feature difference between the filtered spatial feature points and the initial spatial feature points;
and adjusting the initial pose according to the space characteristic difference to obtain an adjusted pose and a corresponding adjusted characteristic deviation, and determining a corrected pose according to the adjusted pose until the adjusted characteristic deviation meets a deviation minimization condition.
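The fit-filter-adjust loop described by this module can be sketched in one dimension: fit a model to the spatial feature points, discard points that deviate too far from the fit, then step a pose parameter until the remaining feature deviation meets a minimisation condition. The line model, the 1-D lateral-offset parameterisation, and all thresholds below are assumptions made for the sketch.

```python
import numpy as np

true_offset = 0.30                        # metres the camera has drifted laterally
xs = np.linspace(0.0, 4.0, 20)
ys = np.full_like(xs, 1.0) + true_offset  # observed body side line, shifted
ys[3] += 2.0                              # one gross outlier (bad feature point)

# model fitting: a straight line through the spatial feature points
slope, intercept = np.polyfit(xs, ys, 1)
residual = np.abs(ys - (slope * xs + intercept))
inliers = residual < 0.5                  # outlier filtering by deviation from fit

# iterative adjustment against the reference body line y = 1.0
offset = 0.0
for _ in range(50):
    deviation = np.mean(ys[inliers] - offset - 1.0)
    if abs(deviation) < 1e-6:             # deviation minimisation condition
        break
    offset += 0.5 * deviation             # step the pose parameter toward the data
```

Filtering before the adjustment matters: the single outlier would otherwise bias the mean deviation and hence the corrected pose.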
In one embodiment, the pose determination module 708 is configured to:
before the image feature points are spatially converted according to the initial pose of the camera to obtain the spatial feature points of the vehicle, perform image acquisition and calibration point detection on a calibration plate through the camera of the vehicle to obtain image calibration points;
acquiring the position and the size of the calibration plate, and determining a space calibration point corresponding to the image calibration point based on the position and the size of the calibration plate;
and determining the initial pose according to the corresponding relation between the image calibration point and the space calibration point.
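One common way to realise this correspondence-based initialisation for a planar board is Zhang-style homography decomposition: the detected image calibration points and their known spatial positions on the board (taken as the Z = 0 plane) determine a plane-to-image homography, from which the initial pose follows. The patent does not name this specific method; the intrinsics, board layout, and ground-truth pose below are illustrative.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])          # assumed pinhole intrinsics

def homography_dlt(src, dst):
    """Direct linear transform for the board-plane-to-image homography."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def pose_from_homography(K, H):
    """Initial pose from H for a Z = 0 board plane: H ~ K [r1 r2 t]."""
    m = np.linalg.inv(K) @ H
    s = 1.0 / np.linalg.norm(m[:, 0])
    r1, r2, t = s * m[:, 0], s * m[:, 1], s * m[:, 2]
    return np.column_stack([r1, r2, np.cross(r1, r2)]), t

# simulate detection: project a 4x4 grid of board points with a known pose
board = np.array([[i * 0.1, j * 0.1] for i in range(4) for j in range(4)])
R_true, t_true = np.eye(3), np.array([0.05, -0.02, 1.5])
pix = []
for x, y in board:
    p = K @ (R_true @ np.array([x, y, 0.0]) + t_true)
    pix.append(p[:2] / p[2])

H = homography_dlt(board, pix)
R_est, t_est = pose_from_homography(K, H)
```

The board's position and size fix the spatial calibration points (the grid spacing here plays the role of the board size), and the recovered (R, t) serves as the initial pose that the online body-feature correction then refines.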
The respective modules in the camera pose determining apparatus of the vehicle described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in the form of hardware, or may be stored in a memory of the computer device in the form of software, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in fig. 8. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected via a system bus, and the communication interface, the display unit, and the input device are connected to the system bus via the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be implemented through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a method of determining a camera pose of a vehicle. The display unit of the computer device is used to form a visual picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display or an electronic ink display. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use, and processing of the related data need to comply with the related laws and regulations.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the steps of the method embodiments described above. Any reference to the memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above embodiments represent only a few implementations of the present application, and the descriptions thereof are specific and detailed, but they should not be construed as limiting the scope of the patent application. It should be noted that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method for determining a camera pose of a vehicle, the method comprising:
acquiring a body image of a vehicle through a camera carried by the vehicle;
detecting image feature points of the vehicle in the vehicle body image;
according to the initial pose of the camera, performing space conversion on the image feature points to obtain the space feature points of the vehicle; the space feature points are feature points corresponding to the image feature points in the space dimension of the vehicle;
and correcting the initial pose based on the characteristic deviation of the spatial characteristic points under the initial pose to obtain a corrected pose.
2. The method according to claim 1, wherein the detecting image feature points of the vehicle in the vehicle body image includes:
detecting corner points of a preset area of the vehicle body image to obtain corner points of the vehicle;
wherein the predetermined area comprises at least a partial contour of the vehicle.
3. The method according to claim 2, wherein the performing corner detection on the preset area of the vehicle body image to obtain the corner of the vehicle includes:
determining a pixel gradient in the preset region;
and determining the change value of the pixel gradient in the subarea of the preset area, and detecting the pixel points meeting the corner response condition according to the change value to obtain the corner point of the vehicle.
4. The method according to claim 1, wherein the method further comprises:
detecting an image feature difference between the image feature point and an initial image feature point; the initial pose corresponds to the initial image feature point;
if the image characteristic difference accords with the pose adjustment condition, executing the initial pose correction step;
and if the image characteristic difference does not meet the pose adjustment condition, determining the current pose of the camera based on the initial pose.
5. The method of claim 1, wherein the correcting the initial pose based on the feature bias of the spatial feature points under the initial pose to obtain a corrected pose comprises:
determining a spatial feature difference between an initial spatial feature point and the spatial feature point; the initial pose corresponds to the initial spatial feature point;
and correcting the initial pose according to the space characteristic difference to obtain the corrected pose of the camera.
6. The method of claim 5, wherein said determining a spatial feature difference between an initial spatial feature point and said spatial feature point comprises:
performing model fitting according to the space feature points to obtain a filter point fitting model;
according to the deviation between the spatial feature points and the filter point fitting model, carrying out outlier filtering on the spatial feature points to obtain filtered spatial feature points;
determining the spatial feature difference between the filtered spatial feature points and the initial spatial feature points;
the correcting the initial pose according to the space feature difference to obtain the corrected pose of the camera comprises the following steps:
and adjusting the initial pose according to the space characteristic difference to obtain an adjusted pose and a corresponding adjusted characteristic deviation, and determining a corrected pose according to the adjusted pose until the adjusted characteristic deviation meets a deviation minimization condition.
7. The method of claim 1, wherein before spatially transforming the image feature points according to the initial pose of the camera to obtain the spatial feature points of the vehicle, the method further comprises:
performing image acquisition and calibration point detection on a calibration plate through the camera of the vehicle to obtain an image calibration point;
acquiring the position and the size of the calibration plate, and determining a space calibration point corresponding to the image calibration point based on the position and the size of the calibration plate;
and determining the initial pose according to the corresponding relation between the image calibration point and the space calibration point.
8. A camera pose determination apparatus of a vehicle, characterized in that the apparatus comprises:
The image acquisition module is used for acquiring a body image of the vehicle through a camera carried by the vehicle;
an image detection module for detecting image feature points of the vehicle in the vehicle body image;
the space conversion module is used for performing space conversion on the image characteristic points according to the initial pose of the camera to obtain the space characteristic points of the vehicle; the space feature points are feature points corresponding to the image feature points in the space dimension of the vehicle;
and the pose determining module is used for correcting the initial pose based on the characteristic deviation of the spatial characteristic points under the initial pose to obtain the corrected pose.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202410016859.5A 2024-01-05 2024-01-05 Method and device for determining camera pose of vehicle, computer equipment and storage medium Active CN117523010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410016859.5A CN117523010B (en) 2024-01-05 2024-01-05 Method and device for determining camera pose of vehicle, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN117523010A CN117523010A (en) 2024-02-06
CN117523010B true CN117523010B (en) 2024-04-09

Family

ID=89757032


Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608693A (en) * 2015-12-18 2016-05-25 上海欧菲智能车联科技有限公司 Vehicle-mounted panoramic around view calibration system and method
CN110675455A (en) * 2019-08-30 2020-01-10 的卢技术有限公司 Self-calibration method and system for car body all-around camera based on natural scene
CN111311632A (en) * 2018-12-11 2020-06-19 深圳市优必选科技有限公司 Object pose tracking method, device and equipment
CN111524192A (en) * 2020-04-20 2020-08-11 北京百度网讯科技有限公司 Calibration method, device and system for external parameters of vehicle-mounted camera and storage medium
CN112348837A (en) * 2020-11-10 2021-02-09 中国兵器装备集团自动化研究所 Object edge detection method and system based on point-line detection fusion
CN113554711A (en) * 2020-04-26 2021-10-26 上海欧菲智能车联科技有限公司 Camera online calibration method and device, computer equipment and storage medium
CN113592947A (en) * 2021-07-30 2021-11-02 北京理工大学 Visual odometer implementation method of semi-direct method
CN113643356A (en) * 2020-04-27 2021-11-12 北京达佳互联信息技术有限公司 Camera pose determination method, camera pose determination device, virtual object display method, virtual object display device and electronic equipment
CN114494316A (en) * 2022-01-28 2022-05-13 瑞芯微电子股份有限公司 Corner marking method, parameter calibration method, medium, and electronic device
CN114612567A (en) * 2020-12-08 2022-06-10 北京极智嘉科技股份有限公司 Camera calibration method and device, computer equipment and computer storage medium
CN114648639A (en) * 2022-05-19 2022-06-21 魔视智能科技(武汉)有限公司 Target vehicle detection method, system and device
CN115880372A (en) * 2022-12-12 2023-03-31 武汉港迪智能技术有限公司 Unified calibration method and system for external hub positioning camera of automatic crane

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8831290B2 (en) * 2012-08-01 2014-09-09 Mitsubishi Electric Research Laboratories, Inc. Method and system for determining poses of vehicle-mounted cameras for in-road obstacle detection


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Online Calibration of LiDAR and Camera for Autonomous Vehicles Based on a Visual Marker Board; Wu Qiong et al.; Automotive Technology; 2020-12-31 (No. 04); 40-44 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant