WO2022134518A1 - Camera device calibration method, apparatus, electronic device, and storage medium - Google Patents

Camera device calibration method, apparatus, electronic device, and storage medium

Info

Publication number
WO2022134518A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate information
target reference
information
scene image
camera device
Prior art date
Application number
PCT/CN2021/102795
Other languages
English (en)
French (fr)
Inventor
马涛
闫国行
李怡康
Original Assignee
上海商汤临港智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤临港智能科技有限公司 filed Critical 上海商汤临港智能科技有限公司
Publication of WO2022134518A1
Priority to US17/873,722 (published as US20220366606A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30248: Vehicle exterior or interior
    • G06T2207/30252: Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256: Lane; Road marking

Definitions

  • the present disclosure relates to the technical field of computer vision, and in particular, to a camera equipment calibration method, device, electronic device, and storage medium.
  • in ADAS (Advanced Driving Assistance System) applications, the camera equipment can be installed by the user, and the installation position and installation angle can be set according to user needs.
  • the present disclosure provides at least a method, an apparatus, an electronic device, and a storage medium for calibrating an imaging device.
  • the present disclosure provides a method for calibrating a camera device, including: acquiring a scene image of a preset scene captured by a camera device disposed on a traveling device; determining, based on the scene image, first coordinate information in the pixel coordinate system and second coordinate information in the world coordinate system of two line segments and of multiple target reference points on each line segment; and determining, based on the first coordinate information and the second coordinate information, a homography matrix corresponding to the camera device.
  • the first coordinate information, in the pixel coordinate system, of the multiple target reference points on each of the at least two line segments and their second coordinate information in the world coordinate system can be used to determine the homography matrix corresponding to the camera device, which realizes automatic calibration of the camera device. Compared with manual calibration, both the efficiency and the accuracy of the calibration are improved.
  • before acquiring the scene image of the preset scene captured by the camera device disposed on the traveling device, the method further includes: adjusting the pose of the camera device so that the skyline included in the scene image collected by the adjusted camera device is located between a set first reference line and a set second reference line; the first reference line and the second reference line are located on a screen image of the camera device when the scene image is captured, and are parallel in the screen image.
  • by adjusting the pose of the camera device in this way before acquiring the scene image, the pitch angle corresponding to the camera device can be made close to 0°, avoiding the low accuracy of the generated second coordinate information of the target reference points that occurs when the pitch angle is large, thereby improving the accuracy of the homography matrix.
  • before acquiring the scene image of the preset scene captured by the camera device disposed on the traveling device, the method further includes: adjusting the pose of the camera device so that the skyline in the scene image collected by the adjusted camera device is parallel to or overlaps with a set third reference line.
  • by adjusting the pose of the camera device in this way, the roll angle of the adjusted camera device can be made close to 0°, avoiding the low accuracy of the generated second coordinate information of the target reference points that occurs when the roll angle is large, thereby improving the accuracy of the homography matrix.
  • before acquiring the scene image of the preset scene collected by the camera device disposed on the traveling device, the method further includes: adjusting the pose of the camera device so that the skyline included in the scene image captured by the adjusted camera device overlaps with a fourth reference line located between the first reference line and the second reference line; the fourth reference line is located on the screen image of the camera device when the scene image is captured, and is parallel to the first reference line and the second reference line on the screen image.
  • determining the two line segments and the multiple target reference points on each line segment includes: setting a target area on the scene image such that the real objects corresponding to the two parallel lines within the target area are close to the camera device, and selecting, from the target area, two line segments that overlap with the two parallel lines in the scene image.
  • with the multiple target reference points on the two line segments, the second coordinate information of the selected target reference points can be determined more accurately.
  • determining, based on the scene image, the first coordinate information of the two line segments and the multiple target reference points on each line segment in the pixel coordinate system and the second coordinate information in the world coordinate system includes:
  • determining the position coordinate information, in the pixel coordinate system, of the intersection point of the two lines corresponding to the two parallel lines, and determining the second coordinate information of the plurality of target reference points in the world coordinate system based on the position coordinate information of the intersection point in the pixel coordinate system and the first coordinate information of the plurality of target reference points.
  • determining the second coordinate information of the plurality of target reference points in the world coordinate system includes:
  • for each target reference point, determining the difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point; and determining the second coordinate information of the target reference point based on the difference information, the focal length information of the imaging device, and the predetermined installation height of the imaging device.
  • based on the calculated difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point, the focal length information of the imaging device, and the installation height of the imaging device, the second coordinate information of the target reference point can be accurately determined.
  • the difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point includes an abscissa difference and an ordinate difference; determining the second coordinate information of the target reference point based on the difference information, the focal length information of the imaging device, and the predetermined installation height of the imaging device includes:
  • determining the longitudinal coordinate value in the second coordinate information of the target reference point based on the ordinate difference, the longitudinal focal length in the focal length information of the imaging device, and the installation height;
  • determining the lateral coordinate value in the second coordinate information of the target reference point based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the imaging device.
  • determining the lateral coordinate value in the second coordinate information of the target reference point includes: determining the lateral distance between the target reference point and the imaging device based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the imaging device; and determining the lateral coordinate value in the second coordinate information of the target reference point based on the determined lateral distance between the imaging device and the center position of the traveling device and the lateral distance between the target reference point and the imaging device.
  • the method further includes: acquiring a real-time image captured by the camera device, detecting a target object in the real-time image, determining world coordinate information of the target object based on the homography matrix and the pixel coordinate information of the target object, and controlling the traveling device based on the world coordinate information of the target object.
  • the determined homography matrix and the pixel coordinate information of the target object included in the detected real-time image can be used to more accurately determine the world coordinate information of the target object in the world coordinate system, and then realize precise control of the traveling device.
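As a minimal sketch of this step, the calibrated homography can be applied to a detected target's pixel coordinates to recover its ground-plane world coordinates. The function name and the example matrix below are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

def pixel_to_world(H, u, v):
    """Map pixel coordinates (u, v) to world coordinates on the ground
    plane using a calibrated 3x3 homography matrix H."""
    p = H @ np.array([u, v, 1.0])     # homogeneous pixel coordinates
    return float(p[0] / p[2]), float(p[1] / p[2])  # perspective division

# Hypothetical homography: pure scaling plus translation.
H = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, -1.0],
              [0.0, 0.0, 1.0]])
print(pixel_to_world(H, 3.0, 4.0))  # (7.0, 7.0)
```

In the general case the last row of H is not (0, 0, 1), which is why the division by the third homogeneous component is needed.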
  • the present disclosure provides a camera equipment calibration device, including:
  • an acquisition module configured to acquire a scene image of a preset scene collected by a camera device disposed on the traveling device, wherein the preset scene includes at least two parallel lines, the traveling device is located between two adjacent parallel lines, and the sides of the traveling device are parallel to the two parallel lines;
  • a first determination module configured to determine, based on the scene image, the first coordinate information of two line segments and of multiple target reference points on each line segment in the pixel coordinate system and the second coordinate information in the world coordinate system, wherein the two line segments respectively overlap with the two lines corresponding to the two parallel lines in the scene image;
  • a second determining module configured to determine a homography matrix corresponding to the imaging device based on the first coordinate information and the second coordinate information.
  • the present disclosure provides an electronic device, comprising a processor, a memory, and a bus; the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate through the bus, and when the machine-readable instructions are executed by the processor, the steps of the camera device calibration method according to the first aspect or any one of its implementation manners are executed.
  • the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the camera device calibration method according to the first aspect or any one of the above embodiments are executed.
  • FIG. 1 shows a schematic flowchart of a method for calibrating a camera device provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of a scene image in a method for calibrating a camera device provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic flowchart of determining, based on the first coordinate information of the plurality of target reference points, the second coordinate information of the plurality of target reference points in the world coordinate system, in a method for calibrating a camera device provided by an embodiment of the present disclosure;
  • FIG. 4 shows a schematic structural diagram of an apparatus for calibrating a camera device provided by an embodiment of the present disclosure
  • FIG. 5 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the user can determine the homography matrix of the installed camera equipment by means of manual calibration. For example, a user can place a cone target on the ground, determine the position of the cone target, and determine the homography matrix of the installed camera device based on that position.
  • in manual calibration, the operation process is cumbersome, and a large error arises when determining the position of the cone target, so the error of the determined homography matrix is large, which causes the ADAS results to be inaccurate.
  • the embodiments of the present disclosure provide a camera equipment calibration method, device, electronic device, and storage medium.
  • the execution subject of the camera device calibration method provided by the embodiments of the present disclosure is generally a computer device with a certain computing capability, for example, a terminal device or another processing device; the terminal device may be a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, etc.
  • the camera device calibration method may be implemented by the processor calling computer-readable instructions stored in the memory.
  • FIG. 1 is a schematic flowchart of a method for calibrating an imaging device provided by an embodiment of the present disclosure
  • the method includes S101-S103, wherein:
  • S101: Acquire a scene image of a preset scene collected by a camera device disposed on a traveling device; the preset scene includes at least two parallel lines, the traveling device is located between two adjacent parallel lines, and the side surface of the traveling device is parallel to the two parallel lines; here, parallelism refers to the lines in the actual preset scene.
  • due to imaging perspective, the two parallel lines are not parallel in the scene image: the corresponding lines have an angle between them, i.e., they meet at a certain point.
  • S102: Determine, based on the scene image, the first coordinate information of two line segments and of multiple target reference points on each line segment in the pixel coordinate system and the second coordinate information in the world coordinate system, wherein the two line segments respectively overlap with the two lines corresponding to the two parallel lines in the scene image.
  • S103: Determine a homography matrix corresponding to the imaging device based on the first coordinate information and the second coordinate information.
  • the homography matrix corresponding to the imaging device is determined, and the automatic calibration of the imaging device is realized.
  • the calibration efficiency and accuracy are improved.
  • the traveling device may be a motor vehicle, a non-motor vehicle, a robot, etc.
  • a camera device may be installed on the traveling device, and after the camera device is installed, a scene image of a preset scene captured by the camera device is acquired.
  • the preset scene includes at least two parallel lines, the traveling device is located between the two parallel lines, and the side surface of the traveling device is parallel to the two parallel lines where the traveling device is located.
  • the yaw angle corresponding to the camera device can be made close to 0° (that is, the difference between the yaw angle corresponding to the camera device and 0° is smaller than the set first difference value threshold), and then based on the collected scene image corresponding to the preset scene, the second coordinate information of the target reference point can be more accurately determined.
  • the two parallel lines may be road traffic markings set on the road; for example, each of the two parallel lines may be a solid white line, a dashed white line, a solid yellow line on the road, etc.; the two parallel lines can also be two parallel lines drawn for a parking space, etc.
  • the preset scene may be any scene with at least two road markings (ie, two parallel lines) and a visible skyline.
  • the preset scene may be a road scene or a parking lot scene.
  • before acquiring the scene image of the preset scene, the pose of the camera device can be adjusted in the following three ways, so that the scene image of the preset scene is collected by the adjusted camera device; determining the second coordinate information of the target reference points from such a scene image increases the accuracy of the determined second coordinate information.
  • Manner 1: Before acquiring the scene image of the preset scene captured by the camera device installed on the driving device, the method further includes: adjusting the pose of the camera device so that the skyline included in the scene image collected by the adjusted camera device is located between a set first reference line and a set second reference line; the first reference line and the second reference line are located on the screen image of the camera device when the scene image is captured, and the first reference line and the second reference line are parallel.
  • the pose of the camera device can be adjusted so that the skyline included in the scene image collected by the adjusted camera device is located between the set first reference line and second reference line, that is, the pitch angle corresponding to the adjusted camera device is close to 0° (the difference between the pitch angle corresponding to the camera device and 0° is less than the second difference threshold).
  • the positions of the first reference line and the second reference line on the screen image may be determined according to actual conditions.
  • Manner 2: Before acquiring the scene image of the preset scene captured by the camera device disposed on the driving device, the method further includes: adjusting the pose of the camera device so that the skyline in the scene image collected by the adjusted camera device is parallel to or overlaps with a set third reference line; the third reference line is located on the screen image of the camera device when the scene image is captured.
  • a third reference line can be set on the screen image displayed when the camera device collects the scene image, and the pose of the camera device can be adjusted so that the skyline in the scene image captured by the adjusted camera device is parallel to or overlaps with the set third reference line, that is, the roll angle of the camera device is close to 0° (the difference between the roll angle of the camera device and 0° is smaller than the set third difference threshold).
  • the position of the third reference line on the screen image may be determined according to the actual situation.
  • Manner 3: Before acquiring the scene image of the preset scene captured by the camera device set on the driving device, the method further includes: adjusting the pose of the camera device so that the skyline included in the scene image collected by the adjusted camera device overlaps with a set fourth reference line; the fourth reference line is located on the screen image of the camera device when the scene image is captured, lies between the first reference line and the second reference line, and is parallel to the first reference line and the second reference line.
  • a first reference line, a second reference line and a fourth reference line that are parallel to each other can be set on the screen image when the camera device captures the scene image, wherein the fourth reference line is located between the first reference line and the second reference line between.
  • the position information of the first reference line, the second reference line and the fourth reference line on the screen image can be set according to the actual situation.
  • the pose of the camera device can be adjusted so that the skyline included in the scene image collected by the adjusted camera device overlaps with the set fourth reference line, that is, the pitch angle and the roll angle of the camera device can both be made close to 0°.
  • the first coordinate information, in the pixel coordinate system, of the multiple target reference points on the scene image can be determined, and then, based on the first coordinate information of the multiple target reference points, their second coordinate information in the world coordinate system can be determined.
  • the world coordinate system can be selected as required; for example, it can be a coordinate system constructed with the center point of the traveling device as the origin, or with the center point of the top plane of the traveling device as the origin, etc.
  • determining two line segments and a plurality of target reference points on each line segment includes: determining, from the target area set on the scene image, two line segments that overlap with the two parallel lines in the scene image, and determining a plurality of target reference points on the two line segments.
  • the target area may be set as required; for example, the target area may be an image area located in the middle of the scene image, and/or an image area including the two parallel lines in the scene image. Since the second coordinate information of a pixel cannot be accurately determined when the real object indicated by that pixel is far from the camera device, the target area is chosen such that the distance between the real objects corresponding to the pixels in the target area and the camera device is less than a set distance threshold.
  • the target area can also be set automatically according to the size of the scene image; for example, a predetermined proportion of the lower part of the scene image is automatically set as the target area, and it is automatically determined whether the target area includes at least two road markings; if it does not, the target area can be expanded automatically, or a prompt indicates that the scene image is wrong and needs to be re-acquired.
  • two line segments overlapping with two parallel lines in the scene image may be determined from the target area set on the scene image, and four target reference points may be selected on the two line segments.
  • the figure includes a target area 21 , parallel lines 22 , and a plurality of target reference points 23 determined in the target area.
  • a target area can be set on the scene image such that the real objects corresponding to the two parallel lines within it are close to the camera device, and two line segments overlapping with the two parallel lines in the scene image are determined from the target area; the two line segments and the multiple target reference points on them then allow the second coordinate information of the selected target reference points to be determined more accurately.
  • the lengths of the two line segments are related to the size of the target area.
  • the lengths of the two line segments can be automatically determined according to the length of the target area.
  • the lengths of the two line segments can be automatically determined according to the lengths of the road markings detected in the target area.
  • the lengths of the two line segments can be set according to empirical values.
  • the two line segments can be automatically moved to overlap the detected parallel lines.
  • the two line segments can also be manually adjusted to overlap the parallel lines more closely.
  • determining, based on the scene image, the first coordinate information of the two line segments and the multiple target reference points on each line segment in the pixel coordinate system and the second coordinate information in the world coordinate system includes:
  • determining the first coordinate information of each selected target reference point in the pixel coordinate system corresponding to the scene image; then, based on the first coordinate information of the multiple target reference points, determining the fitting parameters of the lines on which the target reference points are located; and then using the determined fitting parameters of the two lines to determine the position coordinate information, in the pixel coordinate system, of the intersection point of the two lines corresponding to the two parallel lines in the scene image.
  • FIG. 2 also includes the two lines 22 corresponding to the two parallel lines in the scene image, namely a first line and a second line, and the intersection point 24 of the first line and the second line.
  • the number of selected target reference points 23 may be 4, the two target reference points 23 on the left are located on the first line, and the two target reference points 23 on the right are located on the second line.
  • the target reference points may be the end points of each line segment.
  • the first coordinate information of the first target reference point on the left is (x1, y1)
  • the first coordinate information of the second target reference point is (x2, y2)
  • the first coordinate information of the third target reference point on the right The coordinate information is (x3, y3)
  • the first coordinate information of the fourth target reference point is (x4, y4).
  • the first line equation (1) of the first line and the second line equation (2) of the second line can be used to determine the position coordinate information of the intersection point in the pixel coordinate system. Writing the first line as y = k1·x + b1, with slope k1 = (y2 - y1)/(x2 - x1) and intercept b1 = y1 - k1·x1, and the second line as y = k2·x + b2, with k2 = (y4 - y3)/(x4 - x3) and b2 = y3 - k2·x3:
  • the abscissa VPx in the position coordinate information of the determined intersection point in the pixel coordinate system is VPx = (b2 - b1)/(k1 - k2);
  • the ordinate VPy in the position coordinate information of the determined intersection point in the pixel coordinate system is VPy = k1·VPx + b1;
  • the position coordinate information (VPx, VPy) of the intersection point in the pixel coordinate system is thereby obtained.
  • more than two first target reference points may also be selected from the first line in FIG. 2, and a straight line may be fitted to the first coordinate information of the selected first target reference points in the pixel coordinate system to determine the first line equation corresponding to the first line (that is, the first fitting parameters corresponding to the first line); likewise, more than two second target reference points may be selected from the second line, and a straight line may be fitted to the first coordinate information of the selected second target reference points in the pixel coordinate system to determine the second line equation corresponding to the second line (that is, the second fitting parameters corresponding to the second line). The numbers of selected first target reference points and second target reference points can be set as required; for example, 4 first target reference points can be selected from the first line and 4 second target reference points from the second line.
  • the selected multiple first target reference points may be fitted by the least squares method to determine the first fitting parameters corresponding to the first line, that is, to obtain the first line equation corresponding to the first line; and the selected multiple second target reference points may be fitted by the least squares method to determine the second fitting parameters corresponding to the second line, that is, to obtain the second line equation corresponding to the second line.
  • the first line equation corresponding to the first line and the second line equation corresponding to the second line can then be used to determine the position coordinate information, in the pixel coordinate system, of the intersection point of the two lines (i.e., the first line and the second line) corresponding to the two parallel lines in the scene image.
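The fitting and intersection steps above can be sketched as follows. The least-squares fit uses `numpy.polyfit`, and the reference-point values in the usage example are synthetic assumptions, not from the disclosure:

```python
import numpy as np

def fit_line(points):
    """Least-squares fit of y = k*x + b through the pixel coordinates
    of the target reference points selected on one line."""
    xs, ys = np.asarray(points, dtype=float).T
    k, b = np.polyfit(xs, ys, 1)   # degree-1 polynomial fit
    return k, b

def intersection_point(first_line_points, second_line_points):
    """Position (VPx, VPy), in the pixel coordinate system, of the
    intersection of the two fitted lines."""
    k1, b1 = fit_line(first_line_points)
    k2, b2 = fit_line(second_line_points)
    vp_x = (b2 - b1) / (k1 - k2)
    vp_y = k1 * vp_x + b1
    return vp_x, vp_y

# Synthetic reference points on two lane lines that meet at (640, 360).
first = [(0, 680), (200, 580), (400, 480)]
second = [(880, 480), (1080, 580), (1280, 680)]
vp_x, vp_y = intersection_point(first, second)
```

With noisy detections, using more than two points per line makes the fitted slopes, and hence the intersection point, more stable.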
  • determining the second coordinate information of the plurality of target reference points based on the position coordinate information of the intersection point in the pixel coordinate system and the first coordinate information of the plurality of target reference points includes: for each target reference point, calculating the difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point; and determining the second coordinate information of the target reference point based on the difference information, the focal length information of the imaging device, and the predetermined installation height of the imaging device.
  • for each target reference point, first calculate the difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point; that is, subtract the abscissa of the intersection point from the abscissa of the target reference point to obtain the abscissa difference between the target reference point and the intersection point, and subtract the ordinate of the intersection point from the ordinate of the target reference point to obtain the ordinate difference between the target reference point and the intersection point.
  • the installation height of the camera equipment is the height distance between the camera equipment and the ground.
  • based on the calculated difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point, the focal length information of the imaging device, and the installation height of the imaging device, the second coordinate information of the target reference point can be accurately determined.
  • the difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point includes an abscissa difference and an ordinate difference; determining the second coordinate information of the target reference point based on the difference information, the focal length information of the imaging device, and the predetermined installation height of the imaging device includes:
  • the longitudinal distance between the target reference point and the imaging device can be determined according to the following formula (3):
  • DYi = fy × cameraH / abs(VPy - yi)  (3)
  • where DYi is the longitudinal distance between the i-th target reference point and the imaging device; abs(VPy - yi) is the absolute value of the ordinate difference corresponding to the i-th target reference point; fy is the longitudinal focal length; and cameraH is the installation height of the camera equipment.
  • the longitudinal coordinate value in the second coordinate information of the target reference point may be determined based on the longitudinal distance between the target reference point and the imaging device. For example, when the camera device is located on the longitudinal axis of the world coordinate system, the longitudinal distance between the determined target reference point and the camera device is the vertical coordinate value; there is a longitudinal distance between the camera device and the horizontal axis of the world coordinate system. , then the vertical distance in the second coordinate information of the target reference point can be determined based on the vertical distance between the camera device and the horizontal coordinate axis of the world coordinate system and the determined vertical distance between the target reference point and the camera device Coordinate value.
  • Considering that the origin of the constructed world coordinate system may or may not coincide with the installation position of the imaging device, the lateral distance between the target reference point and the imaging device can be determined first, and then the determined lateral distance between the target reference point and the imaging device and the lateral distance between the imaging device and the origin are used to determine the lateral coordinate value in the second coordinate information of the target reference point.
  • Determining the lateral coordinate value in the second coordinate information of the target reference point based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the imaging device includes: determining the lateral distance between the target reference point and the imaging device based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the imaging device; and determining the lateral coordinate value in the second coordinate information of the target reference point based on the determined lateral distance between the imaging device and the center position of the traveling device and the lateral distance between the target reference point and the imaging device.
  • the lateral distance between the target reference point and the imaging device can be determined according to the following formula (4):
  • DX_i = abs(VP_x − x_i) · DY_i / f_x    (4)
  • where DX_i is the lateral coordinate value of the i-th target reference point in the world coordinate system, abs(VP_x − x_i) is the absolute value of the abscissa difference corresponding to the i-th target reference point, f_x is the lateral focal length, and DY_i is the longitudinal coordinate value of the i-th target reference point in the world coordinate system.
  • After the lateral distance between the target reference point and the imaging device is determined, the lateral coordinate value in the second coordinate information of the target reference point can be determined by using the determined lateral distance between the target reference point and the imaging device and the lateral distance between the imaging device and the origin of the constructed world coordinate system. When the origin of the constructed world coordinate system is the center position of the traveling device, the lateral distance between the imaging device and the origin is the lateral distance between the imaging device and the center position of the traveling device.
  • Through the above process, the second coordinate information of the four target reference points shown in FIG. 2 can be obtained, that is, the second coordinate information (X1, Y1) of the first target reference point on the left, the second coordinate information (X2, Y2) of the second target reference point, the second coordinate information (X3, Y3) of the third target reference point on the right, and the second coordinate information (X4, Y4) of the fourth target reference point.
  • Considering that there is a lateral distance between the imaging device and the center position of the traveling device (i.e., the origin of the established world coordinate system), after the lateral distance between the target reference point and the imaging device is determined, the lateral coordinate value in the second coordinate information of the target reference point can be determined relatively accurately based on the determined lateral distance between the imaging device and the center position of the traveling device and the lateral distance between the target reference point and the imaging device.
  • the homography matrix corresponding to the imaging device may be determined based on the first coordinate information and the second coordinate information.
  • the homography matrix can be determined according to the following formula (5):
  • H = (A·Aᵀ)·(C·Aᵀ)⁻¹    (5)
  • where H is the homography matrix corresponding to the imaging device, C is the first matrix formed by the first coordinate information of the multiple target reference points, A is the second matrix formed by the second coordinate information of the multiple target reference points, and Aᵀ is the matrix transpose of A.
  • the method further includes: acquiring a real-time image captured by the camera device during the movement of the traveling device; determining, based on the homography matrix corresponding to the camera device and the pixel coordinate information, obtained by detection, of a target object included in the real-time image, the world coordinate information of the target object in the world coordinate system; and controlling the traveling device based on the world coordinate information of the target object.
  • Here, after the homography matrix corresponding to the target photographing device is determined, a real-time image captured by the target photographing device can be acquired during the movement of the traveling device, and the captured real-time image can be detected to determine the position information, in the pixel coordinate system, of the target object included in the real-time image; then the determined homography matrix and the determined position information of the target object in the pixel coordinate system are used to determine the world coordinate information of the target object in the world coordinate system.
  • the traveling device is controlled, for example, the acceleration, deceleration, steering, braking, etc. of the traveling device can be controlled.
  • voice prompt information can be played to prompt the driver to control the acceleration, deceleration, steering, braking, etc. of the driving device.
  • the determined homography matrix, the position coordinate information of the intersection point in the pixel coordinate system, and the determined position information of the target object in the pixel coordinate system can also be used to determine the world coordinate information of the target object in the world coordinate system; finally, the traveling device is controlled based on the world coordinate information of the target object. For example, this provides a good foundation for subsequent ADAS work such as lane line detection, obstacle detection, traffic sign recognition, and navigation.
  • the world coordinate information of the target object in the world coordinate system can be more accurately determined, thereby realizing precise control of the driving equipment.
  • the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
  • an embodiment of the present disclosure also provides an apparatus for calibrating a camera device.
  • a schematic diagram of the architecture of the apparatus for calibrating a camera device provided in an embodiment of the present disclosure includes an acquisition module 401 , a first determining Module 402, second determination module 403, specifically:
  • the acquisition module 401 is configured to acquire a scene image of a preset scene collected by a camera device arranged on a traveling device, wherein the preset scene includes at least two parallel lines, and the traveling device is located in two adjacent parallel lines. between the lines and the sides of the traveling device are parallel to the two parallel lines;
  • the first determination module 402 is configured to determine, based on the scene image, the two line segments and the first coordinate information of the multiple target reference points on each line segment in the pixel coordinate system and their second coordinate information in the world coordinate system, wherein the two line segments respectively overlap the two lines corresponding to the two parallel lines in the scene image;
  • the second determining module 403 is configured to determine a homography matrix corresponding to the imaging device based on the first coordinate information and the second coordinate information.
  • In a possible implementation, before acquiring a scene image of a preset scene captured by a camera device disposed on the traveling device, the apparatus further includes a first adjustment module 404 configured to: adjust the pose of the camera device so that the skyline included in the scene image captured by the adjusted camera device is located between a set first reference line and a set second reference line, where the first reference line and the second reference line are located on a screen image of the camera device when the scene image is captured, and the first reference line and the second reference line are parallel in the screen image.
  • In a possible implementation, before acquiring a scene image of a preset scene captured by a camera device disposed on the traveling device, the apparatus further includes a second adjustment module 405 configured to: adjust the pose of the camera device so that the skyline in the scene image captured by the adjusted camera device is parallel to or overlaps a set third reference line, where the third reference line is located on the screen image of the camera device when the scene image is captured.
  • In a possible implementation, before acquiring a scene image of a preset scene captured by a camera device disposed on the traveling device, the apparatus further includes a third adjustment module 406 configured to: adjust the pose of the camera device so that the skyline included in the scene image captured by the adjusted camera device overlaps a fourth reference line located between the first reference line and the second reference line; the fourth reference line is located on the screen image of the camera device when the scene image is captured and is parallel to the first reference line and the second reference line on the screen image.
  • The first determination module 402 is configured to determine the two line segments and the multiple target reference points on each line segment according to the following method: from a target region set on the scene image, determining two line segments overlapping the two lines of the two parallel lines in the scene image and multiple target reference points on the two line segments.
  • The first determination module 402, when determining, based on the scene image, the two line segments and the first coordinate information of the multiple target reference points on each line segment in the pixel coordinate system and their second coordinate information in the world coordinate system, is configured to: determine the first coordinate information of the multiple target reference points in the pixel coordinates corresponding to the scene image; determine, based on the first coordinate information of the multiple target reference points, the position coordinate information, in the pixel coordinate system, of the intersection point of the two lines corresponding to the two parallel lines in the scene image; and determine, based on the position coordinate information of the intersection point in the pixel coordinate system and the first coordinate information of the multiple target reference points, the second coordinate information of the multiple target reference points in the world coordinate system.
  • The first determination module 402, when determining the second coordinate information of the multiple target reference points in the world coordinate system based on the position coordinate information of the intersection point in the pixel coordinate system and the first coordinate information of the multiple target reference points, is configured to: for each target reference point, determine the difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point; and determine the second coordinate information of the target reference point based on the difference information, the focal length information of the imaging device, and the predetermined installation height of the imaging device.
  • The difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection includes an abscissa difference and an ordinate difference; the first determination module 402, when determining the second coordinate information of the target reference point based on the difference information, the focal length information of the imaging device, and the predetermined installation height of the imaging device, is configured to: determine the longitudinal coordinate value in the second coordinate information of the target reference point based on the ordinate difference, the installation height of the imaging device, and the longitudinal focal length in the focal length information of the imaging device; and determine the lateral coordinate value in the second coordinate information of the target reference point based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the imaging device.
  • The first determination module 402, when determining the lateral coordinate value in the second coordinate information of the target reference point based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the imaging device, is configured to: determine the lateral distance between the target reference point and the imaging device based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the imaging device; and determine the lateral coordinate value in the second coordinate information of the target reference point based on the determined lateral distance between the imaging device and the center position of the traveling device and the lateral distance between the target reference point and the imaging device.
  • In a possible implementation, after the homography matrix corresponding to the imaging device is determined, the apparatus further includes a control module 407 configured to: acquire a real-time image captured by the camera device during the movement of the traveling device; determine, based on the homography matrix corresponding to the camera device and the pixel coordinate information, obtained by detection, of a target object included in the real-time image, the world coordinate information of the target object in the world coordinate system; and control the traveling device based on the world coordinate information of the target object.
  • In some embodiments, the functions of the apparatus provided by the embodiments of the present disclosure, or the modules it contains, can be used to execute the methods described in the above method embodiments; for specific implementation, reference may be made to the description of the above method embodiments, which, for brevity, is not repeated here.
  • Referring to FIG. 5, a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure, the device includes a processor 501, a memory 502, and a bus 503.
  • The memory 502 is used to store execution instructions and includes an internal memory 5021 and an external memory 5022; the internal memory 5021, also called internal storage, is used to temporarily store operation data in the processor 501 and data exchanged with the external memory 5022 such as a hard disk. The processor 501 exchanges data with the external memory 5022 through the internal memory 5021.
  • When the electronic device runs, the processor 501 communicates with the memory 502 through the bus 503, so that the processor 501 executes the following instructions: acquiring a scene image of a preset scene captured by a camera device arranged on a traveling device; determining, based on the scene image, first coordinate information, in a pixel coordinate system, of multiple target reference points on each of at least two input line segments and second coordinate information thereof in a world coordinate system, where the two line segments having the target reference points respectively overlap the two lines, in the scene image, of the two parallel lines between which the traveling device is located; and determining, based on the first coordinate information and the second coordinate information, a homography matrix corresponding to the imaging device.
  • In addition, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the camera device calibration method described in the foregoing method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • Embodiments of the present disclosure further provide a computer program product carrying program code; the instructions included in the program code can be used to execute the steps of the camera device calibration method described in the above method embodiments. For details, reference may be made to the above method embodiments, which are not repeated here.
  • The above computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a camera device calibration method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a scene image of a preset scene captured by a camera device arranged on a traveling device, where the preset scene includes at least two parallel lines, the traveling device is located between two adjacent parallel lines, and the sides of the traveling device are parallel to the two parallel lines; determining, based on the scene image, two line segments as well as first coordinate information, in a pixel coordinate system, of a plurality of target reference points on each line segment and second coordinate information thereof in a world coordinate system, where the two line segments respectively overlap the two lines corresponding to the two parallel lines in the scene image; and determining, based on the first coordinate information and the second coordinate information, a homography matrix corresponding to the camera device.

Description

Camera device calibration method and apparatus, electronic device, and storage medium

Cross-Reference to Related Applications

This application claims priority to Chinese Patent Application No. 202011529925.7, filed on December 22, 2020, the entire contents of which are incorporated herein by reference.

Technical Field

The present disclosure relates to the technical field of computer vision, and in particular to a camera device calibration method, an apparatus, an electronic device, and a storage medium.

Background

With the development of technology, more and more vehicles are equipped with an Advanced Driving Assistance System (ADAS). An ADAS is usually integrated into a camera device, which can be installed by the user, and the installation position and installation angle of the camera device can be set according to the user's needs. To ensure that the ADAS functions can be used normally after the camera device is installed, the installed camera device needs to be calibrated, that is, the homography matrix of the installed camera device needs to be determined.

Summary

In view of this, the present disclosure provides at least a camera device calibration method, an apparatus, an electronic device, and a storage medium.

In a first aspect, the present disclosure provides a camera device calibration method, including:

acquiring a scene image of a preset scene captured by a camera device arranged on a traveling device, where the preset scene includes at least two parallel lines, the traveling device is located between two adjacent parallel lines, and the sides of the traveling device are parallel to the two parallel lines;

determining, based on the scene image, two line segments as well as first coordinate information, in a pixel coordinate system, of a plurality of target reference points on each line segment and second coordinate information thereof in a world coordinate system, where the two line segments respectively overlap the two lines corresponding to the two parallel lines in the scene image; and

determining, based on the first coordinate information and the second coordinate information, a homography matrix corresponding to the camera device.
With the above method, the first coordinate information, in the pixel coordinate system, of the plurality of target reference points on each of the at least two input line segments and their second coordinate information in the world coordinate system are determined based on the acquired scene image, and the homography matrix corresponding to the camera device can then be determined using the first coordinate information and the second coordinate information. This realizes automatic calibration of the camera device and, compared with manual calibration, improves the efficiency and accuracy of calibration.

In a possible implementation, before acquiring the scene image of the preset scene captured by the camera device arranged on the traveling device, the method further includes:

adjusting the pose of the camera device so that the skyline included in the scene image captured by the adjusted camera device is located between a set first reference line and a set second reference line, where the first reference line and the second reference line are located on a screen image of the camera device when the scene image is captured, and the first reference line and the second reference line are parallel in the screen image.

With the above method, before the scene image captured by the camera device is acquired, the pose of the camera device can be adjusted so that the skyline included in the scene image captured by the adjusted camera device is located between the set first reference line and second reference line; that is, the pitch angle corresponding to the camera device can be made close to 0°, which avoids the situation in which a large pitch angle reduces the accuracy of the generated second coordinate information of the target reference points, thereby improving the accuracy of the homography matrix.

In a possible implementation, before acquiring the scene image of the preset scene captured by the camera device arranged on the traveling device, the method further includes:

adjusting the pose of the camera device so that the skyline in the scene image captured by the adjusted camera device is parallel to or overlaps a set third reference line, where the third reference line is located on the screen image of the camera device when the scene image is captured.

With the above method, before the scene image captured by the camera device is acquired, the pose of the target photographing device can be adjusted so that the skyline in the scene image captured by the adjusted camera device is parallel to or overlaps the set third reference line; that is, the roll angle of the adjusted camera device can be made close to 0°, which avoids the situation in which a large roll angle reduces the accuracy of the generated second coordinate information of the target reference points, thereby improving the accuracy of the homography matrix.

In a possible implementation, before acquiring the scene image of the preset scene captured by the camera device arranged on the traveling device, the method further includes:

adjusting the pose of the camera device so that the skyline included in the scene image captured by the adjusted camera device overlaps a fourth reference line located between the first reference line and the second reference line, where the fourth reference line is located on the screen image of the camera device when the scene image is captured and is parallel to the first reference line and the second reference line on the screen image.

In a possible implementation, determining the two line segments and the plurality of target reference points on each line segment includes:

determining, within a target region set on the scene image, two line segments overlapping the two lines of the two parallel lines in the scene image, and a plurality of target reference points on the two line segments.

Here, a target region can be set on the scene image, in which the real distance between the two parallel lines and the camera device is relatively short; by selecting, from the target region, a plurality of target reference points on the two line segments overlapping the two lines of the two parallel lines in the scene image, the second coordinate information of the selected target reference points can be determined relatively accurately.
In a possible implementation, determining, based on the scene image, the two line segments as well as the first coordinate information, in the pixel coordinate system, of the plurality of target reference points on each line segment and the second coordinate information thereof in the world coordinate system includes:

determining the first coordinate information of the plurality of target reference points in the pixel coordinates corresponding to the scene image;

determining, based on the first coordinate information of the plurality of target reference points, position coordinate information, in the pixel coordinate system, of the intersection point of the two lines corresponding to the two parallel lines in the scene image; and

determining, based on the position coordinate information of the intersection point in the pixel coordinate system and the first coordinate information of the plurality of target reference points, the second coordinate information of the plurality of target reference points in the world coordinate system.

In a possible implementation, determining, based on the position coordinate information of the intersection point in the pixel coordinate system and the first coordinate information of the plurality of target reference points, the second coordinate information of the plurality of target reference points in the world coordinate system includes:

for each target reference point, determining difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point; and determining the second coordinate information of the target reference point based on the difference information, focal length information of the camera device, and a predetermined installation height of the camera device.

With the above method, for each target reference point, the second coordinate information of the target reference point is determined relatively accurately from the calculated difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point, the focal length information of the camera device, and the installation height of the camera device.

In a possible implementation, the difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point includes an abscissa difference and an ordinate difference; determining the second coordinate information of the target reference point based on the difference information, the focal length information of the camera device, and the predetermined installation height of the camera device includes:

determining a longitudinal coordinate value in the second coordinate information of the target reference point based on the ordinate difference, the installation height of the camera device, and a longitudinal focal length in the focal length information of the camera device; and

determining a lateral coordinate value in the second coordinate information of the target reference point based on the longitudinal coordinate value, the abscissa difference, and a lateral focal length in the focal length information of the camera device.

In a possible implementation, determining the lateral coordinate value in the second coordinate information of the target reference point based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the camera device includes:

determining a lateral distance between the target reference point and the camera device based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the camera device; and

determining the lateral coordinate value in the second coordinate information of the target reference point based on a determined lateral distance between the camera device and a center position of the traveling device and the lateral distance between the target reference point and the camera device.

Considering that there is a lateral distance between the camera device and the center position of the traveling device (i.e., the origin of the established world coordinate system), after the lateral distance between the target reference point and the camera device is determined, the lateral coordinate value in the second coordinate information of the target reference point can be determined relatively accurately based on the determined lateral distance between the camera device and the center position of the traveling device and the lateral distance between the target reference point and the camera device.
In a possible implementation, after determining the homography matrix corresponding to the camera device, the method further includes:

acquiring a real-time image captured by the camera device during the movement of the traveling device;

determining, based on the homography matrix corresponding to the camera device and pixel coordinate information, obtained by detection, of a target object included in the real-time image, world coordinate information of the target object in the world coordinate system; and

controlling the traveling device based on the world coordinate information of the target object.

With the above method, after the homography matrix corresponding to the camera device is generated, the world coordinate information of the target object in the world coordinate system can be determined relatively accurately using the determined homography matrix and the pixel coordinate information, obtained by detection, of the target object included in the real-time image, thereby realizing precise control of the traveling device.

For descriptions of the effects of the following apparatus, electronic device, and the like, reference may be made to the description of the above method, which is not repeated here.

In a second aspect, the present disclosure provides a camera device calibration apparatus, including:

an acquisition module configured to acquire a scene image of a preset scene captured by a camera device arranged on a traveling device, where the preset scene includes at least two parallel lines, the traveling device is located between two adjacent parallel lines, and the sides of the traveling device are parallel to the two parallel lines;

a first determination module configured to determine, based on the scene image, two line segments as well as first coordinate information, in a pixel coordinate system, of a plurality of target reference points on each line segment and second coordinate information thereof in a world coordinate system, where the two line segments respectively overlap the two lines corresponding to the two parallel lines in the scene image; and

a second determination module configured to determine, based on the first coordinate information and the second coordinate information, a homography matrix corresponding to the camera device.

In a third aspect, the present disclosure provides an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the camera device calibration method according to the first aspect or any implementation thereof are performed.

In a fourth aspect, the present disclosure provides a computer-readable storage medium storing a computer program that, when run by a processor, performs the steps of the camera device calibration method according to the first aspect or any implementation thereof.

To make the above objects, features, and advantages of the present disclosure more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings

To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings only show certain embodiments of the present disclosure and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.

FIG. 1 shows a schematic flowchart of a camera device calibration method provided by an embodiment of the present disclosure;

FIG. 2 shows a schematic diagram of a scene image in a camera device calibration method provided by an embodiment of the present disclosure;

FIG. 3 shows a schematic flowchart of a specific method, in a camera device calibration method provided by an embodiment of the present disclosure, of determining the second coordinate information of the plurality of target reference points in the world coordinate system based on the first coordinate information of the plurality of target reference points;

FIG. 4 shows a schematic architectural diagram of a camera device calibration apparatus provided by an embodiment of the present disclosure;

FIG. 5 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description

To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure generally described and illustrated in the drawings herein may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the scope of the claimed disclosure but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.

Generally, a user may determine the homography matrix of an installed camera device by manual calibration. For example, the user may place cone markers on the ground, determine the positions of the cone markers, and determine the homography matrix of the installed camera device based on those positions. However, when the camera device is calibrated manually, the operation process is cumbersome and large errors arise in determining the positions of the cone markers, so the determined homography matrix has large errors, which in turn makes the ADAS detection results inaccurate. To improve the accuracy of camera device calibration and determine the homography matrix corresponding to the camera device relatively accurately, embodiments of the present disclosure provide a camera device calibration method, an apparatus, an electronic device, and a storage medium.

The technical solutions of the present disclosure are described clearly and completely below with reference to the drawings of the present disclosure; obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure, and all other embodiments obtained by those skilled in the art without creative effort based on the embodiments of the present disclosure fall within the protection scope of the present disclosure.

It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined and explained in subsequent drawings.

To facilitate understanding of the embodiments of the present disclosure, a camera device calibration method disclosed by the embodiments of the present disclosure is first introduced in detail. The execution subject of the camera device calibration method provided by the embodiments of the present disclosure is generally a computer device with certain computing capability, which includes, for example, a terminal device, a server, or another processing device; the terminal device may be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the camera device calibration method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to FIG. 1, which is a schematic flowchart of the camera device calibration method provided by an embodiment of the present disclosure, the method includes S101-S103:

S101: acquire a scene image of a preset scene captured by a camera device arranged on a traveling device, where the preset scene includes at least two parallel lines, the traveling device is located between two adjacent parallel lines, and the sides of the traveling device are parallel to the two parallel lines. "Parallel lines" here means that the lines are parallel in the actual preset scene; in the scene image, owing to the construction and imaging principle of the camera device, a certain angle exists between the two parallel lines in the image, i.e., they intersect at a certain point.

S102: determine, based on the scene image, two line segments as well as first coordinate information, in a pixel coordinate system, of a plurality of target reference points on each line segment and second coordinate information thereof in a world coordinate system, where the two line segments respectively overlap the two lines corresponding to the two parallel lines in the scene image.

S103: determine, based on the first coordinate information and the second coordinate information, a homography matrix corresponding to the camera device.

In the above method, by determining, based on the acquired scene image, the two line segments as well as the first coordinate information of the plurality of target reference points on each line segment in the pixel coordinate system and their second coordinate information in the world coordinate system, the homography matrix corresponding to the camera device can then be determined using the first coordinate information and the second coordinate information. This realizes automatic calibration of the camera device and, compared with manual calibration, improves the efficiency and accuracy of calibration.

S101-S103 are described in detail below.

Regarding S101:

Here, the traveling device may be a motor vehicle, a non-motor vehicle, a robot, or the like; a camera device may be installed on the traveling device, and after installation, a scene image of the preset scene captured by the camera device is acquired. The preset scene includes at least two parallel lines, the traveling device is located between two parallel lines, and the sides of the traveling device are parallel to the two parallel lines between which it is located. By controlling the sides of the traveling device to be parallel to the two parallel lines, the yaw angle corresponding to the camera device can be made close to 0° (i.e., the difference between the yaw angle corresponding to the camera device and 0° is smaller than a set first difference threshold); then, based on the captured scene image corresponding to the preset scene, the second coordinate information of the target reference points can be determined relatively accurately.

Exemplarily, the two parallel lines may be road traffic markings set on a road; for example, either of the two parallel lines may be a solid white line, a dashed white line, a solid yellow line, or the like on a road; or the two parallel lines may be two parallel lines drawn on a parking space. For example, the preset scene may be any scene in which at least two road markings (i.e., two parallel lines) exist and the skyline is visible, e.g., a road scene or a parking lot scene.

Before the scene image of the preset scene is acquired, the pose of the camera device may be adjusted in the following three ways, so that when the second coordinate information of the target reference points is determined based on the scene image captured by the adjusted camera device, the accuracy of the determined second coordinate information can be improved.
Way 1: before acquiring the scene image of the preset scene captured by the camera device arranged on the traveling device, the method further includes: adjusting the pose of the camera device so that the skyline included in the scene image captured by the adjusted camera device is located between a set first reference line and a set second reference line, where the first reference line and the second reference line are located on a screen image of the camera device when the scene image is captured, and the first reference line and the second reference line are parallel.

Here, considering that there is a correlation between the pitch angle of the camera device and the second coordinate information of the target reference points, in order to determine the second coordinate information of the target reference points relatively accurately, the pose of the camera device may be adjusted so that the skyline included in the scene image captured by the adjusted camera device lies between the set first and second reference lines, i.e., so that the pitch angle corresponding to the adjusted camera device is close to 0° (the difference between the pitch angle corresponding to the camera device and 0° is smaller than a second difference threshold). Exemplarily, the positions of the first reference line and the second reference line on the screen image may be determined according to the actual situation.

With the above method, the situation in which a large pitch angle reduces the accuracy of the generated second coordinate information of the target reference points can be avoided, thereby improving the accuracy of the homography matrix.

Way 2: before acquiring the scene image of the preset scene captured by the camera device arranged on the traveling device, the method further includes: adjusting the pose of the camera device so that the skyline in the scene image captured by the adjusted camera device is parallel to or overlaps a set third reference line, where the third reference line is located on the screen image of the camera device when the scene image is captured.

Here, considering that there is a correlation between the roll angle of the camera device and the second coordinate information of the target reference points, in order to determine the second coordinate information relatively accurately, a third reference line may be set on the screen image of the camera device when capturing the scene image, and the pose of the camera device is adjusted so that the skyline in the scene image captured by the adjusted camera device is parallel to or overlaps the set third reference line, i.e., so that the roll angle of the camera device is close to 0° (the difference between the roll angle of the camera device and 0° is smaller than a set third difference threshold). The position of the third reference line on the screen image may be determined according to the actual situation.

With the above method, the situation in which a large roll angle reduces the accuracy of the generated second coordinate information of the target reference points can be avoided, thereby improving the accuracy of the homography matrix.

Way 3: before acquiring the scene image of the preset scene captured by the camera device arranged on the traveling device, the method further includes:

adjusting the pose of the camera device so that the skyline included in the scene image captured by the adjusted camera device overlaps a fourth reference line located between the set first and second reference lines; the fourth reference line is located on the screen image of the camera device when the scene image is captured, lies between the first reference line and the second reference line, and is parallel to the first reference line and the second reference line.

Here, a first reference line, a second reference line, and a fourth reference line that are mutually parallel may be set on the screen image of the camera device when capturing the scene image, where the fourth reference line is located between the first reference line and the second reference line. The position information of the first, second, and fourth reference lines on the screen image may be set according to the actual situation. The pose of the camera device may then be adjusted so that the skyline included in the scene image captured by the adjusted camera device overlaps the set fourth reference line; that is, both the pitch angle and the roll angle of the camera device can be made close to 0°.
Regarding S102:

After the scene image is acquired, the first coordinate information of the plurality of target reference points on the scene image in the pixel coordinate system may be determined, and then the second coordinate information of the plurality of target reference points in the world coordinate system may be determined based on their first coordinate information. The origin of the world coordinate system may be selected as needed; for example, the world coordinate system may be a coordinate system constructed with the center point of the traveling device as the origin, or with the center point of the top plane of the traveling device as the origin, or with the camera device as the origin.

In an optional implementation, determining the two line segments and the plurality of target reference points on each line segment according to the following method includes: determining, within a target region set on the scene image, two line segments overlapping the two parallel lines in the scene image, and determining a plurality of target reference points on the two line segments.

Here, the target region may be set as needed; for example, the target region may be an image region located in the middle of the scene image, and/or the target region may be an image region including the two lines of the two parallel lines in the scene image. Since the second coordinate information of a pixel cannot be determined accurately when the real distance between the real object indicated by that pixel in the scene image and the camera device is large, the distance between the real objects corresponding to the pixels in the target region and the camera device is smaller than a set distance threshold.

The target region may also be set automatically according to the size of the scene image; for example, a predetermined proportion of the lower-middle part of the scene image may automatically be set as the target region, and it may be automatically judged whether the target region includes at least two road markings; if not, the target region may be enlarged automatically, or a prompt may be given that the scene image is incorrect and needs to be re-acquired.

Exemplarily, two line segments overlapping the two lines of the two parallel lines in the scene image may be determined within the target region set on the scene image, and four target reference points may be selected on the two line segments. Referring to FIG. 2, a schematic diagram of a scene image in a camera device calibration method, the figure includes a target region 21, parallel lines 22, and a plurality of target reference points 23 determined in the target region.

Here, a target region can be set on the scene image, in which the real distance between the two parallel lines and the camera device is relatively short; by determining, from the target region, the two line segments overlapping the two lines of the two parallel lines in the scene image and the plurality of target reference points on the two line segments, the second coordinate information of the selected target reference points can be determined relatively accurately.

Exemplarily, the lengths of the two line segments are related to the size of the target region; for example, their lengths may be determined automatically according to the length of the target region, or according to the lengths of the road markings detected in the target region. As another example, the lengths of the two line segments may be set according to empirical values.

In one example, after the lengths of the two line segments are determined, the two line segments may be moved automatically so that they overlap the detected parallel lines. In another example, the two line segments may also be adjusted manually so that they overlap the parallel lines more closely.
In an optional implementation, referring to FIG. 3, determining, based on the scene image, the two line segments as well as the first coordinate information of the plurality of target reference points on each line segment in the pixel coordinate system and their second coordinate information in the world coordinate system includes:

S301: determine the first coordinate information of the plurality of target reference points in the pixel coordinate system corresponding to the scene image;

S302: determine, based on the first coordinate information of the plurality of target reference points, position coordinate information, in the pixel coordinate system, of the intersection point of the two parallel lines in the scene image; this intersection point is the vanishing point of the two parallel lines between which the traveling device is located;

S303: determine, based on the position coordinate information of the intersection point in the pixel coordinate system and the first coordinate information of the plurality of target reference points, the second coordinate information of the plurality of target reference points in the world coordinate system.

In S301 and S302, the first coordinate information of each selected target reference point in the pixel coordinate system corresponding to the scene image may be determined. Then, based on the first coordinate information of the plurality of target reference points, the fitting parameters of the lines on which the target reference points lie may be determined; further, using the determined fitting parameters of the two lines, the position coordinate information, in the pixel coordinate system, of the intersection point of the two lines of the two parallel lines in the scene image may be determined.

An exemplary explanation is given with reference to FIG. 2, which also includes the two lines 22 of the two parallel lines in the scene image, i.e., a first line and a second line, and the intersection point 24 corresponding to the first line and the second line. In FIG. 2, the number of selected target reference points 23 may be four; the two target reference points 23 on the left are located on the first line, and the two target reference points 23 on the right are located on the second line.

It can be understood that, since the two line segments overlap the two lines respectively, the two line segments are not shown in the figure. Exemplarily, the target reference points may be the endpoints of each line segment.
For example, the first coordinate information of the first target reference point on the left is (x1, y1), that of the second target reference point is (x2, y2), that of the third target reference point on the right is (x3, y3), and that of the fourth target reference point is (x4, y4). The first straight-line equation (1) corresponding to the first line can then be obtained:

(y − y1) / (y2 − y1) = (x − x1) / (x2 − x1)    (1);

and the second straight-line equation (2) corresponding to the second line:

(y − y3) / (y4 − y3) = (x − x3) / (x4 − x3)    (2).

Then, using the first straight-line equation (1) of the first line and the second straight-line equation (2) of the second line, the position coordinate information of the intersection point in the pixel coordinate system can be determined. Writing the slopes of the two lines as k1 = (y2 − y1)/(x2 − x1) and k2 = (y4 − y3)/(x4 − x3), the abscissa VPx in the determined position coordinate information of the intersection point in the pixel coordinate system is:

VPx = (y3 − y1 + k1·x1 − k2·x3) / (k1 − k2);

and the ordinate VPy in the determined position coordinate information of the intersection point in the pixel coordinate system is:

VPy = y1 + k1·(VPx − x1).

That is, the position coordinate information (VPx, VPy) of the intersection point in the pixel coordinate system is obtained.

In another specific implementation, more than two first target reference points may also be selected from the first line in FIG. 2, and the first straight-line equation corresponding to the first line is determined by performing straight-line fitting on the first coordinate information, in the pixel coordinate system, of the selected first target reference points (i.e., the first fitting parameters corresponding to the first line are determined); likewise, more than two second target reference points may be selected from the second line, and the second straight-line equation corresponding to the second line is determined by performing straight-line fitting on the coordinate information, in the pixel coordinate system, of the selected second target reference points (i.e., the second fitting parameters corresponding to the second line are determined). The numbers of selected first and second target reference points may be set as needed; for example, four first target reference points may be selected from the first line and four second target reference points from the second line.

Exemplarily, the selected first target reference points may be fitted by the least-squares method to determine the first fitting parameters corresponding to the first line, i.e., to obtain the first straight-line equation corresponding to the first line; and the selected second target reference points may be fitted by the least-squares method to determine the second fitting parameters corresponding to the second line, i.e., to obtain the second straight-line equation corresponding to the second line.

Further, the first straight-line equation corresponding to the first line and the second straight-line equation corresponding to the second line may be used to determine the position coordinate information, in the pixel coordinate system, of the intersection point of the two lines (i.e., the first line and the second line) of the two parallel lines in the scene image.
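As an illustrative, non-limiting sketch of the intersection computation described above (assumed Python; the function name and argument layout are hypothetical, not part of the patent), the vanishing point of the two fitted lines can be computed from the four reference points as follows:

```python
# Hypothetical sketch: intersect the line through p1, p2 (first line) with
# the line through p3, p4 (second line) in pixel coordinates, following the
# two-point line equations (1) and (2). Assumes neither line is vertical in
# the image and the two lines are not parallel in the image.

def vanishing_point(p1, p2, p3, p4):
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    k1 = (y2 - y1) / (x2 - x1)  # slope of the first line
    k2 = (y4 - y3) / (x4 - x3)  # slope of the second line
    vp_x = (y3 - y1 + k1 * x1 - k2 * x3) / (k1 - k2)
    vp_y = y1 + k1 * (vp_x - x1)
    return vp_x, vp_y
```

For instance, the line through (0, 0) and (1, 1) and the line through (4, 0) and (3, 1) intersect at (2, 2), which the sketch reproduces.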
In S303, in an optional implementation, determining the second coordinate information of the plurality of target reference points based on the position coordinate information of the intersection point in the pixel coordinate system and the first coordinate information of the plurality of target reference points includes: for each target reference point, calculating difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point; and determining the second coordinate information of the target reference point based on the difference information, the focal length information of the camera device, and the predetermined installation height of the camera device.

In specific implementation, for each target reference point, the difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point is first calculated; that is, the abscissa of the target reference point and the abscissa of the intersection point can be subtracted to obtain the abscissa difference between the target reference point and the intersection point, and the ordinate of the target reference point and the ordinate of the intersection point can be subtracted to obtain the ordinate difference between the target reference point and the intersection point.

The second coordinate information of the target reference point is then determined using the determined difference information, the focal length information of the camera device, and the predetermined installation height of the camera device, where the focal length information of the camera device may include a longitudinal focal length and a lateral focal length, and the installation height of the camera device is the height of the camera device above the ground.

With the above method, for each target reference point, the second coordinate information of the target reference point is determined relatively accurately from the calculated difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point, the focal length information of the camera device, and the installation height of the camera device.

In an optional implementation, the difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point includes an abscissa difference and an ordinate difference; determining the second coordinate information of the target reference point based on the difference information, the focal length information of the camera device, and the predetermined installation height of the camera device includes:

determining the longitudinal coordinate value in the second coordinate information of the target reference point based on the ordinate difference, the installation height of the camera device, and the longitudinal focal length in the focal length information of the camera device; and determining the lateral coordinate value in the second coordinate information of the target reference point based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the camera device.

The longitudinal distance between the target reference point and the camera device can be determined according to the following formula (3):

DY_i = f_y · camera_H / abs(VP_y − y_i)    (3);

where DY_i is the longitudinal distance between the i-th target reference point and the camera device, abs(VP_y − y_i) is the absolute value of the ordinate difference corresponding to the i-th target reference point, f_y is the longitudinal focal length, and camera_H is the installation height of the camera device.
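As an illustrative sketch of formula (3) (assumed Python; the function and parameter names are hypothetical), the longitudinal ground distance can be computed from the pixel rows as follows:

```python
# Hypothetical sketch of formula (3): longitudinal distance of a ground
# reference point from the camera. f_y is the longitudinal focal length in
# pixels, camera_h the mounting height (e.g. metres), vp_y and y_i the pixel
# ordinates of the vanishing point and of the i-th reference point.

def longitudinal_distance(y_i, vp_y, f_y, camera_h):
    return f_y * camera_h / abs(vp_y - y_i)
```

For example, with f_y = 1000 px, a mounting height of 1.5 m, vp_y = 500, and y_i = 650, the point lies 10 m ahead of the camera.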
The longitudinal coordinate value in the second coordinate information of the target reference point can then be determined based on the longitudinal distance between the target reference point and the camera device. For example, when the camera device is located on the longitudinal coordinate axis of the world coordinate system, the determined longitudinal distance between the target reference point and the camera device is the longitudinal coordinate value; when there is a longitudinal distance between the camera device and the lateral coordinate axis of the world coordinate system, the longitudinal coordinate value in the second coordinate information of the target reference point can be determined based on the longitudinal distance between the camera device and the lateral coordinate axis of the world coordinate system and the determined longitudinal distance between the target reference point and the camera device.

Considering that the origin of the constructed world coordinate system may or may not coincide with the installation position of the camera device, the lateral distance between the target reference point and the camera device can be determined first, and then the determined lateral distance between the target reference point and the camera device and the lateral distance between the camera device and the origin are used to determine the lateral coordinate value in the second coordinate information of the target reference point.

In an optional implementation, determining the lateral coordinate value in the second coordinate information of the target reference point based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the camera device includes:

determining the lateral distance between the target reference point and the camera device based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the camera device; and determining the lateral coordinate value in the second coordinate information of the target reference point based on the determined lateral distance between the camera device and the center position of the traveling device and the lateral distance between the target reference point and the camera device.

The lateral distance between the target reference point and the camera device can be determined according to the following formula (4):

DX_i = abs(VP_x − x_i) · DY_i / f_x    (4);

where DX_i is the lateral coordinate value of the i-th target reference point in the world coordinate system, abs(VP_x − x_i) is the absolute value of the abscissa difference corresponding to the i-th target reference point, f_x is the lateral focal length, and DY_i is the longitudinal coordinate value of the i-th target reference point in the world coordinate system.

After the lateral distance between the target reference point and the camera device is determined, the lateral coordinate value in the second coordinate information of the target reference point can be determined using the determined lateral distance between the target reference point and the camera device and the lateral distance between the camera device and the origin of the constructed world coordinate system. When the origin of the constructed world coordinate system is the center position of the traveling device, the lateral distance between the camera device and the origin of the constructed world coordinate system is the lateral distance between the camera device and the center position of the traveling device.

Exemplarily, through the above process, the second coordinate information of the four target reference points shown in FIG. 2 can be obtained, that is, the second coordinate information (X1, Y1) of the first target reference point on the left, the second coordinate information (X2, Y2) of the second target reference point, the second coordinate information (X3, Y3) of the third target reference point on the right, and the second coordinate information (X4, Y4) of the fourth target reference point.

Considering that there is a lateral distance between the camera device and the center position of the traveling device (i.e., the origin of the established world coordinate system), after the lateral distance between the target reference point and the camera device is determined, the lateral coordinate value in the second coordinate information of the target reference point can be determined relatively accurately based on the determined lateral distance between the camera device and the center position of the traveling device and the lateral distance between the target reference point and the camera device.
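As an illustrative sketch of formula (4) together with the lateral-offset correction described above (assumed Python; `cam_offset_x`, the camera's lateral offset from the vehicle-centre origin, is a hypothetical parameter name, not one used in the patent):

```python
# Hypothetical sketch: formula (4) gives the lateral distance of the i-th
# reference point from the camera; the camera's lateral offset from the
# world-frame origin (assumed parameter cam_offset_x) is then added back to
# obtain the lateral coordinate value in the world frame.

def lateral_coordinate(x_i, vp_x, f_x, dy_i, cam_offset_x=0.0):
    dx_cam = abs(vp_x - x_i) * dy_i / f_x  # lateral distance from the camera
    return dx_cam + cam_offset_x           # shift to the world-frame origin
```

For example, with vp_x = 640, x_i = 540, f_x = 1000, DY_i = 10 m, and a 0.2 m camera offset, the lateral coordinate value is 1.2 m.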
Regarding S103:

Here, the homography matrix corresponding to the camera device can be determined based on the first coordinate information and the second coordinate information.

Exemplarily, the homography matrix can be determined according to the following formula (5):

H = (A·Aᵀ)·(C·Aᵀ)⁻¹    (5);

where H is the homography matrix corresponding to the camera device;

C = [x1 x2 x3 x4; y1 y2 y3 y4; 1 1 1 1]

is the first matrix formed by the first coordinate information of the plurality of target reference points;

A = [X1 X2 X3 X4; Y1 Y2 Y3 Y4; 1 1 1 1]

is the second matrix formed by the second coordinate information of the plurality of target reference points; and Aᵀ is the matrix transpose of A.
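As an illustrative sketch of formula (5) (assumed Python with NumPy; the stacking of each point's coordinates with a homogeneous 1 into 3×N matrices is an assumption reconstructed from the formula, since the original matrix figures are not reproduced here):

```python
# Hypothetical sketch of formula (5): H = (A·Aᵀ)(C·Aᵀ)⁻¹, with C stacking
# the homogeneous pixel coordinates of the reference points and A their
# homogeneous world coordinates, each as a 3 x N matrix.
import numpy as np

def homography(pixel_pts, world_pts):
    C = np.vstack([np.asarray(pixel_pts, dtype=float).T,
                   np.ones(len(pixel_pts))])   # 3 x N pixel matrix
    A = np.vstack([np.asarray(world_pts, dtype=float).T,
                   np.ones(len(world_pts))])   # 3 x N world matrix
    return (A @ A.T) @ np.linalg.inv(C @ A.T)  # pixel-to-world homography
```

Note that with this formula H satisfies A = H·C when the correspondence is exact, i.e., H maps homogeneous pixel coordinates to world coordinates, consistent with its later use for localizing detected objects.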
In an optional implementation, after the homography matrix corresponding to the camera device is determined, the method further includes: acquiring a real-time image captured by the camera device during the movement of the traveling device; determining, based on the homography matrix corresponding to the camera device and the pixel coordinate information, obtained by detection, of a target object included in the real-time image, the world coordinate information of the target object in the world coordinate system; and controlling the traveling device based on the world coordinate information of the target object.

Here, after the homography matrix corresponding to the target photographing device is determined, a real-time image captured by the target photographing device can be acquired during the movement of the traveling device, and the captured real-time image is detected to determine the position information, in the pixel coordinate system, of the target object included in the real-time image; the determined homography matrix and the determined position information of the target object in the pixel coordinate system are then used to determine the world coordinate information of the target object in the world coordinate system. Finally, the traveling device is controlled based on the world coordinate information of the target object; for example, the acceleration, deceleration, steering, and braking of the traveling device can be controlled. Alternatively, voice prompt information can be played to prompt the driver to control the traveling device to accelerate, decelerate, steer, brake, and so on.

Exemplarily, after the position information of the target object included in the real-time image in the pixel coordinate system is determined, the determined homography matrix, the position coordinate information of the intersection point in the pixel coordinate system, and the determined position information of the target object in the pixel coordinate system can also be used to determine the world coordinate information of the target object in the world coordinate system. Finally, the traveling device is controlled based on the world coordinate information of the target object. For example, this provides a good foundation for subsequent ADAS work such as lane line detection, obstacle detection, traffic sign recognition, and navigation.

With the above method, the world coordinate information of the target object in the world coordinate system can be determined relatively accurately, thereby realizing precise control of the traveling device.
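As an illustrative sketch of the localization step described above (assumed Python with NumPy; the function name is hypothetical), a detected object's pixel coordinates can be mapped into the world frame with the calibrated pixel-to-world homography H:

```python
# Hypothetical sketch: apply the pixel-to-world homography H from formula (5)
# to a detected object's pixel coordinates (u, v) and normalise the
# homogeneous result to obtain world-frame coordinates (X, Y).
import numpy as np

def pixel_to_world(H, u, v):
    X, Y, W = H @ np.array([u, v, 1.0])
    return X / W, Y / W  # divide by the homogeneous scale factor
```

The resulting world coordinates could then feed downstream control or driver-prompt logic.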
Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.

Based on the same concept, an embodiment of the present disclosure also provides a camera device calibration apparatus. Referring to FIG. 4, which is a schematic architectural diagram of the camera device calibration apparatus provided by an embodiment of the present disclosure, the apparatus includes an acquisition module 401, a first determination module 402, and a second determination module 403; specifically:

the acquisition module 401 is configured to acquire a scene image of a preset scene captured by a camera device arranged on a traveling device, where the preset scene includes at least two parallel lines, the traveling device is located between two adjacent parallel lines, and the sides of the traveling device are parallel to the two parallel lines;

the first determination module 402 is configured to determine, based on the scene image, two line segments as well as first coordinate information, in a pixel coordinate system, of a plurality of target reference points on each line segment and second coordinate information thereof in a world coordinate system, where the two line segments respectively overlap the two lines corresponding to the two parallel lines in the scene image;

the second determination module 403 is configured to determine, based on the first coordinate information and the second coordinate information, a homography matrix corresponding to the camera device.
In a possible implementation, before acquiring the scene image of the preset scene captured by the camera device arranged on the traveling device, the apparatus further includes a first adjustment module 404 configured to:

adjust the pose of the camera device so that the skyline included in the scene image captured by the adjusted camera device is located between a set first reference line and a set second reference line, where the first reference line and the second reference line are located on a screen image of the camera device when the scene image is captured, and the first reference line and the second reference line are parallel in the screen image.

In a possible implementation, before acquiring the scene image of the preset scene captured by the camera device arranged on the traveling device, the apparatus further includes a second adjustment module 405 configured to:

adjust the pose of the camera device so that the skyline in the scene image captured by the adjusted camera device is parallel to or overlaps a set third reference line, where the third reference line is located on the screen image of the camera device when the scene image is captured.

In a possible implementation, before acquiring the scene image of the preset scene captured by the camera device arranged on the traveling device, the apparatus further includes a third adjustment module 406 configured to:

adjust the pose of the camera device so that the skyline included in the scene image captured by the adjusted camera device overlaps a fourth reference line located between the first reference line and the second reference line; the fourth reference line is located on the screen image of the camera device when the scene image is captured and is parallel to the first reference line and the second reference line on the screen image.

In a possible implementation, the first determination module 402 is configured to determine the two line segments and the plurality of target reference points on each line segment according to the following method:

determining, within a target region set on the scene image, two line segments overlapping the two lines of the two parallel lines in the scene image, and a plurality of target reference points on the two line segments.

In a possible implementation, the first determination module 402, when determining, based on the scene image, the two line segments as well as the first coordinate information of the plurality of target reference points on each line segment in the pixel coordinate system and their second coordinate information in the world coordinate system, is configured to:

determine the first coordinate information of the plurality of target reference points in the pixel coordinates corresponding to the scene image;

determine, based on the first coordinate information of the plurality of target reference points, the position coordinate information, in the pixel coordinate system, of the intersection point of the two lines corresponding to the two parallel lines in the scene image; and

determine, based on the position coordinate information of the intersection point in the pixel coordinate system and the first coordinate information of the plurality of target reference points, the second coordinate information of the plurality of target reference points in the world coordinate system.

In a possible implementation, the first determination module 402, when determining the second coordinate information of the plurality of target reference points in the world coordinate system based on the position coordinate information of the intersection point in the pixel coordinate system and the first coordinate information of the plurality of target reference points, is configured to:

for each target reference point, determine the difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point; and determine the second coordinate information of the target reference point based on the difference information, the focal length information of the camera device, and the predetermined installation height of the camera device.

In a possible implementation, the difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point includes an abscissa difference and an ordinate difference; the first determination module 402, when determining the second coordinate information of the target reference point based on the difference information, the focal length information of the camera device, and the predetermined installation height of the camera device, is configured to:

determine the longitudinal coordinate value in the second coordinate information of the target reference point based on the ordinate difference, the installation height of the camera device, and the longitudinal focal length in the focal length information of the camera device; and

determine the lateral coordinate value in the second coordinate information of the target reference point based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the camera device.

In a possible implementation, the first determination module 402, when determining the lateral coordinate value in the second coordinate information of the target reference point based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the camera device, is configured to:

determine the lateral distance between the target reference point and the camera device based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the camera device; and

determine the lateral coordinate value in the second coordinate information of the target reference point based on the determined lateral distance between the camera device and the center position of the traveling device and the lateral distance between the target reference point and the camera device.

In a possible implementation, after the homography matrix corresponding to the camera device is determined, the apparatus further includes a control module 407 configured to:

acquire a real-time image captured by the camera device during the movement of the traveling device;

determine, based on the homography matrix corresponding to the camera device and the pixel coordinate information, obtained by detection, of a target object included in the real-time image, the world coordinate information of the target object in the world coordinate system; and

control the traveling device based on the world coordinate information of the target object.
In some embodiments, the functions of the apparatus provided by the embodiments of the present disclosure, or the modules it contains, can be used to execute the methods described in the above method embodiments; for their specific implementation, reference may be made to the description of the above method embodiments, which, for brevity, is not repeated here.

Based on the same technical concept, an embodiment of the present disclosure also provides an electronic device. Referring to FIG. 5, which is a schematic structural diagram of the electronic device provided by an embodiment of the present disclosure, the device includes a processor 501, a memory 502, and a bus 503. The memory 502 is used to store execution instructions and includes an internal memory 5021 and an external memory 5022; the internal memory 5021, also called internal storage, is used to temporarily store operation data in the processor 501 and data exchanged with the external memory 5022 such as a hard disk, and the processor 501 exchanges data with the external memory 5022 through the internal memory 5021. When the electronic device 500 runs, the processor 501 communicates with the memory 502 through the bus 503, so that the processor 501 executes the following instructions:

acquiring a scene image of a preset scene captured by a camera device arranged on a traveling device;

determining, based on the scene image, first coordinate information, in a pixel coordinate system, of a plurality of target reference points on each of at least two input line segments and second coordinate information thereof in a world coordinate system, where the two line segments having the target reference points respectively overlap the two lines, in the scene image, of the two parallel lines between which the traveling device is located; and

determining, based on the first coordinate information and the second coordinate information, a homography matrix corresponding to the camera device.

In addition, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the camera device calibration method described in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.

An embodiment of the present disclosure further provides a computer program product carrying program code; the instructions included in the program code can be used to execute the steps of the camera device calibration method described in the above method embodiments. For details, reference may be made to the above method embodiments, which are not repeated here.

The above computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统和装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。在本公开所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such an understanding, the technical solutions of the present disclosure, in essence, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present disclosure, and such changes or substitutions shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

  1. A camera device calibration method, comprising:
    acquiring a scene image of a preset scene captured by a camera device arranged on a traveling device, wherein the preset scene includes at least two parallel lines, the traveling device is located between two adjacent parallel lines, and a side of the traveling device is parallel to the two parallel lines;
    determining, based on the scene image, two line segments and first coordinate information in a pixel coordinate system and second coordinate information in a world coordinate system of a plurality of target reference points on each line segment, wherein the two line segments respectively overlap the two lines in the scene image corresponding to the two parallel lines; and
    determining a homography matrix corresponding to the camera device based on the first coordinate information and the second coordinate information.
  2. The method according to claim 1, further comprising, before acquiring the scene image of the preset scene captured by the camera device arranged on the traveling device:
    adjusting a pose of the camera device so that a horizon included in the scene image captured by the adjusted camera device is located between a set first reference line and second reference line, wherein the first reference line and the second reference line are located on a screen image of the camera device when capturing the scene image, and the first reference line and the second reference line are parallel in the screen image.
  3. The method according to claim 1, further comprising, before acquiring the scene image of the preset scene captured by the camera device arranged on the traveling device:
    adjusting a pose of the camera device so that a horizon in the scene image captured by the adjusted camera device is parallel to or overlaps a set third reference line, wherein the third reference line is located on a screen image of the camera device when capturing the scene image.
  4. The method according to claim 2, further comprising, before acquiring the scene image of the preset scene captured by the camera device arranged on the traveling device:
    adjusting the pose of the camera device so that the horizon included in the scene image captured by the adjusted camera device overlaps a fourth reference line located between the first reference line and the second reference line, wherein the fourth reference line is located on the screen image of the camera device when capturing the scene image and is parallel to the first reference line and the second reference line in the screen image.
  5. The method according to any one of claims 1 to 4, wherein determining the two line segments and the plurality of target reference points on each line segment comprises:
    determining, within a target region set on the scene image, the two line segments overlapping the two lines of the two parallel lines in the scene image, and the plurality of target reference points on the two line segments.
  6. The method according to any one of claims 1 to 5, wherein determining, based on the scene image, the two line segments and the first coordinate information in the pixel coordinate system and the second coordinate information in the world coordinate system of the plurality of target reference points on each line segment comprises:
    determining the first coordinate information of the plurality of target reference points in the pixel coordinate system corresponding to the scene image;
    determining, based on the first coordinate information of the plurality of target reference points, position coordinate information in the pixel coordinate system of an intersection point of the two lines in the scene image corresponding to the two parallel lines; and
    determining the second coordinate information of the plurality of target reference points in the world coordinate system based on the position coordinate information of the intersection point in the pixel coordinate system and the first coordinate information of the plurality of target reference points.
  7. The method according to claim 6, wherein determining the second coordinate information of the plurality of target reference points in the world coordinate system based on the position coordinate information of the intersection point in the pixel coordinate system and the first coordinate information of the plurality of target reference points comprises:
    for each target reference point,
    determining difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point; and
    determining the second coordinate information of the target reference point based on the difference information, focal length information of the camera device, and a predetermined mounting height of the camera device.
  8. The method according to claim 7, wherein the difference information between the first coordinate information of the target reference point and the position coordinate information of the intersection point includes an abscissa difference and an ordinate difference; and determining the second coordinate information of the target reference point based on the difference information, the focal length information of the camera device, and the predetermined mounting height of the camera device comprises:
    determining a longitudinal coordinate value in the second coordinate information of the target reference point based on the ordinate difference, the mounting height of the camera device, and a longitudinal focal length in the focal length information of the camera device; and
    determining a lateral coordinate value in the second coordinate information of the target reference point based on the longitudinal coordinate value, the abscissa difference, and a lateral focal length in the focal length information of the camera device.
  9. The method according to claim 8, wherein determining the lateral coordinate value in the second coordinate information of the target reference point based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the camera device comprises:
    determining a lateral distance between the target reference point and the camera device based on the longitudinal coordinate value, the abscissa difference, and the lateral focal length in the focal length information of the camera device; and
    determining the lateral coordinate value in the second coordinate information of the target reference point based on a determined lateral distance between the camera device and a center position of the traveling device and the lateral distance between the target reference point and the camera device.
  10. The method according to any one of claims 1 to 9, further comprising, after the homography matrix corresponding to the camera device is determined:
    acquiring a real-time image captured by the camera device while the traveling device is moving;
    determining world coordinate information of a target object in the world coordinate system based on the homography matrix corresponding to the camera device and detected pixel coordinate information of the target object included in the real-time image; and
    controlling the traveling device based on the world coordinate information of the target object.
  11. A camera device calibration apparatus, comprising:
    an acquiring module, configured to acquire a scene image of a preset scene captured by a camera device arranged on a traveling device, wherein the preset scene includes at least two parallel lines, the traveling device is located between two adjacent parallel lines, and a side of the traveling device is parallel to the two parallel lines;
    a first determining module, configured to determine, based on the scene image, two line segments and first coordinate information in a pixel coordinate system and second coordinate information in a world coordinate system of a plurality of target reference points on each line segment, wherein the two line segments respectively overlap the two lines in the scene image corresponding to the two parallel lines; and
    a second determining module, configured to determine a homography matrix corresponding to the camera device based on the first coordinate information and the second coordinate information.
  12. An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus; and when the machine-readable instructions are executed by the processor, the steps of the camera device calibration method according to any one of claims 1 to 10 are performed.
  13. A computer-readable storage medium on which a computer program is stored, wherein when the computer program is run by a processor, the steps of the camera device calibration method according to any one of claims 1 to 10 are performed.
PCT/CN2021/102795 2020-12-22 2021-06-28 Camera device calibration method and apparatus, electronic device, and storage medium WO2022134518A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/873,722 US20220366606A1 (en) 2020-12-22 2022-07-26 Methods for calibrating image acquiring devices, electronic devices and storage media

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011529925.7 2020-12-22
CN202011529925.7A CN112529968A (zh) Camera device calibration method and apparatus, electronic device, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/873,722 Continuation US20220366606A1 (en) 2020-12-22 2022-07-26 Methods for calibrating image acquiring devices, electronic devices and storage media

Publications (1)

Publication Number Publication Date
WO2022134518A1 true WO2022134518A1 (zh) 2022-06-30

Family

ID=75002386

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/102795 WO2022134518A1 (zh) 2020-12-22 2021-06-28 摄像设备标定方法、装置、电子设备及存储介质

Country Status (3)

Country Link
US (1) US20220366606A1 (zh)
CN (1) CN112529968A (zh)
WO (1) WO2022134518A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529968A (zh) * 2020-12-22 2021-03-19 上海商汤临港智能科技有限公司 Camera device calibration method and apparatus, electronic device, and storage medium
CN113706630B (zh) * 2021-08-26 2024-02-06 西安电子科技大学 Method for calibrating camera pitch angle based on a set of horizontal parallel lines
CN114199124B (zh) * 2021-11-09 2023-07-25 汕头大学 Coordinate calibration method, apparatus, system, and medium based on linear fitting

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100104166A (ko) * 2009-03-17 2010-09-29 주식회사 만도 Camera calibration method
CN107133985A (zh) * 2017-04-20 2017-09-05 常州智行科技有限公司 Automatic calibration method for a vehicle-mounted camera based on the lane line vanishing point
CN109191531A (zh) * 2018-07-30 2019-01-11 深圳市艾为智能有限公司 Automatic extrinsic-parameter calibration method for a rear vehicle-mounted camera based on lane line detection
US20190311494A1 (en) * 2018-04-05 2019-10-10 Microsoft Technology Licensing, Llc Automatic camera calibration
CN110349219A (zh) * 2018-04-04 2019-10-18 杭州海康威视数字技术股份有限公司 Camera extrinsic parameter calibration method and apparatus
CN110570475A (zh) * 2018-06-05 2019-12-13 上海商汤智能科技有限公司 Vehicle-mounted camera self-calibration method and apparatus, and vehicle driving method and apparatus
CN111380502A (zh) * 2020-03-13 2020-07-07 商汤集团有限公司 Calibration method, position determination method, apparatus, electronic device, and storage medium
CN112529968A (zh) * 2020-12-22 2021-03-19 上海商汤临港智能科技有限公司 Camera device calibration method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN112529968A (zh) 2021-03-19
US20220366606A1 (en) 2022-11-17

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21908523

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.11.2023)