WO2020181506A1 - Image processing method, device and system - Google Patents

Image processing method, device and system

Info

Publication number
WO2020181506A1
WO2020181506A1 (application PCT/CN2019/077892)
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature point
point set
subset
feature
Prior art date
Application number
PCT/CN2019/077892
Other languages
English (en)
French (fr)
Inventor
杨志华
梁家斌
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2019/077892
Priority to CN201980004931.XA (CN111213159A)
Publication of WO2020181506A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Definitions

  • the present invention relates to the field of computer technology, and in particular to an image processing method, device and system.
  • Image matching includes the process of identifying points with the same name between two or more images.
  • the point with the same name refers to the feature point corresponding to the same point in the three-dimensional space.
  • When obtaining the points with the same name of two or more images, the existing matching process matches each feature point in one image against all the feature points in the other images; this method is computationally complex and slow. Based on this, how to improve matching speed while ensuring matching accuracy is a technical problem that urgently needs to be solved.
  • the embodiments of the present invention provide an image processing method, device, and system, which can effectively improve the matching speed while ensuring the matching accuracy.
  • the first aspect of the embodiments of the present invention is to provide an image processing method, including:
  • performing feature extraction on a first image to obtain a first feature point set, and performing feature extraction on a second image to obtain a second feature point set;
  • dividing the first feature point set and the second feature point set into multiple subsets according to positioning information and posture information of an image capturing device when the first image and the second image are captured; and
  • matching the feature points contained in a first subset of the first feature point set with the feature points contained in a second subset of the second feature point set to obtain an image matching result, where the first subset is any subset of the first feature point set, and the second subset includes a target subset in the second feature point set corresponding to the first subset.
  • the second aspect of the embodiments of the present invention is to provide an image processing device, including a memory and a processor;
  • the memory is used to store program codes
  • the processor calls the program code, and when the program code is executed, is used to perform the following operations:
  • performing feature extraction on a first image to obtain a first feature point set, and performing feature extraction on a second image to obtain a second feature point set;
  • dividing the first feature point set and the second feature point set into multiple subsets according to positioning information and posture information of an image capturing device when the first image and the second image are captured; and
  • matching the feature points contained in a first subset of the first feature point set with the feature points contained in a second subset of the second feature point set to obtain an image matching result, where the first subset is any subset of the first feature point set, and the second subset includes a target subset in the second feature point set corresponding to the first subset.
  • the third aspect of the embodiments of the present invention is to provide an image processing system, including:
  • the movable platform is used to capture multiple images through the image capturing device while the movable platform moves, and to send the captured images to the image processing device, the multiple images including a first image and a second image.
  • In the embodiments of the present invention, the image processing device can divide the first feature point set and the second feature point set into multiple subsets according to the positioning information and posture information of the image capturing device when the first image and the second image are captured, which ensures matching accuracy.
  • In addition, the image processing device matches the feature points contained in the first subset of the first feature point set with the feature points contained in the second subset of the second feature point set, which reduces the number of feature point comparisons and effectively improves matching speed.
  • FIG. 1 is a schematic diagram of imaging provided by an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of an image provided by an embodiment of the present invention;
  • FIG. 3 is a schematic flowchart of an image processing method provided by an embodiment of the present invention;
  • FIG. 4A is a schematic diagram of an arrangement of subsets provided by an embodiment of the present invention;
  • FIG. 4B is a schematic diagram of another arrangement of subsets provided by an embodiment of the present invention;
  • FIG. 4C is a schematic diagram of another arrangement of subsets provided by an embodiment of the present invention;
  • FIG. 5 is a schematic flowchart of a method for determining a reference polar plane provided by an embodiment of the present invention;
  • FIG. 6A is a schematic diagram of a reference polar plane provided by an embodiment of the present invention;
  • FIG. 6B is a schematic diagram of another reference polar plane provided by an embodiment of the present invention;
  • FIG. 7 is a schematic flowchart of a method for dividing subsets provided by an embodiment of the present invention;
  • FIG. 8 is a schematic diagram of another image provided by an embodiment of the present invention;
  • FIG. 9 is a schematic frame diagram of an image processing device provided by an embodiment of the present invention.
  • In image processing, feature points refer to pixels where the gray value of the image changes sharply, or pixels with large curvature on image edges (i.e., the intersection of two edges).
  • Feature points can reflect the essential characteristics of the image, can identify the target object in the image, and the image matching can be completed through the matching of feature points.
  • the feature extraction can be performed on the image through a preset feature point detection algorithm to obtain a feature point set.
  • The preset feature point detection algorithm may include, but is not limited to, algorithms such as Harris, FAST (Features from Accelerated Segment Test), DoG (Difference of Gaussians), or SURF (Speeded-Up Robust Features).
  • Feature descriptors refer to local descriptions of feature points.
  • For example, feature descriptors may include DAISY descriptors, Scale-Invariant Feature Transform (SIFT) descriptors, SURF descriptors, or ORB descriptors.
  • the matching of feature points can be completed through the calculation of feature descriptors.
  • For example, each feature descriptor is treated as a high-dimensional vector, the distance between two such vectors is computed, and the matching result between the feature points corresponding to the descriptors is obtained based on that distance.
  • the distance can be Euclidean distance or Hamming distance.
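As an illustration of the detection and descriptor-matching steps above, the following is a minimal sketch using OpenCV (an assumption; the patent does not prescribe a library), detecting ORB feature points and matching their binary descriptors by Hamming distance; the file names are placeholders:

```python
import cv2

# Load the two images captured from different viewpoints.
img1 = cv2.imread("first.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("second.jpg", cv2.IMREAD_GRAYSCALE)

# Detect feature points and compute their descriptors (ORB here;
# Harris/FAST/DoG/SURF detectors could be substituted).
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors by Hamming distance (Euclidean distance would be
# used for float descriptors such as SIFT or SURF).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
```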
  • the image capturing device can capture images of the object X from different angles to obtain the first image and the second image.
  • When the image capturing device captures an image, the position in the world coordinate system of the center of the lens of the image capturing device is called the optical center.
  • For example, when the image capturing device captures the first image, the position of the optical center in the world coordinate system is point C (i.e., the first optical center); when the image capturing device captures the second image, the position of the optical center in the world coordinate system is point C′ (i.e., the second optical center).
  • The straight line formed by the first optical center C and the second optical center C′ is the baseline.
  • The plane formed by any pixel of either image and the baseline is a polar plane.
  • Illustratively, the image capturing device may be a camera, a camera module, or the like.
  • the coordinate system where (x, y, f) is located is the image space coordinate system
  • the xy plane in the image space coordinate system is parallel to the image plane
  • the z axis is the camera main axis
  • the origin O is the projection center (i.e., the optical center).
  • the intersection point between the main axis of the camera and the image plane is called the principal point
  • the distance between point O and point O1 is the focal length f.
  • the embodiment of the present invention provides an image processing system.
  • the image processing system includes a movable platform and an image processing device.
  • The movable platform is configured with a positioning module and a pan/tilt, and an image capturing device is mounted on the pan/tilt.
  • the image capture device can capture images of the object from different angles to obtain the first image and the second image. Then, the movable platform can send the first image and the second image to the image processing device.
  • the image processing device may perform feature extraction on the first image to obtain a first feature point set, and perform feature extraction on the second image to obtain a second feature point set.
  • The movable platform can also obtain, through the positioning module, the positioning information of the image capturing device when the first image and the second image are captured, obtain the posture information of the image capturing device when the first image and the second image are captured, and send the positioning information and the posture information to the image processing device.
  • the image processing device can divide the first feature point set and the second feature point set into multiple subsets respectively according to the positioning information and posture information of the image capturing device when the first image and the second image are collected, and the first The feature points included in the first subset of the feature point set and the feature points included in the second subset of the second feature point set are matched to obtain an image matching result.
  • the image processing device is provided separately from the movable platform, or the image processing device is provided in the movable platform.
  • the movable platform may be an unmanned aerial vehicle, an unmanned vehicle, or a mobile robot.
  • the positioning module may include, but is not limited to, a global positioning system (GPS) positioning device, a Beidou positioning device, or a real-time kinematic (RTK) carrier phase differential positioning device (RTK positioning device for short).
  • RTK carrier phase differential technology is a differential method that processes the carrier phase observations of two measuring stations in real time: the carrier phase collected by the reference station is sent to the user receiver, where the difference is computed and the coordinates are solved.
  • RTK carrier phase differential technology uses a dynamic, real-time carrier phase differential method and can obtain centimeter-level positioning accuracy in the field in real time, without post-processing.
  • Using an RTK positioning device to detect the positioning information of the movable platform can therefore effectively improve the accuracy of image matching.
  • Optionally, the movable platform controls the posture of the image capturing device by controlling the pan/tilt, and the posture information includes pan/tilt angle information.
  • the movable platform controls the posture of the image capturing device by controlling its posture, and the posture information includes posture information of the movable platform.
  • the pan/tilt angle information may include the attitude information of the pan/tilt when the image capturing device collects the first image and the second image, such as the roll, pitch, or yaw angle of the pan/tilt.
  • a connection point in the first image and the second image can be identified based on the image matching result, and then image stitching is performed based on the connection point.
  • the connection point in the first image and the second image can be identified, and then a two-dimensional map can be generated based on the connection point.
  • Alternatively, the points with the same name in the first image and the second image can be identified, and target tracking or relocalization can then be performed based on those points in an unmanned driving system.
  • the points with the same name in the first image and the second image can be identified, and then the three-dimensional reconstruction is performed based on the points with the same name, etc., which is not specifically limited by the embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
  • the method includes:
  • S301 Perform feature extraction on the first image to obtain a first feature point set, and perform feature extraction on the second image to obtain a second feature point set.
  • After the image processing apparatus obtains the first image, it may use a preset feature point detection algorithm to perform feature extraction on the first image to obtain a first feature point set.
  • the first feature point set may include at least two feature points.
  • After the image processing device acquires the second image, it may use a preset feature point detection algorithm to perform feature extraction on the second image to obtain a second feature point set.
  • the second feature point set may include at least two feature points.
  • an image capturing device is mounted on the movable platform. After the first image and the second image are collected by the image capturing device, the movable platform can send the first image and the second image to the image processing device.
  • the image processing device may perform feature extraction on the first image to obtain a first feature point set, and perform feature extraction on the second image to obtain a second feature point set.
  • S302 Divide the first feature point set and the second feature point set into multiple subsets according to the positioning information and the posture information of the image capturing device when the first image and the second image are collected.
  • the image processing device may divide the first feature point set into multiple subsets and the second feature point set into multiple subsets according to the positioning information and posture information of the image capturing device when the first image and the second image are collected.
  • It should be noted that the arrangement of the subsets is not limited by the embodiments of the present application. Taking FIG. 4A as an example, the divided subsets may be arranged horizontally; taking FIG. 4B as an example, vertically; taking FIG. 4C as an example, divergently.
  • In one implementation, the posture information of the image capturing device when the first image and the second image are captured includes pan/tilt angle information.
  • In this case, the image processing device may divide the first feature point set and the second feature point set into multiple subsets as follows: the image processing device determines a reference polar plane according to the positioning information and pan/tilt angle information of the image capturing device when the first image and the second image were captured, and then divides the first feature point set and the second feature point set into multiple subsets according to the angle between each polar plane and the reference polar plane.
  • In one implementation, before dividing the first feature point set and the second feature point set into multiple subsets according to the positioning information and posture information of the image capturing device when the first image and the second image are captured, the image processing device can obtain the distortion parameters and internal parameters of the image capturing device and perform distortion correction on the feature points of the first image and the second image according to the distortion parameters and the internal parameters.
  • In specific implementation, the image processing device may first obtain the coordinates of each pixel contained in the first image and perform distortion correction on the feature points of the first image according to the coordinates of each pixel and the distortion parameters and internal parameters of the image capturing device.
  • The image processing device may likewise first obtain the coordinates of each pixel contained in the second image and perform distortion correction on the feature points of the second image according to the coordinates of each pixel and the distortion parameters and internal parameters of the image capturing device.
  • the distortion parameter may include at least one of a radial distortion parameter and a tangential distortion parameter
  • the internal parameter may include at least one of principal point coordinates and focal length.
  • In this embodiment, the captured first and second images are distorted, and distorted pixel coordinates may deviate from their original positions by as much as 300 pixels. To obtain an ideal correction result, the embodiment of the present invention may calibrate the lens of the image capturing device before it leaves the factory to obtain the distortion parameters and internal parameters, and use these parameters to perform distortion correction on the feature points, thereby removing the influence of distortion on image matching.
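A minimal sketch of this correction step, assuming OpenCV's distortion model (radial k1, k2 and tangential p1, p2) and a pre-calibrated camera matrix; all parameter values shown are placeholders:

```python
import numpy as np
import cv2

# Internal parameters from factory calibration: focal lengths and
# principal point (placeholder values).
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
# Radial (k1, k2) and tangential (p1, p2) distortion parameters.
dist = np.array([-0.12, 0.05, 0.001, -0.002])

# Feature point pixel coordinates, shape (N, 1, 2).
pts = np.array([[[100.0, 200.0]], [[640.0, 360.0]]])

# Undistort the feature points; passing P=K maps the result back to
# corrected pixel coordinates instead of normalized coordinates.
undistorted = cv2.undistortPoints(pts, K, dist, P=K)
```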
  • After the image processing device divides the first feature point set and the second feature point set into multiple subsets, it can match the feature points contained in the first subset of the first feature point set with the feature points contained in the second subset of the second feature point set to obtain the image matching result.
  • For example, the image processing device divides the first feature point set into ten subsets and divides the second feature point set into ten subsets, arranged as shown in FIG. 4C. Suppose subset 0 of the first feature point set is adjacent to subsets 1 and 9 of the first feature point set, and subset 0 of the second feature point set is adjacent to subsets 1 and 9 of the second feature point set.
  • If the first subset is subset 0 of the first feature point set, the target subset corresponding to the first subset in the second feature point set may be subset 0 of the second feature point set; that is, the second subset may be subset 0 of the second feature point set, and the image processing device may match the feature points contained in subset 0 of the first feature point set with the feature points contained in subset 0 of the second feature point set.
  • In one implementation, the second subset further includes the subsets adjacent to the target subset.
  • For example, suppose the first subset is subset 0 of the first feature point set, the target subset is subset 0 of the second feature point set, and subset 0 of the second feature point set is adjacent to subsets 1 and 9 of the second feature point set.
  • Then the second subset may include subsets 0, 1, and 9 of the second feature point set, and the image processing device may match the feature points contained in subset 0 of the first feature point set against the feature points contained in subset 0, subset 1, and subset 9 of the second feature point set, respectively.
  • In this embodiment, because the positioning information and posture information contain some error, points with the same name are not necessarily on the same polar plane; they may lie in the first subset of the first feature point set and, in the second feature point set, in a subset adjacent to the target subset. Based on this, matching the feature points contained in the first subset of the first feature point set against the feature points contained in the target subset of the second feature point set and the subsets adjacent to the target subset improves the accuracy of image matching.
  • In one implementation, the image processing device may match the feature points contained in the first subset of the first feature point set with the feature points contained in the second subset of the second feature point set by computing with the feature descriptors of the feature points contained in the first subset and the feature descriptors of the feature points contained in the second subset, as sketched below.
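The following sketch illustrates this subset-restricted matching, assuming the feature points have already been assigned to numbered subsets arranged cyclically as in FIG. 4C (so subset 0 is adjacent to subsets 1 and 9); descriptors1 and descriptors2 are hypothetical dicts mapping a subset index to that subset's descriptor matrix:

```python
import cv2

def match_subsets(descriptors1, descriptors2, num_subsets=10):
    """Match each subset of image 1 only against the corresponding
    target subset of image 2 and that target's two neighbours."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    all_matches = []
    for i in range(num_subsets):
        if i not in descriptors1:
            continue
        # Target subset plus its adjacent subsets (cyclic arrangement).
        for j in ((i - 1) % num_subsets, i, (i + 1) % num_subsets):
            if j in descriptors2:
                all_matches.extend(matcher.match(descriptors1[i],
                                                 descriptors2[j]))
    return all_matches
```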
  • In the embodiment of the present invention, the image processing device can divide the first feature point set and the second feature point set into multiple subsets according to the positioning information and posture information of the image capturing device when the first image and the second image are captured, which ensures matching accuracy.
  • In addition, the image processing device matches the feature points contained in the first subset of the first feature point set with the feature points contained in the second subset of the second feature point set, which reduces the number of feature point comparisons and effectively improves matching speed.
  • FIG. 5 is a schematic flowchart of a method for determining a reference polar plane according to an embodiment of the present invention. The method includes:
  • S501 Determine, according to the positioning information, the first coordinate in the world coordinate system of the first optical center of the image capturing device when capturing the first image, and the second coordinate in the world coordinate system of the second optical center of the image capturing device when capturing the second image.
  • In one implementation, the positioning module and the image capturing device may be integrated at the same position on the movable platform.
  • When the image capturing device captures the first image, the movable platform can obtain, through the positioning module, the position in the world coordinate system of the lens center of the image capturing device and send it to the image processing device, which takes this position as the first coordinate of the first optical center in the world coordinate system.
  • Similarly, when the image capturing device captures the second image, the movable platform can obtain the position of the lens center in the world coordinate system through the positioning module and send it to the image processing device, which takes this position as the second coordinate of the second optical center in the world coordinate system.
  • Because the positioning module and the image capturing device are integrated at the same position, the position detected by the positioning module is exactly the position of the lens center in the world coordinate system, which improves the accuracy of the first coordinate and the second coordinate.
  • In another implementation, the positioning module and the image capturing device may be integrated at different positions on the movable platform.
  • When the image capturing device captures the first image, the movable platform can obtain the position of the positioning module in the world coordinate system through the positioning module and send this position, together with the position of the image capturing device relative to the positioning module, to the image processing device.
  • The image processing device obtains the first coordinate of the first optical center in the world coordinate system from this position and the relative position of the image capturing device.
  • Similarly, when the image capturing device captures the second image, the movable platform obtains the position of the positioning module in the world coordinate system and sends it to the image processing device, which obtains the second coordinate of the second optical center in the world coordinate system from this position and the position of the image capturing device relative to the positioning module.
  • In this case, the position detected by the positioning module is the position of the positioning module itself in the world coordinate system; deriving the first coordinate and the second coordinate from this position and the relative position of the image capturing device improves their accuracy.
  • S502 Determine the pose of the first image and the second image in the world coordinate system according to the first coordinate, the second coordinate, and the pan/tilt angle information.
  • the image processing device can determine the pose of the first image in the world coordinate system according to the first coordinates and the pan/tilt angle information when the image capturing device is collecting the first image.
  • the image processing device can determine the pose of the second image in the world coordinate system according to the second coordinates and the pan/tilt angle information when the image capturing device is collecting the second image.
  • Taking FIG. 2 as an example, the image processing device can establish an image space coordinate system according to the first coordinate and the pan/tilt angle information when the image capturing device captures the first image, where the first coordinate is the origin O and the x-axis, y-axis, and z-axis of the image space coordinate system are obtained from the pan/tilt angle information. Since the xy plane of the image space coordinate system is parallel to the image plane, the image processing device can obtain the pose of the first image in the world coordinate system from the image space coordinate system and the focal length.
  • Similarly, the image processing device can establish an image space coordinate system according to the second coordinate and the pan/tilt angle information when the image capturing device captures the second image, where the second coordinate is the origin O and the axes are obtained from the pan/tilt angle information, and can obtain the pose of the second image in the world coordinate system from the image space coordinate system and the focal length.
  • S503 Determine a reference polar plane according to the poses of the first image and the second image in the world coordinate system.
  • In one implementation, when the angle between the baseline and the straight line formed by the first optical center and the principal point of the first image is greater than a preset threshold, the image processing device may use the plane formed by that straight line and the baseline as the reference polar plane.
  • Taking the schematic diagram of the reference polar plane shown in FIG. 6A as an example, the image processing device can determine the position of the first optical center in the world coordinate system and obtain, from the pose of the first image in the world coordinate system, the position of the principal point of the first image in the world coordinate system; it then determines the straight line formed by the first optical center and the principal point of the first image from these two positions.
  • The image processing device can also determine the baseline from the position of the first optical center and the position of the second optical center in the world coordinate system.
  • The image processing device then obtains the angle between the baseline and the straight line formed by the first optical center and the principal point of the first image, and compares this angle with the preset threshold; when the angle is greater than the preset threshold, the plane formed by the straight line and the baseline is used as the reference polar plane.
  • In another implementation, when the angle between the straight line and the baseline is less than or equal to the preset threshold, the image processing device may use the plane determined by the baseline and the row direction vector of the first image as the reference polar plane.
  • Taking FIG. 2 as an example, the row direction vector of the first image is parallel to the x-axis of the image space coordinate system of the first image.
  • Taking the schematic diagram of the reference polar plane shown in FIG. 6B as an example, the image processing device determines the straight line formed by the first optical center and the principal point of the first image, and the baseline, in the same way as above; when the angle between the straight line and the baseline is less than or equal to the preset threshold, the plane determined by the baseline and the row direction vector of the first image is used as the reference polar plane.
  • In this embodiment of the present invention, the image processing device determines, according to the positioning information, the first coordinate in the world coordinate system of the first optical center when the first image is captured and the second coordinate in the world coordinate system of the second optical center when the second image is captured; determines the poses of the first image and the second image in the world coordinate system according to the first coordinate, the second coordinate, and the pan/tilt angle information; and determines the reference polar plane according to those poses, which improves the accuracy of the reference polar plane.
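A geometric sketch of the S503 selection rule, assuming all inputs are already expressed in the world coordinate system (c1 and c2 are the optical centers, p1 the principal point of the first image, and row_dir its row direction vector); the function name and the 5-degree threshold are illustrative placeholders:

```python
import numpy as np

def reference_plane_normal(c1, c2, p1, row_dir, threshold_deg=5.0):
    """Return the normal of the reference polar plane, which always
    contains the baseline, chosen according to S503."""
    baseline = c2 - c1                 # baseline direction
    principal_ray = p1 - c1            # first optical center to principal point
    cosang = abs(np.dot(baseline, principal_ray)) / (
        np.linalg.norm(baseline) * np.linalg.norm(principal_ray))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    if angle > threshold_deg:
        # Plane spanned by the baseline and the principal ray.
        return np.cross(baseline, principal_ray)
    # Otherwise: plane spanned by the baseline and the row direction vector.
    return np.cross(baseline, row_dir)
```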
  • FIG. 7 is a schematic flowchart of a method for dividing a subset according to an embodiment of the present invention. The method includes:
  • S701 Determine an included angle interval according to the angle between each polar plane and the reference polar plane.
  • The minimum angle of the included angle interval is less than or equal to the larger of the first included angle and the second included angle, and the maximum angle of the interval is greater than or equal to the smaller of the third included angle and the fourth included angle.
  • The first included angle is the minimum angle between the reference polar plane and the polar planes corresponding to the feature points in the first feature point set; the second included angle is the minimum angle between the reference polar plane and the polar planes corresponding to the feature points in the second feature point set; the third included angle is the maximum angle between the reference polar plane and the polar planes corresponding to the feature points in the first feature point set; and the fourth included angle is the maximum angle between the reference polar plane and the polar planes corresponding to the feature points in the second feature point set.
  • The polar plane corresponding to a feature point in the first feature point set is the plane formed by that feature point of the first image and the baseline; the polar plane corresponding to a feature point in the second feature point set is the plane formed by that feature point of the second image and the baseline.
  • In specific implementation, the image processing device can obtain each feature point of the first image; the plane formed by each feature point of the first image and the baseline is a polar plane. The image processing device can obtain the angle between each such polar plane and the reference polar plane, and determine the first included angle min1 and the third included angle max1 among the obtained angles.
  • Similarly, the image processing device can obtain each feature point of the second image and the angle between each corresponding polar plane and the reference polar plane, and determine the second included angle min2 and the fourth included angle max2 among the obtained angles.
  • Further, the image processing device can determine the maximum max(min1, min2) of the first and second included angles and the minimum min(max1, max2) of the third and fourth included angles, and may use [max(min1, min2), min(max1, max2)] as the included angle interval.
  • In one implementation, to allow for error in the angles, the image processing device may subtract a first preset value from max(min1, min2) to obtain the minimum of the included angle interval, and add a second preset value to min(max1, max2) to obtain the maximum of the included angle interval.
  • The first preset value and the second preset value may be the same or different; this is not specifically limited by the embodiment of the present invention. For example, if the first preset value is k1 and the second preset value is k2, the image processing device may use [max(min1, min2) - k1, min(max1, max2) + k2] as the included angle interval.
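A small sketch of this interval computation, where angles1 and angles2 are hypothetical lists holding, for each feature point of the two images, the angle between its polar plane and the reference polar plane, and k1 and k2 are the preset margins:

```python
def included_angle_interval(angles1, angles2, k1=1.0, k2=1.0):
    """Return [max(min1, min2) - k1, min(max1, max2) + k2]."""
    min1, max1 = min(angles1), max(angles1)   # first feature point set
    min2, max2 = min(angles2), max(angles2)   # second feature point set
    return max(min1, min2) - k1, min(max1, max2) + k2
```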
  • the coordinates of each feature point of the image in the world coordinate system can be obtained in the following way:
  • (1) After the image processing device acquires the image, it can obtain the coordinates of each feature point of the image in the pixel coordinate system, where the pixel coordinate system coincides with the image plane.
  • Taking the image schematic diagram shown in FIG. 8 as an example, the pixel coordinate system includes a u-axis and a v-axis; the origin O0 is the top-left vertex of the image plane, and the coordinates of the principal point O1 in the pixel coordinate system are (u0, v0). The pixel coordinate system is two-dimensional, its unit is the pixel, and the coordinates of each feature point of the image in the pixel coordinate system indicate the position of that feature point in the image.
  • (2) The image processing device can convert the coordinates of each feature point of the image from the pixel coordinate system into coordinates in the image coordinate system.
  • The image coordinate system includes an x-axis and a y-axis, and its origin O1 is the principal point of the image.
  • The image processing device can obtain the coordinates of each feature point of the image in the image coordinate system through the following formula:

        x = (u - u0) · dx,  y = (v - v0) · dy

  • where u is the abscissa and v the ordinate of any feature point of the image in the pixel coordinate system, dx is the physical size of a pixel along the x-axis, dy is the physical size of a pixel along the y-axis, u0 is the abscissa and v0 the ordinate of the principal point in the pixel coordinate system, and x is the abscissa and y the ordinate of the feature point in the image coordinate system.
  • (3) The image processing device can convert the coordinates of each feature point of the image from the image coordinate system into coordinates in the image space coordinate system.
  • The image space coordinate system includes an x-axis, a y-axis, and a z-axis. Taking FIG. 2 as an example, the xy plane of the image space coordinate system is parallel to the image plane, the z-axis is the camera principal axis, and the origin O is the projection center (i.e., the optical center); the image space coordinate system is three-dimensional, and its unit is consistent with that of the world coordinate system.
  • The coordinate system in which (x, y) lies is the image coordinate system; its xy plane coincides with the image plane, and its origin O1 is the intersection of the camera principal axis and the image plane, also called the principal point of the image. The distance between O and O1 is the focal length f. The image coordinate system is two-dimensional, and its unit is consistent with that of the image space coordinate system.
  • Assuming the coordinates of any feature point of the image in the image coordinate system are (x, y), the coordinates of that feature point in the image space coordinate system are (x, y, f).
  • (4) The image processing device can convert the coordinates of each feature point of the image from the image space coordinate system into coordinates in the world coordinate system.
  • The image processing device can obtain the coordinates of each feature point of the image in the world coordinate system through the following formula:

        (xw, yw, zw)^T = R^(-1) · ((x, y, z)^T - T)

  • where x, y, and z are the coordinates of any feature point of the image on the x-, y-, and z-axes of the image space coordinate system; xw, yw, and zw are the coordinates of that feature point on the x-, y-, and z-axes of the world coordinate system; R is the rotation matrix; and T is the translation transformation matrix.
  • The rotation matrix R(α, β, γ) is obtained from the angles α, β, and γ, which are given by the pan/tilt angle information when the image capturing device captures the image.
  • Taking a three-axis pan/tilt as an example, these angles are the pitch angle, the roll angle, and the pan angle.
  • The translation transformation matrix can be obtained by the following formula:

        T = -R(α, β, γ) · (x, y, z)^T

  • where (x, y, z) are the coordinates in the world coordinate system of the optical center of the image capturing device when the image is captured.
  • the above-mentioned image may be the first image or the second image.
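A sketch of the conversion chain (1) through (4), following the formulas above; the Z-Y-X composition of the rotation matrix is an assumed convention, since the explicit matrix is not reproduced on this page, and the function names are illustrative:

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Rotation from the pan/tilt angles; the Z-Y-X composition below
    (pitch alpha, roll beta, pan gamma) is an assumed convention."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return rz @ ry @ rx

def pixel_to_world(u, v, u0, v0, dx, dy, f, R, C):
    """Chain (1)-(4): pixel -> image -> image space -> world.
    C is the optical center in world coordinates."""
    x = (u - u0) * dx                      # step (2)
    y = (v - v0) * dy
    p_img_space = np.array([x, y, f])      # step (3): (x, y, f)
    T = -R @ C                             # translation transformation matrix
    return np.linalg.inv(R) @ (p_img_space - T)   # step (4)
```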
  • S702 Divide the first feature point set and the second feature point set into multiple subsets according to the included angle interval.
  • the image processing apparatus may divide the first feature point set into multiple subsets and divide the second feature point set into multiple subsets according to the included angle interval.
  • The number of subsets obtained by dividing the first feature point set or the second feature point set can be adjusted according to the accuracy of the RTK positioning or the pan/tilt angle: the higher that accuracy, the larger the number of subsets can be set.
  • In one implementation, the image processing device may divide the included angle interval into multiple unit intervals and determine the target polar planes whose angles fall within the same unit interval; the feature points of the first feature point set contained in those target polar planes are divided into a first subset, and the feature points of the second feature point set contained in those target polar planes are divided into the target subset.
  • The target subset and the subsets adjacent to the target subset can be used as the second subset.
  • For example, assume the included angle interval is [-30°, 60°] and the number of subsets is 3. The image processing device can determine the first polar planes whose angle with the reference polar plane lies within [-30°, 0°), divide the feature points of the first feature point set contained in those planes into subset 1, and divide the feature points of the second feature point set contained in those planes into subset 2.
  • The image processing device can also determine the second polar planes whose angle with the reference polar plane lies within [0°, 30°), divide the feature points of the first feature point set contained in them into subset 3, and divide the feature points of the second feature point set contained in them into subset 4.
  • The image processing device can further determine the third polar planes whose angle with the reference polar plane lies within [30°, 60°], divide the feature points of the first feature point set contained in them into subset 5, and divide the feature points of the second feature point set contained in them into subset 6. On this basis, the first feature point set is divided into subsets 1, 3, and 5, and the second feature point set into subsets 2, 4, and 6. If the first subset is subset 1, the target subset may be subset 2; if the first subset is subset 3, the target subset may be subset 4; if the first subset is subset 5, the target subset may be subset 6. A sketch of this binning procedure follows.
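A minimal sketch of the division step, assuming angles holds the angle (in degrees) between each feature point's polar plane and the reference polar plane, and lo and hi are the interval endpoints computed in S701; the function name is illustrative:

```python
import numpy as np

def divide_into_subsets(angles, lo, hi, num_subsets):
    """Assign each feature point to a subset by binning the angle of
    its polar plane within the interval [lo, hi]."""
    width = (hi - lo) / num_subsets            # size of one unit interval
    idx = ((np.asarray(angles) - lo) // width).astype(int)
    return np.clip(idx, 0, num_subsets - 1)

# Example: interval [-30, 60] split into 3 unit intervals.
subsets = divide_into_subsets([-25.0, 10.0, 59.0], -30.0, 60.0, 3)
# -> array([0, 1, 2])
```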
  • Illustratively, when the number of subsets obtained by dividing the first feature point set and the number obtained by dividing the second feature point set are both set to 100, matching approximately 20,000 feature points between two images with the image processing method of the embodiment of the present invention takes less than 3.5 ms, while finding more matches than the existing matching method; the matching results all satisfy the epipolar constraint, and the mismatch rate is lower.
  • In the embodiment of the present invention, the image processing device determines the included angle interval according to the angle between each polar plane and the reference polar plane and divides the first feature point set and the second feature point set into multiple subsets according to that interval, which improves the accuracy of subset division and makes it more likely that a feature point in the first subset and a feature point in the second subset are points with the same name.
  • FIG. 9 is a schematic frame diagram of the image processing apparatus provided by an embodiment of the present invention. As shown in FIG. 9, the image processing apparatus includes a memory 901 and a processor 902; the memory is used to store program code.
  • the processor 902 calls the program code, and when the program code is executed, is configured to perform the following operations:
  • performing feature extraction on a first image to obtain a first feature point set, and performing feature extraction on a second image to obtain a second feature point set;
  • dividing the first feature point set and the second feature point set into multiple subsets according to positioning information and posture information of an image capturing device when the first image and the second image are captured; and
  • matching the feature points contained in a first subset of the first feature point set with the feature points contained in a second subset of the second feature point set to obtain an image matching result, where the first subset is any subset of the first feature point set, and the second subset includes a target subset in the second feature point set corresponding to the first subset.
  • In one implementation, the posture information includes pan/tilt angle information.
  • When dividing the first feature point set and the second feature point set into multiple subsets according to the positioning information and posture information of the image capturing device when the first image and the second image are captured, the processor 902 determines a reference polar plane according to the positioning information and the pan/tilt angle information, and divides the first feature point set and the second feature point set into multiple subsets according to the angle between each polar plane and the reference polar plane.
  • When determining the reference polar plane according to the positioning information and pan/tilt angle information when the image capturing device captures the first image and the second image, the processor 902 performs the following operations:
  • determining, according to the positioning information, the first coordinate in the world coordinate system of the first optical center when the image capturing device captures the first image, and the second coordinate in the world coordinate system of the second optical center when it captures the second image;
  • determining the poses of the first image and the second image in the world coordinate system according to the first coordinate, the second coordinate, and the pan/tilt angle information; and
  • determining the reference polar plane according to the poses of the first image and the second image in the world coordinate system.
  • When the processor 902 determines the reference polar plane according to the poses of the first image and the second image in the world coordinate system, the following operations are performed:
  • when the angle between the baseline and the straight line formed by the first optical center and the principal point of the first image is greater than a preset threshold, using the plane formed by the straight line and the baseline as the reference polar plane; and
  • when the angle is less than or equal to the preset threshold, determining the reference polar plane according to the baseline and the row direction vector of the first image;
  • where the baseline is the straight line formed by the first optical center and the second optical center.
  • When dividing the first feature point set and the second feature point set into multiple subsets according to the angle between each polar plane and the reference polar plane, the processor 902 performs the following operations:
  • determining an included angle interval according to the angle between each polar plane and the reference polar plane; and
  • dividing the first feature point set and the second feature point set into multiple subsets according to the included angle interval.
  • The minimum angle of the included angle interval is less than or equal to the larger of the first included angle and the second included angle, and the maximum angle of the interval is greater than or equal to the smaller of the third included angle and the fourth included angle.
  • The first included angle is the minimum angle between the reference polar plane and the polar planes corresponding to the feature points in the first feature point set;
  • the second included angle is the minimum angle between the reference polar plane and the polar planes corresponding to the feature points in the second feature point set;
  • the third included angle is the maximum angle between the reference polar plane and the polar planes corresponding to the feature points in the first feature point set; and
  • the fourth included angle is the maximum angle between the reference polar plane and the polar planes corresponding to the feature points in the second feature point set.
  • When dividing the sets according to the included angle interval, the processor 902 divides the included angle interval into multiple unit intervals, determines the target polar planes corresponding to the angles contained in the same unit interval, divides the feature points of the first feature point set contained in the target polar planes into the first subset, and divides the feature points of the second feature point set contained in the target polar planes into the target subset.
  • the second subset further includes a subset adjacent to the target subset.
  • In one implementation, before dividing the first feature point set and the second feature point set into multiple subsets according to the positioning information and posture information of the image capturing device when the first image and the second image are captured, the processor 902 also performs the following operations:
  • obtaining the distortion parameters and internal parameters of the image capturing device, and performing distortion correction on the feature points of the first image and the second image according to the distortion parameters and the internal parameters.
  • When matching the feature points contained in the first subset of the first feature point set with the feature points contained in the second subset of the second feature point set, the processor 902 performs the following operation:
  • computing matches between the feature descriptors of the feature points contained in the first subset and the feature descriptors of the feature points contained in the second subset.
  • In one implementation, the processor 902 further performs the following operation:
  • a plane formed by any pixel of the first image or the second image and a baseline is determined as a polar plane.
  • the image processing apparatus provided in this embodiment can execute the methods shown in FIG. 3, FIG. 5, and FIG. 7 provided in the foregoing embodiment, and the execution manner and beneficial effects are similar, and details are not described herein again.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present invention discloses an image processing method, device, and system. The method includes: performing feature extraction on a first image to obtain a first feature point set, and performing feature extraction on a second image to obtain a second feature point set; dividing the first feature point set and the second feature point set into multiple subsets according to positioning information and posture information of an image capturing device when the first image and the second image are captured; and matching the feature points contained in a first subset of the first feature point set with the feature points contained in a second subset of the second feature point set to obtain an image matching result, where the first subset is any subset of the first feature point set, and the second subset includes a target subset in the second feature point set corresponding to the first subset. Embodiments of the present invention can effectively improve matching speed while ensuring matching accuracy.

Description

Image processing method, device, and system
Technical Field
The present invention relates to the field of computer technology, and in particular to an image processing method, device, and system.
Background
Image matching includes the process of identifying points with the same name between two or more images. A point with the same name is a feature point corresponding to the same point in three-dimensional space. When obtaining the points with the same name of two or more images, the existing matching process matches each feature point of one image against all the feature points of the other images. This method is computationally complex and slow. Based on this, how to improve matching speed while ensuring matching accuracy is a technical problem that urgently needs to be solved.
Summary of the Invention
In view of this, embodiments of the present invention provide an image processing method, device, and system that can effectively improve matching speed while ensuring matching accuracy.
A first aspect of the embodiments of the present invention provides an image processing method, including:
performing feature extraction on a first image to obtain a first feature point set, and performing feature extraction on a second image to obtain a second feature point set;
dividing the first feature point set and the second feature point set into multiple subsets according to positioning information and posture information of an image capturing device when the first image and the second image are captured; and
matching the feature points contained in a first subset of the first feature point set with the feature points contained in a second subset of the second feature point set to obtain an image matching result, where the first subset is any subset of the first feature point set, and the second subset includes a target subset in the second feature point set corresponding to the first subset.
A second aspect of the embodiments of the present invention provides an image processing device, including a memory and a processor;
the memory is used to store program code;
the processor calls the program code and, when the program code is executed, performs the following operations:
performing feature extraction on a first image to obtain a first feature point set, and performing feature extraction on a second image to obtain a second feature point set;
dividing the first feature point set and the second feature point set into multiple subsets according to positioning information and posture information of an image capturing device when the first image and the second image are captured; and
matching the feature points contained in a first subset of the first feature point set with the feature points contained in a second subset of the second feature point set to obtain an image matching result, where the first subset is any subset of the first feature point set, and the second subset includes a target subset in the second feature point set corresponding to the first subset.
A third aspect of the embodiments of the present invention provides an image processing system, including:
a movable platform provided with an image capturing device; and
the image processing device according to the second aspect;
the movable platform is used to capture multiple images through the image capturing device while the movable platform moves and to send the captured images to the image processing device, the multiple images including a first image and a second image.
In the embodiments of the present invention, the image processing device can divide the first feature point set and the second feature point set into multiple subsets according to the positioning information and posture information of the image capturing device when the first image and the second image are captured, which ensures matching accuracy. In addition, the image processing device matches the feature points contained in the first subset of the first feature point set with the feature points contained in the second subset of the second feature point set, which reduces the number of feature point comparisons and effectively improves matching speed.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of imaging provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of an image provided by an embodiment of the present invention;
FIG. 3 is a schematic flowchart of an image processing method provided by an embodiment of the present invention;
FIG. 4A is a schematic diagram of an arrangement of subsets provided by an embodiment of the present invention;
FIG. 4B is a schematic diagram of another arrangement of subsets provided by an embodiment of the present invention;
FIG. 4C is a schematic diagram of another arrangement of subsets provided by an embodiment of the present invention;
FIG. 5 is a schematic flowchart of a method for determining a reference polar plane provided by an embodiment of the present invention;
FIG. 6A is a schematic diagram of a reference polar plane provided by an embodiment of the present invention;
FIG. 6B is a schematic diagram of another reference polar plane provided by an embodiment of the present invention;
FIG. 7 is a schematic flowchart of a method for dividing subsets provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of another image provided by an embodiment of the present invention;
FIG. 9 is a schematic frame diagram of an image processing device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In image processing, feature points are pixels where the gray value of the image changes sharply, or pixels with large curvature on image edges (i.e., the intersection of two edges). Feature points can reflect the essential characteristics of an image and can identify target objects in the image; image matching can be completed through the matching of feature points. Feature extraction can be performed on an image with a preset feature point detection algorithm to obtain a feature point set. The preset feature point detection algorithm may include, but is not limited to, algorithms such as Harris, FAST (Features from Accelerated Segment Test), DoG (Difference of Gaussians), or SURF (Speeded-Up Robust Features).
A feature descriptor is a local description of a feature point; illustratively, feature descriptors may include DAISY descriptors, Scale-Invariant Feature Transform (SIFT) descriptors, SURF descriptors, or ORB descriptors. The matching of feature points can be completed through computation on feature descriptors; for example, each feature descriptor is treated as a high-dimensional vector, the distance between two vectors is computed, and the matching result between the feature points corresponding to the descriptors is obtained based on that distance. The distance may be a Euclidean distance or a Hamming distance, among others.
Taking the imaging schematic diagram shown in FIG. 1 as an example, the image capturing device can capture images of an object X from different angles to obtain a first image and a second image. When the image capturing device captures an image, the position in the world coordinate system of the center of the lens of the image capturing device is called the optical center. For example, when the image capturing device captures the first image, the position of the optical center in the world coordinate system is point C (i.e., the first optical center); when the image capturing device captures the second image, the position of the optical center in the world coordinate system is point C′ (i.e., the second optical center). The straight line formed by the first optical center C and the second optical center C′ is the baseline. The plane formed by any pixel of either image and the baseline is a polar plane. Illustratively, the image capturing device may be a camera, a camera module, or the like.
Taking the image schematic diagram shown in FIG. 2 as an example, the coordinate system in which (x, y, f) lies is the image space coordinate system; the xy plane of the image space coordinate system is parallel to the image plane, the z-axis is the camera principal axis, and the origin O is the projection center (i.e., the optical center). The intersection of the camera principal axis and the image plane is called the principal point, and the distance between point O and point O1 is the focal length f.
An embodiment of the present invention provides an image processing system that includes a movable platform and an image processing device. The movable platform is configured with a positioning module and a pan/tilt, and an image capturing device is mounted on the pan/tilt. The image capturing device can capture images of the photographed object from different angles to obtain a first image and a second image. The movable platform can then send the first image and the second image to the image processing device. The image processing device can perform feature extraction on the first image to obtain a first feature point set, and perform feature extraction on the second image to obtain a second feature point set. The movable platform can also obtain, through the positioning module, the positioning information of the image capturing device when the first image and the second image are captured, obtain the posture information of the image capturing device when the first image and the second image are captured, and send the positioning information and the posture information to the image processing device. The image processing device can thereby divide the first feature point set and the second feature point set into multiple subsets according to the positioning information and posture information of the image capturing device when the first image and the second image are captured, and match the feature points contained in a first subset of the first feature point set with the feature points contained in a second subset of the second feature point set to obtain an image matching result.
Optionally, the image processing device is provided separately from the movable platform, or the image processing device is provided in the movable platform. Optionally, the movable platform may be an unmanned aerial vehicle, an unmanned vehicle, or a mobile robot.
Illustratively, the positioning module may include, but is not limited to, a Global Positioning System (GPS) positioning device, a BeiDou positioning device, or a real-time kinematic (RTK) carrier phase differential positioning device (RTK positioning device for short). RTK carrier phase differential technology is a differential method that processes the carrier phase observations of two measuring stations in real time: the carrier phase collected by the reference station is sent to the user receiver, where the difference is computed and the coordinates are solved. RTK carrier phase differential technology uses a dynamic, real-time carrier phase differential method and can obtain centimeter-level positioning accuracy in the field in real time without post-processing; using an RTK positioning device to detect the positioning information of the movable platform can effectively improve the accuracy of image matching.
Optionally, the movable platform controls the posture of the image capturing device by controlling the pan/tilt, and the posture information includes pan/tilt angle information. Optionally, the movable platform controls the posture of the image capturing device by controlling its own posture, and the posture information includes the posture information of the movable platform.
The pan/tilt angle information may include the posture information of the pan/tilt when the image capturing device captures the first image and the second image, such as the roll angle, pitch angle, or yaw angle of the pan/tilt.
Illustratively, after the image matching result is obtained, connection points in the first image and the second image can be identified based on the image matching result, and image stitching can then be performed based on the connection points. Alternatively, connection points in the first image and the second image can be identified based on the image matching result, and a two-dimensional map can then be generated based on the connection points. Alternatively, points with the same name in the first image and the second image can be identified based on the image matching result, and target tracking or relocalization can then be performed based on those points in an unmanned driving system. Alternatively, points with the same name in the first image and the second image can be identified based on the image matching result, and three-dimensional reconstruction can then be performed based on those points, and so on; this is not specifically limited by the embodiments of the present application.
Please refer to FIG. 3, which is a schematic flowchart of an image processing method proposed by an embodiment of the present invention. The method includes:
S301: Perform feature extraction on the first image to obtain a first feature point set, and perform feature extraction on the second image to obtain a second feature point set.
After the image processing device obtains the first image, it can use a preset feature point detection algorithm to perform feature extraction on the first image to obtain a first feature point set, which may include at least two feature points. After the image processing device obtains the second image, it can use a preset feature point detection algorithm to perform feature extraction on the second image to obtain a second feature point set, which may include at least two feature points.
For example, an image capturing device is mounted on the movable platform; after the movable platform captures the first image and the second image through the image capturing device, it can send the first image and the second image to the image processing device. The image processing device can perform feature extraction on the first image to obtain the first feature point set, and perform feature extraction on the second image to obtain the second feature point set.
S302: Divide the first feature point set and the second feature point set into multiple subsets according to the positioning information and posture information of the image capturing device when the first image and the second image are captured.
The image processing device can divide the first feature point set into multiple subsets, and divide the second feature point set into multiple subsets, according to the positioning information and posture information of the image capturing device when the first image and the second image are captured.
It should be noted that the arrangement of the subsets is not limited by the embodiments of the present application. Taking FIG. 4A as an example, the divided subsets may be arranged horizontally; taking FIG. 4B as an example, vertically; taking FIG. 4C as an example, divergently.
In one implementation, the posture information of the image capturing device when the first image and the second image are captured includes pan/tilt angle information, and the image processing device may divide the first feature point set and the second feature point set into multiple subsets as follows: the image processing device determines a reference polar plane according to the positioning information and pan/tilt angle information of the image capturing device when the first image and the second image are captured, and then divides the first feature point set and the second feature point set into multiple subsets according to the angle between each polar plane and the reference polar plane.
In one implementation, before dividing the first feature point set and the second feature point set into multiple subsets according to the positioning information and posture information of the image capturing device when the first image and the second image are captured, the image processing device can obtain the distortion parameters and internal parameters of the image capturing device and perform distortion correction on the feature points of the first image and the second image according to the distortion parameters and internal parameters.
In specific implementation, the image processing device can first obtain the coordinates of each pixel contained in the first image and perform distortion correction on the feature points of the first image according to the coordinates of each pixel and the distortion parameters and internal parameters of the image capturing device. The image processing device can likewise first obtain the coordinates of each pixel contained in the second image and perform distortion correction on the feature points of the second image according to the coordinates of each pixel and the distortion parameters and internal parameters of the image capturing device. The distortion parameters may include at least one of a radial distortion parameter and a tangential distortion parameter, and the internal parameters may include at least one of the principal point coordinates and the focal length.
In this embodiment, when the pose is absolutely accurate and the image distortion is completely removed, points with the same name lie on the plane determined by the photographed object and the two optical centers; a point with the same name is the intersection of the image plane with the line connecting the photographed object and the optical center. The captured first and second images are distorted, and distorted pixel coordinates may deviate from their original positions by as much as 300 pixels. To obtain an ideal distortion correction result, the embodiment of the present invention may calibrate the lens of the image capturing device before it leaves the factory to obtain the distortion parameters and internal parameters, and use these parameters to perform distortion correction on the feature points, thereby removing the influence of distortion on image matching.
S303: Match the feature points contained in a first subset of the first feature point set with the feature points contained in a second subset of the second feature point set to obtain an image matching result, where the first subset is any subset of the first feature point set and the second subset includes a target subset in the second feature point set corresponding to the first subset.
After dividing the first feature point set and the second feature point set into multiple subsets, the image processing device can match the feature points contained in the first subset of the first feature point set with the feature points contained in the second subset of the second feature point set to obtain an image matching result. For example, the image processing device divides the first feature point set into ten subsets and divides the second feature point set into ten subsets, arranged as shown in FIG. 4C. Suppose subset 0 of the first feature point set is adjacent to subsets 1 and 9 of the first feature point set, and subset 0 of the second feature point set is adjacent to subsets 1 and 9 of the second feature point set. If the first subset is subset 0 of the first feature point set, the target subset corresponding to the first subset in the second feature point set may be subset 0 of the second feature point set; that is, the second subset may be subset 0 of the second feature point set, and the image processing device may match the feature points contained in subset 0 of the first feature point set with the feature points contained in subset 0 of the second feature point set.
In one implementation, the second subset further includes the subsets adjacent to the target subset. For example, if the first subset is subset 0 of the first feature point set, the target subset is subset 0 of the second feature point set, and subset 0 of the second feature point set is adjacent to subsets 1 and 9 of the second feature point set, then the second subset may include subsets 0, 1, and 9 of the second feature point set, and the image processing device may match the feature points contained in subset 0 of the first feature point set against the feature points contained in subsets 0, 1, and 9 of the second feature point set, respectively.
In this embodiment, because the positioning information and posture information contain some error, points with the same name are not necessarily on the same polar plane; they may lie in the first subset of the first feature point set and, in the second feature point set, in a subset adjacent to the target subset. Based on this, matching the feature points contained in the first subset of the first feature point set against the feature points contained in the target subset of the second feature point set and the subsets adjacent to the target subset can improve the accuracy of image matching.
In one implementation, the image processing device may match the feature points contained in the first subset of the first feature point set with the feature points contained in the second subset of the second feature point set by computing with the feature descriptors of the feature points contained in the first subset and the feature descriptors of the feature points contained in the second subset.
In the embodiment of the present invention, the image processing device can divide the first feature point set and the second feature point set into multiple subsets according to the positioning information and posture information of the image capturing device when the first image and the second image are captured, which ensures matching accuracy. In addition, the image processing device matches the feature points contained in the first subset of the first feature point set with the feature points contained in the second subset of the second feature point set, which reduces the number of feature point comparisons and effectively improves matching speed.
In combination with the image processing method shown in FIG. 3, please refer to FIG. 5, which is a schematic flowchart of a method for determining a reference polar plane proposed by an embodiment of the present invention. The method includes:
S501: Determine, according to the positioning information, the first coordinate in the world coordinate system of the first optical center of the image capturing device when capturing the first image, and the second coordinate in the world coordinate system of the second optical center of the image capturing device when capturing the second image.
In one implementation, the positioning module and the image capturing device may be integrated at the same position on the movable platform. When the image capturing device captures the first image, the movable platform can obtain, through the positioning module, the position in the world coordinate system of the lens center of the image capturing device and send that position to the image processing device, which takes it as the first coordinate of the first optical center in the world coordinate system. When the image capturing device captures the second image, the movable platform can obtain the position of the lens center in the world coordinate system through the positioning module and send it to the image processing device, which takes it as the second coordinate of the second optical center in the world coordinate system. In this embodiment, since the positioning module and the image capturing device are integrated at the same position on the movable platform, the position detected by the positioning module is exactly the position of the lens center of the image capturing device in the world coordinate system, which can improve the accuracy of the first coordinate and the second coordinate.
In another implementation, the positioning module and the image capturing device may be integrated at different positions on the movable platform. When the image capturing device captures the first image, the movable platform can obtain the position of the positioning module in the world coordinate system through the positioning module and send that position, together with the position of the image capturing device relative to the positioning module, to the image processing device; the image processing device obtains the first coordinate of the first optical center in the world coordinate system from the position and the relative position of the image capturing device. When the image capturing device captures the second image, the movable platform obtains the position of the positioning module in the world coordinate system and sends it to the image processing device, which obtains the second coordinate of the second optical center in the world coordinate system from the position and the position of the image capturing device relative to the positioning module. In this embodiment, the position detected by the positioning module is the position of the positioning module itself in the world coordinate system; obtaining the first coordinate and the second coordinate from this position and the relative position of the image capturing device can improve their accuracy.
S502: Determine the poses of the first image and the second image in the world coordinate system according to the first coordinate, the second coordinate, and the pan/tilt angle information.
The image processing device can determine the pose of the first image in the world coordinate system according to the first coordinate and the pan/tilt angle information when the image capturing device captures the first image, and can determine the pose of the second image in the world coordinate system according to the second coordinate and the pan/tilt angle information when the image capturing device captures the second image.
Taking FIG. 2 as an example, the image processing device can establish an image space coordinate system according to the first coordinate and the pan/tilt angle information when the image capturing device captures the first image, where the first coordinate is the origin O and the x-axis, y-axis, and z-axis of the image space coordinate system are obtained from the pan/tilt angle information. Since the xy plane of the image space coordinate system is parallel to the image plane, the image processing device can obtain the pose of the first image in the world coordinate system from the image space coordinate system and the focal length. Similarly, the image processing device can establish an image space coordinate system according to the second coordinate and the pan/tilt angle information when the image capturing device captures the second image, where the second coordinate is the origin O, and can obtain the pose of the second image in the world coordinate system from the image space coordinate system and the focal length.
S503: Determine the reference polar plane according to the poses of the first image and the second image in the world coordinate system.
In one implementation, when the angle between the baseline and the straight line formed by the first optical center and the principal point of the first image is greater than a preset threshold, the image processing device can use the plane formed by the straight line and the baseline as the reference polar plane.
Taking the schematic diagram of the reference polar plane shown in FIG. 6A as an example, the image processing device can determine the position of the first optical center in the world coordinate system and obtain, from the pose of the first image in the world coordinate system, the position of the principal point of the first image in the world coordinate system; it then determines the straight line formed by the first optical center and the principal point of the first image from these two positions. The image processing device can also determine the baseline from the position of the first optical center and the position of the second optical center in the world coordinate system. The image processing device then obtains the angle between the baseline and the straight line formed by the first optical center and the principal point of the first image and compares it with the preset threshold; when the angle is greater than the preset threshold, the image processing device can use the plane formed by the straight line and the baseline as the reference polar plane.
In another implementation, when the angle between the straight line and the baseline is less than or equal to the preset threshold, the image processing device can use the plane determined by the baseline and the row direction vector of the first image as the reference polar plane. Taking FIG. 2 as an example, the row direction vector of the first image is parallel to the x-axis of the image space coordinate system of the first image.
Taking the schematic diagram of the reference polar plane shown in FIG. 6B as an example, the image processing device determines, in the same way as above, the straight line formed by the first optical center and the principal point of the first image, and the baseline; when the angle between the straight line and the baseline is less than or equal to the preset threshold, the image processing device can use the plane determined by the baseline and the row direction vector of the first image as the reference polar plane.
In the embodiment of the present invention, the image processing device determines, according to the positioning information, the first coordinate in the world coordinate system of the first optical center when the first image is captured and the second coordinate in the world coordinate system of the second optical center when the second image is captured; determines the poses of the first image and the second image in the world coordinate system according to the first coordinate, the second coordinate, and the pan/tilt angle information; and determines the reference polar plane according to those poses, which can improve the accuracy of the reference polar plane.
结合图3所示的图像处理方法,请参见图7,图7是本发明实施例提出的一种子集的划分方法的示意流程图,所述方法包括:
S701,根据每个极平面和基准极平面之间的夹角,确定夹角区间。
其中,夹角区间所包含的最小夹角小于或者等于第一夹角和第二夹角中的最大值,夹角区间所包含的最大夹角大于或者等于第三夹角和第四夹角中的最 小值。
第一夹角为第一特征点集合中的特征点所对应的极平面和基准极平面之间的最小夹角,第二夹角为第二特征点集合中的特征点所对应的极平面和基准极平面之间的最小夹角,第三夹角为第一特征点集合中的特征点所对应的极平面和基准极平面之间的最大夹角,第四夹角为第二特征点集合的特征点所对应的极平面和基准极平面之间的最大夹角。
第一特征点集合中的特征点所对应的极平面,即第一图像的特征点和基线所组成的的平面。第二特征点集合中的特征点所对应的极平面,即第二图像的特征点和基线所组成的的平面。
具体实现中,图像处理装置可以获取第一图像的每个特征点,第一图像的每个特征点和基线所组成的的平面为极平面。图像处理装置可以获取每个极平面和基准极平面之间的夹角,在获取到的夹角中确定第一夹角min1和第三夹角max1。
同理,图像处理装置可以获取第二图像的每个特征点,第二图像的每个特征点和基线所组成的的平面为极平面。图像处理装置可以获取每个极平面和基准极平面之间的夹角,在获取到的夹角中确定第二夹角min2和第四夹角max2。
进一步的,图像处理装置可以在第一夹角和第二夹角中确定最大值max(min1,min2),在第三夹角和第四夹角中确定最小值min(max1,max2),图像处理装置可以将[max(min1,min2),min(max1,max2)]作为夹角区间。
在一种实现方式中,考虑到夹角存在误差,图像处理装置可以将max(min1,min2)与第一预设数值相减,得到夹角区间的最小值,并将min(max1,max2)与第二预设数值相加,得到夹角区间的最大值。第一预设数值和第二预设数值可以相同,也可以不相同,具体不受本发明实施例的限定。例如,第一预设数值为k1,第二预设数值为k2,则图像处理装置可以将[max(min1,min2)-k1,min(max1,max2)+k2]作为夹角区间。
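A minimal sketch of this interval computation. The angle between a feature point's epipolar plane and the reference plane is taken between plane normals, with the sign chosen by which side of the reference plane the point lies on; this sign convention is a simplifying assumption. ref_normal comes from the reference-plane sketch above, and k1, k2 are the preset slack values:

    import numpy as np

    def plane_angle(point_w, c1, c2, ref_normal):
        """Angle (degrees) between the epipolar plane of a feature point,
        given in world coordinates, and the reference epipolar plane; both
        planes contain the baseline. Sign convention is an assumption."""
        baseline = c2 - c1
        normal = np.cross(point_w - c1, baseline)
        cos_a = np.dot(normal, ref_normal) / (np.linalg.norm(normal) * np.linalg.norm(ref_normal))
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        return angle if np.dot(point_w - c1, ref_normal) >= 0 else -angle

    def angle_interval(points1_w, points2_w, c1, c2, ref_normal, k1=1.0, k2=1.0):
        a1 = [plane_angle(p, c1, c2, ref_normal) for p in points1_w]
        a2 = [plane_angle(p, c1, c2, ref_normal) for p in points2_w]
        lo = max(min(a1), min(a2)) - k1  # max(min1, min2) - k1
        hi = min(max(a1), max(a2)) + k2  # min(max1, max2) + k2
        return lo, hi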
In one implementation, the coordinates of each feature point of an image in the world coordinate system may be obtained as follows:
(1) After obtaining the image, the image processing device can obtain the coordinates of each feature point of the image in the pixel coordinate system, where the pixel coordinate system coincides with the image plane. Taking the schematic image shown in FIG. 8 as an example, the pixel coordinate system includes the u-axis and the v-axis, its origin O0 is the top-left corner of the image plane, the coordinates of the principal point O1 in the pixel coordinate system are (u0, v0), the pixel coordinate system is two-dimensional with the pixel as its unit, and the coordinates of each feature point in the pixel coordinate system indicate the position of that feature point in the image.
(2) The image processing device may convert the coordinates of each feature point of the image from the pixel coordinate system to the image coordinate system. The image coordinate system includes the x-axis and the y-axis, and its origin O1 is the principal point of the image.
The image processing device may obtain the coordinates of each feature point of the image in the image coordinate system through the following formulas:
x = (u - u0) · dx
y = (v - v0) · dy
where u is the abscissa of any feature point of the image in the pixel coordinate system, v is the ordinate of that feature point in the pixel coordinate system, dx is the physical size of a pixel along the x-axis, dy is the physical size of a pixel along the y-axis, u0 is the abscissa of the principal point in the pixel coordinate system, v0 is the ordinate of the principal point in the pixel coordinate system, x is the abscissa of the feature point in the image coordinate system, and y is the ordinate of the feature point in the image coordinate system.
(3) The image processing device may convert the coordinates of each feature point of the image from the image coordinate system to the image space coordinate system. The image coordinate system includes the x-axis and the y-axis, with origin O1 at the principal point of the image. The image space coordinate system includes the x-axis, y-axis and z-axis. Taking FIG. 2 as an example, the xy plane of the image space coordinate system is parallel to the image plane, the z-axis is the principal axis of the camera, the origin O is the projection center (i.e., the optical center), the image space coordinate system is three-dimensional, and its unit is the same as that of the world coordinate system. The coordinate system containing (x, y) is the image coordinate system, whose xy plane coincides with the image plane; its origin O1 is the intersection of the camera principal axis and the image plane, also called the principal point; the distance between O and O1 is the focal length f; the image coordinate system is two-dimensional, and its unit is the same as that of the image space coordinate system.
Assuming the coordinates of any feature point of the image in the image coordinate system are (x, y), the coordinates of that feature point in the image space coordinate system are (x, y, f).
(4) The image processing device may convert the coordinates of each feature point of the image from the image space coordinate system to the world coordinate system.
The image processing device may obtain the coordinates of each feature point of the image in the world coordinate system through the following formula:
[x_w, y_w, z_w]^T = R^(-1) · ([x, y, z]^T - T)
where x is the coordinate of any feature point of the image on the x-axis of the image space coordinate system, y is its coordinate on the y-axis of the image space coordinate system, z is its coordinate on the z-axis of the image space coordinate system, x_w is its coordinate on the x-axis of the world coordinate system, y_w is its coordinate on the y-axis of the world coordinate system, z_w is its coordinate on the z-axis of the world coordinate system, R is the rotation matrix, and T is the translation matrix.
The rotation matrix can be obtained through the following formula:
R(α, β, γ) = Rz(γ) · Ry(β) · Rx(α)
where Rx(α), Ry(β) and Rz(γ) are the elementary rotations about the x-, y- and z-axes:
Rx(α) = [[1, 0, 0], [0, cos α, -sin α], [0, sin α, cos α]]
Ry(β) = [[cos β, 0, sin β], [0, 1, 0], [-sin β, 0, cos β]]
Rz(γ) = [[cos γ, -sin γ, 0], [sin γ, cos γ, 0], [0, 0, 1]]
(the composition order shown here is one common convention; the exact order follows the gimbal's axis arrangement).
where α, β and γ can be obtained from the gimbal angle information of the image capturing device when capturing the image. Taking a three-axis gimbal as an example, these are the pitch angle, roll angle and pan angle.
The translation matrix can be obtained through the following formula:
T = -R(α, β, γ) · (x, y, z)^T
where (x, y, z) are the coordinates in the world coordinate system of the optical center of the image capturing device when capturing the image.
The above image may be the first image or the second image.
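A minimal sketch of the conversion chain in steps (1) to (4). The z-y-x composition of the rotation matrix follows the convention written above and is an assumption; the exact order depends on the gimbal's axis arrangement:

    import numpy as np

    def rotation_from_gimbal(alpha, beta, gamma):
        """Rotation matrix R(alpha, beta, gamma) from the gimbal angles
        (radians), composed as Rz(gamma) @ Ry(beta) @ Rx(alpha)."""
        ca, sa = np.cos(alpha), np.sin(alpha)
        cb, sb = np.cos(beta), np.sin(beta)
        cg, sg = np.cos(gamma), np.sin(gamma)
        rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
        ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
        rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
        return rz @ ry @ rx

    def pixel_to_world(u, v, u0, v0, dx, dy, f, R, optical_center_w):
        """Feature point: pixel -> image -> image space -> world coordinates."""
        x = (u - u0) * dx            # step (2): pixel to image coordinates
        y = (v - v0) * dy
        p_cam = np.array([x, y, f])  # step (3): image space coordinates (x, y, f)
        T = -R @ optical_center_w    # translation matrix T = -R(a, b, g) * C
        return np.linalg.inv(R) @ (p_cam - T)  # step (4): solve for world coords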
S702: Divide the first feature point set and the second feature point set into multiple subsets according to the angle interval.
The image processing device may divide the first feature point set into multiple subsets and divide the second feature point set into multiple subsets according to the angle interval. The number of subsets obtained from dividing the first feature point set or the second feature point set may be adjusted according to the accuracy of the RTK or the gimbal angles: the higher the accuracy of the RTK or the gimbal angles, the larger the number of subsets can be set.
In one implementation, the image processing device may divide the angle interval into multiple unit intervals, determine the target epipolar planes corresponding to the angles included in the same unit interval, assign the feature points of the first feature point set lying on those target epipolar planes to a first subset, and assign the feature points of the second feature point set lying on those target epipolar planes to a target subset. The target subset and the subsets adjacent to it may serve as the second subset.
For example, suppose the angle interval is [-30°, 60°] and the number of subsets is 3. The image processing device may determine the first epipolar planes whose angles with the reference epipolar plane fall within the range [-30°, 0°), assign the feature points of the first feature point set lying on the first epipolar planes to subset 1, and assign the feature points of the second feature point set lying on the first epipolar planes to subset 2. The image processing device may also determine the second epipolar planes whose angles with the reference epipolar plane fall within the range [0°, 30°), assign the feature points of the first feature point set lying on the second epipolar planes to subset 3, and assign the feature points of the second feature point set lying on the second epipolar planes to subset 4. The image processing device may further determine the third epipolar planes whose angles with the reference epipolar plane fall within the range [30°, 60°], assign the feature points of the first feature point set lying on the third epipolar planes to subset 5, and assign the feature points of the second feature point set lying on the third epipolar planes to subset 6. On this basis, the image processing device divides the first feature point set into subsets 1, 3 and 5 and the second feature point set into subsets 2, 4 and 6 according to the angle interval. If the first subset is subset 1, the target subset may be subset 2; if the first subset is subset 3, the target subset may be subset 4; if the first subset is subset 5, the target subset may be subset 6.
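A minimal sketch of this binning step, reusing the hypothetical plane_angle helper from the earlier sketch; num_subsets equal-width unit intervals partition the angle interval [lo, hi]:

    def divide_into_subsets(points_w, c1, c2, ref_normal, lo, hi, num_subsets):
        """Assign each feature point (world coordinates) to the unit interval
        its epipolar-plane angle falls into. With lo=-30, hi=60 and
        num_subsets=3 this reproduces the [-30, 0), [0, 30), [30, 60] example."""
        width = (hi - lo) / num_subsets
        subsets = [[] for _ in range(num_subsets)]
        for idx, p in enumerate(points_w):
            a = plane_angle(p, c1, c2, ref_normal)  # angle to the reference plane
            k = min(max(int((a - lo) // width), 0), num_subsets - 1)  # clamp endpoints
            subsets[k].append(idx)
        return subsets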
Illustratively, when the number of subsets obtained from dividing the first feature point set or the second feature point set is set to 100, matching about 20,000 feature points in two images with the image processing method of this embodiment of the present invention takes less than 3.5 ms, while finding more matches than existing matching methods; the matching results all satisfy the epipolar constraint, and the mismatch rate is lower.
In this embodiment of the present invention, the image processing device determines the angle interval according to the angle between each epipolar plane and the reference epipolar plane, and divides the first feature point set and the second feature point set into multiple subsets according to the angle interval, which improves the accuracy of subset division and ensures a high probability that a feature point in the first subset and a feature point in the second subset are points with the same name.
An embodiment of the present invention provides an image processing device. FIG. 9 is a schematic block diagram of the image processing device according to an embodiment of the present invention. As shown in FIG. 9, the image processing device includes a memory 901 and a processor 902. The memory is used to store program code;
The processor 902 calls the program code, and when the program code is executed, is used to perform the following operations:
performing feature extraction on a first image to obtain a first feature point set, and performing feature extraction on a second image to obtain a second feature point set;
dividing the first feature point set and the second feature point set into multiple subsets according to positioning information and attitude information of an image capturing device when capturing the first image and the second image;
matching the feature points included in a first subset of the first feature point set with the feature points included in a second subset of the second feature point set to obtain an image matching result, where the first subset is any subset of the first feature point set, and the second subset includes a target subset corresponding to the first subset in the second feature point set.
In one implementation, the attitude information includes gimbal angle information, and when dividing the first feature point set and the second feature point set into multiple subsets according to the positioning information and attitude information of the image capturing device when capturing the first image and the second image, the processor 902 performs the following operations:
determining a reference epipolar plane according to the positioning information and gimbal angle information of the image capturing device when capturing the first image and the second image;
dividing the first feature point set and the second feature point set into multiple subsets according to the angle between each epipolar plane and the reference epipolar plane.
In one implementation, when determining the reference epipolar plane according to the positioning information and gimbal angle information of the image capturing device when capturing the first image and the second image, the processor 902 performs the following operations:
determining, according to the positioning information, a first coordinate in the world coordinate system of the first optical center of the image capturing device when capturing the first image, and a second coordinate in the world coordinate system of the second optical center of the image capturing device when capturing the second image;
determining the poses of the first image and the second image in the world coordinate system according to the first coordinate, the second coordinate and the gimbal angle information;
determining the reference epipolar plane according to the poses of the first image and the second image in the world coordinate system.
In one implementation, when determining the reference epipolar plane according to the poses of the first image and the second image in the world coordinate system, the processor 902 performs the following operations:
when the angle between the baseline and the line formed by the first optical center and the principal point of the first image is greater than a preset threshold, taking the plane formed by the line and the baseline as the reference epipolar plane;
when the angle between the line and the baseline is less than or equal to the preset threshold, determining the reference epipolar plane according to the baseline and the row direction vector of the first image;
where the baseline is the line formed by the first optical center and the second optical center.
In one implementation, when dividing the first feature point set and the second feature point set into multiple subsets according to the angle between each epipolar plane and the reference epipolar plane, the processor 902 performs the following operations:
determining an angle interval according to the angle between each epipolar plane and the reference epipolar plane;
dividing the first feature point set and the second feature point set into multiple subsets according to the angle interval.
In one implementation, the minimum angle included in the angle interval is less than or equal to the maximum of a first angle and a second angle, and the maximum angle included in the angle interval is greater than or equal to the minimum of a third angle and a fourth angle;
where the first angle is the minimum angle between the reference epipolar plane and the epipolar planes corresponding to the feature points in the first feature point set, the second angle is the minimum angle between the reference epipolar plane and the epipolar planes corresponding to the feature points in the second feature point set, the third angle is the maximum angle between the reference epipolar plane and the epipolar planes corresponding to the feature points in the first feature point set, and the fourth angle is the maximum angle between the reference epipolar plane and the epipolar planes corresponding to the feature points in the second feature point set.
In one implementation, when dividing the first feature point set and the second feature point set into multiple subsets according to the angle interval, the processor 902 performs the following operations:
dividing the angle interval into multiple unit intervals;
determining the target epipolar planes corresponding to the angles included in the same unit interval;
assigning the feature points of the first feature point set lying on the target epipolar planes to the first subset;
assigning the feature points of the second feature point set lying on the target epipolar planes to the target subset.
In one implementation, the second subset further includes the subsets adjacent to the target subset.
In one implementation, before dividing the first feature point set and the second feature point set into multiple subsets according to the positioning information and attitude information of the image capturing device when capturing the first image and the second image, the processor 902 further performs the following operations:
obtaining distortion parameters and intrinsic parameters of the image capturing device;
performing distortion correction on the feature points of the first image and the second image according to the distortion parameters and intrinsic parameters.
In one implementation, when matching the feature points included in the first subset of the first feature point set with the feature points included in the second subset of the second feature point set, the processor 902 performs the following operation:
matching the feature descriptors of the feature points included in the first subset with the feature descriptors of the feature points included in the second subset.
In one implementation, the processor 902 further performs the following operation:
determining the plane formed by any pixel of the first image or the second image and the baseline as an epipolar plane.
The image processing device provided in this embodiment can perform the methods shown in FIG. 3, FIG. 5 and FIG. 7 of the foregoing embodiments; its execution manner and beneficial effects are similar and will not be repeated here.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some or all of the technical features therein, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (23)

  1. An image processing method, comprising:
    performing feature extraction on a first image to obtain a first feature point set, and performing feature extraction on a second image to obtain a second feature point set;
    dividing the first feature point set and the second feature point set into multiple subsets according to positioning information and attitude information of an image capturing device when capturing the first image and the second image;
    matching feature points included in a first subset of the first feature point set with feature points included in a second subset of the second feature point set to obtain an image matching result, wherein the first subset is any subset of the first feature point set, and the second subset includes a target subset corresponding to the first subset in the second feature point set.
  2. The method according to claim 1, wherein the attitude information includes gimbal angle information, and the dividing the first feature point set and the second feature point set into multiple subsets according to the positioning information and attitude information of the image capturing device when capturing the first image and the second image comprises:
    determining a reference epipolar plane according to the positioning information and gimbal angle information of the image capturing device when capturing the first image and the second image;
    dividing the first feature point set and the second feature point set into multiple subsets according to the angle between each epipolar plane and the reference epipolar plane.
  3. The method according to claim 2, wherein the determining the reference epipolar plane according to the positioning information and gimbal angle information of the image capturing device when capturing the first image and the second image comprises:
    determining, according to the positioning information, a first coordinate in a world coordinate system of a first optical center of the image capturing device when capturing the first image, and a second coordinate in the world coordinate system of a second optical center of the image capturing device when capturing the second image;
    determining poses of the first image and the second image in the world coordinate system according to the first coordinate, the second coordinate and the gimbal angle information;
    determining the reference epipolar plane according to the poses of the first image and the second image in the world coordinate system.
  4. The method according to claim 3, wherein the determining the reference epipolar plane according to the poses of the first image and the second image in the world coordinate system comprises:
    when the angle between a baseline and the line formed by the first optical center and the principal point of the first image is greater than a preset threshold, taking the plane formed by the line and the baseline as the reference epipolar plane;
    when the angle between the line and the baseline is less than or equal to the preset threshold, determining the reference epipolar plane according to the baseline and a row direction vector of the first image;
    wherein the baseline is the line formed by the first optical center and the second optical center.
  5. The method according to claim 2, wherein the dividing the first feature point set and the second feature point set into multiple subsets according to the angle between each epipolar plane and the reference epipolar plane comprises:
    determining an angle interval according to the angle between each epipolar plane and the reference epipolar plane;
    dividing the first feature point set and the second feature point set into multiple subsets according to the angle interval.
  6. The method according to claim 5, wherein
    the minimum angle included in the angle interval is less than or equal to the maximum of a first angle and a second angle, and the maximum angle included in the angle interval is greater than or equal to the minimum of a third angle and a fourth angle;
    wherein the first angle is the minimum angle between the reference epipolar plane and the epipolar planes corresponding to the feature points in the first feature point set, the second angle is the minimum angle between the reference epipolar plane and the epipolar planes corresponding to the feature points in the second feature point set, the third angle is the maximum angle between the reference epipolar plane and the epipolar planes corresponding to the feature points in the first feature point set, and the fourth angle is the maximum angle between the reference epipolar plane and the epipolar planes corresponding to the feature points in the second feature point set.
  7. The method according to claim 5, wherein the dividing the first feature point set and the second feature point set into multiple subsets according to the angle interval comprises:
    dividing the angle interval into multiple unit intervals;
    determining target epipolar planes corresponding to the angles included in the same unit interval;
    assigning the feature points of the first feature point set lying on the target epipolar planes to the first subset;
    assigning the feature points of the second feature point set lying on the target epipolar planes to the target subset.
  8. The method according to claim 1, wherein the second subset further includes subsets adjacent to the target subset.
  9. The method according to claim 1, further comprising, before dividing the first feature point set and the second feature point set into multiple subsets according to the positioning information and attitude information of the image capturing device when capturing the first image and the second image:
    obtaining distortion parameters and intrinsic parameters of the image capturing device;
    performing distortion correction on the feature points of the first image and the second image according to the distortion parameters and intrinsic parameters.
  10. The method according to claim 1, wherein the matching the feature points included in the first subset of the first feature point set with the feature points included in the second subset of the second feature point set comprises:
    matching feature descriptors of the feature points included in the first subset with feature descriptors of the feature points included in the second subset.
  11. The method according to any one of claims 1-10, further comprising:
    determining the plane formed by any pixel of the first image or the second image and the baseline as an epipolar plane.
  12. An image processing device, comprising a memory and a processor;
    the memory is used to store program code;
    the processor calls the program code, and when the program code is executed, is used to perform the following operations:
    performing feature extraction on a first image to obtain a first feature point set, and performing feature extraction on a second image to obtain a second feature point set;
    dividing the first feature point set and the second feature point set into multiple subsets according to positioning information and attitude information of an image capturing device when capturing the first image and the second image;
    matching feature points included in a first subset of the first feature point set with feature points included in a second subset of the second feature point set to obtain an image matching result, wherein the first subset is any subset of the first feature point set, and the second subset includes a target subset corresponding to the first subset in the second feature point set.
  13. The device according to claim 12, wherein the attitude information includes gimbal angle information, and when dividing the first feature point set and the second feature point set into multiple subsets according to the positioning information and attitude information of the image capturing device when capturing the first image and the second image, the processor performs the following operations:
    determining a reference epipolar plane according to the positioning information and gimbal angle information of the image capturing device when capturing the first image and the second image;
    dividing the first feature point set and the second feature point set into multiple subsets according to the angle between each epipolar plane and the reference epipolar plane.
  14. The device according to claim 13, wherein when determining the reference epipolar plane according to the positioning information and gimbal angle information of the image capturing device when capturing the first image and the second image, the processor performs the following operations:
    determining, according to the positioning information, a first coordinate in a world coordinate system of a first optical center of the image capturing device when capturing the first image, and a second coordinate in the world coordinate system of a second optical center of the image capturing device when capturing the second image;
    determining poses of the first image and the second image in the world coordinate system according to the first coordinate, the second coordinate and the gimbal angle information;
    determining the reference epipolar plane according to the poses of the first image and the second image in the world coordinate system.
  15. The device according to claim 14, wherein when determining the reference epipolar plane according to the poses of the first image and the second image in the world coordinate system, the processor performs the following operations:
    when the angle between a baseline and the line formed by the first optical center and the principal point of the first image is greater than a preset threshold, taking the plane formed by the line and the baseline as the reference epipolar plane;
    when the angle between the line and the baseline is less than or equal to the preset threshold, determining the reference epipolar plane according to the baseline and a row direction vector of the first image;
    wherein the baseline is the line formed by the first optical center and the second optical center.
  16. The device according to claim 13, wherein when dividing the first feature point set and the second feature point set into multiple subsets according to the angle between each epipolar plane and the reference epipolar plane, the processor performs the following operations:
    determining an angle interval according to the angle between each epipolar plane and the reference epipolar plane;
    dividing the first feature point set and the second feature point set into multiple subsets according to the angle interval.
  17. The device according to claim 16, wherein the minimum angle included in the angle interval is less than or equal to the maximum of a first angle and a second angle, and the maximum angle included in the angle interval is greater than or equal to the minimum of a third angle and a fourth angle;
    wherein the first angle is the minimum angle between the reference epipolar plane and the epipolar planes corresponding to the feature points in the first feature point set, the second angle is the minimum angle between the reference epipolar plane and the epipolar planes corresponding to the feature points in the second feature point set, the third angle is the maximum angle between the reference epipolar plane and the epipolar planes corresponding to the feature points in the first feature point set, and the fourth angle is the maximum angle between the reference epipolar plane and the epipolar planes corresponding to the feature points in the second feature point set.
  18. The device according to claim 16, wherein when dividing the first feature point set and the second feature point set into multiple subsets according to the angle interval, the processor performs the following operations:
    dividing the angle interval into multiple unit intervals;
    determining target epipolar planes corresponding to the angles included in the same unit interval;
    assigning the feature points of the first feature point set lying on the target epipolar planes to the first subset;
    assigning the feature points of the second feature point set lying on the target epipolar planes to the target subset.
  19. The device according to claim 12, wherein the second subset further includes subsets adjacent to the target subset.
  20. The device according to claim 12, wherein before dividing the first feature point set and the second feature point set into multiple subsets according to the positioning information and attitude information of the image capturing device when capturing the first image and the second image, the processor further performs the following operations:
    obtaining distortion parameters and intrinsic parameters of the image capturing device;
    performing distortion correction on the feature points of the first image and the second image according to the distortion parameters and intrinsic parameters.
  21. The device according to claim 12, wherein when matching the feature points included in the first subset of the first feature point set with the feature points included in the second subset of the second feature point set, the processor performs the following operation:
    matching feature descriptors of the feature points included in the first subset with feature descriptors of the feature points included in the second subset.
  22. The device according to any one of claims 12-21, wherein the processor further performs the following operation:
    determining the plane formed by any pixel of the first image or the second image and the baseline as an epipolar plane.
  23. An image processing system, comprising:
    a movable platform provided with an image capturing device;
    the image processing device according to any one of claims 12-22;
    wherein the movable platform is configured to capture multiple images through the image capturing device while the movable platform moves and send the captured images to the image processing device, the multiple images including a first image and a second image.
PCT/CN2019/077892 2019-03-12 2019-03-12 一种图像处理方法、装置及系统 WO2020181506A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/077892 WO2020181506A1 (zh) 2019-03-12 2019-03-12 Image processing method, device and system
CN201980004931.XA CN111213159A (zh) 2019-03-12 2019-03-12 Image processing method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/077892 WO2020181506A1 (zh) 2019-03-12 2019-03-12 Image processing method, device and system

Publications (1)

Publication Number Publication Date
WO2020181506A1 true WO2020181506A1 (zh) 2020-09-17

Family

ID=70790120

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/077892 WO2020181506A1 (zh) 2019-03-12 2019-03-12 一种图像处理方法、装置及系统

Country Status (2)

Country Link
CN (1) CN111213159A (zh)
WO (1) WO2020181506A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960251A * 2018-05-22 2018-12-07 Southeast University Hardware circuit implementation method for generating the scale space of image matching descriptors

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163562B * 2020-10-23 2021-10-22 Zhuhai Dahengqin Technology Development Co., Ltd. Image overlap area calculation method and apparatus, electronic device, and storage medium
CN113535875A * 2021-07-14 2021-10-22 Beijing Baidu Netcom Science and Technology Co., Ltd. Map data expansion method and apparatus, electronic device, medium, and program product
CN114509049B * 2021-11-17 2023-06-16 The Second Research Institute of CAAC Image-processing-based gimbal repeat positioning accuracy measurement method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130148896A1 (en) * 2011-12-13 2013-06-13 Fuji Xerox Co., Ltd. Image processing apparatus, image processing method, and non-transitory computer readable medium storing program
CN106125744A * 2016-06-22 2016-11-16 Shandong Luneng Intelligence Technology Co., Ltd. Visual-servo-based gimbal control method for a substation inspection robot
CN106778890A * 2016-12-28 2017-05-31 Nanjing Normal University Method for detecting attitude changes of a gimbal camera based on SIFT matching
CN108109148A * 2017-12-12 2018-06-01 Shanghai Xinxin Microelectronics Technology Co., Ltd. Image stereo allocation method and mobile terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5993233B2 * 2012-07-11 2016-09-14 Olympus Corporation Image processing apparatus and image processing method
JP6395506B2 * 2014-08-22 2018-09-26 Canon Inc. Image processing apparatus and method, program, and imaging apparatus
US9965861B2 (en) * 2014-12-29 2018-05-08 Intel Corporation Method and system of feature matching for multiple images
WO2017020150A1 * 2015-07-31 2017-02-09 SZ DJI Technology Co., Ltd. Image processing method and device, and camera
CN106886758B * 2017-01-20 2019-07-02 Beijing Research Center for Information Technology in Agriculture Insect recognition device and method based on three-dimensional attitude estimation

Also Published As

Publication number Publication date
CN111213159A (zh) 2020-05-29

Similar Documents

Publication Publication Date Title
WO2020181506A1 (zh) Image processing method, device and system
US10594941B2 (en) Method and device of image processing and camera
CN110799921A (zh) Photographing method and device, and unmanned aerial vehicle
EP3028252B1 (en) Rolling sequential bundle adjustment
JP4889351B2 (ja) Image processing apparatus and processing method therefor
WO2018098824A1 (zh) Photographing control method and apparatus, and control device
WO2019113966A1 (zh) Obstacle avoidance method and device, and unmanned aerial vehicle
CN110908401A (zh) Autonomous unmanned aerial vehicle inspection method for unknown tower structures
JP2006252473A (ja) Obstacle detection device, calibration device, calibration method, and calibration program
CN112949478A (zh) Target detection method based on a gimbal camera
CN112714287A (zh) Gimbal target conversion control method, apparatus, device, and storage medium
JP4132068B2 (ja) Image processing apparatus, three-dimensional measuring apparatus, and program for image processing apparatus
WO2020063058A1 (zh) Calibration method for a multi-degree-of-freedom movable vision system
US20210090339A1 (en) Virtuality-reality overlapping method and system
WO2023236508A1 (zh) Image stitching method and system based on a gigapixel array camera
CN111768449A (zh) Object grasping method combining binocular vision with deep learning
CN111815715A (zh) Calibration method and apparatus for a zoom pan-tilt camera, and storage medium
CN107680035B (zh) Parameter calibration method and apparatus, server, and readable storage medium
CN114283079A (zh) Method and device for correction based on test-chart photographing
CN110750094A (zh) Method, apparatus and system for determining pose change information of a movable device
CN111353945B (zh) Fisheye image correction method, apparatus, and storage medium
CN112702513B (zh) Dual-light gimbal cooperative control method, apparatus, device, and storage medium
CN115131433A (zh) Method and apparatus for processing the pose of a non-cooperative target, and electronic device
JP2004020398A (ja) Spatial information acquisition method, apparatus, program, and recording medium recording the program
CN116977328B (zh) Image quality assessment method for active vision of an under-vehicle robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19919364

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19919364

Country of ref document: EP

Kind code of ref document: A1