WO2018112790A1 - Image processing method and apparatus (图象处理方法及装置) - Google Patents

Image processing method and apparatus (图象处理方法及装置)

Info

Publication number
WO2018112790A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
points
camera
quadrilateral
distance
Prior art date
Application number
PCT/CN2016/111290
Other languages
English (en)
French (fr)
Inventor
陈心
郜文美
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN201680087913.9A priority Critical patent/CN109479082B/zh
Priority to PCT/CN2016/111290 priority patent/WO2018112790A1/zh
Priority to US16/472,067 priority patent/US10909719B2/en
Publication of WO2018112790A1 publication Critical patent/WO2018112790A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387Composing, repositioning or otherwise geometrically modifying originals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Definitions

  • Embodiments of the present invention relate to the field of image processing technologies, and in particular, to an image processing method and apparatus.
  • FIG. 1a is a schematic diagram of imaging a rectangular frame in the prior art.
  • the point O is the position where the camera is located, and the projection of the rectangular frame P 1 P 2 P 3 P 4 on the image is a quadrilateral Q 1 Q 2 Q 3 Q 4 .
  • In the quadrilateral Q 1 Q 2 Q 3 Q 4 , the opposite sides are no longer parallel, the angle between adjacent sides is no longer guaranteed to be 90°, and the lengths of opposite sides are no longer equal.
  • In the prior art, the conditions for determining whether four edge lines of a quadrilateral detected in an image constitute a rectangle in the real world comprise: the angle between opposite sides should be in the range of 180°±30°; the distance between opposite sides needs to be greater than 1/5 of the image width or height; the angle between adjacent sides should be within 90°±30°; and the perimeter of the quadrilateral should be greater than 1/4 of the sum of the image width and height.
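The prior-art conditions listed above can be sketched as a small check on an ordered quadrilateral. This is an illustrative Python sketch, not part of the application: the function name and argument layout are assumptions, and the opposite-side distance is approximated by the adjacent side length.

```python
import math

def direction_deg(v):
    """Direction of a 2D vector in degrees, folded to [0, 180)."""
    return math.degrees(math.atan2(v[1], v[0])) % 180.0

def prior_art_rectangle_check(quad, img_w, img_h):
    """Prior-art test on an ordered quadrilateral [(x, y), ...]:
    opposite sides within 30 deg of parallel, adjacent sides within
    90 +/- 30 deg, opposite-side distance (approximated here by the
    adjacent side length) > 1/5 of the smaller image dimension, and
    perimeter > 1/4 of (image width + height)."""
    sides = [(quad[(i + 1) % 4][0] - quad[i][0],
              quad[(i + 1) % 4][1] - quad[i][1]) for i in range(4)]
    lengths = [math.hypot(s[0], s[1]) for s in sides]

    if sum(lengths) <= (img_w + img_h) / 4:           # perimeter condition
        return False
    for i in (0, 1):                                  # opposite sides parallel
        d = abs(direction_deg(sides[i]) - direction_deg(sides[i + 2]))
        if min(d, 180 - d) > 30:
            return False
    for i in range(4):                                # adjacent sides ~90 deg
        a = abs(direction_deg(sides[i]) - direction_deg(sides[(i + 1) % 4]))
        a = min(a, 180 - a)
        if abs(a - 90) > 30:
            return False
    for d in (lengths[0], lengths[1]):                # opposite-side distance
        if d <= min(img_w, img_h) / 5:
            return False
    return True
```

As the embodiments go on to explain, these 2D tests alone misfire on perspective-distorted quadrilaterals, which motivates the depth-based checks below.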
  • The embodiments of the invention provide an image processing method and device which can accurately determine whether a detected quadrilateral is a rectangle that needs to be restored, and which keep the aspect ratio of the corrected rectangle consistent with that of the rectangular frame in the real world.
  • The "photographed point" mentioned in the embodiments of the present invention refers to a point on a real object being photographed. The real object being photographed may be called the "subject", so a "photographed point" may be understood as a point on the subject.
  • the "first distance” mentioned in the embodiment of the present invention is the distance from the photographed point to the camera that photographed the photographed point.
  • the “second distance” is the distance from the photographed point to the first plane in which the camera that photographed the photographed point is located, and the first plane is perpendicular to the main optical axis of the camera that photographed the photographed point.
  • the “third distance” is the distance from one of the four taken points to the plane formed by the other three taken points. It can be understood that the above “first”, “second” and “third” are only used to distinguish different distances, and should not be construed as limiting the different distances.
  • An embodiment of the present invention provides an image processing method, including: detecting a first quadrilateral in a first image, where the first quadrilateral includes four vertices, the four vertices correspond to four photographed points, and the four vertices are the projection points of the four photographed points on the first image; determining distance information of each of the four photographed points relative to the camera that captured the first image; determining the positions of the four photographed points based on their distance information and the position information of their projection points on the first image; determining from those positions whether the four photographed points are coplanar, in which case the four photographed points enclose a second quadrilateral; determining the side length ratio of adjacent sides of the second quadrilateral; and correcting the first quadrilateral to a rectangle whose adjacent sides have that side length ratio.
  • The image processing method provided by the embodiments of the present invention can determine, from the positions of the four photographed points corresponding to the vertices of the quadrilateral in the image, whether the quadrilateral enclosed by those points meets the condition of a rectangle to be corrected, and can obtain the side length ratio of its adjacent sides. If a quadrilateral meeting the condition of a rectangle to be corrected is understood as a rectangle, the side length ratio of its adjacent sides can be understood as the aspect ratio of that rectangle. By calculating the actual aspect ratio of the rectangle corresponding to the quadrilateral in the image and correcting the quadrilateral to a rectangle of that aspect ratio, the embodiments ensure that the corrected rectangle is not distorted, avoiding the distortion that occurs when the aspect ratio of the corrected rectangle differs from that of the original rectangle.
  • Optionally, the area or perimeter of the first quadrilateral is greater than a first threshold. For example, the area of the first quadrilateral is larger than 1/4 of the total image area, and/or its perimeter is larger than 1/4 of the sum of the image width and height, so as to eliminate quadrilaterals of small area and avoid accidentally correcting a small rectangular box contained inside the original rectangular frame instead of the real rectangular box.
  • Optionally, the first distance from each of the four photographed points to the camera that captured the first image is determined by a depth sensor.
  • Optionally, the second distance of each of the four photographed points is determined from the first image and a second image. The second distance of each photographed point is obtained from the coordinate information of each of the four vertices on the first image, the coordinate information of the projection point of each photographed point on the second image, the focal length of the camera that captured the first image, and the focal length of the camera that captured the second image.
  • Optionally, the three-dimensional coordinates of each photographed point in the three-dimensional coordinate system are determined according to the first distance of each of the four photographed points, the two-dimensional coordinates of each of the four vertices on the first image, the two-dimensional coordinates of the intersection of the main optical axis of the camera that captured the first image with the plane in which the first image lies, and the focal length of that camera.
  • Alternatively, the three-dimensional coordinates of each photographed point may be determined in the same way from the second distance of each of the four photographed points, the two-dimensional coordinates of each of the four vertices on the first image, the two-dimensional coordinates of the intersection of the main optical axis of the camera with the plane of the first image, and the focal length of the camera that captured the first image.
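The back-projection described above can be sketched under a standard pinhole model. This Python sketch is illustrative and not part of the application; the function name and argument layout are assumptions. Either the first distance (Euclidean distance to the camera origin) or the second distance (depth along the main optical axis) may be supplied.

```python
import math

def backproject(u, v, cx, cy, f, first_distance=None, second_distance=None):
    """Back-project a pixel (u, v) to camera coordinates.
    (cx, cy): intersection of the main optical axis with the image plane;
    f: focal length in pixels. Exactly one of first_distance (Euclidean
    distance from the photographed point to the camera origin) or
    second_distance (distance to the plane through the origin that is
    perpendicular to the main optical axis) must be given."""
    ray = (u - cx, v - cy, f)                 # ray through the pixel
    if second_distance is not None:
        s = second_distance / f               # scale so that z == second_distance
    else:
        s = first_distance / math.hypot(*ray) # scale so |P| == first_distance
    return tuple(s * c for c in ray)
```

The same routine serves both alternatives in the text: only the scale factor applied to the pixel ray differs.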
  • Optionally, a second plane in which three of the four photographed points lie is determined according to the positions of those three points, and the third distance from the remaining photographed point to the second plane is obtained; when the third distance is less than a preset threshold, the four photographed points are coplanar.
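The coplanarity test follows directly from the definition of the third distance. A minimal sketch, assuming the four photographed points are given as 3D coordinates (names and the threshold value are illustrative):

```python
import numpy as np

def third_distance(points):
    """Distance from the fourth photographed point to the plane
    defined by the first three (all points as 3D coordinates)."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in points)
    n = np.cross(p1 - p0, p2 - p0)     # normal of the second plane
    n /= np.linalg.norm(n)
    return abs(np.dot(p3 - p0, n))

def coplanar(points, threshold=1e-2):
    """Four photographed points are treated as coplanar when the
    third distance is below a preset threshold."""
    return third_distance(points) < threshold
```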
  • Optionally, when the side angles and side lengths of the second quadrilateral satisfy a preset condition, the side length ratio of the adjacent sides of the second quadrilateral is determined, where the preset condition includes one or more of the following: the absolute value of the angle between opposite sides of the second quadrilateral is below a third threshold; the absolute value of the difference between the angle of two adjacent sides of the second quadrilateral and a right angle is below a fourth threshold; the absolute value of the difference between the lengths of opposite sides of the second quadrilateral is below a fifth threshold; and the absolute value of the difference between the distance between two opposite sides and the length of the other two sides is below a sixth threshold.
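A sketch of these preset conditions and of the side length ratio on a coplanar 3D quadrilateral might look as follows. The threshold values and function name are assumptions, and the opposite-side-distance condition is omitted for brevity:

```python
import numpy as np

def rectangle_check_and_ratio(quad3d, ang_opp=10.0, ang_adj=10.0, len_tol=0.1):
    """Check an ordered, coplanar 3D quadrilateral against the preset
    conditions: opposite sides nearly parallel, adjacent sides nearly
    perpendicular, opposite sides nearly equal in length.
    Returns (ok, side_length_ratio); thresholds are illustrative."""
    p = [np.asarray(v, dtype=float) for v in quad3d]
    sides = [p[(i + 1) % 4] - p[i] for i in range(4)]
    lengths = [np.linalg.norm(s) for s in sides]

    def angle(u, v):
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    for i in (0, 1):      # opposite sides: direction should be ~180 deg apart
        if abs(180.0 - angle(sides[i], sides[i + 2])) > ang_opp:
            return False, None
    for i in range(4):    # adjacent sides: angle should be ~90 deg
        if abs(90.0 - angle(sides[i], sides[(i + 1) % 4])) > ang_adj:
            return False, None
    for i in (0, 1):      # opposite side lengths nearly equal
        if abs(lengths[i] - lengths[i + 2]) > len_tol * max(lengths[i], lengths[i + 2]):
            return False, None

    # Side length ratio of adjacent sides = aspect ratio of the rectangle.
    ratio = (lengths[0] + lengths[2]) / (lengths[1] + lengths[3])
    return True, ratio
```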
  • In other words, whether the figure before projection corresponding to the quadrilateral in the image is a rectangle to be corrected may be determined according to the depth information of each photographed point. Alternatively, it may be determined according to the distance from each photographed point to the plane in which the camera is located, where that plane is perpendicular to the main optical axis of the camera.
  • The quadrilateral in the image is corrected to a rectangle only when the figure before projection corresponding to the quadrilateral satisfies the condition of a rectangle to be corrected; the figure before projection is the object corresponding to the quadrilateral.
  • In this way, the accuracy of correcting a distorted rectangle in the image can be improved, and the corrected rectangle is not distorted.
  • an embodiment of the present invention provides an image processing apparatus, including: a camera, a processor, a depth sensor, and a display screen.
  • Camera for taking images.
  • a processor for detecting a first quadrilateral in the image, the first quadrilateral comprising four vertices, the four vertices corresponding to four photographed points and being the projection points of the four photographed points on the image.
  • a depth sensor for determining distance information of four taken points with respect to the camera, respectively.
  • the processor is configured to determine the positions of the four captured points according to the distance information of the four taken points and the position information of the points on the image.
  • a processor configured to determine, when the four photographed points are coplanar according to their positions, that the four photographed points enclose a second quadrilateral; when the side angles and side lengths of the second quadrilateral satisfy the preset condition, the side length ratio of the adjacent sides of the second quadrilateral is determined, and the first quadrilateral is corrected to a rectangle whose adjacent sides have that side length ratio; and a display screen for displaying the rectangle.
  • the depth sensor is specifically configured to determine a first distance from each of the four taken points to the camera.
  • The processor is specifically configured to determine the three-dimensional coordinates of each photographed point in the three-dimensional coordinate system according to the first distance from each of the four photographed points to the camera, the two-dimensional coordinates of each of the four vertices on the image, the two-dimensional coordinates of the intersection of the camera's main optical axis with the image, and the focal length of the camera.
  • the processor is specifically configured to determine a plane in which the three captured points are located according to positions of three of the four taken points, and obtain three of the four taken points. The third distance from the taken point outside the taken point to the plane; when the third distance is less than the preset threshold, the four taken points are coplanar.
  • an embodiment of the present invention provides an image processing apparatus, including: a first camera, a second camera, a processor, and a display screen.
  • the first camera is for taking a first image.
  • a processor for detecting a first quadrilateral in the first image, the first quadrilateral comprising four vertices, the four vertices corresponding to four photographed points and being the projection points of the four photographed points on the first image.
  • a second camera for capturing a second image, the second image comprising projection points of four captured points on the second image.
  • a processor for determining distance information of the four photographed points relative to the first camera and the second camera based on the first image and the second image.
  • the processor is configured to determine the positions of the four captured points according to the distance information of the four captured points relative to the first camera and the second camera, and the position information of the points on the first image.
  • a processor configured to determine, when the four photographed points are coplanar according to their positions, that the four photographed points enclose a second quadrilateral; when the side angles and side lengths of the second quadrilateral satisfy the preset condition, the side length ratio of the adjacent sides of the second quadrilateral is determined, and the first quadrilateral is corrected to a rectangle whose adjacent sides have that side length ratio; and a display screen for displaying the rectangle.
  • The processor is configured to determine, from the first image and the second image, a second distance of each of the four photographed points; the second distance of each photographed point is the distance from that point to a first plane that contains the first camera and is perpendicular to the main optical axis of the first camera. The main optical axes of the first camera and the second camera are parallel to each other, and the second camera is also located in the first plane.
  • The processor is specifically configured to obtain the second distance of each photographed point from the coordinate information of each of the four vertices on the first image, the coordinate information of the projection point of each photographed point on the second image, the focal length of the first camera, and the focal length of the second camera.
  • The processor is specifically configured to determine the three-dimensional coordinates of each photographed point in the three-dimensional coordinate system according to the second distance of each of the four photographed points, the two-dimensional coordinates of each of the four vertices on the first image, the two-dimensional coordinates of the intersection of the main optical axis of the first camera with the first image, and the focal length of the first camera.
  • the image processing method and apparatus determines the position of the photographed point based on the distance information of the photographed point with respect to the camera that captures the image and the positional information of the projection point of the photographed point on the image. It is judged whether the four photographed points corresponding to the four vertices of the first quadrilateral in the image are coplanar by the position of the photographed point, and when the four photographed points are coplanar, the four photographed points are surrounded by the second a quadrilateral, when the edge angle of the second quadrilateral and the side length relationship satisfy a preset condition, determining the side length ratio of the adjacent sides of the second quadrilateral, correcting the first quadrilateral to a rectangle, the rectangle The adjacent sides have the side length ratio.
  • Embodiments of the present invention can accurately correct a distorted rectangle in an image and correct it to an undistorted rectangle.
  • The technical solution provided by the embodiments of the invention can improve the correction accuracy of rectangular frames in images and ensure that the corrected rectangle is not distorted.
  • Figure 1a is a schematic view of a rectangular frame projection image
  • Figure 1b is a schematic diagram of the corrected rectangular frame
  • 2a is a first schematic diagram of misdetection using the prior art
  • 2b is a schematic diagram of a second type of misdetection using the prior art
  • 2c is a schematic diagram of a third type of false detection that occurs in the prior art
  • Figure 2d is a fourth schematic diagram of misdetection using the prior art
  • FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
  • Figure 4a is a schematic diagram of a straight line represented by (r, θ) in a Cartesian coordinate system;
  • Figure 4b is a schematic diagram of a curve in the (r, ⁇ ) space corresponding to any point on a straight line represented by (r, ⁇ ) in a Cartesian coordinate system;
  • Figure 4c is a schematic diagram showing the intersection of curves in a (r, ⁇ ) space corresponding to a plurality of points on a straight line represented by (r, ⁇ ) in a Cartesian coordinate system;
  • FIG. 5 is a schematic diagram of photographing of a stereo camera
  • FIG. 6 is a schematic diagram of projection of a captured point in a three-dimensional coordinate system
  • Figure 7a is a schematic view showing the projection of an area including a rectangular frame in the image before correction
  • Figure 7b is a schematic diagram of the corrected rectangular frame image
  • FIG. 8 is a structural diagram of a first image processing apparatus according to an embodiment of the present invention.
  • Figure 9 is a flow chart showing an image processing method using the apparatus shown in Figure 8.
  • FIG. 10 is a structural diagram of a second image processing apparatus according to an embodiment of the present invention.
  • Figure 11 is a flow chart showing an image processing method using the apparatus shown in Figure 10.
  • the "edge line” referred to in the embodiment of the present invention refers to a line composed of dots having a large difference in gray value from surrounding pixels in the image.
  • the "edge point” referred to in the embodiment of the present invention refers to a point where the gradation value changes greatly in one direction, and can also be understood as a point on the "edge line” in the image.
  • The "feature point" mentioned in the embodiments of the present invention refers to a point in the image located in a region where the gray value changes drastically, which is easier to distinguish from surrounding pixels and easy to detect; that is, a point whose gray value changes greatly in all directions. For example, a corner point of a rectangular box in an image.
  • the "projection point” referred to in the embodiment of the present invention refers to a point at which a "shot point” corresponds to an image projected on an image.
  • a device with a depth sensor and an optical camera as referred to in an embodiment of the present invention can integrate a depth sensor in an optical camera, and in this case, the device can be understood as a camera with a depth sensor. No special explanation will be given in the specification.
  • FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
  • the method provided by the embodiment of the present invention determines whether the four captured points are coplanar by obtaining the positions of the four captured points corresponding to the four vertices of the first quadrilateral imaged by the rectangular frame in the image.
  • the four line segments composed of the four taken points enclose a second quadrilateral. It can be understood that the second quadrilateral refers to a graphic corresponding to the rectangular border of the object.
  • Some judgment conditions can be used to determine whether the four line segments of the second quadrilateral can form a rectangle in the real world.
  • the opposite two sides should be parallel, and the angle between the adjacent two sides should be a right angle.
  • the lengths of the two sides should be equal, and the distance between the opposite sides should be equal to the length of the other two sides.
  • the judgment conditions can be appropriately relaxed.
  • The image processing method provided by the embodiments of the present invention can determine, according to the positions of the photographed points corresponding to the vertices of the quadrilateral formed by the image of the rectangular frame, whether the quadrilateral formed by those coplanar points meets the condition of a rectangle to be corrected. If it does, the side length ratio of the adjacent sides of the quadrilateral formed by the coplanar points is calculated, and the quadrilateral in the image is corrected to a rectangle whose adjacent sides have that side length ratio. This avoids correcting a quadrilateral whose corresponding object is not a rectangle, and also avoids distortion of the corrected rectangle's aspect ratio (side length ratio) relative to the original rectangular object. As shown in Figure 3, the following steps are included:
  • Step 301: detecting a first quadrilateral in the first image, where the first quadrilateral includes four vertices, the four vertices correspond to four photographed points, and the four vertices are the projection points of the four photographed points on the first image.
  • the area or perimeter of the first quadrilateral is greater than the first threshold.
  • the first threshold may be 1/4 of the sum of the image area or the width and height. That is, the area of the first quadrilateral is larger than 1/4 of the image area, and/or the circumference of the first quadrilateral is larger than 1/4 of the sum of the width and height of the image.
  • Optionally, the first quadrilateral in the first image may be detected by detecting the edge lines in the first image and selecting four of the detected edge lines to form the first quadrilateral on the first image. Specifically, this may include step 301a and step 301b.
  • Step 301a performing edge line detection on the pixel points on the first image.
  • the points in the image that differ greatly from the gray values of the surrounding pixels tend to be located in the edge regions of the image, and each edge line in the image is composed of such points on the edge line.
  • edge detection algorithms include Canny, Sobel, Prewitt, and others.
  • Edge points in the image are detected, all edge lines are obtained according to Hough Transform, candidate edge line segments are selected from all edge lines, and candidate edge line segments constitute set E.
  • FIG. 4a-4c illustrate an edge line detection method according to an embodiment of the present invention, which is specifically as follows:
  • Figure 4a is a schematic diagram showing a straight line represented by (r, θ) in a Cartesian coordinate system.
  • A line segment perpendicular to the line is drawn from the origin. Assuming the distance from the origin to the line is r and the angle between the perpendicular and the x-axis is θ, the relationship between any point (x, y) on the line and (r, θ) is given by formula (1):

    x·cos θ + y·sin θ = r    (1)
  • the Hough transform is performed on the edge points to obtain a curve in the (r, ⁇ ) space at any point (x, y) on the straight line in the Cartesian coordinate system as shown in Fig. 4b.
  • the curves in the (r, ⁇ ) space corresponding to the points on the same line in the Cartesian coordinate system intersect at one point, as shown in Figure 4c.
  • a line in the Cartesian coordinate system corresponds to a point in the (r, ⁇ ) space.
  • Let S(i) denote the number of curves intersecting at point i in the (r, θ) space, where i ranges over the intersection points in the (r, θ) space. The values in the set S containing all S(i) can be sorted from largest to smallest, and the k points that satisfy a preset condition are selected as candidate edge segments capable of forming a rectangular frame.
  • the reserved candidate edge segments are formed into a set E.
  • the maximum value S max in the set S can be calculated, and the edge line segment of S(i) ⁇ S max *T is retained, and T is a certain threshold, such as 5% or 10%. It can be understood that the candidate edge segments in the set E are the longer edge segments among all the edge segments.
  • the above method for detecting a straight line (or edge line) according to the Hough transform is only one of a plurality of detection methods, and may also be implemented by a method such as linear fitting.
  • Optionally, the intersections in the (r, θ) space of the curves corresponding to collinear points in the Cartesian coordinate system may be distributed within a certain range. In that case, curves in the (r, θ) space that all pass through a small rectangle of width d r and height d θ may be treated as intersecting, and the points in the Cartesian coordinate system corresponding to these curves may be considered collinear, where d r and d θ are the width and height of the small rectangular frame in the (r, θ) space. In this case, the collinear points detected in the Cartesian coordinate system must be fitted with a straight line to obtain the line equation.
  • Step 301b obtaining a first quadrilateral from the set of image edge line segments.
  • The angles between the four edge segments and the x-axis are calculated separately; assuming the four angles are θ 1 , θ 2 , θ 3 and θ 4 , the four angles can be sorted, and after sorting, the line segments in the first two positions and the line segments in the last two positions form the two pairs of opposite sides respectively. If the four line segments sorted by angle are l A , l B , l C and l D , then the two intersections of l A with l C and l D , and the two intersections of l B with l C and l D , can be calculated to obtain the four vertices of the quadrilateral.
  • The quadrilateral region V enclosed by the four edge segments taken from E needs to meet a certain preset condition, for example that the area or perimeter of the region is greater than a certain threshold T, such as 1/4 of the total image area or 1/4 of the sum of the image width and height. When the region V satisfies this preset condition (the first threshold), the quadrilateral may be referred to as the first quadrilateral.
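Assembling the quadrilateral from the four selected lines and applying the first threshold can be sketched as follows. The line representation a·x + b·y = c and the names are assumptions; the area uses the shoelace formula:

```python
import math

def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c) with a*x + b*y = c."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def quad_passes_threshold(lA, lB, lC, lD, img_w, img_h):
    """Build the quadrilateral from lines sorted by angle (lA, lB the
    first two, lC, lD the last two) and apply the first threshold:
    area > 1/4 of the image area or perimeter > 1/4 of (width + height)."""
    v = [intersect(lA, lC), intersect(lA, lD),
         intersect(lB, lD), intersect(lB, lC)]          # ordered vertices
    area = abs(sum(v[i][0] * v[(i + 1) % 4][1] - v[(i + 1) % 4][0] * v[i][1]
                   for i in range(4))) / 2              # shoelace formula
    per = sum(math.dist(v[i], v[(i + 1) % 4]) for i in range(4))
    return area > img_w * img_h / 4 or per > (img_w + img_h) / 4
```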
  • Step 302 Determine distance information of the four taken points with respect to a camera that captures the first image.
  • a first distance of each of the four taken points to a camera that captures the first image is determined by a depth sensor.
  • the first image can be acquired by a device with a depth sensor and an optical camera.
  • The depth information of the photographed point corresponding to a point in the first image is obtained from the depth sensor. The depth information refers to the Euclidean distance, that is, the distance from the photographed point to the camera, and can be denoted d; that is, d is the first distance of the photographed point.
  • A camera three-dimensional coordinate system is established for the camera that captures the first image, with origin O and the z-axis along the main optical axis, as shown in the coordinate system of FIG. 6 below. The image plane is perpendicular to the z-axis, and the distance from the image plane to the origin is the focal length f of the camera. The first distance is then the distance from the photographed point to the origin of the camera's three-dimensional coordinate system.
  • In the embodiments of the present invention, the distance from the photographed point to the origin of the camera's three-dimensional coordinate system represents the distance information of the subject relative to the camera; that is, the first distance may be the distance from the photographed point to the origin of the camera's three-dimensional coordinate system, where the camera's main optical axis is selected as the z-axis of that coordinate system.
  • Optionally, the second distance of each of the four photographed points is determined from the first image and the second image. The second distance of each photographed point is the distance from that point to the first plane, where the first plane is perpendicular to the main optical axis of the camera that captured the first image and passes through the origin of that camera's three-dimensional coordinate system; the camera that captured the first image lies in the first plane.
  • The second image includes the projection points of the four photographed points on the second image; the camera that captured the second image is also located in the first plane, and its main optical axis is parallel to that of the camera that captured the first image.
  • The second distance of each photographed point is obtained from the coordinate information of each of the four vertices on the first image, the coordinate information of the projection point of each photographed point on the second image, the focal length of the camera that captured the first image, and the focal length of the camera that captured the second image.
  • First, two images, the first image and the second image, can be obtained by a stereo camera.
  • From the coordinates of the matching feature points in the two images, the distance between the two cameras of the stereo camera, and the focal lengths of the cameras corresponding to the two images, the second distance is obtained: the distance from the photographed points corresponding to the four vertices to the plane that passes through the two camera origins and is perpendicular to the main optical axes of the two cameras.
  • The stereo camera can be obtained by calibrating a binocular camera.
  • The two cameras of the binocular camera can be understood as the two eyes of a person: the first distance of a photographed point can be understood as the distance from the photographed point to either eye, and the second distance of the photographed point can be understood as the distance from the photographed point to the face.
  • The plane of the face is perpendicular to the main optical axes of the two eyes, and the face contains the two eyes.
  • the vertices of the quadrilateral in the image belong to the feature points, for example, the four vertices of the quadrilateral obtained by the rectangular projection are feature points.
  • Feature point tracking is performed on two images obtained by the stereo camera, and the matched feature points are two projection points of the same photographed point on the two images.
  • By performing feature point tracking on the two images, combined with the focal lengths of the two cameras of the stereo camera and the distance between the two cameras, the distance of the photographed point relative to the stereo camera can be obtained.
  • the matching feature points in the two images are the projection points of the same photographed point in the two images.
  • The matching feature points can be obtained by feature point tracking. A feature point in an image is generally described by a feature descriptor computed from the points in its surrounding area; commonly used feature descriptors include SIFT, SURF, and HoG, and a descriptor is usually a vector. By detecting feature points in different images and computing the similarity between the descriptors of each pair of feature points (for example, the Euclidean distance), it can be determined whether two feature points match, so that feature points are tracked between different frame images.
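  • The descriptor-matching step described above can be sketched in Python. This is a minimal illustration, not the patented method itself: the function names, the toy 4-dimensional descriptor vectors, and the distance threshold are invented for the example (real SIFT or SURF descriptors are 128- or 64-dimensional).

```python
# Hedged sketch: match feature descriptors between two images by Euclidean
# distance. Descriptor extraction (SIFT/SURF/HoG) is assumed to be done already.
import math

def euclidean(a, b):
    # Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptors(desc_left, desc_right, max_dist=0.5):
    # For each left descriptor, find the closest right descriptor; accept the
    # pair only if the distance is below the (illustrative) threshold.
    matches = []
    for i, dl in enumerate(desc_left):
        j, d = min(
            ((j, euclidean(dl, dr)) for j, dr in enumerate(desc_right)),
            key=lambda t: t[1],
        )
        if d <= max_dist:
            matches.append((i, j))
    return matches

desc_left = [[0.1, 0.9, 0.0, 0.2], [0.8, 0.1, 0.3, 0.0]]
desc_right = [[0.82, 0.12, 0.28, 0.01], [0.11, 0.88, 0.02, 0.19]]
print(match_descriptors(desc_left, desc_right))  # [(0, 1), (1, 0)]
```

  • Matched index pairs correspond to two projection points of the same photographed point on the two images, as stated above.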
  • Embodiments of the present invention provide a method of calculating the second distance with reference to FIG. 5.
  • FIG. 5 is a schematic diagram of photographing of a stereo camera.
  • The coordinate system origins O_l and O_r of the two cameras of the stereo camera are obtained by binocular camera calibration.
  • O_l and O_r are the origins of the camera three-dimensional coordinate systems corresponding to the two cameras of the stereo camera.
  • The lines on which O_lF_l and O_rF_r lie are the directions of the main optical axes of the two cameras, and the lengths of the segments O_lF_l and O_rF_r are the focal lengths of the two cameras.
  • The planes through point F_l and point F_r perpendicular to the main optical axes are the image planes of the two cameras.
  • the main optical axes of the two cameras are parallel.
  • The plane G is perpendicular to the main optical axes O_lF_l and O_rF_r, and the two camera coordinate system origins O_l and O_r lie on plane G. Then the distance d from point P to either of the two origins O_l and O_r is the first distance, and the distance b from point P to plane G is the second distance.
  • Let the vertical projection of point P on plane G be P'; then the length of PP' is b.
  • Draw a perpendicular from point P to the plane defined by the two main optical axes of the cameras, meeting that plane at point A.
  • Let the plane defined by the main optical axes O_lF_l and O_rF_r of the two cameras be plane H.
  • Plane H is perpendicular to plane G and intersects it along the line O_lO_r.
  • Let A' be the foot of the perpendicular from A to the line O_lO_r; the line AA' lies in plane H and is perpendicular to the line O_lO_r.
  • The line AA' is therefore perpendicular to plane G. It can be seen that the line AA' is parallel to the line PP' and perpendicular to the line P'A'.
  • The line PA is parallel to the line P'A', and the line PP' is parallel to the line AA', so the parallelogram PP'A'A is a rectangle.
  • Therefore, the length of AA' also equals b.
  • The sides O_lA and O_rA of the triangle O_lO_rA intersect the two image planes at points C_l and C_r, respectively.
  • F_lC_l is the positive direction of the x-axis of the left camera image plane, and the x coordinate of the point C_l is x_l.
  • C_rF_r is the positive direction of the x-axis of the right camera image plane, and the x coordinate of the point C_r is x_r.
  • The line O_lO_r through the origins of the two camera three-dimensional coordinate systems is parallel to the two image planes.
  • The distance b from the point P to the plane G, which passes through the two camera coordinate system origins and is perpendicular to the main optical axes of the two cameras, can be calculated by formula (2):
  • b = (w × f) / (x_l − x_r)    (2)
  • where w is the distance between the origins of the two camera coordinate systems, that is, the length of the segment O_lO_r; f is the focal length of the two cameras; and D = x_l − x_r is the disparity.
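  • Formula (2) can be illustrated with a short Python sketch. The function name and the numeric values (baseline, focal length, pixel coordinates) are illustrative assumptions, not values from the embodiment.

```python
# Hedged sketch of formula (2): second distance b from the disparity of the
# two projections of a photographed point.
def second_distance(w, f, x_l, x_r):
    # b = w * f / (x_l - x_r), where D = x_l - x_r is the disparity.
    disparity = x_l - x_r
    if disparity <= 0:
        raise ValueError("point must lie in front of both cameras")
    return w * f / disparity

# Illustrative values: baseline 0.1 m, focal length 800 px, disparity 40 px.
print(second_distance(0.1, 800.0, 250.0, 210.0))  # 2.0
```

  • As the derivation above shows, a larger disparity x_l − x_r corresponds to a photographed point closer to the plane G.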
  • The point P is a photographed point corresponding to a point in the image.
  • The distance b from the point P to the plane that passes through the two camera coordinate origins and is perpendicular to the two camera main optical axes is the second distance of the photographed point P.
  • x_l and x_r can usually be obtained by tracking algorithms such as the scale-invariant feature transform SIFT, speeded-up robust features SURF, and the histogram of oriented gradients HoG.
  • O_l and O_r are the origins of the two camera coordinate systems.
  • The direction of the main optical axis of the camera corresponding to O_l or O_r can be taken as the Z axis.
  • The second distance b is then the Z-axis coordinate of the photographed point.
  • Step 303: Determine the positions of the four photographed points according to the distance information of the four photographed points and the position information of the points on the first image.
  • In one manner, the three-dimensional coordinates of each photographed point in the three-dimensional coordinate system are determined from the first distance of each of the four photographed points, the two-dimensional coordinates of each of the four vertices on the first image, the two-dimensional coordinates of the intersection of the main optical axis of the camera that captured the first image with the first image, and the focal length of that camera.
  • In another manner, the three-dimensional coordinates of each photographed point are determined from the second distance of each of the four photographed points, the two-dimensional coordinates of each of the four vertices on the first image, the two-dimensional coordinates of the intersection of the main optical axis of the camera that captured the first image with the first image, and the focal length of that camera.
  • Embodiments of the present invention provide two methods of determining three-dimensional coordinates. It can be divided into three-dimensional coordinate calculation method of camera with depth sensor and three-dimensional coordinate calculation method of stereo camera.
  • the three-dimensional coordinate calculation method of the camera with the depth sensor can be understood as calculating the three-dimensional coordinates by the first distance.
  • the three-dimensional coordinate calculation method of the stereo camera can be understood as calculating the three-dimensional coordinates by the second distance.
  • The three-dimensional coordinates mentioned in the embodiments of the present invention are given in a coordinate system whose Z axis is the main optical axis of the camera.
  • the x and y axes in the three-dimensional coordinate system may be parallel to the x and y axes on the image plane, and the directions are the same.
  • Fig. 6 is a schematic view showing the projection of the subject in a three-dimensional coordinate system.
  • The direction of the Z axis in FIG. 6 corresponds to the direction of the main optical axis of the camera in FIG. 5.
  • O is the origin of the camera three-dimensional coordinate system
  • the XYZ axis determines the three-dimensional coordinate system of the camera
  • The line OZ is the main optical axis (principal axis) of the camera.
  • F is the intersection of the main optical axis and the image plane.
  • P is any point in the space (ie, the point of view)
  • Q is the projection of point P in the image plane.
  • The line PA is perpendicular to the plane OXZ and intersects it at point A; through point A, draw the line AB parallel to the X axis, meeting the Z axis at point B.
  • the plane QCF is perpendicular to the Z axis. Since the Z-axis and the two intersecting lines PA and AB in the PAB plane are both perpendicular, the Z-axis is perpendicular to the plane PAB. Therefore, the straight line PB is perpendicular to the Z axis. Also, since the Z axis is perpendicular to the image plane QCF, the Z axis is perpendicular to the straight line QF.
  • The main optical axis of the camera is the Z axis; therefore, the distance from the point P corresponding to the point Q to the plane that passes through the two camera coordinate system origins O_l and O_r and is perpendicular to the main optical axis is the Z coordinate of the point P. That is, P_z = b.
  • Since b can be directly calculated by formula (2), the three-dimensional coordinates of point P can then be calculated from the similar-triangle relationships, as shown in formulas (6), (7) and (8):
  • P_x = (Q_x − F_x) · P_z / f    (6)
  • P_y = (Q_y − F_y) · P_z / f    (7)
  • P_z = b    (8)
  • where P_x, P_y, and P_z are the x, y, and z coordinates of the captured point P in the camera three-dimensional coordinate system, Q_x and Q_y are the image-plane coordinates of the projection Q, and F_x and F_y are the image-plane coordinates of the intersection F of the main optical axis with the image plane.
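  • The similar-triangle back-projection of formulas (6), (7) and (8) can be sketched as follows. The function name and all numeric values are hypothetical, chosen only to illustrate the relationships.

```python
# Hedged sketch: recover the camera three-dimensional coordinates of a
# photographed point P from the image coordinates (q_x, q_y) of its projection
# Q, the principal point (f_x, f_y), the focal length f, and the second
# distance b.
def backproject(q_x, q_y, f_x, f_y, f, b):
    p_z = b                      # P_z is the second distance
    p_x = (q_x - f_x) * p_z / f  # similar triangles along the x direction
    p_y = (q_y - f_y) * p_z / f  # similar triangles along the y direction
    return (p_x, p_y, p_z)

# A point 100 px right and 50 px above the principal point, f = 800 px, b = 2 m.
print(backproject(420.0, 290.0, 320.0, 240.0, 800.0, 2.0))  # (0.25, 0.125, 2.0)
```

  • Applying this to the four vertices of the detected quadrilateral yields the positions of the four photographed points used in the coplanarity and rectangle checks below.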
  • The focal length data of the cameras to be used, the distance between the two cameras of the binocular camera, and the related executable programs for the image processing are stored in the memory of the corresponding image processing device.
  • The processor of the image processing apparatus executes the data and programs stored in its memory to implement the method presented in FIG. 3.
  • Step 304: When it is determined from the positions of the four photographed points that the four photographed points are coplanar, the four photographed points enclose a second quadrilateral. When the edge angles and side length relationships of the second quadrilateral satisfy the preset condition, the side length ratio of two adjacent sides of the second quadrilateral is determined, and the first quadrilateral is corrected to a rectangle whose adjacent sides have that side length ratio.
  • Specifically, a plane in which three of the four photographed points lie is determined from their positions, and the third distance from the remaining photographed point to that plane is obtained; when the third distance is less than the second threshold, the four photographed points are coplanar.
  • the second threshold may be 1 mm or 5 mm or the like.
  • The method of judging whether the four photographed points are coplanar is to take any three of them, calculate the equation of the plane formed by the three points in the camera coordinate system, and calculate the distance from the remaining point to the plane. If the distance is less than a certain threshold, the four points are considered coplanar.
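  • The coplanarity test just described (plane through three points, distance of the fourth point, comparison with a threshold such as the 5 mm example above) can be sketched in Python; the function name and the default threshold value are illustrative.

```python
# Hedged sketch of the coplanarity check: fit a plane through three photographed
# points and test the distance of the fourth against a threshold (metres).
import math

def coplanar(p1, p2, p3, p4, threshold=0.005):
    # Normal of the plane through p1, p2, p3: n = (p2 - p1) x (p3 - p1).
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    # Third distance: |n . (p4 - p1)| / |n|.
    w = [p4[i] - p1[i] for i in range(3)]
    dist = abs(sum(n[i] * w[i] for i in range(3))) / norm
    return dist < threshold

# Four points of a square in the plane z = 1 are coplanar.
print(coplanar((0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)))  # True
```

  • If the three chosen points are nearly collinear the normal becomes unstable, so in practice the three points would be chosen to span the quadrilateral.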
  • the image processing method provided by the embodiment of the present invention can determine whether the photographed points are coplanar according to the position of the photographed point corresponding to the vertex of the quadrilateral in the image.
  • The four photographed points enclose a second quadrilateral; when the edge angles and side length relationships of the second quadrilateral satisfy the preset condition, the first quadrilateral in the image is corrected to a rectangle.
  • the misdetection in Figures 2a, 2b, 2c and 2d can be eliminated.
  • the correction accuracy of the distorted rectangular frame in the prior art image is improved.
  • The preset condition comprises one or more of the following: the absolute value of the angle between two opposite sides of the second quadrilateral is lower than the third threshold; the absolute value of the difference between the angle of two adjacent sides of the second quadrilateral and a right angle is lower than the fourth threshold; the absolute value of the difference between the lengths of two opposite sides of the second quadrilateral is lower than the fifth threshold; the absolute value of the difference between the distance between two opposite sides of the second quadrilateral and the length of the other two sides is lower than the sixth threshold.
  • In other words, the condition that the second quadrilateral is a rectangle may be relaxed to satisfying one or more of the following: the angle between two opposite sides should be within ±T_1, where T_1 is an angle greater than 0, such as 5° or 10°.
  • The difference between the lengths of two opposite sides should be within ±T_2, where T_2 is a rational number greater than zero.
  • The difference between the distance between two opposite sides and the length of the other two sides should be within ±T_3, where T_3 is a rational number greater than zero.
  • The angle between two adjacent sides should be in the range of 90° ± T_4, where T_4 is an angle greater than 0, such as 5° or 10°.
  • When the edge angles and side length relationships of the second quadrilateral satisfy the preset condition, the side length ratio of the adjacent sides of the second quadrilateral is determined, and the first quadrilateral is corrected to a rectangular image whose adjacent sides have that side length ratio.
  • The side length ratio of the two adjacent sides is the aspect ratio of the rectangle.
  • The image processing method provided by the embodiment of the present invention can obtain the side length ratio of two adjacent sides of a quadrilateral that meets the rectangle condition according to the positions of the four photographed points corresponding to the vertices of the quadrilateral in the image.
  • The embodiment of the present invention corrects the quadrilateral in the image to a rectangle with this aspect ratio by calculating the actual aspect ratio of the rectangle corresponding to the quadrilateral. This ensures that the corrected rectangle is not distorted and prevents the aspect ratio of the corrected rectangular image from differing from that of the original rectangle.
  • FIGS. 7a and 7b are schematic diagrams of image comparison before and after correction according to an embodiment of the present invention.
  • the mobile phone terminal has a camera, a processor, and a display screen.
  • FIGS. 7a and 7b are schematic diagrams of the display interface of the mobile phone terminal.
  • Fig. 7a is a schematic view showing the projection of an area including a rectangular frame in the image before correction.
  • 701 is a quadrilateral obtained by projecting a rectangular frame in the real world in the image, and the projected quadrilateral is distorted.
  • the opposite sides of the quadrilateral are not parallel and the angle between adjacent sides is not 90°.
  • 7011 is the projection of the area enclosed by the rectangular frame in the image. This area contains useful information. Due to the severe distortion of the far end of the captured rectangular frame in the image, some useful information is blurred; that is, the imaging area corresponding to 7011 cannot be seen clearly.
  • 7012 is an image of a rectangular frame border.
  • 702 is the display interface of the terminal. In FIG. 7a, the display interface 702 displays the captured image.
  • 703 and 704 are two rear cameras. The image displayed by 702 can be captured by two rear cameras, 703 and 704.
  • the terminal is integrated with a depth sensor.
  • only one of the rear cameras shown by 703 and 704 may be included.
  • The processor of the terminal can complete the correction process of the rectangular frame image by running the relevant program in the memory.
  • Figure 7b is a schematic diagram of the corrected rectangular frame image. As shown in FIG. 7b, 705 is a schematic diagram of the corrected rectangular frame image. The corrected rectangular frame, the corresponding part of 7011, is displayed in actual scale, and the information can be clearly displayed.
  • FIG. 8 is a structural diagram of a first image processing apparatus according to an embodiment of the present invention. As shown in FIG. 8, it includes a first camera 801, a second camera 802, a processor 803, a display screen 804, a memory 805, and a bus 806.
  • the apparatus provided in this embodiment captures an image through the first camera 801 and the second camera 802.
  • the first camera 801 and the second camera 802 can be integrated into one stereo camera.
  • the memory 805 is used to store information such as a program and a focal length of the camera.
  • the first camera 801, the second camera 802, the processor 803, the display screen 804, and the memory 805 communicate via a bus 806.
  • The processor 803 is configured to execute the program stored in the memory 805, so that the processor 803 performs the method steps in the above-described method embodiment of FIG. 3.
  • The memory 805 may be one storage device or a collective name for a plurality of storage elements, and is used to store information such as the programs and data required to run the image processing apparatus. The memory 805 may include a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or a combination of one or more storage media of any other form well known in the art.
  • The processor 803 can be a CPU, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and can implement or carry out the various illustrative logical blocks, units, and circuits described in connection with the present disclosure.
  • The processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
  • The bus 806 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus.
  • the bus 806 can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only one thick line is shown in Figure 8, but it does not mean that there is only one bus or one type of bus.
  • a first camera 801 is used to capture a first image.
  • The processor 803 is configured to detect a first quadrilateral in the first image; the first quadrilateral includes four vertices, the four vertices correspond to four photographed points, and the four vertices are the projection points of the four photographed points on the first image.
  • the second camera 802 is configured to capture a second image, and the second image includes projection points of the four taken points on the second image.
  • the processor 803 is configured to determine distance information of the four photographed points relative to the first camera 801 and the second camera 802 according to the first image and the second image.
  • The processor 803 is configured to determine the positions of the four photographed points according to the distance information of the four photographed points relative to the first camera 801 and the second camera 802 and the position information of the points on the first image.
  • The processor 803 is configured to determine, according to the positions of the four photographed points, that the four photographed points are coplanar, in which case the four photographed points enclose a second quadrilateral; when the edge angles and side length relationships of the second quadrilateral satisfy the preset condition, determine the side length ratio of the adjacent sides of the second quadrilateral and correct the first quadrilateral to a rectangle whose adjacent two sides have that side length ratio.
  • a display screen 804 is used to display the rectangle.
  • The processor 803 is configured to determine, from the first image and the second image, the second distance of each of the four photographed points, where the second distance of each photographed point is the distance from that point to a first plane in which the camera that captures the first image is located and which is perpendicular to the main optical axis of that camera.
  • The second image includes the projection points of the four photographed points on the second image; the main optical axes of the cameras that capture the first image and the second image are parallel, and the camera that captures the second image is located on the first plane.
  • The processor 803 is specifically configured to obtain the second distance of each photographed point from the coordinate information of each of the four vertices on the first image, the coordinate information of each photographed point's projection point on the second image, the focal length of the camera that captured the first image, and the focal length of the camera that captured the second image.
  • The processor 803 is specifically configured to determine the three-dimensional coordinates of each photographed point in the three-dimensional coordinate system according to the second distance of each of the four photographed points, the two-dimensional coordinates of each of the four vertices on the first image, the two-dimensional coordinates of the intersection of the main optical axis of the first camera with the first image, and the focal length of the first camera.
  • The image processing apparatus obtains, with its two cameras, two images in which the same photographed rectangle is projected into a quadrilateral.
  • The actual positions of the four photographed points of the quadrilateral are obtained from the distance information of the photographed points relative to the cameras and the position information of the points on the two images, and whether the quadrilateral included in an image is corrected to a rectangle is determined according to the actual positions of the four photographed points.
  • In the embodiment of the present invention, when the four photographed points are coplanar and the quadrilateral enclosed by them satisfies the preset condition, the quadrilateral produced by projecting the real-world rectangular frame is corrected according to the side length ratio of the adjacent sides of the quadrilateral enclosed by the four photographed points.
  • The technical solution provided by the embodiment of the invention can improve the correction accuracy of the rectangular frame in the image and ensure that the corrected rectangular image is not distorted.
  • FIG. 9 is a flow chart of an image processing method using the apparatus shown in FIG. 8.
  • This embodiment takes correcting a rectangular frame with a stereo camera as an example: two images in which the same photographed rectangle is imaged into a quadrilateral are obtained by the stereo camera.
  • The actual positions of the photographed points corresponding to the four vertices are obtained from the distance information between the photographed points and the stereo camera and the position information of the points on the two images, and whether the quadrilateral included in either image is corrected to a rectangle is determined according to the actual positions of the four photographed points. As shown in FIG. 9, the following steps are included:
  • Step 901: Two images are obtained by a stereo camera.
  • the stereo camera can be obtained by calibration of a binocular camera.
  • Step 902: Detect longer edge line segments in one of the images.
  • Step 903: Take four segments out of the set of longer edge line segments, enumerate all possible combinations, determine the four vertices of the quadrilateral enclosed by each combination, and eliminate the combinations that enclose a quadrilateral with a small area or a small perimeter.
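  • The enumeration and small-area elimination of step 903 can be sketched as follows. For simplicity this sketch assumes candidate corner points (rather than raw edge segments) have already been extracted and are roughly ordered around each quadrilateral; the function names and the area and perimeter thresholds are illustrative.

```python
# Hedged sketch of step 903: enumerate combinations of four candidate corners
# and discard quadrilaterals that are too small in area or perimeter.
import itertools
import math

def shoelace_area(quad):
    # Area of a 2-D polygon given its vertices in order (shoelace formula).
    n = len(quad)
    s = sum(quad[i][0] * quad[(i + 1) % n][1] - quad[(i + 1) % n][0] * quad[i][1]
            for i in range(n))
    return abs(s) / 2.0

def perimeter(quad):
    n = len(quad)
    return sum(math.dist(quad[i], quad[(i + 1) % n]) for i in range(n))

def candidate_quads(corners, min_area=1000.0, min_perimeter=100.0):
    # Keep only quadrilaterals large enough to be a plausible rectangular frame.
    kept = []
    for quad in itertools.combinations(corners, 4):
        if shoelace_area(quad) >= min_area and perimeter(quad) >= min_perimeter:
            kept.append(quad)
    return kept

# One large quadrilateral survives; a tiny one is eliminated.
print(len(candidate_quads([(0, 0), (100, 0), (100, 80), (0, 80)])))  # 1
```

  • Each surviving combination is then processed in turn by steps 904 to 909.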
  • The selection of the quadrilateral has been described in step 301b and will not be repeated here.
  • Step 904: Determine whether there is a combination that has not yet been processed.
  • If all combinations have been processed, step 910 is executed to end the process. If there are still unprocessed combinations, step 905 is performed.
  • Step 905: Calculate the coordinates of the photographed points corresponding to the four vertices of the quadrilateral in the camera three-dimensional coordinate system.
  • The distance information between the photographed points corresponding to the four vertices of the quadrilateral and the two cameras is obtained, and then the positions of the photographed points corresponding to the four vertices are obtained from the distance information and the position information of the points on the two images.
  • The distance information between the photographed points corresponding to the four vertices and the two cameras includes: the distance from each photographed point to the plane that passes through the two camera coordinate origins and is perpendicular to the main optical axes of the two cameras.
  • The position information of the points on the two images includes: the coordinates of the projections of the photographed points corresponding to the four vertices on the two images, the coordinates of the intersections of the two camera main optical axes with the two images, and the like.
  • The method for calculating the three-dimensional coordinates is described in the examples in steps 302 and 303 and is not repeated here.
  • Step 906: Determine whether the photographed points corresponding to the four vertices are coplanar.
  • If they are not coplanar, step 904 is performed. If they are coplanar, step 907 is performed.
  • Step 907: Determine whether the photographed points corresponding to the four vertices enclose a rectangle.
  • Step 304 describes how to judge whether the quadrilateral enclosed by the photographed points corresponding to the four vertices meets the preset condition; a quadrilateral enclosed by the photographed points that satisfies the preset condition can be understood as a rectangle.
  • If no rectangle is enclosed, step 904 is performed. If a rectangle is enclosed, step 908 is performed.
  • Step 908: Calculate the aspect ratio of the rectangle enclosed by the photographed points corresponding to the four vertices.
  • A quadrilateral enclosed by the four photographed points that satisfies the preset condition can be understood as a rectangle, and the side length ratio of adjacent sides of that quadrilateral is understood as the aspect ratio of the rectangle.
  • In this way, the aspect ratio of the rectangle corresponding to the quadrilateral in the image can be calculated, and the quadrilateral imaged from the rectangular frame is then corrected based on the calculated aspect ratio.
  • Step 909: Correct the quadrilateral to a rectangular image with the aspect ratio described above.
  • Step 910: The process ends.
  • FIG. 10 is a structural diagram of a second image processing apparatus according to an embodiment of the present invention. As shown in FIG. 10, it includes a camera 1001, a depth sensor 1002, a processor 1003, a display screen 1004, a memory 1005, and a bus 1006.
  • the apparatus provided in this embodiment captures an image through the camera 1001.
  • the depth information of the taken point is recorded by the depth sensor 1002.
  • the memory 1005 is used to store information such as a program and a focal length of the camera.
  • The camera 1001, the depth sensor 1002, the processor 1003, the display screen 1004, and the memory 1005 communicate via the bus 1006.
  • The processor 1003 is configured to execute the program stored in the memory 1005, so that the processor 1003 performs the method steps in the above-described method embodiment of FIG. 3.
  • camera 1001 is used to capture an image.
  • The processor 1003 is configured to detect a first quadrilateral in the image; the first quadrilateral includes four vertices, the four vertices correspond to four photographed points, and the four vertices are the projection points of the four photographed points on the image.
  • the depth sensor 1002 is configured to determine distance information of the four captured points with respect to the camera, respectively.
  • The processor 1003 is configured to determine the positions of the four photographed points according to the distance information of the four photographed points and the position information of the points on the image.
  • The processor 1003 is configured to determine, according to the positions of the four photographed points, that the four photographed points are coplanar, in which case the four photographed points enclose a second quadrilateral; when the edge angles and side length relationships of the second quadrilateral satisfy the preset condition, determine the side length ratio of the adjacent sides of the second quadrilateral and correct the first quadrilateral to a rectangle whose adjacent sides have that side length ratio.
  • a display screen 1004 for displaying the rectangle.
  • The depth sensor 1002 is specifically configured to determine the distance from each of the four photographed points to the camera.
  • The distance from each photographed point to the camera can be recorded as the first distance of that photographed point, so as to correspond to the description in the method embodiment of FIG. 3 described above.
  • The processor 1003 is specifically configured to determine the three-dimensional coordinates of each photographed point in the three-dimensional coordinate system according to the distance from each of the four photographed points to the camera, the two-dimensional coordinates of each of the four vertices on the image, the two-dimensional coordinates of the intersection of the main optical axis of the camera with the image, and the focal length of the camera.
  • The processor 1003 is configured to determine, according to the positions of three of the four photographed points, the plane in which those three photographed points lie, and to obtain the distance from the remaining photographed point to that plane; when the distance from that photographed point to the plane is less than a preset threshold, the four photographed points are coplanar.
  • The image processing apparatus provided in this embodiment of the present invention obtains an image by means of a mobile device with a depth sensor and an optical camera.
  • The depth sensor obtains the distance information of the four photographed points corresponding to the four vertices of the quadrilateral on the image; the actual positions of the four photographed points are obtained from this distance information and the position information of the points on the image, and whether to correct the quadrilateral included in the image to a rectangle is determined based on the actual positions of the four photographed points.
  • The technical solution provided in this embodiment of the invention can improve the correction accuracy of rectangular frames in images and ensure that the corrected rectangular image is not distorted.
  • FIG. 11 is a flowchart of an image processing method using the apparatus shown in FIG. 10.
  • This embodiment takes a camera with a depth sensor correcting a rectangular frame as an example. The depth sensor obtains the distance information of the photographed points corresponding to the vertices of the quadrilateral on the image; the actual positions of the photographed points corresponding to the four vertices are obtained from this distance information and the position information of the points on the image, and whether to correct the quadrilateral included in the image to a rectangle is determined based on the actual positions of the four photographed points.
  • the following steps are included:
  • Step 1101, an image is obtained by a depth sensor and an optical camera.
  • The depth sensor obtains the depth information of the photographed point corresponding to any point on the image, that is, the distance from that photographed point to the camera that captures the image.
  • Step 1102, detect the longer edge line segments in the image.
  • Step 1103, take any four segments from the set of longer edge line segments, enumerate all possible combinations, determine the four vertices of the quadrilateral enclosed by each combination, and eliminate the quadrilaterals that enclose a small area or have a small perimeter.
  • The selection of the quadrilateral has been described in step 301b and is not repeated here.
  • Step 1104, determine whether there is any combination that has not yet been processed.
  • If there are still unprocessed combinations, step 1105 is performed; if there are no unprocessed combinations, step 1110 is executed to end the process.
  • Step 1105, calculate the coordinates, in the camera's three-dimensional coordinate system, of the four photographed points corresponding to the four vertices of the quadrilateral.
  • The method for calculating the three-dimensional coordinates is described in the examples in steps 302 and 303 and is not repeated here.
  • Step 1106, determine whether the four photographed points are coplanar.
  • If they are coplanar, step 1107 is performed; if they are not coplanar, step 1104 is performed.
  • Step 1107, determine whether the four photographed points enclose a rectangle.
  • As mentioned in step 304, it is determined whether the quadrilateral enclosed by the photographed points corresponding to the four vertices satisfies the preset condition. Therefore, a quadrilateral enclosed by those points that satisfies the preset condition can be understood as a rectangle.
  • If the four photographed points enclose a rectangle, step 1108 is performed; if they cannot enclose a rectangle, step 1104 is performed.
  • Step 1108, calculate the aspect ratio of the rectangle enclosed by the four photographed points.
  • A quadrilateral enclosed by the four photographed points that satisfies the preset condition can be understood as a rectangle, and the side-length ratio of two adjacent sides of that quadrilateral can be understood as the aspect ratio of the rectangle.
  • To ensure that the corrected rectangle is not distorted, the aspect ratio of the photographed rectangle corresponding to the quadrilateral in the image can be calculated.
  • The quadrilateral formed in the image by the photographed rectangular frame is then corrected based on the calculated aspect ratio.
  • Step 1109, correct the quadrilateral to a rectangular image with the above aspect ratio.
  • Step 1110, end the process.
  • The image processing method provided in this embodiment of the present invention determines the positions of the photographed points corresponding to the four vertices of the quadrilateral from the position information of the quadrilateral in the image and the distance information between the photographed object and the camera that captures the image. Based on the positions of the four photographed points, the quadrilateral in the image is corrected to a rectangle when the positions of the photographed points and the quadrilateral they enclose satisfy the preset condition. Because the projected quadrilateral of the real-world rectangular frame is corrected according to the side-length ratio of the quadrilateral enclosed by the four photographed points, the correction accuracy of rectangular frames in images can be improved, and the corrected rectangular image is not distorted.
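A minimal sketch of the correction in step 1109: the homography mapping the detected quadrilateral onto an upright rectangle is estimated from the four vertex correspondences with plain NumPy (rather than any particular library routine); the output height of 400 pixels is an arbitrary assumption, while the width follows from the computed aspect ratio:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H with H @ [x, y, 1]^T ~ [u, v, 1]^T
    from four point correspondences (direct linear system, h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def rectify_quad(quad, aspect_ratio, height=400):
    """Map an ordered quadrilateral (TL, TR, BR, BL) onto an upright
    rectangle whose adjacent sides have the computed aspect ratio."""
    w = aspect_ratio * height
    rect = [(0, 0), (w, 0), (w, height), (0, height)]
    return homography_from_points(quad, rect), (int(round(w)), height)

quad = [(120, 80), (520, 60), (560, 420), (90, 380)]  # detected vertices
H, size = rectify_quad(quad, aspect_ratio=1.5)
u, v, s = H @ np.array([120, 80, 1.0])                # top-left vertex
print(round(u / s), round(v / s), size)               # 0 0 (600, 400)
```

In a full pipeline the resulting H would be passed to an image-warping routine to resample the pixels of the quadrilateral into the output rectangle.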


Abstract

Embodiments of the present invention relate to an image processing method and apparatus. The method includes: detecting a first quadrilateral in an image, where the first quadrilateral includes four vertices, the four vertices correspond to four photographed points, and the four vertices are the projections of the four photographed points on the image; separately determining distance information of the four photographed points relative to the camera that captures the image; determining the positions of the four photographed points according to the distance information of the four photographed points and the position information of points on the image; and, when it is determined according to the positions of the four photographed points that the four photographed points are coplanar, the four photographed points enclose a second quadrilateral, and when the included angles and side-length relationships of the second quadrilateral satisfy a preset condition, determining the side-length ratio of two adjacent sides of the second quadrilateral and correcting the first quadrilateral to a rectangle whose adjacent sides have that side-length ratio. The positions of the photographed points are determined by means of the distance information between the photographed points and the camera, and a distorted rectangle included in the image can be corrected accurately according to those positions.

Description

图象处理方法及装置 技术领域
本发明实施例涉及图象处理技术领域,尤其涉及一种图象处理方法及装置。
背景技术
随着科技的不断发展,越来越多的智能设备被应用到日常生活中,如智能手机、平板等。这些智能设备一般都带有摄像头,可以随时对幻灯片、白板、广告牌等具有有用信息的平面进行拍照,从而不必再费时费力地进行记录。
但由于用摄像头拍摄的图象是经过投影变换得到的,因此,可能使图象有畸变。如图1a为现有技术拍摄矩形框成象示意图。如图1a中O点为摄像头所在的位置,矩形框P1P2P3P4在图象上的投影为四边形Q1Q2Q3Q4。四边形Q1Q2Q3Q4相对两条边不再平行,相邻两条边的夹角也不能保证是90°,相对两条边的长度也不再相等。这种情况在使用摄像头拍摄的矩形框远端尤其明显,其中远端是指矩形框距离摄像头较远的边或较远的两边之间的夹角,由此带来的图象畸变会使一部分有用信息难以看清。因此,需要对摄像头拍摄的带有畸变的四边形图象进行校正,使得该四边形在图象中仍然为矩形,且宽高比与真实世界中的矩形框的宽高比保持一致。校正过程如图1b所示,将图象中的四边形101,校正为矩形102。
现有技术中,为了便于对四边形图象进行有效校正,需要先检测四边形的四条边缘线是否能够构成真实世界中的矩形。其中,判断图象中检测出的四边形的四条边缘线能否构成真实世界中的矩形的条件包括:相对的两条边的方向夹角应该在180°±30°范围内,相对的两条边的距离需要大于图象宽或高 的1/5,相邻的两条边的夹角应该在±90°±30°范围内,四边形的周长应该大于图象宽和高之和的1/4。
现有技术中采用上述判断条件可能会存在误检的情况。图2a-图2d中示出了几种可能的误检情况,图2a、图2b中与本子长边不平行的尺子的边缘被误检成本子的矩形边缘。图2c中桌子的边缘被误检成本子的矩形边缘。图2d中书籍矩形边缘内部的一条边缘也被误检成需要校正的书籍封面的边缘。
现有技术中的矩形框检测和校正方法,存在误检的情况,无法准确的判断检测出的四边形是否为需要还原的畸变矩形,无法保证校正后的矩形不失真。
发明内容
本发明实施例提供了一种图象处理方法及装置,可以准确的判断检测出的四边形是否为需要还原的矩形,并且使校正后的矩形与真实世界中的矩形框的宽高比保持一致。
本发明实施例提及的“被摄点”指的是所要拍摄的真实物体上的点,其中,所要拍摄的真实物体可叫做“被摄物”,“被摄点”可理解为“被摄物”上的点。
本发明实施例提及的“第一距离”为被摄点至拍摄该被摄点的相机的距离。“第二距离”为被摄点到拍摄该被摄点的相机所在的第一平面的距离,且该第一平面与拍摄该被摄点的相机的主光轴垂直。“第三距离”为四个被摄点中其中一个被摄点到其他三个被摄点组成的平面的距离。可以理解的是,上述“第一”、“第二”、“第三”仅用以区分不同的距离,不应理解为对不同距离的限定。
第一方面,本发明实施例提供了一种图象处理方法,该方法包括:检测第一图象中的第一四边形,第一四边形包括四个顶点,四个顶点对应四个被摄点,四个顶点为四个被摄点在第一图象上的投影点;分别确定四个被摄点 相对于拍摄第一图象的相机的距离信息;分别根据四个被摄点的距离信息和第一图象上的点的位置信息,确定四个被摄点的位置;根据四个被摄点的位置,确定四个被摄点共面时,四个被摄点围成第二四边形,当第二四边形的边夹角以及边长关系满足预设条件时,确定第二四边形相邻两边的边长比,将第一四边形校正为矩形,该矩形的相邻两边具有所述边长比。
具体地,本发明实施例提供的图象处理方法,可根据图象中四边形的顶点对应的四个被摄点的位置,得到四个被摄点围成的符合待校正矩形的条件的四边形的相邻两条边的边长比。如将符合待校正矩形的条件的四边形理解为矩形,相邻两条边的边长比可理解为矩形的宽高比。本发明实施例通过计算图象中四边形对应的被摄矩形的实际宽高比,将图象中的四边形校正为该宽高比的矩形。可以保证校正后的矩形不失真,避免校正后的矩形图象的宽高比不同于原被摄矩形而导致的失真。
在一种可能的实施方式中,第一四边形的面积或周长大于第一阈值。
具体地,第一四边形的面积或周长大于图象总面积的1/4或图象宽高之和的1/4,以剔除区域较小的四边形,避免误将原矩形框中包含的小的矩形框校正为真实的矩形框。
在一种可能的实施方式中,通过深度传感器确定四个被摄点中每个被摄点到拍摄第一图象的相机的第一距离。
在一种可能的实施方式中,通过第一图象和第二图象确定四个被摄点中每个被摄点的第二距离,每个被摄点的第二距离为每个被摄点到第一平面的距离,第一平面为拍摄第一图象的相机所在的平面,且第一平面与拍摄第一图象的相机的主光轴垂直;其中,第二图象包括四个被摄点在第二图象上的投影点,拍摄第一图象的相机与拍摄第二图象的相机的主光轴相互平行,拍摄第二图象的相机位于第一平面上。
在一种可能的实施方式中,根据四个顶点中每个顶点在第一图象上的坐标信息、每个被摄点在第二图象上的投影点的坐标信息、拍摄第一图象的相 机的焦距及拍摄第二图象的相机的焦距得到每个被摄点的第二距离。
在一种可能的实施方式中,分别根据四个被摄点中每个被摄点的第一距离、四个顶点中每个顶点在第一图象上的二维坐标、拍摄第一图象的相机的主光轴与第一图象所在的平面的交点的二维坐标以及拍摄第一图象的相机的焦距,确定每个被摄点在三维坐标系中的三维坐标。
在一种可能的实施方式中,分别根据四个被摄点中每个被摄点的第二距离、四个顶点中的每个顶点在第一图象上的二维坐标、拍摄第一图象的相机的主光轴与第一图象所在的平面的交点的二维坐标以及拍摄第一图象的相机的焦距,确定每个被摄点在三维坐标系中的三维坐标。
在一种可能的实施方式中,根据四个被摄点中三个被摄点的位置确定三个被摄点所在的第二平面,得到四个被摄点中除三个被摄点外的被摄点到第二平面的第三距离;当第三距离小于第二阈值时,四个被摄点共面。
在一种可能的实施方式中,当第二四边形的边夹角以及边长关系满足预设条件时,确定第二四边形相邻两边的边长比,其中,预设条件包括下述一项或多项:第二四边形相对两条边的夹角的绝对值低于第三阈值;第二四边形相邻两条边的夹角与直角之差的绝对值低于第四阈值;第二四边形相对两条边的长度之差的绝对值低于第五阈值;第二四边形相对两条边之间的距离与另外两条边的长度差的绝对值低于第六阈值。
具体地,可根据每个被摄点的深度信息确定图象中四边形对应的投影成象前的图形是否为待校正的矩形。或者,可根据每个被摄点距离拍摄该被摄点的相机所在的平面距离信息确定图象中四边形对应的投影成象前的图形是否为待校正的矩形。其中,上述平面与该相机主光轴垂直。当图象中四边形对应的投影成象前的图形满足待校正的矩形的条件时,将图象中的四边形校正为矩形。可以提高对图象中畸变矩形校正的准确率,保证校正后的矩形不失真。其中,图象中四边形对应的投影成象前的图形为该四边形对应的被摄物。
第二方面，本发明实施例提供了一种图象处理装置，该装置包括：摄像头、处理器、深度传感器、显示屏。摄像头，用于拍摄图象。处理器，用于检测图象中的第一四边形，第一四边形包括四个顶点，四个顶点对应四个被摄点，四个顶点为四个被摄点在图象上的投影点。深度传感器，用于分别确定四个被摄点相对于摄像头的距离信息。处理器，用于分别根据四个被摄点的距离信息和图象上的点的位置信息，确定四个被摄点的位置。处理器，用于根据四个被摄点的位置，确定四个被摄点共面时，四个被摄点围成第二四边形，当第二四边形的边夹角以及边长关系满足预设条件时，确定第二四边形相邻两边的边长比，将第一四边形校正为矩形，该矩形的相邻两边具有所述边长比。显示屏，用于显示矩形。
在一种可能的实施方式中,深度传感器,具体用于确定四个被摄点中每个被摄点到摄像头的第一距离。
在一种可能的实施方式中,处理器,具体用于分别根据四个被摄点中每个被摄点到摄像头的第一距离、四个顶点中每个顶点在图象上的二维坐标、摄像头的主光轴与图象交点的二维坐标以及摄像头的焦距,确定每个被摄点在三维坐标系中的三维坐标。
在一种可能的实施方式中,处理器,具体用于根据四个被摄点中三个被摄点的位置确定该三个被摄点所在的平面,得到四个被摄点中除三个被摄点外的被摄点到平面的第三距离;当第三距离小于预设阈值时,四个被摄点共面。
第三方面，本发明实施例提供了一种图象处理装置，该装置包括：第一摄像头、第二摄像头、处理器、显示屏。第一摄像头，用于拍摄第一图象。处理器，用于检测第一图象中的第一四边形，第一四边形包括四个顶点，四个顶点对应四个被摄点，四个顶点为四个被摄点在第一图象上的投影点。第二摄像头，用于拍摄第二图象，第二图象包括四个被摄点在第二图象上的投影点。处理器，用于根据第一图象和第二图象确定四个被摄点相对于第一摄像头和第二摄像头的距离信息。处理器，用于分别根据四个被摄点相对于第一摄像头和第二摄像头的距离信息，以及第一图象上的点的位置信息，确定四个被摄点的位置。处理器，用于根据四个被摄点的位置，确定四个被摄点共面时，四个被摄点围成第二四边形，当第二四边形的边夹角以及边长关系满足预设条件时，确定第二四边形相邻两边的边长比，将第一四边形校正为矩形，该矩形的相邻两边具有所述边长比。显示屏，用于显示矩形。
在一种可能的实施方式中,处理器,具体用于通过第一图象和第二图象确定四个被摄点中每个被摄点的第二距离,每个被摄点的第二距离为每个被摄点到第一摄像头所在的且与第一摄像头的主光轴垂直的第一平面的距离;其中,第一摄像头与第二摄像头的主光轴相互平行,第二摄像头位于第一平面上。
在一种可能的实施方式中,处理器,具体用于根据四个顶点中每个顶点在第一图象上的坐标信息、每个被摄点在第二图象的投影点的坐标信息、第一摄像头的焦距及第二摄像头的焦距得到每个被摄点的第二距离。
在一种可能的实施方式中,处理器,具体用于分别根据四个被摄点中每个被摄点的第二距离、四个顶点中每个顶点在第一图象上的二维坐标、第一摄像头的主光轴与第一图象交点的二维坐标以及第一摄像头的焦距,确定每个被摄点在三维坐标系中的三维坐标。
本发明实施例提供的图象处理方法及装置,根据被摄点相对于拍摄图象的相机的距离信息以及被摄点在图象上的投影点的位置信息,确定被摄点的位置。通过被摄点的位置,判断图象中第一四边形的四个顶点对应的四个被摄点是否共面,当四个被摄点共面时,四个被摄点围成第二四边形,当第二四边形的边夹角以及边长关系满足预设条件时,确定第二四边形相邻两边的边长比,将第一四边形校正为矩形,该矩形的相邻两边具有所述边长比。本发明实施例可以准确的校正图象中有畸变的矩形,将其校正为不失真的矩形。本发明实施例提供的技术方案可以提高图象中矩形框的校正准确率,保证校 正后的矩形图象不失真。
本发明的这些和其它方面在以下实施例的描述中会更加简明易懂。
附图说明
图1a为矩形框投影成象示意图;
图1b为畸变的矩形框校正示意图;
图2a为采用现有技术出现的第一种误检示意图;
图2b为采用现有技术出现的第二种误检示意图;
图2c为采用现有技术出现的第三种误检示意图;
图2d为采用现有技术出现的第四种误检示意图;
图3为本发明实施例提供的图象处理方法流程示意图;
图4a为笛卡尔坐标系中用(r,θ)表示一条直线的示意图;
图4b为笛卡尔坐标系中用(r,θ)表示的直线上的任意一点对应的(r,θ)空间中的曲线示意图;
图4c为笛卡尔坐标系中用(r,θ)表示的直线上的多个点对应的(r,θ)空间中的曲线的交点示意图;
图5为立体相机的拍照示意图;
图6为被摄点在三维坐标系中投影示意图;
图7a为校正前包括矩形框的区域在图象中的投影示意图;
图7b为校正后的矩形框图象示意图;
图8为本发明实施例提供的第一种图象处理装置架构图;
图9为采用图8所示的装置的图象处理方法流程示意图;
图10为本发明实施例提供的第二种图象处理装置架构图;
图11为采用图10所示的装置的图象处理方法流程示意图。
具体实施方式
下面结合附图,对本发明的实施例进行描述。
本发明实施例提及的“边缘线”指的是图象中与周围像素灰度值相差较大的点组成的线。本发明实施例提及的“边缘点”指的是在一个方向上灰度值变化较大的点,也可以理解为图象中位于“边缘线”上的点。本发明实施例提及的“特征点”指的是图象中位于灰度剧烈变化的区域的较易于与周围象素点区分开,易于检测的点,即在各个方向上灰度值变化较大的点。例如,图象中矩形框的角点。
本发明实施例提及的“投影点”指的是“被摄点”对应在图象上投影成象的点。本发明实施例提及的带有深度传感器和光学相机的设备,可将深度传感器集成在光学相机中,此时,可将该设备理解为带有深度传感器的相机。在说明书中将不再做特别说明。
图3为本发明实施例提供的图象处理方法流程示意图。本发明实施例提供的方法,通过得到图象中被摄矩形框成象的第一四边形的四个顶点对应的四个被摄点的位置,判断该四个被摄点是否共面,当该四个被摄点共面时,由该四个被摄点构成的四条线段围成第二四边形。可以理解的是,第二四边形指的是对应被摄物矩形边框的图形。
可以使用一些判断条件以判断第二四边形的四条线段是否能构成真实世界中的矩形,理想情况下,相对的两条边应该平行,相邻的两条边的夹角应该是直角,相对的两条边的长度应该相等,相对的两条边之间的距离应该等于另外两条边的长度。但考虑到实际工程应用中的噪声和误差等因素的影响,判断条件可以适当放宽。
本发明实施例提供的图象处理方法,可根据图象中矩形框成象的四边形顶点对应的被摄点的位置,判断图象中四边形的顶点对应的被摄点共面时围成的四边形是否符合待校正矩形的条件。如果符合,计算被摄点共面时围成的四边形的相邻两边的边长比,再将图象中的四边形校正为矩形,该矩形的相邻两边具有所述边长比。可避免图象中的四边形对应的被摄物的图形不是 矩形的情况,同时还可避免校正后的矩形宽高比(边长比)相对原矩形物体失真的情况。如图3所示,包括以下步骤:
步骤301,检测第一图象中的第一四边形,所述第一四边形包括四个顶点,所述四个顶点对应四个被摄点,所述四个顶点为所述四个被摄点在所述第一图象上的投影点。
优选地,第一四边形的面积或周长大于第一阈值。
其中,第一阈值可以是图象面积或宽高之和的1/4。即第一四边形的面积大于图象面积的1/4,和/或第一四边形的周长大于图象宽高之和的1/4。
可以通过检测第一图象中的边缘线来检测第一图象中的第一四边形,具体地,检测第一图象中的边缘线,从检测到的边缘线中任选四条,组成第一图象上的第一四边形。具体可包括步骤301a和步骤301b。
步骤301a,对第一图象上的像素点进行边缘线检测。
图象中与周围像素灰度值相差较大的点往往位于图象中的边缘区域,图象中的每一条边缘线都是由这样的一些位于该边缘线上的点构成的。常用的边缘检测算法包括Canny、Sobel、Prewitt等。
检测图象中的边缘点,根据霍夫变换(Hough Transform)得到所有的边缘线,从所有边缘线中选取候选边缘线段,候选边缘线段构成集合E。
图4a-图4c为本发明实施例提供的一种边缘线检测方法,具体如下:
图4a为笛卡尔坐标系中用(r,θ)表示一条直线的示意图,即xy坐标系中,对于任意一条直线,从原点做一条垂直于该直线的线段,假设原点到该直线的距离为r,垂线与x轴的夹角为θ,则直线上的任意一点(x,y)与(r,θ)之间的关系如式(1)所示。
r=xcosθ+ysinθ     (1)
对边缘点进行霍夫变换,得到如图4b所示的笛卡尔坐标系中直线上任意一点(x,y)对应于(r,θ)空间中的一条曲线。位于笛卡尔坐标系中同一条直线上的点对应的(r,θ)空间中的若干条曲线会相交于一点,如图4c所示。则 笛卡尔坐标系中的一条直线对应于(r,θ)空间中的一个点。
计算如图4c所示的多条曲线的交点,对于每一个交点,相交于该点的曲线数量记为N,则N越大,代表笛卡尔坐标系中对应的线段越长。
在一个示例中,如果S(i)=Ni,i=1…n,Ni表示(r,θ)空间中相交于某一点i(i为(r,θ)空间中交点的序号)的曲线的数量,则可以对包括所有S(i)的集合S中的值进行由大到小排序,选取其中满足预设条件的k个点作为能构成矩形框的候选边缘线段。被保留的候选边缘线段构成集合E。
例如,选取上述排序结果的前5%或10%,即
i≤n·T（i为按S(i)由大到小排序后的序号），
T为某一阈值,如5%或10%等。又例如,可以计算集合S中的最大值Smax,保留S(i)≥Smax*T的边缘线段,T为某一阈值,如5%或10%等。可以理解的是,集合E中的候选边缘线段为所有边缘线段中较长的边缘线段。
需要说明的是,上述根据霍夫变换进行直线(或边缘线)检测的方法只是多种检测方法之一,也可以使用线性拟合等方法实现。另外,在实际应用中,由于噪声和检测误差等因素的影响,笛卡尔坐标系中共线的点在(r,θ)空间中的曲线的交点可能分布在一定范围内,则把(r,θ)空间中任意长宽分别为dr和dθ的小矩形框包含的曲线作为相交的曲线,这些曲线在笛卡尔坐标系中对应的点可以认为是共线的。其中,dr和dθ分别为(r,θ)空间中小矩形框的宽高值。此时,需要对检测出的笛卡尔坐标系中共线的点进行线性拟合,以得到该直线方程。
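As a sketch of the voting idea behind formula (1), the following toy accumulator (function name, θ resolution and r resolution are illustrative assumptions) collects (r, θ) votes for a set of edge points and returns the strongest cell; a production system would instead use an optimized implementation such as OpenCV's `HoughLines`:

```python
import numpy as np

def hough_votes(points, n_theta=180, r_res=1.0):
    """Accumulate Hough votes r = x*cos(theta) + y*sin(theta) for a set
    of edge points; return the (r, theta) cell with the most votes."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    votes = {}
    for x, y in points:
        for ti, t in enumerate(thetas):       # one vote per point per theta
            r_cell = round((x * np.cos(t) + y * np.sin(t)) / r_res)
            votes[(r_cell, ti)] = votes.get((r_cell, ti), 0) + 1
    (r_cell, ti), n = max(votes.items(), key=lambda kv: kv[1])
    return r_cell * r_res, thetas[ti], n

# Ten collinear points on the vertical line x = 5 all vote for the same
# cell: r = 5, theta = 0.
r, theta, n = hough_votes([(5, y) for y in range(10)])
print(r, round(theta, 3), n)  # 5.0 0.0 10
```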
步骤301b,从图象边缘线段集合中得到第一四边形。
从较长的边缘线段集合E中任取四条,计算所有可能的组合,确定每一种组合所围成的四边形的四个顶点,并将围成的区域较小的四边形剔除。
从集合E中任取四条边缘线段l1、l2、l3和l4,确定四条边缘线段所围成的四边形区域,由于四条边缘线段可能是两两相交的,所以此四条边缘线段之间的交点最多有6个,需要从6个交点中找出4个正确的交点。
在一个示例中,分别计算四条边缘线段与x轴的夹角,假设四个夹角分 别为α1、α2、α3和α4,可以对四个夹角进行排序,则排序后夹角位于前两位的线段和位于后两位的线段分别为相对的两条边,假设按照夹角排序后的四条线段分别为lA、lB、lC和lD,则可以分别计算lA与lC和lD的两个交点,以及lB与lC和lD的两个交点。至此,可以确定从E中任取的四条边缘线段所围成的四边形的区域V。其中,lA与lC和lD的两个交点,以及lB与lC和lD的两个交点为四边形V的四个顶点。
优选地,从E中任取的四条边缘线段所围成的四边形的区域V需满足一定的预设条件,例如该区域的面积或周长是否大于某一阈值T,如阈值T为图象总面积的1/4或图象宽高之和的1/4等。
从E中任取的四条边缘线段所围成的四边形的区域V,当区域V满足一定的预设条件(第一阈值)时,该四边形可称为第一四边形。
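The vertex computation of step 301b amounts to intersecting pairs of (r, θ) lines and screening the enclosed region against an area threshold; a minimal sketch (function names are illustrative, and the shoelace formula is used for the area):

```python
import numpy as np

def line_intersection(r1, t1, r2, t2):
    """Intersection of two lines given in (r, theta) form:
    r = x*cos(theta) + y*sin(theta)."""
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-12:        # (near-)parallel lines
        return None
    x, y = np.linalg.solve(A, np.array([r1, r2], dtype=float))
    return (x, y)

def polygon_area(vertices):
    """Shoelace formula for the area of a polygon with ordered vertices."""
    x, y = np.array(vertices, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# The line x = 2 (theta = 0) meets the line y = 3 (theta = pi/2) at (2, 3).
p = line_intersection(2, 0.0, 3, np.pi / 2)
print(round(p[0], 6), round(p[1], 6))  # 2.0 3.0
print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0
```

The four vertices obtained this way would then be kept only if `polygon_area` (or the perimeter) exceeds the threshold T described above.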
步骤302,分别确定所述四个被摄点相对于拍摄所述第一图象的相机的距离信息。
优选地,通过深度传感器确定所述四个被摄点中每个被摄点到拍摄所述第一图象的相机的第一距离。
在一个示例中,首先,可通过带有深度传感器和光学相机的设备获取该第一图象。在图象处理过程中,根据深度传感器得到第一图象中的点对应的被摄点的深度信息。其中,深度信息指的是欧式距离,即被摄点到相机的距离。
具体地,被摄点深度信息可设为d。也就是说,d为被摄点的第一距离。
在一个示例中,为拍摄所述第一图象的相机建立相机三维坐标系,在该三维坐标系中,假设原点为O,如下面图6所示的坐标系,z轴为主光轴,图象平面与z轴垂直,图象平面距原点的距离为相机的焦距f。则第一距离为被摄点到相机三维坐标系原点的距离。
需要说明的是,关于相机三维坐标系,可参照图6中的详细介绍。
可以理解的是,本发明实施例通过被摄点到相机三维坐标系原点的距离 代表被摄点相对于相机的距离信息。具体地,第一距离可以为被摄点到相机三维坐标系原点的距离。相机三维坐标系可以选取相机主光轴为Z轴。
优选地,通过第一图象和第二图象确定四个被摄点中每个被摄点的第二距离。每个被摄点的第二距离为每个被摄点到与拍摄第一图象的相机的主光轴垂直且经过相机三维坐标系原点的第一平面的距离,且拍摄第一图象的相机位于第一平面。其中,第二图象包括所述四个被摄点在所述第二图象上的投影点,拍摄第二图象的相机位于第一平面上,拍摄第一图象的相机与拍摄第二图象的相机的主光轴平行。
具体地,根据所述四个顶点中每个顶点在所述第一图象上的坐标信息、所述每个被摄点在所述第二图象的投影点的坐标信息、拍摄所述第一图象的相机的焦距及拍摄所述第二图象的相机的焦距得到所述每个被摄点的第二距离。
在一个示例中,首先,可通过立体相机得到两张图象,所述两张图象即为上述第一图象和第二图象,根据两张图象中匹配的特征点的坐标、立体相机中两个相机之间的距离以及两张图象对应的相机的焦距得到图象中的四个顶点对应的被摄点到与两个相机主光轴垂直的两个相机所在平面的第二距离。
需要说明的是,立体相机可以由双目相机校准得到。可以将双目相机的两个摄像头形象的理解为“人的两只眼睛”,则被摄点的第一距离可以理解为被摄点到其中任意一只眼睛的距离,被摄点的第二距离可以理解为被摄点到人脸的距离。其中,人脸所在平面与两只眼睛的主光轴垂直,人脸包括两只眼睛。
需要说明的是,图象中四边形的顶点属于特征点,例如矩形投影得到的四边形的四个顶点为特征点。对立体相机得到的两张图象进行特征点跟踪,匹配的特征点为同一被摄点在两张图象上的两个投影点。对两张图象进行特征点跟踪,结合立体相机两个摄像头的焦距和两个摄像头之间的距离信息, 可以得到所述被摄点相对于立体相机的距离。
具体地,两张图象中匹配的特征点为同一被摄点分别在两张图象中的投影点。匹配的特征点可以通过特征点跟踪得到,具体包括:图象中的特征点一般可以用其周围一块区域中的点计算出的特征描述子(Feature Descriptor)来描述,比较常用的特征描述子如SIFT、SURF和HoG等,特征描述子通常为一个向量。通过检测不同图象中的特征点并计算各对特征点的描述子之间的相似性(如欧式距离等),即可确定两个特征点是否匹配,以实现特征点在不同帧图象间的跟踪。
本发明实施例通过图5,提供一种计算第二距离的方法。
图5为立体相机的拍照示意图。该立体相机可通过两个相机坐标系原点Ol和Or对应的双目相机校准得到。其中,Ol和Or为立体相机的两个摄像头对应的相机三维坐标系的原点。OlFl、OrFr所在的直线分别为两个摄像头的主光轴的方向,OlFl和OrFr对应的线段的长度值为两个摄像头的焦距。过点Fl和点Fr与主光轴垂直的平面为两个摄像头的象平面。两个摄像头的主光轴平行。P为被摄点,平面G为与主光轴OlFl和OrFr垂直的平面,且两个相机坐标系原点Ol和Or在平面G上。则被摄点P至两个相机坐标系原点Ol和Or中任一点的距离d为第一距离,被摄点P到平面G的距离b为第二距离。
被摄点P在平面G的垂直投影点为P',||PP'||=b。为方便求b的数值,从P点做垂线到两个相机主光轴确定的平面,交该平面于点A。连接点A与Ol、Or,则在三角形OlOrA中,过A点做该点到直线OlOr的垂线AA'。设两个相机的主光轴OlFl和OrFr确定的平面为平面H,由于主光轴OlFl和OrFr平行且均垂直于平面G,则平面H垂直于G并交平面G于直线OlOr,又由于直线AA'在平面H内且垂直于直线OlOr,可知直线AA'平行于主光轴OlFl和OrFr,则直线AA'垂直于平面G。由此可知,直线AA'平行于直线PP'且垂直于直线P'A'。因此,在四边形PP'A'A内,直线PA平行于直线P'A',直线PP'平行于直线AA',则平行四边形PP'A'A为矩形。故||AA'||=b。三角形OlOrA与两个象平面分别交于点Cl和 Cr两点。其中,假设FlCl为左侧相机像平面x轴的正方向,点Cl的x坐标为xl;假设CrFr为右侧相机像平面x轴的正方向,点Cr的x坐标为xr
具体地,两个相机三维坐标系的原点的连线OlOr与两个象平面平行。
如图5所示。点P到过两个相机坐标系原点且与两个相机主光轴垂直的平面G的距离b可以用公式(2)计算得到:
b=wf/D     (2)
其中,w为两个相机坐标系原点之间的距离,即线段OlOr的长度为w,f为两个相机的焦距,D=xl-xr
具体地,点P为所述图象中的点对应的被摄点,点P到过两个相机坐标系原点且与两个相机主光轴垂直的平面的距离b为被摄点P的第二距离。
在实际应用中,已知xl的情况下,xr通常可以通过如尺度不变特征变换SIFT、加速稳健特征SURF和方向梯度直方图HoG等跟踪算法获得。
可以理解的是,图5所示的双目相机中,Ol和Or为两个相机坐标系原点。为简化后续的计算,可将Ol或Or对应的相机主光轴的方向设为Z轴。则第二距离b即为被摄点的Z轴坐标。
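Formula (2) can be sketched directly; the numeric values in the example (baseline, focal length, matched x-coordinates) are made-up illustrations, not values from the patent:

```python
def second_distance(x_l, x_r, baseline_w, focal_f):
    """Formula (2): distance b from a photographed point to the plane G
    through both camera origins, b = w * f / D with disparity D = x_l - x_r."""
    D = x_l - x_r
    if D == 0:
        raise ValueError("zero disparity: the point is at infinity")
    return baseline_w * focal_f / D

# Assumed example: 0.06 m baseline, 800 px focal length, matched
# x-coordinates 100 px and 88 px, i.e. a disparity of 12 px.
print(second_distance(100, 88, 0.06, 800))  # 4.0 (metres)
```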
步骤303,分别根据所述四个被摄点的距离信息和所述第一图象上的点的位置信息,确定所述四个被摄点的位置。
优选地,分别根据所述四个被摄点中每个被摄点的第一距离、所述四个顶点中每个顶点在第一图象上的二维坐标、拍摄第一图象的相机的主光轴与第一图象交点的二维坐标以及拍摄第一图象的相机的焦距,确定每个被摄点在三维坐标系中的三维坐标。
优选地,分别根据所述四个被摄点的第二距离、所述四个顶点中每个顶点在第一图象上的二维坐标、拍摄第一图象的相机的主光轴与第一图象交点的二维坐标以及拍摄第一图象的相机的焦距,确定每个被摄点在三维坐标系中的三维坐标。
以下以图6为例,说明本发明实施例提供的三维坐标计算方法。本发明实施例提供两种确定三维坐标的方法。可分为带深度传感器的相机三维坐标计算法和立体相机三维坐标计算法。
其中,带深度传感器的相机三维坐标计算法又可理解为通过第一距离计算三维坐标。立体相机三维坐标计算法又可理解为通过第二距离计算三维坐标。
需要说明的是,本发明实施例中提及的三维坐标以相机主光轴为Z轴。另外,可以理解的是,为简化三维坐标的计算方法,可以设三维坐标系中的x、y轴分别与图象平面上的x、y轴平行且方向相同。
图6为被摄点在三维坐标系中投影示意图。其中,图6中Z轴的方向对应图5中相机主光轴的方向。如图6所示,O为相机三维坐标系原点,XYZ轴确定相机的三维坐标系,直线OZ为相机的主光轴(Principle Axis),F为主光轴与图象平面的交点。P为空间内的任意一点(即为被摄点),Q为点P在图象平面内的投影。
由点P作直线PA垂直于平面OXZ且交平面OXZ于点A,过点A作直线AB平行于X轴且交Z轴于点B。连接OA交象平面于点C,则平面QCF垂直于Z轴。由于Z轴与PAB平面内的两条相交直线PA和AB均垂直,则Z轴垂直于平面PAB。因此,直线PB垂直于Z轴。又由于Z轴垂直于象平面QCF,所以Z轴垂直于直线QF。那么,在ΔOPB内,QF和PB同时垂直于OB,则QF和PB平行。同理,由于直线OB与平面PAB、平面QCF均垂直,则在ΔOAB内直线CF平行于直线AB。因此,
||QF||/||PB||=||OF||/||OB||=||CF||/||AB||
又由于ΔOPA与ΔOQC有公共夹角,因此ΔOPA与ΔOQC为相似三角形。则直线QC平行于直线PA,即直线QC垂直于平面OXZ。
则图6中||OF||=f,f为相机焦距,设点F和点Q在象平面的二维坐标系中的坐标为(u0,v0)和(xQ,yQ),假设象平面二维坐标系的XY轴与相机坐标 系三维平面的XY轴平行且方向相同,则||FC||=|u0-xQ|,||QC||=|v0-yQ|。根据勾股定理,可知
||OQ||=√(f²+(u0-xQ)²+(v0-yQ)²)
在一个示例中,使用带有深度传感器和光学相机的移动设备拍照时,P点的深度信息||OP||=d可以由深度传感器直接获得,则根据相似三角形关系可以计算出点P的三维坐标,如公式(3)、公式(4)和公式(5)所示:
Px=(xQ-u0)·d/√(f²+(xQ-u0)²+(yQ-v0)²)     (3)
Py=(yQ-v0)·d/√(f²+(xQ-u0)²+(yQ-v0)²)     (4)
Pz=f·d/√(f²+(xQ-u0)²+(yQ-v0)²)     (5)
在另一个示例中,使用立体相机拍照时,在该三维坐标系中,以相机的主光轴为Z轴,因此,点Q对应的被摄点P距离过两个相机坐标系原点Ol、Or且与主光轴垂直的平面的距离即为点P的Z坐标。即||OB||=b。b可由公式(2)直接计算得到,则根据相似三角形关系可以计算出点P的三维坐标,如公式(6)、公式(7)和公式(8)所示:
Px=(xQ-u0)·b/f     (6)
Py=(yQ-v0)·b/f     (7)
Pz=||OB||=b   (8)
其中,Px、Py、Pz分别为被摄点P在相机三维坐标系中的x、y、z坐标。
需要说明的是,在上述公式(2)—公式(8)的计算过程中,需要用到的相机的焦距数据、双目相机的两个相机之间的距离数据以及图象处理的相关执行程序等信息,均存储在对应的图象处理装置的存储器中。在图象处理装置的处理器执行其存储器保存的数据及程序,以实现图3给出的方法。
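The back-projections of formulas (3)-(5) and (6)-(8) can be sketched as two small helpers, assuming (as the derivation states) that the image-plane axes are parallel to and oriented like the camera axes; the function names are illustrative:

```python
import math

def backproject_depth(xQ, yQ, u0, v0, f, d):
    """Formulas (3)-(5): 3-D point from the first distance d = ||OP||
    (depth sensor), P = (d / ||OQ||) * (xQ - u0, yQ - v0, f)."""
    OQ = math.sqrt(f ** 2 + (xQ - u0) ** 2 + (yQ - v0) ** 2)
    return ((xQ - u0) * d / OQ, (yQ - v0) * d / OQ, f * d / OQ)

def backproject_plane(xQ, yQ, u0, v0, f, b):
    """Formulas (6)-(8): 3-D point from the second distance b (stereo),
    where Pz = b and Px, Py scale with b / f."""
    return ((xQ - u0) * b / f, (yQ - v0) * b / f, b)

# A point projected onto the principal point at depth 5 lies on the axis.
print(backproject_depth(320, 240, 320, 240, 400, 5.0))  # (0.0, 0.0, 5.0)
print(backproject_plane(500, 400, 320, 240, 400, 2.0))  # (0.9, 0.8, 2.0)
```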
步骤304,根据所述四个被摄点的位置,确定所述四个被摄点共面时,所述四个被摄点围成第二四边形,当所述第二四边形的边夹角以及边长关系满足预设条件时,确定第二四边形相邻两边的边长比,将第一四边形校正为矩形,该矩形的相邻两边具有所述边长比。
优选地,根据所述四个被摄点中三个被摄点的位置确定所述三个被摄点所在的平面,得到所述四个被摄点中除所述三个被摄点外的被摄点到所述平面的第三距离;当所述第三距离小于第二阈值时,所述四个被摄点共面。
需要说明的是,考虑实际工程应用中的噪声和误差因素,因此需要设定第二阈值。例如,第二阈值可以是1mm或5mm等。
在一个示例中,判断四个顶点是否共面的方法是,任取其中3个点,计算这三个点在相机坐标系内构成的平面的方程,再计算剩余的一个点到此平面的距离,如果该距离小于一定阈值,则认为该四点共面。
本发明实施例提供的图象处理方法,可以根据图象中四边形的顶点对应的被摄点的位置,判断被摄点是否共面。当确认图象中四边形的顶点对应的被摄点共面时,四个被摄点围成第二四边形;当第二四边形的边夹角以及边长关系满足预设条件时,将图象中的第一四边形校正为矩形。可以消除图2a、图2b、图2c以及图2d中的误检情况。提高现有技术图象中畸变矩形框的校正准确度。
优选地,预设条件包括下述一项或多项:第二四边形相对两条边的夹角的绝对值低于第三阈值;第二四边形相邻两条边的夹角与直角之差的绝对值低于第四阈值;第二四边形相对两条边的长度之差的绝对值低于第五阈值; 第二四边形相对两条边之间的距离与另外两条边的长度差的绝对值低于第六阈值。
在一个示例中,可将第二四边形是否为矩形的条件放宽为满足如下条件中的一个或多个:相对的两条边的夹角应该在±T1范围内,T1为一大于0的角度,如5°或10°等。相对的两条边的长度的差值应该在±T2范围内,T2为一大于0的有理数。相对的两条边之间的距离与另外两条边长度的差值应该在±T3范围内,T3为一大于0的有理数。相邻的两条边的夹角应该在90°±T4范围内,T4为一大于0的角度,如5°或10°等。
具体地，当第二四边形的边夹角以及边长关系满足预设条件时，确定第二四边形相邻两边的边长比，将第一四边形校正为矩形图象，该矩形的相邻两边具有所述边长比。
可以理解的是,当所述第二四边形为矩形时,上述相邻两条边的边长比即为矩形的宽高比。
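The relaxed rectangle test described above can be sketched as below; the thresholds T1-T4 are collapsed into two assumed tolerances (an angular tolerance in degrees and a relative length tolerance), and the quadrilateral is taken as an ordered list of coplanar 3-D vertices:

```python
import math

def vec(p, q):
    return (q[0] - p[0], q[1] - p[1], q[2] - p[2])

def length(v):
    return math.sqrt(sum(c * c for c in v))

def angle_deg(u, v):
    cosang = sum(a * b for a, b in zip(u, v)) / (length(u) * length(v))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

def is_rectangle(P, t_angle=10.0, t_len=0.05):
    """Relaxed rectangle test for an ordered quadrilateral P0-P1-P2-P3:
    opposite sides near-antiparallel and near-equal in length, adjacent
    sides near-perpendicular. Threshold values are assumptions."""
    sides = [vec(P[i], P[(i + 1) % 4]) for i in range(4)]
    if angle_deg(sides[0], sides[2]) < 180 - t_angle:      # opposite sides
        return False
    if angle_deg(sides[1], sides[3]) < 180 - t_angle:
        return False
    if abs(angle_deg(sides[0], sides[1]) - 90) > t_angle:  # adjacent sides
        return False
    if abs(length(sides[0]) - length(sides[2])) > t_len * length(sides[0]):
        return False
    if abs(length(sides[1]) - length(sides[3])) > t_len * length(sides[1]):
        return False
    return True

square = [(0, 0, 0), (2, 0, 0), (2, 1, 0), (0, 1, 0)]  # true 2x1 rectangle
skewed = [(0, 0, 0), (2, 0, 0), (3, 1, 0), (0, 1, 0)]  # sheared quadrilateral
print(is_rectangle(square), is_rectangle(skewed))  # True False
```

When the test passes, the aspect ratio follows directly as `length(sides[0]) / length(sides[1])`.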
本发明实施例提供的图象处理方法,可根据图象中四边形的顶点对应的四个被摄点的位置,得到四个被摄点围成的符合待校正矩形的条件的四边形的相邻两条边的边长比。如将符合待校正矩形的条件的四边形理解为矩形,相邻两条边的边长比可理解为矩形的宽高比。本发明实施例通过计算图象中四边形对应的被摄矩形的实际宽高比,将图象中的四边形校正为该宽高比的矩形。可以保证校正后的矩形不失真,避免校正后的矩形图象的宽高比不同于原被摄矩形而失真。
图7a和图7b为本发明实施例提供的校正前后图象对比示意图。以手机终端为例,手机终端带有摄像头、处理器、显示屏。图7a和图7b为手机终端的显示界面示意。
图7a为校正前包括矩形框的区域在图象中的投影示意图。如图7a所示,701为现实世界中的被摄矩形框在图像中投影后得到的四边形,投影后的四边形存在畸变。701中四边形相对的两边不平行且相邻边的夹角也不是90°。7011 为被摄矩形框围成的区域在图像中的投影,该区域包含有用信息,由于被摄矩形框的图像的远端畸变严重,使得部分有用信息模糊,即7011对应的成象区域不能看清楚。7012为矩形框边框的图象。702为终端的显示界面,在图7a中,702显示界面显示拍摄的图象。703和704为两个后置摄像头。702显示的图象可以由703和704两个后置摄像头拍摄得到。
需要说明的是,在本发明的另一实施方式中,终端集成有深度传感器,此时,可只包括703和704所示的后置摄像头中的一个。终端的处理器通过运行存储器中的相关程序,即可完成矩形框图象的校正过程。
图7b为校正后的矩形框图象示意图。如图7b所示,705为校正后的矩形框图象示意图。校正后的矩形框,其7011对应的部分,以实际比例显示,其信息可清晰显示。
需要说明的是,矩形框校正后,只显示边框内部区域。即7012边框部分将不显示。
图8为本发明实施例提供的第一种图象处理装置架构图,如图8所示,包括:第一摄像头801、第二摄像头802、处理器803、显示屏804、存储器805和总线806。
本实施例提供的装置,通过第一摄像头801和第二摄像头802拍摄图象。其中,第一摄像头801和第二摄像头802可以集成为一个立体相机。存储器805用于存储程序和摄像头的焦距等信息。第一摄像头801、第二摄像头802、处理器803、显示屏804、存储器805通过总线806通信。当进行图象处理时,处理器803用于执行存储器805存储的程序,使得处理器803执行上述图3方法实施例中的方法步骤。
其中,存储器805可以是一个存储装置,也可以是多个存储元件的统称,且用于存储运行会议服务器所需的程序以及数据等信息。且存储器805可以包括随机存取存储器(Random Access Memory,简称RAM)、闪存、只读存储器(Read Only Memory,简称ROM)、可擦除可编程只读存储器(Erasable  Programmable ROM,简称EPROM)、电可擦可编程只读存储器(Electrically EPROM,简称EEPROM)、寄存器、硬盘、移动硬盘、只读光盘(CD-ROM)、闪存(Flash)或者本领域熟知的任何其它形式的存储介质等中的一个或多个存储介质的组合。
处理器803可以是CPU,通用处理器,DSP,专用集成电路(Application-Specific Integrated Circuit,简称ASIC),现场可编程门阵列(Field Programmable Gate Array,简称FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本发明公开内容所描述的各种示例性的逻辑方框,单元和电路。所述处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。
总线806可以是工业标准体系结构(Industry Standard Architecture,简称ISA)总线、外部设备互连(Peripheral Component,简称PCI)总线或扩展工业标准体系结构(Extended Industry Standard Architecture,简称EISA)总线等。该总线806可以分为地址总线、数据总线、控制总线等。为便于表示,图8中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
在一个示例中,第一摄像头801,用于拍摄第一图象。
处理器803,用于检测第一图象中的第一四边形,第一四边形包括四个顶点,所述四个顶点对应四个被摄点,所述四个顶点为所述四个被摄点在所述第一图象上的投影点。
第二摄像头802,用于拍摄第二图象,所述第二图象包括所述四个被摄点在所述第二图象上的投影点。
处理器803,用于根据所述第一图象和所述第二图象确定所述四个被摄点相对于所述第一摄像头801和所述第二摄像头802的距离信息。
处理器803,用于分别根据所述四个被摄点相对于所述第一摄像头801和 所述第二摄像头802的距离信息,以及所述第一图象上的点的位置信息,确定所述四个被摄点的位置。
处理器803,用于根据所述四个被摄点的位置,确定所述四个被摄点共面时,所述四个被摄点围成第二四边形,当所述第二四边形的边夹角以及边长关系满足预设条件时,确定所述第二四边形相邻两边的边长比,将所述第一四边形校正为矩形,该矩形的相邻两边具有所述边长比。
显示屏804,用于显示所述矩形。
优选地,处理器803,具体用于通过所述第一图象和第二图象确定所述四个被摄点中每个被摄点的第二距离,所述每个被摄点的第二距离为所述每个被摄点到拍摄所述第一图象的相机所在的且与拍摄所述第一图象的相机的主光轴垂直的第一平面的距离;其中,所述第二图象包括所述四个被摄点在所述第二图象上的投影点,拍摄所述第一图象的相机与拍摄所述第二图象的相机的主光轴相互平行,拍摄所述第二图象的相机位于所述第一平面上。
优选地,处理器803,具体用于根据所述四个顶点中每个顶点在所述第一图象上的坐标信息、所述每个被摄点在所述第二图象的投影点的坐标信息、拍摄所述第一图象的相机的焦距及拍摄所述第二图象的相机的焦距得到所述每个被摄点的第二距离。
优选地,处理器803,具体用于分别根据所述四个被摄点中每个被摄点的第二距离、所述四个顶点中每个顶点在所述第一图象上的二维坐标、所述第一摄像头的主光轴与所述第一图象交点的二维坐标以及所述第一摄像头的焦距,确定所述每个被摄点在三维坐标系中的三维坐标。
本发明实施例提供的图象处理装置,通过带有两个摄像头的相机得到两张包括有相同被摄矩形投影为四边形的图象。通过被摄点相对摄像头的距离信息以及两张图象上点的位置信息得到该四边形四个被摄点的实际位置,根据四个被摄点的实际位置判断是否在两张图象中的任一图象中将其包括的四边形校正为矩形。本发明实施例在四个被摄点共面,且四个被摄点围成的四 边形满足预设条件时,根据四个被摄点围成的四边形相邻边的边长比校正现实中矩形框投影后的四边形。本发明实施例提供的技术方案可以提高图象中矩形框的校正准确率,保证校正后的矩形图象不失真。
图9为采用图8所示的装置的图象处理方法流程示意图。该实施例以立体相机校正矩形框为例,通过立体相机得到两张包括相同被摄矩形成象为四边形的图象。通过被摄点与立体相机的距离信息以及两张图象上点的位置信息得到四个顶点对应的被摄点的实际位置,根据四个被摄点的实际位置判断是否在两张图象中的任一图象中将其包括的所述四边形校正为矩形。如图9所示,包括以下步骤:
步骤901,通过立体相机得到两张图象。
需要说明的是,所述立体相机可以由双目相机校准得到。
可参见图5示意。
步骤902,在其中一张图象中检测较长的边缘线段。
在步骤301a已对边缘线检测进行说明,在此不做赘述。
步骤903,从较长的边缘线段集合中任取四条,计算所有可能的组合,并确定每一种组合所围成的四边形的四个顶点,并将围成区域面积或周长较小的四边形剔除。
在步骤301b中已对四边形的选取进行说明,在此不做赘述。
步骤904,是否有还未处理的组合。
判断是否已将较长边缘线段集合中可能的组合选取完毕,如果已将所有可能的四边形组合考虑完毕,执行步骤910,结束流程。如果还有未处理的组合,执行步骤905。
步骤905,分别计算四边形四个顶点对应的被摄点在相机三维坐标系中的坐标。
根据双目相机得到四边形四个顶点对应的被摄点与两个相机的距离信息,进而根据该距离信息和两张图象上点的位置信息得到四边形四个顶点对 应的被摄点在相机三维坐标系中的坐标。
其中,四边形四个顶点对应的被摄点与两个相机的距离信息包括:四个顶点对应的被摄点到过两个相机坐标系原点且与两个相机主光轴垂直的平面的距离。两张图象上点的位置信息包括:四边形四个顶点对应的被摄点分别在两张图象上投影的坐标,两个相机主光轴与两张图象的交点坐标等。
具体计算三维坐标的方法在步骤302、303中的示例中已有说明,在此不做赘述。
步骤906,四个顶点对应的被摄点是否共面。
根据四个顶点对应的被摄点在相机三维坐标系中的坐标,判断四个对应的被摄点是否共面。如果不共面,执行步骤904。如果共面,执行步骤907。
步骤907,四个顶点对应的被摄点是否围成矩形。
根据四个顶点对应的被摄点在相机三维坐标系中的坐标,判断四个顶点对应的被摄点是否围成矩形。
需要说明的是,在步骤304中提到判断四个顶点对应的被摄点围成的四边形是否满足预设条件。因此,可将四个顶点对应的被摄点围成的满足预设条件的四边形理解为矩形。
如果四个被摄点不能围成矩形,执行步骤904。如果围成矩形,执行步骤908。
步骤908,计算四个顶点对应的被摄点围成的矩形的宽高比。
需要说明的是,在步骤304中提到判断四个被摄点围成的四边形是否满足预设条件。因此,可将四个被摄点围成的满足预设条件的四边形理解为矩形,将四个顶点对应的被摄点围成的满足预设条件的四边形的相邻边的边长比理解为矩形的宽高比。
为保证校正后的矩形不失真,可以计算图象中的四边形对应的被摄的矩形的宽高比。再根据计算的宽高比校正图象中被摄矩形框成象的四边形。
步骤909,将四边形校正为上述宽高比的矩形图象。
步骤910,结束流程。
图10为本发明实施例提供的第二种图象处理装置架构图。如图10所示,包括:摄像头1001、深度传感器1002、处理器1003、显示屏1004、存储器1005和总线1006。
本实施例提供的装置，通过摄像头1001拍摄图象。通过深度传感器1002记录被摄点的深度信息。存储器1005用于存储程序和摄像头的焦距等信息。摄像头1001、深度传感器1002、处理器1003、显示屏1004、存储器1005通过总线1006通信。当进行图象处理时，处理器1003用于执行存储器1005存储的程序，使得处理器1003执行上述图3方法实施例中的方法步骤。
上述各部分的连接关系以及各部分的功能可参照图8中的介绍,在此不再赘述。
在一个示例中，摄像头1001，用于拍摄图象。处理器1003，用于检测图象中的第一四边形，第一四边形包括四个顶点，所述四个顶点对应四个被摄点，所述四个顶点为所述四个被摄点在所述图象上的投影点。深度传感器1002，用于分别确定四个被摄点相对于摄像头的距离信息。处理器1003，用于分别根据四个被摄点的距离信息和图象上的点的位置信息，确定四个被摄点的位置。处理器1003，用于根据四个被摄点的位置，确定四个被摄点共面时，所述四个被摄点围成第二四边形，当所述第二四边形的边夹角以及边长关系满足预设条件时，确定所述第二四边形相邻两边的边长比，将所述第一四边形校正为矩形，该矩形的相邻两边具有所述边长比。显示屏1004，用于显示所述矩形。
优选地,深度传感器1002,具体用于确定所述四个被摄点中每个被摄点到摄像头的距离。其中,每个被摄点到摄像头的距离可记为每个被摄点的第一距离,以便与上述图3方法实施例中的描述相对应。
优选地,处理器1003,具体用于分别根据所述四个被摄点中每个被摄点到所述摄像头的距离、所述四个顶点中每个顶点在所述图象上的二维坐标、 所述摄像头的主光轴与所述图象交点的二维坐标以及所述摄像头的焦距,确定所述每个被摄点在三维坐标系中的三维坐标。
优选地,处理器1003,具体用于根据所述四个被摄点中三个被摄点的位置确定所述三个被摄点所在的平面,得到所述四个被摄点中除所述三个被摄点外的被摄点到所述平面的距离;当所述除所述三个被摄点外的被摄点到所述平面的的距离小于预设阈值时,所述四个被摄点共面。
本发明实施例提供的图象处理装置,通过带有深度传感器和光学相机的移动设备得到图象。通过深度传感器得到图象上的四边形的四个顶点对应的四个被摄点的距离信息,根据四个被摄点的距离信息以及图象上点的位置信息得到四个被摄点的实际位置,根据四个被摄点的实际位置判断是否将图象中包括的四边形校正为矩形。本发明实施例提供的技术方案可以提高图象中矩形框的校正准确率,保证校正后的矩形图象不失真。
图11为采用图10所示的装置的图象处理方法流程示意图。该实施例以带深度传感器的相机校正矩形框为例,通过深度传感器得到图象上的四边形的顶点对应的被摄点的距离信息,根据被摄点的距离信息以及图象上点的位置信息得到该四边形四个顶点对应的被摄点的实际位置,根据四个被摄点的实际位置判断是否将图象中包括的四边形校正为矩形。如图11所示,包括以下步骤:
步骤1101,通过深度传感器和光学相机得到图象。
通过深度传感器得到图象上任一点对应的被摄点的深度信息,即任一点对应的被摄点距拍摄该图象的相机的距离。
步骤1102,在图象中检测较长的边缘线段。
在步骤301a已对边缘线检测进行说明,在此不做赘述。
步骤1103,从较长的边缘线段集合中任取四条,计算所有可能的组合,并确定每一种组合所围成的四边形的四个顶点,并将围成区域面积或周长较小的四边形剔除。
在步骤301b中已对四边形的选取进行说明,在此不做赘述。
步骤1104,是否有还未处理的组合。
如果还有未处理的组合,执行步骤1105。如果没有未处理的组合,执行步骤1110,结束流程。
步骤1105,分别计算四边形四个顶点对应的四个被摄点在相机三维坐标系中的坐标。
具体计算三维坐标的方法在步骤302、303中的示例中已有说明,在此不做赘述。
步骤1106,四个被摄点是否共面。
根据四个顶点对应的被摄点在相机三维坐标系中的坐标,判断四个对应的被摄点是否共面。如果共面,执行步骤1107。如果不共面,执行步骤1104。
步骤1107,四个被摄点是否围成矩形。
根据四个顶点对应的被摄点在相机三维坐标系中的坐标,判断四个顶点对应的被摄点是否围成矩形。
需要说明的是,在步骤304中提到判断四个顶点对应的被摄点围成的四边形是否满足预设条件。因此,可将四个顶点对应的被摄点围成的满足预设条件的四边形理解为矩形。
如果围成矩形,执行步骤1108。如果不能围成矩形,执行步骤1104。
步骤1108,计算四个被摄点围成的矩形的宽高比。
需要说明的是,在步骤304中提到判断四个被摄点围成的四边形是否满足预设条件。因此,可将四个被摄点围成的满足预设条件的四边形理解为矩形,将四个顶点对应的被摄点围成的满足预设条件的四边形的相邻边的边长比理解为矩形的宽高比。
为保证校正后的矩形不失真,可以计算图象中的四边形对应的被摄的矩形的宽高比。再根据计算的宽高比校正图象中被摄矩形框成象的四边形。
步骤1109,将四边形校正为上述宽高比的矩形图象。
步骤1110,结束流程。
本发明实施例提供的图象处理方法,通过图象中四边形的位置信息和被摄物体与拍摄该图象的相机的距离信息,确定四边形四个顶点对应的被摄点的位置。根据四个被摄点的位置,在被摄点的位置以及四个被摄点围成的四边形满足预设条件时,将图象中的四边形校正为矩形。本发明实施例根据四个被摄点围成的四边形的边长比校正现实中的矩形框投影后的四边形,可以提高图象中矩形框的校正准确率,保证校正后的矩形图象不失真。
专业人员应该还可以进一步意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分步骤是可以通过程序来指令处理器完成,所述的程序可以存储于计算机可读存储介质中,所述存储介质是非短暂性(英文:non-transitory)介质,例如随机存取存储器,只读存储器,快闪存储器,硬盘,固态硬盘,磁带(英文:magnetic tape),软盘(英文:floppy disk),光盘(英文:optical disc)及其任意组合。
以上所述,仅为本发明的一些说明性示例,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到的变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应该以权利要求的保护范围为准。

Claims (17)

  1. 一种图象处理方法,其特征在于,所述方法包括:
    检测第一图象中的第一四边形,所述第一四边形包括四个顶点,所述四个顶点对应四个被摄点,所述四个顶点为所述四个被摄点在所述第一图象上的投影点;
    分别确定所述四个被摄点相对于拍摄所述第一图象的相机的距离信息;
    分别根据所述四个被摄点的距离信息和所述第一图象上的点的位置信息,确定所述四个被摄点的位置;
    根据所述四个被摄点的位置,确定所述四个被摄点共面时,所述四个被摄点围成第二四边形,当所述第二四边形的边夹角以及边长关系满足预设条件时,确定所述第二四边形相邻两边的边长比,将所述第一四边形校正为矩形,所述矩形的相邻两边具有所述边长比。
  2. 如权利要求1所述的方法,其特征在于,所述第一四边形的面积或周长大于第一阈值。
  3. 如权利要求1或2所述的方法,其特征在于,所述分别确定所述四个被摄点相对于拍摄所述第一图象的相机的距离信息,包括:
    通过深度传感器确定所述四个被摄点中每个被摄点到拍摄所述第一图象的相机的第一距离。
  4. 如权利要求1或2所述的方法,其特征在于,所述分别确定所述四个被摄点相对于拍摄所述第一图象的相机的距离信息,包括:
    通过所述第一图象和第二图象确定所述四个被摄点中每个被摄点的第二距离,所述每个被摄点的第二距离为所述每个被摄点到第一平面的距离,所述第一平面为拍摄所述第一图象的相机所在的平面,且所述第一平面与拍摄所述第一图象的相机的主光轴垂直;
    其中,所述第二图象包括所述四个被摄点在所述第二图象上的投影点,拍摄所述第一图象的相机与拍摄所述第二图象的相机的主光轴相互平行,拍 摄所述第二图象的相机位于所述第一平面上。
  5. 如权利要求4所述的方法,其特征在于,所述通过所述第一图象和第二图象确定所述四个被摄点中每个被摄点的第二距离,包括:
    根据所述四个顶点中每个顶点在所述第一图象上的坐标信息、所述每个被摄点在所述第二图象上的投影点的坐标信息、拍摄所述第一图象的相机的焦距及拍摄所述第二图象的相机的焦距得到所述每个被摄点的第二距离。
  6. 如权利要求3所述的方法,其特征在于,所述分别根据所述四个被摄点的距离信息和所述第一图象上的点的位置信息,确定所述四个被摄点的位置,包括:
    分别根据所述四个被摄点中每个被摄点的所述第一距离、所述四个顶点中每个顶点在所述第一图象上的二维坐标、拍摄所述第一图象的相机的主光轴与所述第一图象所在的平面的交点的二维坐标以及所述拍摄所述第一图象的相机的焦距,确定所述每个被摄点在三维坐标系中的三维坐标。
  7. 如权利要求4或5所述的方法,其特征在于,所述分别根据所述四个被摄点的距离信息和所述第一图象上的点的位置信息,确定所述四个被摄点的位置,包括:
    分别根据所述四个被摄点中每个被摄点的第二距离、所述四个顶点中的每个顶点在所述第一图象上的二维坐标、拍摄所述第一图象的相机的主光轴与所述第一图象所在的平面的交点的二维坐标以及所述拍摄所述第一图象的相机的焦距,确定所述每个被摄点在三维坐标系中的三维坐标。
  8. 如权利要求1-7中任一项所述的方法,其特征在于,所述根据所述四个被摄点的位置,确定所述四个被摄点共面,包括:
    根据所述四个被摄点中三个被摄点的位置确定所述三个被摄点所在的第二平面,得到所述四个被摄点中除所述三个被摄点外的被摄点到所述第二平面的第三距离;
    当所述第三距离小于第二阈值时,所述四个被摄点共面。
  9. 如权利要求1-8中任一项所述的方法,其特征在于,所述当所述第二四边形的边夹角以及边长关系满足预设条件,其中,所述预设条件包括下述一项或多项:
    所述第二四边形相对两条边的夹角的绝对值低于第三阈值;
    所述第二四边形相邻两条边的夹角与直角之差的绝对值低于第四阈值;
    所述第二四边形相对两条边的长度之差的绝对值低于第五阈值;
    所述第二四边形相对两条边之间的距离与另外两条边的长度差的绝对值低于第六阈值。
  10. 一种图象处理装置,其特征在于,所述装置包括:摄像头、处理器、深度传感器、显示屏;
    所述摄像头,用于拍摄图象;
    所述处理器,用于检测图象中的第一四边形,所述第一四边形包括四个顶点,所述四个顶点对应四个被摄点,所述四个顶点为所述四个被摄点在所述图象上的投影点;
    所述深度传感器,用于分别确定所述四个被摄点相对于所述摄像头的距离信息;
    所述处理器,用于分别根据所述四个被摄点的距离信息和所述图象上的点的位置信息,确定所述四个被摄点的位置;
    所述处理器，用于根据所述四个被摄点的位置，确定所述四个被摄点共面时，所述四个被摄点围成第二四边形，当所述第二四边形的边夹角以及边长关系满足预设条件时，确定所述第二四边形相邻两边的边长比，将所述第一四边形校正为矩形，所述矩形的相邻两边具有所述边长比；
    所述显示屏,用于显示所述矩形。
  11. 如权利要求10所述的装置,其特征在于,所述深度传感器,具体用于确定所述四个被摄点中每个被摄点到所述摄像头的第一距离。
  12. 如权利要求11所述的装置,其特征在于,所述处理器,具体用于分别根据所述四个被摄点中每个被摄点到所述摄像头的第一距离、所述四个顶点中每个顶点在所述图象上的二维坐标、所述摄像头的主光轴与所述图象所在的平面的交点的二维坐标以及所述摄像头的焦距,确定所述每个被摄点在三维坐标系中的三维坐标。
  13. 如权利要求10-12中任一项所述的装置,其特征在于,所述处理器,具体用于根据所述四个被摄点中三个被摄点的位置确定所述三个被摄点所在的平面,得到所述四个被摄点中除所述三个被摄点外的被摄点到所述平面的第三距离;当所述第三距离小于预设阈值时,所述四个被摄点共面。
  14. 一种图象处理装置,其特征在于,所述装置包括:第一摄像头、第二摄像头、处理器、显示屏;
    所述第一摄像头,用于拍摄第一图象;
    所述处理器,用于检测第一图象中的第一四边形,所述第一四边形包括四个顶点,所述四个顶点对应四个被摄点,所述四个顶点为所述四个被摄点在所述第一图象上的投影点;
    所述第二摄像头,用于拍摄第二图象,所述第二图象包括所述四个被摄点在所述第二图象上的投影点;
    所述处理器,用于根据所述第一图象和所述第二图象确定所述四个被摄点相对于所述第一摄像头和所述第二摄像头的距离信息;
    所述处理器,用于分别根据所述四个被摄点相对于所述第一摄像头和所述第二摄像头的距离信息,以及所述第一图象上的点的位置信息,确定所述四个被摄点的位置;
    所述处理器,用于根据所述四个被摄点的位置,确定所述四个被摄点共面时,所述四个被摄点围成第二四边形,当所述第二四边形的边夹角以及边长关系满足预设条件时,确定所述第二四边形相邻两边的边长比,将所述第 一四边形校正为矩形,所述矩形的相邻两边具有所述边长比;
    所述显示屏,用于显示所述矩形。
  15. 如权利要求14所述的装置,其特征在于,所述处理器,具体用于通过所述第一图象和所述第二图象确定所述四个被摄点中每个被摄点的第二距离,所述每个被摄点的第二距离为所述每个被摄点到第一平面的距离,所述第一平面为第一摄像头所在的平面,且所述第一平面与所述第一摄像头的主光轴垂直;
    其中,所述第一摄像头与所述第二摄像头的主光轴相互平行,所述第二摄像头位于所述第一平面上。
  16. 如权利要求15所述的装置,其特征在于,所述处理器,具体用于根据所述四个顶点中每个顶点在所述第一图象上的坐标信息、所述每个被摄点在所述第二图象的投影点的坐标信息、所述第一摄像头的焦距及所述第二摄像头的焦距得到所述每个被摄点的第二距离。
  17. 如权利要求15或16所述的装置,其特征在于,所述处理器,具体用于分别根据所述四个被摄点中每个被摄点的第二距离、所述四个顶点中每个顶点在所述第一图象上的二维坐标、所述第一摄像头的主光轴与所述第一图象所在的平面的交点的二维坐标以及所述第一摄像头的焦距,确定所述每个被摄点在三维坐标系中的三维坐标。
PCT/CN2016/111290 2016-12-21 2016-12-21 图象处理方法及装置 WO2018112790A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201680087913.9A CN109479082B (zh) 2016-12-21 2016-12-21 图象处理方法及装置
PCT/CN2016/111290 WO2018112790A1 (zh) 2016-12-21 2016-12-21 图象处理方法及装置
US16/472,067 US10909719B2 (en) 2016-12-21 2016-12-21 Image processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/111290 WO2018112790A1 (zh) 2016-12-21 2016-12-21 图象处理方法及装置

Publications (1)

Publication Number Publication Date
WO2018112790A1 true WO2018112790A1 (zh) 2018-06-28

Family

ID=62624396

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/111290 WO2018112790A1 (zh) 2016-12-21 2016-12-21 图象处理方法及装置

Country Status (3)

Country Link
US (1) US10909719B2 (zh)
CN (1) CN109479082B (zh)
WO (1) WO2018112790A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020136523A1 (en) * 2018-12-28 2020-07-02 Ecomagic Ltd System and method for the recognition of geometric shapes
CN113538291A (zh) * 2021-08-02 2021-10-22 广州广电运通金融电子股份有限公司 卡证图像倾斜校正方法、装置、计算机设备和存储介质
CN117853579A (zh) * 2023-10-07 2024-04-09 湖州丽天智能科技有限公司 一种光伏板位姿修正方法、光伏机器人及存储介质

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109479082B (zh) * 2016-12-21 2021-10-15 华为技术有限公司 图象处理方法及装置
US10915998B2 (en) * 2016-12-21 2021-02-09 Huawei Technologies Co., Ltd. Image processing method and device
CN110889829B (zh) * 2019-11-09 2023-11-03 东华大学 一种基于鱼眼镜头的单目测距方法
CN111080544B (zh) * 2019-12-09 2023-09-22 Oppo广东移动通信有限公司 基于图像的人脸畸变校正方法、装置及电子设备
TWI755765B (zh) 2020-06-22 2022-02-21 中強光電股份有限公司 視覺與深度座標系的校正系統、校正方法及校正裝置
CN112710608B (zh) * 2020-12-16 2023-06-23 深圳晶泰科技有限公司 实验观测方法及系统
CN114018932B (zh) * 2021-11-02 2023-05-30 西安电子科技大学 基于矩形标定物的路面病害指标测量方法
CN114494892B (zh) * 2022-04-15 2022-07-15 广州市玄武无线科技股份有限公司 一种货架商品陈列信息识别方法、装置、设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1867940A (zh) * 2003-10-14 2006-11-22 卡西欧计算机株式会社 成像装置及其图像处理方法
CN1937698A (zh) * 2006-10-19 2007-03-28 上海交通大学 图像畸变自动校正的图像处理方法
US7268803B1 (en) * 1999-08-26 2007-09-11 Ricoh Company, Ltd. Image processing method and apparatus, digital camera, image processing system and computer readable medium
CN101271357A (zh) * 2008-05-12 2008-09-24 北京中星微电子有限公司 Method and apparatus for recording content on a writing board
CN102714692A (zh) * 2009-09-23 2012-10-03 微软公司 Camera-based scanning
CN106152947A (zh) * 2015-03-31 2016-11-23 北京京东尚科信息技术有限公司 Device, method, and apparatus for measuring object dimensions

Family Cites Families (25)

Publication number Priority date Publication date Assignee Title
US6449004B1 (en) * 1996-04-23 2002-09-10 Minolta Co., Ltd. Electronic camera with oblique view correction
EP0908846A3 (en) * 1997-10-07 2000-03-29 Canon Kabushiki Kaisha Moving object detection apparatus and method
US6449397B1 (en) * 1999-04-05 2002-09-10 Mustek Systems Inc. Image processing system for scanning a rectangular document
US6802614B2 (en) * 2001-11-28 2004-10-12 Robert C. Haldiman System, method and apparatus for ambient video projection
US20040150617A1 (en) * 2003-02-05 2004-08-05 Nec Viewtechnology, Ltd Image projector having a grid display device
US7171056B2 (en) 2003-02-22 2007-01-30 Microsoft Corp. System and method for converting whiteboard content into an electronic document
JP2005227661A (ja) * 2004-02-16 2005-08-25 Nec Viewtechnology Ltd Projector and distortion correction method
US8120665B2 (en) * 2005-08-25 2012-02-21 Ricoh Company, Ltd. Image processing method and apparatus, digital camera, and recording medium recording image processing program
JP4539886B2 (ja) * 2007-08-20 2010-09-08 セイコーエプソン株式会社 Image processing system, projector, program, and information storage medium
US8781152B2 (en) * 2010-08-05 2014-07-15 Brian Momeyer Identifying visual media content captured by camera-enabled mobile device
JP5812716B2 (ja) * 2010-08-27 2015-11-17 キヤノン株式会社 Image processing apparatus and method
US8727539B2 (en) * 2010-10-28 2014-05-20 Seiko Epson Corporation Projector and method of controlling projector
JP5834615B2 (ja) * 2011-08-18 2015-12-24 株式会社リコー Projector, control method thereof, program thereof, and recording medium storing the program
JP2014092460A (ja) * 2012-11-02 2014-05-19 Sony Corp Image processing apparatus and method, image processing system, and program
JP6394081B2 (ja) * 2013-08-13 2018-09-26 株式会社リコー Image processing apparatus, image processing system, image processing method, and program
US9160946B1 (en) * 2015-01-21 2015-10-13 A2iA S.A. Systems and methods for capturing images using a mobile device
CN104679001A (zh) 2015-01-25 2015-06-03 无锡桑尼安科技有限公司 Rectangular target detection method
KR102248459B1 (ko) * 2015-02-03 2021-05-06 한국전자통신연구원 Camera calibration apparatus and method
WO2018045592A1 (zh) * 2016-09-12 2018-03-15 华为技术有限公司 Image capture method, apparatus, and terminal
US20190355104A1 (en) * 2016-09-29 2019-11-21 Huawei Technologies Co., Ltd. Image Correction Method and Apparatus
US10915998B2 (en) * 2016-12-21 2021-02-09 Huawei Technologies Co., Ltd. Image processing method and device
CN109479082B (zh) * 2016-12-21 2021-10-15 华为技术有限公司 Image processing method and apparatus
US9756303B1 (en) * 2016-12-30 2017-09-05 Texas Instruments Incorporated Camera-assisted automatic screen fitting
US10681318B2 (en) * 2017-11-14 2020-06-09 Texas Instruments Incorporated Camera-assisted arbitrary surface characterization and slope-based correction
US10684537B2 (en) * 2017-11-14 2020-06-16 Texas Instruments Incorporated Camera-assisted arbitrary surface characterization and correction

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US7268803B1 (en) * 1999-08-26 2007-09-11 Ricoh Company, Ltd. Image processing method and apparatus, digital camera, image processing system and computer readable medium
CN1867940A (zh) * 2003-10-14 2006-11-22 卡西欧计算机株式会社 Imaging apparatus and image processing method thereof
CN1937698A (zh) * 2006-10-19 2007-03-28 上海交通大学 Image processing method for automatic correction of image distortion
CN101271357A (zh) * 2008-05-12 2008-09-24 北京中星微电子有限公司 Method and apparatus for recording content on a writing board
CN102714692A (zh) * 2009-09-23 2012-10-03 微软公司 Camera-based scanning
CN106152947A (zh) * 2015-03-31 2016-11-23 北京京东尚科信息技术有限公司 Device, method, and apparatus for measuring object dimensions

Cited By (4)

Publication number Priority date Publication date Assignee Title
WO2020136523A1 (en) * 2018-12-28 2020-07-02 Ecomagic Ltd System and method for the recognition of geometric shapes
CN113538291A (zh) * 2021-08-02 2021-10-22 广州广电运通金融电子股份有限公司 Card image tilt correction method and apparatus, computer device, and storage medium
CN113538291B (zh) * 2021-08-02 2024-05-14 广州广电运通金融电子股份有限公司 Card image tilt correction method and apparatus, computer device, and storage medium
CN117853579A (zh) * 2023-10-07 2024-04-09 湖州丽天智能科技有限公司 Photovoltaic panel pose correction method, photovoltaic robot, and storage medium

Also Published As

Publication number Publication date
US20200098133A1 (en) 2020-03-26
CN109479082B (zh) 2021-10-15
CN109479082A (zh) 2019-03-15
US10909719B2 (en) 2021-02-02

Similar Documents

Publication Publication Date Title
WO2018112790A1 (zh) Image processing method and apparatus
CN110717942B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
US10915998B2 (en) Image processing method and device
WO2018214365A1 (zh) Image correction method, apparatus, device and system, camera device, and display device
WO2017076106A1 (zh) Image stitching method and apparatus
US11282232B2 (en) Camera calibration using depth data
WO2017091927A1 (zh) Image processing method and dual-camera system
US8155387B2 (en) Method and system for position determination using image deformation
US20160050372A1 (en) Systems and methods for depth enhanced and content aware video stabilization
JP2020149111A (ja) Object tracking apparatus and object tracking method
KR20160095560A (ko) Camera calibration apparatus and method
KR101868740B1 (ko) Panoramic image generation method and apparatus
CN114004890B (zh) Pose determination method and apparatus, electronic device, and storage medium
JP7188067B2 (ja) Person detection apparatus and person detection method
WO2024002186A1 (zh) Image fusion method and apparatus, and storage medium
JP7363504B2 (ja) Object detection method, detection apparatus, and electronic device
WO2018152710A1 (zh) Image correction method and apparatus
CN117253022A (zh) Object recognition method and apparatus, and inspection device
JP4548228B2 (ja) Image data creation method
TWI658431B (zh) Image processing method, image processing apparatus, and computer-readable recording medium
CN113870190B (zh) Vertical line detection method, apparatus, device, and storage medium
WO2023076913A1 (en) Methods, storage media, and systems for generating a three-dimensional line segment
CN113870292B (zh) Edge detection method and apparatus for depth image, and electronic device
CN114004839A (zh) Image segmentation method and apparatus for panoramic image, computer device, and storage medium
CN111131689B (zh) Panoramic image inpainting method and system

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 16924756

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 16924756

Country of ref document: EP

Kind code of ref document: A1