WO2023284202A1 - Image resolution normalization processing method and device - Google Patents


Info

Publication number
WO2023284202A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
plane
coordinate
point
object plane
Prior art date
Application number
PCT/CN2021/130252
Other languages
English (en)
French (fr)
Inventor
邓国强
Original Assignee
北京金博星指纹识别科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京金博星指纹识别科技有限公司
Publication of WO2023284202A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12: Fingerprints or palmprints
    • G06V40/1365: Matching; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/24: Aligning, centring, orientation detection or correction of the image
    • G06V10/247: Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids

Definitions

  • The present application belongs to the technical field of image processing, and in particular relates to an image resolution normalization processing method and device.
  • An optical image collector is a commonly used image acquisition device through which an image of an object to be collected can be obtained; examples include fingerprint collectors and palmprint collectors.
  • A user's fingerprint image can be collected by a fingerprint collector, and a palmprint image by a palmprint collector. Taking the fingerprint collector as an example to illustrate its collection principle and the existing problems:
  • The user presses a finger against the glass surface of the fingerprint collector; the glass surface pressed by the finger is imaged by perspective projection through the imaging lens group, and the image is then captured by the image acquisition chip.
  • Because the image plane of the imaging lens group is not parallel to the object plane formed by the glass surface, the image collected on the image plane shows a "near objects large, far objects small" perspective effect. This effect is called the trapezoidal (keystone) distortion of the image.
  • The trapezoidal distortion makes the image resolution differ between regions of the image plane output image, and differ between directions within the image; that is, the image has a resolution error. This resolution error reduces the accuracy of image recognition, and in severe cases recognition fails altogether.
  • The image resolution must therefore be normalized so that every region of the image output by the optical image collector, and every direction within it, has the same image resolution; existing methods cannot fully meet this requirement of image acquisition and image recognition.
  • Moreover, the image resolution of the output image of the optical image collector should have a specified value, and the deviation from that value cannot be too large, because if the image resolution differs between images, the difficulty of image recognition increases.
  • In one known approach, a multi-prism combination is added to the optical image collector, and the trapezoidal distortion is corrected optically by the multi-prism combination. However, adding a multi-prism combination increases the geometric size and cost of the optical image collector, and errors in the machining and assembly of the multi-prism combination still cause the image resolution to differ between directions, so the output images of the optical image collector still have a certain resolution error.
  • To this end, the present application provides an image resolution normalization processing method and device.
  • The present application provides an image resolution normalization processing method, the method comprising:
  • obtaining, according to the coordinate mapping relationship between object points on the object plane and image points on the image plane, the coordinate value in the image plane of the mapping point of each pixel in the object plane output image, so as to obtain a pixel point mapping relationship, the pixel point mapping relationship being a one-to-one correspondence between the pixels in the object plane output image and the mapping points in the image plane;
  • obtaining, according to the pixel point mapping relationship, the gray value of each pixel in the image plane original image, and the size information of the image plane original image, the gray value of each pixel in the object plane output image, so as to obtain the object plane output image.
  • The obtaining of the gray value of each pixel in the object plane output image according to the pixel point mapping relationship, the gray value of each pixel in the image plane original image, and the size information of the image plane original image includes:
  • when the mapping point falls within the image plane original image, assigning the gray value of the corresponding pixel in the image plane original image to the corresponding pixel in the object plane output image;
  • when the mapping point falls outside the image plane original image, assigning the gray value of the corresponding pixel in the object plane output image to an effective value, the effective value being a value in [0, 255]. The mapping point falls outside the image plane original image in any of four cases: Case 1, the X-axis coordinate value of the mapping point is less than 0; Case 2, the X-axis coordinate value of the mapping point is greater than or equal to the width of the image plane original image; Case 3, the Y-axis coordinate value of the mapping point is less than 0; Case 4, the Y-axis coordinate value of the mapping point is greater than or equal to the height of the image plane original image.
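The four out-of-range cases above amount to a single bounds test on the mapping point. A minimal sketch (function and parameter names are illustrative, not from the application):

```python
def mapping_point_out_of_range(x_pk: float, y_pk: float, XW: int, YW: int) -> bool:
    """True if mapping point (x_pk, y_pk) falls outside the image plane
    original image: Case 1 x < 0, Case 2 x >= width XW,
    Case 3 y < 0, Case 4 y >= height YW."""
    return x_pk < 0 or x_pk >= XW or y_pk < 0 or y_pk >= YW

# Only points inside the 640-column by 480-row original image are in range.
print(mapping_point_out_of_range(10, 10, 640, 480))   # False
print(mapping_point_out_of_range(640, 10, 640, 480))  # True
```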
  • The device parameters include: the coordinate values (x_c, y_c, z_c) of the viewpoint in the optical image collector; the included angle α between the object plane and the image plane in the optical image collector; and the distance z'_a between the object plane and the OXY' coordinate plane in the OXY'Z' coordinate system, the OXY'Z' coordinate system being obtained by rotating the OXYZ coordinate system of the optical image collector around the X axis by the included angle α.
  • The coordinate values of the viewpoint are its coordinate values on the three axes of the OXYZ coordinate system. In the OXYZ coordinate system, the Y axis points vertically upward, the Z axis points to the left, the X axis is perpendicular to the OYZ coordinate plane and points inward, and the OXY coordinate plane coincides with the image plane.
  • In the coordinate mapping relationship between object points on the object plane and image points on the image plane:
  • (x_a, y_a, z_a) is the coordinate value of an object point in the object plane, and
  • (x_p, y_p) is the two-dimensional coordinate value, in the X-axis and Y-axis directions, of the corresponding image point of that object point in the image plane.
  • The obtaining, according to the coordinate mapping relationship between object points on the object plane and image points on the image plane, of the coordinate value in the image plane of the mapping point of each pixel in the object plane output image includes a relational expression in which:
  • (x_ak, y_ak, z_ak) is the coordinate value of a pixel of the object plane output image in the OXYZ coordinate system of the optical image collector, and
  • (x_pk, y_pk) is the two-dimensional coordinate value, in the X-axis and Y-axis directions, of the mapping point when that pixel is mapped to the image plane; in the OXYZ coordinate system, the Y axis points vertically upward, the Z axis points to the left, the X axis is perpendicular to the OYZ coordinate plane and points inward, and the OXY coordinate plane coincides with the image plane;
  • The obtaining of the coordinate value of each pixel in the object plane output image according to the device parameters, the size information, and the position information of the object plane output image includes:
  • determining the coordinate value of the image center point of the object plane output image, the coordinate value of the image center point being the position information;
  • the OXY'Z' coordinate system is obtained by rotating the OXYZ coordinate system of the optical image collector around the X axis by the included angle α, the included angle α being the angle between the object plane and the image plane in the optical image collector; in the OXYZ coordinate system, the Y axis points vertically upward, the Z axis points to the left, the X axis is perpendicular to the OYZ coordinate plane and points inward, and the OXY coordinate plane coincides with the image plane.
  • The pre-obtaining process of the device parameters of the optical image collector includes:
  • obtaining the coordinate values of the image points in the image plane original image, the coordinate values being the two-dimensional coordinate values of the image points in the X-axis and Y-axis directions of the image plane;
  • obtaining, according to the coordinate mapping relationship and the coordinate values of the image points in the image plane, the object plane object point coordinate set corresponding to each device parameter set to be processed, the object plane object point coordinate set including the coordinate values of multiple object points in the object plane under that device parameter set, the multiple object points being obtained by mapping multiple image points to the object plane;
  • determining the maximum distance difference, the maximum distance difference being the maximum value among the multiple distance differences of the object plane object point coordinate set, each distance difference being the difference between the second distance between any two object points in the object plane object point coordinate set and its corresponding first distance.
  • The pre-obtaining process of the position information of the object plane output image includes:
  • mapping, according to the coordinate mapping relationship between object points on the object plane and image points on the image plane, each pixel of the image plane original image to the object plane, to obtain the coordinate value of the mapping point of each pixel in the object plane;
  • the OXY'Z' coordinate system is obtained by rotating the OXYZ coordinate system of the optical image collector around the X axis by the included angle α, the included angle α being the angle between the object plane and the image plane in the optical image collector; in the OXYZ coordinate system, the Y axis points vertically upward, the Z axis points to the left, the X axis is perpendicular to the OYZ coordinate plane and points inward, and the OXY coordinate plane coincides with the image plane;
  • taking the mean of the maximum and minimum coordinate values in the X-axis direction as the X-axis coordinate value of the image center point of the object plane output image, and the mean of the maximum and minimum coordinate values in the Y'-axis direction as the Y'-axis coordinate value of the image center point of the object plane output image; the coordinate value of the image center point of the object plane output image is the position information of the object plane output image.
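The averaging step described above can be sketched as follows; a simplified illustration in which the point list stands in for the mapping points of all pixels of the image plane original image in the object plane:

```python
def image_center_point(mapped_points):
    """Image center point of the object plane output image: the mean of the
    extreme X-axis coordinates and the mean of the extreme Y'-axis
    coordinates over all mapping points in the object plane."""
    xs = [x for x, _ in mapped_points]
    ys = [y for _, y in mapped_points]
    return ((max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2)

# Example: corner points of a trapezoidal area mapped from a rectangular image.
print(image_center_point([(0, 0), (10, 0), (2, 4), (8, 4)]))  # (5.0, 2.0)
```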
  • The present application further provides an image resolution normalization processing device, the device comprising:
  • an acquisition unit configured to acquire the device parameters of the optical image collector, the size information of the image plane original image of the optical image collector, and the size information and position information of the object plane output image of the optical image collector;
  • a determining unit configured to obtain the coordinate value of each pixel in the object plane output image according to the device parameters and the size information and position information of the object plane output image;
  • a coordinate conversion unit configured to obtain, according to the coordinate mapping relationship between object points on the object plane and image points on the image plane and the coordinate value of each pixel in the object plane output image, the coordinate value in the image plane of the mapping point of each pixel in the object plane output image, so as to obtain the pixel point mapping relationship, the pixel point mapping relationship being a one-to-one correspondence between the pixels in the object plane output image and the mapping points in the image plane;
  • a gray value assignment unit configured to obtain, according to the pixel point mapping relationship, the gray value of each pixel in the image plane original image, and the size information of the image plane original image, the gray value of each pixel in the object plane output image, so as to obtain the object plane output image.
  • The present application further provides an optical image collector, including a processor and a memory for storing instructions executable by the processor, wherein the processor is configured to execute the instructions to realize the above image resolution normalization processing method.
  • The present application further provides a computer-readable storage medium; when instructions in the storage medium are executed, the optical image collector can perform the above image resolution normalization processing method.
  • In the above image resolution normalization processing method and device, the device parameters of the optical image collector are first obtained, together with the size information of the image plane original image and the size information and position information of the object plane output image of the optical image collector. According to the position information, size information, and device parameters, the coordinate value of each pixel in the object plane output image is obtained. According to the coordinate value of each pixel in the object plane output image, the device parameters, and the coordinate mapping relationship between object points on the object plane and image points on the image plane, the coordinate value in the image plane of the mapping point of each pixel in the object plane output image is obtained, yielding the pixel point mapping relationship, that is, a one-to-one correspondence between the pixels in the object plane output image and the mapping points in the image plane. Finally, according to the pixel point mapping relationship, the gray value of each pixel in the image plane original image, and the size information of the image plane original image, the gray value of each pixel in the object plane output image is obtained, and thereby the object plane output image itself. Since the pixels in the object plane output image are equivalent to object points in the object plane, the distances between object points carry only calculation errors or errors caused by discretization, with no error caused by perspective projection imaging. The displacement deviation of the pixels in the object plane output image therefore does not exceed one pixel, and normalization of the image resolution is realized.
  • Fig. 1 is a schematic diagram of the collection principle of a fingerprint collector in the prior art;
  • Fig. 2 is a schematic diagram of image distortion provided by an embodiment of the present application;
  • Fig. 3 is a schematic diagram of the perspective principle of the optical image collector provided by an embodiment of the present application;
  • Fig. 4 is the simulated geometric light path diagram of the optical image collector provided by an embodiment of the present application;
  • Fig. 5 is a flow chart of an image resolution normalization processing method provided by an embodiment of the present application;
  • Fig. 6 is a flow chart for obtaining the device parameters of an optical image collector provided by an embodiment of the present application;
  • Fig. 7 is a schematic diagram of the test piece provided by an embodiment of the present application;
  • Fig. 8 is a comparison diagram of the image plane original image and the object plane output image of the test piece provided by an embodiment of the present application;
  • Fig. 9 is a schematic structural diagram of an image resolution normalization processing device provided by an embodiment of the present application.
  • The images processed by modern computer image processing technology are all digital images, and the digitization of an image includes two processes: sampling and quantization.
  • Sampling divides a two-dimensional image (the two dimensions being the width direction and the height direction) into small unit areas called pixels.
  • Quantization uses an integer called the gray value to represent the lightness or darkness of a single pixel.
  • Image resolution is the number of pixels per unit length in the width or height direction, with the unit "pixels/unit length"; the common unit is dpi (Dots Per Inch), that is, the number of pixels per inch of length, where one inch equals 25.4 millimeters. All length units in this application, including coordinate values, are, unless otherwise specified, based on points (pixels) in the discrete space defined by a preset image resolution; for simplicity, such points are collectively referred to as pixels.
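As a quick illustration of the dpi convention above, the following generic helpers (not part of the application itself) convert between physical lengths and pixel counts:

```python
MM_PER_INCH = 25.4  # one inch equals 25.4 millimeters

def mm_to_pixels(length_mm: float, dpi: float) -> int:
    """Number of pixels covering length_mm at the given resolution (dpi)."""
    return round(length_mm * dpi / MM_PER_INCH)

def pixels_to_mm(pixels: float, dpi: float) -> float:
    """Physical length in millimeters spanned by the given pixel count."""
    return pixels * MM_PER_INCH / dpi

# A 500 dpi sensor covers one inch (25.4 mm) with 500 pixels.
print(mm_to_pixels(25.4, 500))  # 500
```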
  • The normalization of image resolution means that, regardless of the type of optical image collector and regardless of the processing and assembly errors introduced during its production, the image resolution in each direction (such as the width direction and/or the height direction) is normalized to a fixed size, for example to the preset image resolution or close to the preset image resolution.
  • The optical image collector uses perspective projection imaging to obtain a perspective view of an object pressed onto its glass surface (the object plane).
  • Perspective projection is a central projection: when an object is projected onto the image plane (also called the projection plane), all projection lines emanate from a single point called the center of projection.
  • There is an object W on the horizontal plane H, and S is the projection center. S is equivalent to a projecting point light source, called the viewpoint in perspective projection imaging, and is located above the horizontal plane H.
  • Let the plane P be the image plane; the image plane P is perpendicular to the horizontal plane H. The image plane is placed behind the projection center, so that the projection center lies between the image plane and the object.
  • A_0 is the perspective projection of the object point A, also called the image point. This is the imaging principle of the optical image collector, also called the perspective principle.
  • On this basis, the simulated geometric light path diagram of the optical image collector shown in Fig. 4 is obtained.
  • The three-dimensional coordinate system of the optical image collector is set as OXYZ: the Y axis points vertically upward, the Z axis points to the left, and the X axis is perpendicular to the OYZ coordinate plane and points inward.
  • Let the viewpoint be S, with three-dimensional coordinates S(x_c, y_c, z_c), and let the optical axis ZH pass through the viewpoint S.
  • The image plane is located in the OXY coordinate plane. Unlike the three-dimensional object W in Fig. 3, the optical image collector images only a plane, called the object plane.
  • The object plane is located at a certain position on the optical axis, is perpendicular to the OYZ coordinate plane, and forms an angle α with the image plane.
  • The simulated geometric light path diagram of the optical image collector can be drawn by referring to the above description.
  • The coordinate values (x_c, y_c, z_c) of the viewpoint S, the angle α between the object plane and the image plane, and the distance z'_a between the object plane and the OXY' coordinate plane are the five basic parameters associated with a single optical image collector. These five basic parameters are determined by the three elements of perspective projection, namely the object plane, the image plane, and the viewpoint, and by their mutual positional relationship in the OXYZ coordinate system.
  • In other words, the coordinate values (x_c, y_c, z_c), the angle α between the object plane and the image plane, and the distance z'_a between the object plane and the OXY' coordinate plane are the device parameters of the optical image collector.
  • Let the coordinate values of the image point on the perspective line and of the viewpoint be P(x_p, y_p, z_p) and S(x_c, y_c, z_c) respectively, and let T(x, y, z) be any point on the perspective line. Then, according to the principles of analytic geometry, the following perspective line equation holds:
  • (x - x_c)/(x_p - x_c) = (y - y_c)/(y_p - y_c) = (z - z_c)/(z_p - z_c)
  • The OXY'Z' coordinate system is generated by rotating the OXYZ coordinate system around the X axis by α, and the OXY' coordinate plane in the OXY'Z' coordinate system is parallel to the object plane, the distance from the object plane to the OXY' coordinate plane being z'_a. Therefore, the Z'-axis coordinate value of every object point in the OXY'Z' coordinate system is z'_a, and after the coordinate rotation transformation the object point A(x_a, y_a, z_a) satisfies this condition.
  • The perspective line equation, together with the condition satisfied by the object points, can be used as the coordinate mapping relationship between object points on the object plane and image points on the image plane.
  • Each object point in the object plane can be mapped to the image plane to obtain the coordinate value of its corresponding image point; likewise, each image point in the image plane can be mapped to the object plane to obtain the coordinate value of its corresponding object point.
  • This coordinate mapping relationship between object points on the object plane and image points on the image plane can be used as a geometrical-optics mathematical model of the optical image collector.
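A minimal sketch of this mapping in both directions, assuming the standard axis-rotation convention z' = -y·sin α + z·cos α for the OXY'Z' system (the application's own sign conventions may differ); since both directions parameterize the same perspective line through the viewpoint, the round trip returns the starting image point:

```python
import math

def image_to_object(x_p, y_p, viewpoint, alpha, z_a):
    """Map image point P(x_p, y_p, 0) along the perspective line through the
    viewpoint S onto the object plane, i.e. the set of points whose Z'
    coordinate equals z_a, assuming z' = -y*sin(alpha) + z*cos(alpha)."""
    x_c, y_c, z_c = viewpoint
    sa, ca = math.sin(alpha), math.cos(alpha)
    # Parameterize T = S + t*(P - S) and solve -y*sa + z*ca = z_a for t.
    t = (z_a + y_c * sa - z_c * ca) / (-(y_p - y_c) * sa - z_c * ca)
    return (x_c + t * (x_p - x_c), y_c + t * (y_p - y_c), z_c * (1 - t))

def object_to_image(x_a, y_a, z_obj, viewpoint):
    """Map object point A along the perspective line through S back to the
    image plane z = 0."""
    x_c, y_c, z_c = viewpoint
    t = z_c / (z_c - z_obj)  # parameter where the line S -> A crosses z = 0
    return (x_c + t * (x_a - x_c), y_c + t * (y_a - y_c))

# Round trip: image point -> object plane -> image point.
S = (5.0, 100.0, 200.0)
alpha, z_a = math.radians(30), -50.0
A = image_to_object(10.0, 20.0, S, alpha, z_a)
P = object_to_image(*A, S)
print(P)  # approximately (10.0, 20.0)
```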
  • FIG. 5 shows an optional flow of the image resolution normalization processing method provided by an embodiment of the present application, which may include the following steps:
  • Step 101: obtain the device parameters of the optical image collector. The device parameters are determined by the three elements of the optical image collector (object plane, image plane, and viewpoint) in the OXYZ coordinate system, and may include the coordinate value S(x_c, y_c, z_c) of the viewpoint in the optical image collector, the angle α between the object plane and the image plane, and the distance z'_a between the object plane and the OXY' coordinate plane in the OXY'Z' coordinate system; the OXY'Z' coordinate system is obtained by rotating the OXYZ coordinate system of the optical image collector around the X axis by the angle α, and in the OXY'Z' coordinate system the OXY' coordinate plane is parallel to the object plane.
  • The size information of an image represents its size, generally expressed by a width and a height. The width direction is called a column and the height direction is called a row, and a pixel position is represented by its column and row numbers; pixel point P(i, j) denotes the pixel in the i-th column and j-th row.
  • The position information of an image can be represented by the coordinate value of its center point. For an image with a width of XW columns and a height of YW rows, the image center point is the pixel in column XW/2 and row YW/2, and the coordinate value of this center point represents the position information of the image.
  • The optical image collector in this application involves three kinds of images: the first is the original image on the image plane, also called the image plane original image; the second is the image formed by mapping the image plane image to the object plane, also called the object plane image; the third is the output image of the optical image collector, a rectangular image cut out of the object plane image, also called the object plane output image.
  • The size information of the image plane original image is determined by the image acquisition chip used by the optical image collector. For example, if an image acquisition chip with a width of 640 columns and a height of 480 rows is used, the size information of the image plane original image is 640 columns by 480 rows.
  • The image plane is located in the OXY coordinate plane of the OXYZ coordinate system; the width direction of the image plane original image is defined as the X direction, the height direction as the Y direction, and the pixel in column 0, row 0 as the coordinate origin O.
  • The object plane image is the image formed by mapping the image plane original image to the object plane. Because of the perspective projection relationship, the object plane image is no longer necessarily a rectangle; all the pixels of the object plane image form a trapezoidal area on the object plane.
  • The object plane output image is a rectangular image of a certain width and height cut out of the trapezoidal area of the object plane image.
  • The position information of the object plane output image may be the coordinate value, in the object plane, of the center point of the object plane output image (referred to as the coordinate value of the image center point), which is a parameter associated with the optical image collector. The image center point of the object plane output image should be located as close as possible to the center of the glass surface of the optical image collector.
  • Let the coordinate value of the image center point of the object plane output image, determined in advance, be (x_0, y'_0), where x_0 is the coordinate value in the X-axis direction and y'_0 the coordinate value in the Y'-axis direction; the X-axis direction is used as the width direction of the object plane output image and the Y'-axis direction as its height direction when cropping.
  • The width of the object plane output image can be, for example, 336 columns or 256 columns, with a corresponding height of 360 rows; other values are also possible. The width, height, and image center point coordinates of the object plane output image can be customized according to the actual situation, and this embodiment does not limit them.
  • The device parameters, the size information of the image plane original image, and the size information and position information of the object plane output image are all parameters obtained before the product leaves the factory; in practical applications they can be used directly for the subsequent image resolution normalization, and may simply be read from the parameters stored with the optical image collector.
  • As mentioned above, two coordinate systems are constructed in the optical image collector: the OXYZ coordinate system, and the OXY'Z' coordinate system obtained by rotating OXYZ around the X axis by the angle α. The OXY' coordinate plane of the OXY'Z' coordinate system is parallel to the object plane, at distance z'_a.
  • α and z'_a are two key parameters among the device parameters obtained in step 101; step 101 also yields the size information and position information of the object plane output image.
  • Step 102: obtain the coordinate value of each pixel in the object plane output image. The width of the object plane output image is xw columns of pixels and its height is yw rows of pixels; the coordinate value of its image center point is (x_0, y'_0), where x_0 is the coordinate value in the X-axis direction and y'_0 the coordinate value in the Y'-axis direction.
  • The position of the pixel in column i, row j of the object plane output image can then be represented by coordinate values: its two-dimensional coordinate value in the X-axis and Y'-axis directions of the OXY'Z' coordinate system is P'(i - xw/2 + x_0, j - yw/2 + y'_0).
  • Although the object plane image is mentioned many times in this application, it is never actually computed, because it is not necessary to obtain the coordinate value of every pixel of the object plane image; only the two-dimensional coordinate value, in the X-axis and Y'-axis directions of the OXY'Z' coordinate system, of each pixel of the object plane output image is obtained, centered on the image center point coordinate (x_0, y'_0), namely P'(i - xw/2 + x_0, j - yw/2 + y'_0) as above.
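The coordinate of pixel (column i, row j) described above can be computed directly; a one-line helper with illustrative names:

```python
def object_pixel_coord(i: int, j: int, xw: int, yw: int, x0: float, y0p: float):
    """2-D coordinate, in the X and Y' axis directions of the OXY'Z' system,
    of the pixel in column i, row j of the object plane output image whose
    image center point is (x0, y'0)."""
    return (i - xw / 2 + x0, j - yw / 2 + y0p)

# The center pixel of a 336x360 output image lands on the image center point.
print(object_pixel_coord(168, 180, 336, 360, 100.0, 50.0))  # (100.0, 50.0)
```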
  • After the coordinate value of each pixel in the object plane output image has been obtained in step 102, the coordinate mapping relationship between object points on the object plane and image points on the image plane is used to map the coordinate value of each pixel of the object plane output image to the image plane, obtaining the coordinate value of its mapping point (also called image point) P(x_p, y_p) on the image plane.
  • Applying the coordinate mapping relationship between object points on the object plane and image points on the image plane, P(x_pk, y_pk) is the two-dimensional coordinate value, in the X-axis and Y-axis directions, of the mapping point in the image plane of the pixel P(x_ak, y_ak, z_ak) of the object plane output image; this yields the pixel point mapping relationship.
  • Step 103 calculates the two-dimensional X-axis/Y-axis coordinate value (x pk , y pk ) of the mapping point, in the image plane, of each pixel. In the size information of the image plane original image obtained in step 101, XW is the width of the image plane original image and YW is its height. If 0 ≤ x pk < XW and 0 ≤ y pk < YW, then (x pk , y pk ) falls on the pixel in column x pk , row y pk of the image plane original image, and the gray value of that pixel is assigned to the pixel P(x ak , y ak , z ak ) of the object plane output image.
  • If x pk < 0, or x pk ≥ XW, or y pk < 0, or y pk ≥ YW, the coordinate value of the mapping point lies outside the coordinate range of the image plane original image and does not overlap the coordinates of any pixel of the image plane original image; in that case the gray value of the pixel P(x ak , y ak , z ak ) of the object plane output image is assigned an effective value in [0, 255], that is, any value from 0 to 255 inclusive; the specific value assigned is not limited.
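The in-range/out-of-range assignment rule above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes images stored as row-major lists of gray values, and the function name `assign_gray`, the fill value, and the nearest-integer rounding of the mapping coordinates are assumptions.

```python
FILL_VALUE = 255  # assumed fill for out-of-range mapping points; any value in [0, 255] is allowed

def assign_gray(mapped_pts, image_plane, XW, YW):
    """mapped_pts: list of (x_pk, y_pk) floats, one per object-plane output pixel.
    image_plane: row-major list of gray values, width XW, height YW.
    Returns the gray values of the object plane output image, in the same order."""
    out = []
    for x_pk, y_pk in mapped_pts:
        col, row = int(round(x_pk)), int(round(y_pk))
        if 0 <= col < XW and 0 <= row < YW:
            out.append(image_plane[row * XW + col])   # overlap: copy the gray value
        else:
            out.append(FILL_VALUE)                    # no overlap: assign an effective value
    return out
```

Mapping points that fall outside the 0 ≤ x < XW, 0 ≤ y < YW window take the fill value, exactly as methods 1 to 4 below describe.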
  • The input to the program is:
  • size information of the image plane original image: XW, YW; // Note: XW is the width, YW is the height
  • size information of the object plane output image: xw, yw; // Note: xw is the width, yw is the height
  • the coordinate value of the image center point of the object plane output image (x 0 , y′ 0 ), where x 0 is the X-axis coordinate value and y′ 0 is the Y′-axis coordinate value.
  • jj denotes the jj-th row of the image plane original image
  • IMG[tsa] = Image[ts]; // Note: assign the gray value of the corresponding pixel of the image plane original image to the corresponding pixel of the object plane output image.
  • In this way the coordinate value of every pixel of the object plane output image on the image plane can be obtained and the gray value assignment of all pixels of the object plane output image completed, yielding the object plane output image of the optical image collector.
  • To summarize: the device parameters of the optical image collector, the size information of its object plane output image, and the size information of the image plane original image are acquired; the coordinate value of each pixel of the object plane output image is obtained; from these coordinate values, the device parameters, and the coordinate mapping relationship between object points on the object plane and image points on the image plane, the coordinate value of each pixel's mapping point in the image plane is obtained, giving the pixel point mapping relationship, which is a one-to-one correspondence between pixels of the object plane output image and mapping points in the image plane; then, according to the pixel point mapping relationship, the gray value of each pixel of the image plane original image, and the size information of the image plane original image, the gray value of each pixel of the object plane output image is obtained, producing the object plane output image.
  • Since the pixels of the object plane output image are equivalent to object points in the object plane, the distances between object points carry only computation or discretization errors, with no error introduced by perspective projection imaging. The displacement deviation of the pixels of the object plane output image therefore does not exceed one pixel, and image resolution normalization is achieved.
  • The image resolution normalization processing method provided in this embodiment can be applied to fingerprint collectors.
  • When two fingerprint collectors with different image resolutions capture fingerprints of the same finger, the fingerprint feature information they obtain differs greatly, which makes fingerprint collectors poorly interchangeable and poorly compatible.
  • For example, a fingerprint image enrolled on fingerprint collector 1 is recognized well on fingerprint collector 1, but if the enrolled fingerprint image is copied to fingerprint collector 2, the difference in image resolution from collector 1 makes recognition poor or even impossible.
  • If the image resolution normalization processing method provided in this embodiment is applied to both fingerprint collector 1 and fingerprint collector 2, both collectors use the object plane output image as their output image, and the object plane output image has no resolution error, or only a very small one. Applying the method therefore makes different optical image collectors interchangeable and compatible, improving interchangeability and compatibility.
  • Cutting a rectangular image out of the object plane image means obtaining, from the coordinate value of the image center point and the size information of the object plane output image, the coordinate values of the pixels of the object plane output image in the OXY′Z′ coordinate system, thereby completing the cut from the object plane image.
  • The coordinate mapping relationship and coordinate transformation relationship between object points on the object plane and image points on the image plane must be obtained in combination with the device parameters of the optical image collector.
  • The device parameters of the optical image collector must be obtained in advance, and any set of device parameters corresponds to the image resolution of the specified object plane output image.
  • the obtaining process of the device parameters of the optical image collector is shown in Figure 6, and may include the following steps:
  • the coordinate value of the image point is the two-dimensional coordinate value of the image point in the X-axis direction and the Y-axis direction in the image plane.
  • the initial device parameter set includes the initial value of each parameter in the device parameters, and the initial value can be set by the user, and the specific value of the initial value is not limited in this embodiment.
  • the value of each parameter can be adjusted within its preset search range
  • each adjustment within the preset search range is applied on the basis of the previously adjusted value
  • The preset search ranges of the parameters among the device parameters may be the same or different; for example, the preset search range of x c is ±20, and the preset search range of y c and z c is ±50.
  • Each device parameter set to be processed contains the same device parameters, but the value of each parameter differs between sets, and the multiple device parameter sets to be processed may include the initial device parameter set.
  • From each device parameter set to be processed, the coordinate mapping relationship, and the coordinate values of the image points in the image plane, the object plane object point coordinate set corresponding to that parameter set is obtained. The object plane object point coordinate set contains the coordinate values, under that parameter set, of multiple object points in the object plane, the object points being obtained by mapping multiple image points onto the object plane.
  • The multiple image points are mapped onto the object plane, appearing as object points on the object plane, and the coordinate value of each object point in the object plane is obtained.
  • The coordinate value of each object point in the object plane is computed by combining the above coordinate mapping relationship with the parameter values in each device parameter set to be processed. Because the parameter values may differ between sets, the computation is organized per parameter set: for the same object point, its coordinate value in the object plane is calculated separately under each device parameter set to be processed. The process is as follows:
  • The coordinate mapping relationship from an image point on the image plane to an object point on the object plane is as given by the geometric-optical model above.
  • x c , y c , z c , z′ a , α take the parameter values of a device parameter set to be processed; every combination of parameter values among the device parameter sets to be processed is enumerated, and the above operation is performed once for each combination, so that the coordinate value of each object point is obtained under every device parameter set to be processed.
  • The maximum distance difference is the maximum among the multiple distance differences of the object plane object point coordinate set.
  • A distance difference is the difference between the second distance between any two object points in the object plane object point coordinate set and their corresponding first distance.
  • The second distance is calculated from the coordinate values of the object points mapped from the image points onto the object plane; for example, the second distance between the i-th object point and the j-th object point is:
  • d ij = sqrt((x ai − x aj ) 2 + (y ai − y aj ) 2 + (z ai − z aj ) 2 );
  • N is the total number of object points when calculating the second distance;
  • the coordinate value of the i-th object point is A(x ai , y ai , z ai );
  • the coordinate value of the j-th object point is A(x aj , y aj , z aj ).
  • Any two object points yield one distance difference, so an object plane object point coordinate set corresponds to multiple distance differences; the distance difference with the largest value is selected from them as the maximum distance difference D max of that object plane object point coordinate set.
  • The maximum distance difference of each object plane object point coordinate set is compared with a preset distance threshold, and from the multiple object plane object point coordinate sets, those whose maximum distance difference is less than the preset distance threshold are selected as target object plane object point coordinate sets; that is, the sets satisfying D max < D maxth are used as target sets, where D maxth is the preset distance threshold, whose value is not limited.
  • Among the device parameter sets corresponding to the target object plane object point coordinate sets, the one with the smallest D max is selected as the device parameters of the optical image collector; that is, the group of parameters x c , y c , z c , z′ a , α with the smallest D max satisfying D max < D maxth are the device parameters of the optical image collector.
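The computation of D max and the selection of the best parameter set can be sketched like this. It is a hedged illustration, not the patent's code: the function names, the layout of the data (coordinates as (x, y, z) tuples, candidates as (parameter set, object points) pairs), and the use of the absolute difference are assumptions.

```python
import math
from itertools import combinations

def max_distance_diff(points, first_dist):
    """points: object-point coordinates under one candidate device parameter set.
    first_dist[(i, j)]: the ideal (discretized) first distance between points i and j.
    Returns D_max, the largest |second distance - first distance| over all pairs."""
    d_max = 0.0
    for i, j in combinations(range(len(points)), 2):
        second = math.dist(points[i], points[j])          # second distance on the object plane
        d_max = max(d_max, abs(second - first_dist[(i, j)]))
    return d_max

def select_device_params(candidates, first_dist, d_maxth):
    """candidates: list of (param_set, points). Keeps candidates with D_max < D_maxth
    and returns the parameter set with the smallest D_max, or None if none qualifies."""
    best_params, best_d = None, None
    for params, pts in candidates:
        d = max_distance_diff(pts, first_dist)
        if d < d_maxth and (best_d is None or d < best_d):
            best_params, best_d = params, d
    return best_params
```

With an undistorted point set D max is 0, so the correct parameter set wins the comparison against any distorted alternative.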
  • the default search range of α is ±2 degrees
  • the default search range of x c is ±20
  • the default search range of y c , z c , z′ a is ±50, the purpose being to obtain the device parameter sets to be processed
  • If i4 < 100, execute step S11;
  • If i3 < 100, execute step S9;
  • If i2 < 100, execute step S7;
  • If i1 < 40, execute step S5; // steps S20 to S29 limit the maximum values to which the different device parameters are adjusted, to reduce the amount of computation
  • The object plane output image is cut out of the object plane image.
  • The position information of the object plane output image within the object plane must first be obtained.
  • The position information can be expressed in various forms; for example, the coordinate value of the image center point of the object plane output image can be used to describe the position of the object plane output image.
  • Since the device parameters of the optical image collector have been obtained above, these device parameters and the coordinate mapping relationship between object points on the object plane and image points on the image plane are used to map every pixel of the image plane original image onto the object plane, obtaining the coordinate values of these mapping points in the OXY′Z′ coordinate system. From the coordinate values of the mapping points on the object plane, the maximum X max and minimum X min of the X-axis coordinate values, and the maximum Y′ max and minimum Y′ min of the Y′-axis coordinate values, are obtained. The mean (X max +X min )/2 of the maximum and minimum X-axis coordinate values is taken as the X-axis coordinate value x 0 of the image center point of the object plane output image, and the mean (Y′ max +Y′ min )/2 of the maximum and minimum Y′-axis coordinate values is taken as the Y′-axis coordinate value y′ 0 of the image center point.
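The center-point computation just described can be sketched in a few lines. This is a hedged illustration; the function name `image_center` and the representation of mapping points as (x, y′) pairs are assumptions.

```python
def image_center(mapped_pts):
    """mapped_pts: (x, y') coordinates, in the OXY'Z' system, of every pixel of the
    image plane original image mapped onto the object plane.
    Returns the image center point (x0, y'0) of the object plane output image."""
    xs = [p[0] for p in mapped_pts]
    ys = [p[1] for p in mapped_pts]
    x0 = (max(xs) + min(xs)) / 2       # (X_max + X_min) / 2
    y0 = (max(ys) + min(ys)) / 2       # (Y'_max + Y'_min) / 2
    return x0, y0
```

The running max/min updates in the bullets below (X max = max(X max , x a ), and so on) compute the same extrema incrementally instead of holding all points in memory.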
  • The coordinate mapping relationship from an image point on the image plane to an object point on the object plane is as given above;
  • x c , y c , z c , z′ a , α are the device parameters obtained in the previous steps;
  • P(x p , y p ) denotes the two-dimensional X-axis/Y-axis coordinate value of an image point in the image plane, and the coordinate value of its mapping point in the object plane is denoted A(x a , y a , z a ).
  • X max = max(X max , x a );
  • Y′ max = max(Y′ max , y′ a ).
  • The image resolution of the object plane output image of the optical image collector is 500 dpi.
  • The line width of all lines of the square frame is 0.05 mm. At an image resolution of 500 dpi (500 dots per inch; in the object plane, one dot corresponds to one pixel) the line width is exactly one pixel, the side length of each side of the square is 200 pixels, and the distance between the diagonal vertices is 282.8427 pixels. Let the four vertices of the square be a 0 , a 1 , a 2 , a 3 ; then the distances between the four vertices are:
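The stated pixel distances follow directly from the 500 dpi resolution (10.16 mm is 0.4 inch), and can be checked with a two-line computation:

```python
import math

DPI = 500
side_mm = 10.16
side_px = side_mm / 25.4 * DPI      # 0.4 inch * 500 dpi = 200 pixels per side
diag_px = side_px * math.sqrt(2)    # distance between diagonal vertices

print(round(side_px), round(diag_px, 4))  # → 200 282.8427
```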
  • the optical image collector collects the line image on the test piece, and the four vertices of the square on the test piece are equivalent to setting four object points on the object plane.
  • The image plane original image of a test piece is collected; in the figure, the image plane original image has a width of 640 columns of pixels and a height of 480 rows of pixels. It is easy to see that the square at the center of the test piece becomes a trapezoid in the image plane original image.
  • the four vertices, four sides and two diagonals of the square can still be discerned.
  • the definition of coordinates is as mentioned above.
  • the X-axis is the width direction
  • the Y-axis is the height direction
  • the Z-axis points to the direction of the object plane, and is perpendicular to the OXY plane.
  • the pixel in the 0th column, 0th row is the coordinate origin O.
  • a set of initial values for the selected device parameters:
  • The output image of the optical image collector is the object plane output image (abbreviated as the object plane output image).
  • The corresponding image plane original image in Figure 2 has a width of 640 columns of pixels and a height of 480 rows of pixels.
  • The object plane output image has a width of 336 columns of pixels and a height of 360 rows of pixels.
  • After the device parameters of the collector are obtained by the aforementioned method, the coordinate value of the image center point of the object plane output image is obtained:
  • The position information of the object plane output image, the size information of the object plane output image, and the size information of the image plane original image are used to obtain the object plane output image.
  • All pixels of the image plane original image can be mapped onto the object plane output image, but the pixels of the image plane original image are discrete points, and they remain discontinuous after being mapped onto the object plane output image.
  • The positions of all pixels of the object plane output image can be expressed as:
  • each image point is in the OXY′Z′ coordinate system, and the coordinate translation distance is:
  • The coordinate value (81, 141) in the image plane original image corresponding to the pixel P″(70, 79) of the object plane output image is obtained; the gray value of that pixel of the image plane original image is then assigned to the pixel at P″(70, 79) of the object plane output image.
  • An embodiment of the present application also provides an image resolution normalization processing device, the structure of which is shown in Fig. 9, comprising an acquisition unit 10, a determination unit 20, a coordinate conversion unit 30, and a gray value assignment unit 40.
  • the acquiring unit 10 is configured to acquire device parameters of the optical image collector, acquire size information of an image plane original image of the optical image collector, and acquire size information and position information of an object plane output image of the optical image collector.
  • The device parameters include the coordinate value (x c , y c , z c ) of the viewpoint of the optical image collector, the angle α between the object plane and the image plane of the optical image collector, and the distance z′ a between the object plane and the OXY′ coordinate plane of the OXY′Z′ coordinate system;
  • the OXY′Z′ coordinate system is obtained by rotating the OXYZ coordinate system of the optical image collector about the X axis by the angle α;
  • the coordinate value of the viewpoint comprises its coordinate values on the three axes of the OXYZ coordinate system;
  • the position information of the object plane output image can be represented by the coordinate value of the image center point of the object plane output image.
  • The determination unit 20 is configured to obtain the coordinate value of each pixel of the object plane output image according to the device parameters and the size and position information of the object plane output image.
  • The coordinate conversion unit 30 is configured to obtain, according to the coordinate value of each pixel of the object plane output image and the coordinate mapping relationship between object points on the object plane and image points on the image plane, the coordinate value of each pixel's mapping point in the image plane,
  • so as to obtain the pixel point mapping relationship, which is the one-to-one correspondence between pixels of the object plane output image and mapping points in the image plane.
  • The coordinate mapping relationship between object points on the object plane and image points on the image plane includes:
  • (x a , y a , z a ), the coordinate value of an object point in the object plane;
  • (x p , y p ), the coordinate value of an image point in the image plane, specifically the image point corresponding to the object point in the image plane.
  • The coordinate conversion unit 30 obtains the coordinate values of the mapping points as follows: according to the coordinate mapping relationship between object points on the object plane and image points on the image plane, the relational expression mapping each pixel of the object plane output image to the image plane is obtained, where (x ak , y ak , z ak ) is the coordinate value of a pixel of the object plane output image in the OXYZ coordinate system and (x pk , y pk ) is the two-dimensional X-axis/Y-axis coordinate value of the mapping point when the pixel is mapped onto the image plane; the coordinate value of each pixel of the object plane output image is input into the relational expression to obtain the two-dimensional X-axis/Y-axis coordinate value of each mapping point.
  • The gray value assignment unit 40 is configured to obtain the gray value of each pixel of the object plane output image according to the pixel point mapping relationship, the gray value of each pixel of the image plane original image, and the size information of the image plane original image, so as to obtain the object plane output image.
  • If the coordinate value of a mapping point on the image plane overlaps that of a pixel of the image plane original image, the gray value of that pixel of the image plane original image is assigned to the corresponding pixel of the object plane output image;
  • if the coordinate value of a mapping point on the image plane does not overlap that of any pixel of the image plane original image, the gray value of the corresponding pixel of the object plane output image is assigned an effective value, the effective value being a value in [0, 255];
  • non-overlap between the coordinate value of a mapping point on the image plane and the coordinate values of the pixels of the image plane original image is determined by at least one of the following: method 1, the X-axis coordinate value of the mapping point is less than 0; method 2, the X-axis coordinate value of the mapping point is greater than or equal to the width of the image plane original image; method 3, the Y-axis coordinate value of the mapping point is less than 0; method 4, the Y-axis coordinate value of the mapping point is greater than or equal to the height of the image plane original image.
  • An embodiment of the present application also provides an optical image collector, comprising: a processor and a memory for storing processor-executable instructions, wherein the processor is configured to execute the instructions to implement the image resolution normalization processing method above.
  • The optical image collector may also include a collection component for collecting the image plane original image; this embodiment does not describe the inherent components of the optical image collector one by one.
  • An embodiment of the present application provides a computer-readable storage medium whose instructions, when executed by the processor of an optical image collector, enable the optical image collector to execute the image resolution normalization processing method above.
  • The embodiments in this specification are described in a progressive manner, and the features recorded in the embodiments may be replaced or combined with one another; each embodiment focuses on its differences from the other embodiments.
  • For the same and similar parts of the embodiments, reference may be made to one another.
  • For the device-type embodiments, reference is made to the descriptions of the method embodiments for relevant information.
  • The above is only the preferred embodiment of the present application. It should be pointed out that those of ordinary skill in the art can make certain improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.


Abstract

An image resolution normalization processing method and device: acquire the device parameters of an optical image collector, the size information of the object plane output image and of the image plane original image, and the position information of the object plane output image; obtain the coordinate value of each pixel of the object plane output image according to the position information, size information, and device parameters of the object plane output image; obtain the coordinate value, in the image plane, of the mapping point of each pixel of the object plane output image according to the pixel coordinate values, the device parameters, and the coordinate mapping relationship between object points on the object plane and image points on the image plane, so as to obtain a pixel point mapping relationship; obtain the gray value of each pixel of the object plane output image according to the pixel point mapping relationship, the gray value of each pixel of the image plane original image, and the size information of the image plane original image, so as to obtain the object plane output image, realizing normalization of the image resolution.

Description

An image resolution normalization processing method and device
This application claims priority to the Chinese patent application No. 202110790061.2, filed with the China National Intellectual Property Administration on July 13, 2021 and entitled "An image resolution normalization processing method and device", the entire contents of which are incorporated herein by reference.
Technical Field
This application belongs to the technical field of image processing, and in particular relates to an image resolution normalization processing method and device.
Background Art
An optical image collector is a commonly used image acquisition device through which an image of the collected object is obtained. For example, fingerprint collectors and palm print collectors are currently common image acquisition devices: a fingerprint collector captures a user's fingerprint image, and a palm print collector captures a user's palm print image. Taking the fingerprint collector as an example, its acquisition principle and its problems are described below:
As shown in Figure 1, when a fingerprint is collected with a fingerprint collector, the user presses a finger against the glass surface of the collector; the glass surface pressed by the finger is imaged by perspective projection through an imaging lens group, and a photograph is then taken by an image acquisition chip. In the imaging light path of an optical image collector such as a fingerprint collector, the image plane of the imaging lens group is not parallel to the object plane formed by the glass surface, so the image plane output image captured on the image plane has a "near objects large, far objects small" stereoscopic effect. This effect is called keystone (trapezoidal) distortion. Keystone distortion makes the image resolution differ between regions and between directions of the image plane output image; differing resolutions in different directions mean the image has a resolution error, and the resolution error lowers the accuracy of image recognition or even makes recognition impossible.
As shown in Figure 2, which is an image, captured with an optical image collector, of a square with a side length of 10.16 mm on the object plane, the square on the object plane clearly becomes a trapezoid on the image plane. The coordinate values of the four vertices of this trapezoid, obtained with image acquisition software, are:
A 0 (126, 86), A 1 (490, 88), A 2 (525, 347), A 3 (91, 345).
By calculation, the image resolutions between the vertices are:
A 0 A 1 = 910 dpi (Dots Per Inch); A 1 A 2 = 653 dpi;
A 2 A 3 = 1085 dpi; A 0 A 3 = 653 dpi; A 0 A 2 = 843 dpi; A 1 A 3 = 839 dpi.
It can be seen that the resolution differs between regions and between directions of the image plane output image. Before image recognition, image resolution normalization must therefore be performed, so that every region of the image output by the optical image collector has the same resolution in every direction. Furthermore, equal resolution across regions and directions still does not fully satisfy the requirements of image acquisition and recognition: the resolution of the output image should have a specified value, and its error must not be too large, because when the resolution differs between images, the difficulty of image recognition increases.
To correct the keystone distortion, a multi-prism combination can be added to the optical image collector and the distortion corrected by the optical method of the prism combination. However, adding a multi-prism combination increases the geometric size and cost of the optical image collector, and errors in machining and assembling the prism combination also make the resolution differ between directions, so the image output by the optical image collector still has a certain resolution error.
Summary of the Invention
This application provides an image resolution normalization processing method and device.
In one aspect, this application provides an image resolution normalization processing method, the method comprising:
acquiring device parameters of an optical image collector, acquiring size information of an image plane original image of the optical image collector, and acquiring size information and position information of an object plane output image of the optical image collector;
obtaining the coordinate value of each pixel of the object plane output image according to the device parameters and the size information and position information of the object plane output image;
obtaining the coordinate value, in the image plane, of the mapping point of each pixel of the object plane output image according to the coordinate values of the pixels of the object plane output image and the coordinate mapping relationship between object points on the object plane and image points on the image plane, so as to obtain a pixel point mapping relationship, the pixel point mapping relationship being a one-to-one correspondence between the pixels of the object plane output image and the mapping points in the image plane;
obtaining the gray value of each pixel of the object plane output image according to the pixel point mapping relationship, the gray value of each pixel of the image plane original image, and the size information of the image plane original image, so as to obtain the object plane output image.
Optionally, obtaining the gray value of each pixel of the object plane output image according to the pixel point mapping relationship, the gray values of the pixels of the image plane original image, and the size information of the image plane original image comprises:
if the coordinate value of a mapping point on the image plane overlaps the coordinate value of a pixel of the image plane original image, assigning the gray value of that pixel of the image plane original image to the corresponding pixel of the object plane output image;
if the coordinate value of a mapping point on the image plane does not overlap the coordinate value of any pixel of the image plane original image, assigning the gray value of the corresponding pixel of the object plane output image to an effective value, the effective value being a value in [0, 255];
wherein non-overlap between the coordinate value of a mapping point on the image plane and the coordinate values of the pixels of the image plane original image is determined by at least one of the following methods:
method 1, the X-axis coordinate value of the mapping point is less than 0; method 2, the X-axis coordinate value of the mapping point is greater than or equal to the width of the image plane original image; method 3, the Y-axis coordinate value of the mapping point is less than 0; method 4, the Y-axis coordinate value of the mapping point is greater than or equal to the height of the image plane original image.
Optionally, the device parameters include the coordinate value (x c , y c , z c ) of the viewpoint of the optical image collector, the angle α between the object plane and the image plane of the optical image collector, and the distance z′ a between the object plane and the OXY′ coordinate plane of the OXY′Z′ coordinate system. The OXY′Z′ coordinate system is obtained by rotating the OXYZ coordinate system of the optical image collector about the X axis by the angle α. The coordinate value of the viewpoint comprises its coordinate values on the three axes of the OXYZ coordinate system, in which the Y axis points vertically upward, the Z axis points to the left, the X axis is perpendicular to the OYZ coordinate plane and points inward, and the OXY coordinate plane coincides with the image plane.
Optionally, the coordinate mapping relationship between object points on the object plane and image points on the image plane comprises:
(x a − x c )/(x p − x c ) = (y a − y c )/(y p − y c ) = (z a − z c )/(0 − z c );
z′ a = z a cos α − y a sin α;
where (x a , y a , z a ) is the coordinate value of an object point in the object plane, and (x p , y p ) is the two-dimensional X-axis/Y-axis coordinate value of the image point, in the image plane, corresponding to that object point.
Optionally, obtaining the coordinate value, in the image plane, of the mapping point of each pixel of the object plane output image according to the coordinate values of the pixels of the object plane output image and the coordinate mapping relationship between object points on the object plane and image points on the image plane comprises:
obtaining, from the following components of the coordinate mapping relationship between object points and image points:
(x ak − x c )/(x pk − x c ) = (z ak − z c )/(0 − z c );
(y ak − y c )/(y pk − y c ) = (z ak − z c )/(0 − z c );
the relational expression mapping each pixel of the object plane output image to the image plane, the relational expression comprising:
x pk = x c − z c (x ak − x c )/(z ak − z c ); y pk = y c − z c (y ak − y c )/(z ak − z c );
where (x ak , y ak , z ak ) is the coordinate value of a pixel of the object plane output image in the OXYZ coordinate system of the optical image collector, and (x pk , y pk ) is the two-dimensional X-axis/Y-axis coordinate value of the mapping point when the pixel is mapped onto the image plane; in the OXYZ coordinate system the Y axis points vertically upward, the Z axis points to the left, the X axis is perpendicular to the OYZ coordinate plane and points inward, and the OXY coordinate plane coincides with the image plane;
inputting the coordinate value of each pixel of the object plane output image into the relational expression to obtain the coordinate value of each mapping point.
Optionally, obtaining the coordinate value of each pixel of the object plane output image according to the device parameters and the size information and position information of the object plane output image comprises:
for the pixel P″(i, j) in column i, row j of the object plane output image, i = 0, 1, 2, …, xw; j = 0, 1, 2, …, yw, obtaining the two-dimensional X-axis/Y′-axis coordinate value of P″(i, j) in the OXY′Z′ coordinate system as P′(i+g, j+h), with g = x 0 − xw/2 and h = y′ 0 − yw/2, where xw is the width of the object plane output image, yw is its height, and (x 0 , y′ 0 ) is the coordinate value of the image center point of the object plane output image, the coordinate value of the image center point being the position information;
using the coordinate transformation relationship between the OXY′Z′ and OXYZ coordinate systems, obtaining the coordinate value of pixel P″(i, j) in the OXYZ coordinate system as (x ak , y ak , z ak ), with x ak = i+g; y ak = (j+h)cos α − z′ a sin α; z ak = (j+h)sin α + z′ a cos α; k = xw*j + i;
the OXY′Z′ coordinate system being obtained by rotating the OXYZ coordinate system of the optical image collector about the X axis by the angle α, the angle α being the angle between the object plane and the image plane of the optical image collector; in the OXYZ coordinate system the Y axis points vertically upward, the Z axis points to the left, the X axis is perpendicular to the OYZ coordinate plane and points inward, and the OXY coordinate plane coincides with the image plane.
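The two steps above (offsetting by the image center within the OXY′ plane, then rotating back into OXYZ) can be sketched as follows. This is a hedged illustration; the function name and argument order are assumptions, with `alpha` in radians and `z_ap` standing for z′ a .

```python
import math

def object_pixel_coords(i, j, xw, yw, x0, y0p, alpha, z_ap):
    """Coordinate value, in the OXYZ system, of pixel P''(i, j) of the
    object plane output image (width xw, height yw, center (x0, y0p))."""
    g = x0 - xw / 2                                        # X offset of the image center
    h = y0p - yw / 2                                       # Y' offset of the image center
    x_ak = i + g
    y_ak = (j + h) * math.cos(alpha) - z_ap * math.sin(alpha)
    z_ak = (j + h) * math.sin(alpha) + z_ap * math.cos(alpha)
    return x_ak, y_ak, z_ak
```

Since the second step is a pure rotation about the X axis, it preserves the distance sqrt(y² + z²) of the point from the X axis, which gives a quick sanity check on the transform.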
Optionally, the process of obtaining the device parameters of the optical image collector in advance comprises:
setting a plurality of object points dispersed over the object plane of the optical image collector, and discretizing the distances between the object points according to the specified image resolution of the object plane output image to obtain the first distance between any two object points;
obtaining the coordinate values of the image points corresponding to all object points in the image plane original image, the coordinate value of an image point being its two-dimensional X-axis/Y-axis coordinate value in the image plane;
obtaining a plurality of device parameter sets to be processed according to the initial device parameter set of the optical image collector and the preset search range of each parameter among the device parameters;
obtaining, for each device parameter set to be processed, the corresponding object plane object point coordinate set according to that parameter set, the coordinate mapping relationship, and the coordinate values of the image points in the image plane; the object plane object point coordinate set comprises the coordinate values, under that parameter set, of a plurality of object points in the object plane obtained by mapping a plurality of image points onto the object plane;
obtaining the second distance between any two object points in the object plane object point coordinate set;
obtaining the maximum distance difference of each object plane object point coordinate set, the maximum distance difference being the maximum among the multiple distance differences of the set, a distance difference being the difference between the second distance between any two object points in the set and their corresponding first distance;
determining the target object plane object point coordinate sets whose maximum distance difference is less than a preset distance threshold, and selecting, as the device parameters of the optical image collector, the device parameters of the parameter set to be processed corresponding to the smallest maximum distance difference among the target sets.
Optionally, the process of obtaining the position information of the object plane output image in advance comprises:
mapping each pixel of the image plane original image onto the object plane according to the device parameters and the coordinate mapping relationship between object points on the object plane and image points on the image plane, obtaining the coordinate value of each pixel's mapping point in the object plane;
obtaining the coordinate value of each mapping point in the OXY′Z′ coordinate system, the OXY′Z′ coordinate system being obtained by rotating the OXYZ coordinate system of the optical image collector about the X axis by the angle α, the angle α being the angle between the object plane and the image plane of the optical image collector; in the OXYZ coordinate system the Y axis points vertically upward, the Z axis points to the left, the X axis is perpendicular to the OYZ coordinate plane and points inward, and the OXY coordinate plane coincides with the image plane;
comparing the X-axis coordinate values of all mapping points in the OXY′Z′ coordinate system with the initial maximum X-axis coordinate X max to obtain the maximum X-axis coordinate value;
comparing the X-axis coordinate values of all mapping points in the OXY′Z′ coordinate system with the initial minimum X-axis coordinate X min to obtain the minimum X-axis coordinate value;
comparing the Y′-axis coordinate values of all mapping points in the OXY′Z′ coordinate system with the initial maximum Y′-axis coordinate Y′ max to obtain the maximum Y′-axis coordinate value;
comparing the Y′-axis coordinate values of all mapping points in the OXY′Z′ coordinate system with the initial minimum Y′-axis coordinate Y′ min to obtain the minimum Y′-axis coordinate value;
taking the mean of the maximum and minimum X-axis coordinate values as the X-axis coordinate value of the image center point of the object plane output image, and the mean of the maximum and minimum Y′-axis coordinate values as the Y′-axis coordinate value of the image center point of the object plane output image; the coordinate value of the image center point of the object plane output image is the position information of the object plane output image.
In another aspect, this application provides an image resolution normalization processing device, the device comprising:
an acquisition unit, configured to acquire device parameters of an optical image collector, acquire size information of an image plane original image of the optical image collector, and acquire size information and position information of an object plane output image of the optical image collector;
a determination unit, configured to obtain the coordinate value of each pixel of the object plane output image according to the device parameters and the size information and position information of the object plane output image;
a coordinate conversion unit, configured to obtain the coordinate value, in the image plane, of the mapping point of each pixel of the object plane output image according to the coordinate values of the pixels of the object plane output image and the coordinate mapping relationship between object points on the object plane and image points on the image plane, so as to obtain a pixel point mapping relationship, the pixel point mapping relationship being a one-to-one correspondence between the pixels of the object plane output image and the mapping points in the image plane;
a gray value assignment unit, configured to obtain the gray value of each pixel of the object plane output image according to the pixel point mapping relationship, the gray value of each pixel of the image plane original image, and the size information of the image plane original image, so as to obtain the object plane output image.
In yet another aspect, this application provides an optical image collector, comprising a processor and a memory for storing processor-executable instructions, wherein the processor is configured to execute the instructions to implement the image resolution normalization processing method above.
In yet another aspect, this application provides a computer-readable storage medium; when the instructions in the computer-readable storage medium are executed by the processor of an optical image collector, the optical image collector is enabled to execute the image resolution normalization processing method above.
In the image resolution normalization processing method and device above, the device parameters of the optical image collector, the size information of the object plane output image and of the image plane original image, and the position information of the object plane output image are first acquired; the coordinate value of each pixel of the object plane output image is obtained according to the position information, size information, and device parameters of the object plane output image; the coordinate value, in the image plane, of the mapping point of each pixel of the object plane output image is obtained according to the pixel coordinate values, the device parameters, and the coordinate mapping relationship between object points on the object plane and image points on the image plane, yielding the pixel point mapping relationship, a one-to-one correspondence between pixels of the object plane output image and mapping points in the image plane; the gray value of each pixel of the object plane output image is obtained according to the pixel point mapping relationship, the gray values of the pixels of the image plane original image, and the size information of the image plane original image, producing the object plane output image. Since the pixels of the object plane output image are equivalent to object points in the object plane, the distances between object points carry only computation or discretization errors and no error introduced by perspective projection imaging; the displacement deviation of the pixels of the object plane output image therefore does not exceed one pixel, and normalization of the image resolution is achieved.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the acquisition principle of a fingerprint collector in the prior art;
Figure 2 is a schematic diagram of image distortion provided by an embodiment of this application;
Figure 3 is a schematic diagram of the perspective principle of an optical image collector provided by an embodiment of this application;
Figure 4 is a simulated geometric light path diagram of an optical image collector provided by an embodiment of this application;
Figure 5 is a flowchart of an image resolution normalization processing method provided by an embodiment of this application;
Figure 6 is a flowchart of obtaining the device parameters of an optical image collector provided by an embodiment of this application;
Figure 7 is a schematic diagram of a test piece provided by an embodiment of this application;
Figure 8 is a comparison of the image plane original image and the object plane output image of a test piece provided by an embodiment of this application;
Figure 9 is a schematic structural diagram of an image resolution normalization processing device provided by an embodiment of this application.
Detailed Description
The terms and principles involved in the embodiments of this application are explained first:
The images handled by modern computer image processing technology are digital images, and the digitization of an image comprises two processes: sampling and quantization. Sampling divides a two-dimensional image (width direction and height direction) into small unit regions called pixels (sometimes called pels). Quantization represents the brightness of a single pixel with an integer called the gray value. Image resolution is the number of pixels contained per unit length in the width or height direction; its unit is "pixels per unit length", commonly dpi, i.e., dots per inch (Dots Per Inch), the number of pixels within one inch, one inch being equal to 25.4 mm. Unless otherwise stated, all length units in this application, including coordinate values, take points (or pixels) of the discrete space as the unit, under the image resolution set in advance; for simplicity, points are uniformly called pixels.
Normalization of image resolution means that, regardless of the type of optical image collector and regardless of machining and assembly errors in its production, the image resolutions of different regions and different directions (such as the width and/or height direction) of the image output by the optical image collector are normalized to a fixed size, for example normalized to, or close to, an image resolution set in advance.
An optical image collector obtains a perspective view of the object pressed against its glass surface (the object plane) by perspective projection imaging. Perspective projection is a central projection: when an object is projected onto the image plane (also called the projection plane), all projection rays emanate from a point called the projection center. As shown in Figure 3, let there be an object W on a horizontal plane H, and let S be the projection center; S is equivalent to a projecting point light source, called the viewpoint in perspective projection imaging, located above the horizontal plane H. Let the plane P be the image plane, perpendicular to the horizontal plane H. For convenience, the image plane is placed behind the projection center, so that the projection center lies between the image plane and the object. Let A be an object point on the object W; draw the line of sight SA from the viewpoint S to A, and extend SA to intersect the image plane at A 0 ; then A 0 is the perspective projection of the object point A, also called the image point. This is the imaging principle of the optical image collector, also called the perspective principle.
According to the perspective principle of the optical image collector, the simulated geometric light path diagram of Figure 4 is obtained. In Figure 4, let the three-dimensional coordinate system of the optical image collector be OXYZ, with the Y axis pointing vertically upward, the Z axis pointing to the left, and the X axis perpendicular to the OYZ coordinate plane and pointing inward. Let the viewpoint be S, with three-dimensional coordinates S(x c , y c , z c ). Let the optical axis ZH pass through the viewpoint S. Let the image plane lie in the OXY coordinate plane. Unlike the three-dimensional object W in Figure 3, the optical image collector images only a plane, called the object plane. Let the object plane lie at some position on the optical axis, perpendicular to the OYZ coordinate plane, with an angle α between it and the image plane. Rotating the OXYZ coordinate system about the X axis by the angle α gives the OXY′Z′ coordinate system, such that the OXY′ coordinate plane is parallel to the object plane, the distance between the object plane and the OXY′ coordinate plane being z′ a . Draw a perspective line SA through the viewpoint S, intersecting the object plane at the object point A(x a , y a , z a ); extend SA to intersect the image plane at the image point P(x p , y p , z p ), and let T(x, y, z) be any point on the perspective line.
The simulated geometric light path diagram of the optical image collector is obtained with reference to the above description. In this diagram, the coordinate value (x c , y c , z c ) of the viewpoint S, the angle α between the object plane and the image plane, and the distance z′ a between the object plane and the OXY′ coordinate plane are five basic parameters associated with an individual optical image collector. These five basic parameters are determined by the mutual positions, in the OXYZ coordinate system, of the three elements of perspective projection, namely the object plane, the image plane, and the viewpoint; therefore the coordinate value (x c , y c , z c ) of the viewpoint S, the angle α between the object plane and the image plane, and the distance z′ a between the object plane and the OXY′ coordinate plane are the device parameters of the optical image collector.
In the OXYZ three-dimensional coordinate system of Figure 4, the coordinate values of the image point and the viewpoint on a perspective line are P(x p , y p , z p ) and S(x c , y c , z c ) respectively, and T(x, y, z) is any point on the perspective line; by the principles of analytic geometry, the following perspective line equation holds:
(x − x c )/(x p − x c ) = (y − y c )/(y p − y c ) = (z − z c )/(z p − z c ).
Since the perspective line intersects the object plane at the object point A(x a , y a , z a ), and since the image plane lies in the OXY coordinate plane, in the above perspective line equation one may set:
x = x a ; y = y a ; z = z a ; z p = 0;
substituting these into the perspective line equation gives its derived form:
(x a − x c )/(x p − x c ) = (y a − y c )/(y p − y c ) = (z a − z c )/(0 − z c ).
Since the OXY′Z′ coordinate system is generated by rotating the OXYZ coordinate system about the X axis by α, the OXY′ coordinate plane of the OXY′Z′ system is parallel to the object plane, and the distance from the object plane to the OXY′ coordinate plane is z′ a , the Z′-axis coordinate value of every object point in the OXY′Z′ coordinate system is z′ a ; after the coordinate rotation transformation, the object point A(x a , y a , z a ) satisfies the condition:
z′ a = z a cos α − y a sin α.
For any optical image collector, the derived form of the perspective line equation together with the condition satisfied by the object points can serve as the coordinate mapping relationship between object points on the object plane and image points on the image plane; specifically, the coordinate mapping relationship comprises:
(x a − x c )/(x p − x c ) = (y a − y c )/(y p − y c ) = (z a − z c )/(0 − z c );
z′ a = z a cos α − y a sin α.
Mapping each object point of the object plane onto the image plane gives the coordinate value of its corresponding image point in the image plane; likewise, each image point of the image plane can be mapped onto the object plane, giving the coordinate value of its corresponding object point in the object plane. The coordinate mapping relationship between object points on the object plane and image points on the image plane can serve as the geometric-optical mathematical model of the optical image collector.
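Both directions of this mapping can be sketched numerically. This is a hedged illustration: the forward form follows from the derived perspective line equation, while the inverse form below is derived here by combining the perspective-line equations with z′ a = z a cos α − y a sin α, and is an assumption rather than a formula quoted from the patent.

```python
import math

def object_to_image(xa, ya, za, xc, yc, zc):
    """Map object point A(xa, ya, za) to image point P(xp, yp) (image plane: z = 0)."""
    xp = xc - zc * (xa - xc) / (za - zc)
    yp = yc - zc * (ya - yc) / (za - zc)
    return xp, yp

def image_to_object(xp, yp, xc, yc, zc, alpha, z_ap):
    """Map image point P(xp, yp) back to the object point on the plane z' = z_ap
    of the OXY'Z' system (rotation of OXYZ about the X axis by alpha, radians)."""
    # Parameter r of the perspective line, chosen so that z' = z_ap holds
    r = (zc * math.cos(alpha) - yc * math.sin(alpha) - z_ap) / \
        (zc * math.cos(alpha) + (yp - yc) * math.sin(alpha))
    xa = xc + r * (xp - xc)
    ya = yc + r * (yp - yc)
    za = zc * (1 - r)
    return xa, ya, za
```

Mapping an image point onto the object plane and back should recover the same image point, and the resulting object point must satisfy z a cos α − y a sin α = z′ a ; both properties hold by construction.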
请参见图5,其示出了本申请实施例提供的一种图像分辨率归一化处理方法的可选流程,可以包括以下步骤:
101:获取光学图像采集器的设备参数,获取光学图像采集器的像平面原始图像的尺寸信息,获取光学图像采集器的物平面输出图像的尺寸信息与位置信息。
在本实施例中,光学图像采集器的设备参数是由光学图像采集器物平面、像平面、视点三要素,在OXYZ坐标系中的相互位置关系决定的参数,其中设备参数可以包括光学图像采集器中视点的坐标值S(x c,y c,z c)、物平面和像平面之间的夹角α、物平面和OXY′Z′坐标系中OXY′坐标平面之间的距离z′ a,OXY′Z′坐标系是由光学图像采集器的OXYZ坐标系绕X轴旋转夹角α得到,且在OXY′Z′坐标系中OXY′坐标平面与物平面平行。
无论是像平面原始图像的尺寸信息还是物平面输出图像的尺寸信息,图像的尺寸信息表征了图像的大小,一般用宽和高来表示,宽度方向被称为列,高度方向被称为行。像素点位置则用第几列,第几行像素点来表示。例如:像素点P(i,j)表示第i列,第j行像素点。图像的位置信息在本申请中可以用图像中心点的坐标值来表示。例如:一幅宽为XW列,高为YW行的图像,其图像中心点位于第XW/2列,YW/2行像素点,这个中心点的坐标值表示了图像的位置信息。
本申请中的光学图像采集器涉及三种图像,一种是像平面上的原始图像,也叫像平面原始图像;第二种是像平面图像映射到物平面上构成的图像,也被称为物平面图像;第三种是光学图像采集器的输出图像,是从物平面图像中剪裁出来的一幅矩形图像(也可称为物平面输出图像)。
像平面原始图像的尺寸信息是由光学图像采集器所用图像采集芯片决定的,例如,可以用宽度为640列,高度为480行的图像采集芯片,那么像平面原始图像的尺寸信息就是640列,480行。如前文所述,像平面位于OXYZ坐标系的OXY坐标平面中。本申请中将像平面原始图像的宽度方向定义为X方向,高度方向定义为Y方向。并把第0列,第0行像素点定义为坐标原点O。完成上述定义后就可以得到像平面原始图像中任一像素点在OXYZ坐标系中的坐标值。
物平面图像是由像平面原始图像映射到物平面上构成的图像,由于透视投影关系,物平面图像不再一定是矩形,物平面图像中所有像素点在物平面上形成了一个梯形区域。
物平面输出图像是在物平面图像的梯形区域中剪裁出来的一幅具有一定宽度与高度的矩形图像。物平面输出图像的位置信息可以是物平面输出图像的中心点在物平面中的坐标值(简称图像中心点的坐标值),这是一个与光学图像采集器相关联的参数。理想情况下,物平面输出图像的图像中心点应尽量位于光学图像采集器的玻璃面的中心位置。由于物平面与OXY′坐标平面平行,事先确定好的物平面输出图像的图像中心点的坐标值为(x 0,y′ 0),x 0为X轴方向坐标值,y′ 0为Y′轴方向坐标值,以X轴方向作为物平面输出图像的宽度方向,Y′轴方向作为物平面输出图像的高度方向进行裁剪。对于像平面原始图像宽度为640列,高度为480行的光学图像采集器来说,物平面输出图像的宽度可以是336列,也可以是256列,相对应的物平面输出图像的高度可以是360行,也可以是其它值。在实际应用中可以根据实际情况自定义物平面输出图像的宽度、高度及图像中心点的坐标值,对于图像的宽度、高度及图像中心点的坐标值本实施例均不进行限定。
在本申请中,对于单独的一个光学图像采集器来说,设备参数、像平面原始图像、物平面输出图像的尺寸信息与位置信息均为产品出厂时已获得的参数,实际应用时可以直接使用这些参数进行后续的图像分辨率归一化处理。而本步骤101可以是从光学图像采集器已经获得的参数中读取到设备参数、像平面原始图像的尺寸信息、物平面输出图像的尺寸信息与位置信息。
102:根据设备参数、物平面输出图像的尺寸信息与位置信息,得到物平面输出图像中各个像素点的坐标值,其中物平面输出图像中各个像素点的坐标值是物平面输出图像中各个像素点在OXYZ坐标系中的坐标值。
在本实施例中,为了获取物平面输出图像中的像素点在像平面图像中的对应像素点的坐标值,首先需得到物平面输出图像中各个像素点在OXYZ坐标系中的坐标值。其中得到物平面输出图像中各个像素点的坐标值的一种可选方式如下:
如前文所述,在光学图像采集器中构造了两个坐标系,一个是OXYZ坐标系,另一个是将OXYZ绕X轴旋转α角得到的OXY′Z′坐标系,在OXY′Z′坐标系中OXY′坐标平面与物平面平行,且距离为z′ a
α,z′ a均为步骤101得到的设备参数中的两个关键参数,并且通过步骤101还得到了物平面输出图像的尺寸信息与位置信息。在尺寸信息中可以得到物平面输出图像的宽度为xw列像素,高度为yw行像素,从物平面输出图像的位置信息中可以得到物平面输出图像的图像中心点的坐标值(x 0,y′ 0),x 0为X轴方向坐标值,y′ 0为Y′轴方向坐标值。
物平面输出图像中第i列,第j行像素点的位置可以以坐标值表示,如可以表示为:
P"(i,j);i=0,1,2,……,xw;j=0,1,2,……,yw。
像素点在OXY′Z′坐标系的X轴方向、Y′轴方向的二维坐标值可以表示为:
P′(i-xw/2+x 0,j-yw/2+y′ 0);i=0,1,2,……,xw;j=0,1,2,……,yw。
令:g=x 0-xw/2;h=y′ 0-yw/2;物平面输出图像中像素点在OXY′Z′坐标系的X轴方向、Y′轴方向的二维坐标值进一步表示为:
P′(i+g,j+h);i=0,1,2,……,xw;j=0,1,2,……,yw。由于物平面与OXY′坐标平面平行,物平面输出图像中所有像素点在Z′轴方向的坐标值为z′ a。因此,根据坐标变换关系,物平面输出图像中各像素点在OXYZ坐标系中的坐标值可以表示为:
x_ak = i + g;
y_ak = (j + h)·cos α - z′_a·sin α;
z_ak = (j + h)·sin α + z′_a·cos α;
i=0,1,2,……,xw;j=0,1,2,……,yw;k=xw*j+i。
在本申请中虽然多次提到物平面图像,但是本申请没有真正得到物平面图像,因为没有必要求出物平面图像中每个像素点的坐标值,而是以物平面图像的图像中心点的坐标值(x 0,y′ 0)为中心,得到物平面输出图像中每个像素点在OXY′Z′坐标系下的X轴方向、Y′轴方向的二维坐标值,如上述P′(i-xw/2+x 0,j-yw/2+y′ 0)。
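步骤102中由像素点行列号得到OXYZ坐标值的计算,可以草拟为如下Python函数(示意性实现,函数名为本文假设):

```python
import math

def output_pixel_coord(i, j, g, h, za_prime, alpha):
    """物平面输出图像第i列、第j行像素点在OXYZ坐标系中的坐标值(对应步骤102)。
    g、h为坐标平移距离(g=x0-xw/2, h=y'0-yw/2),za_prime为物平面到OXY'坐标平面
    的距离z'a,alpha为物平面与像平面的夹角(弧度)。"""
    x_ak = i + g
    y_ak = (j + h) * math.cos(alpha) - za_prime * math.sin(alpha)
    z_ak = (j + h) * math.sin(alpha) + za_prime * math.cos(alpha)
    return x_ak, y_ak, z_ak
```

当alpha为0(物平面与像平面平行)时,坐标只作平移,Z坐标恒为za_prime。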
103:根据物平面输出图像中各个像素点的坐标值、物平面物点和像平面像点之间的坐标映射关系,得到物平面输出图像中各个像素点在像平面中的映射点的坐标值,以得到像素点映射关系,像素点映射关系为物平面输出图像中像素点与像平面中映射点之间的一一对应关系。
由步骤102得到物平面输出图像中各个像素点的坐标值后,利用物平面物点和像平面像点之间的坐标映射关系,将物平面输出图像中像素点的坐标值映射到像平面中,得到其在像平面的映射点的坐标值。
如果物点为A(x a,y a,z a),其在像平面上的映射点(也可以称为像点)为P(x p,y p)。物平面物点和像平面像点之间的坐标映射关系是:
(x_a - x_p)/(x_c - x_p) = (y_a - y_p)/(y_c - y_p) = z_a/z_c
由步骤102得到的物平面输出图像中的像素点的坐标值为P(x ak,y ak,z ak);i=0,1,2,……,xw;j=0,1,2,……,yw;k=xw*j+i,相当于物点的坐标值,所以,在上述坐标映射关系中,令:x a=x ak,y a=y ak,z a=z ak,y p=y pk,x p=x pk
(x_ak - x_pk)/(x_c - x_pk) = (y_ak - y_pk)/(y_c - y_pk) = z_ak/z_c;
引入上述坐标映射关系中有:
令b = z_ak/z_c,则x_pk = (x_ak - b·x_c)/(1 - b);y_pk = (y_ak - b·y_c)/(1 - b)。
P(x pk,y pk)即为物平面输出图像中的像素点P(x ak,y ak,z ak)在像平面中映射点的X轴方向、Y轴方向的二维坐标值,从而得到像素点映射关系。
104:根据像素点映射关系、像平面原始图像中各像素点的灰度值以及像平面原始图像的尺寸信息,获得物平面输出图像中各像素点的灰度值。
对于物平面输出图像中的每一个像素点P(x ak,y ak,z ak),步骤103计算出其在像平面中的映射点的X轴方向、Y轴方向的二维坐标值为(x pk,y pk),在步骤101获得的像平面原始图像的尺寸信息中XW为像平面原始图像的宽度,YW为像平面原始图像的高度,如果0≤x pk<XW;0≤y pk<YW,则(x pk,y pk)为像平面原始图像中第x pk列,第y pk行的像素点,像平面原始图像中第x pk列,第y pk行的像素点的灰度值被赋值给物平面输出图像中像素点P(x ak,y ak,z ak)的灰度。如果x pk<0,或x pk≥XW,或y pk<0,或y pk≥YW,则表示映射点的坐标值超出了像平面原始图像的坐标值范围,像平面上映射点的坐标值与像平面原始图像中像素点的坐标值不重叠,将物平面输出图像中像素点P(x ak,y ak,z ak)的灰度值赋值为[0,255]之间的有效值,表示从0至255选 取一个数值(包括0和255),例如可以赋值0或者0至255之间的一个数值,具体赋值多大,不做限定。
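步骤104的灰度赋值与越界判断逻辑,可以草拟为如下Python函数(示意,image按行优先存放灰度值,fill为越界时的示意取值):

```python
def sample_gray(image, XW, YW, x_pk, y_pk, fill=0):
    """按步骤104的规则取灰度值:映射点(x_pk, y_pk)落在像平面原始图像范围内时,
    返回第y_pk行、第x_pk列像素的灰度;越界时返回fill([0,255]内的一个示意取值)。
    image为按行优先存放的一维灰度列表,XW、YW为像平面原始图像的宽与高。"""
    if 0 <= x_pk < XW and 0 <= y_pk < YW:
        return image[y_pk * XW + x_pk]
    return fill
```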
上述图5所示图像分辨率归一化处理方法可通过如下程序实现:
程序的输入是:
x c,y c,z c,z′ a,α;
像平面原始图像的尺寸信息:XW、YW;//注:XW为宽度,YW为高度;
物平面输出图像的尺寸信息:xw、yw;//注:xw为宽度,yw为高度
物平面输出图像的图像中心点的坐标值:(x 0,y′ 0),x 0为X轴方向坐标值,y′ 0为Y′轴方向坐标值。
K1.计算:
X轴方向和Y′方向的坐标平移距离:hx=x 0-xw/2;hy=y′ 0-yw/2;
ax=α*3.1415926/180;
cosx=cos(ax);
sinx=sin(ax);
azsin=z′ asinx;
azcos=z′ acosx;
K2.令:y=0;//注:物平面输出图像中第y行
K3.计算:
iy=y+hy;
fy=iy*cosx-azsin;
fz=iy*sinx+azcos;
b=fz/z c
jj=(int)((fy-b*y_c)/(1-b)+0.5);
//注:jj表示像平面原始图像中第jj行
K4.令:x=0;//注:物平面输出图像中第x列
K5.计算:
ix=x+hx;
ii=(int)((ix-b*x_c)/(1-b)+0.5);
//注:ii表示像平面原始图像第ii列
ts=jj*XW+ii;
tsa=y*xw+x;
K6.如果:jj<0;或者jj>=YW;或者ii<0;或者ii>=XW,
则有:
IMG[tsa]=0;//映射点的坐标值超出了像平面原始图像的坐标值范围,则物平面输出图像中像素点的灰度值赋值为0
否则:IMG[tsa]=Image[ts];注:将像平面原始图像上对应像素点的灰度值赋给物平面输出图像中相应像素点。
K7.x=x+1;
K8.如果x<xw执行步骤K5;
K9.y=y+1;
K10.如果y<yw执行步骤K3;
K11.程序结束。
通过上述程序能够获取物平面输出图像中各个像素点在像平面中映射点的坐标值,以及完成物平面输出图像所有像素点灰度值的赋值,从而得到光学图像采集器的物平面输出图像。
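上述程序可以翻译成如下Python草图(示意性实现,行列号的四舍五入取整方式为重构时的假设):

```python
import math

def normalize_resolution(image, XW, YW, xw, yw, x0, y0_prime,
                         xc, yc, zc, za_prime, alpha_deg):
    """上述程序的Python草图:image为按行优先存放的像平面原始图像灰度列表
    (XW列、YW行),返回xw列、yw行的物平面输出图像灰度列表。"""
    hx = x0 - xw / 2
    hy = y0_prime - yw / 2
    ax = math.radians(alpha_deg)
    cosx, sinx = math.cos(ax), math.sin(ax)
    out = [0] * (xw * yw)
    for y in range(yw):
        iy = y + hy
        fy = iy * cosx - za_prime * sinx      # 该行物点的Y轴方向坐标
        fz = iy * sinx + za_prime * cosx      # 该行物点的Z轴方向坐标
        b = fz / zc
        jj = int((fy - b * yc) / (1 - b) + 0.5)      # 像平面原始图像行号
        for x in range(xw):
            ix = x + hx
            ii = int((ix - b * xc) / (1 - b) + 0.5)  # 像平面原始图像列号
            tsa = y * xw + x
            if 0 <= jj < YW and 0 <= ii < XW:
                out[tsa] = image[jj * XW + ii]
            else:
                out[tsa] = 0  # 映射点越界,灰度赋0
    return out
```

当alpha为0且b=0.5时退化为纯透视缩放,便于人工验算。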
上述图像分辨率归一化处理方法中,首先获取光学图像采集器的设备参数,获取光学图像采集器的物平面输出图像与像平面原始图像的尺寸信息,获取光学图像采集器的物平面输出图像的位置信息;根据物平面输出图像的位置信息、尺寸信息和设备参数,得到物平面输出图像中各个像素点的坐标值;根据物平面输出图像中各个像素点的坐标值、设备参数、物平面物点和像平面像点之间的坐标映射关系,得到物平面输出图像中各个像素点在像平面中的映射点的坐标值,以得到像素点映射关系,像素点映射关系为物平面输出图像中像素点与像平面中映射点之间的一一对应关系;根据像素点映射关系、像平面原始图像中各像素点的灰度值以及像平面原始图像的尺寸信息,获得物平面输出图像中各像素点的灰度值,以得到物平面输出图像。由于物平面输出图像中的像素点相当于物平面中的物点,物点之间的距离只存在计算误差或离散化带来的误差,不存在透视投影成像带来的误差,所以,物平面输出图像中像素点的位移偏差不会超过一个像素,实现了图像分辨率的归一化处理。
本实施例提供的图像分辨率归一化处理方法可以应用到指纹采集器中,目前两个具有不同图像分辨率的指纹采集器采集同一个手指上的指纹时,两个指纹采集器得到的指纹特征信息有很大的差别,从而导致指纹采集器的互换性与兼容性差,例如在指纹采集器1中录入的指纹图像,在指纹采集器1中识别有较好的识别效果,但是将其录入的指纹图像复制到指纹采集器2中,因与录入指纹图像的指纹采集器1存在图像分辨率的差异,在进行指纹识别时识别效果差,甚至无法识别。如果将本实施例提供的图像分辨率归一化处理方法应用到指纹采集器2和指纹采集器1中,两个指纹采集器都以物平面输出图像作为指纹采集器的输出图像,物平面输出图像没有分辨率误差或分辨率误差很小,因此应用本实施例提供的图像分辨率归一化处理方法可以使得不同光学图像采集器之间能够互换与兼容,提高互换性与兼容性。
并且光学图像采集器不同方向和不同区域的图像分辨率存在差异,即光学图像采集器存在上述梯形畸变,使得手指或手掌按压在光学图像采集器的不同位置或不同方向都会影响识别的效果,在进行图像识别操作时,手指或手掌按压的方向与位置只有跟录入时相近才会有较理想的识别效果。而本实施例从物平面图像中裁剪出一幅矩形图像(如上述物平面输出图像)用作图像识别之用,物平面输出图像的图像分辨率已归一化成了一个固定的值,无论方向和位置是否与录入时相接近也可以取得较好的识别效果。从物平面图像中裁剪出一幅矩形图像是指根据图像中心点的坐标和物平面输出图像的尺寸信息,得到物平面输出图像中像素点在OXY′Z′坐标系的坐标值,完成从物平面图像的裁剪,然后转换得到像素点在OXYZ坐标系的坐标值;再得到在像平面中映射点的坐标值,利用在像平面中映射点的坐标值完成对物平面输出图像的像素点的灰度赋值,从而得到矩形图像。
在本实施例中,物平面物点和像平面像点之间的坐标映射关系、坐标变换关系需要结合光学图像采集器的设备参数得到。相对应的,在实施图像分辨率归一化处理时需预先得到光学图像采集器的设备参数,且任一组设备参数都是与指定的物平面输出图像的图像分辨率相对应的。其中光学图像采集器的设备参数的获得过程如图6所示,可以包括以下步骤:
201:在光学图像采集器的物平面上分散设置多个物点,如A 0,A 1,A 2,……,A N,N为物点数量,应大于或等于3,各物点位置最好不要在同一条直线上,记录下各物点之间的距离A iA j;i,j=0,1,2,……,N;j>i,单位为毫米。
202:按指定的物平面输出图像的图像分辨率对各物点之间的距离进行离散化运算,其结果作为两个物点之间的第一距离a ia j;i,j=0,1,2,……,N;j>i,单位为像素。例如:如果要求将物平面输出图像的图像分辨率归一化处理成500dpi,也就是说一英寸(一英寸相当于25.4毫米)长度为500像素,则有:a ia j=(A iA j*500)/25.4。如果两个物点之间的距离是A iA j=10.16毫米,离散化后就是(500*10.16)/25.4=200像素。如果图像分辨率是510dpi,则10.16毫米长度离散化后就是(510*10.16)/25.4=204像素。这也说明了,物平面上相同距离的两个物点,在不同图像分辨率的图像中显示的长度会不同,这种差异对图像识别效果会有很大的影响。
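该离散化运算可以写成一行换算函数(Python示意):

```python
def mm_to_pixels(distance_mm, dpi):
    """按指定图像分辨率把毫米距离离散化为像素数(1英寸 = 25.4毫米)。"""
    return distance_mm * dpi / 25.4
```

例如10.16毫米在500dpi下为200像素,在510dpi下为204像素,与上文示例一致。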
203:用光学图像采集器采集物平面通过透视投影投射到像平面上的图像,得到像平面原始图像。
204:用图像采集软件获得所有物点在像平面原始图像中对应像点的坐标值:
P(x pi,y pi);i=1,2,3,……,N;像点的坐标值为像点在像平面中的X轴方向、 Y轴方向的二维坐标值。
205:根据光学图像采集器的初始设备参数集和设备参数中每个参数的预设搜索范围,得到多个待处理设备参数集。
初始设备参数集包括设备参数中每个参数的初始值,初始值可以由用户设置,对于初始值的具体取值本实施例不进行限定。每个参数的取值可利用预设搜索范围进行调整,如在上一次调整到的取值的基础上再利用预设搜查范围进行调整,设备参数中每个参数的预设搜索范围可相同也可以不同,例如x c的预设搜索范围是±20,y c和z c的预设搜索范围是±50。利用设备参数中每个参数的预设搜索范围,对初始设备参数集中设备参数的初始值进行多次调整,从而得到多个待处理设备参数集,每个待处理设备参数集包括设备参数且设备参数中每个参数的取值不同,多个待处理设备参数集可包括初始设备参数集。
206:根据每个待处理设备参数集、坐标映射关系和像点在像平面中的坐标值,得到每个待处理设备参数集对应的物平面物点坐标集,物平面物点坐标集包括在待处理设备参数集下多个物点在物平面中的坐标值,多个物点是由多个像点映射到物平面上得到。
在本实施例中,将多个像点映射到物平面中,在物平面上以物点形式展示,并获得每个物点在物平面中的坐标值。其中获得的每个物点在物平面中的坐标值可以结合上述坐标映射关系和每个待处理设备参数集中的设备参数的取值而获得。因每个待处理设备参数集中的设备参数的取值可能不同,在获得每个物点在物平面中的坐标值时,以待处理设备参数集为单位,对同一个物点来说,分别在每个待处理设备参数集下计算其在物平面中的坐标值。过程如下:
像平面像点和物平面物点之间的坐标映射关系是:
(x_a - x_p)/(x_c - x_p) = (y_a - y_p)/(y_c - y_p) = z_a/z_c;
z′_a = z_a·cos α - y_a·sin α。
以P(x p,y p)表示像平面中的一个像点的X轴方向、Y轴方向的二维坐标值,将其映射到物平面中以A(x a,y a,z a)表示,根据上述像平面像点和物平面物点之间的坐标映射关系可以得到:
由z′_a = z_a·cos α - y_a·sin α,如果cos α ≠ 0,则有:
z_a = (z′_a + y_a·sin α)/cos α;
将其代入(y_a - y_p)/(y_c - y_p) = z_a/z_c,整理得:
y_a·[z_c·cos α - (y_c - y_p)·sin α] = (y_c - y_p)·z′_a + y_p·z_c·cos α;
如果z_c·cos α - (y_c - y_p)·sin α ≠ 0,则可以得到:
y_a = [(y_c - y_p)·z′_a + y_p·z_c·cos α]/[z_c·cos α - (y_c - y_p)·sin α];
根据b = z_a/z_c = (z′_a + y_a·sin α)/(z_c·cos α),
可以得到b的取值,相对应的z_a = b·z_c,x_a = b·(x_c - x_p) + x_p。
在将像平面中的像点映射到物平面中时,x c,y c,z c,z′ a,α可以是待处理设备参数集中的设备参数的任一取值,在待处理设备参数集中,穷尽每一种设备参数组合,并分别执行一次上述操作,从而得到每个待处理设备参数集下物点的坐标值。
207:分别获得每个物平面物点坐标集的最大距离差,最大距离差是物平面物点坐标集的多个距离差中的最大值,距离差是物平面物点坐标集中任意两个物点之间的第二距离与其对应的第一距离之间的差值。
第二距离是由像点在物平面映射出的物点的坐标值计算得到,如第i个物点和第j个物点之间的第二距离是:
b_ib_j = √[(x_ai - x_aj)² + (y_ai - y_aj)² + (z_ai - z_aj)²];
i=1,2,3,……,N-1;j=i+1,……,N;
N为计算第二距离时的物点的总数,第i个物点的坐标值为A(x ai,y ai,z ai),第j个物点的坐标值为A(x aj,y aj,z aj),均由步骤206计算得到。将两个物点的第二距离和第一距离进行比对,得到第二距离和第一距离之间的距离差,距离差可以以D=|a ia j-b ib j|表示,i=1,2,3,……,N-1,j=i+1,……,N,a ia j表示第一距离。
在物平面物点坐标集中,任意两个物点都会得到距离差,因此一个物平面物点坐标集对应有多个距离差,从这多个距离差中选取一个取值最大的距离差作为物平面物点坐标集的最大距离差,如
D_max = max{|a_ia_j - b_ib_j|};i=1,2,3,……,N-1;j=i+1,……,N。
208:确定最大距离差小于预设距离阈值的目标物平面物点坐标集,选取目标物平面物点坐标集中最大距离差取值最小时对应的待处理设备参数集中的设备参数作为光学图像采集器的设备参数。
每个物平面物点坐标集的最大距离差分别与预设距离阈值进行比对,从多个物平面物点坐标集中选取最大距离差小于预设距离阈值的物平面物点坐标集作为目标物平面物点坐标集,如满足D max<D maxth的物平面物点坐标集作为目标物平面物点坐标集,D maxth为预设距离阈值,其取值不进行限定。
如果通过D max<D maxth得到多个目标物平面物点坐标集,则选取其中最大距离差D max取值最小的目标物平面物点坐标集,将其对应的待处理设备参数集中的设备参数作为光学图像采集器的设备参数,即满足D max<D maxth且D max最小的一组参数x c,y c,z c,z′ a,α为光学图像采集器的设备参数。
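步骤205至208中对每组候选设备参数求物平面物点坐标集与最大距离差D max的过程,可以草拟如下(Python示意,函数名为本文假设,角度以弧度传入):

```python
import math

def object_point(xp, yp, xc, yc, zc, za_prime, alpha):
    """像点(xp, yp)按前述坐标映射关系映射到物平面,返回物点(xa, ya, za)。
    假设cos(alpha)与分母zc*cos(alpha)-(yc-yp)*sin(alpha)均不为0;alpha为弧度。"""
    ca, sa = math.cos(alpha), math.sin(alpha)
    ya = ((yc - yp) * za_prime + yp * zc * ca) / (zc * ca - (yc - yp) * sa)
    b = (za_prime + ya * sa) / (zc * ca)
    return (b * (xc - xp) + xp, ya, b * zc)

def max_distance_error(params, image_pts, first_dists):
    """对一组候选设备参数params=(xc, yc, zc, za_prime, alpha),求第二距离与
    第一距离之差的最大值D_max(对应步骤206、207)。first_dists[(i, j)]为第一距离。"""
    pts = [object_point(xp, yp, *params) for xp, yp in image_pts]
    return max(abs(math.dist(pts[i], pts[j]) - d1)
               for (i, j), d1 in first_dists.items())
```

用真实设备参数正向投影得到的像点回代时,D_max应仅剩浮点误差,接近0。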
上述光学图像采集器的设备参数求解过程中,可以采用一种迭代逼近算法,运算过程如下:
S1.启动程序;
S2.输入各已知参数:
读取各物点之间的第一距离:a ia j,i,j=1,2,3,……,N,j>i;
读取各像点的X轴方向、Y轴方向的二维坐标值:P(x pi,y pi);i=1,2,3,……,N;
对D maxth赋初值,一般取D maxth=1;//注:位移偏差不超过1个像素
迭代次数CN赋初值;
令count=0;
对D pmax赋一个很大的数,取D pmax=10000;
令:D p=D pmax
对x cpk,y cpk,z cpk,z′ apkpk赋初值(初始设备参数集中设备参数的初始值);
定义搜寻范围数组序列:
CNT[201]={0,1,-1,2,-2,3,-3,4,-4,5,-5……,100,-100};
一般情况下α的预设搜寻范围取±2度,x c的预设搜寻范围取±20,y c,z c,z′ a的预设搜寻范围取±50,其目的是得到待处理设备参数集
S3.令:x cp=x cpk;y cp=y cpk;z cp=z cpk;z′ ap=z′ apk;α p=α pk
S4.令i1=0;
S5.x c=x cp+CNT[i1];
S6.令i2=0;
S7.y c=y cp+CNT[i2];
S8.令i3=0;
S9.z c=z cp+CNT[i3];
S10.令i4=0;
S11.z′ a=z′ ap+CNT[i4];
S12.令i5=0;
S13.α=α p+CNT[i5];//步骤S4至S13是调整设备参数的取值的过程
S14.令i=0;
S15.计算:
y_ai = [(y_c - y_pi)·z′_a + y_pi·z_c·cos α]/[z_c·cos α - (y_c - y_pi)·sin α];
b = (z′_a + y_ai·sin α)/(z_c·cos α);
z_ai = b·z_c;
x_ai = b·(x_c - x_pi) + x_pi;//获得物点的坐标值
S16.i=i+1;
S17.如果i<N,执行步骤15;
S18.计算:
b_ib_j = √[(x_ai - x_aj)² + (y_ai - y_aj)² + (z_ai - z_aj)²];i=1,……,N-1;j=i+1,……,N;
D max = max{|a_ia_j - b_ib_j|};
S19.如果D max<D pmax
令:D pmax=D max;x cpk=x c;y cpk=y c;z cpk=z c;z′ apk=z′ a;α pk=α;
S20.i5=i5+1;
S21.如i5<5,执行步骤S13;
S22.i4=i4+1;
S23.如i4<100,执行步骤S11;
S24.i3=i3+1;
S25.如i3<100,执行步骤S9;
S26.i2=i2+1;
S27.如i2<100,执行步骤S7;
S28.i1=i1+1;
S29.如i1<40,执行步骤S5;//步骤S20至步骤S29限定不同设备参数调整至的最大值,以降低运算量;
S30.如果D p=D pmax,执行步骤S34;
S31.count=count+1;
D p=D pmax
S32.如果count<CN,执行步骤S3;
S33.运算失败,执行S36;
S34.x c=x cpk;y c=y cpk;z c=z cpk;z′ a=z′ apk;α=α pk
S35.如果D pmax≥D maxth执行步骤S33,否则运算成功;
S36.程序结束。
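上述迭代逼近算法的核心(在各参数的搜寻范围内穷举偏移组合并取D max最小者)可以用如下Python草图示意(仅单轮穷举,未实现多轮迭代与失败判定,函数名为本文假设):

```python
import itertools
import math

def map_to_object(xp, yp, xc, yc, zc, za_prime, alpha):
    """像点映射到物平面的物点(公式见前文);alpha为弧度。"""
    ca, sa = math.cos(alpha), math.sin(alpha)
    ya = ((yc - yp) * za_prime + yp * zc * ca) / (zc * ca - (yc - yp) * sa)
    b = (za_prime + ya * sa) / (zc * ca)
    return (b * (xc - xp) + xp, ya, b * zc)

def search_device_params(init, offsets, image_pts, first_dists):
    """在初值init=(xc, yc, zc, za_prime, alpha)附近按offsets给出的各参数偏移
    候选穷举组合,返回使D_max最小的一组参数及其D_max。"""
    best, best_d = None, float("inf")
    for off in itertools.product(*offsets):
        cand = tuple(p + o for p, o in zip(init, off))
        pts = [map_to_object(xp, yp, *cand) for xp, yp in image_pts]
        d = max(abs(math.dist(pts[i], pts[j]) - d1)
                for (i, j), d1 in first_dists.items())
        if d < best_d:
            best, best_d = cand, d
    return best, best_d
```

用合成数据(以已知参数正向投影物点得到像点)可以验证:穷举范围覆盖真实参数时,搜索应恢复真实参数且D_max接近0。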
在本实施例中,物平面输出图像是从物平面图像中剪裁出来的,为此需要 首先获得物平面输出图像在物平面中的位置信息,其中位置信息有多种表述形式,例如:可以用物平面输出图像的图像中心点的坐标值来描述物平面输出图像的位置信息。由于前文已经获得了光学图像采集器的设备参数,利用这些设备参数以及物平面物点与像平面像点之间的坐标映射关系,将像平面原始图像中的每一个像素点映射到物平面上,获得这些映射点在OXY′Z′坐标系中的坐标值,并在这些物平面的映射点的坐标值中求得X轴方向坐标值的最大值X max与最小值X min,Y′轴方向坐标值的最大值Y′ max与最小值Y′ min,并把X轴方向坐标值的最大值与最小值的均值(X max+X min)/2作为物平面输出图像的图像中心点的X轴方向坐标值x 0,把Y′轴方向坐标值的最大值与最小值的均值(Y′ max+Y′ min)/2作为物平面输出图像的图像中心点的Y′轴方向坐标值y′ 0。最后,获得了物平面输出图像的图像中心点的坐标值(x 0,y′ 0),其处理过程如下:
首先,给X min,X max,Y′ min,Y′ max赋初始值,令:
X min=100000,X max=-100000,Y′ min=100000,Y′ max=-100000;
像平面像点和物平面物点之间的坐标映射关系是:
(x_a - x_p)/(x_c - x_p) = (y_a - y_p)/(y_c - y_p) = z_a/z_c;
z′_a = z_a·cos α - y_a·sin α。
上式中x c,y c,z c,z′ a,α为前述步骤获得的设备参数,P(x p,y p)表示像平面中的一个像点的X轴方向、Y轴方向的二维坐标值,将其在物平面中映射点的坐标值以A(x a,y a,z a)表示。
在宽度为XW列,高度为YW行的像平面原始图像中,对每一个像素点P(x pk,y pk);i=0,1,2,……,XW;j=0,1,2,……,YW;k=XW*j+i,完成下述运算:
令:x p=x pk;y p=y pk
根据上述像平面像点和物平面物点之间的坐标映射关系可以得到:
由z′_a = z_a·cos α - y_a·sin α,如果cos α ≠ 0,则有:
z_a = (z′_a + y_a·sin α)/cos α;
将其代入(y_a - y_p)/(y_c - y_p) = z_a/z_c,整理得:
y_a·[z_c·cos α - (y_c - y_p)·sin α] = (y_c - y_p)·z′_a + y_p·z_c·cos α;
如果z_c·cos α - (y_c - y_p)·sin α ≠ 0,则可以得到:
y_a = [(y_c - y_p)·z′_a + y_p·z_c·cos α]/[z_c·cos α - (y_c - y_p)·sin α];
根据b = z_a/z_c = (z′_a + y_a·sin α)/(z_c·cos α),
可以得到b的取值,相对应的z_a = b·z_c,x_a = b·(x_c - x_p) + x_p。
对映射点A(x a,y a,z a)进行坐标旋转变换:y′_a = y_a·cos(-α) - z_a·sin(-α) = y_a·cos α + z_a·sin α;
X min=min(X min,x a);
X max=max(X max,x a);
Y′ min=min(Y′ min,y′ a);
Y′ max=max(Y′ max,y′ a)。
对每一个像素点P(x pk,y pk);i=0,1,2,……,XW;j=0,1,2,……,YW;k=XW*j+i,完成上述运算后,就可以得到下面的物平面输出图像的图像中心点的坐标值:x 0=(X max+X min)/2,y′ 0=(Y′ max+Y′ min)/2。
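上述求图像中心点坐标值(x 0,y′ 0)的过程可以草拟为如下Python函数(示意,角度以弧度传入,逐像素映射未做任何加速):

```python
import math

def image_center(XW, YW, xc, yc, zc, za_prime, alpha):
    """把像平面原始图像的每个像素点映射到物平面,在OXY'Z'坐标系中求X方向与
    Y'方向坐标的最大、最小值,以其均值作为物平面输出图像的图像中心点(x0, y'0)。"""
    ca, sa = math.cos(alpha), math.sin(alpha)
    x_min = y_min = float("inf")
    x_max = y_max = float("-inf")
    for yp in range(YW):
        ya = ((yc - yp) * za_prime + yp * zc * ca) / (zc * ca - (yc - yp) * sa)
        b = (za_prime + ya * sa) / (zc * ca)
        za = b * zc
        y_pr = ya * ca + za * sa  # 旋转回OXY'Z'坐标系的Y'坐标,同一行取值相同
        y_min, y_max = min(y_min, y_pr), max(y_max, y_pr)
        for xp in range(XW):
            xa = b * (xc - xp) + xp
            x_min, x_max = min(x_min, xa), max(x_max, xa)
    return (x_max + x_min) / 2, (y_max + y_min) / 2
```

当alpha为0且视点位于光轴上时,映射退化为等比缩放,中心点即缩放后图像的几何中心。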
下面以示例进行说明,首先确定光学图像采集器的物平面输出图像的图像分辨率为500dpi,用2毫米厚的玻璃制作一个测试片,如图7所示,测试片正中心刻有一个10.16毫米×10.16毫米正方形框,正方形框所有线条的线宽均为0.05mm。如果按500dpi(每英寸500个点,在物平面中,一个点相当于一个像素)的图像分辨率计算,线宽正好为一个像素,正方形每条边的边长为200像素,对角线之间的距离为282.8427像素。设正方形四个顶点分别为a 0,a 1,a 2,a 3,则四个顶点之间的距离分别为:
a 0a 1=200像素;a 0a 2=282.8427像素;a 0a 3=200像素;
a 1a 2=200像素;a 1a 3=282.8427像素;a 2a 3=200像素;
为了让光刻在测试片上的线条图形能印到光学图像采集器的玻璃面(物平面)上,需要在测试片上稍沾点水,然后将其紧贴到光学图像采集器玻璃面上,以便光学图像采集器采集到测试片上的线条图像,测试片上正方形的四个顶点相当于在物平面上设置了4个物点。如图2所示为采集到的一幅测试片的像平面原始图像,图中像平面原始图像的宽度为640列像素,高度为480行像素。不难看出,测试片中心部位的正方形,在像平面原始图像中变成了一个梯形。尽管如此,但仍能辨别出正方形的四个顶点,四条边和两条对角线。坐标定义如前文所述,在像平面上,X轴为宽度方向,Y轴为高度方向,Z轴指向物平面方向,并与OXY平面垂直,第0列,第0行为坐标原点O,用图像处理软件确定出正方形四个顶点在像平面中的坐标值为:
A 0(126,86),A 1(490,88),A 2(525,347),A 3(91,345)。
选定设备参数的一组初始值:
x c:320;y c:955;z c:467;z′ a:100;α:11。
按上述迭代逼近算法求得:
x c=294;y c=949;z c=469;z′ a=108;α=12。
对顶点A 0(126,86)进行运算:令x_p=126;y_p=86,并代入如下公式可得:
y_a = [(y_c - y_p)·z′_a + y_p·z_c·cos α]/[z_c·cos α - (y_c - y_p)·sin α];
z_a = (z′_a + y_a·sin α)/cos α;
则有:
y_a = 474.92113753460;
又令:
b = z_a/z_c = (z′_a + y_a·sin α)/(z_c·cos α);
所以有:
z_a = b·z_c = 211.36038644696;
x_a = b·(x_c - x_p) + x_p = 201.71118320488。
求得了与顶点A 0(126,86)相对应的物平面物点的三维坐标值:
x a=201.71118320488;y a=474.92113753460,z a=211.36038644696
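上述数值可以用几行Python按前文公式复核(α取12度,参数取上文求得的设备参数):

```python
import math

# 上文求得的设备参数(角度为12度)
xc, yc, zc, za_prime = 294, 949, 469, 108
alpha = math.radians(12)
ca, sa = math.cos(alpha), math.sin(alpha)

# 顶点A0的像点坐标
xp, yp = 126, 86

ya = ((yc - yp) * za_prime + yp * zc * ca) / (zc * ca - (yc - yp) * sa)
b = (za_prime + ya * sa) / (zc * ca)
za = b * zc
xa = b * (xc - xp) + xp
```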
将其它三个顶点的像点的坐标值分别代入,重复上述运算,因此,得到四个顶点在物平面上的映射点的三维坐标值如下:
x[0]201.71118320488
x[1]401.51023785943
x[2]400.25471973082
x[3]200.50810105784;
y[0]474.92113753460
y[1]476.72288368895
y[2]672.09376070150
y[3]670.82705930510;
z[0]211.36038644696
z[1]211.74335940780
z[2]253.27072054652
z[3]253.00147485777。
由这些坐标值计算出顶点之间的距离分别为:
[0]199.80754539892
[1]199.73952446516
[2]199.75081651136
[3]200.28618982706
[4]282.93648638357
[5]282.45446018944。
由此可以看出,实际距离与计算出来的物点距离之差不超过1个像素,达到了图像分辨率归一化处理的目的:使像素点的位移偏差不超过1个像素。如果把像平面上的所有像素点映射到物平面上,物平面上的这幅物平面图像中每一个区域和不同的方向都是相同或接近的图像分辨率,达到了图像分辨率归一化处理的目的。
在实际应用中,将从物平面图像中剪裁出一幅矩形图像作为光学图像采集器的输出图像(简称为物平面输出图像)。例如图2中对应的像平面原始图像的宽度是640列像素,高度是480行像素,本例中物平面输出图像的宽度为336列像素,高度为360行像素,用前述方法获得采集器设备参数后,又求得物平面输出图像的图像中心点的坐标值为:
X轴方向:x 0=308;
Y′轴方向:y′ 0=605。
通过上述操作,得到光学图像采集器的设备参数、物平面输出图像的位置信息、物平面输出图像的尺寸信息、像平面原始图像的尺寸信息后,利用这些信息来获得物平面输出图像。为了得到物平面输出图像可以将像平面原始图像中所有像素点都映射到物平面输出图像中,但是像平面原始图像中的像素点是离散点,映射到物平面输出图像后也是不连续的,从而在物平面输出图像中留下空隙,因此实际处理时,是获取物平面输出图像的每一个像素点在像平面中的映射点,然后把该映射点的像素点灰度值赋值给物平面上相应的像素点,从而得到物平面输出图像。
设所求物平面输出图像的宽度为xw=336列像素,高度为yw=360行像素,所有像素点在物平面输出图像中的位置可以表示为:
P"(i,j);i=1,2,3,……,xw;j=1,2,3,……,yw
物平面与OXY′Z′坐标系中的OXY′坐标平面平行。在OXY′Z′坐标系中,如前文所述,对所有像素点z′ a=108,且物平面输出图像的图像中心点的坐标为:
X轴方向:x 0=308;
Y′轴方向:y′ 0=605;
则每一个像点在OXY′Z′坐标系中,坐标平移距离为:
g=x 0-xw/2=308-336/2=140;h=y′ 0-yw/2=605-360/2=425。
物平面输出图像中任一像素点在OXY′Z′坐标系中的三维坐标值可以表示为:P′(i+140,j+425,108);i=1,2,3,……,xw;j=1,2,3,……,yw;
选取任一像素点,令:i=70;j=79;
现在,以物平面输出图像中这个像素点为例求出其在像平面上的坐标值。按前文所述公式,有:
x a=i+g=70+140=210;
y a=(h+j)cos α-z′ asin α=(425+79)*cos(12°)-108*sin(12°)=470.532;
z_a=(h+j)sin α+z′_acos α=(425+79)*sin(12°)+108*cos(12°)=210.427;
b = z_a/z_c;
则:
x_p = (x_a - b·x_c)/(1 - b);
y_p = (y_a - b·y_c)/(1 - b)。
求得了物平面输出图像中像素点P"(70,79)在像平面原始图像中对应像素点的坐标值为(81,141),进一步的,将像平面原始图像中该像素点的灰度值赋值给物平面输出图像中P"(70,79)处的像素点。
对物平面输出图像上的所有像素点:P"(i,j);i=0,1,2,3,……,xw;j=0,1,2,3,……,yw,重复上述运算,可以得到物平面输出图像中所有像素点的灰度值,从而得到了物平面输出图像。如图8所示为测试片的像平面原始图像与物平面输出图像的对照图。右侧为像平面原始图像,左侧为物平面输出图像,很明显物平面输出图像是一幅图像分辨率归一化后图像,分辨率误差被大幅降低。
与上述方法实施例相对应,本申请实施例还提供一种图像分辨率归一化处理装置,其结构如图9所示,可以包括:获取单元10、确定单元20、坐标转换单元30和灰度值赋值单元40。
获取单元10,用于获取光学图像采集器的设备参数,获取光学图像采集器的像平面原始图像的尺寸信息,获取光学图像采集器的物平面输出图像的尺寸信息与位置信息。
设备参数包括光学图像采集器中视点的坐标值(x c,y c,z c)、光学图像采集器中物平面和光学图像采集器中像平面之间的夹角α、物平面和OXY′Z′坐标系中OXY′坐标平面之间的距离z′ a,OXY′Z′坐标系是由光学图像采集器的OXYZ坐标系绕X轴旋转夹角得到,视点的坐标值包括视点在OXYZ坐标系的三个轴上的坐标值,物平面输出图像的位置信息可以以物平面输出图像的图像中心点的坐标值表示,对于设备参数和位置信息的预先获得过程,本实施例不再阐述。
确定单元20,用于根据设备参数、物平面输出图像的尺寸信息与位置信息,得到物平面输出图像中各个像素点的坐标值。
对物平面输出图像中第i列,第j行像素点P"(i,j);i=0,1,2,……,xw;j=0,1,2,……,yw,得到第i列,第j行像素点P"(i,j)在OXY′Z′坐标系下的坐标值为P′(i+g,j+h);g=x 0-xw/2;h=y′ 0-yw/2;xw为物平面输出图像的宽度,yw为物平面输出图像的高度,(x 0,y′ 0)为物平面输出图像的图像中心点的坐标值,图像中心点的坐标值为位置信息;利用OXY′Z′坐标系和OXYZ坐标系的坐标变换关系,得到第i列,第j行像素点P”(i,j)在OXYZ坐标系下的坐标值为(x ak,y ak,z ak),x ak=i+g;y ak=(j+h)cos α-z′ asin α;z ak=(j+h)sin α+z′ acos α;k=xw*j+i。
坐标转换单元30,用于根据物平面输出图像中各个像素点的坐标值、物平面物点和像平面像点之间的坐标映射关系,得到物平面输出图像中各个像素点在像平面中的映射点的坐标值,以得到像素点映射关系,像素点映射关系为物平面输出图像中像素点与像平面中映射点之间的一一对应关系。
物平面物点和像平面像点之间的坐标映射关系包括:
(x_a - x_p)/(x_c - x_p) = (y_a - y_p)/(y_c - y_p) = z_a/z_c;
z′_a = z_a·cos α - y_a·sin α;
其中(x a,y a,z a)为物平面中一个物点的坐标值,(x p,y p)为像平面中一个像点的坐标值,具体是物点在像平面中对应像点的X轴方向、Y轴方向的二维坐标值。
坐标转换单元30得到映射点的坐标值的方式是:根据物平面物点和像平面像点之间的坐标映射关系中的
(x_a - x_p)/(x_c - x_p) = (y_a - y_p)/(y_c - y_p) = z_a/z_c,
得到物平面输出图像中各个像素点到像平面映射的关系式,关系式包括:
x_pk = (x_ak - b·x_c)/(1 - b);y_pk = (y_ak - b·y_c)/(1 - b);其中b = z_ak/z_c;
其中(x ak,y ak,z ak)为物平面输出图像中像素点在OXYZ坐标系下的坐标值,(x pk,y pk)为像素点映射到像平面上时映射点的X轴方向、Y轴方向的二维坐标值;将物平面输出图像中每个像素点的坐标值输入到关系式中,得到每个映射点的X轴方向、Y轴方向的二维坐标值。
灰度值赋值单元40,用于根据像素点映射关系、像平面原始图像中各像素点的灰度值以及像平面原始图像的尺寸信息,获得物平面输出图像中各像素点的灰度值,以得到物平面输出图像。
例如如果像平面上映射点的坐标值与像平面原始图像中像素点的坐标值重叠,则将像平面原始图像中像素点的灰度值赋值给物平面输出图像中对应的像素点;如果像平面上映射点的坐标值与像平面原始图像中像素点的坐标值不重叠,将物平面输出图像中对应的像素点的灰度值赋值为有效值,有效值为[0,255]中的一个数值;
其中像平面上映射点的坐标值与像平面原始图像中像素点的坐标值不重叠通过如下至少一种方式确定:方式一、映射点的X轴方向坐标值小于0;方式二、映射点的X轴方向坐标值大于或等于像平面原始图像的宽度;方式三、映射点的Y轴方向坐标值小于0;方式四、映射点的Y轴方向坐标值大于或等于像平面原始图像的高度。
本申请实施例还提供一种光学图像采集器,包括:处理器和用于存储处理器可执行指令的存储器;其中,处理器被配置为执行指令,以实现上述图像分辨率归一化处理方法。对于光学图像采集器来说,光学图像采集器还可以包括采集部件,用于采集像平面原始图像,对于光学图像采集器的固有部件,本实施例不再一一阐述。
本申请实施例提供一种计算机可读存储介质,当计算机可读存储介质中的指令由光学图像采集器的处理器执行时,使得光学图像采集器能够执行上述图像分辨率归一化处理方法。
需要说明的是,本说明书中的各个实施例可以采用递进的方式描述、本说明书中各实施例中记载的特征可以相互替换或者组合,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似的部分互相参见即可。对于装置类实施例而言,其相关之处参见方法实施例的部分说明即可。以上所述仅是本申请的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (11)

  1. 一种图像分辨率归一化处理方法,其特征在于,所述方法包括:
    获取光学图像采集器的设备参数,获取所述光学图像采集器的像平面原始图像的尺寸信息,获取所述光学图像采集器的物平面输出图像的尺寸信息与位置信息;
    根据所述设备参数、所述物平面输出图像的尺寸信息与位置信息,得到物平面输出图像中各个像素点的坐标值;
    根据所述物平面输出图像中各个像素点的坐标值、物平面物点和像平面像点之间的坐标映射关系,得到物平面输出图像中各个像素点在像平面中的映射点的坐标值,以得到像素点映射关系,所述像素点映射关系为物平面输出图像中像素点与像平面中映射点之间的一一对应关系;
    根据像素点映射关系、像平面原始图像中各像素点的灰度值以及像平面原始图像的尺寸信息,获得物平面输出图像中各像素点的灰度值,以得到物平面输出图像。
  2. 根据权利要求1所述的方法,其特征在于,所述根据像素点映射关系、像平面原始图像中各像素点的灰度值以及像平面原始图像的尺寸信息,获得物平面输出图像中各像素点的灰度值包括:
    如果像平面上映射点的坐标值与像平面原始图像中像素点的坐标值重叠,则将所述像平面原始图像中像素点的灰度值赋值给物平面输出图像中对应的像素点;
    如果像平面上映射点的坐标值与像平面原始图像中像素点的坐标值不重叠,将物平面输出图像中对应的像素点的灰度值赋值为有效值,所述有效值为[0,255]中的一个数值;
    其中所述像平面上映射点的坐标值与像平面原始图像中像素点的坐标值不重叠通过如下至少一种方式确定:
    方式一、映射点的X轴方向坐标值小于0;方式二、映射点的X轴方向坐标值大于或等于像平面原始图像的宽度;方式三、映射点的Y轴方向坐标值小于0;方式四、映射点的Y轴方向坐标值大于或等于像平面原始图像的高度。
  3. 根据权利要求1所述的方法,其特征在于,所述设备参数包括所述光学图像采集器中视点的坐标值(x c,y c,z c)、所述光学图像采集器中物平面和所述光学图像采集器中像平面之间的夹角α、所述物平面和OXY′Z′坐标系中OXY′坐标平面之间的距离z′ a,所述OXY′Z′坐标系是由所述光学图像采集器的OXYZ坐标系绕X轴旋转所述夹角得到,所述视点的坐标值包括所述视点在所述OXYZ坐标系的三个轴上的坐标值,OXYZ坐标系中的Y轴垂直向上、Z轴指 向左、X轴与OYZ坐标平面垂直,且指向里,OXY坐标平面与像平面重叠。
  4. 根据权利要求1或3所述的方法,其特征在于,所述物平面物点和像平面像点之间的坐标映射关系包括:
    (x_a - x_p)/(x_c - x_p) = (y_a - y_p)/(y_c - y_p) = z_a/z_c;
    z′_a = z_a·cosα - y_a·sinα;
    其中(x a,y a,z a)为物平面中一个物点的坐标值,(x p,y p)为所述物点在像平面中对应像点的X轴方向、Y轴方向的二维坐标值。
  5. 根据权利要求1所述的方法,其特征在于,所述根据所述物平面输出图像中各个像素点的坐标值、物平面物点和像平面像点之间的坐标映射关系,得到物平面输出图像中各个像素点在像平面中的映射点的坐标值包括:
    根据物平面物点和像平面像点之间的坐标映射关系中的
    (x_a - x_p)/(x_c - x_p) = (y_a - y_p)/(y_c - y_p) = z_a/z_c,
    得到所述物平面输出图像中各个像素点到像平面映射的关系式,所述关系式包括
    x_pk = (x_ak - b·x_c)/(1 - b);y_pk = (y_ak - b·y_c)/(1 - b);其中b = z_ak/z_c;
    其中(x ak,y ak,z ak)为物平面输出图像中像素点在光学图像采集器的OXYZ坐标系下的坐标值,(x pk,y pk)为像素点映射到像平面上时映射点的X轴方向、Y轴方向的二维坐标值,OXYZ坐标系中的Y轴垂直向上、Z轴指向左、X轴与OYZ坐标平面垂直,且指向里,OXY坐标平面与像平面重叠;
    将所述物平面输出图像中每个像素点的坐标值输入到所述关系式中,得到每个映射点的坐标值。
  6. 根据权利要求1所述的方法,其特征在于,所述根据所述设备参数、所述物平面输出图像的尺寸信息与位置信息,得到物平面输出图像中各个像素点的坐标值包括:
    对物平面输出图像中第i列,第j行像素点P"(i,j);i=0,1,2,……,xw;j=0,1,2,……,yw,得到第i列,第j行像素点P"(i,j)在OXY′Z′坐标系下的X轴方向、Y′轴方向的二维坐标值为P′(i+g,j+h);g=x 0-xw/2;h=y′ 0-yw/2;xw为所述物平面输出图像的宽度,yw为所述物平面输出图像的高度,(x 0,y′ 0)为物平面输出图像的图像中心点的坐标值,所述图像中心点的坐标值为所述位置信息;
    利用OXY′Z′坐标系和OXYZ坐标系的坐标变换关系,得到第i列,第j行像素点P”(i,j)在OXYZ坐标系下的坐标值为(x ak,y ak,z ak),x ak=i+g;y ak=(j+h)cosα-z′ asinα;z ak=(j+h)sinα+z′ acosα;k=xw*j+i;
    所述OXY′Z′坐标系是由所述光学图像采集器的OXYZ坐标系绕X轴旋转 夹角α得到,所述夹角α为所述光学图像采集器中物平面和所述光学图像采集器中像平面之间的夹角,所述OXYZ坐标系中的Y轴垂直向上、Z轴指向左、X轴与OYZ坐标平面垂直,且指向里,OXY坐标平面与像平面重叠。
  7. 根据权利要求1所述的方法,其特征在于,所述光学图像采集器的设备参数的预先获得过程包括:
    在光学图像采集器的物平面上分散设置多个物点,并按照指定的物平面输出图像的图像分辨率对各物点之间的距离进行离散化运算,得到任意两个物点之间的第一距离;
    获得所有物点在像平面原始图像中对应像点的坐标值,所述像点的坐标值为像点在像平面中的X轴方向、Y轴方向的二维坐标值;
    根据所述光学图像采集器的初始设备参数集和所述设备参数中每个参数的预设搜索范围,得到多个待处理设备参数集;
    根据每个待处理设备参数集、所述坐标映射关系和所述像点在所述像平面中的坐标值,得到每个待处理设备参数集对应的物平面物点坐标集,所述物平面物点坐标集包括在所述待处理设备参数集下多个物点在所述物平面中的坐标值,所述多个物点是由多个像点映射到所述物平面上得到;
    获取所述物平面物点坐标集中任意两个物点之间的第二距离;
    分别获得每个物平面物点坐标集的最大距离差,所述最大距离差是所述物平面物点坐标集的多个距离差中的最大值,所述距离差是所述物平面物点坐标集中任意两个物点之间的第二距离与其对应的第一距离之间的差值;
    确定最大距离差小于预设距离阈值的目标物平面物点坐标集,选取所述目标物平面物点坐标集中最大距离差取值最小时对应的待处理设备参数集中的设备参数作为所述光学图像采集器的设备参数。
  8. 根据权利要求1所述的方法,其特征在于,所述物平面输出图像的位置信息的预先获得过程包括:
    根据所述设备参数、物平面物点与像平面像点之间的坐标映射关系,将像平面原始图像中的每一个像素点映射到物平面上,得到每个像素点在物平面中的映射点的坐标值;
    获得每个映射点在OXY′Z′坐标系中的坐标值,所述OXY′Z′坐标系是由所述光学图像采集器的OXYZ坐标系绕X轴旋转夹角α得到,所述夹角α为所述光学图像采集器中物平面和所述光学图像采集器中像平面之间的夹角,所述OXYZ坐标系中的Y轴垂直向上、Z轴指向左、X轴与OYZ坐标平面垂直,且指向里,OXY坐标平面与像平面重叠;
    将所有映射点在OXY′Z′坐标系中的X轴方向坐标值和X轴方向坐标的初 始最大值X max进行比对,得到X轴方向坐标值的最大值;
    将所有映射点在OXY′Z′坐标系中的X轴方向坐标值和X轴方向坐标的初始最小值X min进行比对,得到X轴方向坐标值的最小值;
    将所有映射点在OXY′Z′坐标系中的Y′轴方向坐标值和Y′轴方向坐标的初始最大值Y′ max进行比对,得到Y′轴方向坐标值的最大值;
    将所有映射点在OXY′Z′坐标系中的Y′轴方向坐标值和Y′轴方向坐标的初始最小值Y′ min进行比对,得到Y′轴方向坐标值的最小值;
    将X轴方向坐标值的最大值和X轴方向坐标值的最小值的均值,作为所述物平面输出图像的图像中心点的X轴方向坐标值,将Y′轴方向坐标值的最大值和Y′轴方向坐标值的最小值的均值,作为所述物平面输出图像的图像中心点的Y′轴方向坐标值,所述物平面输出图像的图像中心点的坐标值为所述物平面输出图像的位置信息。
  9. 一种图像分辨率归一化处理装置,其特征在于,所述装置包括:
    获取单元,用于获取光学图像采集器的设备参数,获取所述光学图像采集器的像平面原始图像的尺寸信息,获取所述光学图像采集器的物平面输出图像的尺寸信息与位置信息;
    确定单元,用于根据所述设备参数、所述物平面输出图像的尺寸信息与位置信息,得到物平面输出图像中各个像素点的坐标值;
    坐标转换单元,用于根据所述物平面输出图像中各个像素点的坐标值、物平面物点和像平面像点之间的坐标映射关系,得到物平面输出图像中各个像素点在像平面中的映射点的坐标值,以得到像素点映射关系,所述像素点映射关系为物平面输出图像中像素点与像平面中映射点之间的一一对应关系;
    灰度值赋值单元,用于根据像素点映射关系、像平面原始图像中各像素点的灰度值以及像平面原始图像的尺寸信息,获得物平面输出图像中各像素点的灰度值,以得到物平面输出图像。
  10. 一种光学图像采集器,其特征在于,包括:处理器和用于存储所述处理器可执行指令的存储器;其中,所述处理器被配置为执行所述指令,以实现如权利要求1至8中任一项所述的图像分辨率归一化处理方法。
  11. 一种计算机可读存储介质,其特征在于,当所述计算机可读存储介质中的指令由光学图像采集器的处理器执行时,使得光学图像采集器能够执行如权利要求1至8中任一项所述的图像分辨率归一化处理方法。
PCT/CN2021/130252 2021-07-13 2021-11-12 一种图像分辨率归一化处理方法及装置 WO2023284202A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110790061.2 2021-07-13
CN202110790061.2A CN113239918B (zh) 2021-07-13 2021-07-13 一种图像分辨率归一化处理方法及装置

Publications (1)

Publication Number Publication Date
WO2023284202A1 true WO2023284202A1 (zh) 2023-01-19

Family

ID=77135472

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/130252 WO2023284202A1 (zh) 2021-07-13 2021-11-12 一种图像分辨率归一化处理方法及装置

Country Status (2)

Country Link
CN (1) CN113239918B (zh)
WO (1) WO2023284202A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116934833A (zh) * 2023-07-18 2023-10-24 广州大学 基于双目视觉水下结构病害检测方法、设备及介质

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN113239918B (zh) * 2021-07-13 2021-10-01 北京金博星指纹识别科技有限公司 一种图像分辨率归一化处理方法及装置

Citations (5)

Publication number Priority date Publication date Assignee Title
CN109087253A (zh) * 2017-06-13 2018-12-25 杭州海康威视数字技术股份有限公司 一种图像校正方法及装置
CN110057295A (zh) * 2019-04-08 2019-07-26 河海大学 一种免像控的单目视觉平面距离测量方法
US20190278548A1 (en) * 2018-03-06 2019-09-12 Beijing Boe Optoelectronics Technology Co., Ltd. Image processing method and apparatus, virtual reality apparatus, and computer-program product
CN112200734A (zh) * 2020-09-15 2021-01-08 江苏大学 一种用于交通事故现场重构的逆透视变换计算方法
CN113239918A (zh) * 2021-07-13 2021-08-10 北京金博星指纹识别科技有限公司 一种图像分辨率归一化处理方法及装置

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US7352913B2 (en) * 2001-06-12 2008-04-01 Silicon Optix Inc. System and method for correcting multiple axis displacement distortion
CN105095896B (zh) * 2015-07-29 2019-01-08 江苏邦融微电子有限公司 一种基于查找表的图像畸变校正方法
CN109726673B (zh) * 2018-12-28 2021-06-25 北京金博星指纹识别科技有限公司 实时指纹识别方法、系统及计算机可读存储介质
CN111800589B (zh) * 2019-04-08 2022-04-19 清华大学 图像处理方法、装置和系统,以及机器人



Also Published As

Publication number Publication date
CN113239918B (zh) 2021-10-01
CN113239918A (zh) 2021-08-10


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE