CN115272052A - Image processing method and device and computer readable storage medium - Google Patents

Image processing method and device and computer readable storage medium

Info

Publication number
CN115272052A
CN115272052A
Authority
CN
China
Prior art keywords
image
coordinates
correspondence
point
panoramic image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110481942.6A
Other languages
Chinese (zh)
Inventor
张恒之
伊红
贾海晶
张宇鹏
王炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to CN202110481942.6A
Publication of CN115272052A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G06T 3/18 Image warping, e.g. rearranging pixels individually
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 3/60 Rotation of whole images or parts thereof
    • G06T 3/604 Rotation of whole images or parts thereof using coordinate rotation digital computer [CORDIC] devices
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image processing method, an image processing device, and a computer-readable storage medium. The image processing method according to an embodiment of the invention comprises the following steps: acquiring a first planar image and a second planar image having different viewpoints, together with a homography matrix for transforming between the different viewpoints; projecting the first planar image and the second planar image into a first panoramic image and a second panoramic image, respectively; obtaining a first correspondence and a second correspondence based on the projection, wherein the first correspondence represents the correspondence between coordinates of a point on the first panoramic image and coordinates of the corresponding point on the first planar image, and the second correspondence represents the correspondence between coordinates of a point on the second planar image and coordinates of the corresponding point on the second panoramic image; and determining, based on the homography matrix, the first correspondence, and the second correspondence, a third correspondence representing the correspondence between coordinates of a point on the first panoramic image and coordinates of the corresponding point on the second panoramic image.

Description

Image processing method and device and computer readable storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method and apparatus, and a computer-readable storage medium.
Background
In deep learning methods for feature point detection and matching on planar images, the training data used to train a planar image feature point detection model is usually a pair of planar images with different viewpoints together with the point correspondence between the two planar images, where the correspondence can usually be represented by a homography matrix.
Similarly, in order to achieve accurate feature point detection and matching for panoramic images, a panoramic image feature point detection model also needs to be trained, and it is desirable to obtain, as training data, a pair of panoramic images with different viewpoints together with the point correspondence between the two panoramic images. However, the panoramic image datasets commonly used at present, such as Matterport3D and SUN3D, all consist of panoramic images synthesized from multiple RGB-D (depth) planar images, and the viewpoint of the camera does not actually change when the planar images are synthesized into a panoramic image. Furthermore, although the correspondence between corresponding points on planar images of two different viewpoints can generally be expressed by a homography matrix, as described above, for panoramic images the projection characteristics of the spherical image mean that the homography transformation relationship between spherical images at different viewpoints cannot be obtained by a geometric operation. Therefore, there is currently no common panoramic image dataset with different viewpoints.
Accordingly, there is a need for an image processing method and apparatus capable of generating a pair of panoramic images with different viewpoints and obtaining the point correspondence between the two panoramic images, so as to provide a data basis for training a panoramic image feature point detection model.
Disclosure of Invention
To solve the above technical problem, according to an aspect of the present invention, there is provided an image processing method including: acquiring a first plane image and a second plane image with different viewpoints and a homography matrix for converting between the different viewpoints; projecting the first planar image and the second planar image into a first panoramic image and a second panoramic image respectively; obtaining a first correspondence and a second correspondence based on the projection, wherein the first correspondence represents a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and the second correspondence represents a correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image; and determining a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image based on the homography matrix, the first correspondence, and the second correspondence.
According to still another aspect of the present invention, there is provided an image processing apparatus comprising: an acquisition unit configured to acquire a first planar image and a second planar image having different viewpoints and a homography matrix for conversion between the different viewpoints; a projection unit configured to project the first planar image and the second planar image as a first panoramic image and a second panoramic image, respectively; and a relationship determination unit configured to obtain a first correspondence relationship and a second correspondence relationship based on the projection, wherein the first correspondence relationship represents a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and the second correspondence relationship represents a correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image; wherein the relationship determination unit is further configured to determine a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image, based on the homography matrix, the first correspondence, and the second correspondence.
According to still another aspect of the present invention, there is provided an image processing apparatus comprising: a processor; and a memory having computer program instructions stored therein, wherein the computer program instructions, when executed by the processor, cause the processor to perform the steps of: acquiring a first plane image and a second plane image with different viewpoints and a homography matrix for converting between the different viewpoints; projecting the first planar image and the second planar image into a first panoramic image and a second panoramic image respectively; obtaining a first correspondence and a second correspondence based on the projection, wherein the first correspondence represents a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and the second correspondence represents a correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image; and determining a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image based on the homography matrix, the first correspondence, and the second correspondence.
According to yet another aspect of the invention, there is provided a computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the steps of: acquiring a first plane image and a second plane image with different viewpoints and a homography matrix for converting between the different viewpoints; projecting the first planar image and the second planar image into a first panoramic image and a second panoramic image respectively; obtaining a first correspondence and a second correspondence based on the projection, wherein the first correspondence represents a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and the second correspondence represents a correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image; and determining a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image based on the homography matrix, the first correspondence, and the second correspondence.
According to the image processing method, image processing device, and computer-readable storage medium of embodiments of the present invention, a pair of panoramic images with different viewpoints can be generated by simulation from planar images with different viewpoints, and the point correspondence between the two panoramic images can be obtained. This provides a data basis for training a panoramic image feature point detection model and thereby mitigates the inaccurate feature point detection and matching caused by panoramic image distortion.
Drawings
The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description of the embodiments of the present invention when taken in conjunction with the accompanying drawings.
FIG. 1 shows a flow diagram of an image processing method according to one embodiment of the invention;
FIG. 2 illustrates an example of a first planar image and a second planar image having different viewpoints according to one embodiment of the present invention;
FIG. 3 illustrates an example of warping a square sub-image based on a homography matrix to obtain warped sub-images, according to an embodiment of the present invention;
FIG. 4 illustrates an example of projecting a first planar image and a second planar image into a first panoramic image and a second panoramic image, respectively, according to one embodiment of the present invention;
fig. 5 shows an example of determining a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image, according to an embodiment of the present invention;
FIG. 6 shows an example of first, second and third coordinate grid tables according to one embodiment of the invention;
FIG. 7 shows a block diagram of an image processing apparatus according to an embodiment of the invention;
fig. 8 shows a block diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
An image processing method, apparatus, and computer-readable storage medium according to embodiments of the present invention will be described below with reference to the accompanying drawings. In the drawings, like reference numerals refer to like elements throughout. It should be understood that: the embodiments described herein are merely illustrative and should not be construed as limiting the scope of the invention.
An image processing method according to an embodiment of the present invention will be described below with reference to fig. 1. Fig. 1 shows a flow chart of the image processing method 100.
As shown in fig. 1, in step S101, a first planar image and a second planar image having different viewpoints and a homography matrix for converting between the different viewpoints are acquired.
Fig. 2 illustrates an example of a first planar image and a second planar image having different viewpoints according to one embodiment of the present invention. As shown in fig. 2, the first planar image is composed of six square sub-images s1 to s6 of the same size, so that a cube projection drawing can be generated from the six square sub-images in a subsequent step. Each of the square sub-images s1 to s6 may be a planar image captured by a perspective camera (a camera that shoots with perspective mapping, such as a single-lens reflex camera or a mirrorless camera), or a planar image from a public deep learning dataset downloaded from an open-source website.
It should be noted here that although the first planar image is shown as being composed of six square sub-images in the example of fig. 2, the present invention is not limited thereto. In particular, the number of square sub-images contained in the first planar image may also be less than 6, in which case the resulting cube projection drawing does not have an image on every face. For example, if the first planar image consists of only one square sub-image, it can be projected onto one of the faces of the cube, and the resulting cube projection drawing has an image on only one face. However, in order to maximize the efficiency of acquiring training data, the first planar image preferably consists of six square sub-images, and this case is described below by way of example.
Specifically, in this step, acquiring the first and second planar images having different viewpoints and the homography matrix for converting between the different viewpoints may include: acquiring six square sub-images of the same size and combining them as the first planar image; calculating, for each square sub-image, a corresponding homography matrix based on the original coordinates and the target coordinates of its four vertices; and warping each square sub-image based on its calculated homography matrix, so that the combination of the warped sub-images serves as the second planar image.
With continued reference to fig. 2, fig. 2 further shows an example of a second planar image having a different viewpoint from the first planar image. The second planar image is composed of six sub-images s1′ to s6′, which are warped sub-images obtained by warping the square sub-images s1 to s6 based on the homography matrices H1 to H6, respectively; for example, sub-image s1′ is obtained by warping sub-image s1 based on homography matrix H1, sub-image s2′ is obtained by warping sub-image s2 based on homography matrix H2, and so on.
Fig. 3 shows an example of warping a square sub-image based on a homography matrix to obtain a warped sub-image according to an embodiment of the present invention. As shown on the left side of fig. 3, the original coordinates of the four vertices of a square sub-image in the first planar image are (x1, y1), (x2, y2), (x3, y3), and (x4, y4), respectively. As shown on the right side of fig. 3, in order to warp the square sub-image, the target coordinates of the four vertices of the sub-image may be set to (x1′, y1′), (x2′, y2′), (x3′, y3′), and (x4′, y4′), respectively. In this step, those skilled in the art may set the target coordinates of the four vertices arbitrarily, as long as the entire warped sub-image still lies within the area of the original square sub-image (i.e., as shown in fig. 3, the target coordinates of the four vertices all fall within a square box of the same size as the original square sub-image).
Once the target coordinates of the four vertices are set, a homography matrix can be calculated based on the original coordinates and the target coordinates of the four vertices. Specifically, since the homography matrix has 8 degrees of freedom, it can be solved by substituting the original coordinates and the target coordinates of the four vertices, which together provide 8 constraints, into the equations for solving the homography matrix. In another example, the original coordinates and the target coordinates of the four vertices may also be input into the cv2.getPerspectiveTransform function of OpenCV to obtain the corresponding solved homography matrix. After the homography matrix is obtained, the original square sub-image shown on the left side of fig. 3 may be warped based on the homography matrix to obtain the warped sub-image shown on the right side of fig. 3, where multiplying the coordinates of each point on the original square sub-image by the homography matrix yields the coordinates of the corresponding point on the warped sub-image.
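As an illustration of this step, the following Python sketch computes a homography from four vertex correspondences and warps a square sub-image with OpenCV; the specific vertex coordinates, image size, and file names are placeholder assumptions for the example only.

```python
import cv2
import numpy as np

# Original vertices of a square sub-image of side N (placeholder size).
N = 512
src = np.float32([[0, 0], [N - 1, 0], [N - 1, N - 1], [0, N - 1]])

# Target vertices chosen arbitrarily, but kept inside the original square
# so that the warped sub-image stays within the original area.
dst = np.float32([[40, 25], [N - 30, 10], [N - 15, N - 45], [20, N - 20]])

# Homography with 8 degrees of freedom, solved from the 4 point pairs.
H = cv2.getPerspectiveTransform(src, dst)

# Warp the sub-image: each point of the source is mapped through H.
sub_image = cv2.imread("s1.png")               # hypothetical input file
warped = cv2.warpPerspective(sub_image, H, (N, N))
cv2.imwrite("s1_warped.png", warped)
```

Repeating this for s1 to s6, with the same or different target vertices, yields the six warped sub-images s1′ to s6′ that make up the second planar image.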
In this way, referring back to fig. 2, the target coordinates of the four vertices of each of the six square sub-images s1 to s6 may be set separately, so that the homography matrix corresponding to each square sub-image is calculated based on the original coordinates and the target coordinates of its four vertices. After the homography matrix corresponding to each square sub-image has been calculated, each square sub-image may be warped based on its homography matrix, and the combination of the warped sub-images is taken as the second planar image. In this step, since each homography matrix can be decomposed into a rotation (R) matrix and a translation (T) matrix, the homography matrix can simulate the rotation and translation that a camera undergoes in moving from one viewpoint to another, thereby simulating a change of viewpoint; a first planar image and a second planar image having different viewpoints are thus obtained. It should be noted that the six square sub-images s1 to s6 may be warped based on the same homography matrix or based on different homography matrices, and the invention is not limited in this respect.
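The decomposition into rotation and translation mentioned above can be examined numerically with OpenCV's decomposeHomographyMat; the sketch below is only illustrative, and the pinhole intrinsic matrix K is an assumed placeholder, since the synthetically generated homography is not tied to a physical camera, so the recovered R and T are indicative rather than exact camera motion.

```python
import cv2
import numpy as np

N = 512
# Homography computed as in the previous sketch (placeholder vertex choices).
H = cv2.getPerspectiveTransform(
    np.float32([[0, 0], [N - 1, 0], [N - 1, N - 1], [0, N - 1]]),
    np.float32([[40, 25], [N - 30, 10], [N - 15, N - 45], [20, N - 20]]))

# Assumed pinhole intrinsics with the principal point at the image centre.
K = np.array([[N, 0, N / 2],
              [0, N, N / 2],
              [0, 0, 1]], dtype=np.float64)

# Decompose H into candidate (rotation, translation, plane normal) triples.
num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
print(f"{num} candidate decompositions")
for R, T in zip(rotations, translations):
    print("R =\n", R, "\nT =", T.ravel())
```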
In step S102, the first planar image and the second planar image are projected as a first panoramic image and a second panoramic image, respectively.
In this step, the projecting the first planar image and the second planar image as a first panoramic image and a second panoramic image, respectively, may include: respectively projecting six square sub-images contained in the first planar image and six sub-images contained in the second planar image onto different surfaces of six surfaces of a cube according to the same projection surface correspondence so as to obtain a first cube projection drawing corresponding to the first planar image and a second cube projection drawing corresponding to the second planar image; performing spherical projection on the first cubic projection drawing and the second cubic projection drawing respectively to obtain a first spherical image corresponding to the first cubic projection drawing and a second spherical image corresponding to the second cubic projection drawing; and performing equidistant cylindrical projection on the first spherical image and the second spherical image respectively to obtain the first panoramic image corresponding to the first spherical image and the second panoramic image corresponding to the second spherical image.
Fig. 4 illustrates an example of projecting a first planar image and a second planar image as a first panoramic image and a second panoramic image, respectively, according to an embodiment of the present invention. As shown in fig. 4, first, six square sub-images included in the first planar image and six sub-images included in the second planar image are respectively projected onto different surfaces of six surfaces of a cube according to the same projection surface correspondence, so as to obtain a first cube projection drawing and a second cube projection drawing. The same projection surface correspondence means that each sub-image included in the first planar image and the corresponding sub-image included in the second planar image are projected onto the same surface of the cube. For example, as shown in fig. 4, the sub-image s2 included in the first planar image and the sub-image s2 'included in the second planar image are projected onto the upper surface of the cube, and the sub-image s5 included in the first planar image and the sub-image s5' included in the second planar image are projected onto the lower surface of the cube, and so on. Subsequently, after the first and second cube projection views are obtained, spherical projection can be performed on the first and second cube projection views, respectively, to obtain a first spherical image corresponding to the first cube projection view and a second spherical image corresponding to the second cube projection view (not shown in fig. 4). Subsequently, after the first spherical image and the second spherical image are obtained, they may be subjected to equidistant cylindrical projection, respectively, to obtain a first panoramic image corresponding to the first spherical image and a second panoramic image corresponding to the second spherical image. In another example, the first planar image and the second planar image may also be projected as a first panoramic image and a second panoramic image, respectively, through a remapping function of OpenCV, which is not limited herein.
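As a concrete illustration of this projection chain (cube projection drawing, then spherical image, then equidistant cylindrical projection), the following Python sketch renders the equirectangular panorama by looking up, for every panorama pixel, the cube face and face pixel it comes from. The face names, axis conventions, and output size are assumptions of this example; the same per-pixel maps could equally be fed to the remapping function of OpenCV mentioned above.

```python
import numpy as np

def cube_to_equirect(faces, out_h=512, out_w=1024):
    """Render an equirectangular panorama by sampling six square cube faces.

    `faces` maps the keys 'front', 'right', 'back', 'left', 'up', 'down' to
    (N, N, 3) arrays.  The key names and the axis conventions below are
    assumptions made for this sketch, not notation taken from the patent.
    """
    first = next(iter(faces.values()))
    n = first.shape[0]
    # Longitude/latitude of every output pixel (equidistant cylindrical grid).
    lon = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(out_h) + 0.5) / out_h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Unit viewing direction on the sphere for every panorama pixel.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    ax, ay, az = np.abs(x), np.abs(y), np.abs(z)

    # For each face: mask of pixels that hit it, and the numerators/denominator
    # that project the viewing direction onto that face of the unit cube.
    faces_spec = [
        ('front', (az >= ax) & (az >= ay) & (z > 0),   x, -y, az),
        ('back',  (az >= ax) & (az >= ay) & (z <= 0), -x, -y, az),
        ('right', (ax > az) & (ax >= ay) & (x > 0),   -z, -y, ax),
        ('left',  (ax > az) & (ax >= ay) & (x <= 0),   z, -y, ax),
        ('up',    (ay > ax) & (ay > az) & (y > 0),     x,  z, ay),
        ('down',  (ay > ax) & (ay > az) & (y <= 0),    x, -z, ay),
    ]
    out = np.zeros((out_h, out_w, 3), dtype=first.dtype)
    for name, mask, num_a, num_b, den in faces_spec:
        a = num_a[mask] / den[mask]                    # face coords in [-1, 1]
        b = num_b[mask] / den[mask]
        col = np.clip(np.rint((a + 1) / 2 * (n - 1)).astype(int), 0, n - 1)
        row = np.clip(np.rint((b + 1) / 2 * (n - 1)).astype(int), 0, n - 1)
        out[mask] = faces[name][row, col]              # nearest-neighbour sample
    return out
```

Nearest-neighbour sampling is used here for brevity; in practice bilinear interpolation would normally be preferred.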
In step S103, a first correspondence relationship and a second correspondence relationship are obtained based on the projection, wherein the first correspondence relationship represents a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and the second correspondence relationship represents a correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image.
Specifically, based on the projection process in step S102, the coordinates, on the first spherical image, of the point corresponding to each point on the first planar image may first be determined from the size of the first planar image and the predefined cube-to-sphere projective transformation; subsequently, the coordinates, on the first panoramic image, of the point corresponding to each point on the first spherical image may be determined from the predefined size of the first panoramic image and the equidistant cylindrical projective transformation. The coordinates of the corresponding point on the first panoramic image can thus be calculated from the coordinates of each point on the first planar image, the predefined size of the first planar image, and the predefined size of the first panoramic image, which yields the first correspondence representing the correspondence between coordinates of a point on the first panoramic image and coordinates of the corresponding point on the first planar image. Similarly, the coordinates of the corresponding point on the second panoramic image can be calculated from the coordinates of each point on the second planar image, the predefined size of the second planar image, and the predefined size of the second panoramic image, which yields the second correspondence representing the correspondence between coordinates of a point on the second planar image and coordinates of the corresponding point on the second panoramic image.
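The forward direction of this calculation (from a pixel on a cube-face sub-image to its corresponding pixel on the panoramic image) can be sketched as follows. The face orientation table and the helper name face_pixel_to_equirect are assumptions of this example, chosen to be consistent with the sampling sketch above.

```python
import numpy as np

# Face orientation vectors: direction = origin + a * a_axis + b * b_axis, where
# (a, b) are face coordinates in [-1, 1].  These conventions match the sampling
# sketch above and are an assumption of this example.
FACE_AXES = {
    #         origin (face centre)  a axis (columns)  b axis (rows)
    'front': ((0, 0, 1),  (1, 0, 0),  (0, -1, 0)),
    'back':  ((0, 0, -1), (-1, 0, 0), (0, -1, 0)),
    'right': ((1, 0, 0),  (0, 0, -1), (0, -1, 0)),
    'left':  ((-1, 0, 0), (0, 0, 1),  (0, -1, 0)),
    'up':    ((0, 1, 0),  (1, 0, 0),  (0, 0, 1)),
    'down':  ((0, -1, 0), (1, 0, 0),  (0, 0, -1)),
}

def face_pixel_to_equirect(face, row, col, n, out_h=512, out_w=1024):
    """Forward mapping: pixel (row, col) on an n x n cube face -> sub-pixel
    coordinates (u, v) on the equirectangular panorama."""
    a = col / (n - 1) * 2 - 1
    b = row / (n - 1) * 2 - 1
    o, a_axis, b_axis = (np.array(v, dtype=float) for v in FACE_AXES[face])
    d = o + a * a_axis + b * b_axis          # point on the cube surface
    d /= np.linalg.norm(d)                   # project onto the unit sphere
    lon = np.arctan2(d[0], d[2])             # longitude in [-pi, pi]
    lat = np.arcsin(d[1])                    # latitude in [-pi/2, pi/2]
    u = (lon + np.pi) / (2 * np.pi) * out_w - 0.5
    v = (np.pi / 2 - lat) / np.pi * out_h - 0.5
    return u, v

# Example: the centre pixel of the front face lands at the panorama centre.
print(face_pixel_to_equirect('front', 255.5, 255.5, 512))
```

Applying such a mapping to every pixel of the first and second planar images gives the data from which the first and second correspondences described above can be assembled.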
In step S104, a third correspondence relation representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image is determined based on the homography matrix, the first correspondence relation, and the second correspondence relation.
As described above, the homography matrix may represent correspondence between coordinates of a point on the first planar image and coordinates of a corresponding point on the second planar image, the first correspondence may represent correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and the second correspondence may represent correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image. Thus, in this step, for each point on the first panoramic image, the coordinates of its corresponding point on the second panoramic image may be derived step by step based on the correspondence described above, i.e. a third correspondence representing the correspondence between the coordinates of the point on the first panoramic image and the coordinates of the corresponding point on the second panoramic image may be obtained.
Specifically, determining, based on the homography matrix, the first correspondence, and the second correspondence, the third correspondence representing the correspondence between coordinates of a point on the first panoramic image and coordinates of the corresponding point on the second panoramic image includes: for each specific point on the first panoramic image, determining, based on the first correspondence, the coordinates of a first corresponding point on the first planar image corresponding to the specific point; multiplying the coordinates of the first corresponding point by the homography matrix of the square sub-image to which the first corresponding point belongs, so as to obtain the coordinates of a second corresponding point on the second planar image corresponding to the specific point; and determining, based on the coordinates of the second corresponding point and the second correspondence, the coordinates of a third corresponding point on the second panoramic image corresponding to the specific point.
Fig. 5 illustrates an example of determining a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image, in accordance with an embodiment of the present invention. As shown in fig. 5, for a particular point 501 on the first panoramic image, the coordinates of a first corresponding point 502 on the first planar image may be determined based on the first correspondence; subsequently, the coordinates of the first corresponding point 502 may be multiplied by the homography matrix corresponding to the sub-image S2, thereby obtaining the coordinates of the second corresponding point 503 on the second planar image; finally, the coordinates of the third corresponding point 504 on the second panoramic image may be finally determined based on the coordinates of the second corresponding point 503 and the second correspondence. The above-described operation may be repeated for each specific point on the first panoramic image, thereby obtaining the above-described third correspondence.
In one example, the first correspondence and the second correspondence may be implemented using a coordinate grid table having corresponding point coordinates stored at corresponding positions. Specifically, the first correspondence relationship may be a first coordinate grid table having the same size as the first panoramic image, in which the coordinates of the corresponding point on the first planar image are stored at a position in the first coordinate grid table corresponding to the coordinates of each point on the first panoramic image; and the second correspondence may be a second coordinate grid table having the same size as the second planar image, in which coordinates of corresponding points on the second panoramic image are stored at positions in the second coordinate grid table corresponding to coordinates of respective points on the second planar image. In this case, the determining, based on the homography matrix, the first correspondence, and the second correspondence, a third correspondence that represents a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image may include: for each specific point on the first panoramic image, querying the first coordinate grid table to determine the coordinates of a first corresponding point on the first planar image corresponding to the specific point; multiplying the coordinates of the first corresponding point by a homography matrix of the square sub-image corresponding to the first corresponding point to obtain the coordinates of a second corresponding point corresponding to the specific point on the second plane image; and querying the second coordinate grid table based on the coordinates of the second corresponding point to determine the coordinates of a third corresponding point on the second panoramic image corresponding to the specific point.
In this example, based on the above-described specific process of determining the third correspondence, a coordinate grid table representing the third correspondence may also be obtained. Specifically, the third correspondence may be a third coordinate grid table having the same size as the first panoramic image, in which the coordinates of the corresponding point on the second panoramic image are stored at a position in the third coordinate grid table corresponding to the coordinates of each point on the first panoramic image.
Fig. 6 shows an example of first, second and third coordinate grid tables according to an embodiment of the invention. As shown in fig. 6, the first coordinate grid table may have the same size as the first panoramic image, and the coordinates (i, j) of the corresponding point 502 on the first planar image are stored in the first coordinate grid table at a position 601 having the same coordinates as the specific point 501 on the first panoramic image. Similarly, as further shown in fig. 6, the second coordinate grid table may be of the same size as the second planar image, and the coordinates (x, y) of the corresponding point 504 on the second panoramic image are stored in the second coordinate grid table at a location 602 having the same coordinates as the particular point 503 on the second planar image. Accordingly, to determine the third correspondence, for a particular point 501 on the first panoramic image, the first coordinate grid table may be queried based on the coordinates of the particular point 501 to obtain the coordinates (i, j) of the first corresponding point 502 stored at the position 601 of the same coordinates; subsequently, the coordinates (i, j) of the first corresponding point 502 may be multiplied by the homography matrix corresponding to the sub-image S2, thereby obtaining the coordinates of the second corresponding point 503 on the second planar image; finally, the second coordinate grid table may be queried based on the coordinates of the second corresponding point 503 to obtain the coordinates (x, y) of the third corresponding point 504 stored at the location 602 of the same coordinates.
The above-described query process may be repeated for each point on the first panoramic image to obtain the coordinates of its corresponding point on the second panoramic image, and finally a third coordinate grid table may be obtained as shown in fig. 6, where the third coordinate grid table has the same size as the first panoramic image and which stores the coordinates (x, y) of a third corresponding point 504 on the second panoramic image at a location 603 having the same coordinates as the specific point 501 on the first panoramic image.
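As a concrete sketch of this composition, the function below builds the third coordinate grid table from the first grid table, the per-sub-image homographies, and the second grid table; grid1, grid2, face_of, and the (x, y) ordering of the stored coordinates are hypothetical choices standing in for the data structures described above.

```python
import numpy as np

def build_third_grid(grid1, face_of, homographies, grid2):
    """Compose the grid tables into the third coordinate grid table.

    grid1        : (H_pano, W_pano, 2) array; for each point on the first
                   panoramic image, the coordinates of the first corresponding
                   point on the first planar image (stored here in (x, y)
                   order, an assumption of this sketch).
    face_of      : (H_pano, W_pano) array of sub-image indices 0..5 telling
                   which square sub-image each first corresponding point lies
                   in (an assumed auxiliary map kept alongside grid1).
    homographies : list of the six 3x3 homography matrices H1..H6.
    grid2        : (H_plane, W_plane, 2) array; for each point on the second
                   planar image, the coordinates of the corresponding point on
                   the second panoramic image.
    """
    h_pano, w_pano = grid1.shape[:2]
    h_plane, w_plane = grid2.shape[:2]
    grid3 = np.zeros_like(grid1)
    for v in range(h_pano):
        for u in range(w_pano):
            x1, y1 = grid1[v, u]                      # first corresponding point
            H = homographies[int(face_of[v, u])]
            p = H @ np.array([x1, y1, 1.0])           # homogeneous multiplication
            x2, y2 = p[0] / p[2], p[1] / p[2]         # second corresponding point
            # Round and clamp before looking up the second grid table.
            col = int(np.clip(round(x2), 0, w_plane - 1))
            row = int(np.clip(round(y2), 0, h_plane - 1))
            grid3[v, u] = grid2[row, col]             # third corresponding point
    return grid3
```

In practice the double loop would normally be vectorized, but the scalar form mirrors the per-point query process described above.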
After the first panoramic image, the second panoramic image, and the third corresponding relationship are obtained, the feature point detection model of the panoramic image may be trained by using the first panoramic image, the second panoramic image, and the third corresponding relationship, so as to obtain a trained feature point detection model of the panoramic image. Therefore, more accurate panoramic image feature point detection and matching processing can be realized by using the trained panoramic image feature point detection model.
In summary, according to the image processing method of the present invention, a pair of panoramic images with different viewpoints can be generated by simulation from planar images with different viewpoints, and the point correspondence between the two panoramic images can be obtained, providing a data basis for training a panoramic image feature point detection model and thereby mitigating the inaccurate feature point detection and matching caused by panoramic image distortion.
Next, an image processing apparatus according to an embodiment of the present invention is described with reference to fig. 7. Fig. 7 shows a block diagram of an image processing apparatus 700 according to an embodiment of the present invention. As shown in fig. 7, the image processing apparatus 700 includes an acquisition unit 710, a projection unit 720, and a relationship determination unit 730. The image processing apparatus 700 may include other components in addition to these units, however, since these components are not related to the contents of the embodiment of the present invention, illustration and description thereof are omitted herein.
Specifically, the acquisition unit 710 acquires a first planar image and a second planar image having different viewpoints and a homography matrix for converting between the different viewpoints.
Fig. 2 illustrates an example of a first planar image and a second planar image having different viewpoints according to one embodiment of the present invention. As shown in fig. 2, the first planar image acquired by the acquisition unit 710 is composed of six square sub-images s1 to s6 of the same size, so that a cube projection drawing can be generated from the six square sub-images in a subsequent step. Each of the square sub-images s1 to s6 may be a planar image captured by a perspective camera (a camera that shoots with perspective mapping, such as a single-lens reflex camera or a mirrorless camera), or a planar image from a public deep learning dataset downloaded from an open-source website.
It should be noted here that although the first plane image is shown to be composed of six square sub-images in the example of fig. 2, the present invention is not limited thereto. Specifically, the number of square sub-images included in the first plane image acquired by the acquisition unit 710 may also be less than 6, and in this case, the obtained cube projection drawing does not have an image on every surface. For example, if the first planar image consists of only one square sub-image, it can be projected onto one of the faces of the cube, and the cube projection map thus obtained has an image on only one face. However, in order to maximize the efficiency of acquiring the training data, the first plane image is preferably composed of six square sub-images, and will be described below by way of example.
The acquisition unit 710 acquiring the first and second planar images having different viewpoints and the homography matrix for converting between the different viewpoints may include: acquiring six square sub-images of the same size and combining them as the first planar image; calculating, for each square sub-image, a corresponding homography matrix based on the original coordinates and the target coordinates of its four vertices; and warping each square sub-image based on its calculated homography matrix, so that the combination of the warped sub-images serves as the second planar image.
With continued reference to fig. 2, fig. 2 further shows an example of a second planar image having a different viewpoint from the first planar image. The second planar image is composed of six sub-images s1′ to s6′, which are warped sub-images obtained by the acquisition unit 710 by warping the square sub-images s1 to s6 based on the homography matrices H1 to H6, respectively; for example, sub-image s1′ is obtained by warping sub-image s1 based on homography matrix H1, sub-image s2′ is obtained by warping sub-image s2 based on homography matrix H2, and so on.
Fig. 3 illustrates an example of warping a square sub-image based on a homography matrix to obtain a warped sub-image according to an embodiment of the present invention. As shown on the left side of fig. 3, the original coordinates of the four vertices of a square sub-image in the first planar image are (x1, y1), (x2, y2), (x3, y3), and (x4, y4), respectively. As shown on the right side of fig. 3, in order to warp the square sub-image, the acquisition unit 710 may set the target coordinates of the four vertices of the sub-image to (x1′, y1′), (x2′, y2′), (x3′, y3′), and (x4′, y4′), respectively. Those skilled in the art may, via the acquisition unit 710, set the target coordinates of the four vertices arbitrarily, as long as the warped sub-image still lies within the area of the original square sub-image (i.e., as shown in fig. 3, the target coordinates of the four vertices all fall within a square box of the same size as the original square sub-image).
After the target coordinates of the four vertices are set, the acquisition unit 710 may calculate a homography matrix based on the original coordinates and the target coordinates of the four vertices. Specifically, since the homography matrix has 8 degrees of freedom, it can be solved by substituting the original coordinates and the target coordinates of the four vertices, which together provide 8 constraints, into the equations for solving the homography matrix. In another example, the original coordinates and the target coordinates of the four vertices may also be input into the cv2.getPerspectiveTransform function of OpenCV to obtain the corresponding solved homography matrix. After obtaining the homography matrix, the acquisition unit 710 may warp the original square sub-image shown on the left side of fig. 3 based on the homography matrix to obtain the warped sub-image shown on the right side of fig. 3, where multiplying the coordinates of each point on the original square sub-image by the homography matrix yields the coordinates of the corresponding point on the warped sub-image.
In the above manner, referring back to fig. 2, the acquisition unit 710 may be used to set the target coordinates of the four vertices of each of the six square sub-images s1 to s6, so as to calculate the homography matrix corresponding to each square sub-image based on the original coordinates and the target coordinates of its four vertices. After the homography matrices corresponding to the respective square sub-images have been calculated, the acquisition unit 710 may warp each square sub-image based on its homography matrix, and the combination of the warped sub-images is taken as the second planar image. In this step, since each homography matrix can be decomposed into a rotation (R) matrix and a translation (T) matrix, the homography matrix can simulate the rotation and translation that a camera undergoes in moving from one viewpoint to another, thereby simulating a change of viewpoint; a first planar image and a second planar image having different viewpoints are thus obtained. It should be noted that the six square sub-images s1 to s6 may be warped based on the same homography matrix or based on different homography matrices, and the invention is not limited in this respect.
Subsequently, the projection unit 720 projects the first planar image and the second planar image as a first panoramic image and a second panoramic image, respectively.
Wherein the projecting unit 720 projecting the first planar image and the second planar image into a first panoramic image and a second panoramic image, respectively, may include: respectively projecting six square sub-images contained in the first planar image and six sub-images contained in the second planar image onto different surfaces of six surfaces of a cube according to the same projection surface correspondence so as to obtain a first cube projection drawing corresponding to the first planar image and a second cube projection drawing corresponding to the second planar image; performing spherical projection on the first cubic projection drawing and the second cubic projection drawing respectively to obtain a first spherical image corresponding to the first cubic projection drawing and a second spherical image corresponding to the second cubic projection drawing; and performing equidistant cylindrical projection on the first spherical image and the second spherical image respectively to obtain the first panoramic image corresponding to the first spherical image and the second panoramic image corresponding to the second spherical image.
Fig. 4 illustrates an example of projecting a first planar image and a second planar image as a first panoramic image and a second panoramic image, respectively, according to an embodiment of the present invention. As shown in fig. 4, first, the projection unit 720 may project the six square sub-images included in the first planar image and the six sub-images included in the second planar image onto different surfaces of the six surfaces of the cube according to the same projection surface correspondence, so as to obtain the first cube projection drawing and the second cube projection drawing. The same projection surface correspondence means that each sub-image included in the first planar image and the corresponding sub-image included in the second planar image are projected onto the same surface of the cube. For example, as shown in fig. 4, the sub-image s2 included in the first planar image and the sub-image s2 'included in the second planar image are both projected onto the upper surface of the cube, and the sub-image s5 included in the first planar image and the sub-image s5' included in the second planar image are both projected onto the lower surface of the cube, and so on. Subsequently, after obtaining the first cube projection drawing and the second cube projection drawing, the projection unit 720 may perform spherical projection on the first cube projection drawing and the second cube projection drawing respectively to obtain a first spherical image corresponding to the first cube projection drawing and a second spherical image corresponding to the second cube projection drawing (not shown in fig. 4). Subsequently, after obtaining the first spherical image and the second spherical image, the projection unit 720 may perform equidistant cylindrical projection on them to obtain a first panoramic image corresponding to the first spherical image and a second panoramic image corresponding to the second spherical image, respectively. In another example, the first planar image and the second planar image may also be projected as a first panoramic image and a second panoramic image, respectively, through a remapping function of OpenCV, which is not limited herein.
Subsequently, the relationship determination unit 730 obtains, based on the projection, a first correspondence relationship representing correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and a second correspondence relationship representing correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image.
Specifically, based on the processing of the projection unit 720, the relationship determination unit 730 may first determine the coordinates, on the first spherical image, of the point corresponding to each point on the first planar image, based on the size of the first planar image and the predefined cube-to-sphere projective transformation; subsequently, the relationship determination unit 730 may determine the coordinates, on the first panoramic image, of the point corresponding to each point on the first spherical image, based on the predefined size of the first panoramic image and the equidistant cylindrical projective transformation. Thus, the relationship determination unit 730 can calculate the coordinates of the corresponding point on the first panoramic image from the coordinates of each point on the first planar image, the predefined size of the first planar image, and the predefined size of the first panoramic image, and can obtain the first correspondence representing the correspondence between coordinates of a point on the first panoramic image and coordinates of the corresponding point on the first planar image. Similarly, the relationship determination unit 730 can calculate the coordinates of the corresponding point on the second panoramic image from the coordinates of each point on the second planar image, the predefined size of the second planar image, and the predefined size of the second panoramic image, and can obtain the second correspondence representing the correspondence between coordinates of a point on the second planar image and coordinates of the corresponding point on the second panoramic image.
Subsequently, the relationship determination unit 730 determines a third correspondence relationship representing correspondence between the coordinates of the point on the first panoramic image and the coordinates of the corresponding point on the second panoramic image, based on the homography matrix, the first correspondence relationship, and the second correspondence relationship.
As described above, the homography matrix may represent correspondence between coordinates of a point on the first planar image and coordinates of a corresponding point on the second planar image, the first correspondence may represent correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and the second correspondence may represent correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image. Thus, the relationship determining unit 730 may derive, for each point on the first panoramic image, the coordinates of its corresponding point on the second panoramic image step by step based on the correspondence described above, i.e. may obtain a third correspondence representing the correspondence between the coordinates of the point on the first panoramic image and the coordinates of the corresponding point on the second panoramic image.
Specifically, the relationship determining unit 730 determines, based on the homography matrix, the first correspondence relationship, and the second correspondence relationship, a third correspondence relationship that represents correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image includes: for each specific point on the first panoramic image, determining the coordinates of a first corresponding point on the first planar image corresponding to the specific point based on the first corresponding relation; multiplying the coordinates of the first corresponding point by a homography matrix of the square sub-image corresponding to the first corresponding point to obtain the coordinates of a second corresponding point corresponding to the specific point on the second plane image; and determining the coordinate of a third corresponding point corresponding to the specific point on the second panoramic image based on the coordinate of the second corresponding point and the second corresponding relation.
Fig. 5 illustrates an example of determining a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image, in accordance with an embodiment of the present invention. As shown in fig. 5, for a particular point 501 on the first panoramic image, the relationship determining unit 730 may determine the coordinates of the first corresponding point 502 on the first planar image based on the first corresponding relationship; subsequently, the relationship determining unit 730 may multiply the coordinates of the first corresponding point 502 by the homography matrix corresponding to the sub-image S2, thereby obtaining the coordinates of the second corresponding point 503 on the second planar image; finally, the relationship determination unit 730 may finally determine the coordinates of the third corresponding point 504 on the second panoramic image based on the coordinates of the second corresponding point 503 and the second corresponding relationship. The relationship determining unit 730 may repeat the above-described operations for respective specific points on the first panoramic image, thereby obtaining the above-described third correspondence relationship.
In one example, the first correspondence and the second correspondence may be implemented with a coordinate grid table having stored coordinates of corresponding points at corresponding locations. Specifically, the first correspondence relationship may be a first coordinate grid table having the same size as the first panoramic image, in which coordinates of corresponding points on the first planar image are stored at positions in the first coordinate grid table corresponding to coordinates of respective points on the first panoramic image; and the second correspondence relationship may be a second coordinate grid table having the same size as the second planar image, in which coordinates of corresponding points on the second panoramic image are stored at positions in the second coordinate grid table corresponding to coordinates of respective points on the second planar image. In this case, the relationship determining unit 730 may determine, based on the homography matrix, the first correspondence relationship, and the second correspondence relationship, a third correspondence relationship representing correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image, including: for each specific point on the first panoramic image, querying the first coordinate grid table to determine the coordinates of a first corresponding point on the first planar image corresponding to the specific point; multiplying the coordinates of the first corresponding point by a homography matrix of the square sub-image corresponding to the first corresponding point to obtain the coordinates of a second corresponding point corresponding to the specific point on the second plane image; and querying the second coordinate grid table based on the coordinates of the second corresponding point to determine the coordinates of a third corresponding point on the second panoramic image corresponding to the specific point.
In this example, based on the specific processing of the above-described relationship determination unit 730 to determine the third correspondence relationship, a coordinate grid table representing the third correspondence relationship may also be obtained. Specifically, the third correspondence may be a third coordinate grid table having the same size as the first panoramic image, in which the coordinates of the corresponding point on the second panoramic image are stored at a position in the third coordinate grid table corresponding to the coordinates of each point on the first panoramic image. Fig. 6 shows an example of a first, a second and a third coordinate grid table according to an embodiment of the invention. As shown in fig. 6, the first coordinate grid table may have the same size as the first panoramic image, and the coordinates (i, j) of the corresponding point 502 on the first planar image are stored in the first coordinate grid table at a position 601 having the same coordinates as the specific point 501 on the first panoramic image. Similarly, as further shown in fig. 6, the second coordinate grid table may be of the same size as the second planar image, and the coordinates (x, y) of the corresponding point 504 on the second panoramic image are stored at a location 602 in the second coordinate grid table that has the same coordinates as the particular point 503 on the second planar image. Accordingly, to determine the third correspondence relationship, for a particular point 501 on the first panoramic image, the relationship determination unit 730 may query the first coordinate grid table based on the coordinates of the particular point 501 to obtain the coordinates (i, j) of the first corresponding point 502 stored at the position 601 of the same coordinates; subsequently, the relationship determining unit 730 may multiply the coordinates (i, j) of the first corresponding point 502 by the homography matrix corresponding to the sub-image S2, thereby obtaining the coordinates of the second corresponding point 503 on the second planar image; finally, the relation determining unit 730 may query the second coordinate grid table based on the coordinates of the second corresponding point 503 to obtain the coordinates (x, y) of the third corresponding point 504 stored at the position 602 of the same coordinates.
The above-described query process may be repeated for each point on the first panoramic image by the relationship determination unit 730 to obtain the coordinates of its corresponding point on the second panoramic image, and finally a third coordinate grid table as shown in fig. 6 may be obtained, where the third coordinate grid table has the same size as the first panoramic image and stores the coordinates (x, y) of the third corresponding point 504 on the second panoramic image at the position 603 having the same coordinates as the specific point 501 on the first panoramic image.
After the first panoramic image, the second panoramic image, and the third correspondence are obtained, a training unit (not shown in fig. 7) may train a panoramic image feature point detection model using the first panoramic image, the second panoramic image, and the third correspondence, so as to obtain a trained panoramic image feature point detection model. More accurate panoramic image feature point detection and matching can then be achieved using the trained model.
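As an illustration only — the patent does not prescribe a model architecture or a loss function — one way a training step might use the third coordinate grid table as ground truth is to pull together descriptors sampled at corresponding locations of the two panoramas, for example:

    import torch
    import torch.nn.functional as F

    def train_step(model, optimizer, pano1, pano2, grid3, num_samples=256):
        """One hypothetical training step for a panoramic feature point detection model.

        Assumptions (not from the patent): pano1 and pano2 are (3, H, W) tensors,
        model maps an image to a dense (C, H, W) descriptor map, and grid3 is the
        third coordinate grid table of shape (H, W, 2) giving, for each point of the
        first panorama, the coordinates of its corresponding point on the second.
        """
        desc1 = model(pano1)
        desc2 = model(pano2)
        # Sample random locations on the first panorama and their correspondences.
        h, w = pano1.shape[-2], pano1.shape[-1]
        vs = torch.randint(0, h, (num_samples,))
        us = torch.randint(0, w, (num_samples,))
        xs = grid3[vs, us, 0].long().clamp(0, pano2.shape[-1] - 1)
        ys = grid3[vs, us, 1].long().clamp(0, pano2.shape[-2] - 1)
        d1 = desc1[:, vs, us]
        d2 = desc2[:, ys, xs]
        # Simple stand-in loss: corresponding descriptors should be similar.
        loss = (1.0 - F.cosine_similarity(d1, d2, dim=0)).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
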
In summary, with the image processing apparatus according to the present invention, a pair of panoramic images with different viewpoints can be generated by simulation from planar images with different viewpoints, and the point correspondence between the two panoramic images can be obtained, thereby providing a data basis for training a panoramic image feature point detection model and alleviating the inaccurate feature point detection and matching caused by panoramic image distortion.
Next, an image processing apparatus according to an embodiment of the present invention is described with reference to fig. 8. Fig. 8 shows a block diagram of an image processing apparatus 800 according to an embodiment of the present invention. As shown in fig. 8, the image processing apparatus 800 may be a computer or a server.
As shown in fig. 8, the image processing apparatus 800 includes one or more processors 810 and a memory 820, and may of course also include an input device, an output device (not shown), and the like; these components may be interconnected via a bus system and/or another form of connection mechanism. It should be noted that the components and configuration of the image processing apparatus 800 shown in fig. 8 are merely exemplary and not limiting, and the image processing apparatus 800 may have other components and configurations as needed.
Processor 810 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may utilize computer program instructions stored in memory 820 to perform desired functions, which may include: acquiring a first plane image and a second plane image with different viewpoints and a homography matrix for converting between the different viewpoints; projecting the first planar image and the second planar image into a first panoramic image and a second panoramic image respectively; obtaining a first correspondence and a second correspondence based on the projection, wherein the first correspondence represents a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and the second correspondence represents a correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image; and determining a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image based on the homography matrix, the first correspondence, and the second correspondence.
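To give a flavor of the projection step listed above, the sketch below resamples a cube map into an equidistant cylindrical (equirectangular) panorama by inverse lookup. It collapses the spherical projection and the equidistant cylindrical projection into a single per-pixel computation, and the face naming and axis conventions are assumptions made here for illustration rather than those of the patent.

    import numpy as np

    def cube_to_equirect(faces, out_h, out_w):
        """Resample a cube map into an equidistant cylindrical (equirectangular) panorama.

        faces is assumed to be a dict with keys 'front', 'back', 'left', 'right',
        'up', 'down', each an (s, s, 3) array; the layout and axis conventions are
        one possible choice, not the patent's.
        """
        s = next(iter(faces.values())).shape[0]
        out = np.zeros((out_h, out_w, 3), dtype=np.float32)
        lons = (np.arange(out_w) + 0.5) / out_w * 2.0 * np.pi - np.pi
        lats = np.pi / 2.0 - (np.arange(out_h) + 0.5) / out_h * np.pi
        for v, phi in enumerate(lats):
            for u, theta in enumerate(lons):
                # Direction on the unit sphere for this panorama pixel.
                x = np.cos(phi) * np.sin(theta)
                y = np.sin(phi)
                z = np.cos(phi) * np.cos(theta)
                ax, ay, az = abs(x), abs(y), abs(z)
                # Pick the cube face hit by this direction and its in-face coordinates.
                if az >= ax and az >= ay:
                    face, a, b = ('front', x / az, -y / az) if z > 0 else ('back', -x / az, -y / az)
                elif ax >= ay:
                    face, a, b = ('right', -z / ax, -y / ax) if x > 0 else ('left', z / ax, -y / ax)
                else:
                    face, a, b = ('up', x / ay, z / ay) if y > 0 else ('down', x / ay, -z / ay)
                # Map [-1, 1] face coordinates to pixel indices (nearest neighbour).
                col = int(np.clip((a + 1.0) / 2.0 * (s - 1), 0, s - 1))
                row = int(np.clip((b + 1.0) / 2.0 * (s - 1), 0, s - 1))
                out[v, u] = faces[face][row, col]
        return out
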
The memory 820 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 810 may execute the program instructions to implement the functions of the image processing apparatus of the embodiments of the present invention described above and/or other desired functions, and/or to execute an image processing method according to an embodiment of the present invention. Various applications and various data may also be stored in the computer-readable storage medium.
In the following, a computer readable storage medium according to an embodiment of the present invention is described, on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, implement the steps of: acquiring a first plane image and a second plane image with different viewpoints and a homography matrix for converting between the different viewpoints; projecting the first planar image and the second planar image into a first panoramic image and a second panoramic image respectively; obtaining a first correspondence and a second correspondence based on the projection, wherein the first correspondence represents a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and the second correspondence represents a correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image; and determining a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image based on the homography matrix, the first correspondence, and the second correspondence.
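For the step of acquiring a homography matrix between viewpoints, one common way to compute such a 3x3 matrix from the original and target coordinates of four vertices is OpenCV's getPerspectiveTransform. The sketch below is illustrative only; the side length and the random corner perturbation that simulates the viewpoint change are assumptions made here.

    import cv2
    import numpy as np

    # Original corners of a square sub-image of (assumed) side length s, and target
    # corners obtained by randomly perturbing them to simulate a viewpoint change.
    s = 256
    src = np.float32([[0, 0], [s, 0], [s, s], [0, s]])
    dst = src + np.float32(np.random.uniform(-0.1 * s, 0.1 * s, size=(4, 2)))

    # 3x3 homography mapping original coordinates to target coordinates.
    H = cv2.getPerspectiveTransform(src, dst)

    # Warping a sub-image with this homography yields its "second viewpoint" version:
    # warped = cv2.warpPerspective(sub_image, H, (s, s))
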
Of course, the above-described embodiments are merely illustrative and not restrictive, and those skilled in the art may, according to the inventive concept, combine steps and devices from the separately described embodiments above to achieve the effects of the present invention; embodiments obtained by such combination are also included in the present invention and are not described separately herein.
Note that the advantages, effects, and the like mentioned in the present invention are merely examples and not limitations, and should not be considered essential to the various embodiments of the present invention. Furthermore, the foregoing detailed description is provided for purposes of illustration and understanding only, and is not intended to limit the invention to the details disclosed.
The block diagrams of devices, apparatuses, and systems involved in the present invention are given only as illustrative examples and are not intended to require or imply that the devices, apparatuses, and systems must be connected, arranged, or configured in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," and "having" are open-ended words meaning "including but not limited to" and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, "and/or," unless the context clearly dictates otherwise. The word "such as" as used herein means, and is used interchangeably with, "such as but not limited to."
The flowcharts of steps in the present invention and the above description of the methods are given only as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by those skilled in the art, the steps in the above embodiments may be performed in any order. Words such as "thereafter," "then," and "next" are not intended to limit the order of the steps; they are only used to guide the reader through the description of the methods. Furthermore, any reference to an element in the singular, for example, using the articles "a," "an," or "the," is not to be construed as limiting the element to the singular.
In addition, the steps and devices in the embodiments are not limited to be implemented in a certain embodiment, and in fact, some steps and devices related to the embodiments may be combined according to the concept of the present invention to conceive new embodiments, and these new embodiments are also included in the scope of the present invention.
The individual operations of the methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software components and/or modules including, but not limited to, a circuit, an Application Specific Integrated Circuit (ASIC), or a processor.
The various illustrative logical blocks, modules, and circuits described may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an ASIC, a Field Programmable Gate Array (FPGA) or other Programmable Logic Device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the invention may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in any form of tangible storage medium. Some examples of storage media that may be used include Random Access Memory (RAM), Read Only Memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, and the like. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. A software module may be a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media.
The methods of the invention herein comprise one or more acts for carrying out the recited methods. The methods and/or acts may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of actions is specified, the order and/or use of specific actions may be modified without departing from the scope of the claims.
The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions on a tangible computer-readable medium. A storage medium may be any available tangible medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk (disc), as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc.
Accordingly, a computer program product may perform the operations presented herein. For example, such a computer program product may be a computer-readable tangible medium having instructions stored (and/or encoded) thereon that are executable by one or more processors to perform the operations described herein. The computer program product may include packaged material.
Software or instructions may also be transmitted over a transmission medium. For example, the software may be transmitted from a website, server, or other remote source using a transmission medium such as coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, or microwave.
Further, modules and/or other suitable means for carrying out the methods and techniques described herein may be downloaded and/or otherwise obtained by a user terminal and/or base station as appropriate. For example, such a device may be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, the various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a CD or floppy disk) so that the user terminal and/or base station can obtain the various methods when coupled to or providing storage means to the device. Further, any other suitable technique for providing the methods and techniques described herein to a device may be utilized.
Other examples and implementations are within the scope and spirit of the invention and the following claims. For example, due to the nature of software, the functions described above may be implemented using software executed by a processor, hardware, firmware, hard-wiring, or any combination of these. Features implementing functions may also be physically located at various places, including being distributed so that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items prefaced by "at least one of" indicates a disjunctive list, such that a list of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "exemplary" does not mean that the described example is preferred or better than other examples.
Various changes, substitutions and alterations to the techniques described herein may be made without departing from the techniques of the teachings as defined by the appended claims. Moreover, the scope of the claims is not limited to the specific aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. Processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the invention to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. An image processing method comprising:
acquiring a first plane image and a second plane image with different viewpoints and a homography matrix for converting between the different viewpoints;
projecting the first planar image and the second planar image into a first panoramic image and a second panoramic image respectively;
obtaining a first correspondence and a second correspondence based on the projection, wherein the first correspondence represents a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and the second correspondence represents a correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image; and
determining a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image based on the homography matrix, the first correspondence, and the second correspondence.
2. The method of claim 1, wherein the acquiring the first and second planar images having different viewpoints and the homography matrix for converting between the different viewpoints comprises:
acquiring six square sub-images of the same size, and taking the combination of the six sub-images as the first plane image;
calculating a homography matrix for each square sub-image based on the original coordinates and the target coordinates of the four vertices of that square sub-image; and
warping each square sub-image based on its calculated homography matrix, so as to take the combination of the warped sub-images as the second plane image.
3. The method of claim 2, wherein said projecting the first and second planar images into first and second panoramic images, respectively, comprises:
projecting the six square sub-images contained in the first planar image and the six sub-images contained in the second planar image onto the respective faces of a cube according to the same projection-face correspondence, so as to obtain a first cubic projection diagram corresponding to the first planar image and a second cubic projection diagram corresponding to the second planar image;
performing spherical projection on the first cubic projection diagram and the second cubic projection diagram respectively, so as to obtain a first spherical image corresponding to the first cubic projection diagram and a second spherical image corresponding to the second cubic projection diagram; and
performing equidistant cylindrical projection on the first spherical image and the second spherical image respectively, so as to obtain the first panoramic image corresponding to the first spherical image and the second panoramic image corresponding to the second spherical image.
4. The method of claim 3, wherein,
the first correspondence relationship is a first coordinate grid table having the same size as the first panoramic image, in which coordinates of corresponding points on the first planar image are stored at positions in the first coordinate grid table corresponding to coordinates of respective points on the first panoramic image; and
the second correspondence relationship is a second coordinate grid table having the same size as the second planar image, in which coordinates of corresponding points on the second panoramic image are stored at positions in the second coordinate grid table corresponding to coordinates of respective points on the second planar image.
5. The method of claim 3, wherein the determining, based on the homography matrix, the first correspondence, and the second correspondence, a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image comprises:
for each specific point on the first panoramic image, determining the coordinates of a first corresponding point on the first planar image corresponding to the specific point based on the first corresponding relation;
multiplying the coordinates of the first corresponding point by the homography matrix of the square sub-image corresponding to the first corresponding point to obtain the coordinates of a second corresponding point corresponding to the specific point on the second plane image; and
determining the coordinates of a third corresponding point corresponding to the specific point on the second panoramic image based on the coordinates of the second corresponding point and the second correspondence.
6. The method of claim 4, wherein the determining a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image based on the homography matrix, the first correspondence, and the second correspondence comprises:
for each specific point on the first panoramic image, querying the first coordinate grid table to determine the coordinates of a first corresponding point on the first planar image corresponding to the specific point;
multiplying the coordinates of the first corresponding point by the homography matrix of the square sub-image corresponding to the first corresponding point to obtain the coordinates of a second corresponding point corresponding to the specific point on the second plane image; and
querying the second coordinate grid table based on the coordinates of the second corresponding point to determine the coordinates of a third corresponding point on the second panoramic image corresponding to the specific point.
7. The method of any of claims 1-6, further comprising:
training a panoramic image feature point detection model by using the first panoramic image, the second panoramic image, and the third correspondence, so as to obtain a trained panoramic image feature point detection model.
8. An image processing apparatus comprising:
an acquisition unit configured to acquire a first planar image and a second planar image having different viewpoints and a homography matrix for conversion between the different viewpoints;
a projection unit configured to project the first planar image and the second planar image as a first panoramic image and a second panoramic image, respectively; and
a relationship determination unit configured to obtain a first correspondence relationship and a second correspondence relationship based on the projection, wherein the first correspondence relationship represents a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and the second correspondence relationship represents a correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image; wherein
the relationship determination unit is further configured to determine a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image, based on the homography matrix, the first correspondence, and the second correspondence.
9. An image processing apparatus comprising:
a processor;
and a memory having computer program instructions stored therein,
wherein the computer program instructions, when executed by the processor, cause the processor to perform the steps of:
acquiring a first plane image and a second plane image with different viewpoints and a homography matrix for converting between the different viewpoints;
projecting the first planar image and the second planar image into a first panoramic image and a second panoramic image respectively;
obtaining a first correspondence and a second correspondence based on the projection, wherein the first correspondence represents a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and the second correspondence represents a correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image; and
determining a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image based on the homography matrix, the first correspondence, and the second correspondence.
10. A computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions when executed by a processor implement the steps of:
acquiring a first plane image and a second plane image with different viewpoints and a homography matrix for converting between the different viewpoints;
projecting the first planar image and the second planar image into a first panoramic image and a second panoramic image respectively;
obtaining a first correspondence and a second correspondence based on the projection, wherein the first correspondence represents a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the first planar image, and the second correspondence represents a correspondence between coordinates of a point on the second planar image and coordinates of a corresponding point on the second panoramic image; and
determining a third correspondence representing a correspondence between coordinates of a point on the first panoramic image and coordinates of a corresponding point on the second panoramic image based on the homography matrix, the first correspondence, and the second correspondence.
CN202110481942.6A 2021-04-30 2021-04-30 Image processing method and device and computer readable storage medium Pending CN115272052A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110481942.6A CN115272052A (en) 2021-04-30 2021-04-30 Image processing method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110481942.6A CN115272052A (en) 2021-04-30 2021-04-30 Image processing method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115272052A true CN115272052A (en) 2022-11-01

Family

ID=83746038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110481942.6A Pending CN115272052A (en) 2021-04-30 2021-04-30 Image processing method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115272052A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116000484A (en) * 2023-03-28 2023-04-25 湖南视比特机器人有限公司 Workpiece secondary positioning method, positioning device, workpiece groove cutting method and device
CN116000484B (en) * 2023-03-28 2023-07-25 湖南视比特机器人有限公司 Workpiece secondary positioning method, positioning device, workpiece groove cutting method and device

Similar Documents

Publication Publication Date Title
CN110021069B (en) Three-dimensional model reconstruction method based on grid deformation
US10872439B2 (en) Method and device for verification
JP6228300B2 (en) Improving the resolution of plenoptic cameras
CN108225216B (en) Structured light system calibration method and device, structured light system and mobile device
US10726580B2 (en) Method and device for calibration
CN102833460B (en) Image processing method, image processing device and scanner
JP2019509569A (en) Perspective correction for curved display screens
CN109711472B (en) Training data generation method and device
CN104994367B (en) A kind of image correction method and camera
JP2019532531A (en) Panorama image compression method and apparatus
CN106570907B (en) Camera calibration method and device
WO2018080533A1 (en) Real-time generation of synthetic data from structured light sensors for 3d object pose estimation
US20220004840A1 (en) Convolutional neural network-based data processing method and device
CN115272052A (en) Image processing method and device and computer readable storage medium
CN112581632A (en) House source data processing method and device
JP2022548608A (en) Method and Related Apparatus for Acquiring Textures of 3D Models
Noury et al. Light-field camera calibration from raw images
CN107067461A (en) The construction method and device of indoor stereo figure
CN114792345A (en) Calibration method based on monocular structured light system
CN111178266B (en) Method and device for generating key points of human face
CN102314682B (en) Method, device and system for calibrating camera
Askarian Bajestani et al. Scalable and view-independent calibration of multi-projector display for arbitrary uneven surfaces
CN112652056A (en) 3D information display method and device
CN113554686B (en) Image processing method, apparatus and computer readable storage medium
CN106408499B (en) Method and device for acquiring reverse mapping table for image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination