CN118014832A - Image stitching method and related device based on linear feature invariance

Publication number: CN118014832A
Application number: CN202410418203.6A
Authority: CN (China)
Legal status: Granted
Original language: Chinese (zh)
Other versions: CN118014832B (granted publication)
Inventor: 崔乔乔 (Cui Qiaoqiao)
Assignee: Shenzhen Seichitech Technology Co., Ltd.
Status: Application filed by Shenzhen Seichitech Technology Co., Ltd.; granted and published as CN118014832B; currently active.

Classifications

    • G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T3/00 Geometric image transformations in the plane of the image > G06T3/40 Scaling of whole images or parts thereof)
    • G06T2200/32 — Indexing scheme for image data processing or generation, in general, involving image mosaicing

Abstract

The application discloses an image stitching method and a related device based on linear feature invariance, used to improve the defect detection efficiency of a VR display screen. The image stitching method comprises the following steps: inputting a calibration image to the VR display screen; photographing the VR display screen region by region to generate captured images; acquiring the abscissa and ordinate data of the lattice points of each captured image, sorting the abscissa and ordinate data by value, placing the resulting index set into a blank matrix, and generating a coordinate data matrix; calculating, through perspective transformation, the mapping relation between the abscissa and ordinate data and the sequence numbers of the lattice points in the coordinate data matrix; calculating new lattice point coordinates from the mapping relation and the abscissa and ordinate data; parallel-aligning, through affine transformation, the new lattice point coordinates lying in the same row or column of adjacent coordinate data matrices; and stitching the images according to the new lattice point coordinate set and the abscissa and ordinate data of the lattice points of the captured images, to generate a target stitched image.

Description

Image stitching method and related device based on linear feature invariance
Technical Field
Embodiments of the present application relate to the field of display screen image stitching, and in particular to an image stitching method and related device based on linear feature invariance.
Background
With the development of technology, devices of all kinds are continuously updated and iterated. As one of the main display components, the display screen is applied in various high-end devices such as mobile phones, televisions and tablet computers. As demands on picture quality keep rising, display screens have gradually become precision products.
A traditional display screen is essentially quadrilateral, and the four edge contour lines of the screen body are easy to find in the positioning stage of AOI (automated optical inspection) detection. The effective area of a regular rectangular display screen can be detected effectively through the gray-level detection principle.
Nowadays, however, holographic projection technology is iterating continuously, and display screens dedicated to holographic projection have emerged. To serve holographic projection, the VR-Glass display screen was created: it must match the user's viewing angle for projection and provide an immersive experience, so the conventional rectangular display screen is modified to have edges with different inclinations; such screens are called profiled (special-shaped) screens. Because of the strong immersive experience, VR-Glass is increasingly popular with young people, and VR-Glass designs are becoming ever more novel and fashionable. Today, the outline structure of a VR-Glass display screen is already far from that of a traditional display screen.
A VR-Glass display screen has more edges than a quadrilateral, so it is an irregular special-shaped screen, and the number of edge line segments differs from screen to screen, which the traditional AA-region extraction algorithm cannot handle. Extraction of the AA region (the effective area) is an indispensable technical link in AOI detection, and accurate extraction of the AA region is critical for defect detection and coordinate statistics.
A VR-Glass display screen is small in size and high in resolution, so its pixel density is relatively high. At the same time, the VR display screen sits very close to the human eye, so the defect detection requirements during production are strict: the minimum defect size is about 1 μm. Detection accuracy can only be improved by increasing camera resolution, which severely limits the camera's field of view. For example, with a 151-megapixel camera, a 3-inch display screen must be divided into at least 6 blocks to photograph the entire panel while satisfying the defect detection accuracy.
Image stitching across multiple shots is therefore required. If multiple cameras were used, gray-level correction between the different cameras would be needed and the conversion relations between cameras would have to be calculated; the camera data volume is large, more processors are required, and subsequent data integration is troublesome, all of which greatly reduces the defect detection efficiency of the VR display screen.
Disclosure of Invention
The application discloses an image stitching method and related device based on linear feature invariance, and in particular a multi-image stitching method for a VR-Glass display screen. Because the PPI of a VR screen is high and defects of about 1 μm must be detected, a high-resolution camera is required for photographing; the display screen cannot be imaged in a single shot, so multiple shots must be taken and stitched. The method can eliminate the image deviation caused by translation and rotation during camera movement, as well as the position deviation between images far apart from each other, and can stitch multiple images together accurately.
The first aspect of the application provides an image stitching method based on linear feature invariance, comprising the following steps:
inputting a calibration image to the VR display screen, wherein lattice points are arranged on the calibration image;
using a camera to photograph the VR display screen region by region to generate at least 2 captured images, wherein the captured images of two adjacent regions share lattice points in the same row or the same column;
acquiring the abscissa and ordinate data of the lattice points of each captured image, sorting the abscissa and ordinate data by value, placing the resulting index set into a blank matrix, and generating a coordinate data matrix;
calculating, through perspective transformation, the mapping relation between the abscissa and ordinate data and the sequence numbers of the lattice points in the coordinate data matrix;
calculating new lattice point coordinates according to the mapping relation and the abscissa and ordinate data;
parallel-aligning (i.e., making parallel and then collinear), through affine transformation, the new lattice point coordinates lying in the same row or column of adjacent coordinate data matrices;
and stitching the images according to the new lattice point coordinate set and the abscissa and ordinate data of the lattice points of the captured images, to generate a target stitched image.
Optionally, acquiring the abscissa and ordinate data of the lattice points of the captured image, sorting the abscissa and ordinate data by value, and placing the resulting index set into a blank matrix to generate a coordinate data matrix comprises:
acquiring the abscissa and ordinate data of the lattice points of the captured image, and determining the difference between the maximum value and the minimum value of the abscissa and ordinate data;
calculating the actual numbers of rows and columns according to the preset horizontal and vertical distances between two adjacent lattice points and the difference;
determining the value range of the abscissa and ordinate data;
sorting the lattice point coordinates in the abscissa and ordinate data by value, and obtaining the index set of the sorted lattice point coordinates;
generating a blank matrix according to the actual numbers of rows and columns;
and placing the sorted index set into the blank matrix according to the value range, to generate the coordinate data matrix.
Optionally, after acquiring the abscissa and ordinate data of the lattice points of the captured image, sorting the abscissa and ordinate data by value, placing the index set into a blank matrix and generating the coordinate data matrix, and before calculating, through perspective transformation, the mapping relation between the abscissa and ordinate data and the sequence numbers of the lattice points in the coordinate data matrix, the image stitching method further comprises:
sorting and associating the abscissa and ordinate data of the same row and the same column in the coordinate data matrix to obtain a new arrangement index;
determining the lattice points missing from a row or a column by calculating the differences between rows and between columns;
and taking the mean value of the row or column as the coordinates of the missing point, to complete the abscissa and ordinate data and the coordinate data matrix.
Optionally, parallel-aligning, through affine transformation, the new lattice point coordinates in the same row or column of adjacent coordinate data matrices comprises:
taking the two sets of new lattice point coordinates corresponding to the two captured images of adjacent regions;
determining, in each of the two sets, the collinear lattice point coordinates lying in the same row or the same column, fitting a straight line to each of the two groups, and generating two fitted lines;
calculating the inclination angles of the two fitted lines, and generating a rotation matrix from the inclination angles through affine transformation;
and making the two fitted lines parallel through the rotation matrix of the affine transformation.
Optionally, after parallel-aligning, through affine transformation, the new lattice point coordinates in the same row or the same column of adjacent coordinate data matrices, and before stitching the images according to the new lattice point coordinate set and the coordinates of the captured images to obtain the target stitched image, the image stitching method further comprises:
obtaining the difference between the coordinates of the last lattice point on the first fitted line and the coordinates of the first point on the second fitted line, thereby obtaining the overall translation offsets of the second fitted line, and generating a translation matrix;
and translating the collinear lattice point coordinates with the translation matrix, and applying the same fitting, inclination-angle calculation, translation-parameter calculation and translation processing to the new lattice point coordinates of every captured image.
Optionally, stitching the images according to the new lattice point coordinate set and the abscissa and ordinate data of the lattice points of the captured images to generate a target stitched image comprises:
taking the first captured image as the reference frame and calculating a first conversion matrix of a second captured image through perspective transformation, wherein the first and second captured images are captured images of adjacent regions;
calculating a second conversion matrix through perspective transformation from the new lattice point coordinate set of the second captured image and the abscissa and ordinate data of its lattice points;
transforming the second captured image to its stitching position according to the first and second conversion matrices;
obtaining a third and a fourth conversion matrix of a third captured image through perspective transformation and affine transformation, wherein the third captured image is the captured image of another region adjacent to the second captured image;
transforming the third captured image to its stitching position through the first, third and fourth conversion matrices;
and stitching and cropping the captured images at the stitching positions, to generate the target stitched image.
Optionally, photographing the VR display screen region by region with a camera to generate at least 2 captured images comprises:
keeping the shooting parameters unchanged throughout every shot of the camera;
setting the shooting environment to a dark-room state;
and moving the camera to preset positions, lighting with bar light, and photographing, to generate at least 2 captured images.
A second aspect of the present application provides an image stitching device based on linear feature invariance, comprising:
an input unit, configured to input a calibration image to the VR display screen, wherein lattice points are arranged on the calibration image;
a first generation unit, configured to photograph the VR display screen region by region with a camera to generate at least 2 captured images, wherein the captured images of two adjacent regions share lattice points in the same row or the same column;
a second generation unit, configured to acquire the abscissa and ordinate data of the lattice points of each captured image, sort the abscissa and ordinate data by value, place the resulting index set into a blank matrix, and generate a coordinate data matrix;
a first calculation unit, configured to calculate, through perspective transformation, the mapping relation between the abscissa and ordinate data and the sequence numbers of the lattice points in the coordinate data matrix;
a second calculation unit, configured to calculate new lattice point coordinates according to the mapping relation and the abscissa and ordinate data;
a parallel-alignment unit, configured to parallel-align, through affine transformation, the new lattice point coordinates lying in the same row or column of adjacent coordinate data matrices;
and a third generation unit, configured to stitch the images according to the new lattice point coordinate set and the abscissa and ordinate data of the lattice points of the captured images, to generate a target stitched image.
Optionally, the second generation unit is configured to:
acquire the abscissa and ordinate data of the lattice points of the captured image, and determine the difference between the maximum value and the minimum value of the abscissa and ordinate data;
calculate the actual numbers of rows and columns according to the preset horizontal and vertical distances between two adjacent lattice points and the difference;
determine the value range of the abscissa and ordinate data;
sort the lattice point coordinates in the abscissa and ordinate data by value, and obtain the index set of the sorted lattice point coordinates;
generate a blank matrix according to the actual numbers of rows and columns;
and place the sorted index set into the blank matrix according to the value range, to generate the coordinate data matrix.
Optionally, the image stitching device further comprises, after the second generation unit and before the first calculation unit:
a first acquisition unit, configured to sort and associate the abscissa and ordinate data of the same row and the same column in the coordinate data matrix to obtain a new arrangement index;
a determination unit, configured to determine the lattice points missing from a row or a column by calculating the differences between rows and between columns;
and a completion unit, configured to take the mean value of the row or column as the coordinates of the missing point, to complete the abscissa and ordinate data and the coordinate data matrix.
Optionally, the parallel-alignment unit is configured to:
take the two sets of new lattice point coordinates corresponding to the two captured images of adjacent regions;
determine, in each of the two sets, the collinear lattice point coordinates lying in the same row or the same column, fit a straight line to each of the two groups, and generate two fitted lines;
calculate the inclination angles of the two fitted lines, and generate a rotation matrix from the inclination angles through affine transformation;
and make the two fitted lines parallel through the rotation matrix of the affine transformation.
Optionally, the image stitching device further comprises, after the parallel-alignment unit and before the third generation unit:
a second acquisition unit, configured to obtain the difference between the coordinates of the last lattice point on the first fitted line and the coordinates of the first point on the second fitted line, thereby obtain the overall translation offsets of the second fitted line, and generate a translation matrix;
and a translation unit, configured to translate the collinear lattice point coordinates with the translation matrix, and to apply the same fitting, inclination-angle calculation, translation-parameter calculation and translation processing to the new lattice point coordinates of every captured image.
Optionally, the third generation unit is configured to:
take the first captured image as the reference frame and calculate a first conversion matrix of a second captured image through perspective transformation, wherein the first and second captured images are captured images of adjacent regions;
calculate a second conversion matrix through perspective transformation from the new lattice point coordinate set of the second captured image and the abscissa and ordinate data of its lattice points;
transform the second captured image to its stitching position according to the first and second conversion matrices;
obtain a third and a fourth conversion matrix of a third captured image through perspective transformation and affine transformation, wherein the third captured image is the captured image of another region adjacent to the second captured image;
transform the third captured image to its stitching position through the first, third and fourth conversion matrices;
and stitch and crop the captured images at the stitching positions, to generate the target stitched image.
Optionally, the first generation unit is configured to:
keep the shooting parameters unchanged throughout every shot of the camera;
set the shooting environment to a dark-room state;
and move the camera to preset positions, light with bar light, and photograph, to generate at least 2 captured images.
A third aspect of the present application provides an electronic device, comprising:
A processor, a memory, an input-output unit, and a bus;
The processor is connected with the memory, the input/output unit and the bus;
The memory holds a program, and the processor invokes the program to perform the image stitching method of the first aspect or of any optional implementation of the first aspect.
A fourth aspect of the application provides a computer-readable storage medium on which a program is stored, and the program, when executed on a computer, performs the image stitching method of the first aspect or of any optional implementation of the first aspect.
From the above technical solutions, it can be seen that the embodiments of the present application have the following advantages:
In the application, a calibration image carrying lattice points is first input to the VR display screen. A camera photographs the VR display screen region by region to generate at least 2 captured images, where the captured images of adjacent regions share lattice points in the same row or the same column. The abscissa and ordinate data of the lattice points of each captured image are acquired and sorted by value, the resulting index set is placed into a blank matrix, and a coordinate data matrix is generated. The mapping relation between the abscissa and ordinate data and the sequence numbers of the lattice points in the coordinate data matrix is calculated through perspective transformation. New lattice point coordinates are calculated from the mapping relation and the abscissa and ordinate data. The new lattice point coordinates in the same row or column of adjacent coordinate data matrices are parallel-aligned through affine transformation. Finally, the images are stitched according to the new lattice point coordinate set and the abscissa and ordinate data of the lattice points of the captured images, generating the target stitched image.
Captured images of different regions are acquired; the lattice points in each captured image are sorted by value and placed into a blank matrix of corresponding size; and the mapping relation between the abscissa and ordinate data and the sequence numbers of the lattice points in the coordinate data matrix is calculated through perspective transformation. New coordinate positions are then calculated from the mapping relation. Lattice points in the same row or column of two adjacent matrices are parallel-aligned through affine transformation so that they lie on the same line; the lattice points of each captured image are adjusted according to the adjusted coordinates; and the images are finally stitched by means of the overlapping lattice points of adjacent captured images, generating the target stitched image. Through the perspective and affine transformations, the image deviation caused by translation and rotation and the position deviation between images far apart can be eliminated, so that multiple images can be stitched together accurately, improving the defect detection efficiency of the VR display screen.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a linear feature invariance-based image stitching method of the present application;
FIG. 2 is a schematic diagram of an embodiment of a first stage of the image stitching method based on linear feature invariance of the present application;
FIG. 3 is a schematic diagram of a second stage of the image stitching method based on linear feature invariance according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an embodiment of a third stage of the image stitching method based on linear feature invariance of the present application;
FIG. 5 is a schematic view of an embodiment of an image stitching device based on linear feature invariance in accordance with the present application;
FIG. 6 is a schematic diagram of another embodiment of an image stitching device based on linear feature invariance in accordance with the present application;
FIG. 7 is a schematic diagram of an embodiment of an electronic device of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In the prior art, the traditional display screen is essentially quadrilateral, and the four edge contour lines of the screen body are easy to find in the positioning stage of AOI detection. The effective area of a regular rectangular display screen can be detected effectively through the gray-level detection principle.
Nowadays, however, holographic projection technology is iterating continuously, and display screens dedicated to holographic projection have emerged. To serve holographic projection, the VR-Glass display screen was created: it must match the user's viewing angle for projection and provide an immersive experience, so the conventional rectangular display screen is modified to have edges with different inclinations; such screens are called profiled (special-shaped) screens. Because of the strong immersive experience, VR-Glass is increasingly popular with young people, and VR-Glass designs are becoming ever more novel and fashionable. Today, the outline structure of a VR-Glass display screen is already far from that of a traditional display screen.
A VR-Glass display screen has more edges than a quadrilateral, so it is an irregular special-shaped screen, and the number of edge line segments differs from screen to screen, which the traditional AA-region extraction algorithm cannot handle. Extraction of the AA region (the effective area) is an indispensable technical link in AOI detection, and accurate extraction of the AA region is critical for defect detection and coordinate statistics.
A VR-Glass display screen is small in size and high in resolution, so its pixel density is relatively high. At the same time, the VR display screen sits very close to the human eye, so the defect detection requirements during production are strict: the minimum defect size is about 1 μm. Detection accuracy can only be improved by increasing camera resolution, which severely limits the camera's field of view. For example, with a 151-megapixel camera, a 3-inch display screen must be divided into at least 6 blocks to photograph the entire panel while satisfying the defect detection accuracy.
Image stitching across multiple shots is therefore required. If multiple cameras were used, gray-level correction between the different cameras would be needed and the conversion relations between cameras would have to be calculated; the camera data volume is large, more processors are required, and subsequent data integration is troublesome, all of which greatly reduces the defect detection efficiency of the VR display screen.
Based on the above, the application discloses an image stitching method and related device based on linear feature invariance, which are used to improve the defect detection efficiency of the VR display screen.
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The method of the present application may be applied to a server, a device, a terminal, or other equipment with logic processing capabilities, which the present application does not limit. For convenience of description, the following takes the terminal as the execution body by way of example.
Referring to fig. 1, the present application provides an embodiment of an image stitching method based on linear feature invariance, including:
101. Inputting a calibration image to the VR display screen, wherein lattice points are arranged on the calibration image;
In this embodiment, the terminal inputs the calibration image to the VR display screen so that the screen displays the orderly arranged lattice points carried by the calibration image.
102. Photographing the VR display screen region by region with a camera to generate at least 2 captured images, wherein the captured images of two adjacent regions share lattice points in the same row or the same column;
In this embodiment, the terminal photographs different lattice-point areas of the VR display screen to obtain at least 2 captured images. The lattice points at the edges of the captured images are shared (overlapping) lattice points, so that the stitching stage has reference points.
103. Acquiring the abscissa and ordinate data of the lattice points of each captured image, sorting the abscissa and ordinate data by value, placing the resulting index set into a blank matrix, and generating a coordinate data matrix;
The terminal acquires the abscissa and ordinate data of the lattice points of the captured image. First, the lattice points in the same row or the same column are grouped according to their ordinate or abscissa; then the column coordinates (or row coordinates) are sorted by value, so that lattice points in the same row are ordered by column coordinate, or lattice points in the same column are ordered by row coordinate. The index set of the sorted abscissa and ordinate data is then placed into a blank matrix to generate the coordinate data matrix.
104. Calculating, through perspective transformation, the mapping relation between the abscissa and ordinate data and the sequence numbers of the lattice points in the coordinate data matrix;
The terminal calculates, through perspective transformation, the mapping relation between the abscissa and ordinate data and the sequence numbers of the lattice points in the coordinate data matrix. Specifically, for the lattice point of the n-th row and m-th column with coordinates (x_m, y_n), the relation between (n, m) and (x_m, y_n) is computed through the perspective transformation (formula 1):

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{1}$$

where (X, Y) are the new lattice point coordinates, (x, y) are the original lattice point coordinates, Z is the supplementary (homogeneous) parameter, and the 3×3 matrix of the $a_{ij}$ is the conversion matrix of the perspective transformation.
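As an illustrative sketch only (not part of the patent text), the conversion matrix of formula 1 can be estimated from the correspondence between grid sequence numbers and detected lattice-point coordinates, for example with OpenCV's findHomography; the grid size, spacing and pixel values below are hypothetical:

```python
import numpy as np
import cv2

# Hypothetical detected lattice-point centers of a 4-row x 5-column grid
# (pixel coordinates, row by row); in practice these come from step 103.
detected = np.array([[120.3 + 500.1 * m, 95.7 + 498.9 * n]
                     for n in range(4) for m in range(5)], dtype=np.float32)

# Sequence numbers (m, n): column and row of each point in the matrix.
grid = np.array([[m, n] for n in range(4) for m in range(5)],
                dtype=np.float32)

# Estimate the 3x3 conversion matrix of formula 1, mapping (m, n) -> (x, y).
A, _ = cv2.findHomography(grid, detected)

# Apply formula 1: homogeneous multiplication, then divide by Z.
X, Y, Z = A @ np.array([2.0, 1.0, 1.0])   # lattice point at column 2, row 1
print(X / Z, Y / Z)                        # its regularized pixel position
```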
105. Calculating new lattice point coordinates according to the mapping relation and the abscissa and ordinate data;
The terminal calculates the new lattice point coordinates according to the mapping relation and the abscissa and ordinate data. Specifically, the terminal determines the row and column of each lattice point in the processed image relative to the first row and column of the coordinate data matrix, and then applies the perspective transformation relation to obtain the new coordinates.
106. Parallel-aligning, through affine transformation, the new lattice point coordinates in the same row or column of adjacent coordinate data matrices;
The terminal parallel-aligns, through affine transformation, the new lattice point coordinates in the same row or column of adjacent coordinate data matrices, so that two adjacent rows or columns of lattice points lie on the same straight line as far as possible.
107. Stitching the images according to the new lattice point coordinate set and the abscissa and ordinate data of the lattice points of the captured images, to generate a target stitched image.
After the new lattice point coordinates in the same row or column of adjacent coordinate data matrices have been parallel-aligned through affine transformation, the terminal stitches the images according to the new lattice point coordinate set and the abscissa and ordinate data of the lattice points of the captured images, generating the target stitched image.
In this embodiment, a calibration image carrying lattice points is first input to the VR display screen. A camera photographs the VR display screen region by region to generate at least 2 captured images, where the captured images of adjacent regions share lattice points in the same row or the same column. The abscissa and ordinate data of the lattice points of each captured image are acquired and sorted by value, the resulting index set is placed into a blank matrix, and a coordinate data matrix is generated. The mapping relation between the abscissa and ordinate data and the sequence numbers of the lattice points in the coordinate data matrix is calculated through perspective transformation. New lattice point coordinates are calculated from the mapping relation and the abscissa and ordinate data. The new lattice point coordinates in the same row or column of adjacent coordinate data matrices are parallel-aligned through affine transformation. Finally, the images are stitched according to the new lattice point coordinate set and the abscissa and ordinate data of the lattice points of the captured images, generating the target stitched image.
Captured images of different regions are acquired; the lattice points in each captured image are sorted by value and placed into a blank matrix of corresponding size; and the mapping relation between the abscissa and ordinate data and the sequence numbers of the lattice points in the coordinate data matrix is calculated through perspective transformation. New coordinate positions are then calculated from the mapping relation. Lattice points in the same row or column of two adjacent matrices are parallel-aligned through affine transformation so that they lie on the same line; the lattice points of each captured image are adjusted according to the adjusted coordinates; and the images are finally stitched by means of the overlapping lattice points of adjacent captured images, generating the target stitched image. Through the perspective and affine transformations, the image deviation caused by translation and rotation and the position deviation between images far apart can be eliminated, so that multiple images can be stitched together accurately, improving the defect detection efficiency of the VR display screen.
(1) This embodiment provides a matching optical design scheme for the multi-image stitching system of a high-PPI VR display screen, which keeps the gray-level difference between images small and facilitates subsequent detection on the stitched image;
(2) the image stitching uses single-row and single-column information to calculate the positional relations, which maximizes the accuracy of the image conversion and of the defect coordinates in the stitched image;
(3) the image in the upper-left corner is taken as the reference; the row and column numbers of each image under test relative to the reference image are calculated, and the conversion relation between the row/column information and the lattice point coordinates in the image under test reduces the error caused by inaccurate offset coordinates;
(4) the conversion relation between the image under test and the reference image is obtained step by step from the coordinate information of adjacent images, which maximizes the accuracy of the position information, so that the stitched coordinates obtained by converting the extracted actual coordinates through the matrices are essentially consistent with the lattice point positions of the reference image in the corresponding area;
(5) the algorithm logic of this embodiment is simple, and the conversion relations can be calculated while the camera is still shooting, which greatly shortens the running time of the algorithm and improves its practicability.
Referring to FIG. 2, FIG. 3 and FIG. 4, the application provides another embodiment of the image stitching method based on linear feature invariance, comprising:
201. Inputting a calibration image to the VR display screen, wherein lattice points are arranged on the calibration image;
Step 201 in this embodiment is similar to step 101 in the previous embodiment and is not repeated here.
202. Setting the shooting parameters to remain unchanged throughout every shot of the camera;
203. Setting the shooting environment to a dark-room state;
204. Moving the camera to preset positions, lighting with bar light, and photographing, to generate at least 2 captured images;
In this embodiment, the terminal sets the shooting parameters to remain unchanged throughout every shot of the camera, which saves a great amount of subsequent correction and compensation work; it then sets the shooting environment to a dark-room state to prevent interference from external light sources, moves the camera to the preset positions, lights the screen with bar light, and photographs, generating at least 2 captured images.
Specifically, the method uses the same camera for all captured images. The camera is moved horizontally and vertically along the XY axes; parameters such as exposure time, aperture and gain are kept unchanged for every shot; the laboratory is kept dark; and bar light is used for imaging. The experiment therefore assumes that the gray levels of the pictures shot in different areas are consistent, so no gray-level correction is needed.
In addition, this embodiment uses a dot-matrix calibration board (the calibration image). The camera photographs at 6 different positions to obtain partial dot-matrix images of the calibration board, and these captured images are then stitched together into a complete calibration board image. To obtain the correlation matrices between different areas, one column (or row) of the dot-matrix images of adjacent areas is guaranteed to overlap, and each image is kept to the same numbers of rows and columns as far as possible. In this embodiment, a 4-row by 5-column dot matrix of the calibration board is photographed at each of the six camera positions, and adjacent areas are guaranteed to share one row or one column. For convenience, the 6 areas are labeled 1, 2, 3, 4, 5, 6 from left to right and top to bottom, and the camera photographs in this order, which facilitates the analysis of the subsequent steps.
205. Acquiring the abscissa and ordinate data of the lattice points of each captured image, and determining the differences between the maximum and minimum values of the abscissa and ordinate data;
206. Calculating the actual numbers of rows and columns according to the preset horizontal and vertical distances between two adjacent lattice points and the differences;
207. Determining the value range of the abscissa and ordinate data;
208. Sorting the lattice point coordinates in the abscissa and ordinate data by value, and obtaining the index set of the sorted lattice point coordinates;
209. Generating a blank matrix according to the actual numbers of rows and columns;
210. Placing the sorted index set into the blank matrix according to the value range, to generate a coordinate data matrix;
In this embodiment, the terminal acquires the abscissa and ordinate data of the lattice points of the captured image and determines the differences between the maximum and minimum values; calculates the actual numbers of rows and columns from the preset horizontal and vertical distances between two adjacent lattice points and the differences; determines the value range of the abscissa and ordinate data; sorts the lattice point coordinates by value and obtains the index set of the sorted coordinates; generates a blank matrix according to the actual numbers of rows and columns; and places the sorted index set into the blank matrix according to the value range, generating the coordinate data matrix.
Specifically, 6 captured images are shot in this embodiment and finally stitched into one complete image, so a starting position must be set and the images ordered in sequence. The first captured image is taken as the starting position, and the other areas are ordered with reference to it.
According to the abscissa and ordinate data of the lattice points of the captured images, the row coordinates {x_i1, x_i2, x_i3, ...} and the column coordinates {y_i1, y_i2, y_i3, ...} of the lattice points in the 6 captured images are calculated respectively. To prevent coordinates from being missed, the lattice point coordinates can be fitted according to the 4-row, 5-column distribution shown in FIG. 4.
In this embodiment, the terminal presets the horizontal distance w and the vertical distance h between two adjacent lattice points, and calculates the differences between the maximum and minimum values in {x_i1, x_i2, x_i3, ...} and {y_i1, y_i2, y_i3, ...}; from these differences and the spacings, the actual numbers of rows and columns are calculated (e.g., number of columns ≈ difference of the row coordinates / w + 1), and the value ranges of the abscissa and ordinate data are determined.
Next, {x_i1, x_i2, x_i3, ...} and {y_i1, y_i2, y_i3, ...} are sorted by value and the index set of the sorted coordinates is obtained; the sorted index sets are then placed into a 4×5 matrix according to the value ranges determined above, so that data of the same row and the same column in the X and Y sets are placed together.
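For illustration only, a minimal sketch of building the coordinate data matrix follows; it assumes that rounding each coordinate offset to the nearest multiple of the preset spacings w and h stands in for the patent's sorting-and-value-range placement, and all names are hypothetical:

```python
import numpy as np

def coordinate_data_matrix(xs, ys, w, h):
    # xs, ys: abscissas/ordinates of the detected lattice points.
    # w, h: preset horizontal/vertical distances between adjacent points.
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    cols = int(round((xs.max() - xs.min()) / w)) + 1   # actual column count
    rows = int(round((ys.max() - ys.min()) / h)) + 1   # actual row count
    mat = -np.ones((rows, cols), dtype=int)            # blank matrix
    for idx, (x, y) in enumerate(zip(xs, ys)):
        c = int(round((x - xs.min()) / w))             # value range -> column
        r = int(round((y - ys.min()) / h))             # value range -> row
        mat[r, c] = idx                                # index (subscript) set
    return mat
```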
211. Sorting and associating the abscissa and ordinate data of the same row and the same column in the coordinate data matrix to obtain a new arrangement index;
212. Determining the lattice points missing from a row or a column by calculating the differences between rows and between columns;
213. Taking the mean value of the row or column as the coordinates of the missing point, and completing the abscissa and ordinate data and the coordinate data matrix;
In this embodiment, the terminal sorts and associates the abscissa and ordinate data of the same row and the same column in the coordinate data matrix to obtain a new arrangement index, then determines the lattice points missing from a certain row or column by calculating the differences between rows and between columns, and takes the mean value of that row or column as the coordinates of the missing point, completing the abscissa and ordinate data and the coordinate data matrix.
Specifically, the X and Y data of the same row and the same column are sorted and associated; for example, from the sorting result of X, the corresponding Y values can be obtained, yielding a new arrangement index, i.e., the positional relation of the actual lattice points. The new arrangement index mainly provides a more accurate and more convenient index for the subsequent transformations.
The missing value of a certain row or column is then confirmed by calculating the differences between rows and between columns, and the mean value of that row (or column) is taken as the coordinates of the missing point, yielding complete lattice point coordinate information.
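A minimal sketch of this completion step, assuming missing points are marked as NaN in the row/column coordinate matrices and that the missing abscissa is taken as the column mean and the missing ordinate as the row mean (all names hypothetical):

```python
import numpy as np

def fill_missing(mat_x, mat_y):
    # mat_x, mat_y: rows x cols matrices of lattice-point abscissas/ordinates,
    # with NaN where a point was not detected.
    for r, c in zip(*np.where(np.isnan(mat_x))):
        mat_x[r, c] = np.nanmean(mat_x[:, c])   # column mean -> abscissa
        mat_y[r, c] = np.nanmean(mat_y[r, :])   # row mean -> ordinate
    return mat_x, mat_y
```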
214. Calculating, through perspective transformation, the mapping relation between the abscissa and ordinate data and the sequence numbers of the lattice points in the coordinate data matrix;
215. Calculating new lattice point coordinates according to the mapping relation and the abscissa and ordinate data;
Steps 214 to 215 in this embodiment are similar to steps 104 to 105 in the previous embodiment, and are not repeated here.
216. Taking the two sets of new lattice point coordinates corresponding to the two captured images of adjacent regions;
217. Determining, in each of the two sets, the collinear lattice point coordinates lying in the same row or the same column, fitting a straight line to each of the two groups, and generating two fitted lines;
218. Calculating the inclination angles of the two fitted lines, and generating a rotation matrix from the inclination angles through affine transformation;
219. Making the two fitted lines parallel through the rotation matrix of the affine transformation;
In this embodiment, the camera's movement introduces both offset and rotation errors, so adjacent images cannot be stitched together directly through the above perspective transformation alone, and a second processing step is required.
The terminal takes the two sets of new lattice point coordinates corresponding to the two captured images of adjacent regions; determines, in each set, the collinear lattice point coordinates lying in one row or the same column; and fits a straight line to each of the two groups, generating two fitted lines. It then calculates the inclination angles of the two fitted lines, generates a rotation matrix from the inclination angles through affine transformation, and makes the two fitted lines parallel through the rotation matrix.
Specifically, the terminal acquires the new lattice point coordinates in the matrices corresponding to the two adjacent captured images. Because the coordinates are already well ordered, a single row or a single column is taken (according to the mutual position of the adjacent captured images). The dots on the calibration board have good collinearity, but the overlapping part between adjacent images is only one row or one column, so the mutual conversion relation cannot be obtained directly through perspective transformation. The shared row (or column) is therefore extracted from the two coordinate sets; its points are taken to lie on one straight line and are fitted to a line. The inclination angles $\theta_1$ and $\theta_2$ of the two lines are calculated, and the two lines are made parallel through the rotation matrix R of the affine transformation (formula 2), where $\theta = \theta_1 - \theta_2$ is the angle between them:

$$R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \tag{2}$$
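A minimal sketch of the rotation step, assuming the shared row is represented as an (N, 2) point array and the inclination angle is taken from a least-squares line fit (names hypothetical):

```python
import numpy as np

def rotate_parallel(line_a, line_b):
    # line_a, line_b: (N, 2) lattice points of the shared row in two images.
    slope_a = np.polyfit(line_a[:, 0], line_a[:, 1], 1)[0]  # fit y = kx + b
    slope_b = np.polyfit(line_b[:, 0], line_b[:, 1], 1)[0]
    theta = np.arctan(slope_a) - np.arctan(slope_b)         # angle difference
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])         # formula 2
    return line_b @ R.T, R    # line_b rotated parallel to line_a
```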
220. Obtaining the difference between the coordinates of the last lattice point on the first fitted line and the coordinates of the first point on the second fitted line, thereby obtaining the overall translation offsets of the second fitted line, and generating a translation matrix;
221. Translating the collinear lattice point coordinates with the translation matrix, and applying the same fitting, inclination-angle calculation, translation-parameter calculation and translation processing to the new lattice point coordinates of every captured image;
In this embodiment, the terminal generates the new coordinates of the next lattice point set through the conversion matrix (in this experiment, the order is given by the area labels). It calculates the difference between the coordinates of the last point of the previous lattice point set (the first captured image) and the coordinates of the first point of the next new coordinate set (the second captured image, i.e., the captured image of the adjacent area in the same row as the first), obtaining the translation amounts Tx and Ty of the next point set; it then generates the translation matrix M (formula 3) and translates the rotated coordinate set to its new position, so that it becomes collinear with the previous coordinate set:

$$M = \begin{bmatrix} 1 & 0 & T_x \\ 0 & 1 & T_y \\ 0 & 0 & 1 \end{bmatrix} \tag{3}$$
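A matching sketch of the translation step of formula 3, assuming the rotated shared row of the next image should meet the end of the previous image's row (names hypothetical):

```python
import numpy as np

def translate_collinear(prev_line, next_line):
    # prev_line, next_line: (N, 2) points of the shared row; next_line has
    # already been rotated parallel to prev_line.
    tx, ty = prev_line[-1] - next_line[0]    # translation amounts Tx, Ty
    M = np.array([[1.0, 0.0, tx],
                  [0.0, 1.0, ty],
                  [0.0, 0.0, 1.0]])          # formula 3
    homog = np.column_stack([next_line, np.ones(len(next_line))])
    return (homog @ M.T)[:, :2], M           # next_line made collinear
```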
In this embodiment, after the first and second captured images undergo the rotation and translation transformations, the same row (or column) of the two adjacent images coincides. The calculation in the above steps is repeated until all rows (or columns) in the two captured images coincide on one straight line; the original coordinate sets {x_i1, x_i2, x_i3, ...}, {y_i1, y_i2, y_i3, ...} become, after the rotation and translation transformation, the new coordinate sets {x_i1', x_i2', x_i3', ...}, {y_i1', y_i2', y_i3', ...}. All captured images are transformed in this way: once the captured image of one area is settled, its adjacent captured image is settled next, and its coordinates are transformed.
222. Taking the first captured image as the reference frame and calculating a first conversion matrix of a second captured image through perspective transformation, wherein the first and second captured images are captured images of adjacent regions;
223. Calculating a second conversion matrix through perspective transformation from the new lattice point coordinate set of the second captured image and the abscissa and ordinate data of its lattice points;
224. Transforming the second captured image to its stitching position according to the first and second conversion matrices;
225. Obtaining a third and a fourth conversion matrix of a third captured image through perspective transformation and affine transformation, wherein the third captured image is the captured image of another region adjacent to the second captured image;
226. Transforming the third captured image to its stitching position through the first, third and fourth conversion matrices;
227. Stitching and cropping the captured images at the stitching positions, to generate the target stitched image.
In this embodiment, with the 6 areas, the rotation and translation matrices and the transformed coordinate sets can be obtained from the pairings 1-2, 1-3, 2-4, 4-6 and 3-5, after which image stitching is performed.
Specifically, according to the calculated new coordinate sets, each captured image and the coordinates of its corresponding coordinate matrix are converted along two conversion routes, route 1: 1-2-4-6 and route 2: 1-3-5, with the first captured image as the reference, to obtain the final stitched image. The method is as follows:
a. First, for the new coordinate set and the old coordinate set corresponding to each image, a transformation matrix Ti is calculated through perspective transformation (formula 1); a transformation matrix T(i,1) is then calculated through perspective transformation (formula 1) against the coordinates of image 1 (the coordinates of area 1 of the actual calibration plate).
b. Taking the second shot image as an example, with the first shot image as the reference object, the actual coordinates of image 2 yield the final coordinates through the T2 and T(2,1) transformations; at the same time, image 2 itself is transformed to the final splicing position through the two transformation matrices T2 and T(2,1).
c. The actual coordinates of image 4 are transformed by T4, T2 and T(4,1) to obtain the theoretical coordinates of image 4, and image 4 undergoes the same transformations.
d. The actual coordinates of image 6 are transformed by T6, T4, T2 and T(6,1) to obtain the splice location.
e. For image 3, the splicing conversion relation is obtained through T3 and T(3,1).
f. For image 5, the coordinates and conversion relation of the splicing position are calculated through T5, T3 and T(5,1).
Each original image is then converted into the new spliced image according to its set of conversion matrices, yielding the final target spliced image.
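One possible reading of routes a-f as matrix composition is sketched below; T[i] and T1[i] (written T(i,1) above) are assumed to be 3x3 homogeneous matrices keyed by image number, and the dictionaries and function name are illustrative:

```python
import numpy as np

ROUTES = [[1, 2, 4, 6], [1, 3, 5]]  # route 1: 1-2-4-6, route 2: 1-3-5

def stitch_chain(image_id, T, T1):
    """Compose the matrices that move image_id to its splicing position."""
    for route in ROUTES:
        if image_id in route:
            # e.g. image 6 on route 1 passes through T6, T4, T2, then T(6,1).
            steps = route[1:route.index(image_id) + 1]  # [2, 4, 6] for image 6
            H = np.eye(3)
            for i in steps:
                H = H @ T[i]      # composed right-to-left: T6 applied first, T2 last
            return T1[image_id] @ H   # finally T(i,1) into the image-1 frame
    raise ValueError(f"image {image_id} is not on any conversion route")
```

Composing right-to-left keeps the application order of the text (T6, then T4, then T2, then T(6,1)) while producing a single matrix that can be passed to a perspective warp.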
In this embodiment, a calibration image is first input to the VR display screen, with dot matrix points disposed on the calibration image. The shooting parameters are set to remain unchanged for every shot taken by the camera. The shooting environment is set to a darkroom state. The camera is moved to a preset point, the screen is illuminated with strip light, and photographs are taken, generating at least 2 shot images.
The abscissa and ordinate data of the dot matrix points of the shot image are acquired, and the difference between the maximum value and the minimum value is determined from the abscissa and ordinate data. The actual numbers of rows and columns are calculated from the preset horizontal and vertical distances between two adjacent lattice points and the difference. The value range of the abscissa and ordinate data is determined. The dot matrix point coordinates in the abscissa and ordinate data are sorted by size, and the subscript set of the dot matrix point coordinates is obtained after sorting. A blank matrix is generated based on the actual numbers of rows and columns. The sorted subscript set is placed into the blank matrix according to the value range, generating the coordinate data matrix.
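A minimal sketch of this matrix-building step, assuming numpy; dx and dy stand for the preset horizontal and vertical distances between adjacent lattice points and, like the function name, are illustrative:

```python
import numpy as np

def coordinate_data_matrix(xs, ys, dx, dy):
    """xs, ys: 1-D numpy arrays of detected lattice-point coordinates."""
    # Actual column/row counts from the max-min difference and the spacing.
    n_cols = int(round((xs.max() - xs.min()) / dx)) + 1
    n_rows = int(round((ys.max() - ys.min()) / dy)) + 1
    grid = np.full((n_rows, n_cols), -1, dtype=int)   # blank matrix; -1 marks an empty cell
    for idx in np.argsort(xs):                        # size-sorted subscript set
        r = int(round((ys[idx] - ys.min()) / dy))     # row bucket from the ordinate value range
        c = int(round((xs[idx] - xs.min()) / dx))     # column bucket from the abscissa value range
        grid[r, c] = idx                              # place the subscript into the blank matrix
    return grid
```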
The abscissa and ordinate data of the same row and the same column in the coordinate data matrix are sorted and correlated to obtain a new arrangement index. The lattice points missing from a row or column are determined by calculating the differences between rows and between columns. The average value of the row or column is taken as the coordinate of each missing point, and the abscissa and ordinate data and the coordinate data matrix are completed accordingly.
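The completion step might look like the following sketch, assuming the grid built above (with -1 marking empty cells) and at least one detected point in each row and column; all names are illustrative:

```python
import numpy as np

def fill_missing(grid, xs, ys):
    """grid: coordinate data matrix with -1 in empty cells; xs, ys: point coordinate lists."""
    xs, ys = list(xs), list(ys)
    for r in range(grid.shape[0]):
        for c in range(grid.shape[1]):
            if grid[r, c] == -1:                                 # a missing lattice point
                row_idx = [i for i in grid[r] if i != -1]        # detected points in the same row
                col_idx = [i for i in grid[:, c] if i != -1]     # detected points in the same column
                xs.append(float(np.mean([xs[i] for i in col_idx])))  # column mean -> abscissa
                ys.append(float(np.mean([ys[i] for i in row_idx])))  # row mean -> ordinate
                grid[r, c] = len(xs) - 1                         # register the completed point
    return grid, xs, ys
```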
The mapping relation between the abscissa and ordinate data and the serial numbers of the lattice points in the coordinate data matrix is calculated through perspective transformation, and the new lattice point coordinates are calculated from the mapping relation and the abscissa and ordinate data.
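A sketch of this mapping step under the same assumptions: a homography is fitted between the serial-number grid (row/column indices scaled by the preset spacing) and the measured coordinates, and the grid is then projected through it to obtain the new lattice point coordinates:

```python
import numpy as np
import cv2

def new_lattice_coordinates(grid, xs, ys, dx, dy):
    rows, cols = np.nonzero(grid != -1)
    idx = grid[rows, cols]
    measured = np.float32(np.column_stack([np.asarray(xs)[idx], np.asarray(ys)[idx]]))
    ideal = np.float32(np.column_stack([cols * dx, rows * dy]))  # from the serial numbers
    H, _ = cv2.findHomography(ideal, measured)                   # the mapping relation
    new_pts = cv2.perspectiveTransform(ideal.reshape(-1, 1, 2), H)
    return new_pts.reshape(-1, 2)
```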
Two groups of new lattice point coordinates corresponding to two shot images of adjacent areas are taken. The linear lattice point coordinates located in the same row or the same column are determined in each of the two groups, and straight-line fitting is performed on the two groups of linear lattice point coordinates respectively, generating two fitting straight lines. The inclination angles of the two fitting straight lines are calculated, and a rotation matrix is generated according to the inclination angles through affine transformation. The two fitting straight lines are made parallel through the affine-transformation rotation matrix.
The difference between the coordinates of the last dot matrix point of the first fitting straight line and the coordinates of the first point of the second fitting straight line is obtained, from which all translation differences of the second fitting straight line are obtained and a translation matrix is generated. The linear lattice point coordinates are translated using the translation matrix, and the fitting, inclination-angle calculation, translation-parameter calculation and translation processing are performed in this manner on the new lattice point coordinates of each shot image.
With the first shot image as the reference system, a first conversion matrix of a second shot image is calculated through perspective transformation, the first shot image and the second shot image being two shot images of adjacent areas. A second conversion matrix is calculated through perspective transformation according to the new lattice point coordinate set of the second shot image and the abscissa and ordinate data of the lattice points. The second shot image is transformed to the splicing position according to the first conversion matrix and the second conversion matrix. A third conversion matrix and a fourth conversion matrix of a third shot image are obtained through perspective transformation and affine transformation, the third shot image being a shot image of another area adjacent to the second shot image. The third shot image is transformed to the splicing position through the first conversion matrix, the third conversion matrix and the fourth conversion matrix. The shot images are spliced and cut at the splicing positions to generate the target spliced image.
In the method, shot images of different areas are obtained, the lattice points in the shot images are sorted by size and then placed into blank matrices of corresponding sizes, and the mapping relation between the abscissa and ordinate data and the serial numbers of the lattice points in the coordinate data matrices is calculated through perspective transformation. The new coordinate positions are then calculated according to the mapping relation. Next, the lattice points in the same row or the same column in two adjacent matrices are processed in parallel through affine transformation so that they lie on the same line; the lattice points on the shot images are adjusted according to the adjusted lattice point coordinates; and finally image stitching is carried out by means of the overlapping lattice points of adjacent shot images to generate the target stitched image. Through perspective transformation and affine transformation, image deviations caused by translation and rotation, as well as position deviations between images that are far apart, can be eliminated, so that multiple images can be accurately stitched together, improving the defect detection efficiency of the VR display screen.
Secondly, this embodiment provides a complete algorithmic calculation flow and an optical imaging method for a multi-image stitching system for the VR_glass display screen. By using the conversion relation between the row/column numbers and the actual coordinates to move the image under test to the ideal area, errors caused by theoretical coordinate calculation are reduced. By computing the collinearity of adjacent dot matrices in the same row and the same column, the positional relation of two images is obtained, which maximizes the accuracy of image and coordinate conversion. The conversion relation between each image under test and the reference image is obtained by stepwise deduction through adjacent images, improving conversion accuracy. The algorithm is matched with the imaging process, reducing running time and improving the practicability of the algorithm.
Referring to fig. 5, the present application provides an embodiment of an image stitching device based on invariance of linear features, including:
An input unit 301, configured to input a calibration image to the VR display screen, where dot matrix points are disposed on the calibration image;
A first generating unit 302, configured to photograph the VR display screen by area with a camera, generating at least 2 shot images, where two shot images of adjacent areas have lattice points in the same row or the same column;
A second generating unit 303, configured to acquire the abscissa and ordinate data of the dot matrix points of the shot image, sort the abscissa and ordinate data by size, and put the sorted subscript set into a blank matrix to generate a coordinate data matrix;
A first calculating unit 304, configured to calculate, through perspective transformation, the mapping relation between the abscissa and ordinate data and the serial numbers of the lattice points in the coordinate data matrix;
A second calculating unit 305, configured to calculate the new lattice point coordinates according to the mapping relation and the abscissa and ordinate data;
A parallel processing unit 306, configured to use affine transformation to perform parallel processing on the coordinates of the new lattice points in the same row or the same column in adjacent coordinate data matrices;
A third generating unit 307, configured to stitch the images according to the new lattice point coordinate set and the abscissa and ordinate data of the dot matrix points of the shot images, generating the target stitched image.
Referring to fig. 6, the present application provides an embodiment of an image stitching device based on invariance of linear features, including:
An input unit 401, configured to input a calibration image to the VR display screen, where dot matrix points are disposed on the calibration image;
A first generating unit 402, configured to photograph the VR display screen by area with a camera, generating at least 2 shot images, where two shot images of adjacent areas have lattice points in the same row or the same column;
Optionally, the first generating unit 402 includes:
Setting the shooting parameters to remain unchanged for every shot taken by the camera;
setting the shooting environment to a darkroom state;
and moving the camera to a preset point, illuminating with strip light, photographing and imaging, and generating at least 2 photographed images.
A second generating unit 403, configured to acquire the abscissa and ordinate data of the dot matrix points of the shot image, sort the abscissa and ordinate data by size, and put the sorted subscript set into a blank matrix to generate a coordinate data matrix;
Optionally, the second generating unit 403 includes:
Acquiring the abscissa data and the ordinate data of dot matrix points of a shot image, and determining the difference value between the maximum value and the minimum value according to the abscissa data and the ordinate data;
Calculating the actual row number and the actual column number according to the horizontal distance, the vertical distance and the difference value between the preset two adjacent lattice points;
determining the value range of the abscissa and ordinate data;
Performing size sorting on the dot matrix point coordinates in the abscissa and ordinate data, and obtaining the subscript set of the dot matrix point coordinates after sorting;
Generating a blank matrix according to the actual row and column number;
and placing the ordered index set into a blank matrix according to the value range to generate a coordinate data matrix.
A first obtaining unit 404, configured to sort and correlate the abscissa and ordinate data of the same row and column in the coordinate data matrix, and obtain a new arrangement index;
A determining unit 405 for determining lattice points missing in a row or a column by calculating differences between the row and the column;
A complementing unit 406, configured to complement the abscissa and ordinate data and the coordinate data matrix by taking the average value of the row or column as the coordinates of the missing points;
A first calculating unit 407, configured to calculate, through perspective transformation, the mapping relation between the abscissa and ordinate data and the serial numbers of the lattice points in the coordinate data matrix;
A second calculating unit 408, configured to calculate the new lattice point coordinates according to the mapping relation and the abscissa and ordinate data;
A parallel processing unit 409, configured to use affine transformation to perform parallel processing on the coordinates of the new lattice points in the same row or the same column in adjacent coordinate data matrices;
Optionally, the parallel processing unit 409 includes:
Two groups of new lattice point coordinates corresponding to two shooting images of the adjacent area are taken;
respectively determining the linear lattice point coordinates located in the same row or the same column in the two groups of new lattice point coordinates, and performing straight-line fitting on the two groups of linear lattice point coordinates respectively to generate two fitting straight lines;
Calculating the inclination angles of the two fitting straight lines, and generating a rotation matrix according to the inclination angles through affine transformation;
and carrying out parallel processing on the two fitting straight lines through a rotation matrix of affine transformation.
A second obtaining unit 410, configured to obtain a difference between the coordinates of the last dot matrix point of the first fitting line and the coordinates of the first dot in the second fitting line, so as to obtain all translation differences of the second fitting line, and generate a translation matrix;
A translation unit 411, configured to translate the linear lattice point coordinates using the translation matrix, and, in this manner, to perform the fitting, inclination-angle calculation, translation-parameter calculation and translation processing on the new lattice point coordinates of each shot image;
A third generating unit 412, configured to stitch the images according to the new lattice point coordinate set and the abscissa and ordinate data of the dot matrix points of the shot images, generating the target stitched image.
Optionally, the third generating unit 412 includes:
Calculating, with the first shot image as the reference system, a first conversion matrix of a second shot image through perspective transformation, wherein the first shot image and the second shot image are two shot images of adjacent areas;
Calculating a second conversion matrix through perspective transformation according to the new lattice point coordinate set of the second shot image and the abscissa and ordinate data of the lattice points;
Transforming the second shot image to the splicing position according to the first conversion matrix and the second conversion matrix;
Obtaining a third conversion matrix and a fourth conversion matrix of a third shot image through perspective transformation and affine transformation, wherein the third shot image is a shot image of another adjacent area of the second shot image;
Transforming the third shot image to a splicing position through the first conversion matrix, the third conversion matrix and the fourth conversion matrix;
And splicing and cutting the photographed images at the splicing positions to generate the target spliced image.
Referring to fig. 7, the present application provides an electronic device, including:
A processor 501, a memory 502, an input-output unit 503, and a bus 504.
The processor 501 is connected to a memory 502, an input/output unit 503, and a bus 504.
The memory 502 holds a program, and the processor 501 invokes the program to execute the image stitching method as in fig. 1, 2, 3, and 4.
The present application provides a computer-readable storage medium having a program stored thereon, which, when executed on a computer, performs the image stitching method as in fig. 1, 2, 3 and 4.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.

Claims (10)

1. An image stitching method based on linear feature invariance is characterized by comprising the following steps:
inputting a calibration image to a VR display screen, wherein lattice points are arranged on the calibration image;
Using a camera to carry out regional shooting on the VR display screen to generate at least 2 shooting images, wherein two shooting images in adjacent regions have lattice points in the same row or the same column;
acquiring the abscissa and ordinate data of the dot matrix points of the shot image, sorting the abscissa and ordinate data by size, putting the sorted subscript set into a blank matrix, and generating a coordinate data matrix;
calculating the mapping relation between the abscissa and ordinate data and the serial numbers of the lattice points in the coordinate data matrix through perspective transformation;
calculating a new lattice point coordinate according to the mapping relation and the abscissa and ordinate data;
Carrying out parallel processing on the coordinates of the new lattice points which are positioned in the same row or the same column in the adjacent coordinate data matrixes by using affine transformation;
And splicing the images according to the new dot matrix point coordinate set and the abscissa and ordinate data of the dot matrix points of the shot image to generate a target spliced image.
2. The image stitching method according to claim 1, wherein acquiring the abscissa and ordinate data of the dot matrix points of the shot image, sorting the abscissa and ordinate data by size, putting the sorted subscript set into a blank matrix, and generating a coordinate data matrix includes:
acquiring the abscissa data and the ordinate data of the dot matrix points of the shot image, and determining the difference value between the maximum value and the minimum value according to the abscissa data and the ordinate data;
calculating the actual row number and the actual column number according to the horizontal distance and the vertical distance between the preset two adjacent lattice points and the difference value;
Determining the value range of the abscissa and ordinate data;
Performing size sorting on the dot matrix point coordinates in the abscissa and ordinate data, and acquiring a subscript set of the dot matrix point coordinates after sorting is completed;
Generating a blank matrix according to the actual row and column number;
and placing the ordered subscript set into a blank matrix according to the value range to generate a coordinate data matrix.
3. The image stitching method according to claim 2, wherein, after the abscissa and ordinate data of the dot matrix points of the shot image are acquired, the abscissa and ordinate data are sorted by size, and the sorted subscript set is placed into the blank matrix to generate the coordinate data matrix, and before the mapping relation between the abscissa and ordinate data and the serial numbers of the lattice points in the coordinate data matrix is calculated through perspective transformation, the image stitching method further comprises:
Sorting and correlating the abscissa and ordinate data of the same row and column in the coordinate data matrix to obtain a new arrangement index;
determining the missing lattice points in a row or column by calculating the differences between rows and between columns;
taking the average value of the rows or columns as the coordinates of the missing points, and complementing the abscissa and ordinate data and the coordinate data matrix.
4. The image stitching method according to claim 1, wherein the parallel processing of the new lattice point coordinates in the same row or column in the adjacent coordinate data matrix using affine transformation includes:
Two groups of new lattice point coordinates corresponding to two shooting images of the adjacent area are taken;
respectively determining the linear lattice point coordinates located in the same row or the same column in the two groups of new lattice point coordinates, and performing straight-line fitting on the two groups of linear lattice point coordinates respectively to generate two fitting straight lines;
calculating the inclination angles of the two fitting straight lines, and generating a rotation matrix according to the inclination angles through affine transformation;
and carrying out parallel processing on the two fitting straight lines through a rotation matrix of affine transformation.
5. The image stitching method according to claim 4, wherein, after the new lattice point coordinates in the same row or the same column in the adjacent coordinate data matrices are processed in parallel using the affine transformation, and before the images are stitched according to the new lattice point coordinate set and the abscissa and ordinate data of the dot matrix points of the shot image to generate the target stitched image, the image stitching method further comprises:
Obtaining the difference value between the coordinates of the last dot matrix point of the first fitting straight line and the coordinates of the first dot in the second fitting straight line, so as to obtain all translation difference values of the second fitting straight line, and generating a translation matrix;
and translating the linear lattice point coordinates by using the translation matrix, and, in this manner, performing the fitting, inclination-angle calculation, translation-parameter calculation and translation processing on the new lattice point coordinates of each photographed image.
6. The image stitching method according to claim 1, wherein stitching the image according to the new lattice point coordinate set and the abscissa and ordinate data of the dot matrix points of the photographed image to generate a target stitched image comprises:
calculating, with the first shot image as the reference system, a first conversion matrix of a second shot image through perspective transformation, wherein the first shot image and the second shot image are two shot images of adjacent areas;
Calculating a second conversion matrix through perspective transformation according to the new lattice point coordinate set of the second shot image and the abscissa and ordinate data of the lattice points;
transforming the second shot image to the splicing position according to the first conversion matrix and the second conversion matrix;
Obtaining a third conversion matrix and a fourth conversion matrix of a third shot image through perspective transformation and affine transformation, wherein the third shot image is a shot image of another adjacent area of the second shot image;
Transforming the third photographed image to the splicing position through the first conversion matrix, the third conversion matrix and the fourth conversion matrix;
and splicing and cutting the photographed images at the splicing positions to generate the target spliced image.
7. The image stitching method according to any one of claims 1 to 6, wherein using a camera to photograph the VR display screen by area to generate at least 2 shot images includes:
Setting the shooting parameters to remain unchanged for every shot taken by the camera;
setting the shooting environment to a darkroom state;
and moving the camera to a preset point, illuminating with strip light, photographing and imaging, and generating at least 2 photographed images.
8. An image stitching device based on linear feature invariance, comprising:
the input unit is used for inputting a calibration image to the VR display screen, and the calibration image is provided with lattice points;
The first generation unit is used for carrying out regional shooting on the VR display screen by using a camera to generate at least 2 shooting images, and two shooting images in adjacent regions have lattice points in the same row or the same column;
The second generation unit is used for acquiring the abscissa and ordinate data of the dot matrix points of the shot image, sorting the abscissa and ordinate data by size, putting the sorted subscript set into a blank matrix, and generating a coordinate data matrix;
The first calculation unit is used for calculating the mapping relation between the abscissa and ordinate data and the serial numbers of the lattice points in the coordinate data matrix through perspective transformation;
the second calculation unit is used for calculating a new lattice point coordinate according to the mapping relation and the abscissa and ordinate data;
A parallel processing unit, configured to use affine transformation to perform parallel processing on the coordinates of the new lattice points in the same row or column in the adjacent coordinate data matrices;
and the third generation unit is used for splicing the images according to the new dot matrix point coordinate set and the abscissa and ordinate data of the dot matrix points of the shot images to generate a target spliced image.
9. An electronic device, comprising:
A processor, a memory, an input-output unit, and a bus;
the processor is connected with the memory, the input/output unit and the bus;
The memory holds a program that the processor invokes to execute the image stitching method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a program which, when executed on a computer, performs the image stitching method according to any one of claims 1 to 7.
CN202410418203.6A 2024-04-09 2024-04-09 Image stitching method and related device based on linear feature invariance Active CN118014832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410418203.6A CN118014832B (en) 2024-04-09 2024-04-09 Image stitching method and related device based on linear feature invariance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410418203.6A CN118014832B (en) 2024-04-09 2024-04-09 Image stitching method and related device based on linear feature invariance

Publications (2)

Publication Number Publication Date
CN118014832A true CN118014832A (en) 2024-05-10
CN118014832B CN118014832B (en) 2024-07-26

Family

ID=90958031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410418203.6A Active CN118014832B (en) 2024-04-09 2024-04-09 Image stitching method and related device based on linear feature invariance

Country Status (1)

Country Link
CN (1) CN118014832B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6044181A (en) * 1997-08-01 2000-03-28 Microsoft Corporation Focal length estimation method and apparatus for construction of panoramic mosaic images
CN105115979A (en) * 2015-09-09 2015-12-02 苏州威盛视信息科技有限公司 Image mosaic technology-based PCB working sheet AOI (Automatic Optic Inspection) method
CN110782394A (en) * 2019-10-21 2020-02-11 中国人民解放军63861部队 Panoramic video rapid splicing method and system
CN112862678A (en) * 2021-01-26 2021-05-28 中国铁道科学研究院集团有限公司 Unmanned aerial vehicle image splicing method and device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Pingmei et al., "Image block stitching method based on affine transformation", Information Technology and Informatization, no. 01, 10 February 2020 (2020-02-10) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118429396A (en) * 2024-07-03 2024-08-02 深圳精智达技术股份有限公司 Image geometric registration method, system, device and storage medium

Also Published As

Publication number Publication date
CN118014832B (en) 2024-07-26

Similar Documents

Publication Publication Date Title
US11503275B2 (en) Camera calibration system, target, and process
CN111179358B (en) Calibration method, device, equipment and storage medium
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN118014832B (en) Image stitching method and related device based on linear feature invariance
JP5580164B2 (en) Optical information processing apparatus, optical information processing method, optical information processing system, and optical information processing program
CN110189322B (en) Flatness detection method, device, equipment, storage medium and system
CN111872544B (en) Calibration method and device for laser light-emitting indication point and galvanometer coaxial vision system
WO2022143283A1 (en) Camera calibration method and apparatus, and computer device and storage medium
US12112490B2 (en) Method and apparatus for stitching dual-camera images and electronic device
WO2016155110A1 (en) Method and system for correcting image perspective distortion
CN105989588B (en) Special-shaped material cutting image correction method and system
CN111445537B (en) Calibration method and system of camera
CN114283079A (en) Method and equipment for shooting correction based on graphic card
CN107067441B (en) Camera calibration method and device
CN115965697A (en) Projector calibration method, calibration system and device based on Samm's law
CN114463437A (en) Camera calibration method, device, equipment and computer readable medium
CN101729739A (en) Method for rectifying deviation of image
CN113963065A (en) Lens internal reference calibration method and device based on external reference known and electronic equipment
CN117057996A (en) Photovoltaic panel image processing method, device, equipment and medium
CN114466143B (en) Shooting angle calibration method and device, terminal equipment and storage medium
CN112995641B (en) 3D module imaging device and method and electronic equipment
CN116452668A (en) Correction method for camera installation angle
JP2002135807A (en) Method and device for calibration for three-dimensional entry
CN116433769B (en) Space calibration method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant