CN117455818A - Correction method and device for intelligent glasses screen, electronic equipment and storage medium

Publication number: CN117455818A (application CN202311691750.3A; granted as CN117455818B)
Inventors: 罗文果, 杨硕, 张滨
Assignee: Shenzhen Seichitech Technology Co ltd
Legal status: Active (Granted)

Abstract

The application discloses a correction method and device for an intelligent glasses screen, an electronic device and a storage medium, which are used to reduce the complexity of the correction process for the pixels of a traditional intelligent glasses screen. The correction method comprises the following steps: manufacturing a Grid positioning image according to the resolution of the intelligent glasses screen to be tested; inputting the Grid positioning image into the intelligent glasses screen to be tested and obtaining a dot matrix shooting image; performing shading correction processing on the dot matrix shooting image to generate a shading correction image; acquiring actual Mark lattice position information; calculating a scaling matrix according to the actual Mark lattice information and the expected Mark lattice information; scaling the distorted pixel point coordinates of the shading correction image with the scaling matrix to obtain expected pixel point coordinates; calculating a distortion parameter set according to the expected pixel point coordinates and the distorted pixel point coordinates; calculating the distortion amount according to the distortion parameter set; and performing distortion correction on the pixel points of the shading correction image according to the distortion amount.

Description

Correction method and device for intelligent glasses screen, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the field of intelligent glasses screen correction, in particular to a correction method and device for an intelligent glasses screen, electronic equipment and a storage medium.
Background
With continued in-depth research on holographic projection and the development of artificial intelligence and chips, the field of AR intelligent glasses has developed rapidly, and more and more intelligent glasses are accepted by users. The screen body of intelligent glasses must match the region captured by the human eye through large-area bending, which means the screen body not only needs an outstanding projection effect in its planar part, but also a more lifelike projection effect in the adapted curved surface part.
Because the degree of curvature of the glasses is far greater than that of a curved mobile phone screen, the edge imaging of the glasses produces large distortion during detection, so corresponding technology is required to overcome this problem. The large curved surface causes great distortion in shooting, which strongly affects the extraction of specific pixel points and the subsequent demura process. At present, in the production and manufacturing stage, most panel factories greatly weaken the detection specifications and requirements for the curved surface part for technical reasons: the demura process extends the repair data of the planar part through model simulation or interpolation, and no real and complete data processing is performed on the curved surface part, so that the display quality and uniformity of some curved screens currently on the market differ considerably from those of the planar part.
At the present stage, this situation is mainly handled by using prisms to assist in imaging the pixels of the curved surface part, and then stitching the images of the planar part (direct shooting) and the curved surface part (prism reflection shooting) of the curved screen.
The present method aims at distortion correction of curved screens with large distortion, and its processing of the curved surface part differs from the conventional approach. In the past, the complete screen image was mainly formed by imaging through prisms and then stitching the curved surface part and the planar part. The curved surface is generally located at the edges and four corners of the screen, so multiple prisms are required for imaging, which complicates debugging; in addition, stitching multiple images increases the complexity of the algorithm, and the extra hardware increases the overall cost. The intelligent glasses screen has a larger degree of bending, and the overall part that needs to be bent (the curved area added to adapt to the region captured by the human eye) is larger than on a curved mobile phone screen; in the prism reflection method, since the planar and curved surfaces are not imaged in one image, multiple images need to be stitched, which greatly increases the complexity of the distortion correction algorithm.
Disclosure of Invention
The application discloses a correction method and device for an intelligent glasses screen, an electronic device and a storage medium, which are used to reduce the complexity of the correction process for the pixels of a traditional intelligent glasses screen.
The first aspect of the application provides a correction method for an intelligent glasses screen, which comprises the following steps:
manufacturing a Grid positioning image according to the resolution of the intelligent glasses screen to be detected, wherein Mark points which are uniformly arranged are arranged on the Grid positioning image;
inputting the Grid positioning image into an intelligent glasses screen to be tested, and shooting the intelligent glasses screen to be tested to obtain a dot matrix shooting image;
performing shading correction processing on the dot matrix shooting image to generate a shading correction image;
extracting Mark points of the shading correction image, and obtaining actual Mark lattice position information;
calculating a scaling matrix according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image;
scaling the distorted pixel point coordinates of the shading correction image by using a scaling matrix to obtain expected pixel point coordinates;
calculating a distortion parameter set according to the expected pixel point coordinates and the distortion pixel point coordinates;
calculating the distortion amount according to the distortion parameter set;
and carrying out distortion correction on the pixel points on the shading correction image according to the distortion quantity.
Optionally, performing shading correction processing on the dot matrix shooting image to generate a shading correction image includes:
generating a row gray value moment and a column gray value moment according to gray data of the dot matrix shooting image;
calculating the row gradient and the column gradient of the dot matrix shooting image according to a preset scale moment set, a row gray value moment and a column gray value moment;
and recalculating the gray value of each pixel according to the row gradient and the column gradient to generate a shading correction image.
Optionally, after calculating the scaling matrix according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image, and before scaling the distorted pixel point coordinates of the shading correction image by using the scaling matrix to obtain the expected pixel point coordinates, the correction method further includes:
determining an intelligent glasses screen area to be detected on the shading correction image according to the scaling matrix, the size information of the Grid positioning image and the expected Mark lattice information;
and removing the background area except the area of the intelligent glasses screen to be detected from the shading correction image.
Optionally, calculating the scaling matrix according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image includes:
determining column direction coordinates of a first column and a last column of Mark points and row direction coordinates of a first row and a last row of Mark points according to actual Mark lattice information, and generating actual Mark area width and height;
Determining column direction coordinates of a first column and a last column of Mark points and row direction coordinates of a first row and a last row of Mark points according to expected Mark lattice information, and generating expected Mark area width and height;
determining the proportion of the column direction and the proportion of the row direction according to the actual Mark area width and height and the expected Mark area width and height;
a scaling matrix is generated from the column-direction scale and the row-direction scale and the two-dimensional spatial transformation matrix.
Optionally, extracting Mark points of the shading correction image, and obtaining actual Mark lattice position information includes:
and (3) carrying out threshold segmentation on the shading correction image, then disconnecting the selected region from the connected region, and extracting Mark point position information through screening the shape and the pixel area to obtain the actual Mark dot matrix position information.
Optionally, performing distortion correction on the pixel point on the shading correction image according to the distortion amount includes:
partitioning according to the number of Mark points of the shading correction image;
creating a blank AA area;
creating a conversion relation matrix according to the coordinates of the AA area and the coordinates of the shading correction image, and creating an offset relation matrix according to the distortion amount;
generating a distortion correction matrix according to the conversion relation matrix and the offset relation matrix;
and carrying out geometric transformation correction distortion on the AA area according to the distortion correction matrix, and then carrying out gray filling according to the shading correction image.
A second aspect of the present application provides a correction device for an intelligent glasses screen, including:
the manufacturing unit is used for manufacturing a Grid positioning image according to the resolution ratio of the intelligent glasses screen to be tested, and Mark points which are uniformly arranged are arranged on the Grid positioning image;
the shooting unit is used for inputting the Grid positioning image into the intelligent glasses screen to be detected and shooting the intelligent glasses screen to be detected to obtain a dot matrix shooting image;
a generation unit for performing shading correction processing on the dot matrix photographed image to generate a shading corrected image;
the first acquisition unit is used for extracting Mark points of the shading correction image and acquiring actual Mark lattice position information;
the first calculation unit is used for calculating a scaling matrix according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image;
the second acquisition unit is used for scaling the distorted pixel point coordinates of the shading correction image by using the scaling matrix to acquire expected pixel point coordinates;
the second calculation unit is used for calculating a distortion parameter set according to the expected pixel point coordinates and the distortion pixel point coordinates;
a third calculation unit for calculating an amount of distortion according to the set of distortion parameters;
and the correction unit is used for carrying out distortion correction on the pixel points on the shading correction image according to the distortion quantity.
Optionally, the generating unit includes:
generating a row gray value moment and a column gray value moment according to gray data of the dot matrix shooting image;
calculating the row gradient and the column gradient of the dot matrix shooting image according to a preset scale moment set, a row gray value moment and a column gray value moment;
and recalculating the gray value of each pixel according to the row gradient and the column gradient to generate a shading correction image.
Optionally, after the first calculation unit, before the second acquisition unit, the correction device further includes:
the determining unit is used for determining an intelligent glasses screen area to be detected on the shading correction image according to the scaling matrix, the size information of the Grid positioning image and the expected Mark lattice information;
and the screening unit is used for removing the background area other than the intelligent glasses screen area to be detected from the shading correction image.
Optionally, the first computing unit includes:
determining column direction coordinates of a first column and a last column of Mark points and row direction coordinates of a first row and a last row of Mark points according to actual Mark lattice information, and generating actual Mark area width and height;
determining column direction coordinates of a first column and a last column of Mark points and row direction coordinates of a first row and a last row of Mark points according to expected Mark lattice information, and generating expected Mark area width and height;
Determining the proportion of the column direction and the proportion of the row direction according to the actual Mark area width and height and the expected Mark area width and height;
a scaling matrix is generated from the column-direction scale and the row-direction scale and the two-dimensional spatial transformation matrix.
Optionally, the first obtaining unit includes:
Performing threshold segmentation on the shading correction image, splitting the segmented region into connected components, and extracting Mark point position information by screening shape and pixel area, to obtain the actual Mark lattice position information.
Optionally, the correction unit includes:
partitioning according to the number of Mark points of the shading correction image;
creating a blank AA area;
creating a conversion relation matrix according to the coordinates of the AA area and the coordinates of the shading correction image, and creating an offset relation matrix according to the distortion amount;
generating a distortion correction matrix according to the conversion relation matrix and the offset relation matrix;
and carrying out geometric transformation correction distortion on the AA area according to the distortion correction matrix, and then carrying out gray filling according to the shading correction image.
A fourth aspect of the present application provides an electronic device, comprising:
a processor, a memory, an input-output unit, and a bus;
the processor is connected with the memory, the input/output unit and the bus;
The memory stores a program that the processor invokes to perform the correction method of the first aspect or any optional implementation of the first aspect.
A fifth aspect of the present application provides a computer readable storage medium having a program stored thereon which, when executed on a computer, performs the correction method of the first aspect or any optional implementation of the first aspect.
From the above technical solutions, the embodiments of the present application have the following advantages:
In the method, firstly, a Grid positioning image is made according to the resolution of the intelligent glasses screen to be tested, and uniformly arranged Mark points are provided on the Grid positioning image. The Grid positioning image is input into the intelligent glasses screen to be tested, and the intelligent glasses screen to be tested is shot to obtain a dot matrix shooting image. The dot matrix shooting image at this moment comprises a boundary area and the intelligent glasses screen area to be detected; the intelligent glasses screen area to be detected carries a Mark lattice, and the Mark points and pixel points of the curved surface parts of the intelligent glasses screen area to be detected are more compact relative to the planar part. Then, shading correction processing is performed on the dot matrix shooting image to generate a shading correction image, so that the edge background of the curved surface part in the intelligent glasses screen area to be detected is enhanced through shading correction. Next, the Mark points of the shading correction image are extracted, and the actual Mark lattice position information is acquired. A scaling matrix is calculated according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image. The distorted pixel point coordinates of the shading correction image are scaled with the scaling matrix to obtain the expected pixel point coordinates. The scaling matrix restores the intelligent glasses screen area to be detected of the shading correction image to the same size as the real object, and returns the pixel points in the intelligent glasses screen area to their expected positions. At this time, a distortion parameter set is calculated from the expected pixel point coordinates and the distorted pixel point coordinates. The distortion amount is calculated according to the distortion parameter set. Distortion correction is performed on the pixel points of the shading correction image according to the distortion amount.
The edge background of the curved surface part in the intelligent glasses screen area to be detected is enhanced through shading correction, so that the pixel points of the curved surface part can be captured more easily. Secondly, the scaling matrix is calculated according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image, so the degree of scaling can be obtained accurately; the distorted pixel point coordinates on the shading correction image are scaled with the scaling matrix to obtain the expected pixel point coordinates, returning the pixel points to the same size as the intelligent glasses screen. A distortion parameter set is then calculated from the expected pixel point coordinates and the distorted pixel point coordinates, the distortion amount is calculated from the distortion parameter set, and distortion correction is performed on the pixel points of the shading correction image according to the distortion amount. Compared with the prism reflection method used for traditional intelligent glasses screens, this reduces the complexity of the correction process for the pixels of the intelligent glasses screen.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of one embodiment of a method for calibrating a smart glasses screen of the present application;
FIG. 2 is a schematic diagram of one embodiment of a first stage of a correction method for a smart glasses screen of the present application;
FIG. 3 is a schematic diagram of one embodiment of a second stage of the correction method of the smart glasses screen of the present application;
FIG. 4 is a schematic diagram of one embodiment of a third stage of the correction method of the smart glasses screen of the present application;
FIG. 5 is a schematic diagram of one embodiment of a correction device for a smart glasses screen of the present application;
FIG. 6 is a schematic diagram of another embodiment of a correction device for a smart glasses screen of the present application;
FIG. 7 is a schematic diagram of one embodiment of an electronic device of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted as "when..once" or "in response to a determination" or "in response to detection" depending on the context. Similarly, the phrase "if a determination" or "if a [ described condition or event ] is detected" may be interpreted in the context of meaning "upon determination" or "in response to determination" or "upon detection of a [ described condition or event ]" or "in response to detection of a [ described condition or event ]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In the prior art, the degree of curvature of the glasses is far greater than that of a curved mobile phone screen, and the edge imaging of the glasses produces large distortion during detection, so corresponding technology is needed to overcome this problem. The large curved surface causes great distortion in shooting, which strongly affects the extraction of specific pixel points and the subsequent demura process. At present, in the production and manufacturing stage, most panel factories greatly weaken the detection specifications and requirements for the curved surface part for technical reasons: the demura process extends the repair data of the planar part through model simulation or interpolation, and no real and complete data processing is performed on the curved surface part, so that the display quality and uniformity of some curved screens currently on the market differ considerably from those of the planar part.
At the present stage, this situation is mainly handled by using prisms to assist in imaging the pixels of the curved surface part, and then stitching the images of the planar part (direct shooting) and the curved surface part (prism reflection shooting) of the curved screen.
The present method aims at distortion correction of curved screens with large distortion, and its processing of the curved surface part differs from the conventional approach. In the past, the complete screen image was mainly formed by imaging through prisms and then stitching the curved surface part and the planar part. The curved surface is generally located at the edges and four corners of the screen, so multiple prisms are required for imaging, which complicates debugging; in addition, stitching multiple images increases the complexity of the algorithm, and the extra hardware increases the overall cost. The intelligent glasses screen has a larger degree of bending, and the overall part that needs to be bent (the curved area added to adapt to the region captured by the human eye) is larger than on a curved mobile phone screen; in the prism reflection method, since the planar and curved surfaces are not imaged in one image, multiple images need to be stitched, which greatly increases the complexity of the distortion correction algorithm.
Based on the above, the application discloses a correction method and device for an intelligent glasses screen, an electronic device and a storage medium, which are used to reduce the complexity of the correction process for the pixels of a traditional intelligent glasses screen.
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The method of the present application may be applied to a server, a device, a terminal, or other devices with logic processing capabilities, which is not limited in this application. For convenience of description, the following description will take an execution body as an example of a terminal.
Referring to fig. 1, an embodiment of a method for correcting an intelligent glasses screen is provided, including:
101. manufacturing a Grid positioning image according to the resolution of the intelligent glasses screen to be detected, wherein Mark points which are uniformly arranged are arranged on the Grid positioning image;
102. inputting the Grid positioning image into an intelligent glasses screen to be tested, and shooting the intelligent glasses screen to be tested to obtain a dot matrix shooting image;
In this embodiment, the terminal needs to make a Grid positioning image, display and capture the corresponding image with the intelligent glasses screen to be tested, and then enhance the edge portion by using shading correction.
In this embodiment, the terminal makes a suitable Grid positioning image according to the resolution of the intelligent glasses screen to be tested, where the size of the positioning image is equal to the resolution of the screen. The produced Grid positioning image is imported into the PG so that the screen displays it; the corresponding image is then acquired through a camera and shading correction is performed. When making the Grid positioning image, the following points need to be noted (a sketch of one way to generate such an image follows this list):
(1) The radius of the Mark points in the Grid positioning image should be appropriately large, which improves the stability of Mark point extraction during algorithm processing. The Mark points are uniformly distributed at equal intervals; increasing the number of Mark points improves the distortion correction effect, but Mark point diameters that are too large or too dense can cause pixel adhesion, and failure to segment the Mark points then causes algorithm errors. Different screens and optical setups produce different displayed positioning images, so a suitable Grid positioning image needs to be adjusted according to actual conditions.
(2) The Grid positioning image is shot with the same optical system as the defect detection picture, and the relative positions of the camera and the intelligent glasses screen to be detected must not change while shooting any of the pictures.
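For illustration only, the following Python sketch shows one way such a Grid positioning image could be generated, assuming bright circular Mark points on a dark background; the resolution, lattice size, dot radius and the helper name make_grid_image are assumptions for this sketch, not values taken from the application.

```python
import numpy as np
import cv2  # OpenCV is only one possible way to rasterize the dots

def make_grid_image(screen_w, screen_h, cols=16, rows=12, radius=6):
    """Create a Grid positioning image the same size as the screen resolution,
    with rows x cols uniformly spaced bright Mark points on a black background."""
    img = np.zeros((screen_h, screen_w), dtype=np.uint8)
    xs = np.linspace(radius * 2, screen_w - radius * 2, cols)
    ys = np.linspace(radius * 2, screen_h - radius * 2, rows)
    for y in ys:
        for x in xs:
            cv2.circle(img, (int(round(x)), int(round(y))), radius, 255, -1)
    return img

# Example: a hypothetical 640 x 480 smart-glasses panel
grid = make_grid_image(640, 480)
cv2.imwrite("grid_positioning.png", grid)
```

As note (1) above points out, the dot radius and spacing would have to be tuned for each screen and optical setup rather than fixed at the illustrative values used here.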
103. Performing shading correction processing on the dot matrix shooting image to generate a shading correction image;
after the terminal inputs the Grid positioning image into the intelligent glasses screen to be detected and shoots the intelligent glasses screen to be detected to obtain the dot matrix shooting image, the terminal carries out shading correction processing on the dot matrix shooting image to generate a shading correction image, and the background of the edge (especially the curved surface edge part) is enhanced through shading correction.
104. Extracting Mark points of the shading correction image, and obtaining actual Mark lattice position information;
105. calculating a scaling matrix according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image;
the terminal extracts Mark points of the shading correction image, acquires actual Mark dot matrix position information, then carries out scaling calculation according to Mark dot matrixes on the standard diagram and the distortion diagram by using the expected Mark dot matrix information of the Grid positioning image, and acquires a scaling matrix.
106. Scaling the distorted pixel point coordinates of the shading correction image by using a scaling matrix to obtain expected pixel point coordinates;
the terminal uses the scaling matrix to scale the distorted pixel coordinates of the shading correction image, specifically multiplies the scaling matrix and the pixel coordinates on the shading correction image, and the obtained new coordinates are the expected pixel coordinates.
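As a concrete reading of this multiplication, the sketch below maps distorted (row, column) coordinates through a homogeneous 2D scaling matrix; the matrix layout and the function name scale_coords are illustrative assumptions, not the application's stated implementation.

```python
import numpy as np

def scale_coords(points_rc, s_row, s_col):
    """Map distorted (row, col) pixel coordinates to expected coordinates by
    multiplying with a homogeneous 2D scaling matrix."""
    S = np.array([[s_row, 0.0, 0.0],
                  [0.0, s_col, 0.0],
                  [0.0, 0.0, 1.0]])
    pts = np.hstack([points_rc, np.ones((len(points_rc), 1))])  # (r, c, 1)
    return (pts @ S.T)[:, :2]

# Example with hypothetical coordinates and ratios
expected = scale_coords(np.array([[120.0, 340.0], [130.0, 355.0]]), 1.02, 0.98)
```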
107. Calculating a distortion parameter set according to the expected pixel point coordinates and the distortion pixel point coordinates;
and the terminal calculates a distortion parameter set according to the expected pixel point coordinates and the distortion pixel point coordinates, and is specifically divided into radial and tangential parts.
In this embodiment, two distortion expressions are separated: one representing the radial distortion, with distorted coordinates (x_r, y_r), and one representing the tangential distortion, with distorted coordinates (x_t, y_t).
The radial distortion is expressed as:
x_r = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6), y_r = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
where (x_r, y_r) are the radial pixel coordinates after distortion, obtained from the radial component of the pixel point coordinates on the shading correction image, x and y are the ideal coordinates (the expected pixel point coordinates), k_1, k_2 and k_3 are the radial distortion coefficients, and x and y satisfy the relationship r^2 = x^2 + y^2.
the tangential distortion coefficient is expressed as:
wherein,,/>is tangential pixel coordinates after distortion, namely, is obtained by tangential segmentation in pixel point coordinates on a shading correction image, wherein x and y are ideal coordinates (expected pixel point coordinates), and +>And->Is a tangential distortion parameter.
The above equations can be solved for the five distortion coefficients k_1, k_2, k_3, p_1 and p_2.
The two expressions are then combined into one equation containing both radial and tangential distortion, giving the following polynomials:
x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2 x^2)
y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y^2) + 2 p_2 x y
Here x is the ideal column coordinate and y is the ideal row coordinate, with x = Grid positioning image Mark column coordinate × MR − shading correction image Mark column coordinate and y = Grid positioning image Mark row coordinate × MR − shading correction image Mark row coordinate, where MR is the number of pixels in the image of the imaging system corresponding to one pixel of the screen. By substituting the corresponding coordinate information into the two polynomials, the five distortion coefficients k_1, k_2, k_3, p_1 and p_2 are obtained.
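Because the combined polynomials above are linear in k_1, k_2, k_3, p_1 and p_2 once the ideal coordinates are known, the coefficients can be estimated with an ordinary linear least-squares fit. The sketch below shows one such solver over matched ideal and distorted Mark coordinates; it is an assumption about how the system of equations could be solved, not the application's stated procedure, and the function and variable names are hypothetical.

```python
import numpy as np

def solve_distortion(ideal_xy, distorted_xy):
    """Least-squares estimate of (k1, k2, k3, p1, p2) from matched points.
    ideal_xy, distorted_xy: (N, 2) arrays of (x, y) coordinates."""
    x, y = ideal_xy[:, 0], ideal_xy[:, 1]
    xd, yd = distorted_xy[:, 0], distorted_xy[:, 1]
    r2 = x**2 + y**2
    # x rows: xd - x = k1*x*r2 + k2*x*r2^2 + k3*x*r2^3 + p1*2xy + p2*(r2 + 2x^2)
    Ax = np.column_stack([x*r2, x*r2**2, x*r2**3, 2*x*y, r2 + 2*x**2])
    # y rows: yd - y = k1*y*r2 + k2*y*r2^2 + k3*y*r2^3 + p1*(r2 + 2y^2) + p2*2xy
    Ay = np.column_stack([y*r2, y*r2**2, y*r2**3, r2 + 2*y**2, 2*x*y])
    A = np.vstack([Ax, Ay])
    b = np.concatenate([xd - x, yd - y])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs  # k1, k2, k3, p1, p2
```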
108. Calculating the distortion amount according to the distortion parameter set;
109. and carrying out distortion correction on the pixel points on the shading correction image according to the distortion quantity.
After the terminal calculates the five distortion coefficients k_1, k_2, k_3, p_1 and p_2, it calculates the distortion amount according to the distortion parameter set, and finally performs distortion correction on the pixel points of the shading correction image according to the distortion amount.
In this embodiment, firstly, a Grid positioning image is made according to the resolution of the intelligent glasses screen to be tested, and uniformly arranged Mark points are provided on the Grid positioning image. The Grid positioning image is input into the intelligent glasses screen to be tested, and the intelligent glasses screen to be tested is shot to obtain a dot matrix shooting image. The dot matrix shooting image at this moment comprises a boundary area and the intelligent glasses screen area to be detected; the intelligent glasses screen area to be detected carries a Mark lattice, and the Mark points and pixel points of the curved surface parts of the intelligent glasses screen area to be detected are more compact relative to the planar part. Then, shading correction processing is performed on the dot matrix shooting image to generate a shading correction image, so that the edge background of the curved surface part in the intelligent glasses screen area to be detected is enhanced through shading correction. Next, the Mark points of the shading correction image are extracted, and the actual Mark lattice position information is acquired. A scaling matrix is calculated according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image. The distorted pixel point coordinates of the shading correction image are scaled with the scaling matrix to obtain the expected pixel point coordinates. The scaling matrix restores the intelligent glasses screen area to be detected of the shading correction image to the same size as the real object, and returns the pixel points in the intelligent glasses screen area to their expected positions. At this time, a distortion parameter set is calculated from the expected pixel point coordinates and the distorted pixel point coordinates. The distortion amount is calculated according to the distortion parameter set. Distortion correction is performed on the pixel points of the shading correction image according to the distortion amount.
The edge background of the curved surface part in the intelligent glasses screen area to be detected is enhanced through shading correction, so that the pixel points of the curved surface part can be captured more easily. Secondly, the scaling matrix is calculated according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image, so the degree of scaling can be obtained accurately; the distorted pixel point coordinates on the shading correction image are scaled with the scaling matrix to obtain the expected pixel point coordinates, returning the pixel points to the same size as the intelligent glasses screen. A distortion parameter set is then calculated from the expected pixel point coordinates and the distorted pixel point coordinates, the distortion amount is calculated from the distortion parameter set, and distortion correction is performed on the pixel points of the shading correction image according to the distortion amount. Compared with the prism reflection method used for traditional intelligent glasses screens, this reduces the complexity of the correction process for the pixels of the intelligent glasses screen.
Referring to fig. 2, 3 and 4, another embodiment of a method for correcting an intelligent glasses screen is provided, including:
201. manufacturing a Grid positioning image according to the resolution of the intelligent glasses screen to be detected, wherein Mark points which are uniformly arranged are arranged on the Grid positioning image;
202. Inputting the Grid positioning image into an intelligent glasses screen to be tested, and shooting the intelligent glasses screen to be tested to obtain a dot matrix shooting image;
in this embodiment, steps 201 to 202 are similar to steps 101 and 102 described above, and will not be described here.
203. Generating a row gray value moment and a column gray value moment according to gray data of the dot matrix shooting image;
204. calculating the row gradient and the column gradient of the dot matrix shooting image according to a preset scale moment set, a row gray value moment and a column gray value moment;
205. recalculating the gray value of each pixel according to the row gradient and the column gradient to generate a shading correction image;
the terminal firstly generates a row gray value moment and a column gray value moment according to gray data of the dot matrix shooting image, and the formula is as follows:
wherein,for gray value moment in row direction +.>The column-wise gray value moment, F, is the number of pixels in a plane and Region is the area in the shading correction image that is needed to fit the plane. Mean is the gray-scale average of the area needed to fit the plane. Image (r, c) is the gray at the row and column coordinates corresponding to the shading correction ImageAnd (5) a degree value. />And->Representing the center of the image,
Next, the terminal calculates the row gradient Alpha and the column gradient Beta of the dot matrix shooting image from the preset scale moment set, the row gray value moment and the column gray value moment; Alpha represents the gradient along the row axis (downward) and Beta represents the gradient along the column axis (to the right). The scale moments used are those of the fitting region within the shading correction image.
The gray value of each pixel of the image is then recalculated from the row and column gradients; the result is the gray value after shading correction.
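As one possible concrete form of this shading correction, the sketch below fits a gray-value plane over the fitting region by least squares and subtracts the row and column gradients from every pixel. The least-squares formulation and the names shading_correct and region_mask are assumptions consistent with the moment-based description above, not the application's exact formulas.

```python
import numpy as np

def shading_correct(image, region_mask):
    """Fit gray(r, c) ~ alpha*(r - r0) + beta*(c - c0) + mean over the masked
    region, then subtract the fitted gradients from every pixel."""
    rows, cols = np.nonzero(region_mask)
    g = image[rows, cols].astype(np.float64)
    r0, c0 = (image.shape[0] - 1) / 2.0, (image.shape[1] - 1) / 2.0
    dr, dc = rows - r0, cols - c0
    A = np.column_stack([dr, dc, np.ones_like(dr)])
    (alpha, beta, mean), *_ = np.linalg.lstsq(A, g, rcond=None)
    rr, cc = np.indices(image.shape)
    corrected = image.astype(np.float64) - alpha * (rr - r0) - beta * (cc - c0)
    return np.clip(corrected, 0, 255).astype(np.uint8)
```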
206. Threshold segmentation is performed on the shading correction image, the segmented region is split into connected components, and Mark point position information is extracted by screening shape and pixel area to obtain the actual Mark lattice position information;
The terminal performs threshold segmentation on the shading correction image, splits the segmented region into connected components, and extracts the Mark point position information by screening shape and pixel area, obtaining the actual Mark lattice position information. Specifically, the Mark points in the Grid graph are extracted through threshold segmentation, morphological operations and feature extraction. Occasionally a single Mark point near the edge is not extracted because its row or column is too dark; such a point can be filled in when the lattice is assembled, so the correction effect is not affected.
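A minimal sketch of this extraction step using OpenCV connected-component analysis follows; the threshold value, area limits and aspect-ratio filter are illustrative assumptions standing in for the shape and pixel-area screening described above, and the function name extract_mark_points is hypothetical.

```python
import numpy as np
import cv2

def extract_mark_points(shading_img, thresh=60, min_area=20, max_area=2000):
    """Threshold the shading-corrected image, split it into connected
    components, and keep roughly circular blobs as Mark points."""
    _, binary = cv2.threshold(shading_img, thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    marks = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        if min_area <= area <= max_area and 0.7 <= w / float(h) <= 1.3:
            marks.append(centroids[i])  # (col, row) center of the Mark point
    return np.array(marks)
```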
207. Determining column direction coordinates of a first column and a last column of Mark points and row direction coordinates of a first row and a last row of Mark points according to actual Mark lattice information, and generating actual Mark area width and height;
208. determining column direction coordinates of a first column and a last column of Mark points and row direction coordinates of a first row and a last row of Mark points according to expected Mark lattice information, and generating expected Mark area width and height;
209. Determining the proportion of the column direction and the proportion of the row direction according to the actual Mark area width and height and the expected Mark area width and height;
210. generating a scaling matrix according to the column direction proportion, the row direction proportion and the two-dimensional space transformation matrix;
the terminal determines the column direction coordinates of the first column and the last column of the Mark point and the row direction coordinates of the first row and the last row of the Mark point according to the actual Mark lattice information, and generates the actual Mark area width and height. Assuming that MarkColEnd is the column direction coordinate of the last column of Mark points in the Grid positioning image, markColStart is the column coordinate of the first column of Mark points, markRowEnd is the row direction coordinate of the last row of Mark points in the Grid positioning image, and MarkRowStart is the row coordinate of the first row of Mark points, the following formula is given:
expected Mark area width = (MarkColEnd − MarkColStart) × MR, expected Mark area height = (MarkRowEnd − MarkRowStart) × MR, where MR is the number of pixels in the image of the imaging system corresponding to one pixel of the screen. The actual Mark area width and height are calculated in a similar way from the actual Mark lattice information and are not described here.
Dividing the expected Mark area width and height by the actual Mark area width and height gives the column direction proportion and the row direction proportion. The matrix S is the scaling relation matrix in the row-column direction, built from the column direction proportion and the row direction proportion; combining it with the two-dimensional spatial transformation matrix gives the final matrix that contains the scaling relationship, i.e., the scaling matrix.
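The sketch below reproduces this calculation under the assumption that the scaling matrix is a homogeneous 2D matrix built from the row-direction and column-direction proportions; the array layout and the name scaling_matrix are assumptions for illustration.

```python
import numpy as np

def scaling_matrix(expected_marks, actual_marks, mr):
    """expected_marks: (rows, cols, 2) Mark (row, col) coords from the Grid image;
    actual_marks: the same lattice measured in the shading-corrected image."""
    w_e = (expected_marks[:, -1, 1].mean() - expected_marks[:, 0, 1].mean()) * mr
    h_e = (expected_marks[-1, :, 0].mean() - expected_marks[0, :, 0].mean()) * mr
    w_a = actual_marks[:, -1, 1].mean() - actual_marks[:, 0, 1].mean()
    h_a = actual_marks[-1, :, 0].mean() - actual_marks[0, :, 0].mean()
    s_col, s_row = w_e / w_a, h_e / h_a
    # homogeneous 2D scaling matrix, (row, col, 1) convention
    return np.array([[s_row, 0.0, 0.0],
                     [0.0, s_col, 0.0],
                     [0.0, 0.0, 1.0]])
```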
211. Determining an intelligent glasses screen area to be detected on the shading correction image according to the scaling matrix, the size information of the Grid positioning image and the expected Mark lattice information;
212. removing a background area of the intelligent glasses screen area to be detected from the shading correction image;
After the terminal calculates the scaling matrix, the Mark points in the shading correction image can be used as positioning reference points and the size of the Grid positioning image as the standard: the intelligent glasses screen area to be detected is determined with the scaling matrix, and the background area other than the intelligent glasses screen area to be detected is then removed from the shading correction image. In this way the subsequent calculation steps all take place within the intelligent glasses screen area to be detected, which reduces the amount of calculation.
213. Scaling the distorted pixel point coordinates of the shading correction image by using a scaling matrix to obtain expected pixel point coordinates;
214. calculating a distortion parameter set according to the expected pixel point coordinates and the distortion pixel point coordinates;
215. calculating the distortion amount according to the distortion parameter set;
In this embodiment, steps 213 to 215 are similar to steps 106 to 108 described above, and will not be described here.
The terminal calculates the distortion amount according to the distortion parameter set; specifically, the row direction and column direction offsets are obtained by subtracting the Mark point coordinates in the shading correction image from the result obtained after conversion with the aforementioned HomMat2DScale matrix.
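Read literally, the distortion amount per Mark point is the difference between the Mark coordinate predicted by the scaling (HomMat2DScale-style) matrix and the Mark coordinate measured in the shading correction image; the sketch below assumes that interpretation, and the function name distortion_offsets is hypothetical.

```python
import numpy as np

def distortion_offsets(scale_mat, grid_marks_rc, actual_marks_rc):
    """Row/column offsets between scaled expected Mark coordinates and the
    Mark coordinates measured in the shading-corrected image."""
    pts = np.hstack([grid_marks_rc, np.ones((len(grid_marks_rc), 1))])
    predicted = (pts @ scale_mat.T)[:, :2]
    offsets = predicted - actual_marks_rc
    return offsets[:, 0], offsets[:, 1]  # row offsets, column offsets
```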
216. Partitioning according to the number of Mark points of the shading correction image;
217. creating a blank AA area;
218. creating a conversion relation matrix according to the coordinates of the AA area and the coordinates of the shading correction image, and creating an offset relation matrix according to the distortion amount;
219. generating a distortion correction matrix according to the conversion relation matrix and the offset relation matrix;
220. and carrying out geometric transformation correction distortion on the AA area according to the distortion correction matrix, and then carrying out gray filling according to the shading correction image.
The terminal performs partitioning according to the number of Mark points of the shading correction image. For example, the number of Mark points in the Grid positioning image is 16 × 12 = 192; with every 4 Mark points as a group, the image is divided into 192 / 4 = 48 partitions, and the correction calculations are performed simultaneously with multithreading.
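One illustrative way to obtain the 48 partitions is to group every 2 x 2 block of neighbouring Mark points and hand the groups to a thread pool; the grouping scheme and the names below are assumptions consistent with the 192 / 4 = 48 figure above, not the application's stated partitioning rule.

```python
from concurrent.futures import ThreadPoolExecutor

def make_partitions(rows=12, cols=16):
    """Group every 2 x 2 block of neighbouring Mark points into one partition."""
    parts = []
    for r in range(0, rows, 2):
        for c in range(0, cols, 2):
            parts.append([(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)])
    return parts  # 6 * 8 = 48 partitions for a 12 x 16 lattice

def correct_all(partitions, correct_one):
    """Run a per-partition correction callable over all partitions in parallel."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(correct_one, partitions))
```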
Then the terminal creates a blank AA area, and the distortion is corrected partition by partition using data such as the distortion amount and the AA area. The terminal creates and calculates a transformation matrix and performs a geometric transformation on the shading correction image to correct the distortion.
The terminal generates the distortion correction matrix from the conversion relation matrix and the offset relation matrix. Specifically, the terminal first defines a matrix A (the conversion relation matrix), which represents the conversion relationship from the shading correction image to the AA area, a matrix B (the offset relation matrix), which is the offset conversion relationship on the shading correction image, and a matrix C, the target matrix (the distortion correction matrix). The target matrix C is obtained by combining matrix A and matrix B.
In the expanded form, QX and QY are the coordinates of the AA area, PX and PY are the shading correction image coordinates, Row represents the original row direction offset, and Col represents the column direction offset.
And the terminal calculates the target matrix, then carries out matrix operation on the coordinates in the shading correction image, calculates the target coordinates and obtains the corrected target image. After the target matrix C is obtained, the coordinates of each pixel point on the shading correction image are multiplied by the matrix C to obtain new coordinates, and the coordinates are filled with the pixel values of the corresponding pixel points on the shading correction image.
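A simplified sketch of the per-partition correction follows: it composes a hypothetical transform A from image coordinates to AA-area coordinates with the measured row/column offsets, maps each pixel through the resulting matrix C, and fills the AA area with the source gray values. The exact way A and B are combined is an assumption; the application only states that the conversion relation matrix and the offset relation matrix are combined into the distortion correction matrix, and all names here are illustrative.

```python
import numpy as np

def correct_partition(shading_img, aa_img, A, row_off, col_off, pixel_rc):
    """A: 3x3 homogeneous transform from shading-image (r, c) to AA-area (r, c).
    row_off, col_off: distortion offsets for this partition.
    pixel_rc: (N, 2) pixel coordinates of the partition in the shading image."""
    B = np.array([[1.0, 0.0, -row_off],
                  [0.0, 1.0, -col_off],
                  [0.0, 0.0, 1.0]])        # undo the measured offset
    C = A @ B                              # distortion correction matrix
    pts = np.hstack([pixel_rc, np.ones((len(pixel_rc), 1))])
    target = (pts @ C.T)[:, :2]
    for (r, c), (tr, tc) in zip(pixel_rc.astype(int), np.rint(target).astype(int)):
        if 0 <= tr < aa_img.shape[0] and 0 <= tc < aa_img.shape[1]:
            aa_img[tr, tc] = shading_img[r, c]  # gray filling from the source image
    return aa_img
```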
In this embodiment, firstly, a Grid positioning image is made according to the resolution of the intelligent glasses screen to be tested, and Mark points uniformly arranged are arranged on the Grid positioning image. And inputting the Grid positioning image into the intelligent glasses screen to be tested, and shooting the intelligent glasses screen to be tested to obtain a dot matrix shooting image. The dot matrix shooting image at the moment comprises a boundary area and an intelligent glasses screen area to be detected, wherein the intelligent glasses screen area to be detected is provided with a Mark dot matrix, and Mark points and pixel points of a plurality of curved surface parts of the intelligent glasses screen area to be detected are more compact relative to a plane part. Next, a row gray value moment and a column gray value moment are generated from gray data of the dot matrix photographed image. And calculating the row gradient and the column gradient of the dot matrix shooting image according to the preset scale moment set, the row gray value moment and the column gray value moment. And recalculating the gray values of each pixel according to the row gradient and the column gradient to generate a shading correction image, wherein the purpose is to enhance the edge background of the curved surface part in the intelligent glasses screen area to be detected through shading correction.
Threshold segmentation is performed on the shading correction image, the segmented region is split into connected components, and Mark point position information is extracted by screening shape and pixel area to obtain the actual Mark lattice position information.
And then, determining the column direction coordinates of the first column and the last column of the Mark point and the row direction coordinates of the first row and the last row of the Mark point according to the actual Mark lattice information, and generating the actual Mark region width and height. And determining column direction coordinates of a first column and a last column of Mark points and row direction coordinates of a first row and a last row of Mark points according to the expected Mark lattice information, and generating the width and height of the expected Mark area. And determining the column direction proportion and the row direction proportion according to the actual Mark region width and height and the expected Mark region width and height. A scaling matrix is generated from the column-direction scale and the row-direction scale and the two-dimensional spatial transformation matrix.
And determining the intelligent glasses screen area to be detected on the shading correction image according to the scaling matrix, the size information of the Grid positioning image and the expected Mark lattice information. And removing the background area except the area of the intelligent glasses screen to be detected from the shading correction image.
The distorted pixel point coordinates of the shading correction image are scaled with the scaling matrix to obtain the expected pixel point coordinates. The scaling matrix restores the intelligent glasses screen area to be detected of the shading correction image to the same size as the real object and returns the pixel points in the intelligent glasses screen area to their expected positions. At this time, a distortion parameter set is calculated from the expected pixel point coordinates and the distorted pixel point coordinates. The distortion amount is calculated according to the distortion parameter set. Partitioning is performed according to the number of Mark points of the shading correction image. A blank AA area is created. A conversion relation matrix is created according to the coordinates of the AA area and the coordinates of the shading correction image, and an offset relation matrix is created according to the distortion amount. A distortion correction matrix is generated from the conversion relation matrix and the offset relation matrix. Geometric transformation is performed on the AA area according to the distortion correction matrix to correct the distortion, and gray filling is then carried out according to the shading correction image.
The edge background of the curved surface part in the intelligent glasses screen area to be detected is enhanced through shading correction, so that the pixel points of the curved surface part can be captured more easily. Secondly, the scaling matrix is calculated according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image, so the degree of scaling can be obtained accurately; the distorted pixel point coordinates on the shading correction image are scaled with the scaling matrix to obtain the expected pixel point coordinates, returning the pixel points to the same size as the intelligent glasses screen. A distortion parameter set is then calculated from the expected pixel point coordinates and the distorted pixel point coordinates, the distortion amount is calculated from the distortion parameter set, and distortion correction is performed on the pixel points of the shading correction image according to the distortion amount. Compared with the prism reflection method used for traditional intelligent glasses screens, this reduces the complexity of the correction process for the pixels of the intelligent glasses screen.
Secondly, after the terminal calculates the scaling matrix, the intelligent glasses screen area to be detected is determined through the scaling matrix, with the Mark points in the shading correction image as positioning reference points and the size of the Grid positioning image as the standard, and the background area other than the intelligent glasses screen area to be detected is then removed from the shading correction image. In this way the subsequent calculation steps all take place within the intelligent glasses screen area to be detected, which reduces the amount of calculation.
Referring to fig. 5, an embodiment of a correction device for an intelligent glasses screen is provided, including:
the manufacturing unit 501 is configured to manufacture a Grid positioning image according to the resolution of the intelligent glasses screen to be tested, where Mark points are uniformly arranged on the Grid positioning image;
the shooting unit 502 is used for inputting the Grid positioning image into the intelligent glasses screen to be detected and shooting the intelligent glasses screen to be detected to obtain a dot matrix shooting image;
a generating unit 503 for performing shading correction processing on the dot matrix captured image to generate a shading corrected image;
a first obtaining unit 504, configured to extract Mark points of the shading correction image, and obtain actual Mark lattice position information;
a first calculating unit 505, configured to calculate a scaling matrix according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image;
a second obtaining unit 506, configured to scale the distorted pixel point coordinates of the shading correction image by using the scaling matrix, to obtain expected pixel point coordinates;
a second calculating unit 507, configured to calculate a distortion parameter set according to the expected pixel point coordinates and the distorted pixel point coordinates;
a third calculation unit 508 for calculating an amount of distortion according to the distortion parameter set;
A correction unit 509 for performing distortion correction on the pixel points on the shading correction image according to the distortion amount.
Referring to fig. 6, another embodiment of a correction device for a smart glasses screen is provided, including:
the manufacturing unit 601 is configured to manufacture a Grid positioning image according to the resolution of the intelligent glasses screen to be tested, where Mark points are uniformly arranged on the Grid positioning image;
the shooting unit 602 is configured to input a Grid positioning image into the intelligent glasses screen to be detected, and shoot the intelligent glasses screen to be detected to obtain a dot matrix shooting image;
a generating unit 603 for performing shading correction processing on the dot matrix captured image to generate a shading corrected image;
optionally, the generating unit 603 includes:
generating a row gray value moment and a column gray value moment according to gray data of the dot matrix shooting image;
calculating the row gradient and the column gradient of the dot matrix shooting image according to a preset scale moment set, a row gray value moment and a column gray value moment;
and recalculating the gray value of each pixel according to the row gradient and the column gradient to generate a shading correction image.
A first obtaining unit 604, configured to extract Mark points of the shading correction image, and obtain actual Mark lattice position information;
Optionally, the first obtaining unit 604 includes:
performing threshold segmentation on the shading correction image, then separating the connected regions within the selected area, and extracting the Mark point position information by screening shape and pixel area to obtain the actual Mark lattice position information.
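For reference, this kind of extraction is routinely built from a threshold, connected-component labelling, and shape/area screening. A minimal OpenCV sketch along those lines is shown below; the area and circularity limits are illustrative defaults rather than values from the patent.

```python
import cv2
import numpy as np

def extract_mark_points(shading_img, min_area=20, max_area=2000, min_roundness=0.6):
    """Return the centroids (x, y) of candidate Mark points.

    Threshold the shading correction image, label connected regions, and keep
    only blobs whose area and shape look like a Mark dot (illustrative limits).
    """
    # Otsu threshold: Mark points are assumed brighter than the background.
    _, binary = cv2.threshold(shading_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n_labels, _, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    marks = []
    for label in range(1, n_labels):                 # label 0 is the background
        area = stats[label, cv2.CC_STAT_AREA]
        if not (min_area <= area <= max_area):
            continue
        w = stats[label, cv2.CC_STAT_WIDTH]
        h = stats[label, cv2.CC_STAT_HEIGHT]
        # Shape screening: a round Mark dot fills most of its bounding box
        # and has a near-square bounding box.
        fill_ratio = area / float(w * h)
        aspect = min(w, h) / float(max(w, h))
        if fill_ratio >= min_roundness and aspect >= min_roundness:
            marks.append(tuple(centroids[label]))
    return np.array(marks, dtype=np.float64)
```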
A first calculation unit 605 for calculating a scaling matrix according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image;
optionally, the first computing unit 605 includes:
determining column direction coordinates of a first column and a last column of Mark points and row direction coordinates of a first row and a last row of Mark points according to actual Mark lattice information, and generating actual Mark area width and height;
determining column direction coordinates of a first column and a last column of Mark points and row direction coordinates of a first row and a last row of Mark points according to expected Mark lattice information, and generating expected Mark area width and height;
determining the proportion of the column direction and the proportion of the row direction according to the actual Mark area width and height and the expected Mark area width and height;
a scaling matrix is generated from the column-direction scale and the row-direction scale and the two-dimensional spatial transformation matrix.
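Concretely, the column-direction and row-direction proportions can be placed on the diagonal of a two-dimensional transformation matrix. A minimal sketch of this computation, assuming the Mark points are supplied as (x, y) coordinate arrays and that the matrix is meant to take the actual (distorted) coordinates back to the expected Grid-image scale (the reciprocal ratios give the opposite direction):

```python
import numpy as np

def mark_area_size(marks):
    """Width and height spanned by a Mark lattice given as an (N, 2) array of (x, y)."""
    marks = np.asarray(marks, dtype=np.float64)
    width = marks[:, 0].max() - marks[:, 0].min()   # first column to last column
    height = marks[:, 1].max() - marks[:, 1].min()  # first row to last row
    return width, height

def compute_scaling_matrix(actual_marks, expected_marks):
    """Diagonal 2x2 matrix scaling actual (distorted) coordinates to the expected Grid scale."""
    actual_w, actual_h = mark_area_size(actual_marks)
    expected_w, expected_h = mark_area_size(expected_marks)
    col_scale = expected_w / actual_w               # column-direction proportion
    row_scale = expected_h / actual_h               # row-direction proportion
    return np.diag([col_scale, row_scale])          # two-dimensional transformation matrix
```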
A determining unit 606, configured to determine an intelligent glasses screen area to be detected on the shading correction image according to the scaling matrix, the size information of the Grid positioning image and the expected Mark lattice information;
The screening unit 607 is used for removing the background area outside the intelligent glasses screen area to be detected from the shading correction image;
a second obtaining unit 608, configured to scale the distorted pixel point coordinates of the shading correction image by using the scaling matrix, to obtain expected pixel point coordinates;
a second calculation unit 609 configured to calculate a distortion parameter set according to the expected pixel point coordinates and the distorted pixel point coordinates;
a third calculation unit 610 for calculating an amount of distortion from the distortion parameter set;
a correction unit 611 for performing distortion correction on the pixel points on the shading correction image according to the distortion amount.
Optionally, the correction unit 611 includes:
partitioning according to the number of Mark points of the shading correction image;
creating a blank AA area;
creating a conversion relation matrix according to the coordinates of the AA area and the coordinates of the shading correction image, and creating an offset relation matrix according to the distortion amount;
generating a distortion correction matrix according to the conversion relation matrix and the offset relation matrix;
and performing geometric transformation on the AA area according to the distortion correction matrix to correct the distortion, and then performing gray filling according to the shading correction image.
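The "conversion relation matrix" and "offset relation matrix" are not spelled out here. One straightforward realization is to map every pixel of the blank AA (active area) image into the shading correction image with an affine conversion matrix, add a per-pixel offset interpolated from the per-Mark distortion amounts, and let the combined map drive the gray filling. The sketch below follows that reading with OpenCV's remap; the affine form of the conversion matrix and the bilinear upsampling of the Mark-wise offsets are assumptions, not the patent's stated construction.

```python
import cv2
import numpy as np

def correct_distortion(shading_img, aa_size, convert_matrix, mark_offsets):
    """Fill a blank AA (active area) image with distortion-corrected gray values.

    shading_img    : shading correction image (2-D uint8)
    aa_size        : (width, height) of the blank AA area to create
    convert_matrix : 2x3 affine matrix taking AA coordinates to shading-image coordinates
    mark_offsets   : (rows, cols, 2) per-Mark-point distortion amounts (dx, dy)
    """
    aa_w, aa_h = aa_size
    # Blank AA area coordinates, one (x, y) per output pixel.
    xs, ys = np.meshgrid(np.arange(aa_w, dtype=np.float32),
                         np.arange(aa_h, dtype=np.float32))
    # Conversion relation: AA coordinates -> shading correction image coordinates.
    src_x = convert_matrix[0, 0] * xs + convert_matrix[0, 1] * ys + convert_matrix[0, 2]
    src_y = convert_matrix[1, 0] * xs + convert_matrix[1, 1] * ys + convert_matrix[1, 2]
    # Offset relation: upsample the per-Mark distortion amounts to one offset per pixel.
    dx = cv2.resize(mark_offsets[:, :, 0].astype(np.float32), (aa_w, aa_h),
                    interpolation=cv2.INTER_LINEAR)
    dy = cv2.resize(mark_offsets[:, :, 1].astype(np.float32), (aa_w, aa_h),
                    interpolation=cv2.INTER_LINEAR)
    # Combined mapping = conversion relation + offset relation; sample gray values.
    map_x = (src_x + dx).astype(np.float32)
    map_y = (src_y + dy).astype(np.float32)
    return cv2.remap(shading_img, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)
```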
Referring to fig. 7, the present application provides an electronic device, including:
a processor 701, a memory 703, an input-output unit 702, and a bus 704.
The processor 701 is connected to a memory 703, an input-output unit 702, and a bus 704.
The memory 703 stores a program, and the processor 701 invokes the program to perform the correction method shown in fig. 1, fig. 2, fig. 3 and fig. 4.
The present application provides a computer-readable storage medium having a program stored thereon which, when executed on a computer, performs the correction method shown in fig. 1, fig. 2, fig. 3 and fig. 4.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical functional division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.

Claims (10)

1. A correction method for an intelligent glasses screen, characterized by comprising the following steps:
manufacturing a Grid positioning image according to the resolution of the intelligent glasses screen to be detected, wherein uniformly arranged Mark points are provided on the Grid positioning image;
inputting the Grid positioning image into the intelligent glasses screen to be tested, and shooting the intelligent glasses screen to be tested to obtain a dot matrix shooting image;
performing shading correction processing on the dot matrix shooting image to generate a shading correction image;
extracting Mark points of the shading correction image, and obtaining actual Mark lattice position information;
calculating a scaling matrix according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image;
scaling the distorted pixel point coordinates of the shading correction image by using the scaling matrix to obtain expected pixel point coordinates;
calculating a distortion parameter set according to the expected pixel point coordinates and the distortion pixel point coordinates;
calculating a distortion amount according to the distortion parameter set;
and carrying out distortion correction on the pixel points on the shading correction image according to the distortion amount.
2. The correction method according to claim 1, wherein performing shading correction processing on the dot matrix captured image to generate a shading corrected image, comprises:
Generating a row gray value moment and a column gray value moment according to gray data of the dot matrix shooting image;
calculating the row gradient and the column gradient of the dot matrix shooting image according to a preset scale moment set, the row gray value moment and the column gray value moment;
and recalculating the gray value of each pixel according to the row gradient and the column gradient to generate a shading correction image.
3. The correction method according to claim 1, wherein after calculating the scaling matrix according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image, and before scaling the distorted pixel point coordinates of the shading correction image by using the scaling matrix to obtain the expected pixel point coordinates, the correction method further comprises:
determining an intelligent glasses screen area to be detected on the shading correction image according to the scaling matrix, the size information of the Grid positioning image and the expected Mark lattice information;
and removing the background area except the intelligent glasses screen area to be detected from the shading correction image.
4. The correction method according to claim 1, wherein calculating a scaling matrix from the actual Mark lattice information and the expected Mark lattice information of the Grid-positioned image includes:
Determining column direction coordinates of a first column and a last column of Mark points and row direction coordinates of a first row and a last row of Mark points according to the actual Mark lattice information, and generating actual Mark region width and height;
determining column direction coordinates of a first column and a last column of Mark points and row direction coordinates of a first row and a last row of Mark points according to the expected Mark lattice information, and generating an expected Mark area width and height;
determining a column direction proportion and a row direction proportion according to the actual Mark region width and height and the expected Mark region width and height;
and generating a scaling matrix according to the column direction scale, the row direction scale and the two-dimensional space transformation matrix.
5. The correction method according to claim 1, wherein extracting Mark points of the shading correction image, obtaining actual Mark lattice position information, comprises:
performing threshold segmentation on the shading correction image, then separating the connected regions within the selected area, and extracting the Mark point position information by screening shape and pixel area to obtain the actual Mark lattice position information.
6. The correction method according to claim 1, wherein performing distortion correction on the pixel points on the shading correction image according to the distortion amount includes:
Partitioning according to the number of Mark points of the shading correction image;
creating a blank AA area;
creating a conversion relation matrix according to the coordinates of the AA area and the coordinates of the shading correction image, and creating an offset relation matrix according to the distortion amount;
generating a distortion correction matrix according to the conversion relation matrix and the offset relation matrix;
and carrying out geometric transformation correction distortion on the AA area according to the distortion correction matrix, and then carrying out gray filling according to the shading correction image.
7. A correction device for an intelligent glasses screen, characterized by comprising:
the manufacturing unit is used for manufacturing a Grid positioning image according to the resolution of the intelligent glasses screen to be tested, wherein uniformly arranged Mark points are provided on the Grid positioning image;
the shooting unit is used for inputting the Grid positioning image into the intelligent glasses screen to be detected and shooting the intelligent glasses screen to be detected to obtain a dot matrix shooting image;
a generating unit for performing shading correction processing on the dot matrix shooting image to generate a shading correction image;
the first acquisition unit is used for extracting Mark points of the shading correction image and acquiring actual Mark lattice position information;
the first calculation unit is used for calculating a scaling matrix according to the actual Mark lattice information and the expected Mark lattice information of the Grid positioning image;
The second obtaining unit is used for scaling the distorted pixel point coordinates of the shading correction image by using the scaling matrix to obtain expected pixel point coordinates;
the second calculation unit is used for calculating a distortion parameter set according to the expected pixel point coordinates and the distortion pixel point coordinates;
a third calculation unit for calculating an amount of distortion according to the set of distortion parameters;
and the correction unit is used for carrying out distortion correction on the pixel points on the shading correction image according to the distortion amount.
8. The correction device according to claim 7, wherein the generation unit includes:
generating a row gray value moment and a column gray value moment according to gray data of the dot matrix shooting image;
calculating the row gradient and the column gradient of the dot matrix shooting image according to a preset scale moment set, the row gray value moment and the column gray value moment;
and recalculating the gray value of each pixel according to the row gradient and the column gradient to generate a shading correction image.
9. An electronic device is characterized by comprising a processor, a memory, an input-output unit and a bus;
the processor is connected with the memory, the input/output unit and the bus;
The memory holds a program that the processor calls to execute the correction method as claimed in any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a program which, when executed on a computer, performs the correction method according to any one of claims 1 to 6.
CN202311691750.3A 2023-12-11 Correction method and device for intelligent glasses screen, electronic equipment and storage medium Active CN117455818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311691750.3A CN117455818B (en) 2023-12-11 Correction method and device for intelligent glasses screen, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311691750.3A CN117455818B (en) 2023-12-11 Correction method and device for intelligent glasses screen, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117455818A true CN117455818A (en) 2024-01-26
CN117455818B CN117455818B (en) 2024-04-30


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003018447A (en) * 2001-07-04 2003-01-17 Matsushita Electric Ind Co Ltd Image distortion correction device and method
CN106127701A (en) * 2016-06-16 2016-11-16 深圳市凌云视迅科技有限责任公司 Fisheye image distortion correction method and device
CN109035170A (en) * 2018-07-26 2018-12-18 电子科技大学 Adaptive wide-angle image correction method and device based on single grid chart subsection compression
CN109773332A (en) * 2018-12-29 2019-05-21 大族激光科技产业集团股份有限公司 A kind of bearing calibration and more galvanometers correction system of more galvanometer systems
CN112947885A (en) * 2021-05-14 2021-06-11 深圳精智达技术股份有限公司 Method and device for generating curved surface screen flattening image
CN116993612A (en) * 2023-08-03 2023-11-03 昆明理工大学 Nonlinear distortion correction method for fisheye lens

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李海丽: "超大视场近眼显示设备的畸变校正及图像渲染", 《中国优秀硕士学位论文全文数据库(信息科技辑)》, 15 February 2021 (2021-02-15), pages 1 - 58 *

Similar Documents

Publication Publication Date Title
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
US20180144447A1 (en) Image processing apparatus and method for generating high quality image
CN109840477B (en) Method and device for recognizing shielded face based on feature transformation
CN110008806B (en) Information processing device, learning processing method, learning device, and object recognition device
CN106875437B (en) RGBD three-dimensional reconstruction-oriented key frame extraction method
CN111080776B (en) Human body action three-dimensional data acquisition and reproduction processing method and system
CN112070657B (en) Image processing method, device, system, equipment and computer storage medium
CN113810611B (en) Data simulation method and device for event camera
CN114067051A (en) Three-dimensional reconstruction processing method, device, electronic device and storage medium
CN111353956B (en) Image restoration method and device, computer equipment and storage medium
JP7156624B2 (en) Depth map filtering device, depth map filtering method and program
CN114119987A (en) Feature extraction and descriptor generation method and system based on convolutional neural network
CN110827309B (en) Super-pixel-based polaroid appearance defect segmentation method
US20120038785A1 (en) Method for producing high resolution image
CN117455818B (en) Correction method and device for intelligent glasses screen, electronic equipment and storage medium
CN115526891B (en) Training method and related device for defect data set generation model
CN117455818A (en) Correction method and device for intelligent glasses screen, electronic equipment and storage medium
Zhang et al. Consecutive context perceive generative adversarial networks for serial sections inpainting
Fry et al. Validation of modulation transfer functions and noise power spectra from natural scenes
CN116385567A (en) Method, device and medium for obtaining color card ROI coordinate information
Narayan et al. Optimized color models for high-quality 3d scanning
CN112270693B (en) Method and device for detecting motion artifact of time-of-flight depth camera
CN109035306A (en) Moving-target automatic testing method and device
CN113048899A (en) Thickness measuring method and system based on line structured light
CN112884664B (en) Image processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant