CN115578473B - Method and system for correcting output image of diffraction light waveguide


Info

Publication number
CN115578473B
CN115578473B
Authority
CN
China
Prior art keywords
image
output image
color
color value
optical waveguide
Prior art date
Legal status
Active
Application number
CN202211577552.XA
Other languages
Chinese (zh)
Other versions
CN115578473A (en)
Inventor
周宇泽
赵宇暄
孟祥峰
Current Assignee
Zhejiang Zhige Technology Co ltd
Original Assignee
Zhejiang Zhige Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Zhige Technology Co ltd filed Critical Zhejiang Zhige Technology Co ltd
Priority to CN202211577552.XA priority Critical patent/CN115578473B/en
Publication of CN115578473A publication Critical patent/CN115578473A/en
Application granted granted Critical
Publication of CN115578473B publication Critical patent/CN115578473B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method and a system for correcting the output image of a diffractive optical waveguide. A color value mapping table is generated from the color values of the input image of the diffractive optical waveguide and those of its output image, and the output image is then corrected according to this table. This greatly reduces the displayed color error of the output image, makes the output image match expectations more closely, and achieves accurate correction of the color at each position of the output image.

Description

Method and system for correcting output image of diffraction light waveguide
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a system for correcting an output image of a diffraction light waveguide.
Background
Augmented Reality (AR) technology provides additional information (the "augmentation") to a user by technical means, organically blending images of a virtual world with scenes of the real world and deeply integrating computed information with the real world, thereby giving the user richer information and an immersive experience.
Augmented reality can be realized on many hardware platforms, among which wearable AR devices (AR glasses) are the most immersive. Their hardware takes the form of ordinary glasses in which microstructures on the lens surface guide light into the eye, making them the most convenient form factor and the mainstream AR technology. The task of an AR lens is to guide an image from a micro-display into the eye through the lens, and the optical waveguide is the mainstream solution: it consists of a waveguide substrate carrying an in-coupling grating and an out-coupling grating. The basic principle is shown in FIG. 1. Light output by the optical engine 1 is coupled into the waveguide substrate 2 through the in-coupling grating 3 and propagates inside the substrate 2 by total internal reflection. When it meets the out-coupling grating 4, part of the light is coupled out; this out-coupled light (the solid line toward the eye in the figure) enters the eye, so that the eye sees the image output by the optical engine 1. At the same time, the eye sees the real-world scene (the dotted line toward the eye in the figure), and superimposing the two realizes the augmented-reality function.
The main task of a diffractive optical waveguide is to present the user with a clear, bright virtual image that can be superimposed on the real external scene; the main requirements are a large field of view, a large exit pupil, a clear image, small volume, and light weight.
To realize color imaging, the optical engine generally outputs red, green and blue beams at three different wavelengths, which pass through the waveguide's pupil expansion and enter the eye to form an image. Because the diffraction efficiency differs between wavelengths, the propagation step after total internal reflection differs, and energy is attenuated during propagation, the image projected by the optical engine into the diffractive optical waveguide shows obvious color error after being output by the waveguide.
Disclosure of Invention
To overcome the defects of the prior art, the main object of the invention is to provide a method and a system for correcting the output image of a diffractive optical waveguide that make the color of the output image uniform.
The invention is realized by the following technical scheme:
the method for correcting the output image of the diffraction light waveguide comprises the following steps:
generating a color value mapping table from the color values of the first image and the second image; wherein the first image is an input image of the diffractive optical waveguide and the second image is an output image of the diffractive optical waveguide;
and correcting the output image of the diffraction optical waveguide according to the color value mapping table.
Further, the generating a color value mapping table from color values of the first image and the second image specifically includes:
recording color values of an internal area of the first image;
recording color values of an internal area of the second image;
and correspondingly setting the color values of the same positions of the first image and the second image in the internal area to obtain a color value mapping table.
Further, the recording the color value of the internal area of the first image specifically includes:
and in the process of inputting the image to the diffraction optical waveguide by adopting the optical machine, recording the position of each input pixel point of the input image and the color value corresponding to the pixel point.
Further, the recording color values of the internal area of the second image specifically includes:
acquiring a second image;
extracting the color value of each output pixel point in the internal area of the second image;
and recording the position and the color value of the output pixel point.
Further, the correspondingly setting the color values of the first image and the second image at the same position in the internal area to obtain a color value mapping table specifically includes:
and correspondingly setting the color values of the same pixel points in the internal area of the first image and the second image to obtain a color value mapping table.
Further, after the second image is collected, edge recognition is carried out on the second image, and a key area is extracted.
Further, the correcting the output image of the diffractive optical waveguide according to the color value mapping table specifically includes:
determining color values of an output image of the diffractive optical waveguide;
according to the determined color value of the output image of the diffractive light guide, inquiring the corresponding color value of the input image of the diffractive light guide in the color value mapping table;
and obtaining a corrected output image of the diffractive light guide based on the corresponding color value of the input image of the diffractive light guide.
Further, the first image comprises a plurality of pure color images;
the RGB values of different pure color images are different.
Further, the method also comprises the following steps: evaluating the corrected output image of the diffractive optical waveguide, specifically comprising:
dividing the output image before correction and the output image after correction into N parts respectively;
calculating a standard deviation of color values of the output image before correction based on the divided output image before correction;
calculating a standard deviation of color values of the corrected output image based on the divided corrected output image;
and calculating the reduction degree of the standard deviation of the color value of the corrected output image according to the standard deviation of the color value of the output image before correction and the standard deviation of the color value of the corrected output image.
Further, the method also comprises the following steps: evaluating the corrected output image of the diffractive optical waveguide, specifically comprising:
dividing the output image before correction and the output image after correction into N parts respectively;
based on the divided output image before correction, converting the RGB value of each pixel point in the image into two-dimensional points in a color space, and calculating the maximum distance of a first two-dimensional point group;
based on the divided corrected output image, converting the RGB value of each pixel point in the image into a two-dimensional point in a color space, and calculating the maximum distance of a second two-dimensional point group;
and calculating the color value optimization degree of the corrected output image according to the maximum distance of the first two-dimensional point group and the maximum distance of the second two-dimensional point group.
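The maximum-distance evaluation in the claim above can be sketched in code. This is a minimal illustration only: the text does not specify the two-dimensional color-space projection or the optimization-degree formula, so the sketch assumes the chromaticity plane (r = R/(R+G+B), g = G/(R+G+B)) and defines the optimization degree as the relative shrinkage of the maximum pairwise distance.

```python
import numpy as np

def chromaticity_points(rgb):
    """Project RGB triples onto assumed 2-D chromaticity coordinates
    (r, g), where r = R/(R+G+B) and g = G/(R+G+B)."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0                     # avoid division by zero for black pixels
    return rgb[:, :2] / s

def max_pairwise_distance(points):
    """Largest Euclidean distance between any two points in the group."""
    d = points[:, None, :] - points[None, :, :]
    return float(np.sqrt((d ** 2).sum(axis=-1)).max())

def optimization_degree(rgb_before, rgb_after):
    """Assumed metric: relative shrinkage of the colour spread
    after correction (1.0 means the spread collapsed to zero)."""
    d1 = max_pairwise_distance(chromaticity_points(rgb_before))
    d2 = max_pairwise_distance(chromaticity_points(rgb_after))
    return (d1 - d2) / d1
```

A more uniform corrected image yields points clustered tightly in the chromaticity plane, hence a smaller maximum distance and a higher optimization degree.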
Further, the method also comprises the following steps: evaluating the correction method, specifically comprising:
dividing the corrected output image into N images;
calculating a standard deviation of color values of the corrected output image based on the divided corrected output image;
and calculating the actual expected degree of color fitting of the output image according to the standard deviation of the color values of the N images and the corrected output image.
Further, the calculation of the actual expected degree of color fitting of the output image is realized by the following formula:
[formula shown as an image in the original publication]
wherein M represents the actual expected degree of color fitting of the output image, δ represents the standard deviation of the color values, α is the intended color value, N represents the number of parts into which the output image is divided, and x represents the actual output color value of one divided part.
Correspondingly, the invention also provides a system for correcting the output image of the diffraction light waveguide, which comprises a color value mapping table generating unit and a correcting unit;
the color value mapping table generating unit is used for forming a color value mapping table from color values of the first image and the second image; wherein the first image is an input image of the diffractive optical waveguide and the second image is an output image of the diffractive optical waveguide;
and the correcting unit is used for correcting the output image of the diffraction optical waveguide according to the color value mapping table.
Further, the color value mapping table generating unit comprises a first recording module, a second recording module and a processing module;
the first recording module is used for recording color values of an internal area of the first image;
the second recording module is used for recording the color value of the internal area of the second image;
and the processing module is used for correspondingly setting the color values of the first image and the second image at the same position in the internal area to obtain a color value mapping table.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
the invention provides a method for correcting an output image of a diffraction light waveguide, which is characterized in that a color value mapping table is generated by color values of an input image of the diffraction light waveguide and an output image of the diffraction light waveguide, and the output image of the diffraction light waveguide is corrected according to the color value mapping table, so that the display chromatic aberration of the output image of the diffraction light waveguide can be greatly reduced, the output image of the diffraction light waveguide can better meet the expectation of people, and the accurate correction of the color of the corresponding position in the output image of the diffraction light waveguide is realized.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of the transmission principle of a conventional diffractive optical waveguide;
FIG. 2 is an image projected into a prior-art diffractive optical waveguide by the optical engine;
FIG. 3 is the image observed by the human eye after the image of FIG. 2 has passed through the prior-art diffractive optical waveguide;
FIG. 4 is an overall flow chart of the diffractive optical waveguide output image correction method of the present invention;
FIG. 5 shows the captured image before key regions are extracted;
FIG. 6 shows the captured image after key regions are extracted;
FIG. 7 is a schematic diagram of the angular division in the Canny algorithm;
FIG. 8 is a schematic representation of non-maximum suppression in the Canny algorithm;
FIG. 9 is an image projected into the diffractive optical waveguide by the optical engine in accordance with an embodiment of the present invention;
FIG. 10 is the image observed by the human eye after the image of FIG. 9 has passed through the diffractive optical waveguide in accordance with an embodiment of the present invention;
FIG. 11 is a color space distribution diagram.
Description of the reference numerals
1 - optical engine; 2 - waveguide substrate; 3 - in-coupling grating; 4 - out-coupling grating.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments of the present invention, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As used herein, the terms "first," "second," and the like, are not intended to imply any order, quantity, or importance, but rather are used to distinguish one element from another. As used herein, the terms "a," "an," and the like are not intended to mean that there is only one of the described items, but rather that the description is directed to only one of the described items, which may have one or more. As used herein, the terms "comprise," "include," and other similar words are intended to refer to logical interrelationships and are not to be construed as representing spatial structural relationships. For example, "a includes B" is intended to mean that logically B belongs to a, and not that spatially B is located inside a. Furthermore, the terms "comprising," "including," and other similar words are to be construed as open-ended as opposed to closed-ended. For example, "a includes B" is intended to mean that B belongs to a, but B does not necessarily constitute all of a, and a may also include other elements such as C, D, E, and the like.
The terms "embodiment," "present embodiment," "preferred embodiment," "one embodiment" herein do not denote a relative description as applicable to only one particular embodiment, but rather denote that the descriptions may be applicable to one or more other embodiments. Those of skill in the art will understand that any of the descriptions given herein for one embodiment can be combined with, substituted for, or combined with the descriptions of one or more other embodiments to produce new embodiments, which are readily apparent to those of skill in the art and are intended to be within the scope of the present invention.
In the description herein, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise.
When color imaging is realized, the color displayed by an existing diffractive optical waveguide is not uniform. For example, FIG. 2 shows a pure gray image (equal R, G and B) projected into the waveguide by the optical engine, and FIG. 3 shows the image observed by the eye after that gray image has passed through the waveguide (the observed image is in fact colored). Although the optical engine projects a pure gray image, the image the eye observes is no longer pure gray but shows very obvious color error.
The invention provides a method for correcting an output image of a diffraction light waveguide, which solves the problem that the existing diffraction light waveguide has obvious chromatic aberration during color imaging.
The invention relates to a method for correcting an output image of a diffraction light waveguide, which has the following general inventive concept:
the image input into the diffraction optical waveguide by the optical machine and the image observed by human eyes after passing through the diffraction optical waveguide are respectively regarded as independent data sets, the function of the diffraction optical waveguide is to change the image input into the diffraction optical waveguide by the optical machine into the image observed by human eyes, and the change is fixed under the condition of constant input. The input and output of the same position on the image are made into a mapping table, namely the function of the diffraction light waveguide on the input picture can be completely represented, the mapping table is called as LUT (lookup Table), and when the output at a certain position is determined, the value of the required input can be obtained by searching the mapping table.
Specifically, as shown in fig. 4, the method for correcting the output image of the diffractive light waveguide according to the present invention includes the steps of:
s1, generating a color value mapping table from color values of the first image and the second image.
Wherein:
the first image is an input image of the diffractive optical waveguide, such as a projected image projected onto the diffractive optical waveguide using an optical engine.
The second image is an output image of the diffractive optical waveguide, for example, an output image of a projection image projected by the optical engine to the diffractive optical waveguide after passing through the diffractive optical waveguide.
The second image may be acquired using an image recording device, such as a CCD camera.
And S2, correcting the output image of the diffraction optical waveguide according to the color value mapping table.
Specifically, the step S1 specifically includes:
s11, recording color values of the inner area of the first image.
For example, in the process of inputting an image to the diffraction optical waveguide by using the optical machine, the position of each pixel point of the input image and the color value corresponding to the pixel point are recorded.
The recording here may take the form of a matrix [x, y, R, G, B], where x is the abscissa of the pixel, y is the ordinate of the pixel, and R, G and B are the R, G and B values in RGB.
As a preferred embodiment, in order for the color value mapping table to contain a sufficient set of mappings, the first image (i.e., the input image of the diffractive optical waveguide) comprises a plurality of pure-color images with different RGB values. Further preferably, the first image comprises 256 pure-color images whose RGB values are set from 0 to 255: for example, the R, G and B values of every pixel of the first pure-color image are all 0, those of the second pure-color image are all 1, ..., and those of the 256th pure-color image are all 255.
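The calibration frames and the [x, y, R, G, B] recording described above can be sketched as follows. The 640x480 resolution is an assumed example; the text does not specify the optical engine's resolution.

```python
import numpy as np

WIDTH, HEIGHT = 640, 480   # assumed optical-engine resolution, for illustration

def calibration_images():
    """Yield the 256 solid-colour calibration frames described above:
    frame k has R = G = B = k at every pixel."""
    for k in range(256):
        yield k, np.full((HEIGHT, WIDTH, 3), k, dtype=np.uint8)

def record_colors(img):
    """Record every pixel of an image as a row [x, y, R, G, B]."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]                 # per-pixel coordinates
    return np.column_stack([xs.ravel(), ys.ravel(), img.reshape(-1, 3)])
```

Each frame can be named by its gray level k, and `record_colors` produces the per-pixel matrix in exactly the [x, y, R, G, B] form given above.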
As a preferred embodiment, when the first image includes a plurality of pure-color images having different RGB values, each pure-color image may be named by its RGB value, and the color values of the internal area of each pure-color image may be recorded in the matrix form described above.
It should be noted that the use of an optical engine to input the image into the diffractive optical waveguide is not limiting; those skilled in the art may use other devices achieving the same function according to actual needs.
S12, color values of the inner area of the second image are extracted.
Specifically:
the second image is acquired, where the second image may be acquired using an image recording device, such as a CCD camera.
And extracting the color value of each output pixel point in the internal area of the second image.
And recording the position and the color value of each output pixel point in the same way as the first image. That is, a matrix [ x, y, R, G, B ] is adopted, where x represents the abscissa of the pixel, y represents the ordinate of the pixel, R represents the R value in RGB, G represents the G value in RGB, and B represents the B value in RGB.
In a preferred embodiment, after the second image is acquired and before its color values are extracted, the image is subjected to edge recognition to extract a key area. For example, the Canny algorithm may be used to perform edge recognition on the captured image, and OpenCV (Open Source Computer Vision Library, a cross-platform computer vision library) may be used to extract the key region. The key region is the region obtained by identifying the edge in the image captured by the recording device and keeping only what lies inside that edge, i.e., the internal area of the second image mentioned above; other regions are excluded. For example, FIG. 5 shows a captured image before the key region is extracted, and FIG. 6 shows the captured image after the key region is extracted.
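The text uses the Canny algorithm and OpenCV for this step. As a self-contained illustration, the sketch below substitutes a much simpler gradient-threshold edge test and crops to the bounding box of the detected edges (the "key region"); a real implementation would call cv2.Canny and OpenCV's contour functions instead.

```python
import numpy as np

def extract_key_region(img, thresh=10):
    """Simplified stand-in for the Canny/OpenCV pipeline: mark pixels
    whose horizontal or vertical intensity jump exceeds a threshold as
    edges, then crop the image to the edges' bounding box."""
    g = img.astype(float)
    gx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))   # horizontal jumps
    gy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))   # vertical jumps
    edges = (gx > thresh) | (gy > thresh)
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:
        return img                                       # no edges found
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

On a dark frame containing a bright projected rectangle, this crops away the surround and keeps only the displayed area, which is the role the key region plays in the method.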
The size and angle of the acquired second image should correspond to the input image as closely as possible, so that an accurate mapping between the second image and the first image can subsequently be established.
The Canny algorithm is an edge detection method which comprehensively seeks an optimal compromise scheme between noise interference resistance and accurate positioning, and belongs to a method of smoothing before derivation.
From the convolution templates in the x-direction and the y-direction, it can be known that the partial derivative (gradient) in the x-direction is obtained by subtracting the pixels in the top row from the pixel values in the bottom row in the 3 × 3 neighborhood; similarly, the partial derivative in the y-direction is obtained by subtracting the pixels of the left column from the pixel values of the right column.
From this, the gradient magnitude and direction angle at each point can be found:
Q(x, y) = sqrt(pow(gx, 2) + pow(gy, 2)),  α(x, y) = arctan(gy / gx)
where Q represents the gradient magnitude, α represents the direction angle, (x, y) are the point coordinates, pow is the squaring function, sqrt is the square root, and gx and gy are the x-direction and y-direction derivatives of the image at that point, respectively.
Carrying out non-maximum suppression on the amplitude image:
the angles are first divided into four directional ranges: horizontal (0 °, ± 180 °), ± 45 °, vertical (± 90 °), ± 135 °, all in a bi-directional arrangement, as shown in fig. 7.
Then, non-maximum suppression is performed on four basic edge directions of the 3 × 3 region:
as shown in FIG. 8, if the gradient magnitude of the neighborhood along the direction of the center point (i.e., access point) is the largest, it remains; otherwise, inhibit.
Because the captured image may be tilted, after edge identification and before the key region is extracted, the image needs to be rotated as close to horizontal as possible. Specifically, the image is multiplied by a rotation matrix, obtained as follows:
Given the center point coordinates (centerX, centerY), the rotation angle θ and the scaling factor, with a = scale · cos θ and b = scale · sin θ, the transformation matrix T is:
T = [  a   b   (1 - a) · centerX - b · centerY ]
    [ -b   a   b · centerX + (1 - a) · centerY ]
The transformation matrix T is the rotation matrix that brings the captured image to the horizontal direction, where the rotation angle θ is calculated from the inclination angle of the minimum enclosing rectangle of the image found during edge identification.
It should be noted that using the Canny algorithm for edge recognition is only an example of this embodiment; those skilled in the art may use other edge-detection algorithms according to actual needs.
In addition, the relative position of the diffraction optical waveguide and the image recording equipment is ensured to be fixed as much as possible in the image acquisition process, so that the accuracy in extracting the key area can be realized.
Since the coupling-out intensity of the out-coupling grating of the diffractive optical waveguide cannot reach its maximum value, i.e., not all of the output can be collected, appropriate overexposure may be applied during image capture.
When the first image (i.e., the input image of the diffractive optical waveguide) includes a plurality of pure-color images with different RGB values, the second image may include a plurality of display images with non-uniform color. The second images may likewise be named according to the RGB values of the corresponding first images, and the color values of the internal area of each image recorded in the matrix form described above.
S13, correspondingly setting the color values of the first image and the second image in the same position of the internal area to obtain a color value mapping table.
In particular:
and correspondingly setting the color values of the same pixel points in the internal area of the first image and the second image to obtain a color value mapping table.
For example, when the first image (i.e., the diffractive optical waveguide input image) includes a plurality of pure color images with different RGB values, the second image may include a plurality of display images.
The matrix values in the matrix formed by the color values of the internal area of each first image are matched to the matrix values in the matrix formed by the color values of the internal area of the corresponding second image; further, matrix values are matched at points having the same horizontal and vertical coordinates.
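A minimal sketch of building such a per-pixel mapping table, assuming aligned image pairs of equal size; a dictionary keyed by (x, y, output colour) stands in here for the LUT:

```python
import numpy as np

def build_lut(first_imgs, second_imgs):
    """Build the colour value mapping table: for every pixel position,
    map the observed output colour (second image) back to the input
    colour (first image) that produced it.
    first_imgs / second_imgs: lists of aligned HxWx3 uint8 arrays,
    one pair per calibration frame."""
    lut = {}
    for inp, out in zip(first_imgs, second_imgs):
        h, w, _ = inp.shape
        for y in range(h):
            for x in range(w):
                key = (x, y, tuple(out[y, x]))   # position + observed output colour
                lut[key] = tuple(inp[y, x])      # -> input colour that produced it
    return lut
```

Keying the table on position as well as colour is what lets the method correct each position of the output image independently, since the waveguide's colour error varies across the field.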
Specifically, the step S2 specifically includes:
and determining the color value of the output image of the diffraction optical waveguide, namely determining the color value of each pixel point of the output image after the optical machine inputs the image to the diffraction optical waveguide and passes through the diffraction optical waveguide. For example, the color value of the output image of the diffractive optical waveguide determined herein may be determined according to the user requirement, such as the color value (R, G, B value) of each pixel point of the output image of the diffractive optical waveguide provided by the user.
According to the determined color value of the output image of the diffractive light waveguide, the corresponding color value of the input image of the diffractive light waveguide is inquired in the color value mapping table, specifically, based on the determined color value of each pixel point of the output image of the input image of the optical machine to the diffractive light waveguide after passing through the diffractive light waveguide, the color value of each pixel point of the input image of the optical machine to the diffractive light waveguide is inquired in the color value mapping table correspondingly.
And obtaining a corrected output image of the diffractive light guide based on the corresponding color value of the input image of the diffractive light guide. Namely, the optical machine is adopted to project an input image to the diffraction light waveguide, the color value of each pixel point of the input image is the color value of each pixel point of the corresponding required optical machine input image to the diffraction light waveguide, which is inquired in the color value mapping table, so that a corrected output image of the diffraction light waveguide is obtained, and the expected output image of the diffraction light waveguide is obtained.
For example, it is expected that the output image of the diffractive light waveguide is a pure gray image as shown in fig. 2, the corresponding color value of the input image of the diffractive light waveguide is queried in the color value mapping table, as shown in fig. 9, an input image (actually, some colors) of the diffractive light waveguide requiring an optical engine is obtained, and then, after passing through the diffractive light waveguide, the output image of the diffractive light waveguide (actually, close to pure gray) as shown in fig. 10 is obtained, so that the output image of the diffractive light waveguide in this embodiment better conforms to the expected color compared with the output image of the diffractive light waveguide in fig. 3.
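Querying the table to obtain the pre-compensated input image can be sketched as follows. The fallback behaviour for colours absent from the table is an assumption; a practical system would interpolate between nearby table entries instead.

```python
import numpy as np

def precompensate(desired_output, lut, fallback=(0, 0, 0)):
    """Given the desired output image, query the mapping table for the
    input colour to project at each pixel. Colours absent from the
    table fall back to a default (assumed; a real implementation
    would interpolate between nearby entries)."""
    h, w, _ = desired_output.shape
    corrected_input = np.zeros_like(desired_output)
    for y in range(h):
        for x in range(w):
            key = (x, y, tuple(desired_output[y, x]))
            corrected_input[y, x] = lut.get(key, fallback)
    return corrected_input
```

The returned array is what the optical engine projects; after passing through the waveguide it should appear as the desired output image.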
Further preferably, the method for correcting an output image of a diffractive optical waveguide according to the present invention further includes: evaluating the corrected output image of the diffractive optical waveguide, specifically evaluating its image uniformity.
This embodiment evaluates the image uniformity of the corrected output image of the diffractive optical waveguide in two ways.
The first evaluation method uses the degree of standard deviation reduction as the evaluation criterion:
the output image before correction and the output image after correction are divided into N parts respectively.
Based on the divided output image before correction, a standard deviation of color values of the output image before correction is calculated.
Based on the divided corrected output image, a standard deviation of color values of the corrected output image is calculated.
The degree of reduction of the standard deviation of the color values of the corrected output image is then calculated from the standard deviation of the color values of the output image before correction and that of the corrected output image.
The color value standard deviation is specifically calculated by the following formula:

$$\sigma=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(X_i-\bar{X}\right)^{2}} \qquad (1)$$

where σ denotes the standard deviation, N the total number of data, X_i the i-th datum among the N data, and $\bar{X}$ the mean of the N data.
The degree of standard deviation reduction of the color value is specifically calculated by the following formula:

$$\eta=\frac{\sigma_{1}-\sigma_{2}}{\sigma_{1}}\times 100\% \qquad (2)$$

where η represents the degree of standard deviation reduction of the color value, σ₁ the standard deviation of the color values of the output image before correction, and σ₂ the standard deviation of the color values of the corrected output image.
For example, in this embodiment, the corrected output image is divided into N parts, the red (R), green (G) and blue (B) values of each part are extracted, and the standard deviations of R, G and B are calculated according to the above formula (1); the data are shown in table 1 below.
Likewise, the output image before correction is divided into N parts, the R, G and B values of each part are extracted, and the standard deviations of R, G and B are calculated according to the above formula (1); these data are also shown in table 1 below.
The degrees of standard deviation reduction of R, G and B of the corrected output image are then calculated from the standard deviation data of the corrected output image and of the output image before correction; the results are shown in table 2 below:
[Table 1: standard deviations of the R, G and B color values before and after correction]
[Table 2: degrees of standard deviation reduction of R, G and B]
As can be seen from tables 1 and 2, the standard deviation of the color values of the corrected output image of this embodiment is much smaller than that before correction: the degree of reduction exceeds 35% for red, 79% for green and 59% for blue. The output image obtained by the diffractive optical waveguide output image correction method of this embodiment therefore has better image uniformity.
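Evaluation method one can be sketched in a few lines. The block values below are illustrative assumptions, not the patent's measured data; only the formulas follow the text above.

```python
import math

def stdev(values):
    # Population standard deviation, as in formula (1).
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((x - mean) ** 2 for x in values) / n)

def reduction_degree(sigma_before, sigma_after):
    # Degree of standard deviation reduction, in percent.
    return (sigma_before - sigma_after) / sigma_before * 100.0

# Illustrative red-channel values of N = 4 divided image blocks.
before = [80.0, 120.0, 100.0, 140.0]   # output image before correction
after = [98.0, 102.0, 100.0, 104.0]    # corrected output image
print(round(reduction_degree(stdev(before), stdev(after)), 6))   # → 90.0
```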
The second evaluation method uses the maximum distance of the color space coordinates as the evaluation criterion.
The output image before correction and the output image after correction are divided into N parts respectively.
Based on the divided output image before correction, the RGB value of each pixel point in the image is converted into a two-dimensional point in a color space, and the maximum distance of the first two-dimensional point group is calculated.
Based on the divided corrected output image, the RGB value of each pixel point in the image is likewise converted into a two-dimensional point in the color space, and the maximum distance of the second two-dimensional point group is calculated.
The color value optimization degree of the corrected output image is calculated based on the maximum distance of the first two-dimensional point group and the maximum distance of the second two-dimensional point group, specifically by the following formula:

$$K=\frac{D_{1}-D_{2}}{D_{1}}\times 100\% \qquad (3)$$

wherein K represents the color value optimization degree of the corrected output image, D1 represents the maximum distance of the first two-dimensional point group, and D2 represents the maximum distance of the second two-dimensional point group.
In particular, the RGB value of each pixel point in the image may be converted into three-dimensional coordinates x, y and z through a conversion matrix, specifically by one of formulas (4) to (6), which are rendered only as images in the source; a representative form is the sRGB (D65) conversion:

$$\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}0.4124&0.3576&0.1805\\0.2126&0.7152&0.0722\\0.0193&0.1192&0.9505\end{pmatrix}\begin{pmatrix}R\\G\\B\end{pmatrix}$$
The values of the three-dimensional coordinates x, y and z are then converted into the coordinate values u' and v' of a two-dimensional uv coordinate system, yielding the inner dark region A as in fig. 10. The transformation is specifically realized by formula (7), for which the standard CIE 1976 UCS form is:

$$u'=\frac{4x}{x+15y+3z},\qquad v'=\frac{9y}{x+15y+3z} \qquad (7)$$
The farthest distance between the uv coordinate values (u', v') of the point group is then calculated; the data are shown in table 3 below.
The output image before correction is divided into N parts and the same operations are performed, obtaining the inner light-colored region B as in fig. 11 and the farthest distance shown in table 3 below.
[Table 3: farthest distances between color space coordinates before and after correction]
As can be seen from table 3, the farthest color-space distance of the corrected output image is smaller than that of the output image before correction, and the color value optimization degree of the corrected output image, calculated according to the above formula (3), is K = 32.38%. Accordingly, the image uniformity of the corrected output image is better than that of the output image before correction.
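Evaluation method two can be sketched as below. The sRGB (D65) conversion matrix and the CIE 1976 u'v' projection are assumptions standing in for the source's formulas (4) to (7), which are rendered only as images, and the colors are illustrative.

```python
import itertools
import math

# Assumed sRGB (D65) RGB->XYZ matrix, standing in for formulas (4)-(6).
M = [(0.4124, 0.3576, 0.1805),
     (0.2126, 0.7152, 0.0722),
     (0.0193, 0.1192, 0.9505)]

def rgb_to_xyz(rgb):
    r, g, b = (c / 255.0 for c in rgb)
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in M)

def xyz_to_uv(xyz):
    # Assumed CIE 1976 u'v' projection, standing in for formula (7).
    x, y, z = xyz
    d = x + 15.0 * y + 3.0 * z
    return (4.0 * x / d, 9.0 * y / d)

def max_uv_distance(colors):
    # Maximum pairwise distance of the two-dimensional point group.
    pts = [xyz_to_uv(rgb_to_xyz(c)) for c in colors]
    return max(math.dist(p, q) for p, q in itertools.combinations(pts, 2))

def optimization_degree(d1, d2):
    # Formula (3): K = (D1 - D2) / D1 * 100%.
    return (d1 - d2) / d1 * 100.0

before = [(200, 60, 60), (60, 200, 60), (60, 60, 200)]       # scattered colors
after = [(128, 120, 124), (124, 128, 126), (126, 124, 128)]  # near-uniform gray
d1, d2 = max_uv_distance(before), max_uv_distance(after)
print(d2 < d1, round(optimization_degree(d1, d2), 1))
```

The near-uniform gray image collapses to a tight cluster in the u'v' plane, so D2 is much smaller than D1 and K is positive.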
Further, the present invention provides an evaluation method for the diffractive optical waveguide output image correction method, which specifically includes:
the corrected output image is divided into N images.
Based on the divided corrected output image, a standard deviation of color values of the corrected output image is calculated.
The degree to which the output image color fits the actual expectation is then calculated based on the N divided images and the standard deviation of the color values of the corrected output image.
The standard deviation of the color values of the corrected output image is calculated by the formula (1), which is not described herein again.
The degree to which the output image color fits the actual expectation is specifically calculated by the following formula, which is rendered only as an image in the source; a form consistent with the listed symbols is the root-mean-square deviation from the expected color value:

$$M=\sqrt{\delta^{2}+\Big(\frac{1}{N}\sum_{i=1}^{N}x_{i}-\alpha\Big)^{2}} \qquad (8)$$

wherein M represents the degree to which the output image color fits the actual expectation (the smaller M is, the better the fit), δ is the standard deviation of the color values, α is the color value of the intended modulation, N represents the number of image divisions of the output image, and x_i represents the actual output color value of the i-th divided image.
For example, in this embodiment, the corrected output image is divided into N parts, the R, G and B values of each part are extracted, the standard deviation is calculated according to the above formula (1), and the degree to which the output image color fits the actual expectation is then calculated according to the above formula (8); the data are shown in table 4 below.
Similarly, the output image before correction is divided into N parts, the R, G and B values of each part are extracted, the standard deviation is calculated according to the above formula (1), and the fit degree is calculated according to the above formula (8); these data are also shown in table 4 below.
[Table 4: degrees to which the output image color fits the actual expectation, before and after correction]
Further, for the corrected output image and the output image before correction, the improvement rate of the degree to which the output image fits the actual expectation is calculated, specifically by the following formula:

$$P=\frac{M_{1}-M_{2}}{M_{1}}\times 100\% \qquad (9)$$

where P represents the improvement rate, M1 the fit degree of the output image before correction, and M2 the fit degree of the corrected output image.
[Table 5: improvement rate of the degree to which the output image fits the actual expectation]
As can be seen from tables 4 and 5, the method for correcting the output image of the diffractive optical waveguide according to this embodiment has a good correction effect; that is, the chromatic aberration of the output image is greatly reduced, so that the output image better meets the user's expectation.
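The fit-degree evaluation can be sketched as follows. Formula (8) is rendered only as an image in the source, so the root-mean-square form used here (equal to sqrt(δ² + bias²), which involves every symbol the patent lists) is an assumption, and the block values are illustrative.

```python
import math

def fit_degree(block_values, alpha):
    # Assumed form of formula (8): RMS deviation of the N divided-block
    # color values from the expected color value alpha. Mathematically
    # equal to sqrt(delta**2 + (mean - alpha)**2), with delta the standard
    # deviation of the block values; smaller is a better fit.
    n = len(block_values)
    return math.sqrt(sum((x - alpha) ** 2 for x in block_values) / n)

def improvement_rate(m1, m2):
    # Improvement rate P = (M1 - M2) / M1 * 100%.
    return (m1 - m2) / m1 * 100.0

alpha = 128.0                         # expected (modulated) gray value
before = [80.0, 150.0, 110.0, 170.0]  # block values before correction
after = [125.0, 130.0, 127.0, 131.0]  # block values after correction
m1, m2 = fit_degree(before, alpha), fit_degree(after, alpha)
print(m2 < m1, round(improvement_rate(m1, m2), 1))
```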
Correspondingly, the invention also provides a system for correcting the output image of the diffraction light waveguide, wherein the correction system comprises a color value mapping table generation unit and a correction unit.
The color value mapping table generating unit is used for forming a color value mapping table from color values of the first image and the second image; wherein the first image is an input image of the diffractive optical waveguide and the second image is an output image of the diffractive optical waveguide.
And the correcting unit is used for correcting the output image of the diffraction optical waveguide according to the color value mapping table.
Further, the color value mapping table generating unit includes a first recording module, a second recording module, and a processing module.
The first recording module is used for recording the color values of the internal area of the first image; specifically, in the process of inputting an image to the diffractive optical waveguide with the optical engine, it records the position of each input pixel point of the input image and the color value corresponding to that pixel point.
And the second recording module is used for recording the color value of the internal area of the second image.
Specifically, the second recording module specifically includes an acquisition submodule, an extraction submodule, and a recording submodule.
The acquisition submodule is configured to acquire the second image, and may employ, for example, a CCD camera.
And the extraction submodule is used for extracting the color value of each output pixel point in the internal area of the second image.
And the recording submodule is used for recording the position and the color value of the output pixel point.
The processing module is used for correspondingly setting the color values of the first image and the second image at the same position in the internal area to obtain a color value mapping table. Specifically, color values of the same pixel points in the internal area of the first image and the second image are correspondingly set, and a color value mapping table is obtained.
Specifically, the correction unit includes a determination sub-module, a query sub-module, and a correction sub-module.
The determining submodule is used for determining color values of an output image of the diffraction optical waveguide;
the inquiring submodule is used for inquiring the corresponding color value of the input image of the diffractive light guide in the color value mapping table according to the determined color value of the output image of the diffractive light guide;
the correction submodule is used for obtaining a corrected output image of the diffraction light waveguide based on the corresponding color value of the input image of the diffraction light waveguide, and the correction submodule can adopt the diffraction light waveguide.
Although the present invention has been described in detail with reference to the above embodiments, those skilled in the art can make modifications and equivalents to the embodiments of the present invention without departing from the spirit and scope of the present invention, which is set forth in the claims of the present application.

Claims (11)

1. A method of correcting an output image of a diffractive light waveguide, comprising the steps of:
generating a color value mapping table from color values of the first image and the second image, specifically comprising:
recording color values of an internal area of the first image;
recording color values of an internal area of the second image;
correspondingly setting color values of the same positions of the first image and the second image in the internal area to obtain the color value mapping table;
wherein the first image is an input image of the diffractive optical waveguide and the second image is an output image of the diffractive optical waveguide;
correcting the output image of the diffractive optical waveguide according to the color value mapping table, specifically comprising:
determining color values of an output image of the diffractive optical waveguide;
according to the determined color value of the output image of the diffraction optical waveguide, inquiring the corresponding color value of the input image of the diffraction optical waveguide in the color value mapping table;
and obtaining a corrected output image of the diffractive light guide based on the corresponding color value of the input image of the diffractive light guide.
2. The method for correcting the output image of the diffractive light waveguide according to claim 1, wherein the recording color values of the inner area of the first image specifically comprises:
and in the process of inputting the image to the diffraction optical waveguide by adopting the optical machine, recording the position of each input pixel point of the input image and the color value corresponding to the pixel point.
3. The method for correcting the output image of the diffractive light waveguide according to claim 1, wherein said recording color values of the inner area of the second image specifically includes:
acquiring a second image;
extracting the color value of each output pixel point in the internal area of the second image;
and recording the position and the color value of the output pixel point.
4. The method of correcting the diffraction light guide output image according to claim 1, wherein the correspondingly setting the color values of the same positions in the internal area of the first image and the second image to obtain the color value mapping table specifically includes:
and correspondingly setting the color values of the same pixel points in the internal area of the first image and the second image to obtain a color value mapping table.
5. The method of claim 3, wherein after the second image is captured, edge recognition is performed on the second image to extract the key region.
6. The diffractive light waveguide output image correction method according to any one of claims 1 to 5,
the first image comprises a plurality of solid color images;
the RGB values of different solid color images are different.
7. The method of correcting the output image of the diffractive light waveguide according to claim 1, further comprising: evaluating the corrected output image of the diffractive optical waveguide, specifically comprising:
dividing the output image before correction and the output image after correction into N parts respectively;
calculating a standard deviation of color values of the output image before correction based on the divided output image before correction;
calculating a standard deviation of color values of the corrected output image based on the divided corrected output image;
and calculating the reduction degree of the standard deviation of the color value of the corrected output image according to the standard deviation of the color value of the output image before correction and the standard deviation of the color value of the corrected output image.
8. The method of correcting the output image of the diffractive light waveguide according to claim 1, further comprising: evaluating the corrected output image of the diffractive optical waveguide, specifically comprising:
dividing the output image before correction and the output image after correction into N parts respectively;
based on the divided output image before correction, converting the RGB value of each pixel point in the image into two-dimensional points in a color space, and calculating the maximum distance of a first two-dimensional point group;
based on the divided corrected output image, converting the RGB value of each pixel point in the image into a two-dimensional point in a color space, and calculating the maximum distance of a second two-dimensional point group;
and calculating the color value optimization degree of the corrected output image according to the maximum distance of the first two-dimensional point group and the maximum distance of the second two-dimensional point group.
9. The method of correcting the output image of the diffractive light waveguide according to claim 1, further comprising: evaluating the correction method, specifically comprising:
dividing the corrected output image into N images;
calculating a standard deviation of color values of the corrected output image based on the divided corrected output image;
and calculating the actual expected degree of color fitting of the output image according to the standard deviation of the color values of the N images and the corrected output image.
10. The method of correcting the output image of the diffractive light waveguide according to claim 9, wherein the calculating of the degree to which the output image color fits the actual expectation is implemented by the following formula (rendered as an image in the granted text; a form consistent with the recited symbols is):

$$M=\sqrt{\delta^{2}+\Big(\frac{1}{N}\sum_{i=1}^{N}x_{i}-\alpha\Big)^{2}}$$

wherein M represents the degree to which the output image color fits the actual expectation, δ represents the standard deviation of the color values, α is the color value of the expected modulation, N represents the number of image divisions of the output image, and x_i represents the actual output color value of the i-th divided image.
11. A diffraction light waveguide output image correction system is characterized by comprising a color value mapping table generating unit and a correction unit;
the color value mapping table generating unit is used for forming a color value mapping table from color values of the first image and the second image; wherein the first image is an input image of the diffractive optical waveguide and the second image is an output image of the diffractive optical waveguide;
the color value mapping table generating unit comprises a first recording module, a second recording module and a processing module;
the first recording module is used for recording color values of the internal area of the first image;
the second recording module is used for recording the color value of the internal area of the second image;
the processing module is used for correspondingly setting the color values of the first image and the second image at the same position in the internal area to obtain a color value mapping table;
the correcting unit is used for correcting the output image of the diffraction optical waveguide according to the color value mapping table;
the correction unit comprises a determination submodule, a query submodule and a correction submodule;
a determining submodule for determining color values of an output image of the diffractive optical waveguide;
the query submodule is used for querying a corresponding color value of the diffraction light waveguide input image in a color value mapping table according to the determined color value of the output image of the diffraction light waveguide;
and the correction submodule is used for obtaining a corrected output image of the diffraction light waveguide based on the corresponding color value of the input image of the diffraction light waveguide.
CN202211577552.XA 2022-12-09 2022-12-09 Method and system for correcting output image of diffraction light waveguide Active CN115578473B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211577552.XA CN115578473B (en) 2022-12-09 2022-12-09 Method and system for correcting output image of diffraction light waveguide


Publications (2)

Publication Number Publication Date
CN115578473A CN115578473A (en) 2023-01-06
CN115578473B true CN115578473B (en) 2023-04-18

Family

ID=84590238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211577552.XA Active CN115578473B (en) 2022-12-09 2022-12-09 Method and system for correcting output image of diffraction light waveguide

Country Status (1)

Country Link
CN (1) CN115578473B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101860761A (en) * 2010-04-16 2010-10-13 浙江大学 Correction method of color distortion of projected display images
CN104504722A (en) * 2015-01-09 2015-04-08 电子科技大学 Method for correcting image colors through gray points

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100843378B1 (en) * 2006-08-30 2008-07-03 삼성전기주식회사 Reduction apparatus of the display surface in the display system using the optical diffractive modulator
CN102801952B (en) * 2011-05-28 2015-01-21 华为终端有限公司 Method and device for adjusting video conference system
CN102769759B (en) * 2012-07-20 2014-12-03 上海富瀚微电子有限公司 Digital image color correcting method and realizing device
CN103905803B (en) * 2014-03-18 2016-05-04 中国科学院国家天文台 A kind of color calibration method of image and device
CN110570367A (en) * 2019-08-21 2019-12-13 苏州科达科技股份有限公司 Fisheye image correction method, electronic device and storage medium


Also Published As

Publication number Publication date
CN115578473A (en) 2023-01-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant