CN107071234B - Lens shadow correction method and device - Google Patents

Lens shadow correction method and device

Publication number
CN107071234B
Authority
CN
China
Prior art keywords
pixel
gray data
data
pixel point
determining
Prior art date
Legal status
Active
Application number
CN201710049647.7A
Other languages
Chinese (zh)
Other versions
CN107071234A (en)
Inventor
江巍
Current Assignee
Hunan Xingxin Microelectronics Technology Co.,Ltd.
Original Assignee
Shanghai X-Chip Microelectronic Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai X-Chip Microelectronic Technology Co Ltd filed Critical Shanghai X-Chip Microelectronic Technology Co Ltd
Priority to CN201710049647.7A priority Critical patent/CN107071234B/en
Publication of CN107071234A publication Critical patent/CN107071234A/en
Application granted granted Critical
Publication of CN107071234B publication Critical patent/CN107071234B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N 23/88 Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N 25/61 Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/64 Circuits for processing colour signals
    • H04N 9/646 Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters

Abstract

The embodiment of the invention discloses a lens shading correction method and device. The method comprises the following steps: shooting a white field image with a camera under a set light source condition, and extracting pixel gray data of the white field image; determining the pixel point with the maximum gray data according to the pixel gray data; and determining a correction coefficient for each pixel point according to the pixel gray data of each pixel point and the maximum gray data, and recording the correction coefficient in correspondence with the set light source condition, wherein the correction coefficient is used for correction when the camera shoots an image under the set light source condition. The embodiment of the invention solves the problem in the prior art that image correction cannot be performed accurately when the lens device is not level, and realizes adaptive image correction in various device states of the lens.

Description

Lens shadow correction method and device
Technical Field
The embodiment of the invention relates to a digital imaging calibration technology, in particular to a method and a device for correcting lens shadow.
Background
Currently, image sensors are widely used in various fields, such as robots, machine vision, unmanned aerial vehicles, or VR (Virtual Reality) products, and many image sensors integrate modules such as image processing on a chip circuit.
CMOS (Complementary Metal Oxide Semiconductor) is a common type of civil image sensor, whose main principle is to receive light with a pixel array. Lens shading is one of the important factors affecting image quality. It is caused by the combined influence of the parameters of the lens and the image sensor, which produces lens vignetting and pixel vignetting; their combination causes a slowly varying, low-frequency change across the image plane that limits the amount of incoming light, and the incoming light shows the phenomenon of gradually decreasing from the center of the image outwards. The resulting shading affects the shooting quality of the image, and the degree to which the lens shading is corrected also affects subsequent image processing.
At present, lens shading correction is generally carried out by a radial shading correction method, which takes the ratio of the current brightness distance to the maximum and minimum brightness distances as the coefficient formula. However, this method is only applicable when the lens device is level and the light source irradiation point is at the center of the image, and accurate shading correction cannot be performed when the lens device is not level.
Disclosure of Invention
The invention provides a lens shading correction method and a lens shading correction device, which are used for realizing lens shading correction that adapts to different lens device states.
In a first aspect, an embodiment of the present invention provides a lens shading correction method, where the method includes:
shooting and acquiring a white field image under the condition of setting a light source through a camera, and extracting pixel gray data of the white field image;
determining a pixel point of the maximum gray data according to the gray data of the pixel;
and determining a correction coefficient of each pixel point according to the pixel gray data and the maximum gray data of each pixel point, and correspondingly recording the correction coefficient and the set light source condition, wherein the correction coefficient is used for correcting when the camera shoots an image under the set light source condition.
Further, before determining the pixel point of the maximum gray data according to the pixel gray data, the method further includes:
and carrying out smoothing processing on the pixel gray data.
Further, the smoothing of the pixel grayscale data includes:
determining current pixel points one by one from all the pixel points;
according to the neighborhood adaptive value of the pixel gray data of the current pixel point, multiplying and summing the neighborhood adaptive value and a preset operation factor to determine the smooth pixel gray of the current pixel point;
and determining the pixel gray data of the current pixel point according to the smooth pixel gray.
Further, before the smoothing processing is performed on the pixel gray data, the method further includes:
converting the pixel gray data into Bayer format data;
determining current pixel points and associated pixel points of the current pixel points one by one from all the pixel points;
and merging the pixel gray data of the Bayer format data of the current pixel point and the associated pixel point to serve as the pixel gray data of the current pixel point, and deleting the associated pixel point.
Further, determining current pixel points one by one from the pixel points, and the associated pixel points of the current pixel points, includes:
and determining current pixel points one by one from the pixel points, and taking the pixel points which are adjacent to the current pixel points in the set direction and are set at intervals as associated pixel points.
Further, determining a correction coefficient of each pixel point according to the pixel gray data of each pixel point and the maximum gray data, including:
and determining the ratio of the maximum gray data to the pixel gray data of each pixel point, and determining the correction coefficient of each pixel point according to the ratio.
Further, the pixel gray data of each pixel point includes red channel pixel gray data, green channel pixel gray data, and blue channel pixel gray data, and the pixel gray data of each channel is processed respectively.
Further, determining a ratio between the maximum gray data and the pixel gray data of each pixel point, and determining the ratio as a correction coefficient of each pixel point, includes:
determining a first ratio between the maximum gray data of the green channel and the gray data of pixels of the green channel according to the maximum gray data of the green channel, and determining a difference value between the first ratio and 1 as a correction coefficient of the pixels of the green channel;
and determining a second ratio between the maximum gray data of the red or blue channel and the pixel gray data of each pixel point of the red or blue channel according to the maximum gray data of the red or blue channel, and taking a third ratio between the second ratio and the first ratio at the corresponding position as a correction coefficient of each pixel point of the red or blue channel.
Further, before determining the pixel point of the maximum gray data according to the pixel gray data, the method further includes:
and expanding and increasing the number of the bit levels of the pixel gray data.
Further, the shooting and acquiring of the white field image under the condition of setting the light source through the camera comprises:
and respectively shooting and acquiring white field images in light fields with different angles, different quantities or different types of light sources through a camera with fixed gain, and recording the conditions that the light sources with different angles, different quantities or different types of light sources are set as the light sources.
In a second aspect, an embodiment of the present invention further provides a lens shading correction apparatus, including:
the image data acquisition module is used for shooting and acquiring a white field image under the condition of setting a light source through a camera and extracting pixel gray data of the white field image;
the first pixel point determining module is used for determining the pixel point of the maximum gray data according to the pixel gray data;
and the correction coefficient determining module is used for determining the correction coefficient of each pixel point according to the pixel gray data and the maximum gray data of each pixel point, and correspondingly recording the correction coefficient and the set light source condition, wherein the correction coefficient is used for correcting when the camera shoots an image under the set light source condition.
Further, the apparatus further comprises:
and the data smoothing module is used for smoothing the pixel gray data before determining the pixel point of the maximum gray data according to the pixel gray data.
Further, the data smoothing module comprises:
the current pixel point determining unit is used for determining the current pixel points one by one from all the pixel points;
the smooth pixel gray level determining unit is used for multiplying and summing the neighborhood adaptive value and a preset operation factor according to the neighborhood adaptive value of the pixel gray level data of the current pixel point to determine the smooth pixel gray level of the current pixel point;
and the pixel gray data determining unit is used for determining the pixel gray data of the current pixel point according to the smooth pixel gray.
Further, the apparatus further comprises:
the data format conversion module is used for converting the pixel gray data into Bayer format data before the pixel gray data is subjected to smoothing processing;
the second pixel point determining module is used for determining the current pixel point and the associated pixel point of the current pixel point one by one from all the pixel points;
and the pixel gray data merging module is used for merging the pixel gray data of the Bayer format data of the current pixel point and the associated pixel point to serve as the pixel gray data of the current pixel point and deleting the associated pixel point.
Further, the second pixel point determining module is specifically configured to:
and determining current pixel points one by one from the pixel points, and taking the pixel points which are adjacent to the current pixel points in the set direction and are set at intervals as associated pixel points.
Further, the correction coefficient determination module is specifically configured to:
and determining the ratio of the maximum gray data to the pixel gray data of each pixel point, and determining the correction coefficient of each pixel point according to the ratio.
Further, the pixel gray data of each pixel point includes red channel pixel gray data, green channel pixel gray data, and blue channel pixel gray data, and the pixel gray data of each channel is processed respectively.
Further, the correction coefficient determination module includes:
the first correction coefficient determining unit is used for determining a first ratio between the maximum gray data of the green channel and the gray data of pixels of the green channel according to the maximum gray data of the green channel, and determining the difference between the first ratio and 1 as the correction coefficient of the pixels of the green channel;
and the second correction coefficient determining unit is used for determining a second ratio between the maximum gray data of the red or blue channel and the gray data of the pixels of the red or blue channel according to the maximum gray data of the red or blue channel, and taking a third ratio between the second ratio and the first ratio at the corresponding position as the correction coefficient of each pixel of the red or blue channel.
Further, the apparatus further comprises:
and the bit level expansion module is used for expanding and increasing the number of the bit levels of the pixel gray data before determining the pixel point of the maximum gray data according to the pixel gray data.
Further, the image data acquiring module is specifically configured to:
and respectively shooting and acquiring white field images in light fields with different angles, different quantities or different types of light sources through a camera with fixed gain, and recording the conditions that the light sources with different angles, different quantities or different types of light sources are set as the light sources.
The embodiment of the invention extracts the pixel gray data of the white field image, determines the pixel point of the maximum gray data, and determines the correction coefficient of each pixel point according to the pixel gray data of each pixel point and the maximum gray data; that is, the pixel point of the maximum gray data is taken as the reference point, the relation between it and the pixel gray data of each pixel point is determined to obtain the correction coefficient, and the image is corrected. This replaces the prior-art practice of correcting the image with the central pixel point as the reference point, solves the problem in the prior art that the image cannot be corrected accurately when the lens device is not level, and realizes adaptive image correction in various device states of the lens.
Drawings
FIG. 1 is a flowchart illustrating a lens shading correction method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a lens shading correction method according to a second embodiment of the present invention;
FIG. 3A is a flowchart illustrating a lens shading correction method according to a third embodiment of the present invention;
FIG. 3B is a schematic diagram of Bayer format data suitable for use in embodiments of the present invention;
FIG. 3C is a schematic diagram of a current pixel point and an associated pixel point of a red channel according to an embodiment of the present invention;
FIG. 3D is a schematic diagram of a current pixel point and an associated pixel point of a blue channel, which is applicable to the embodiment of the present invention;
FIG. 3E is a diagram illustrating a current pixel point and an associated pixel point of an odd row of a green channel in accordance with an embodiment of the present invention;
FIG. 3F is a diagram illustrating a current pixel point and an associated pixel point in an even row of a green channel in accordance with an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a lens shading correction method according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a lens shading correction apparatus according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a lens shading correction method according to an embodiment of the present invention, where the embodiment is applicable to a case where lens shading correction is adaptively implemented in different states of a lens device, and the method can be implemented by the lens shading correction device according to the embodiment of the present invention, and the device can be implemented in a software and/or hardware manner. The method specifically comprises the following steps:
and S110, shooting by a camera under the condition of setting a light source to obtain a white field image, and extracting pixel gray data of the white field image.
The white field image refers to an image whose content or background is white; it may be, for example, a picture obtained by photographing white paper or frosted glass with a camera. Specifically, the white field image is captured and stored by setting a light source for illumination in a closed darkroom, and the pixel gray data of the white field image is extracted.
Preferably, the step S110 of capturing the white field image by the camera under the condition of setting the light source may further include:
and respectively shooting and acquiring white field images in light fields with different angles, different quantities or different types of light sources through the camera with fixed gain, and recording the light sources with different angles, different quantities or different types of light sources as the set light source condition.
The gain of the camera is set to be constant when the camera shoots white field pictures of different light sources, so that different pictures can be conveniently processed under the same condition, and specifically, the gain of the camera is determined according to a processing result of a historical lens shadow correction coefficient.
In this embodiment, the material of the lens itself determines that the transmittance differs at different positions of the lens; specifically, the transmittance is higher at the center of the lens and lower at its edge. Therefore, when light passes through the lens and forms an image, the gray values of the pixel points at the center of the image are larger and those at the edge are smaller, forming lens shading. When the angle, number or type of the light sources differs, the transmittance of the lens differs, the obtained images differ, and the corresponding image shading correction coefficients differ. For example, the light source may be perpendicular to the lens or at a certain angle to it, the number of light sources may be one or more, and the type of light source may be a D65 light source or a D50 light source, etc.
In this embodiment, the white field images are acquired under different angles, different numbers or different types of light sources, and the light source angles, the number and the types of the white field images are marked, so that the white field images under different set light sources can be conveniently identified.
Preferably, the pixel gray data of each pixel point includes red channel pixel gray data, green channel pixel gray data, and blue channel pixel gray data, and the pixel gray data of each channel is processed respectively.
The pixel gray data of each pixel point is obtained by overlapping the pixel gray data of the red channel, the pixel gray data of the green channel and the pixel gray data of the blue channel, and each pixel point can read the pixel gray data of the three channels.
In this embodiment, extracting the pixel grayscale data of the white field image refers to extracting the red channel pixel grayscale data, the green channel pixel grayscale data, and the blue channel pixel grayscale data of the white field image.
And S120, determining the pixel point of the maximum gray data according to the gray data of the pixel.
In this embodiment, pixel points corresponding to the maximum gray data in the pixel gray data of each channel are respectively determined by extracting the pixel gray data of three channels. Illustratively, when the white field image is stored as a 24-bit true color bmp (Bitmap, image file format) image, the three-channel pixel gray scale data of each pixel point may be 0 to 255, and the maximum gray scale data of the pixel gray scale data of each channel is determined according to the three-channel pixel gray scale data, and illustratively, the three-channel maximum gray scale data may be 255.
S130, determining a correction coefficient of each pixel point according to the pixel gray data and the maximum gray data of each pixel point, and correspondingly recording the correction coefficient and the set light source condition, wherein the correction coefficient is used for correcting when the camera shoots an image under the set light source condition.
In this embodiment, the pixel point of the maximum gray data of the image corresponds to the position of the maximum transmittance of the lens, and the pixel point of the maximum gray data of the image is used as the reference point for image correction. When the lens device is horizontal, the image center pixel point is the pixel point of the maximum gray data in the image, and when the lens device is not horizontal, the image center pixel point is not the pixel point of the maximum gray data in the image.
In this embodiment, the pixel point of the maximum gray data is used as a reference point for image correction, a relationship between the pixel point and the gray data of all the pixel points is determined, a correction coefficient is determined according to the relationship between the gray data of the pixel, and correction is performed when the camera shoots an image. For example, the correction coefficient may be determined by a ratio relationship between the maximum gray data and the pixel gray data of all the pixel points.
It should be noted that, in this embodiment, correction coefficients are respectively determined for white field images acquired under different angles, different numbers, or different types of light sources, and the correction coefficients and setting conditions of the light sources are correspondingly recorded and stored, when a camera performs image shooting, angles, numbers, and types of the light sources can be identified, and the corresponding correction coefficients are called to correct the images, so that the problem that shadow correction can only be performed on images under one light source in the prior art is solved, and image correction under light sources of any angles, numbers, and types is self-adapted.
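As a rough illustration of how correction tables could be recorded against, and later retrieved by, the identified light-source condition, the following Python sketch uses a dictionary keyed by angle, number and type of light source; the function names and key fields are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch: record correction-coefficient tables keyed by the
# light-source condition they were calibrated under, then look them up at
# shooting time. Field names and structure are illustrative assumptions.
calibration_table = {}

def record_correction(angle_deg, num_sources, source_type, coefficients):
    """Store a coefficient map under its light-source condition."""
    calibration_table[(angle_deg, num_sources, source_type)] = coefficients

def lookup_correction(angle_deg, num_sources, source_type):
    """Fetch the coefficient map recorded for the identified condition."""
    return calibration_table.get((angle_deg, num_sources, source_type))

# Example: a single D65 source at 30 degrees.
# record_correction(30, 1, "D65", gain_map)
# gains = lookup_correction(30, 1, "D65")
```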
According to the technical scheme of the embodiment, the pixel gray data of the white field image is extracted, the pixel point of the maximum gray data is determined, the correction coefficient of each pixel point is determined according to the pixel gray data and the maximum gray data of each pixel point, namely the pixel point of the maximum gray data is used as a reference point, the relation between the pixel point of the maximum gray data and the pixel gray data of each pixel point is determined, the correction coefficient is determined, the image is corrected, the condition that the image is corrected by taking the central pixel point as the reference point in the prior art is replaced, the problem that the image correction cannot be accurately performed under the condition that the lens is not at the device level in the prior art is solved, and the self-adaptive image correction under various device states of the lens is realized.
On the basis of the above technical solution, before step S120, the method may further include:
the number of bit levels of the pixel gradation data is expanded and increased.
For example, when the white field image is saved as a 24-bit true color bmp image, the bit level of the red channel pixel gray data, the green channel pixel gray data and the blue channel pixel gray data of the white field image is 8 bits, and the pixel gray of each channel may be 0 to 255. Expanding and increasing the number of bit levels of the pixel gray data may be, for example, expanding the bit level of the pixel gray data of the white field image from 8 bits to 10 bits. Illustratively, the bit level of the pixel gray data may be expanded by: R = (R << (nSensorBit - 8)) | ((1 << (nSensorBit - 8)) - 1), where R is the red channel pixel gray data of each pixel point, and nSensorBit is the bit level to be converted to; illustratively, nSensorBit may be 10.
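A minimal Python/numpy sketch of this bit-level expansion, following the shift-and-fill formula above (the function name and the assumption of 8-bit input are illustrative):

```python
import numpy as np

def expand_bit_depth(channel, n_sensor_bit=10):
    """Expand 8-bit pixel gray data to n_sensor_bit bits by shifting left and
    filling the new low-order bits with ones, per the formula above."""
    shift = n_sensor_bit - 8
    return (channel.astype(np.uint16) << shift) | ((1 << shift) - 1)

# Example: expand_bit_depth(np.array([0, 128, 255], dtype=np.uint8))
# yields [3, 515, 1023] for n_sensor_bit = 10.
```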
In this embodiment, the number of bit levels of the pixel grayscale data is increased by extension, so that the precision of the pixel grayscale data is improved, and the calculation precision of the correction coefficient is improved.
Example two
Fig. 2 is a flowchart of a lens shading correction method according to a second embodiment of the present invention, where on the basis of the above-mentioned embodiment, before determining a pixel point of maximum gray data according to pixel gray data, smoothing processing is further added to the pixel gray data, and accordingly, the method may specifically include:
s210, shooting and acquiring a white field image under the condition of setting a light source through a camera, and extracting pixel gray data of the white field image.
And S220, smoothing the pixel gray scale data.
Noise is inevitably generated in the process of extracting the pixel gray data, the noise reduction effect can be realized by smoothing the pixel gray data, and the pixel gray data can be smoothed by a smoothing filter.
Preferably, step S220 may also be:
determining current pixel points one by one from all the pixel points;
multiplying and summing the neighborhood adaptive value and a preset operation factor according to the neighborhood adaptive value of the pixel gray data of the current pixel point, and determining the smooth pixel gray of the current pixel point;
and determining the pixel gray data of the current pixel point according to the smooth pixel gray.
Let Rij, Gij and Bij respectively denote the red channel pixel gray data, the green channel pixel gray data and the blue channel pixel gray data, where i is the row number of the white field image pixel point, j is the column number of the white field image pixel point, and i and j are positive integers greater than or equal to 1.
In this embodiment, the neighborhood adaptive value of the current pixel point is determined according to a preset rule. Illustratively, suppose the current pixel point is R11. The preset rule may be to select the pixel gray data of a preset number of pixel points sequentially adjacent to the current pixel point in the same column; for example, the preset number may be 7. Specifically, the pixel points sequentially adjacent to R11 in the same column are R21, R31, R41, R51, R61, R71 and R81; the gray data of these 8 pixel points are arranged symmetrically about R11 to form a 15-by-1 matrix, which is determined as the neighborhood adaptive value of the current pixel point: [R81 R71 R61 R51 R41 R31 R21 R11 R21 R31 R41 R51 R61 R71 R81]. It should be noted that, if the number of pixel points sequentially adjacent to the current pixel point in the same column is less than the preset number, the last pixel point is repeated as filler. For example, the pixel point in the last row of the first column of the image may be R91, and the neighborhood adaptive value of R91 is then: [R91 R91 R91 R91 R91 R91 R91 R91 R91 R91 R91 R91 R91 R91 R91].
The preset operation factor is a matrix whose elements sum to 1, and may be, for example, a 3-by-5 matrix M; preferably, the central element of the preset operation factor is the smallest and the surrounding elements gradually increase. Each element in the neighborhood adaptive value of the current pixel point is multiplied by the preset operation factor M and the results are summed: N = 2·R81·M + 2·R71·M + … + 2·R21·M + R11·M, and the sum of all element values in the matrix N is taken as the smooth pixel gray of the current pixel point. The three-channel pixel gray data of the current pixel point are smoothed respectively to determine the pixel gray data of the current pixel point.
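The sketch below illustrates this column-wise neighborhood smoothing in Python/numpy. It treats the preset operation factor as a flat vector of 15 weights summing to 1, which is a simplifying assumption for illustration; the exact shape and values of the factor in the patent are not reproduced here.

```python
import numpy as np

def smooth_channel(channel, weights):
    """Column-wise neighborhood smoothing.

    For each pixel, take the 7 pixels below it in the same column (repeating
    the last row where the column runs out), mirror them symmetrically about
    the current pixel to build a 15-element neighborhood, and return the
    weighted sum with `weights` (15 values summing to 1)."""
    rows, cols = channel.shape
    out = np.empty_like(channel, dtype=np.float64)
    for j in range(cols):
        col = channel[:, j].astype(np.float64)
        for i in range(rows):
            below = [col[min(i + k, rows - 1)] for k in range(1, 8)]
            neighborhood = below[::-1] + [col[i]] + below   # 15 values
            out[i, j] = np.dot(weights, neighborhood)
    return out

# Example weights: uniform 1/15 (illustrative; the patent describes a factor
# whose central element is smallest and whose outer elements gradually increase).
# smoothed = smooth_channel(r_channel, np.full(15, 1.0 / 15.0))
```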
And S230, determining the pixel point of the maximum gray data according to the gray data of the pixel.
In this embodiment, the three-channel pixel gray data after smoothing is processed to determine the pixel point of the maximum gray data after smoothing.
S240, according to the pixel gray data and the maximum gray data of each pixel point, determining a correction coefficient of each pixel point, and correspondingly recording the correction coefficient and the set light source condition, wherein the correction coefficient is used for correcting when the camera shoots an image under the set light source condition.
According to the technical scheme of the embodiment, the three-channel pixel gray data of the white field image are smoothed, so that the noise influence is reduced, and the calculation accuracy of the correction coefficient is improved.
EXAMPLE III
Fig. 3A is a flowchart of a lens shading correction method according to a third embodiment of the present invention. On the basis of the above embodiments, the following is further added before the pixel gray data is subjected to smoothing processing: converting the pixel gray data into Bayer format data; determining current pixel points and associated pixel points of the current pixel points one by one from all the pixel points; and merging the pixel gray data of the Bayer format data of the current pixel point and the associated pixel point to serve as the pixel gray data of the current pixel point, and deleting the associated pixel point. Correspondingly, the method specifically comprises the following steps:
and S310, shooting by a camera under the condition of setting a light source to obtain a white field image, and extracting pixel gray data of the white field image.
And S320, converting the pixel gray data into Bayer format data.
Bayer format data is a data arrangement pattern of the three-channel pixel gray data in which the pixel points in odd rows sample and output pixel gray data of R, G, R, G, … in turn, and the pixel points in even rows sample and output pixel gray data of G, B, G, B, … in turn, as shown in fig. 3B, which is a schematic diagram of Bayer format data. For example, if the data format of the Bayer format data corresponding to the current pixel point is R pixel gray data, the red channel pixel gray data of the current pixel point is determined as the Bayer format data of the current pixel point, and the blue channel pixel gray data and the green channel pixel gray data of the current pixel point are discarded.
In this embodiment, when the pixel gray data of a pixel point is processed, the three-channel pixel gray data of each pixel point is composed of the gray data of one channel output by the pixel point itself and the gray data of the other channels output by adjacent pixel points, so converting the pixel gray data into Bayer format data has substantially no effect on image quality, while the sampling frequency and the amount of data calculation can be reduced by 60%.
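For concreteness, the sketch below builds Bayer format data from three full-resolution channel planes by keeping, at each pixel, only the channel that position samples; an RGGB layout (odd rows R, G, …; even rows G, B, …) is assumed here for illustration.

```python
import numpy as np

def to_bayer(r, g, b):
    """Keep one channel per pixel according to an assumed RGGB layout:
    odd rows sample R, G, R, G, ...; even rows sample G, B, G, B, ...
    (rows/columns counted from 1, i.e. array index 0 is the first, odd row)."""
    bayer = np.empty_like(g)
    bayer[0::2, 0::2] = r[0::2, 0::2]   # R at odd row, odd column
    bayer[0::2, 1::2] = g[0::2, 1::2]   # G at odd row, even column
    bayer[1::2, 0::2] = g[1::2, 0::2]   # G at even row, odd column
    bayer[1::2, 1::2] = b[1::2, 1::2]   # B at even row, even column
    return bayer
```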
S330, determining the current pixel point and the associated pixel point of the current pixel point one by one from the pixel points.
In this embodiment, the current pixel point and the associated pixel point corresponding to the current pixel point are determined by a preset rule in bayer pattern data.
Preferably, step S330 may specifically be:
and determining current pixel points one by one from the pixel points, and taking the pixel points which are adjacent to the current pixel points in the set direction and are set at intervals as associated pixel points.
The associated pixel point and the current pixel point belong to pixel points of the same channel in the Bayer format data. Illustratively, with Pij denoting the pixel point in row i and column j, the current pixel points are determined one by one in this embodiment in the order of gradually increasing i and j. Illustratively, for the red channel pixel points and the blue channel pixel points, the set direction of the associated pixel points is the direction of increasing i and j, the set interval may be 1, and the preset number of associated pixel points may be 3, as shown in fig. 3C and fig. 3D; fig. 3C is a schematic diagram of the current pixel point of the red channel and its associated pixel point, and fig. 3D is a schematic diagram of the current pixel point of the blue channel and its associated pixel point. If the current pixel point of the red channel is the pixel point 301, the pixel point 302 is the associated pixel point of the pixel point 301; if the current pixel point of the blue channel is the pixel point 303, the pixel point 304 is the associated pixel point of the pixel point 303.
Since the number of green channel pixel points is twice the number of red channel pixel points and of blue channel pixel points, the determination method for the associated pixel points of the green channel pixel points differs between odd rows and even rows; see fig. 3E and fig. 3F. Fig. 3E is a schematic diagram of the current pixel point and the associated pixel point in an odd row of the green channel: if the current pixel point in an odd row of the green channel is the pixel point 305, the pixel point 306 is the associated pixel point of the pixel point 305. Fig. 3F is a schematic diagram of the current pixel point and the associated pixel point in an even row of the green channel: if the current pixel point in an even row of the green channel is the pixel point 307, the pixel point 308 is the associated pixel point of the pixel point 307.
S340, combining the pixel gray data of the Bayer format data of the current pixel point and the associated pixel point to serve as the pixel gray data of the current pixel point, and deleting the associated pixel point.
The combination of the pixel gray data of the current pixel point and the associated pixel point refers to that the sum of the pixel gray data of the current pixel point and the pixel gray data of the associated pixel point is used as the pixel gray data of the current pixel point, so that the bit level of the current pixel point is improved, the accuracy of the pixel gray data of the current pixel point is improved, the number of the pixel points is reduced by deleting the associated pixel points, and the data processing amount is reduced.
After all pixel points under Bayer format data in the white field image are processed, new Bayer format data are formed, three-channel pixel gray data are respectively extracted, and a three-channel pixel gray data matrix is formed.
Illustratively, for a red channel pixel point and a blue channel pixel point, pixel gray data of each pixel point is directly extracted to form a red channel pixel gray data matrix and a blue channel pixel gray data matrix. And for the green channel pixel points, sequentially acquiring two green channel pixel points which are diagonally adjacent to the odd-numbered row and the even-numbered row along the direction that i and j gradually increase, determining the average value of the pixel gray data of the two green channel pixel points as a green channel pixel gray data matrix element, and determining a green channel pixel gray data matrix according to the pixel gray data matrix element.
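The following sketch illustrates extracting the three-channel gray data matrices from (merged) Bayer format data, including the diagonal-green averaging described above; it assumes an RGGB layout and even image dimensions, and does not reproduce the exact associated-pixel pairing shown in the figures.

```python
import numpy as np

def bayer_to_channel_matrices(bayer):
    """Extract per-channel gray-data matrices from RGGB Bayer data.

    R and B matrices are taken directly from their sample positions; the green
    matrix is the average of the two diagonally adjacent green samples in each
    2x2 Bayer cell, as described above. RGGB layout and even image dimensions
    are assumptions made for illustration."""
    r = bayer[0::2, 0::2].astype(np.float64)
    g1 = bayer[0::2, 1::2].astype(np.float64)   # green samples on odd rows
    g2 = bayer[1::2, 0::2].astype(np.float64)   # green samples on even rows
    b = bayer[1::2, 1::2].astype(np.float64)
    g = (g1 + g2) / 2.0
    return r, g, b
```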
And S350, smoothing the pixel gray scale data.
And S360, determining the pixel point of the maximum gray data according to the gray data of the pixel.
And S370, determining a correction coefficient of each pixel point according to the pixel gray data and the maximum gray data of each pixel point, and correspondingly recording the correction coefficient and the set light source condition, wherein the correction coefficient is used for correcting when the camera shoots an image under the set light source condition.
According to the technical scheme, the three-channel pixel gray data matrix is reduced in dimension by converting the image pixel gray data into Bayer format data and combining the pixel points with the associated pixel points, the data volume to be processed is reduced, and the data processing efficiency is improved.
Preferably, step S370 may be followed by:
determining a three-channel difference curved surface according to the correction parameters of the three channels;
determining a correction coefficient of the three-channel pixel gray data of the white field image according to the three-channel difference curved surface;
and performing shading correction on the image according to the correction coefficient of the three-channel pixel gray data of the white field image.
In this embodiment, because bayer format data conversion is performed on the pixel gray data of the white field image and the current pixel point and the associated pixel point are combined, the obtained correction coefficient is only the correction coefficient corresponding to the small sample pixel gray data, and the white field image cannot be directly corrected.
In this embodiment, the correction coefficient corresponding to each pixel point of the white field image is determined by a spline curve interpolation algorithm. Illustratively, an XYZ coordinate system is established, wherein the X axis corresponds to the column number of the correction coefficients, the Y axis corresponds to the row number of the correction coefficients, and the Z axis corresponds to the correction coefficient value; the three-channel correction coefficients are respectively mapped to coordinates, the correction coefficients corresponding to the pixel points are fitted to form a B-spline surface, the interpolated values corresponding to all pixel points in the white field image are determined by an interpolation algorithm, the correction coefficients of the white field image are determined according to the interpolated values, and shading correction is performed on the white field image. Illustratively, the shading correction of the white field image may be performed by the following formula: R+ = t · percent · gain1 · R, where R+ is the corrected red channel pixel gray data and R is the red channel pixel gray data of the original white field image; t is an amplification factor, illustratively 16 for red channel and blue channel pixel points and 32 for green channel pixel points; percent is the correction strength, illustratively 90 to 100; and gain1 is the correction coefficient of all pixel points of the red channel.
It should be noted that, if the number of bit levels of the pixel gray data was expanded and increased in the process of processing the three-channel pixel gray data, the corrected pixel gray data needs to be restored to the original bit level when the image is corrected; for example, the image correction may be performed according to the following formula: R+ = (t · percent · gain1 · R) >> (m_nSensorBitWidth - 1).
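As a rough sketch of this step, the code below up-samples the small-sample coefficient grid to the full image with a bivariate spline (scipy's RectBivariateSpline is used here as a stand-in for the B-spline surface described above) and then applies the correction and bit-width restore formulas; the values of t, percent and the sensor bit width are illustrative.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def upsample_gains(gain_small, full_shape):
    """Fit a spline surface through the small-sample coefficient grid and
    evaluate it at every pixel of the full-resolution image (the grid must
    have at least 4 rows and 4 columns for the default cubic spline)."""
    rows_s, cols_s = gain_small.shape
    rows_f, cols_f = full_shape
    ys = np.linspace(0, rows_f - 1, rows_s)   # row positions of the samples
    xs = np.linspace(0, cols_f - 1, cols_s)   # column positions of the samples
    spline = RectBivariateSpline(ys, xs, gain_small)
    return spline(np.arange(rows_f), np.arange(cols_f))

def apply_correction(channel, gain, t=16, percent=96, sensor_bit_width=10):
    """Apply R+ = (t * percent * gain1 * R) >> (m_nSensorBitWidth - 1), per the
    formulas above (t = 32 would be used for the green channel)."""
    corrected = (t * percent * gain * channel.astype(np.float64)).astype(np.int64)
    return corrected >> (sensor_bit_width - 1)
```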
Example four
Fig. 4 is a flowchart of a lens shading correction method according to a fourth embodiment of the present invention. On the basis of the above embodiments, the determination of the correction coefficient of each pixel point according to the pixel gray data of each pixel point and the maximum gray data is further optimized. Correspondingly, the method specifically comprises the following steps:
and S410, shooting by a camera under the condition of setting a light source to obtain a white field image, and extracting pixel gray data of the white field image.
And S420, determining the pixel point of the maximum gray data according to the gray data of the pixel.
S430, determining a ratio between the maximum gray data and the pixel gray data of each pixel point, determining a correction coefficient of each pixel point according to the ratio, and correspondingly recording the correction coefficient and the set light source condition, wherein the correction coefficient is used for correcting when the camera shoots an image under the set light source condition.
In this embodiment, the pixel point of the maximum gray data corresponds to the maximum transmittance position of the lens, the correction coefficient of each pixel point is determined according to the ratio between the maximum gray data and the pixel gray data of each pixel point, and the transmittance of the lens corresponding to each pixel point can be compensated to the maximum transmittance, so that the shadow correction is realized.
Preferably, step S430 may also be:
determining a first ratio between the maximum gray data of the green channel and the gray data of pixels of the green channel according to the maximum gray data of the green channel, and determining a difference value between the first ratio and 1 as a correction coefficient of the pixels of the green channel;
and determining a second ratio between the maximum gray data of the red or blue channel and the pixel gray data of each pixel point of the red or blue channel according to the maximum gray data of the red or blue channel, and taking a third ratio between the second ratio and the first ratio of the corresponding position as a correction coefficient of each pixel point of the red or blue channel.
In the pixel gray data of the white field image or the Bayer format data, the brightness of the red channel and blue channel pixel points is about half of the brightness of the green channel pixel points, and the sensitivity of the human eye to the green channel is greater than its sensitivity to the red and blue channels. Dividing the second ratio, between the maximum gray data of the red or blue channel and the pixel gray data of each pixel point of the red or blue channel, by the first ratio at the corresponding position can improve the sensitivity of the red channel and blue channel pixel points and thus improve their correction effect.
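A minimal Python/numpy sketch of these per-channel coefficients, following the first, second and third ratio definitions above (it assumes the three channel matrices are aligned, i.e. the same shape with corresponding positions, and that the white field data contains no zero pixels):

```python
import numpy as np

def correction_coefficients(r, g, b):
    """Green: (max(G) / G) - 1; red/blue: (max(R) / R) / (max(G) / G) at the
    corresponding position, per the ratios described above."""
    first_ratio = g.max() / g             # max green gray / green pixel gray
    gain_g = first_ratio - 1.0            # green-channel correction coefficient
    gain_r = (r.max() / r) / first_ratio  # second ratio / first ratio (red)
    gain_b = (b.max() / b) / first_ratio  # second ratio / first ratio (blue)
    return gain_r, gain_g, gain_b
```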
According to the technical scheme of the embodiment, the correction coefficient of each pixel point is determined according to the ratio of the three-channel maximum gray data of the white field image to the pixel gray data of each pixel point, the pixel point corresponding to the maximum gray data is used as a correction reference point, the situation that the image is corrected by taking the central pixel point as a reference point in the prior art is replaced, the problem that the image cannot be corrected accurately under the condition that the lens is not in the device level in the prior art is solved, and the self-adaptive image correction under various device states of the lens is realized.
EXAMPLE five
Fig. 5 is a schematic structural diagram of a lens shading correction apparatus according to a fifth embodiment of the present invention, where the apparatus is configured to execute a lens shading correction method according to any embodiment of the present invention, and the apparatus may specifically include:
an image data obtaining module 510, configured to capture a white field image by using a camera under a condition of setting a light source, and extract pixel grayscale data of the white field image;
a first pixel point determining module 520, configured to determine a pixel point of the maximum gray data according to the pixel gray data;
the correction coefficient determining module 530 is configured to determine a correction coefficient of each pixel according to the pixel grayscale data and the maximum grayscale data of each pixel, and record the correction coefficient in correspondence with the set light source condition, where the correction coefficient is used to correct when the camera captures an image under the set light source condition.
Preferably, the apparatus further comprises:
and a data smoothing module 540, configured to perform smoothing processing on the pixel grayscale data before determining a pixel point of the maximum grayscale data according to the pixel grayscale data.
Preferably, the data smoothing module 540 includes:
a current pixel point determining unit 541 configured to determine a current pixel point one by one from the pixel points;
the smooth pixel gray level determining unit 542 is configured to perform multiplication summation on the neighborhood adaptive value and a preset operation factor according to the neighborhood adaptive value of the current pixel gray level data, and determine the smooth pixel gray level of the current pixel;
the pixel gray data determining unit 543 is configured to determine the pixel gray data of the current pixel point according to the smooth pixel gray.
Preferably, the apparatus further comprises:
a data format conversion module 550, configured to convert the pixel grayscale data into bayer format data before performing smoothing processing on the pixel grayscale data;
a second pixel point determining module 560, configured to determine a current pixel point and a related pixel point of the current pixel point one by one from the pixel points;
and the pixel gray data merging module 570 is configured to merge pixel gray data of bayer pattern data of the current pixel point and the associated pixel point, to serve as pixel gray data of the current pixel point, and to delete the associated pixel point.
Preferably, the second pixel point determining module 560 is specifically configured to:
and determining current pixel points one by one from the pixel points, and taking the pixel points which are adjacent to the current pixel points in the set direction and are set at intervals as associated pixel points.
Preferably, the correction coefficient determining module 530 is specifically configured to:
and determining the ratio of the maximum gray data to the pixel gray data of each pixel point, and determining the correction coefficient of each pixel point according to the ratio.
Preferably, the pixel gray data of each pixel point includes red channel pixel gray data, green channel pixel gray data, and blue channel pixel gray data, and the pixel gray data of each channel is processed respectively.
Preferably, the correction coefficient determining module 530 includes:
the first correction coefficient determining unit 531 is configured to determine, according to the maximum grayscale data of the green channel, a first ratio between the maximum grayscale data of the green channel and the grayscale data of pixels in the green channel, and determine a difference between the first ratio and 1 as a correction coefficient of a pixel in the green channel;
the second correction coefficient determining unit 532 is configured to determine a second ratio between the maximum grayscale data of the red or blue channel and the pixel grayscale data of each pixel of the red or blue channel according to the maximum grayscale data of the red or blue channel, and use a third ratio between the second ratio and the first ratio of the corresponding position as the correction coefficient of each pixel of the red or blue channel.
Preferably, the apparatus further comprises:
the bit level expansion module 580 expands the number of bit levels of the pixel gray data to increase before determining the pixel point of the maximum gray data according to the pixel gray data.
Preferably, the image data obtaining module 510 is specifically configured to:
and respectively shooting and acquiring white field images in light fields with different angles, different quantities or different types of light sources through a camera with fixed gain, and recording the conditions that the light sources with different angles, different quantities or different types of light sources are set as the light sources.
The lens shading correction device provided by the embodiment of the invention can execute the lens shading correction method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (9)

1. A lens shading correction method, comprising:
shooting and acquiring a white field image under the condition of setting a light source through a camera, and extracting pixel gray data of the white field image;
determining a pixel point of the maximum gray data according to the gray data of the pixel;
determining a correction coefficient of each pixel point according to the pixel gray data and the maximum gray data of each pixel point, and correspondingly recording the correction coefficient and the set light source condition, wherein the correction coefficient is used for correcting when the camera shoots an image under the set light source condition;
the pixel gray data of each pixel point comprises red channel pixel gray data, green channel pixel gray data and blue channel pixel gray data, and the pixel gray data of each channel is processed respectively;
the method for determining the ratio between the maximum gray data and the pixel gray data of each pixel point and determining the ratio as the correction coefficient of each pixel point comprises the following steps:
determining a first ratio between the maximum gray data of the green channel and the gray data of pixels of the green channel according to the maximum gray data of the green channel, and determining a difference value between the first ratio and 1 as a correction coefficient of the pixels of the green channel;
and determining a second ratio between the maximum gray data of the red or blue channel and the pixel gray data of each pixel point of the red or blue channel according to the maximum gray data of the red or blue channel, and taking a third ratio between the second ratio and the first ratio at the corresponding position as a correction coefficient of each pixel point of the red or blue channel.
2. The method of claim 1, wherein before determining the pixel point of the maximum gray data from the pixel gray data, the method further comprises:
and carrying out smoothing processing on the pixel gray data.
3. The method of claim 2, wherein smoothing the pixel grayscale data comprises:
determining current pixel points one by one from all the pixel points;
according to the neighborhood adaptive value of the pixel gray data of the current pixel point, multiplying and summing the neighborhood adaptive value and a preset operation factor to determine the smooth pixel gray of the current pixel point;
and determining the pixel gray data of the current pixel point according to the smooth pixel gray.
4. The method of claim 2, wherein prior to smoothing the pixel grayscale data, the method further comprises:
converting the pixel gray data into Bayer format data;
determining current pixel points and associated pixel points of the current pixel points one by one from all the pixel points;
and merging the pixel gray data of the Bayer format data of the current pixel point and the associated pixel point to serve as the pixel gray data of the current pixel point, and deleting the associated pixel point.
5. The method of claim 4, wherein determining the current pixel points one by one from the pixel points, and the associated pixel points of the current pixel points, comprises:
and determining current pixel points one by one from the pixel points, and taking the pixel points which are adjacent to the current pixel points in the set direction and are set at intervals as associated pixel points.
6. The method of claim 1, wherein determining the correction factor for each pixel point based on the pixel gray scale data for each pixel point and the maximum gray scale data comprises:
and determining the ratio of the maximum gray data to the pixel gray data of each pixel point, and determining the correction coefficient of each pixel point according to the ratio.
7. The method of any of claims 1-6, wherein prior to determining the pixel having the largest gray level data from the pixel gray level data, the method further comprises:
and expanding and increasing the number of the bit levels of the pixel gray data.
8. The method according to any one of claims 1 to 6, wherein the capturing of the white field image by the camera under the condition of setting the light source comprises:
and respectively shooting and acquiring white field images in light fields with different angles, different quantities or different types of light sources through a camera with fixed gain, and recording the conditions that the light sources with different angles, different quantities or different types of light sources are set as the light sources.
9. A lens shading correction apparatus, comprising:
an image data acquisition module, configured to capture a white field image with a camera under a set light source condition and to extract pixel gray data of the white field image;
a first pixel point determining module, configured to determine the pixel point having the maximum gray data according to the pixel gray data;
a correction coefficient determining module, configured to determine the correction coefficient of each pixel point according to the pixel gray data of each pixel point and the maximum gray data, and to record the correction coefficients together with the corresponding set light source condition, wherein the correction coefficients are used for correction when the camera captures an image under the set light source condition;
wherein the pixel gray data of each pixel point comprises red channel pixel gray data, green channel pixel gray data and blue channel pixel gray data, and the pixel gray data of each channel is processed separately;
wherein the correction coefficient determining module comprises:
a first correction coefficient determining unit, configured to determine a first ratio between the maximum gray data of the green channel and the pixel gray data of each pixel point of the green channel, and to determine the difference between the first ratio and 1 as the correction coefficient of each pixel point of the green channel;
and a second correction coefficient determining unit, configured to determine a second ratio between the maximum gray data of the red or blue channel and the pixel gray data of each pixel point of the red or blue channel, and to take a third ratio between the second ratio and the first ratio at the corresponding position as the correction coefficient of each pixel point of the red or blue channel.
CN201710049647.7A 2017-01-23 2017-01-23 Lens shadow correction method and device Active CN107071234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710049647.7A CN107071234B (en) 2017-01-23 2017-01-23 Lens shadow correction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710049647.7A CN107071234B (en) 2017-01-23 2017-01-23 Lens shadow correction method and device

Publications (2)

Publication Number Publication Date
CN107071234A CN107071234A (en) 2017-08-18
CN107071234B true CN107071234B (en) 2020-03-20

Family

ID=59598362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710049647.7A Active CN107071234B (en) 2017-01-23 2017-01-23 Lens shadow correction method and device

Country Status (1)

Country Link
CN (1) CN107071234B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107592516B (en) * 2017-09-14 2020-01-17 长沙全度影像科技有限公司 Color shadow correction method and system for panoramic camera
CN107590840B (en) * 2017-09-21 2019-12-27 长沙全度影像科技有限公司 Color shadow correction method based on grid division and correction system thereof
CN108111777B (en) * 2017-12-15 2021-02-02 武汉精立电子技术有限公司 Dark corner correction system and method
CN109068025B (en) * 2018-08-27 2021-02-05 建荣半导体(深圳)有限公司 Lens shadow correction method and system and electronic equipment
CN109186963B (en) * 2018-11-07 2020-12-18 常州工学院 Real-time measurement method for blue light leakage
CN110084856B (en) * 2019-04-24 2021-07-27 Oppo广东移动通信有限公司 Method and device for adjusting brightness of calibration image, electronic equipment and storage medium
CN110738606A (en) * 2019-09-06 2020-01-31 深圳新视智科技术有限公司 Image correction method, device, terminal and storage medium for multi-light source system
CN111681201B (en) * 2020-04-20 2023-06-23 深圳市鸿富瀚科技股份有限公司 Image processing method, device, computer equipment and storage medium
CN112243116B (en) * 2020-09-30 2022-05-31 格科微电子(上海)有限公司 Method and device for adjusting multichannel lens shadow compensation LSC gain, storage medium and image processing equipment
CN113160082B (en) * 2021-04-16 2023-05-23 华南理工大学 Vignetting correction method, system, device and medium based on reference image
CN113179364A (en) * 2021-04-26 2021-07-27 上海大学 Image brightness calibration method for endoscope imaging
CN113628228A (en) * 2021-07-27 2021-11-09 昆山丘钛微电子科技股份有限公司 Lens shadow correction data detection method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04331571A (en) * 1991-05-07 1992-11-19 Ricoh Co Ltd Image reader
JP2004260474A (en) * 2003-02-25 2004-09-16 Sharp Corp Image correction processing method and image input device
CN102281389A (en) * 2010-06-11 2011-12-14 三星电子株式会社 Apparatus and method for creating lens shading compensation table suitable for photography environment
CN104049352A (en) * 2013-03-14 2014-09-17 索尼公司 Digital microscope apparatus, information processing method and information processing program
CN105025234A (en) * 2014-04-24 2015-11-04 株式会社东芝 Solid-state imaging device
CN105100550A (en) * 2014-04-21 2015-11-25 展讯通信(上海)有限公司 Shadow correction method and device and imaging system
CN105933686A (en) * 2016-05-19 2016-09-07 浙江大学 Method for correcting color lens shadow of digital camera with adaptive light source

Also Published As

Publication number Publication date
CN107071234A (en) 2017-08-18

Similar Documents

Publication Publication Date Title
CN107071234B (en) Lens shadow correction method and device
US8406557B2 (en) Method and apparatus for correcting lens shading
US8089533B2 (en) Fixed pattern noise removal circuit, fixed pattern noise removal method, program, and image pickup apparatus
US8804013B2 (en) Method of calculating lens shading compensation factor and method and apparatus for compensating for lens shading by using the method
CN110930301B (en) Image processing method, device, storage medium and electronic equipment
JP2011010108A (en) Imaging control apparatus, imaging apparatus, and imaging control method
JP2016048815A (en) Image processor, image processing method and image processing system
US10778915B2 (en) Dual-aperture ranging system
WO2017028742A1 (en) Image denoising system and image denoising method
WO2018152977A1 (en) Image noise reduction method and terminal, and computer storage medium
US20110311156A1 (en) Method and system for correcting lens shading
US9373158B2 (en) Method for reducing image artifacts produced by a CMOS camera
US9374568B2 (en) Image processing apparatus, imaging apparatus, and image processing method
JP4994158B2 (en) Image correction device
JP2010041682A (en) Image processing apparatus, and image processing method
JP2018160024A (en) Image processing device, image processing method and program
CN114331893A (en) Method, medium and electronic device for acquiring image noise
JP2011205399A (en) Image processing apparatus and method, and image processing program
Lluis-Gomez et al. Chromatic aberration correction in RAW domain for image quality enhancement in image sensor processors
JP2021110885A (en) Image capturing apparatus and control method thereof
CN106920217B (en) Image correction method and device
JP2020061129A (en) Method for processing image, image processor, imaging device, image processing system, program, and storage medium
JP2005175909A (en) Signal processing method and image acquiring device
TWI415031B (en) Image calibration method
KR101065223B1 (en) Image sensor and method for correcting lense shading

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210430

Address after: Room 403-406, 4th floor, building C6, Changsha Science and technology new town, No.77, South Dongliu Road, Changsha Economic and Technological Development Zone, Changsha, Hunan 410000

Patentee after: Hunan Xingxin Microelectronics Technology Co.,Ltd.

Address before: Room 4403-k, no.1325 Mudanjiang Road, Baoshan District, Shanghai 201900

Patentee before: X CHIP MICROELECTRONICS TECHNOLOGY Co.,Ltd.