CN109068025B - Lens shadow correction method and system and electronic equipment - Google Patents
- Publication number: CN109068025B (application CN201810982999.2A)
- Authority: CN (China)
- Prior art keywords: gray, value, obtaining, reference matrix, correction
- Prior art date: 2018-08-27
- Legal status: Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
- Color Television Image Signal Generators (AREA)
Abstract
The invention belongs to the field of image processing and provides a lens shading correction method, a system and an electronic device. The method comprises the steps of: obtaining an image and splitting it into four color component maps; obtaining a reference matrix of each color component map and obtaining a gray reference value from the reference matrix; dividing each component map into grids of a fixed size and obtaining a first gray mean value of each grid; obtaining a lens shading correction scale coefficient table from the gray reference value and the first gray mean value; and performing lens shading correction according to the scale coefficient table. The system comprises a color component map acquisition module, a gray reference value acquisition module, a first gray value acquisition module, a scale coefficient table acquisition module and a lens shading correction module. By adopting an improved grid correction method, the invention does not require the brightest point to lie at the image center, corrects shading at the edges better, and avoids the influence of inherent local defect points.
Description
Technical Field
The invention belongs to the field of image processing, and particularly relates to a lens shadow correction method, a lens shadow correction system and electronic equipment.
Background
Lens shading correction addresses the shading that appears toward the edges of an image because the lens does not refract light uniformly. Lens shading can be subdivided into luminance shading (luma shading) and color shading. Luma shading refers to the phenomenon in which, owing to the optical characteristics of the lens, the edge regions of the sensor's image area receive less light than the center, so that brightness is non-uniform between the center and the four corners. Color shading refers to the color non-uniformity caused by the lens refracting light of different wavelengths at different angles.
To address these two kinds of shading, existing algorithms fall into two categories: one gathers and analyzes the brightness information of the pixels and performs polynomial or other nonlinear fitting; the other gathers and analyzes the color information (R, G, B) of the pixels and performs polynomial or other nonlinear fitting per color channel.
Existing techniques suffer from problems such as the brightest point deviating from the image center, edge attenuation deviating from the assumed light fall-off law, and calibration being affected by local defect points. Their application range is therefore greatly limited, and they cannot correct lens shading well across a variety of cameras.
Disclosure of Invention
The invention provides a lens shading correction method, a system and an electronic device, aiming to solve the technical problem that the prior art cannot correct lens shading well for a variety of cameras.
To this end, in one aspect of the present invention, there is provided a lens shading correction method including the steps of:
S1, acquiring an image and splitting the image into four color component maps;
S2, acquiring a reference matrix of each color component map, and obtaining a gray reference value according to the reference matrix;
S3, dividing each component map into grids of a fixed size, and obtaining a first gray mean value of each grid;
S4, obtaining a lens shading correction scale coefficient table according to the gray reference value and the first gray mean value;
and S5, performing lens shading correction according to the scale coefficient table.
Preferably, in step S2, obtaining the gray reference value according to the reference matrix specifically includes:
S21, obtaining a first gray median of the reference matrix;
S22, performing gray correction on the reference matrix according to the absolute value of the difference between the first gray median and the gray value of each point in the reference matrix;
and S23, obtaining a second gray mean value from the gray-corrected reference matrix and using the second gray mean value as the gray reference value.
Preferably, between the step S2 and the step S3, assigning the gray reference value to a center point of the reference matrix is further included.
Preferably, in step S3, obtaining the first gray mean value of each grid specifically includes:
S31, acquiring a sub-reference matrix of the grid;
S32, obtaining a second gray median of the sub-reference matrix;
S33, performing gray correction on the sub-reference matrix according to the absolute value of the difference between the second gray median and the gray value of each point in the sub-reference matrix;
and S34, obtaining a third gray mean value from the gray-corrected sub-reference matrix as the first gray mean value of the grid.
Preferably, in step S3, when each component map is divided into grids of a fixed size, if the edges of the component map cannot fill a whole grid, the component map is expanded by copying edge gray values.
Preferably, the step S4 specifically includes: normalizing the quotient of the gray reference value and the first gray mean value to obtain the lens shading correction scale coefficient table.
Preferably, the step S5 specifically includes: obtaining an image to be corrected; determining the grid in which a pixel to be corrected is located according to the dimensions of the image to be corrected and the position of the pixel to be corrected; obtaining from the scale coefficient table the scale coefficient corresponding to the pixel to be corrected according to the grid in which it is located; and normalizing the product of the pixel value of the pixel to be corrected and its corresponding scale coefficient to complete the lens shading correction.
Preferably, in step S5, if the pixel to be corrected is located between the grids, the scale coefficient corresponding to the pixel to be corrected is obtained by performing bilinear interpolation on the scale coefficients of the four grids nearest to it.
In another aspect of the present invention, the present invention further provides a lens shading correction system, including a color component map obtaining module, a gray reference value obtaining module, a first gray value obtaining module, a scale coefficient table obtaining module, and a lens shading correction module;
the color component map acquisition module is used for acquiring an image and splitting the image into four color component maps;
the gray reference value acquisition module is used for acquiring a reference matrix of each color component diagram and acquiring a gray reference value according to the reference matrix;
the first gray value acquisition module is used for dividing each component image into grids with fixed sizes and acquiring a first gray average value of each grid;
the scale coefficient table acquisition module is used for obtaining a lens shading correction scale coefficient table according to the gray reference value and the first gray mean value;
and the lens shading correction module is used for performing lens shading correction according to the scale coefficient table.
In another aspect of the present invention, the present invention also provides an electronic device including a memory and a processor, the processor including the lens shading correction module described above.
The lens shading correction method, system and electronic device provided by the invention adopt an improved grid correction method and obtain a scale coefficient table over a plurality of grids for each color component map. They do not require the brightest point to lie at the image center, correct shading at the edges better, avoid the influence of inherent local defect points, and are therefore suitable for shading correction of a variety of lenses without the limited application range of the prior art.
Drawings
FIG. 1 is a flowchart illustrating a lens shading correction method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of splitting an image into four color component maps according to a first embodiment of the present invention;
FIG. 3 is a flowchart of a method for obtaining a gray reference value according to the reference matrix according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a reference matrix and a center point thereof according to an embodiment of the invention;
FIG. 5 is a flowchart of a method for obtaining a first gray average of each cell according to a first embodiment of the present invention;
fig. 6 is a schematic diagram of a pixel point Pxy to be corrected between lattices in the first embodiment of the present invention;
fig. 7 is a structural diagram of a lens shading correction system according to a second embodiment of the present invention.
Wherein, 1-color component diagram acquisition module; 2-a gray reference value obtaining module; 3-a first grey value acquisition module; 4-a coefficient of proportionality table acquisition module; and 5-a lens shading correction module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention performs color shading correction in the color domain (namely the Bayer domain) and adopts an improved grid correction method; it does not require the brightest point to lie at the image center, corrects shading at the edges better, and avoids the influence of inherent local defect points.
In a first embodiment of the present invention, the present invention provides a lens shading correction method, as shown in fig. 1, including the following steps:
S1, acquiring an image and splitting the image into four color component maps;
S2, acquiring a reference matrix of each color component map, and obtaining a gray reference value according to the reference matrix;
S3, dividing each component map into grids of a fixed size, and obtaining a first gray mean value of each grid;
S4, obtaining a lens shading correction scale coefficient table according to the gray reference value and the first gray mean value;
and S5, performing lens shading correction according to the scale coefficient table.
Before step S1, the method further includes: setting a light box to a white light source at level LV10, setting an appropriate exposure value for the sensor, aiming the sensor flat at the central area of the light box, and capturing a raw image, which is used as the image in step S1.
In an embodiment of the present invention, the light box model is LSB-111BAT.
The raw image is Bayer-format image data obtained directly from the CMOS sensor.
The step S1 specifically includes: splitting the pixels of the raw image into four color component maps according to their positions in odd rows and odd columns, odd rows and even columns, even rows and odd columns, and even rows and even columns. As shown in fig. 2, the left side shows the RG/GB arrangement of the Bayer format: the R and G components alternate along the odd rows and the G and B components alternate along the even rows. Because of crosstalk between neighboring colors in the CMOS sensor, the G components of the odd rows and of the even rows differ slightly, so the raw image is split into four color components according to this position relationship to facilitate the correction processing.
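As an illustration of this splitting step, the following is a minimal sketch (not part of the patent) assuming the raw image is available as a two-dimensional numpy array; the channel names in the comments assume an RG/GB arrangement and would differ for other Bayer orders.

```python
import numpy as np

def split_bayer(raw):
    """Split a Bayer-pattern raw image into four color component maps by the
    parity of each pixel's row and column (odd/odd, odd/even, even/odd and
    even/even in the 1-based terms used above)."""
    c_oo = raw[0::2, 0::2]  # e.g. R  component (odd rows, odd columns)
    c_oe = raw[0::2, 1::2]  # e.g. Gr component (odd rows, even columns)
    c_eo = raw[1::2, 0::2]  # e.g. Gb component (even rows, odd columns)
    c_ee = raw[1::2, 1::2]  # e.g. B  component (even rows, even columns)
    return c_oo, c_oe, c_eo, c_ee
```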
As shown in fig. 3, in step S2, obtaining the gray reference value according to the reference matrix specifically includes:
S21, obtaining a first gray median of the reference matrix;
S22, performing gray correction on the reference matrix according to the absolute value of the difference between the first gray median and the gray value of each point in the reference matrix;
and S23, obtaining a second gray mean value from the gray-corrected reference matrix and using the second gray mean value as the gray reference value.
The step S22 specifically includes: obtaining the absolute value of the difference between the first gray median and the gray value of each point in the reference matrix, judging whether the absolute value is greater than a preset threshold, and if so, assigning the gray value of that point to be the first gray median.
In step S2, the reference matrix is a 5 × 5 matrix.
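A minimal sketch of steps S21 to S23 is given below. It assumes the 5 × 5 reference matrix is taken at the centre of the component map and uses an arbitrary example threshold, since the patent only specifies "a preset threshold" and does not fix the position of the reference matrix.

```python
import numpy as np

def gray_reference_value(component, ref_size=5, threshold=8.0):
    """Compute the gray reference value of one color component map.

    S21: take the median of the reference matrix.
    S22: replace points whose distance from the median exceeds the threshold.
    S23: return the mean of the corrected matrix as the gray reference value.
    """
    h, w = component.shape
    r0 = h // 2 - ref_size // 2
    c0 = w // 2 - ref_size // 2
    ref = component[r0:r0 + ref_size, c0:c0 + ref_size].astype(np.float64)

    median = np.median(ref)                         # first gray median (S21)
    ref[np.abs(ref - median) > threshold] = median  # gray correction (S22)
    return ref.mean()                               # second gray mean (S23)
```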
Between the step S2 and the step S3, assigning the gray reference value to a center point of the reference matrix is further included.
The center point of the reference matrix is shown in fig. 4.
As shown in fig. 5, in step S3, obtaining the first gray mean value of each grid specifically includes:
S31, acquiring a sub-reference matrix of the grid;
S32, obtaining a second gray median of the sub-reference matrix;
S33, performing gray correction on the sub-reference matrix according to the absolute value of the difference between the second gray median and the gray value of each point in the sub-reference matrix;
and S34, obtaining a third gray mean value from the gray-corrected sub-reference matrix as the first gray mean value of the grid.
In a specific embodiment of the present invention, the sub-reference matrix is a 5 × 5 matrix.
In step S3, when each component map is divided into grids of a fixed size, if the edges of the component map cannot fill a whole grid, the component map is expanded by replicating the edge gray values.
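The grid division and the per-grid first gray mean (steps S31 to S34) could look like the sketch below. The 64 × 32 grid size follows the example of fig. 6; the position of the 5 × 5 sub-reference matrix inside each grid and the clipping threshold are assumptions, as the patent does not fix them.

```python
import numpy as np

def grid_means(component, grid_w=64, grid_h=32, sub_size=5, threshold=8.0):
    """Divide a component map into fixed-size grids and return the first gray
    mean of each grid, replicating edge values where the map does not fill a
    whole grid."""
    h, w = component.shape
    pad_h = (-h) % grid_h
    pad_w = (-w) % grid_w
    padded = np.pad(component.astype(np.float64),
                    ((0, pad_h), (0, pad_w)), mode="edge")  # edge replication

    rows = padded.shape[0] // grid_h
    cols = padded.shape[1] // grid_w
    means = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            cell = padded[i * grid_h:(i + 1) * grid_h,
                          j * grid_w:(j + 1) * grid_w]
            # sub-reference matrix, here assumed to sit at the grid centre (S31)
            r0 = grid_h // 2 - sub_size // 2
            c0 = grid_w // 2 - sub_size // 2
            sub = cell[r0:r0 + sub_size, c0:c0 + sub_size].copy()
            med = np.median(sub)                      # second gray median (S32)
            sub[np.abs(sub - med) > threshold] = med  # gray correction (S33)
            means[i, j] = sub.mean()                  # third gray mean (S34)
    return means
```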
The step S4 specifically includes: normalizing the quotient of the gray reference value and the first gray mean value to obtain the lens shading correction scale coefficient table.
In step S4, the formula for obtaining the scale coefficient is rate_x = (int)(grey_mid / grey_x × 256), where x denotes the number of the grid, rate_x denotes the scale coefficient of the grid numbered x, grey_mid denotes the gray reference value, grey_x denotes the first gray mean value corresponding to the grid numbered x, and grey_mid and grey_x are of double type.
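The coefficient table of step S4 then follows directly from the formula above. The sketch below assumes grey_mid comes from gray_reference_value and grid_mean_table from grid_means as sketched earlier, with 256 as the fixed-point representation of 1.0.

```python
import numpy as np

def rate_table(grey_mid, grid_mean_table):
    """Build the lens shading correction scale coefficient table:
    rate_x = (int)(grey_mid / grey_x * 256) for every grid x."""
    return (grey_mid / grid_mean_table * 256).astype(np.int32)
```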
The step S5 specifically includes: obtaining an image to be corrected; determining the grid in which a pixel to be corrected is located according to the dimensions of the image to be corrected and the position of the pixel to be corrected; obtaining from the scale coefficient table the scale coefficient corresponding to the pixel to be corrected according to the grid in which it is located; and normalizing the product of the pixel value of the pixel to be corrected and its corresponding scale coefficient to complete the lens shading correction.
Between step S4 and step S5, the method further includes: configuring the scale coefficient tables of the four color component maps into hardware in a preset order, where the hardware may be a video processing chip.
The image to be corrected is an image obtained from the video processing chip.
The pixel point to be corrected can be any pixel point of the image to be corrected.
In step S5, if the pixel to be corrected is located between the grids, the scaling factor corresponding to the pixel is obtained by performing bilinear interpolation on the scaling factors of the four nearest grids.
As shown in fig. 6, the point Pxy is a pixel to be corrected located between grids, its row number is x and its column number is y, P00, P01, P10 and P11 denote the scale coefficients of the four grids nearest to the point Pxy, and w = 64 and h = 32 indicate a grid size of 64 × 32. The scale coefficient corresponding to the pixel Pxy is then obtained by bilinear interpolation as: Pxy = (P00 × (64 − x) × (32 − y) + P01 × x × (32 − y) + P10 × (64 − x) × y + P11 × x × y) / (64 × 32).
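A sketch of the bilinear interpolation and the final normalization is shown below. It follows the formula of fig. 6 literally; the division by 256 in correct_pixel assumes the fixed-point scale used when the coefficient table was built.

```python
def interp_rate(p00, p01, p10, p11, x, y, w=64, h=32):
    """Bilinearly interpolate the scale coefficient of a pixel lying between
    grids, where (x, y) is its offset inside the w x h cell and p00..p11 are
    the coefficients of the four nearest grids (fig. 6)."""
    return (p00 * (w - x) * (h - y)
            + p01 * x * (h - y)
            + p10 * (w - x) * y
            + p11 * x * y) / (w * h)


def correct_pixel(pixel_value, rate):
    """Multiply the pixel value by its scale coefficient and normalize by the
    fixed-point scale of 256 to complete the correction of that pixel."""
    return int(pixel_value * rate) // 256
```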
In a second embodiment of the present invention, there is further provided a lens shading correction system, as shown in fig. 7, including a color component map acquisition module 1, a gray reference value acquisition module 2, a first gray value acquisition module 3, a scale coefficient table acquisition module 4, and a lens shading correction module 5. The first output of the color component map acquisition module 1 is connected to the input of the gray reference value acquisition module 2, and the second output of the color component map acquisition module 1 is connected to the input of the first gray value acquisition module 3; the output of the gray reference value acquisition module 2 is connected to the first input of the scale coefficient table acquisition module 4, and the output of the first gray value acquisition module 3 is connected to the second input of the scale coefficient table acquisition module 4; the output of the scale coefficient table acquisition module 4 is connected to the input of the lens shading correction module 5.
The color component map acquisition module 1 is configured to acquire an image and split the image into four color component maps;
the gray reference value obtaining module 2 is configured to obtain a reference matrix of each color component map, and obtain a gray reference value according to the reference matrix;
the first gray value obtaining module 3 is configured to divide each component image into grids of a fixed size, and obtain a first gray average value of each grid;
the scale coefficient table acquisition module 4 is configured to obtain the lens shading correction scale coefficient table according to the gray reference value and the first gray mean value;
the lens shading correction module 5 is configured to perform lens shading correction according to the scale coefficient table.
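For context, a minimal calibration sketch that mirrors this module chain of fig. 7 could reuse the functions sketched in the first embodiment (split_bayer, gray_reference_value, grid_means and rate_table); it is illustrative only and omits the configuration of the tables into hardware.

```python
def build_correction_tables(raw):
    """Chain the modules: split the flat-field raw image into four component
    maps, then derive one scale coefficient table per component map."""
    tables = []
    for component in split_bayer(raw):
        grey_mid = gray_reference_value(component)  # gray reference value module
        means = grid_means(component)               # first gray value module
        tables.append(rate_table(grey_mid, means))  # scale coefficient table module
    return tables
```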
In a third embodiment of the present invention, an electronic device is provided, which includes a memory and a processor including the lens shading correction module.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (8)
1. A lens shading correction method is characterized by comprising the following steps:
S1, acquiring an image and splitting the image into four color component maps;
S2, obtaining a reference matrix of each color component map, and obtaining a gray reference value according to the reference matrix,
which specifically comprises the following steps:
S21, obtaining a first gray median of the reference matrix;
S22, performing gray correction on the reference matrix according to the absolute value of the difference between the first gray median and the gray value of each point in the reference matrix; wherein step S22 specifically includes: obtaining the absolute value of the difference between the first gray median and the gray value of each point in the reference matrix, judging whether the absolute value is greater than a preset threshold, and if so, assigning the gray value of that point to be the first gray median;
S23, obtaining a second gray mean value from the gray-corrected reference matrix and taking the second gray mean value as the gray reference value;
S3, dividing each component map into grids of a fixed size, and obtaining a first gray mean value of each grid;
S4, obtaining a lens shading correction scale coefficient table according to the gray reference value and the first gray mean value,
specifically: normalizing the quotient of the gray reference value and the first gray mean value to obtain the lens shading correction scale coefficient table;
and S5, performing lens shading correction according to the scale coefficient table.
2. The lens shading correction method according to claim 1, further comprising, between the step S2 and the step S3, assigning the gray reference value to a center point of the reference matrix.
3. The lens shading correction method according to claim 1, wherein in step S3, obtaining the first gray mean value of each grid specifically includes:
S31, acquiring a sub-reference matrix of the grid;
S32, obtaining a second gray median of the sub-reference matrix;
S33, performing gray correction on the sub-reference matrix according to the absolute value of the difference between the second gray median and the gray value of each point in the sub-reference matrix;
and S34, obtaining a third gray mean value from the gray-corrected sub-reference matrix as the first gray mean value of the grid.
4. The lens shading correction method according to claim 1, wherein in step S3, when dividing each component map into grids of a fixed size, if an edge of the component map cannot fill a whole grid, the component map is expanded by copying edge gray values.
5. The lens shading correction method according to claim 1, wherein the step S5 specifically includes: obtaining an image to be corrected; determining the grid in which a pixel to be corrected is located according to the dimensions of the image to be corrected and the position of the pixel to be corrected; obtaining from the scale coefficient table the scale coefficient corresponding to the pixel to be corrected according to the grid in which it is located; and normalizing the product of the pixel value of the pixel to be corrected and its corresponding scale coefficient to complete the lens shading correction.
6. The lens shading correction method according to claim 5, wherein in the step S5, if the pixel point to be corrected is located between the grids, the corresponding scaling factor is obtained by performing bilinear interpolation on the scaling factors corresponding to the four nearest grids.
7. A lens shading correction system is characterized by comprising a color component map acquisition module (1), a gray reference value acquisition module (2), a first gray value acquisition module (3), a scale coefficient table acquisition module (4) and a lens shading correction module (5);
the color component map acquisition module (1) is used for acquiring an image and splitting the image into four color component maps;
the gray reference value acquisition module (2) is configured to acquire a reference matrix of each color component map, and acquire a gray reference value according to the reference matrix, specifically:
obtaining a first gray median of the reference matrix;
performing gray correction on the reference matrix according to the absolute value of the difference value between the first gray median and the gray value of each point in the reference matrix; wherein the steps are as follows: obtaining an absolute value of a difference value between the first gray median and the gray value of each point in the reference matrix, judging whether the absolute value is greater than a preset threshold value, and if so, assigning the gray value of the point as the first gray median;
obtaining a second gray average value according to the reference matrix after gray correction, and taking the second gray average value as the gray reference value;
the first gray value acquisition module (3) is used for dividing each component image into grids with fixed sizes and acquiring a first gray average value of each grid;
the scale coefficient table acquisition module (4) is used for obtaining a lens shading correction scale coefficient table according to the gray reference value and the first gray mean value, specifically: normalizing the quotient of the gray reference value and the first gray mean value to obtain the lens shading correction scale coefficient table;
the lens shading correction module (5) is used for performing lens shading correction according to the scale coefficient table.
8. An electronic device comprising a memory and a processor including the lens shading correction module of claim 7.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201810982999.2A (CN109068025B) | 2018-08-27 | 2018-08-27 | Lens shadow correction method and system and electronic equipment
Publications (2)

Publication Number | Publication Date
---|---
CN109068025A | 2018-12-21
CN109068025B | 2021-02-05
Family ID: 64757276

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201810982999.2A (CN109068025B, Active) | Lens shadow correction method and system and electronic equipment | 2018-08-27 | 2018-08-27

Country Status (1)

Country | Link
---|---
CN (1) | CN109068025B
Legal Events

Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | GR01 | Patent grant |
 | TR01 | Transfer of patent right | Effective date of registration: 2022-05-06. Patentee after: BUILDWIN INTERNATIONAL (ZHUHAI) LTD. (Rooms 1306-1309, 13/F, 19 Science Avenue West, Hong Kong Science Park, Shatin, New Territories). Patentee before: Jianrong semiconductor (Shenzhen) Co.,Ltd. (1302, yuemeite building, 1 Gaoxin South 7th Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000); BUILDWIN INTERNATIONAL (ZHUHAI) Ltd.