CN113902644A - Image processing method, device, equipment and storage medium

Image processing method, device, equipment and storage medium

Info

Publication number
CN113902644A
Authority
CN
China
Prior art keywords
image
processed
channel
brightness
area
Prior art date
Legal status
Pending
Application number
CN202111245876.9A
Other languages
Chinese (zh)
Inventor
郝源
张兴
闫勇
康宇翔
姚文涛
张峰
Current Assignee
Beijing Aixin Technology Co ltd
Original Assignee
Beijing Aixin Technology Co ltd
Priority date
Application filed by Beijing Aixin Technology Co ltd filed Critical Beijing Aixin Technology Co ltd
Priority to CN202111245876.9A
Publication of CN113902644A
Legal status: Pending

Classifications

    • G06T5/77
    • G06T7/13 Edge detection (G06T7/00 Image analysis; G06T7/10 Segmentation; Edge detection)
    • G06T7/60 Analysis of geometric attributes (G06T7/00 Image analysis)
    • G06T2207/10004 Still image; Photographic image (G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/10 Image acquisition modality)

Abstract

The application provides an image processing method, an apparatus, a device and a storage medium. The image processing method comprises: acquiring an image to be processed; identifying an imaging area in the image to be processed to obtain the boundary of the imaging area; and correcting the pixel values of the pixel points located in a dark area of the image to be processed by using the pixel values of the pixel points on the boundary, to obtain a processed image of the image to be processed. This reduces, to a certain extent, the possibility of abnormal appearance after image correction.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a storage medium.
Background
Recently, ultra-wide-angle cameras have become popular in fields such as mobile phones, security, automotive imaging, and action cameras. An ultra-wide-angle camera has a large field of view and can record as much of the scene in front of the lens as possible in a single picture. In actual imaging, however, an image captured by an ultra-wide-angle camera often contains a black area, such as the dark area in fig. 1, which therefore needs to be corrected.
The traditional correction method calculates the brightness (or color) of the center of the image and of the image edge, and then derives a correction coefficient for each pixel point from the difference between the two, thereby correcting the black area.
However, the brightness (or color) of the black area differs greatly from that of the central area of the image. If the correction coefficient of each pixel point is derived directly from this difference, the resulting coefficient may be large, and correcting with a large coefficient is likely to leave the corrected image area abnormal.
Disclosure of Invention
Based on the above, an image processing method, an image processing apparatus, an image processing device and a storage medium are provided, to solve the technical problem in the prior art that a large correction coefficient causes abnormality after correction.
In a first aspect, an image processing method is provided, including:
acquiring an image to be processed;
identifying an imaging area in the image to be processed to obtain the boundary of the imaging area;
and correcting the pixel values of the pixel points in the dark area in the image to be processed by using the pixel values of the pixel points on the boundary to obtain a processed image of the image to be processed.
Firstly, an image to be processed is acquired, comprising an imaging area and a dark area; then the imaging area in the image to be processed is identified to obtain the boundary of the imaging area; finally, the pixel values of the pixel points located in the dark area are corrected using the pixel values of the pixel points located on the boundary, to obtain a processed image of the image to be processed. Because this correction brings the pixel values of the dark-area pixel points close to those of the imaging area, a correction coefficient subsequently calculated from the processed image is smaller than one calculated directly from the image to be processed, which reduces, to a certain extent, the possibility of abnormality in the corrected dark area.
In one embodiment, the imaging region is a circular region; the identifying the imaging area in the image to be processed to obtain the boundary of the imaging area comprises: carrying out binarization processing on the image to be processed to obtain a binarized image; and carrying out circle detection on the binary image to obtain the boundary of the imaging area.
The foregoing embodiments provide a detection method, that is, when an imaging region is a circular region, first perform binarization on an image to be processed to obtain a binarized image, and then use a circle detection method to detect a boundary of the imaging region.
In an embodiment, the correcting, by using the pixel values of the pixel points located on the boundary, the pixel values of the pixel points located in a dark region of the image to be processed includes: calculating the distance between a target pixel point in the image to be processed and the center of the imaging area to obtain a connection distance; if the connection distance is smaller than or equal to the radius of the boundary of the imaging area, keeping the pixel value of the target pixel point unchanged; if the connection distance is larger than the radius of the boundary of the imaging area, determining the intersection point between the boundary of the imaging area and the line connecting the target pixel point with the center of the imaging area; and, in the image to be processed, correcting the pixel value of the target pixel point to the pixel value of the intersection point, to obtain a processed image of the image to be processed.
The above embodiment provides a method for correcting the pixel values of dark-region pixel points from the pixel points of the imaging region: since the intersection point is close to the dark region, correcting the pixel values of the dark-region pixel points on the connection line with the pixel value of the intersection point makes the correction effect more realistic to a certain extent.
In one embodiment, the image processing method further includes: dividing the processed image into a first number of image regions; determining end points of the first number of image regions; acquiring the maximum brightness of the processed image in a candidate channel; calculating the brightness mean value of each image area in the candidate channel;
obtaining a brightness ratio matrix corresponding to the candidate channel according to the maximum brightness of the processed image in the candidate channel and the brightness mean value of each image area in the candidate channel, wherein the brightness ratio matrix records the brightness ratio of each endpoint in the candidate channel, and the brightness ratio reflects the brightness condition of the endpoint; and obtaining a brightness correction coefficient matrix of the image to be processed in the candidate channel according to the brightness ratio matrix corresponding to the candidate channel, wherein the brightness correction coefficient matrix records the brightness correction coefficient of each endpoint in the candidate channel.
In the above embodiment, the luminance correction coefficient matrix is obtained by means of block calculation, so that the luminance of the image to be processed is corrected by the luminance correction coefficient matrix.
In one embodiment, the end points include an outer end point located at an outer boundary of the processed image and an inner end point not located at the outer boundary; the obtaining a luminance ratio matrix corresponding to the candidate channel according to the maximum luminance of the processed image in the candidate channel and the luminance mean value of each image area in the candidate channel includes: obtaining the brightness ratio of each inner endpoint in the candidate channel according to the maximum brightness of the processed image in the candidate channel and the brightness mean value of each image area in the candidate channel; obtaining an endpoint distance matrix according to the coordinate position of the pixel point corresponding to the maximum brightness of the candidate channel in the image to be processed and the coordinate position of each endpoint in the image to be processed; and obtaining the brightness ratio of each outer endpoint in the candidate channel according to the endpoint distance matrix and the brightness ratio of each inner endpoint in the candidate channel.
In the above embodiment, the brightness ratio of an inner endpoint in the candidate channel is obtained from the brightness ratios of the four image regions around it, and the brightness ratio of an outer endpoint is then calculated from the brightness ratios of the inner endpoints, thereby obtaining the brightness ratios of all endpoints in the candidate channel.
In an embodiment, the obtaining a luminance correction coefficient matrix of the to-be-processed image in the candidate channel according to the luminance ratio matrix corresponding to the candidate channel includes: constructing a mask image corresponding to the image to be processed according to the boundary of the imaging area; dividing the mask image into the first number of mask regions; calculating the average pixel value of each mask area, and obtaining a second number of mask areas and the area positions of the target mask areas with the average pixel values smaller than the preset pixel values according to the average pixel value of each mask area; constructing area masks according to the second number of mask areas, wherein pixel values of all area positions in the area masks are first pixel values; setting the pixel value of the area position of the target mask area as a second pixel value in the area mask to obtain a target area mask; and obtaining a brightness correction coefficient matrix of the image to be processed in the candidate channel according to the brightness ratio matrix corresponding to the candidate channel and the target area mask.
The above embodiment illustrates how to obtain the luminance correction coefficient matrix.
In one embodiment, the candidate channels include: r channel, GR channel, GB channel and B channel; the image processing method further comprises the following steps: calculating the mean value of the brightness correction coefficient matrix of the image to be processed in the GR channel and the brightness correction coefficient matrix of the GB channel to obtain a corrected mean value matrix; dividing the brightness correction coefficient matrix of the image to be processed in a target channel by the correction mean value matrix to obtain a color correction coefficient matrix of the image to be processed in the target channel, wherein the target channel is one of the candidate channels; calculating the mean value of the brightness correction coefficient matrix of the GR channel and the brightness correction coefficient matrix of the GB channel of the image to be processed under the standard color temperature to obtain a color temperature correction mean value matrix; and obtaining a color brightness correction coefficient matrix of the image to be processed in the target channel according to the color correction coefficient matrix of the image to be processed in the target channel and the color temperature correction mean value matrix.
The above embodiment describes how to obtain the color brightness correction coefficient matrix of the target channel, which is then used to correct both the color and the brightness of the image. Compared with calculating a brightness correction coefficient and a color correction coefficient separately and then applying each in turn, this approach corrects color and brightness simultaneously in a single correction, improving correction efficiency to a certain extent.
In a second aspect, there is provided an image processing apparatus comprising:
the acquisition module is used for acquiring an image to be processed;
the boundary module is used for identifying an imaging area in the image to be processed to obtain the boundary of the imaging area;
and the correction module is used for correcting the pixel values of the pixel points in the dark area in the image to be processed by using the pixel values of the pixel points on the boundary to obtain a processed image of the image to be processed.
In one embodiment, the imaging region is a circular region; the boundary module is specifically configured to: carrying out binarization processing on the image to be processed to obtain a binarized image; and carrying out circle detection on the binary image to obtain the boundary of the imaging area.
In one embodiment, the correction module is specifically configured to: calculate the distance between a target pixel point in the image to be processed and the center of the imaging area to obtain a connection distance; if the connection distance is smaller than or equal to the radius of the boundary of the imaging area, keep the pixel value of the target pixel point unchanged; if the connection distance is larger than the radius of the boundary of the imaging area, determine the intersection point between the boundary of the imaging area and the line connecting the target pixel point with the center of the imaging area; and, in the image to be processed, correct the pixel value of the target pixel point to the pixel value of the intersection point, to obtain a processed image of the image to be processed.
In one embodiment, the image processing apparatus further includes: a correction coefficient module for dividing the processed image into a first number of image regions; determining end points of the first number of image regions; acquiring the maximum brightness of the processed image in a candidate channel; calculating the brightness mean value of each image area in the candidate channel; obtaining a brightness ratio matrix corresponding to the candidate channel according to the maximum brightness of the processed image in the candidate channel and the brightness mean value of each image area in the candidate channel, wherein the brightness ratio matrix records the brightness ratio of each endpoint in the candidate channel, and the brightness ratio reflects the brightness condition of the endpoint; and obtaining a brightness correction coefficient matrix of the image to be processed in the candidate channel according to the brightness ratio matrix corresponding to the candidate channel, wherein the brightness correction coefficient matrix records the brightness correction coefficient of each endpoint in the candidate channel.
In one embodiment, the end points include an outer end point located at an outer boundary of the processed image and an inner end point not located at the outer boundary; the correction coefficient module is specifically configured to: obtaining the brightness ratio of each inner endpoint in the candidate channel according to the maximum brightness of the processed image in the candidate channel and the brightness mean value of each image area in the candidate channel; obtaining an endpoint distance matrix according to the coordinate position of the pixel point corresponding to the maximum brightness of the candidate channel in the image to be processed and the coordinate position of each endpoint in the image to be processed; and obtaining the brightness ratio of each outer endpoint in the candidate channel according to the endpoint distance matrix and the brightness ratio of each inner endpoint in the candidate channel.
In one embodiment, the correction coefficient module is specifically configured to: constructing a mask image corresponding to the image to be processed according to the boundary of the imaging area; dividing the mask image into the first number of mask regions; calculating the average pixel value of each mask area, and obtaining a second number of mask areas and the area positions of the target mask areas with the average pixel values smaller than the preset pixel values according to the average pixel value of each mask area; constructing area masks according to the second number of mask areas, wherein pixel values of all area positions in the area masks are first pixel values; setting the pixel value of the area position of the target mask area as a second pixel value in the area mask to obtain a target area mask; and obtaining a brightness correction coefficient matrix of the image to be processed in the candidate channel according to the brightness ratio matrix corresponding to the candidate channel and the target area mask.
In one embodiment, the candidate channels include: r channel, GR channel, GB channel and B channel; the image processing apparatus further includes: a corrective module for: calculating the mean value of the brightness correction coefficient matrix of the image to be processed in the GR channel and the brightness correction coefficient matrix of the GB channel to obtain a corrected mean value matrix; dividing the brightness correction coefficient matrix of the image to be processed in a target channel by the correction mean value matrix to obtain a color correction coefficient matrix of the image to be processed in the target channel, wherein the target channel is one of the candidate channels; calculating the mean value of the brightness correction coefficient matrix of the GR channel and the brightness correction coefficient matrix of the GB channel of the image to be processed under the standard color temperature to obtain a color temperature correction mean value matrix; and obtaining a color brightness correction coefficient matrix of the image to be processed in the target channel according to the color correction coefficient matrix of the image to be processed in the target channel and the color temperature correction mean value matrix.
In a third aspect, a computer device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the image processing method as described above when executing the computer program.
In a fourth aspect, a computer-readable storage medium is provided, in which computer program instructions are stored, which, when read and executed by a processor, perform the steps of the image processing method as described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
FIG. 1 is a schematic view of an imaging region and a dark region in an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating an implementation of an image processing method in an embodiment of the present application;
FIG. 3 is a schematic diagram of a left image region, a left boundary, a right image region, and a right boundary in an embodiment of the present application;
FIG. 4 is a schematic diagram of r_detect in an embodiment of the present application;
FIG. 5 is a schematic diagram of a connection line t in the embodiment of the present application;
FIG. 6 is a schematic view of an embodiment of the present application after modification;
FIG. 7 is a schematic diagram of an endpoint of an embodiment of the present application;
FIG. 8 is a diagram of an image region, an inner endpoint and an outer endpoint in an embodiment of the present application;
FIG. 9 is a schematic view of a mask region in an embodiment of the present application;
FIG. 10 is an exemplary presentation of results in an embodiment of the present application;
FIG. 11 is a schematic diagram illustrating a structure of an image processing apparatus according to an embodiment of the present disclosure;
fig. 12 is a block diagram of an internal structure of a computer device in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In one embodiment, an image processing method is provided. The execution subject of the image processing method according to the embodiment of the present invention is a computer device capable of implementing the image processing method according to the embodiment of the present invention, and the computer device may include, but is not limited to, a terminal and a server. The terminal comprises a desktop terminal and a mobile terminal, wherein the desktop terminal comprises but is not limited to a desktop computer and a vehicle-mounted computer; mobile terminals include, but are not limited to, cell phones, tablets, laptops, and smartwatches. The server includes a high performance computer and a cluster of high performance computers.
In one embodiment, as shown in fig. 2, there is provided an image processing method including:
step 100, acquiring an image to be processed.
The image to be processed may be an image captured by an ultra-wide-angle camera, including a fisheye camera. It may be the original image produced by the image sensor of the camera, in RAW format; a RAW-format original image has 4 channels in total: an R channel, a GR channel, a GB channel and a B channel.
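For illustration, the sketch below (not part of the patent) shows one way such a RAW mosaic could be split into its four planes, assuming an RGGB Bayer layout; the layout and the function name are assumptions.

```python
import numpy as np

def split_bayer_rggb(raw: np.ndarray):
    """Split an H x W RAW mosaic into quarter-resolution R, GR, GB, B planes.
    Assumes an RGGB layout; other CFA orders need different offsets."""
    r  = raw[0::2, 0::2]   # red sites
    gr = raw[0::2, 1::2]   # green sites on red rows
    gb = raw[1::2, 0::2]   # green sites on blue rows
    b  = raw[1::2, 1::2]   # blue sites
    return r, gr, gb, b
```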
Step 200, identifying an imaging area in the image to be processed to obtain the boundary of the imaging area.
The image to be processed comprises an imaging area and a dark area. An imaging area, which is an area in the image to be processed in which a normal image is displayed, and the shape of the imaging area includes, but is not limited to, a circle, for example, the shape of the imaging area may also be an ellipse; the dark area is a dark area in the image to be processed, in which the image cannot be normally displayed, for example, the dark area is a black area. The imaged regions and dark regions may be as shown in fig. 1. In the image to be processed, the brightness difference exists between the imaging area and the dark area, and the brightness of the imaging area is higher than that of the dark area.
Because the imaging area and the dark area have a brightness difference, the imaging area in the image to be processed can be identified according to the brightness difference between the imaging area and the dark area, so as to obtain the boundary of the imaging area, for example, the imaging area in the image to be processed is identified by setting a brightness threshold value according to the brightness difference between the imaging area and the dark area.
And 300, correcting the pixel values of the pixel points in the dark area in the image to be processed by using the pixel values of the pixel points on the boundary to obtain a processed image of the image to be processed.
For example, as shown in fig. 3, the image to be processed is divided into two left and right image regions, for a channel i (the channel i may be an R channel, a GR channel, a GB channel, or a B channel), a mean value of pixel values of pixel points located on a left boundary (pixel value of the channel i) is calculated, and the mean value is used as pixel values of a part of pixel points in a dark region in the left image region in the channel i; similarly, the mean value of the pixel values of the pixel points located on the right boundary (the pixel value of the channel i) is calculated, and the mean value is used as the pixel value of the partial pixel point in the dark area of the right image area in the channel i.
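A minimal sketch of this left/right fill for a single channel i, assuming boolean masks marking the boundary pixel points and the dark-area pixel points are already available (the helper name and mask inputs are assumptions, not from the patent):

```python
import numpy as np

def fill_dark_with_boundary_mean(channel, boundary_mask, dark_mask):
    """For each image half, set this channel's dark-area pixels to the mean
    of the boundary pixels in that half."""
    out = channel.astype(np.float32).copy()
    w = channel.shape[1]
    for sl in (np.s_[:, : w // 2], np.s_[:, w // 2 :]):   # left, right half
        vals = channel[sl][boundary_mask[sl]]
        if vals.size:                                      # skip empty halves
            half = out[sl]                                 # view into out
            half[dark_mask[sl]] = vals.mean()
    return out
```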
Firstly, an image to be processed is acquired, comprising an imaging area and a dark area; then the imaging area in the image to be processed is identified to obtain the boundary of the imaging area; finally, the pixel values of the pixel points located in the dark area are corrected using the pixel values of the pixel points located on the boundary, to obtain a processed image of the image to be processed. Because this correction brings the pixel values of the dark-area pixel points close to those of the imaging area, a correction coefficient subsequently calculated from the processed image is smaller than one calculated directly from the image to be processed, which reduces, to a certain extent, the possibility of abnormality in the corrected dark area.
In one embodiment, the imaging region is a circular region; the step 200 of identifying the imaging area in the image to be processed to obtain the boundary of the imaging area includes:
and step 201, performing binarization processing on the image to be processed to obtain a binarized image.
For example, a preset threshold value is obtained, and binarization processing is performed on the image to be processed according to the preset threshold value to obtain a binarized image; and for another example, processing the image to be processed according to an adaptive threshold value binarization algorithm to obtain a binarization image.
And step 202, performing circle detection on the binary image to obtain the boundary of the imaging area.
Since the imaging region is a circular region, after the binarized image is obtained, the radius of the imaging region can be obtained by a circle detection method, for example a minimum enclosing circle detection method, thereby realizing detection of the imaging area; the boundary of the imaging area is then determined from its radius and center.
In one example, as shown in fig. 4, the radius found by circle detection is denoted r_detect. To guard against detection failures and boundary violations, an offset value r_offset and two thresholds r_threshold_low and r_threshold_high are set, and r_detect − r_offset is taken as the radius of the boundary of the imaging region, denoted r_boundary. The boundary radius is further required to satisfy r_threshold_low <= r_boundary <= r_threshold_high, thereby preventing border crossing. Since the boundary radius r_boundary is known, the boundary of the imaging area can be determined from r_boundary and the center of the circle.
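As a sketch of steps 201-202 together with the offset and clamping above, assuming OpenCV is used for thresholding and minimum enclosing circle detection; the threshold, offset and clamp values are illustrative, not taken from the patent:

```python
import cv2
import numpy as np

def detect_imaging_circle(gray8, thresh=16, r_offset=4.0,
                          r_threshold_low=100.0, r_threshold_high=2000.0):
    """Binarize an 8-bit single-channel image, fit the minimum enclosing
    circle, then shrink and clamp the radius as described in the text."""
    _, binary = cv2.threshold(gray8, thresh, 255, cv2.THRESH_BINARY)
    pts = cv2.findNonZero(binary)                  # bright (imaging) pixels
    (cx, cy), r_detect = cv2.minEnclosingCircle(pts)
    r_boundary = r_detect - r_offset               # stay inside the rim
    r_boundary = float(np.clip(r_boundary, r_threshold_low, r_threshold_high))
    return (cx, cy), r_boundary
```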
The foregoing embodiments provide a detection method, that is, when an imaging region is a circular region, first perform binarization on an image to be processed to obtain a binarized image, and then use a circle detection method to detect a boundary of the imaging region.
In an embodiment, the step 300 of correcting the pixel values of the pixel points located in the dark region in the image to be processed by using the pixel values of the pixel points located on the boundary to obtain the processed image of the image to be processed includes:
step 301, calculating a distance between a target pixel point in the image to be processed and a dot of the imaging area to obtain a connection distance.
And the target pixel point is a pixel point which does not determine the distance between the target pixel point and the dot in the image to be processed.
As shown in fig. 5, assuming that p1 is the target pixel point, the dot is O, the coordinates of p1 are (j, i), and the coordinates of the dot O are (b, a), the connection distance (dist) between p1 and the dot O is thenConnecting wire):distConnecting wire=((a-i)2+(b-j)2)0.5
Step 302, if the connection distance is smaller than or equal to the radius of the boundary of the imaging region, keeping the pixel value of the target pixel point unchanged.
If the connection distance dist is smaller than or equal to the radius of the boundary of the imaging area, the target pixel point is considered to lie inside the circular imaging area, so its pixel value is kept unchanged.
Step 303, if the connection distance is greater than the radius of the boundary of the imaging area, determining the intersection point between the boundary of the imaging area and the line connecting the target pixel point with the center of the imaging area; and, in the image to be processed, correcting the pixel value of the target pixel point to the pixel value of the intersection point, to obtain a processed image of the image to be processed.
If the connection distance dist is greater than the radius of the boundary of the imaging area, the target pixel point is considered to lie outside the circular imaging area, and its pixel value is then corrected to the pixel value of the intersection point. As shown in fig. 5, the line connecting p1 and the center O is line t, and the intersection of line t with the boundary of the imaging region is p2. From the coordinates of p1, the coordinates of O and the connection distance dist, the coordinates (n, m) of p2 can be found:
m = a − r_boundary × (a − i) / dist, n = b − r_boundary × (b − j) / dist.
After the coordinates of the intersection point p2 are obtained, the pixel values of the four channels of p2 are assigned to the target pixel point p1. The processed image is shown in fig. 6: many divergent lines appear in the dark area, and the pixel values of the pixel points on these lines have been corrected.
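A vectorized sketch of steps 301-303 under the coordinate convention above (center O at row a, column b); the function name is an assumption:

```python
import numpy as np

def project_dark_pixels(img, center, r_boundary):
    """For every pixel outside the circle, copy the value of the boundary
    point p2 on the line joining it to the center O."""
    h, w = img.shape[:2]
    a, b = center                                   # (row, col) of O
    i, j = np.mgrid[0:h, 0:w]                       # pixel rows and columns
    dist = np.sqrt((a - i) ** 2 + (b - j) ** 2)     # connection distances
    outside = dist > r_boundary
    safe = np.maximum(dist, 1e-6)                   # avoid divide-by-zero at O
    m = np.clip(np.rint(a - r_boundary * (a - i) / safe).astype(int), 0, h - 1)
    n = np.clip(np.rint(b - r_boundary * (b - j) / safe).astype(int), 0, w - 1)
    out = img.copy()
    out[outside] = img[m[outside], n[outside]]      # assign p2's value to p1
    return out
```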
The above embodiment provides a method for correcting a pixel value of a pixel point in a dark region according to a pixel point in an imaging region, and since an intersection point and the dark region are relatively close in position, the pixel value of the pixel point in the dark region on a connection line is corrected by using the pixel value of the intersection point, so that the correction effect can be more real to a certain extent.
In one embodiment, the image processing method further includes: step 400 to step 900.
Step 400, the processed image is divided into a first number of image regions.
Where the first number is a preset number, for example, the first number is m (rows) × n (columns), i.e., the processed image is divided into m × n image areas, as shown in fig. 7.
Step 500, determining the end points of the first number of image regions.
As shown in fig. 7, since there are m × n image regions, the endpoints total (m + 1) × (n + 1); this count is referred to as the second number.
Step 600, obtaining the maximum brightness of the processed image in the candidate channel.
The candidate channel is one of an R channel, a GR channel, a GB channel and a B channel.
The maximum brightness is the maximum pixel value of the processed image in the candidate channel. For example, if the candidate channel is the R channel, the R-channel pixel value of each pixel point in the processed image can be traversed; if pixel point y has the largest R-channel pixel value, that value is taken as the maximum brightness of the processed image in the candidate channel. Since an image captured by an image sensor is brighter closer to its central region and darker closer to its edge region, the maximum brightness of the processed image in each channel is generally located in the central region of the processed image.
Step 700, calculating the brightness mean value of each image area in the candidate channel.
As shown in fig. 7, for the image area h, the pixel values of the pixel points in the image area h in the candidate channel are added to obtain the sum of the pixel values, and then the sum of the pixel values is divided by the total number of the pixel points in the image area h, so as to obtain the average brightness value of the image area h in the candidate channel. By such a method, the luminance mean value of each image region in the processed image in the candidate channel can be obtained.
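A minimal sketch of the block-mean computation of steps 400 and 700 for one channel, assuming the image dimensions divide evenly into the m × n grid:

```python
import numpy as np

def block_means(channel, m, n):
    """Mean brightness of each of the m x n image regions."""
    h, w = channel.shape
    assert h % m == 0 and w % n == 0, "sketch assumes evenly divisible sizes"
    blocks = channel.reshape(m, h // m, n, w // n)
    return blocks.mean(axis=(1, 3))                 # shape (m, n)
```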
Step 800, obtaining a brightness ratio matrix corresponding to the candidate channel according to the maximum brightness of the processed image in the candidate channel and the brightness mean value of each image area in the candidate channel, wherein the brightness ratio of each endpoint in the candidate channel is recorded in the brightness ratio matrix, and the brightness ratio reflects the brightness condition of the endpoint.
Since the maximum brightness of the candidate channel is known, that is, the brightness condition of the central region of the processed image is known, and the brightness mean value of each image region in the candidate channel is known, that is, the brightness condition of each image region is known, the brightness ratio of each endpoint in the candidate channel can be calculated according to the maximum brightness of the candidate channel and the brightness mean value of each image region in the candidate channel, so as to obtain the brightness condition of each endpoint. Since the luminance ratio matrix records the luminance ratio of the endpoint in the candidate channel, the size of the luminance ratio matrix is also: (m +1) × (n + 1).
And 900, obtaining a brightness correction coefficient matrix of the image to be processed in the candidate channel according to the brightness ratio matrix corresponding to the candidate channel, wherein the brightness correction coefficient matrix records the brightness correction coefficient of each endpoint in the candidate channel.
Because the brightness condition of each end point is obtained, the brightness correction coefficient matrix of the image to be processed in the candidate channel can be obtained according to the brightness ratio matrix corresponding to the candidate channel, namely according to the brightness ratio of each end point in the candidate channel. For example, the larger the luminance ratio of the endpoint in the candidate channel is, the larger the luminance correction coefficient of the endpoint in the candidate channel is; the smaller the brightness ratio of the endpoint in the candidate channel is, the smaller the brightness correction coefficient of the endpoint in the candidate channel is, so as to obtain a brightness correction coefficient matrix. After the brightness correction coefficient matrix is obtained, the brightness of the image to be processed can be corrected through the brightness correction coefficient matrix.
In the above embodiment, the luminance correction coefficient matrix is obtained by means of block calculation, so that the luminance of the image to be processed is corrected by the luminance correction coefficient matrix.
In one embodiment, the end points include an outer end point located at the outer boundary of the processed image and an inner end point not located at the outer boundary (the inner end point is located inside the image, such as a white dot in fig. 8, and the outer end point is located at the outer boundary of the image, such as a black dot in fig. 8); correspondingly, the step 800 of obtaining the luminance ratio matrix corresponding to the candidate channel according to the maximum luminance of the processed image in the candidate channel and the luminance average of each image region in the candidate channel includes:
step 801, obtaining the brightness ratio of each inner endpoint in the candidate channel according to the maximum brightness of the processed image in the candidate channel and the brightness average value of each image area in the candidate channel.
First, the maximum brightness of the processed image in the candidate channel is divided by the brightness mean of an image region in that channel to obtain the brightness ratio of that image region. For example, for image region t(m,n), let y_t(m,n) denote its brightness mean in the candidate channel and luma_max denote the maximum brightness of the processed image in that channel; the brightness ratio of the region is then gain_y_t(m,n) = luma_max / y_t(m,n). The brightness ratio of an inner endpoint in the candidate channel is then derived from the brightness ratios of the four image regions around it. For example, in fig. 8, for inner endpoint f(i,j), let gain_inner_f(i,j) denote its brightness ratio in the candidate channel; then gain_inner_f(i,j) = gain_y_t(i−1,j−1) + gain_y_t(i−1,j) + gain_y_t(i,j−1) + gain_y_t(i,j).
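A sketch of step 801 in the notation above; the input is the (m, n) matrix of region brightness means, and the output covers the inner endpoints only:

```python
import numpy as np

def inner_endpoint_gains(means, luma_max):
    """gain_inner_f(i,j) = sum of gain_y_t over the four regions that touch
    inner endpoint (i, j)."""
    gain_y = luma_max / means                       # gain_y_t, shape (m, n)
    return (gain_y[:-1, :-1] + gain_y[:-1, 1:] +
            gain_y[1:, :-1] + gain_y[1:, 1:])       # shape (m-1, n-1)
```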
Step 802, obtaining an endpoint distance matrix according to the coordinate position of the pixel point corresponding to the maximum brightness of the candidate channel in the image to be processed and the coordinate position of each endpoint in the image to be processed.
For example, suppose the coordinate position of the pixel point corresponding to the maximum brightness of the candidate channel in the image to be processed is (x1, y1), and the coordinate position of a certain endpoint in the image to be processed is (x2, y2). The distance between the endpoint and that pixel point is then: (x1 − x2)² + (y1 − y2)². Calculating this for every endpoint yields the endpoint distance matrix.
It should be noted that, if only the coordinate positions of the pixel point corresponding to the maximum brightness of the candidate channel and the endpoint in the image region are known, then, when the distance between the endpoint and the pixel point corresponding to the maximum brightness of the candidate channel is calculated, the coordinate positions of the pixel point corresponding to the maximum brightness of the candidate channel and the endpoint in the image region also need to be converted.
For example, suppose the coordinate position of the pixel point corresponding to the maximum brightness of the candidate channel within its image region is (center_row, center_col), the image width of the image to be processed is image_width, its height is image_height, and the image is divided into m (rows) × n (columns) regions. Each image region then spans H_blocks = image_width / n pixels horizontally and V_blocks = image_height / m pixels vertically. Let center_point_x and center_point_y denote the horizontal and vertical coordinates of that pixel point in the image to be processed; then:
center_point_x = center_col × H_blocks + H_blocks / 2
center_point_y = center_row × V_blocks + V_blocks / 2.
In this way, the coordinate position of an endpoint in an image region can likewise be converted into its coordinate position in the image to be processed.
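A sketch of step 802 combined with the coordinate conversion above, keeping the text's squared-distance convention; endpoints are assumed to sit exactly on the block grid:

```python
import numpy as np

def endpoint_distance_matrix(center_row, center_col, m, n,
                             image_height, image_width):
    """Squared distance from every grid endpoint to the brightest pixel,
    whose position is given in block coordinates (center_row, center_col)."""
    H_blocks = image_width / n
    V_blocks = image_height / m
    center_point_x = center_col * H_blocks + H_blocks / 2
    center_point_y = center_row * V_blocks + V_blocks / 2
    ys, xs = np.mgrid[0 : m + 1, 0 : n + 1]         # endpoint grid indices
    ex, ey = xs * H_blocks, ys * V_blocks           # endpoint image coordinates
    return (center_point_x - ex) ** 2 + (center_point_y - ey) ** 2
```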
Step 803, obtaining the brightness ratio of each outer endpoint in the candidate channel according to the endpoint distance matrix and the brightness ratio of each inner endpoint in the candidate channel.
First, k first inner endpoints are selected from the inner endpoints and their brightness ratios in the candidate channel are obtained; k − 1 second inner endpoints are then selected from the remaining inner endpoints. From the endpoint distance matrix, the distances between these endpoints and the pixel point corresponding to the maximum brightness of the candidate channel are obtained. Let gain_neiJD_i denote the brightness ratio of the i-th first inner endpoint in the candidate channel, dist_i the distance between the i-th first inner endpoint and the pixel point corresponding to the maximum brightness, and dist_j the distance between the j-th second inner endpoint and that pixel point. Then:
gain_neiJD_i = a1 × dist_1 + a2 × dist_2 + … + a_{k−1} × dist_{k−1} + b × dist_i
Since there are k first inner endpoints, k such equations can be constructed and solved for a1, a2, …, a_{k−1} and b. Once these are solved, the brightness ratio of an outer endpoint in the candidate channel is obtained by taking the distance between that outer endpoint and the pixel point corresponding to the maximum brightness from the endpoint distance matrix, substituting it as dist_i, and evaluating the equation.
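A sketch of the solve in step 803. Read literally, the k − 1 reference distances dist_1 … dist_{k−1} are the same in every equation, which makes the system rank-deficient, so a least-squares solve is used here to obtain a minimum-norm solution; all names are illustrative:

```python
import numpy as np

def outer_endpoint_gain(gains_first, dists_first, dists_second, dist_outer):
    """Fit gain_neiJD_i = a1*dist_1 + ... + a_{k-1}*dist_{k-1} + b*dist_i over
    the k first inner endpoints, then evaluate at an outer endpoint."""
    gains_first = np.asarray(gains_first, dtype=float)   # length k
    k = gains_first.size
    A = np.empty((k, k))
    A[:, : k - 1] = dists_second        # dist_1..dist_{k-1}, same in every row
    A[:, k - 1] = dists_first           # each endpoint's own distance (for b)
    coeffs, *_ = np.linalg.lstsq(A, gains_first, rcond=None)
    return float(np.dot(dists_second, coeffs[: k - 1])
                 + coeffs[k - 1] * dist_outer)
```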
In the above embodiment, the brightness ratio of an inner endpoint in the candidate channel is obtained from the brightness ratios of the four image regions around it, and the brightness ratio of an outer endpoint is then calculated from the brightness ratios of the inner endpoints, thereby obtaining the brightness ratios of all endpoints in the candidate channel.
In an embodiment, the obtaining, according to the luminance ratio matrix corresponding to the candidate channel in step 900, a luminance correction coefficient matrix of the to-be-processed image in the candidate channel includes:
and step 901, constructing a mask image corresponding to the image to be processed according to the boundary of the imaging area.
According to the boundary of the imaging region, the value of each pixel point inside the boundary is set to A (for example, A = 1) and the value of each pixel point outside the boundary is set to B (for example, B = 0), so that the mask image is obtained. In a specific example, when the imaging region is a circular region, pixel points inside the boundary are set to 1 and pixel points outside it are set to 0 according to the radius and center of the imaging region, giving the mask image corresponding to the image to be processed, as shown in fig. 9; that is, in the mask image, the pixel value of every pixel point is 0 or 1.
Step 902, divide the mask image into the first number of mask regions.
The mask image is divided into mask regions of m (rows) × n (columns).
Step 903, calculating an average pixel value of each mask area, and obtaining a second number of mask areas and an area position of a target mask area with an average pixel value smaller than a preset pixel value according to the average pixel value of each mask area.
For mask area d, the pixel values of all the pixel points in the area are added to obtain a sum, and the sum is divided by the total number of pixel points in the area, giving the average pixel value of mask area d. In this way, the average pixel value of every mask region in the mask image can be obtained. Since the number of mask regions is m × n, to match the number of endpoints the boundary is copied to obtain a second number, (m + 1) × (n + 1), of mask regions; copying the boundary essentially means copying the average pixel values along the boundary, yielding (m + 1) × (n + 1) average pixel values.
The preset pixel value is an average pixel value set in advance. Since the average pixel values of (m +1) × (n +1) mask regions have already been obtained, the target mask region having an average pixel value smaller than the preset pixel value and the region position of the target mask region can be obtained by comparing the average pixel values of (m +1) × (n +1) mask regions with the preset pixel values, respectively.
And 904, constructing a region mask according to the second number of mask regions, wherein the pixel value of each region position in the region mask is the first pixel value.
The second number is (m +1) × (n +1), and thus, the size of the region mask is (m +1) × (n +1), and in the region mask, the pixel value of each region position (each region position corresponds to one pixel point) is the first pixel value, for example, the first pixel value is 1.
Step 905, in the area mask, setting the pixel value of the area position of the target mask area as a second pixel value to obtain a target area mask.
For example, the second pixel value is 0.
Step 906, obtaining a brightness correction coefficient matrix of the image to be processed in the candidate channel according to the brightness ratio matrix corresponding to the candidate channel and the target area mask.
The luminance ratio matrix corresponding to the candidate channel is multiplied element-wise by the target area mask. For example, if the luminance ratio matrix corresponding to the candidate channel is gain_kernel_all and the target area mask is N, the luminance correction coefficient matrix of the image to be processed in the candidate channel is gain_lut = gain_kernel_all × N, where × multiplies the values at the same position (for example, position a) and places the result at the corresponding position in gain_lut.
In one example, to adjust the correction strength, a scaling factor ratio may further be set, and gain_lut is updated by it, giving the updated luminance correction coefficient matrix: gain_lut_final = (1 − gain_lut) × ratio + 1.
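A sketch tying steps 901-906 and the strength update together for one channel; preset_pixel_value and ratio are illustrative values, boundary copying is done here by edge-replicating the last row and column (one possible reading), and the final line follows the patent's update formula as written:

```python
import numpy as np

def luminance_correction_matrix(gain_kernel_all, mask_image, m, n,
                                preset_pixel_value=0.5, ratio=1.0):
    """Build the target area mask from the circular mask image, zero out dark
    blocks in the luminance ratio matrix, and apply the strength update."""
    h, w = mask_image.shape                             # assumes h % m == w % n == 0
    avg = mask_image.reshape(m, h // m, n, w // n).mean(axis=(1, 3))
    avg = np.pad(avg, ((0, 1), (0, 1)), mode="edge")    # copy boundary -> (m+1, n+1)
    area_mask = np.ones_like(avg)                       # first pixel value: 1
    area_mask[avg < preset_pixel_value] = 0.0           # second pixel value: 0
    gain_lut = gain_kernel_all * area_mask              # element-wise product
    return (1.0 - gain_lut) * ratio + 1.0               # gain_lut_final
```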
It should be noted that, since there are 4 channels, which are an R channel, a GR channel, a GB channel, and a B channel, respectively, the updated luminance correction coefficient matrices of the 4 channels can be represented as: r _ gain _ lut, GR _ gain _ lut, GB _ gain _ lut, and B _ gain _ lut.
The above embodiment illustrates how to obtain the luminance correction coefficient matrix.
In one embodiment, the candidate channels include: r channel, GR channel, GB channel and B channel; the image processing method further comprises the following steps:
step 1000, calculating the average value of the brightness correction coefficient matrix of the image to be processed in the GR channel and the brightness correction coefficient matrix of the GB channel to obtain a corrected average value matrix.
For example, assume the luminance correction coefficient matrix of the image to be processed in the GR channel is GR_gain_lut and in the GB channel is GB_gain_lut; the corrective mean matrix H is then: H = (GR_gain_lut + GB_gain_lut) / 2.
And 1001, dividing the brightness correction coefficient matrix of the image to be processed in a target channel by the correction mean value matrix to obtain a color correction coefficient matrix of the image to be processed in the target channel, wherein the target channel is one of the candidate channels.
For example, the target channel is an R channel, and the luminance correction coefficient matrix of the image to be processed in the R channel is: r _ gain _ lut, so that the color correction coefficient matrix of the image to be processed in the R channel is R _ gain _ lut/H; for another example, the target channel is a GR channel, and the brightness correction coefficient matrix of the to-be-processed image in the GR channel is: GR _ gain _ lut, so that the color correction coefficient matrix of the image to be processed on the GR channel is GR _ gain _ lut/H; for another example, the target channel is a GB channel, and a luminance correction coefficient matrix of the image to be processed in the GB channel is: GB _ gain _ lut, so that the color correction coefficient matrix of the image to be processed in the GB channel is GB _ gain _ lut/H; for another example, the target channel is a B channel, and the luminance correction coefficient matrix of the to-be-processed image in the B channel is: b _ gain _ lut, and the color correction coefficient matrix of the image to be processed in the B channel is B _ gain _ lut/H.
Step 1002, calculating an average value of a brightness correction coefficient matrix of a GR channel and a brightness correction coefficient matrix of a GB channel of the image to be processed under the standard color temperature to obtain a color temperature correction average value matrix.
Wherein, the standard color temperature is D50, and the brightness correction coefficient matrix of the GR channel of the image to be processed under D50 is assumed to be: d50_ GR _ gain _ lut, the matrix of the brightness correction coefficients of the GB channel of the image to be processed under D50 is as follows: d50_ GB _ gain _ lut, the color temperature correction mean matrix W is then: (D50_ GR _ gain _ lut + D50_ GB _ gain _ lut)/2. The calculation of the brightness correction coefficient matrix of the GR channel and the brightness correction coefficient matrix of the GB channel at the standard color temperature may refer to steps 901 to 906, and will not be described in detail here.
And 1003, obtaining a color brightness correction coefficient matrix of the image to be processed in the target channel according to the color correction coefficient matrix of the image to be processed in the target channel and the color temperature correction mean value matrix.
For example, if the target channel is an R channel, the color brightness correction coefficient matrix of the R channel is: r _ gain _ lut/H × W; if the target channel is a GR channel, the color brightness correction coefficient matrix of the GR channel is: GR _ gain _ lut/H W; if the target channel is a GB channel, the color-luminance correction coefficient matrix of the GB channel is: GB _ gain _ lut/H × W; if the target channel is B channel, the color brightness correction coefficient matrix of B channel is: b _ gain _ lut/H W. The size of the color-luminance correction coefficient matrix is (m +1) × (n + 1).
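A sketch of steps 1000-1003, assuming the four luminance correction matrices and the two D50 reference matrices have already been computed:

```python
import numpy as np

def color_luma_matrices(r_lut, gr_lut, gb_lut, b_lut,
                        d50_gr_lut, d50_gb_lut):
    """Combine per-channel luminance LUTs with the D50 reference into one
    color + brightness correction matrix per channel."""
    H = (gr_lut + gb_lut) / 2                     # corrective mean matrix
    W = (d50_gr_lut + d50_gb_lut) / 2             # color temperature mean
    luts = {"R": r_lut, "GR": gr_lut, "GB": gb_lut, "B": b_lut}
    return {ch: lut / H * W for ch, lut in luts.items()}
```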
Having obtained the color brightness correction coefficient matrix, the image to be processed can be corrected with it; specifically, the pixel points of each image area in the image to be processed are corrected according to four values in the color brightness correction coefficient matrix. For example, the image to be processed has m × n image regions. When the pixel values of the pixel points in the first image region (e.g., fig. 3) are corrected, the four values at positions (1,1), (1,2), (2,1) and (2,2) in the color brightness correction coefficient matrix are used; when the pixel values of the pixel points in the second image area (e.g., fig. 4) are corrected, the four values at positions (1,2), (1,3), (2,2) and (2,3) are used; and so on, thereby correcting all the pixel points in the image to be processed.
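The patent does not spell out how the four corner values are combined inside a block; bilinear interpolation between the (m + 1) × (n + 1) endpoint coefficients is one natural reading, sketched below:

```python
import numpy as np

def apply_block_correction(channel, coeff, m, n):
    """Scale each pixel by a gain bilinearly interpolated from the four
    endpoint coefficients at the corners of its image region."""
    h, w = channel.shape
    gy = np.arange(h) * m / h                      # fractional grid rows
    gx = np.arange(w) * n / w                      # fractional grid columns
    y0 = np.minimum(gy.astype(int), m - 1)
    x0 = np.minimum(gx.astype(int), n - 1)
    fy = (gy - y0)[:, None]                        # in-block offsets
    fx = (gx - x0)[None, :]
    c00 = coeff[y0][:, x0]; c01 = coeff[y0][:, x0 + 1]
    c10 = coeff[y0 + 1][:, x0]; c11 = coeff[y0 + 1][:, x0 + 1]
    gain = (c00 * (1 - fy) * (1 - fx) + c01 * (1 - fy) * fx +
            c10 * fy * (1 - fx) + c11 * fy * fx)
    return channel * gain
```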
Fig. 10 shows the effect of correcting the image to be processed with the color brightness correction coefficient matrix (a horizontal row and a vertical column of pixel points of the corrected image are sampled for display); each dotted line indicates the correction effect of one channel.
The above embodiment describes how to obtain the color brightness correction coefficient matrix of the target channel, which is then used to correct both the color and the brightness of the image. Compared with calculating a brightness correction coefficient and a color correction coefficient separately and then applying each in turn, this approach corrects color and brightness simultaneously in a single correction, improving correction efficiency to a certain extent.
In one embodiment, as shown in fig. 11, there is provided an image processing apparatus 1100, including:
an acquisition module 1101, configured to acquire an image to be processed;
a boundary module 1102, configured to identify an imaging region in the image to be processed, so as to obtain a boundary of the imaging region;
a correction module 1103, configured to correct, by using the pixel values of the pixels located on the boundary, the pixel values of the pixels located in the dark region in the image to be processed, so as to obtain a processed image of the image to be processed.
In one embodiment, the imaging region is a circular region; the boundary module 1102 is specifically configured to: carry out binarization processing on the image to be processed to obtain a binarized image; and carry out circle detection on the binarized image to obtain the boundary of the imaging area.
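By way of illustration and not limitation, the boundary module might be sketched with OpenCV as follows. The embodiment specifies only binarization followed by circle detection; the Hough transform, the threshold value of 20 and the remaining parameters are assumptions of the sketch.

```python
import cv2
import numpy as np

def find_imaging_boundary(img_gray: np.ndarray):
    """Locate the circular imaging area in an 8-bit grayscale frame:
    threshold to a binarized image, then run Hough circle detection.
    Returns ((cx, cy), radius) or None if no circle is found."""
    _, binary = cv2.threshold(img_gray, 20, 255, cv2.THRESH_BINARY)
    circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=img_gray.shape[0],  # expect one circle
                               param1=100, param2=30,
                               minRadius=img_gray.shape[0] // 4,
                               maxRadius=0)
    if circles is None:
        return None
    cx, cy, r = circles[0, 0]   # strongest detected circle
    return (cx, cy), r
```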
In one embodiment, the correction module 1103 is specifically configured to: calculate the distance between a target pixel point in the image to be processed and the center of the imaging area to obtain a connection line distance; if the connection line distance is smaller than or equal to the radius of the boundary of the imaging area, keep the pixel value of the target pixel point unchanged; if the connection line distance is larger than the radius of the boundary of the imaging area, determine the intersection point of the boundary of the imaging area with the connection line constructed from the target pixel point to the center of the imaging area; and, in the image to be processed, correct the pixel value of the target pixel point to the pixel value of the intersection point to obtain a processed image of the image to be processed.
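By way of illustration and not limitation, a vectorized sketch of this dark-area correction is given below; the nearest-neighbor rounding of the intersection point to a pixel coordinate is an assumption where the embodiment leaves sampling unspecified.

```python
import numpy as np

def fill_dark_area(img: np.ndarray, center, radius: float) -> np.ndarray:
    """Replace each pixel outside the imaging circle with the boundary
    pixel lying on the connection line from the circle center through
    that pixel; pixels inside the circle are kept unchanged."""
    h, w = img.shape[:2]
    cx, cy = center
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    dist = np.hypot(xs - cx, ys - cy)          # connection line distance
    outside = dist > radius                    # dark-area pixels
    scale = np.where(outside, radius / np.maximum(dist, 1e-9), 1.0)
    # Intersection of the center-to-pixel line with the circle boundary,
    # rounded to the nearest pixel (an assumption of this sketch).
    bx = np.clip(np.rint(cx + (xs - cx) * scale), 0, w - 1).astype(int)
    by = np.clip(np.rint(cy + (ys - cy) * scale), 0, h - 1).astype(int)
    out = img.copy()
    out[outside] = img[by[outside], bx[outside]]
    return out
```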
In one embodiment, the image processing apparatus 1100 further includes: a correction coefficient module for dividing the processed image into a first number of image regions; determining endpoints of the first number of image regions; acquiring the maximum brightness of the processed image in a candidate channel; calculating the brightness mean value of each image area in the candidate channel; obtaining a brightness ratio matrix corresponding to the candidate channel according to the maximum brightness of the processed image in the candidate channel and the brightness mean value of each image area in the candidate channel, wherein the brightness ratio matrix records the brightness ratio of each endpoint in the candidate channel, and the brightness ratio reflects the brightness condition of the endpoint; and obtaining a brightness correction coefficient matrix of the image to be processed in the candidate channel according to the brightness ratio matrix corresponding to the candidate channel, wherein the brightness correction coefficient matrix records the brightness correction coefficient of each endpoint in the candidate channel.
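By way of illustration and not limitation, the brightness ratio computation for one candidate channel might be sketched as follows. How the region means are mapped onto endpoint positions is not spelled out in the embodiment, so assigning each inner endpoint the average of its adjacent regions is an assumption of this sketch, as are the function and variable names.

```python
import numpy as np

def inner_ratio_matrix(plane: np.ndarray, m: int, n: int) -> np.ndarray:
    """Split one channel plane into m x n regions whose corners form an
    (m+1) x (n+1) endpoint grid, then set each inner endpoint's ratio to
    max_brightness / mean(adjacent regions). Outer endpoints are left as
    NaN and filled by a later step."""
    h, w = plane.shape
    region_means = plane[:h - h % m, :w - w % n] \
        .reshape(m, h // m, n, w // n).mean(axis=(1, 3))
    max_brightness = plane.max()
    ratios = np.full((m + 1, n + 1), np.nan)
    for i in range(1, m):
        for j in range(1, n):
            neighbors = region_means[i - 1:i + 1, j - 1:j + 1]
            ratios[i, j] = max_brightness / neighbors.mean()
    return ratios
```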
In one embodiment, the endpoints include outer endpoints located on an outer boundary of the processed image and inner endpoints not located on the outer boundary; the correction coefficient module is specifically configured to: obtain the brightness ratio of each inner endpoint in the candidate channel according to the maximum brightness of the processed image in the candidate channel and the brightness mean value of each image area in the candidate channel; obtain an endpoint distance matrix according to the coordinate position, in the image to be processed, of the pixel point corresponding to the maximum brightness of the candidate channel and the coordinate position of each endpoint in the image to be processed; and obtain the brightness ratio of each outer endpoint in the candidate channel according to the endpoint distance matrix and the brightness ratio of each inner endpoint in the candidate channel.
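By way of illustration and not limitation: the exact rule relating the outer-endpoint ratios to the endpoint distance matrix is not made explicit in the embodiment, so the sketch below simply scales the nearest inner endpoint's ratio by the relative distance to the brightest pixel; this rule, like every name in the sketch, is purely an illustrative guess.

```python
import numpy as np

def fill_outer_ratios(ratios: np.ndarray, endpoints_xy: np.ndarray,
                      max_xy: tuple) -> np.ndarray:
    """Fill the NaN outer-endpoint ratios left by inner_ratio_matrix.

    endpoints_xy has shape (m+1, n+1, 2) with the (x, y) coordinates of
    each endpoint; max_xy is the (x, y) of the brightest pixel."""
    dist = np.hypot(endpoints_xy[..., 0] - max_xy[0],
                    endpoints_xy[..., 1] - max_xy[1])  # endpoint distance matrix
    out = ratios.copy()
    rows, cols = ratios.shape
    for i in range(rows):
        for j in range(cols):
            if np.isnan(out[i, j]):
                ii = min(max(i, 1), rows - 2)   # nearest inner endpoint row
                jj = min(max(j, 1), cols - 2)   # nearest inner endpoint column
                out[i, j] = ratios[ii, jj] * dist[i, j] / max(dist[ii, jj], 1e-9)
    return out
```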
In one embodiment, the correction coefficient module is specifically configured to: construct a mask image corresponding to the image to be processed according to the boundary of the imaging area; divide the mask image into the first number of mask regions; calculate the average pixel value of each mask area, and obtain, according to the average pixel value of each mask area, a second number of mask areas and the area positions of the target mask areas whose average pixel values are smaller than a preset pixel value; construct an area mask according to the second number of mask areas, wherein the pixel values of all area positions in the area mask are a first pixel value; set, in the area mask, the pixel value at the area position of each target mask area to a second pixel value to obtain a target area mask; and obtain a brightness correction coefficient matrix of the image to be processed in the candidate channel according to the brightness ratio matrix corresponding to the candidate channel and the target area mask.
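By way of illustration and not limitation, the mask construction might be sketched as follows; taking 1 and 0 as the first and second pixel values and 128.0 as the preset pixel value are assumptions of the sketch, as is the idea of later multiplying the resulting mask into the brightness ratio matrix.

```python
import numpy as np

def target_area_mask(boundary_mask: np.ndarray, m: int, n: int,
                     thresh: float = 128.0) -> np.ndarray:
    """Region-level mask derived from the imaging-area boundary.

    boundary_mask is the mask image built from the detected circle
    (255 inside the imaging area, 0 outside). Regions whose average
    pixel value falls below `thresh` are marked with the second pixel
    value (0); all other region positions keep the first pixel value (1)."""
    h, w = boundary_mask.shape
    region_means = boundary_mask[:h - h % m, :w - w % n].astype(float) \
        .reshape(m, h // m, n, w // n).mean(axis=(1, 3))
    area_mask = np.ones((m, n))                # first pixel value
    area_mask[region_means < thresh] = 0.0     # second pixel value
    return area_mask
```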
In one embodiment, the candidate channels include: r channel, GR channel, GB channel and B channel; the image processing apparatus further includes: a corrective module for: calculating the mean value of the brightness correction coefficient matrix of the image to be processed in the GR channel and the brightness correction coefficient matrix of the GB channel to obtain a corrected mean value matrix; dividing the brightness correction coefficient matrix of the image to be processed in a target channel by the correction mean value matrix to obtain a color correction coefficient matrix of the image to be processed in the target channel, wherein the target channel is one of the candidate channels; calculating the mean value of the brightness correction coefficient matrix of the GR channel and the brightness correction coefficient matrix of the GB channel of the image to be processed under the standard color temperature to obtain a color temperature correction mean value matrix; and obtaining a color brightness correction coefficient matrix of the image to be processed in the target channel according to the color correction coefficient matrix of the image to be processed in the target channel and the color temperature correction mean value matrix.
In one embodiment, as shown in fig. 12, a computer device is provided, which may specifically be a terminal or a server. The computer device comprises a processor, a memory and a network interface connected through a system bus. The memory comprises a nonvolatile storage medium and an internal memory; the nonvolatile storage medium stores an operating system and a computer program which, when executed by the processor, causes the processor to implement the image processing method. Nonvolatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM) or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus DRAM (RDRAM) and Direct Rambus DRAM (DRDRAM). The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the image processing method. Those skilled in the art will appreciate that the architecture shown in fig. 12 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The image processing method provided by the present application may be implemented in the form of a computer program executable on a computer device as shown in fig. 12. The memory of the computer device may store the program modules constituting the image processing apparatus, such as the acquisition module 1101 and the boundary module 1102.
A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring an image to be processed;
identifying an imaging area in the image to be processed to obtain the boundary of the imaging area;
and correcting the pixel values of the pixel points in the dark area in the image to be processed by using the pixel values of the pixel points on the boundary to obtain a processed image of the image to be processed.
In one embodiment, a computer readable storage medium is provided, storing a computer program that, when executed by a processor, causes the processor to perform the steps of:
acquiring an image to be processed;
identifying an imaging area in the image to be processed to obtain the boundary of the imaging area;
and correcting the pixel values of the pixel points in the dark area in the image to be processed by using the pixel values of the pixel points on the boundary to obtain a processed image of the image to be processed.
It should be noted that the image processing method, the image processing apparatus, the computer device and the computer-readable storage medium described above belong to a single general inventive concept, and the contents of their respective embodiments are mutually applicable.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be electrical, mechanical or in other forms.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An image processing method, comprising:
acquiring an image to be processed;
identifying an imaging area in the image to be processed to obtain the boundary of the imaging area;
and correcting the pixel values of the pixel points in the dark area in the image to be processed by using the pixel values of the pixel points on the boundary to obtain a processed image of the image to be processed.
2. The image processing method according to claim 1, wherein the imaging region is a circular region;
the identifying the imaging area in the image to be processed to obtain the boundary of the imaging area comprises:
carrying out binarization processing on the image to be processed to obtain a binarized image;
and carrying out circle detection on the binarized image to obtain the boundary of the imaging area.
3. The image processing method according to claim 2, wherein the correcting the pixel values of the pixel points located in the dark area in the image to be processed by using the pixel values of the pixel points located on the boundary comprises:
calculating the distance between a target pixel point in the image to be processed and the center of the imaging area to obtain a connection line distance;
if the connection line distance is smaller than or equal to the radius of the boundary of the imaging area, keeping the pixel value of the target pixel point unchanged;
if the connection line distance is larger than the radius of the boundary of the imaging area, determining the intersection point of the boundary of the imaging area with the connection line constructed from the target pixel point to the center of the imaging area;
and in the image to be processed, correcting the pixel value of the target pixel point to the pixel value of the intersection point to obtain a processed image of the image to be processed.
4. The image processing method according to claim 1, further comprising:
dividing the processed image into a first number of image regions;
determining endpoints of the first number of image regions;
acquiring the maximum brightness of the processed image in a candidate channel;
calculating the brightness mean value of each image area in the candidate channel;
obtaining a brightness ratio matrix corresponding to the candidate channel according to the maximum brightness of the processed image in the candidate channel and the brightness mean value of each image area in the candidate channel, wherein the brightness ratio matrix records the brightness ratio of each endpoint in the candidate channel, and the brightness ratio reflects the brightness condition of the endpoint;
and obtaining a brightness correction coefficient matrix of the image to be processed in the candidate channel according to the brightness ratio matrix corresponding to the candidate channel, wherein the brightness correction coefficient matrix records the brightness correction coefficient of each endpoint in the candidate channel.
5. The image processing method according to claim 4, wherein the endpoints include outer endpoints located on an outer boundary of the processed image and inner endpoints not located on the outer boundary;
the obtaining a luminance ratio matrix corresponding to the candidate channel according to the maximum luminance of the processed image in the candidate channel and the luminance mean value of each image area in the candidate channel includes:
obtaining the brightness ratio of each inner endpoint in the candidate channel according to the maximum brightness of the processed image in the candidate channel and the brightness mean value of each image area in the candidate channel;
obtaining an endpoint distance matrix according to the coordinate position of the pixel point corresponding to the maximum brightness of the candidate channel in the image to be processed and the coordinate position of each endpoint in the image to be processed;
and obtaining the brightness ratio of each outer endpoint in the candidate channel according to the endpoint distance matrix and the brightness ratio of each inner endpoint in the candidate channel.
6. The image processing method according to claim 4, wherein obtaining the luminance correction coefficient matrix of the image to be processed in the candidate channel according to the luminance ratio matrix corresponding to the candidate channel comprises:
constructing a mask image corresponding to the image to be processed according to the boundary of the imaging area;
dividing the mask image into the first number of mask regions;
calculating the average pixel value of each mask area, and obtaining a second number of mask areas and the area positions of the target mask areas with the average pixel values smaller than the preset pixel values according to the average pixel value of each mask area;
constructing an area mask according to the second number of mask areas, wherein the pixel values of all area positions in the area mask are a first pixel value;
setting, in the area mask, the pixel value at the area position of each target mask area to a second pixel value to obtain a target area mask;
and obtaining a brightness correction coefficient matrix of the image to be processed in the candidate channel according to the brightness ratio matrix corresponding to the candidate channel and the target area mask.
7. The image processing method of claim 4, wherein the candidate channels comprise: r channel, GR channel, GB channel and B channel;
the image processing method further comprises the following steps:
calculating the mean value of the brightness correction coefficient matrix of the image to be processed in the GR channel and the brightness correction coefficient matrix of the GB channel to obtain a corrected mean value matrix;
dividing the brightness correction coefficient matrix of the image to be processed in a target channel by the correction mean value matrix to obtain a color correction coefficient matrix of the image to be processed in the target channel, wherein the target channel is one of the candidate channels;
calculating the mean value of the brightness correction coefficient matrix of the GR channel and the brightness correction coefficient matrix of the GB channel of the image to be processed under the standard color temperature to obtain a color temperature correction mean value matrix;
and obtaining a color brightness correction coefficient matrix of the image to be processed in the target channel according to the color correction coefficient matrix of the image to be processed in the target channel and the color temperature correction mean value matrix.
8. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring an image to be processed;
the boundary module is used for identifying an imaging area in the image to be processed to obtain the boundary of the imaging area;
and the correction module is used for correcting the pixel values of the pixel points in the dark area in the image to be processed by using the pixel values of the pixel points on the boundary to obtain a processed image of the image to be processed.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the image processing method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, having stored thereon computer program instructions, which, when read and executed by a processor, perform the steps of the image processing method of any one of claims 1 to 7.
CN202111245876.9A 2021-10-26 2021-10-26 Image processing method, device, equipment and storage medium Pending CN113902644A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111245876.9A CN113902644A (en) 2021-10-26 2021-10-26 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111245876.9A CN113902644A (en) 2021-10-26 2021-10-26 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113902644A true CN113902644A (en) 2022-01-07

Family

ID=79026182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111245876.9A Pending CN113902644A (en) 2021-10-26 2021-10-26 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113902644A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117649503A (en) * 2024-01-29 2024-03-05 杭州永川科技有限公司 Image reconstruction method, apparatus, computer device, storage medium, and program product


Similar Documents

Publication Publication Date Title
US10298864B2 (en) Mismatched foreign light detection and mitigation in the image fusion of a two-camera system
CN110298282B (en) Document image processing method, storage medium and computing device
CN111563552B (en) Image fusion method, related device and apparatus
CN110796600B (en) Image super-resolution reconstruction method, image super-resolution reconstruction device and electronic equipment
CN110996081B (en) Projection picture correction method and device, electronic equipment and readable storage medium
CN110866486B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN113269697B (en) Method and device for generating curved screen image
US20170070650A1 (en) Apparatus for correcting image distortion of lens
CN112258418A (en) Image distortion correction method, device, electronic equipment and storage medium
CN111105367A (en) Face distortion correction method and device, electronic equipment and storage medium
CN113902644A (en) Image processing method, device, equipment and storage medium
CN111445487A (en) Image segmentation method and device, computer equipment and storage medium
CN113222862A (en) Image distortion correction method, device, electronic equipment and storage medium
CN111885371A (en) Image occlusion detection method and device, electronic equipment and computer readable medium
CN111340722A (en) Image processing method, processing device, terminal device and readable storage medium
CN110852958A (en) Self-adaptive correction method and device based on object inclination angle
CN111428707B (en) Method and device for identifying pattern identification code, storage medium and electronic equipment
CN112819738B (en) Infrared image fusion method, device, computer equipment and storage medium
CN113840135A (en) Color cast detection method, device, equipment and storage medium
CN112233020A (en) Unmanned aerial vehicle image splicing method and device, computer equipment and storage medium
CN113962892A (en) Method and device for correcting wide-angle lens image distortion and photographic equipment
CN115334245A (en) Image correction method and device, electronic equipment and storage medium
CN116823683A (en) Lens detection method, detection device and computer device
CN117392161A (en) Calibration plate corner point for long-distance large perspective distortion and corner point number determination method
US9917972B2 (en) Image processor, image-processing method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination