CN117156285A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN117156285A
CN117156285A (application CN202311158737.1A)
Authority
CN
China
Prior art keywords
image
matrix
area
region
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311158737.1A
Other languages
Chinese (zh)
Inventor
许智
曹治国
彭珏文
张华琪
顾弘
唐文峰
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311158737.1A priority Critical patent/CN117156285A/en
Publication of CN117156285A publication Critical patent/CN117156285A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/815 Camera processing pipelines; Components thereof for controlling the resolution by using a single image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method and device, belonging to the technical field of image processing. The image processing method comprises the following steps: determining a first region and a second region in a first image, wherein the first region and the second region are regions of the first image whose brightness values are higher than a brightness threshold; rendering the first region and the second region in the first image respectively, and determining the image matrices corresponding to the first region and the second region; performing weight reassignment processing on the image matrices of the first region and the second region to obtain weight-reassigned image matrices; and generating a second image according to the weight-reassigned image matrices.

Description

Image processing method and device
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image processing method and an image processing device.
Background
With the advancement of technology, more and more intelligent electronic devices offer a portrait mode and a large-aperture mode among their shooting modes. Both of these shooting modes can render a foreground image from a captured, nearly full-resolution image by means of post-processing.
Existing rendering algorithms achieve good results overall, but when processing the diffusion effect of the highlight regions of an image, the edges of the diffused highlight regions still tend to become too blurred.
Disclosure of Invention
The embodiments of the application aim to provide an image processing method and an image processing device, which can solve the problem that the edges of the diffused highlight regions become too blurred when the diffusion effect of the highlight regions of an image is processed.
In a first aspect, an embodiment of the present application provides an image processing method, including:
determining a first region and a second region in a first image, wherein the first region and the second region are regions of the first image whose brightness values are higher than a brightness threshold;
rendering the first region and the second region in the first image respectively, and determining the image matrices corresponding to the first region and the second region;
performing weight reassignment processing on the image matrices of the first region and the second region to obtain weight-reassigned image matrices;
and generating a second image according to the weight-reassigned image matrices.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: a determining module, a rendering module, a processing module, and a generating module;
the determining module is used for determining a first region and a second region in the first image, wherein the first region and the second region are regions of the first image whose brightness values are higher than a brightness threshold;
the rendering module is used for rendering the first region and the second region in the first image respectively;
the determining module is further used for determining the image matrices corresponding to the first region and the second region;
the processing module is used for performing weight reassignment processing on the image matrices of the first region and the second region to obtain weight-reassigned image matrices;
the generating module is used for generating a second image according to the weight-reassigned image matrices.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, the program or instructions implementing the steps of the method as in the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method as in the first aspect.
In a fifth aspect, embodiments of the present application provide a chip comprising a processor and a communication interface coupled to the processor for running a program or instructions implementing the steps of the method as in the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement a method as in the first aspect.
In the embodiments of the application, the first region and the second region (that is, the highlight regions whose brightness values exceed the brightness threshold in the first image) are determined. The two regions are then rendered separately, and weight reassignment processing is performed on the resulting image matrix of the first region and image matrix of the second region. This adds a diffusion effect to the first region and the second region of the processed image, makes the edges of the diffused first and second regions sharper, and gives the highlight regions of the first image better layering, thereby solving the problem in the related art that the edges of diffused highlight regions are too blurred when the diffusion effect of the highlight regions of an image is processed.
Drawings
Fig. 1 is a schematic flow chart of an image processing method provided by some embodiments of the present application;
Fig. 2 is a schematic block diagram of an image processing apparatus provided by some embodiments of the present application;
Fig. 3 is a block diagram of an electronic device provided by some embodiments of the present application;
Fig. 4 is a schematic diagram of the hardware structure of an electronic device provided by some embodiments of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and in the claims are used to distinguish between similar objects and are not necessarily used to describe a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. The objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method and the device provided by the embodiment of the application are described in detail below by referring to fig. 1 to 4 through specific embodiments and application scenes thereof.
In some embodiments of the present application, an image processing method is provided, and fig. 1 is a schematic flow chart of the image processing method provided in some embodiments of the present application. As shown in fig. 1, the image processing method includes:
Step 102: determining a first region and a second region in a first image, wherein the first region and the second region are regions of the first image whose brightness values are higher than a brightness threshold;
In the embodiment of the application, the first image is an image obtained by processing the acquired initial image, and the bokeh image required by the user can be obtained by performing bokeh rendering on the first image.
In the embodiment of the application, the first region and the second region are both regions of the first image whose brightness values are higher than the brightness threshold; their brightness values are higher than those of the other regions of the first image, i.e., the first region and the second region are both highlight regions of the first image.
Illustratively, the luminance value of the first region is different from the luminance value of the second region.
Step 104: rendering the first region and the second region in the first image respectively, and determining the image matrices corresponding to the first region and the second region;
In the embodiment of the application, the image matrix of the first region is the image matrix obtained by rendering the first region in the first image; likewise, rendering the second region yields the image matrix corresponding to the second region.
In the embodiment of the application, the first image matrix is the image matrix of the first region and the second image matrix is the image matrix of the second region. Illustratively, rendering the first region and the second region respectively yields the corresponding first image matrix and second image matrix, each of which comprises a corresponding weight accumulation matrix and color accumulation matrix.
Step 106: performing weight reassignment processing on the image matrices of the first region and the second region to obtain weight-reassigned image matrices;
In the embodiment of the application, weight reassignment is performed on the image matrix of the first region and the image matrix of the second region respectively, yielding two weight-reassigned image matrices: the image matrix of the first image after the first region is rendered, and the image matrix of the first image after the second region is rendered.
Step 108: generating a second image according to the weight-reassigned image matrices.
In the embodiment of the application, after the weights of the two image matrices are reassigned, the weight-reassigned image matrices are fused to obtain the finally generated second image, so that the edges of the diffusion effect of the highlight regions in the processed second image are clearer and the sense of layering of the diffusion effect of the highlight regions in the second image is improved.
Illustratively, the weight-reassigned image matrices include a first image matrix and a second image matrix, where the first image matrix is the image matrix obtained by rendering the first region in the first image and the second image matrix is the image matrix obtained by rendering the second region in the first image. The first image matrix and the second image matrix each comprise a weight accumulation matrix and a color accumulation matrix.
In the process of generating the second image based on the first image matrix and the second image matrix, the color accumulation matrix of the second image is obtained by adding the color accumulation matrices of the first image matrix and the second image matrix through the following formula (1):
C_X = C_Y + C_Z; (1)
where C_X is the color accumulation matrix of the second image, C_Y is the color accumulation matrix in the first image matrix, and C_Z is the color accumulation matrix in the second image matrix.
Then the weight accumulation matrix in the first image matrix and the weight accumulation matrix in the second image matrix are added through the following formula (2) to obtain the weight accumulation matrix of the second image:
W_X = W_Y + W_Z; (2)
where W_X is the weight accumulation matrix of the second image, W_Y is the weight accumulation matrix in the first image matrix, and W_Z is the weight accumulation matrix in the second image matrix.
Then, the color accumulation matrix of the second image is divided point-wise by the weight accumulation matrix of the second image through the following formula (3) to obtain the final second image:
B_X = C_X / W_X; (3)
where B_X is the second image, C_X is the color accumulation matrix of the second image, and W_X is the weight accumulation matrix of the second image.
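The fusion described by formulas (1)-(3) can be sketched in NumPy as follows. This is a minimal illustrative sketch, not the patent's implementation: the function name and the eps guard against zero weights are assumptions.

```python
import numpy as np

def fuse_image_matrices(c_y, w_y, c_z, w_z, eps=1e-8):
    """Fuse the two rendered image matrices into the second image.

    c_y, c_z: color accumulation matrices of the first and second image matrix.
    w_y, w_z: the corresponding weight accumulation matrices.
    The eps guard against division by zero is an added assumption.
    """
    c_x = c_y + c_z          # formula (1): color accumulation matrix of the second image
    w_x = w_y + w_z          # formula (2): weight accumulation matrix of the second image
    return c_x / (w_x + eps) # formula (3): point-wise division yields the second image
```

The point-wise division normalizes each pixel's accumulated color by its accumulated weight, so pixels that received more diffused energy are not simply brighter.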
In the embodiments of the application, the first region and the second region (that is, the highlight regions whose brightness values exceed the brightness threshold in the first image) are determined. The two regions are then rendered separately, and weight reassignment processing is performed on the resulting image matrix of the first region and image matrix of the second region. This adds a diffusion effect to the first region and the second region of the processed image, makes the edges of the diffused first and second regions sharper, and gives the highlight regions of the first image better layering, thereby solving the problem in the related art that the edges of diffused highlight regions are too blurred when the diffusion effect of the highlight regions of an image is processed.
In some embodiments of the application, the image matrix comprises a first color accumulation matrix and a first weight accumulation matrix;
performing weight reassignment processing on the image matrices of the first region and the second region to obtain the weight-reassigned image matrices comprises: dividing the first color accumulation matrix point-wise by the first weight accumulation matrix to obtain a first bokeh map matrix; updating the first weight accumulation matrix through a first adjustment parameter to obtain a second weight accumulation matrix; multiplying the second weight accumulation matrix point-wise with the first bokeh map matrix to obtain a second color accumulation matrix; and determining the weight-reassigned image matrix according to the second color accumulation matrix and the second weight accumulation matrix.
The weight reassignment process is described below, taking either of the two image matrices as an example:
In the embodiment of the application, the image matrices comprise the image matrix obtained by rendering the first region and the image matrix obtained by rendering the second region, and weight reassignment processing needs to be carried out on the two image matrices respectively. Each image matrix comprises a color accumulation matrix and a weight accumulation matrix, and the weight reassignment of an image matrix is completed by adjusting its color accumulation matrix and weight accumulation matrix.
In the embodiment of the application, the first color accumulation matrix and the first weight accumulation matrix are the matrices obtained after the first image is rendered, i.e., the matrices before adjustment. The first color accumulation matrix is divided point-wise by the first weight accumulation matrix to obtain a first bokeh map matrix, which is the image matrix of the first image after bokeh rendering.
Illustratively, the first bokeh map matrix is calculated by the following equation (4):
B_1 = C_1 / W_1; (4)
where B_1 is the first bokeh map matrix, C_1 is the first color accumulation matrix, W_1 is the first weight accumulation matrix, and / denotes point-wise division.
In the embodiment of the application, the first weight accumulation matrix is adjusted through the first adjustment parameter to obtain the updated second weight accumulation matrix, which completes the reassignment of the weight accumulation matrix in the image matrix. After the second weight accumulation matrix is obtained, the first color accumulation matrix is updated based on the first bokeh map matrix and the second weight accumulation matrix to obtain the updated second color accumulation matrix.
Illustratively, the second color accumulation matrix is calculated by the following equation (5):
C_2 = W_2 ∘ B_1; (5)
where W_2 is the second weight accumulation matrix, B_1 is the first bokeh map matrix, C_2 is the second color accumulation matrix, and ∘ denotes point-wise multiplication.
It should be noted that the weight reassignment process is the same for both image matrices. The first adjustment parameter may be set according to actual requirements, and the first adjustment parameters corresponding to the two image matrices may be the same or different.
In the embodiment of the application, the image matrix comprises a weight accumulation matrix and a color accumulation matrix whose weights need to be reassigned. Therefore, the weight accumulation matrix in the image matrix is adjusted and updated through the first adjustment parameter, and the color accumulation matrix is then adjusted correspondingly based on the updated weight accumulation matrix. This completes the weight reassignment of the image matrix and ensures the edge sharpness of the diffused highlight regions in the second image generated from the two weight-reassigned image matrices.
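The two steps of formulas (4) and (5) can be sketched as follows. The `update_fn` callback standing in for the formula (6)/(7) weight update and the eps division guard are illustrative assumptions, not part of the patent text.

```python
import numpy as np

def reassign_weights(c1, w1, update_fn, eps=1e-8):
    """Reassign the weights of one image matrix.

    c1: first color accumulation matrix; w1: first weight accumulation matrix.
    update_fn: maps w1 to the second weight accumulation matrix (the
    formula (6)/(7) update, supplied by the caller in this sketch).
    Returns the second color and weight accumulation matrices.
    """
    b1 = c1 / (w1 + eps)  # formula (4): first bokeh map matrix
    w2 = update_fn(w1)    # weight update via the first adjustment parameter
    c2 = w2 * b1          # formula (5): point-wise multiplication
    return c2, w2
```

Dividing first and re-multiplying afterwards means the bokeh colors themselves are unchanged; only the per-pixel weights (and hence the blending during the final fusion) are rebalanced.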
In some embodiments of the present application, updating the first weight accumulation matrix by the first adjustment parameter to obtain the second weight accumulation matrix includes: dividing the first weight accumulation matrix into a first area matrix and a second area matrix; updating the second area matrix according to the first adjustment parameters to obtain a third area matrix; updating the first area matrix according to the third area matrix to obtain a fourth area matrix; and determining a second weight accumulation matrix according to the third area matrix and the fourth area matrix.
In the embodiment of the application, before a first weight accumulation matrix in an image matrix is adjusted by a first adjustment parameter, the first weight accumulation matrix is divided into areas based on weight values to obtain a first area matrix and a second area matrix corresponding to two areas.
By setting a threshold value of the weight value, an area having a weight value lower than the threshold value is determined as a first area matrix, and an area having a weight value higher than or equal to the threshold value is determined as a second area matrix.
In the embodiment of the application, the second area matrix is adjusted and updated through the first adjusting parameter to obtain the third area matrix, the first area matrix is adjusted and updated based on the third area matrix obtained through adjustment and update to obtain the fourth area matrix, and the updated second weight accumulation matrix is obtained after the adjusted third area matrix and the fourth area matrix are added.
Illustratively, the second area matrix is adjusted by the following equation (6), and the first area matrix is adjusted by the following equation (7):
W_2[P_2] = k(W_2[P_2] - min(W_2[P_2])) + max(W_2[P_2]); (6)
where W_2[P_2] is the third area matrix in the second weight accumulation matrix, W_2[P_1] is the fourth area matrix in the second weight accumulation matrix, and k is the first adjustment parameter, a hyper-parameter that can control the amplitude of the weight reassignment; min(·) denotes taking the minimum value and max(·) denotes taking the maximum value.
In the embodiment of the application, the first weight accumulation matrix is divided into the first area matrix and the second area matrix according to the weight values; the second area matrix is adjusted through the first adjustment parameter to obtain the third area matrix, and the first area matrix is adjusted based on the resulting third area matrix to obtain the fourth area matrix. Adjusting the first weight accumulation matrix region by region improves the accuracy of the resulting second weight accumulation matrix.
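A minimal sketch of the region split and the formula (6) rescaling, assuming NumPy boolean masks represent the area matrices. Since the text does not reproduce formula (7) for the low-weight (fourth) area, that part is left unchanged here as a placeholder assumption; the threshold `thresh` and function name are also illustrative.

```python
import numpy as np

def update_weight_matrix(w1, thresh, k):
    """Split the first weight accumulation matrix by a weight threshold and
    rescale the high-weight part per formula (6).

    w1: first weight accumulation matrix; k: first adjustment parameter
    (hyper-parameter controlling the amplitude of reassignment).
    """
    w2 = w1.copy()
    p2 = w1 >= thresh  # second-area matrix: weights at or above the threshold
    vals = w1[p2]
    if vals.size:
        # formula (6): shift to zero, scale by k, offset by the region maximum
        w2[p2] = k * (vals - vals.min()) + vals.max()
    # placeholder: formula (7) for the first-area (low-weight) part is not
    # reproduced in the text, so those entries are left as-is in this sketch
    return w2
```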
In some embodiments of the present application, rendering the first region and the second region in the first image respectively and determining the image matrices corresponding to the first region and the second region includes:
acquiring a second image and a focus disparity parameter, wherein the second image is a disparity map corresponding to the first image; and rendering the first region and the second region in the first image respectively according to the second image, a blur degree parameter, and the focus disparity parameter, to obtain the image matrices corresponding to the first region and the second region.
In the embodiment of the application, the image matrix comprises a color accumulation matrix and a weight accumulation matrix. In the process of determining the image matrix, the disparity map corresponding to the captured first image, i.e., the second image, and the focus disparity parameter need to be acquired. The blur degree parameter is an adjustment parameter set according to actual requirements. The first region and the second region in the first image are rendered respectively through the blur degree parameter, the focus disparity parameter, and the second image, to obtain the two corresponding image matrices.
The procedure for rendering the first region and the second region in the first image is the same; the following description takes the rendering of one highlight region in the first image as an example:
Illustratively, two all-zero accumulation matrices matching the size of the first image are initialized as the initial color accumulation matrix and the initial weight accumulation matrix, respectively. All pixel points in the first image are traversed; the blur radius and blur sign of each pixel point are calculated according to the second image, the blur degree parameter, and the focus disparity parameter; a blur matrix is obtained from the blur radius; the color accumulation matrix in the image matrix is obtained from the blur matrix and the initial color accumulation matrix; and the weight accumulation matrix in the image matrix is obtained from the blur matrix and the initial weight accumulation matrix.
The blur radius is obtained by the following formula (8), the blur sign by formula (9), the blur matrix by formula (10), the weight accumulation matrix in the image matrix by formula (11), and the color accumulation matrix in the image matrix by formula (12):
r = K|D_i - d_f|; (8)
W = W + w_ij; (11)
C = C + w_ij · I_i; (12)
where r is the blur radius, s is the blur sign, W is the weight accumulation matrix in the image matrix, C is the color accumulation matrix of the image matrix, I_i is the i-th pixel in the first image, w_ij is the blur matrix, r_i is the blur radius of the i-th pixel, D_i is the i-th pixel in the second image, d_f is the focus disparity parameter, and K is the blur degree parameter.
For each pixel point i, all neighbourhood pixel points within the diffusion range are searched using the corresponding blur radius r_i; a neighbourhood pixel can receive the energy of pixel i only if its distance to pixel i is smaller than the blur radius r_i.
For example, when the first region is diffusion-rendered, the whole first region in the first image may be diffusion-rendered to obtain a single overall image matrix. When the second region is diffusion-rendered, a plurality of to-be-processed areas in the second region may be marked, and each to-be-processed area is diffusion-rendered separately to obtain the color accumulation matrix and weight accumulation matrix corresponding to the second region.
In the embodiment of the application, the first region and the second region in the first image can be diffusion-rendered respectively through the blur degree parameter, the focus disparity parameter, and the second image, so as to obtain the weight accumulation matrices and color accumulation matrices matching the first region and the second region.
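The traversal described above can be sketched as a scatter (splat) loop in NumPy. This is a simplified sketch under stated assumptions: a square diffusion neighbourhood and uniform blur-matrix weights w_ij = 1, since the blur sign and blur kernel of formulas (9) and (10) are not fully reproduced in the text; the function and parameter names are illustrative.

```python
import numpy as np

def render_region(image, disparity, mask, K, d_f):
    """Scatter rendering of one highlight region.

    Each masked pixel diffuses its color to neighbours within its blur
    radius r = K * |D_i - d_f| (formula (8)).  Accumulation follows
    formulas (11) and (12) with the simplifying assumption w_ij = 1.
    """
    h, w, _ = image.shape
    C = np.zeros_like(image, dtype=np.float64)  # initial color accumulation matrix
    W = np.zeros((h, w, 1), dtype=np.float64)   # initial weight accumulation matrix
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        r = int(K * abs(disparity[y, x] - d_f))  # formula (8): blur radius of pixel i
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        W[y0:y1, x0:x1] += 1.0           # formula (11): W = W + w_ij
        C[y0:y1, x0:x1] += image[y, x]   # formula (12): C = C + w_ij * I_i
    return C, W
```

Dividing C by W afterwards (as in formula (3)) would yield the rendered region; keeping the two matrices separate is what allows the later weight reassignment step.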
In some embodiments of the present application, before determining the first region and the second region in the first image, further comprising:
in the case that a third image is acquired, adjusting the third image through a second adjustment parameter to obtain a fourth image, wherein the third image is the initial image corresponding to the first image and the fourth image is the highlight color map of the third image; and generating the first image according to a fifth image and the fourth image, wherein the fifth image is the all-in-focus image of the third image.
In the embodiment of the application, the third image is the initial image of the first image, namely the image acquired during shooting; the fourth image is the highlight color map generated based on the third image; and the fifth image is the all-in-focus image obtained after all-in-focus processing of the third image. The processed first image can be obtained by adding the highlight color map to the fifth image, and the bokeh image required by the user can be obtained by performing bokeh rendering on the first image.
Illustratively, the fourth image is obtained by the following equation (13):
H = δ_1(R ∘ M_1) + δ_2(R ∘ M_2); (13)
where R is the third image, i.e., the initial image corresponding to the first image; H is the fourth image, i.e., the highlight color map corresponding to the initial image; δ_1 and δ_2 are the second adjustment parameters, which are adjustable; M_1 is the first image region in the third image, corresponding to the first region in the first image; M_2 is the second image region in the third image, corresponding to the second region in the first image; and ∘ denotes point-wise multiplication.
Illustratively, the first image is obtained by the following equation (14):
I_1 = I_2 + H; (14)
where H is the fourth image, i.e., the highlight color map; I_1 is the first image; and I_2 is the fifth image, i.e., the all-in-focus image obtained after all-in-focus processing of the third image.
According to the embodiment of the application, the highlight color map and the all-in-focus image are obtained by adjusting the original image, so that an all-in-focus image with highlight processing is obtained and the required bokeh image is generated, which improves the efficiency and effect of the bokeh processing.
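Formula (14) can be sketched as a simple point-wise addition. The clipping to an 8-bit value range is an added assumption for illustration, not stated in the patent.

```python
import numpy as np

def compose_first_image(i2, h):
    """Formula (14): the first image I_1 is the all-in-focus fifth image I_2
    plus the highlight color map H of the fourth image.  Clipping to [0, 255]
    is an assumption for 8-bit images."""
    return np.clip(i2 + h, 0.0, 255.0)
```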
In some embodiments of the present application, determining a first region and a second region in a first image includes:
extracting a first image area with a brightness value within a first threshold range from the third image;
extracting a second image area with a brightness value within a second threshold range from the third image, wherein the minimum value of the second threshold range is larger than the maximum value of the first threshold range;
An area in the first image that matches the first image area is determined as a first area, and an area in the first image that matches the second image area is determined as a second area.
In the embodiment of the application, the first image is the image obtained by performing image processing on the initial image, and the third image is the initial image corresponding to the first image. Pixels in the third image are screened based on a preset first threshold range to obtain a first image region whose brightness values are within the first threshold range; the image in this region is set as the first-stage highlight region map and is recorded as the first image region M_1.
After the first image region is obtained, pixels in the third image continue to be screened based on a preset second threshold range to obtain a second image region whose brightness values are within the second threshold range; the image in this region is set as the second-stage highlight region map and is recorded as the second image region M_2.
Optionally, the range start of the second threshold range is greater than or equal to the range end of the first threshold range.
Illustratively, the first image region and the second image region are obtained by the following formulas (15) and (16):
M_1 = η_1 < R ≤ η_2; (15)
M_2 = R > η_2; (16)
where M_1 is the first image region, R is the brightness value, (η_1, η_2] is the first threshold range, M_2 is the second image region, and (η_2, +∞) is the second threshold range.
After the first image area is obtained, a corresponding first area is found in the first image after image processing according to the first image area, and a corresponding second area is found in the first image after image processing according to the second image area.
According to the embodiment of the application, the highlight regions are determined based on the brightness values of the original image, so that the highlight regions can be marked accurately; performing the diffusion processing on these more accurately determined highlight regions makes the edges of the diffused highlight regions clearer and improves the image processing effect.
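The two-level thresholding of formulas (15) and (16) can be sketched with boolean masks; the function name is illustrative.

```python
import numpy as np

def extract_highlight_regions(r_img, eta1, eta2):
    """Two-level highlight segmentation.

    r_img: brightness values of the initial (third) image.
    Returns boolean masks for the first image region M_1 (formula (15):
    eta1 < R <= eta2) and the second image region M_2 (formula (16): R > eta2).
    """
    m1 = (r_img > eta1) & (r_img <= eta2)  # first-stage highlight region map
    m2 = r_img > eta2                      # second-stage highlight region map
    return m1, m2
```

Because the minimum of the second range exceeds the maximum of the first, the two masks are disjoint by construction, which is what lets the two regions be rendered and reweighted independently.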
According to the image processing method provided by the embodiment of the application, the execution subject can be an image processing device. In the embodiment of the present application, the image processing apparatus is described by taking the image processing apparatus performing the image processing method as an example.
In some embodiments of the present application, an image processing apparatus is provided, and fig. 2 shows a schematic block diagram of the image processing apparatus provided in some embodiments of the present application. As shown in fig. 2, the image processing apparatus 200 includes: a determination module 202, a rendering module 204, a processing module 206, and a generation module 208.
The determining module 202 is configured to determine a first region and a second region in the first image, where the first region and the second region are regions in the first image with a luminance value higher than a luminance threshold;
the rendering module 204 is configured to render a first region and a second region in the first image respectively;
the determining module 202 is configured to determine an image matrix corresponding to the first region and the second region;
the processing module 206 is configured to perform weight redistribution processing on the image matrices of the first area and the second area to obtain a weight-redistributed image matrix;
a generating module 208, configured to generate a second image according to the image matrix after weight redistribution.
In the embodiment of the application, the first area and the second area, namely the highlight areas whose brightness values are larger than the brightness threshold value in the first image, are determined; then the first area and the second area are respectively rendered, and weight redistribution processing is carried out on the image matrix of the first area and the image matrix of the second area obtained by rendering, so that a diffusion effect is added for the first area and the second area in the processed image. The edges of the first area and the second area with the diffusion effect are clearer, and better layering is provided between the highlight areas in the first image, which solves the problem in the related art that the edges of highlight areas are too blurred when the diffusion effect of the highlight areas of an image is processed.
In some embodiments of the application, the image matrix comprises a first color accumulation matrix and a first weight accumulation matrix;
the processing module 206 is further configured to:
dividing the first color accumulation matrix by the first weight accumulation matrix point by point to obtain a first bokeh map matrix;
updating the first weight accumulation matrix through the first adjustment parameters to obtain a second weight accumulation matrix;
multiplying the second weight accumulation matrix by the first bokeh map matrix point by point to obtain a second color accumulation matrix;
the determining module 202 is further configured to determine a weight-reassigned image matrix according to the second color accumulation matrix and the second weight accumulation matrix.
In the embodiment of the application, the image matrix comprises a weight accumulation matrix and a color accumulation matrix, and the weights need to be redistributed, so the weight accumulation matrix in the image matrix is adjusted and updated through the first adjustment parameter, and then the color accumulation matrix is correspondingly adjusted based on the updated weight accumulation matrix. The weight redistribution of the image matrix is thus completed, and the edge definition of the diffused highlight regions in the second image generated from the two weight-redistributed image matrices is ensured.
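The three redistribution steps above (point-wise division, weight update, point-wise multiplication) can be sketched in NumPy as follows; `alpha` stands in for the first adjustment parameter, whose exact form the text does not specify:

```python
import numpy as np

def redistribute_weights(color_acc, weight_acc, alpha, eps=1e-8):
    """Weight-redistribution sketch: (1) point-wise division recovers the
    bokeh map, (2) the weight accumulation is updated by the adjustment
    parameter, (3) the color accumulation is rebuilt from the updated
    weights so the two matrices stay consistent."""
    bokeh = color_acc / (weight_acc + eps)   # first bokeh map matrix
    weight2 = weight_acc * alpha             # second weight accumulation matrix
    color2 = weight2 * bokeh                 # second color accumulation matrix
    return color2, weight2

C = np.array([[2.0, 4.0]])
W = np.array([[1.0, 2.0]])
C2, W2 = redistribute_weights(C, W, alpha=2.0)
```

Dividing `C2` by `W2` returns essentially the same bokeh map, so the redistribution reshapes the weights without disturbing the rendered colors.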
In some embodiments of the application, the processing module 206 is further configured to:
dividing the first weight accumulation matrix into a first area matrix and a second area matrix;
updating the second area matrix according to the first adjustment parameters to obtain a third area matrix;
updating the first area matrix according to the third area matrix to obtain a fourth area matrix;
the determining module 202 is further configured to determine a second weight accumulation matrix according to the third area matrix and the fourth area matrix.
In the embodiment of the application, the first weight accumulation matrix is divided into the first area matrix and the second area matrix according to the weight values, the second area matrix is adjusted by the first adjustment parameter to obtain the third area matrix, and the first area matrix is adjusted based on the third area matrix to obtain the fourth area matrix. By adjusting the first weight accumulation matrix region by region, the accuracy of the obtained second weight accumulation matrix can be improved.
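One way this region-wise split and update could look in NumPy is sketched below. The split threshold `tau`, the scaling rule for the second area matrix, and the mass-preserving update of the first area matrix are all assumptions — the text only states that the matrix is split by weight value (the second area matrix holding the larger weights) and updated in two steps:

```python
import numpy as np

def regionwise_weight_update(weight_acc, alpha, tau):
    """Region-wise update sketch. Entries above the assumed threshold `tau`
    form the second area matrix (the higher-weight part) and are scaled by
    the adjustment parameter `alpha`; the remaining first area matrix is
    then rescaled so the total weight mass is preserved. Both `tau` and the
    mass-preserving rule are assumptions, not given in the text."""
    high = np.where(weight_acc > tau, weight_acc, 0.0)   # second area matrix
    low = np.where(weight_acc <= tau, weight_acc, 0.0)   # first area matrix
    high3 = high * alpha                                 # third area matrix
    target = max(weight_acc.sum() - high3.sum(), 0.0)
    low4 = low * (target / low.sum()) if low.sum() > 0 else low  # fourth area matrix
    return high3 + low4                     # second weight accumulation matrix

W = np.array([[1.0, 3.0],
              [2.0, 4.0]])
W2 = regionwise_weight_update(W, alpha=1.2, tau=2.5)
```

With `alpha = 1.0` the update is the identity; boosting `alpha` above 1 shifts weight mass toward the highlight (high-weight) region while keeping the overall sum fixed.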
In some embodiments of the present application, the image processing apparatus 200 further includes:
the acquisition module is used for acquiring a second image and focusing parallax parameters, wherein the second image is a parallax image corresponding to the first image;
The rendering module 204 is further configured to render the first region and the second region in the first image according to the second image, the blur degree parameter, and the focus parallax parameter, to obtain an image matrix corresponding to the first region and the second region.
In the embodiment of the application, the first area and the second area in the first image can each be subjected to diffusion rendering through the blur degree parameter, the focusing parallax parameter and the second image, so that a weight accumulation matrix and a color accumulation matrix matched with the first area and the second area are obtained.
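A scatter ("splat") renderer producing exactly such a pair of matrices — a weight accumulation matrix and a color accumulation matrix driven by the disparity map, a blur strength, and the focus disparity — can be sketched as follows. The uniform square kernel and the linear radius model r = blur · |d − d_focus| are assumptions, not taken from the text:

```python
import numpy as np

def splat_render(image, disparity, blur_strength, focus_disparity):
    """Scatter-rendering sketch: each source pixel spreads over a square
    neighbourhood whose radius grows with its defocus |d - d_focus|; its
    color and weight are accumulated into the color accumulation matrix
    and the weight accumulation matrix."""
    h, w, c = image.shape
    color_acc = np.zeros((h, w, c))
    weight_acc = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            r = int(round(blur_strength * abs(disparity[y, x] - focus_disparity)))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            wgt = 1.0 / (2 * r + 1) ** 2          # uniform kernel weight
            color_acc[y0:y1, x0:x1] += wgt * image[y, x]
            weight_acc[y0:y1, x0:x1] += wgt
    return color_acc, weight_acc

img = np.arange(27, dtype=float).reshape(3, 3, 3) / 27.0
disp = np.full((3, 3), 1.0)  # every pixel at the focus disparity
color_acc, weight_acc = splat_render(img, disp, blur_strength=2.0, focus_disparity=1.0)
```

In-focus pixels (|d − d_focus| = 0) splat only onto themselves, so in this degenerate case the color accumulation reproduces the input and every weight is 1; defocused pixels instead diffuse over a growing neighbourhood, which is the bokeh effect being rendered.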
In some embodiments of the application, the processing module 206 is further configured to:
under the condition that a third image is acquired, the third image is adjusted through a second adjustment parameter to obtain a fourth image, wherein the third image is an initial image corresponding to the first image, and the fourth image is a highlight color map of the third image;
and generating a first image according to the fifth image and the fourth image, wherein the fifth image is a full-focus image of the third image.
According to the embodiment of the application, the highlight color map and the full-focus image are obtained by adjusting the original image, so that the full-focus image after highlight processing is obtained and the required bokeh map can be generated, improving the efficiency and effect of bokeh map processing.
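One plausible composition of the highlight color map (fourth image) with the all-in-focus image (fifth image) to yield the first image is sketched below. The blend rule, the factor `beta` standing in for the second adjustment parameter, and the mask threshold `eta` are assumptions — the text only states that the first image is generated from the two:

```python
import numpy as np

def build_first_image(all_in_focus, initial, beta, eta):
    """Sketch: boost the initial image's highlight pixels by `beta` to form a
    highlight color map, then composite it over the all-in-focus image."""
    luminance = initial.mean(axis=-1)
    mask = luminance > eta
    highlight_map = initial * beta * mask[..., None]              # fourth image
    return np.where(mask[..., None], highlight_map, all_in_focus)  # first image

# One bright pixel (0.9) and one dark pixel (0.1); threshold 0.8, boost 2x.
initial = np.array([[[0.9, 0.9, 0.9], [0.1, 0.1, 0.1]]])
aif = np.full((1, 2, 3), 0.5)
first = build_first_image(aif, initial, beta=2.0, eta=0.8)
```

The highlight pixel is boosted past 1.0, which is what lets the later diffusion rendering give highlights their characteristic over-bright bokeh disks, while non-highlight pixels come straight from the all-in-focus image.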
In some embodiments of the present application, the processing module 206 is specifically configured to:
extracting a first image area with a brightness value within a first threshold range from the third image;
extracting a second image area with a brightness value within a second threshold range from the third image, wherein the minimum value of the second threshold range is larger than the maximum value of the first threshold range;
an area in the first image that matches the first image area is determined as a first area, and an area in the first image that matches the second image area is determined as a second area.
According to the embodiment of the application, the highlight regions are determined based on the brightness values of the original image, so that the highlight regions can be marked accurately; the diffusion processing is then performed based on these more accurate highlight regions, the edges of the diffused highlight regions are clearer, and the image processing effect is improved.
The image processing device in the embodiment of the application can be an electronic device, or can be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. Illustratively, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (Mobile Internet Device, MID), an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, a robot, a wearable device, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook or a personal digital assistant (personal digital assistant, PDA), or the like, and may also be a server, a network attached storage (Network Attached Storage, NAS), a personal computer (personal computer, PC), a television (Television, TV), a teller machine, a self-service machine, or the like.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and the embodiment of the present application is not limited specifically.
The image processing device provided by the embodiment of the present application can implement each process implemented by the above method embodiment, and in order to avoid repetition, details are not repeated here.
Optionally, the embodiment of the present application further provides an electronic device, which includes the image processing apparatus in any of the embodiments, so that the electronic device has all the advantages of the image processing apparatus in any of the embodiments, and will not be described in detail herein.
Optionally, an embodiment of the present application further provides an electronic device. Fig. 3 shows a structural block diagram of the electronic device according to an embodiment of the present application. As shown in fig. 3, the electronic device 300 includes a processor 302, a memory 304, and a program or an instruction stored in the memory 304 and capable of running on the processor 302, where the program or the instruction, when executed by the processor 302, implements each process of the above image processing method embodiment and can achieve the same technical effect, which is not repeated herein.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 4 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, and processor 410.
Those skilled in the art will appreciate that the electronic device 400 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 410 by a power management system to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than shown, or may combine certain components, or may be arranged in different components, which are not described in detail herein.
The processor 410 is configured to determine a first region and a second region in the first image, where the first region and the second region are regions in the first image with a luminance value higher than a luminance threshold; render the first region and the second region in the first image respectively, and determine an image matrix corresponding to the first region and the second region; perform weight redistribution processing on the image matrixes of the first area and the second area to obtain a weight-redistributed image matrix; and generate a second image according to the weight-redistributed image matrix.
In the embodiment of the application, the first area and the second area, namely the highlight areas whose brightness values are larger than the brightness threshold value in the first image, are determined; then the first area and the second area are respectively rendered, and weight redistribution processing is carried out on the image matrix of the first area and the image matrix of the second area obtained by rendering, so that a diffusion effect is added for the first area and the second area in the processed image. The edges of the first area and the second area with the diffusion effect are clearer, and better layering is provided between the highlight areas in the first image, which solves the problem in the related art that the edges of highlight areas are too blurred when the diffusion effect of the highlight areas of an image is processed.
Optionally, the image matrix includes a first color accumulation matrix and a first weight accumulation matrix;
the processor 410 is further configured to divide the first color accumulation matrix by the first weight accumulation matrix point by point to obtain a first bokeh map matrix; update the first weight accumulation matrix through the first adjustment parameter to obtain a second weight accumulation matrix; multiply the second weight accumulation matrix by the first bokeh map matrix point by point to obtain a second color accumulation matrix; and determine the weight-redistributed image matrix according to the second color accumulation matrix and the second weight accumulation matrix.
In the embodiment of the application, the image matrix comprises a weight accumulation matrix and a color accumulation matrix, and the weights need to be redistributed, so the weight accumulation matrix in the image matrix is adjusted and updated through the first adjustment parameter, and then the color accumulation matrix is correspondingly adjusted based on the updated weight accumulation matrix. The weight redistribution of the image matrix is thus completed, and the edge definition of the diffused highlight regions in the second image generated from the two weight-redistributed image matrices is ensured.
Optionally, the processor 410 is further configured to divide the first weight accumulation matrix into a first area matrix and a second area matrix; updating the second area matrix according to the first adjustment parameters to obtain a third area matrix; updating the first area matrix according to the third area matrix to obtain a fourth area matrix; and determining a second weight accumulation matrix according to the third area matrix and the fourth area matrix.
In the embodiment of the application, the first weight accumulation matrix is divided into the first area matrix and the second area matrix according to the weight values, the second area matrix is adjusted by the first adjustment parameter to obtain the third area matrix, and the first area matrix is adjusted based on the third area matrix to obtain the fourth area matrix. By adjusting the first weight accumulation matrix region by region, the accuracy of the obtained second weight accumulation matrix can be improved.
Optionally, the processor 410 is further configured to obtain a second image and a focusing parallax parameter, where the second image is a parallax map corresponding to the first image; and respectively render the first region and the second region in the first image according to the second image, the blur degree parameter and the focusing parallax parameter to obtain an image matrix corresponding to the first region and the second region.
In the embodiment of the application, the first area and the second area in the first image can each be subjected to diffusion rendering through the blur degree parameter, the focusing parallax parameter and the second image, so that a weight accumulation matrix and a color accumulation matrix matched with the first area and the second area are obtained.
Optionally, the processor 410 is further configured to adjust the third image through the second adjustment parameter to obtain a fourth image, where the third image is an initial image corresponding to the first image, and the fourth image is a highlight color map of the third image; and generate a first image according to a fifth image and the fourth image, where the fifth image is a full-focus image of the third image.
According to the embodiment of the application, the highlight color map and the full-focus image are obtained by adjusting the original image, so that the full-focus image after highlight processing is obtained and the required bokeh map can be generated, improving the efficiency and effect of bokeh map processing.
Optionally, the processor 410 is further configured to extract a first image area in the third image, where the brightness value is within the first threshold range; extracting a second image area with a brightness value within a second threshold range from the third image, wherein the minimum value of the second threshold range is larger than the maximum value of the first threshold range; an area in the first image that matches the first image area is determined as a first area, and an area in the first image that matches the second image area is determined as a second area.
According to the embodiment of the application, the highlight regions are determined based on the brightness values of the original image, so that the highlight regions can be marked accurately; the diffusion processing is then performed based on these more accurate highlight regions, the edges of the diffused highlight regions are clearer, and the image processing effect is improved.
It should be appreciated that in embodiments of the present application, the input unit 404 may include a graphics processing unit (Graphics Processing Unit, GPU) 4041 and a microphone 4042, the graphics processor 4041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 407 includes at least one of a touch panel 4071 and other input devices 4072. The touch panel 4071 is also referred to as a touch screen. The touch panel 4071 may include two parts, a touch detection device and a touch controller. Other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
Memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a first memory area storing programs or instructions and a second memory area storing data, wherein the first memory area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 409 may include volatile memory or nonvolatile memory, or the memory 409 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be random access memory (Random Access Memory, RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). Memory 409 in embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
Processor 410 may include one or more processing units; optionally, the processor 410 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The embodiment of the application also provides a readable storage medium, and the readable storage medium stores a program or an instruction, which when executed by a processor, implements each process of the above method embodiment, and can achieve the same technical effects, so that repetition is avoided, and no further description is provided herein.
The processor is a processor in the electronic device in the above embodiment. Readable storage media include computer readable storage media such as read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and the like.
The embodiment of the application further provides a chip, the chip comprises a processor and a communication interface, the communication interface is coupled with the processor, the processor is used for running programs or instructions, the processes of the embodiment of the image processing method can be realized, the same technical effects can be achieved, and the repetition is avoided, and the description is omitted here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-a-chip, etc.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the above-described image processing method embodiments, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in part in the form of a computer software product stored on a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method of the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (10)

1. An image processing method, comprising:
determining a first region and a second region in a first image, wherein the first region and the second region are regions with brightness values higher than a brightness threshold value in the first image;
rendering the first region and the second region in the first image respectively, and determining an image matrix corresponding to the first region and the second region;
performing weight redistribution processing on the image matrixes of the first area and the second area to obtain an image matrix with weight redistributed;
and generating a second image according to the image matrix with the weight reassigned.
2. The image processing method according to claim 1, wherein the image matrix includes a first color accumulation matrix and a first weight accumulation matrix;
the step of performing weight redistribution processing on the image matrixes of the first area and the second area to obtain a weight-redistributed image matrix, including:
dividing the first color accumulation matrix by the first weight accumulation matrix point by point to obtain a first bokeh map matrix;
updating the first weight accumulation matrix through a first adjustment parameter to obtain a second weight accumulation matrix;
multiplying the second weight accumulation matrix by the first bokeh map matrix point by point to obtain a second color accumulation matrix;
and determining the image matrix after weight redistribution according to the second color accumulation matrix and the second weight accumulation matrix.
3. The image processing method according to claim 2, wherein updating the first weight accumulation matrix by the first adjustment parameter to obtain a second weight accumulation matrix includes:
dividing the first weight accumulation matrix into a first area matrix and a second area matrix;
updating the second area matrix according to the first adjustment parameters to obtain a third area matrix;
updating the first area matrix according to the third area matrix to obtain a fourth area matrix;
and determining the second weight accumulation matrix according to the third area matrix and the fourth area matrix.
4. The image processing method according to any one of claims 1 to 3, wherein the rendering the first region and the second region in the first image, respectively, determines an image matrix corresponding to the first region and the second region, includes:
acquiring a second image and a focusing parallax parameter, wherein the second image is a parallax image corresponding to the first image;
and respectively rendering the first region and the second region in the first image according to the second image, the blur degree parameter and the focusing parallax parameter to obtain the image matrixes corresponding to the first region and the second region.
5. The image processing method according to any one of claims 1 to 3, characterized by further comprising, before the determining of the first region and the second region in the first image:
under the condition that a third image is acquired, the third image is adjusted through a second adjustment parameter to obtain a fourth image, wherein the third image is an initial image corresponding to the first image, and the fourth image is a highlight color map of the third image;
and generating the first image according to a fifth image and the fourth image, wherein the fifth image is a full-focus image of the third image.
6. An image processing apparatus, comprising: the device comprises a determining module, a rendering module, a processing module and a generating module;
the determining module is used for determining a first area and a second area in a first image, wherein the first area and the second area are areas with brightness values higher than a brightness threshold value in the first image;
The rendering module is used for rendering the first area and the second area in the first image respectively;
the determining module is further configured to determine an image matrix corresponding to the first region and the second region;
the processing module is used for carrying out weight redistribution processing on the image matrixes of the first area and the second area to obtain an image matrix with the weight redistributed;
the generation module is used for generating a second image according to the image matrix after weight redistribution.
7. The image processing apparatus according to claim 6, wherein the image matrix includes a first color accumulation matrix and a first weight accumulation matrix;
the processing module is further configured to:
dividing the first color accumulation matrix by the first weight accumulation matrix point by point to obtain a first bokeh map matrix;
updating the first weight accumulation matrix through a first adjustment parameter to obtain a second weight accumulation matrix;
multiplying the second weight accumulation matrix by the first bokeh map matrix point by point to obtain a second color accumulation matrix;
the determining module is further configured to determine the image matrix after weight redistribution according to the second color accumulation matrix and the second weight accumulation matrix.
8. The image processing apparatus of claim 7, wherein the processing module is further configured to:
dividing the first weight accumulation matrix into a first area matrix and a second area matrix, wherein the weight value of the second area matrix is larger than that of the first area matrix;
updating the second area matrix according to the first adjustment parameters to obtain a third area matrix;
updating the first area matrix according to the third area matrix to obtain a fourth area matrix;
the determining module is further configured to determine the second weight accumulation matrix according to the third area matrix and the fourth area matrix.
9. The image processing apparatus according to any one of claims 6 to 8, further comprising:
the acquisition module is used for acquiring a second image and focusing parallax parameters, wherein the second image is a parallax image corresponding to the first image;
the rendering module is further configured to render the first region and the second region in the first image according to the second image, the blur degree parameter, and the focus parallax parameter, so as to obtain the image matrixes corresponding to the first region and the second region.
10. The image processing apparatus according to any one of claims 6 to 8, wherein the processing module is further configured to:
under the condition that a third image is acquired, the third image is adjusted through a second adjustment parameter to obtain a fourth image, wherein the third image is an initial image corresponding to the first image, and the fourth image is a highlight color map of the third image;
and generating the first image according to a fifth image and the fourth image, wherein the fifth image is a full-focus image of the third image.
CN202311158737.1A 2023-09-08 2023-09-08 Image processing method and device Pending CN117156285A (en)

Priority application: CN202311158737.1A, priority/filing date 2023-09-08, title: Image processing method and device.
Publication: CN117156285A, published 2023-12-01.
Family ID: 88911730.

Similar Documents

Publication Publication Date Title
CN108830892B (en) Face image processing method and device, electronic equipment and computer readable storage medium
CN114390201A (en) Focusing method and device thereof
CN115439386A (en) Image fusion method and device, electronic equipment and storage medium
CN117156285A (en) Image processing method and device
CN112785490B (en) Image processing method and device and electronic equipment
CN112561787A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113489901B (en) Shooting method and device thereof
CN117201941A (en) Image processing method, device, electronic equipment and readable storage medium
CN117793513A (en) Video processing method and device
CN117135445A (en) Image processing method and device
CN112367470B (en) Image processing method and device and electronic equipment
CN114157810B (en) Shooting method, shooting device, electronic equipment and medium
CN115861110A (en) Image processing method, image processing device, electronic equipment and storage medium
CN115797160A (en) Image generation method and device
CN113850739A (en) Image processing method and device
CN114979479A (en) Shooting method and device thereof
CN117278842A (en) Shooting control method, shooting control device, electronic equipment and readable storage medium
CN116320740A (en) Shooting focusing method, shooting focusing device, electronic equipment and storage medium
CN116528053A (en) Image exposure method and device and electronic equipment
CN117743626A (en) Image processing method, device, electronic equipment and readable storage medium
CN114390182A (en) Shooting method and device and electronic equipment
CN116017105A (en) Shooting method, shooting device, electronic equipment and storage medium
CN117750195A (en) Image processing method, device, readable storage medium and electronic equipment
CN116342992A (en) Image processing method and electronic device
CN115242981A (en) Video playing method, video playing device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination