WO2023240452A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents

Image processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2023240452A1
WO2023240452A1 (application PCT/CN2022/098703)
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate
image
light spot
blur
pixel
Prior art date
Application number
PCT/CN2022/098703
Other languages
English (en)
French (fr)
Inventor
尹双双
陈妹雅
饶强
刘阳晨旭
江浩
Original Assignee
北京小米移动软件有限公司
Priority date
Filing date
Publication date
Application filed by 北京小米移动软件有限公司
Priority to PCT/CN2022/098703 (WO2023240452A1)
Priority to CN202280004260.9A (CN117642767A)
Publication of WO2023240452A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image

Definitions

  • the present disclosure relates to the technical field of image processing, and specifically to an image processing method, device, electronic device and storage medium.
  • the camera program of a terminal device can provide a variety of photo modes, giving it various functions of a professional camera so as to meet users' photography needs in various scenarios.
  • the physical blur function mainly relies on professional cameras.
  • embodiments of the present disclosure provide an image processing method, device, electronic device and storage medium to solve the defects in the related technology.
  • an image processing method including:
  • the blur kernel of each pixel is determined in a preconfigured blur kernel library, where the blur kernel is used to characterize the range of pixels involved when the pixel is subjected to blur processing;
  • according to the blur kernel of each pixel, corresponding blur processing is performed on each pixel to obtain the target image.
  • each blur kernel in the blur kernel library has coordinates in three dimensions: distance, angle and radius of confusion circle;
  • Determining the position of each pixel in the first light spot area in the image to be processed includes:
  • the corresponding blur processing is performed on each pixel according to the blur kernel of each pixel to obtain the blur processing result of each pixel, including:
  • the average blur brightness value of all pixels within the blur kernel range of the i-th pixel is determined as the target brightness value of the i-th pixel, where i is an integer greater than 0 and not greater than N, and N is the total number of pixels in the first light spot area.
  • determining the average blur brightness value of all pixels within the blur kernel range of the i-th pixel as the target brightness value of the i-th pixel includes:
  • the row integration results of each row of pixels within the blur kernel range are summed, and the summation results are averaged to obtain the target brightness value of the i-th pixel.
  • the blur brightness value is the product of the original brightness value of the pixel and the weight of the pixel;
  • the average blur brightness value of all pixels within the blur kernel range of the pixel includes:
  • the blur kernel includes a contour function with a target pixel as the center of the coordinate circle, where the target pixel is the pixel processed by the blur kernel; and/or,
  • the blur kernel includes a row offset range relative to a target pixel point, and an offset amount of each row within the row offset range, where the target pixel point is a pixel point processed by the blur kernel.
  • it also includes:
  • the blur kernel library is generated according to the blur kernel corresponding to each second light spot area and its position in the light spot image.
  • each blur kernel in the blur kernel library has coordinates in three dimensions: distance, angle and radius of confusion circle;
  • the method of obtaining each second light spot area in the light spot collection image includes:
  • Generating the blur kernel library based on the blur kernel corresponding to each second spot area and its position in the spot image includes:
  • in the angle dimension, the blur kernel under the reference angle coordinate is rotated according to the proportional relationship between a first coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernel under the first coordinate, wherein the blur kernel under the reference angle coordinate includes: the blur kernel under each coordinate combination formed by the reference angle coordinate in the angle dimension, the reference radius coordinate in the circle-of-confusion radius dimension, and each coordinate in the distance dimension;
  • in the circle-of-confusion radius dimension, the blur kernel under the reference radius coordinate is scaled according to the proportional relationship between a second coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernel under the second coordinate, wherein the blur kernel under the reference radius coordinate includes: the blur kernel under each coordinate combination formed by the reference radius coordinate in the circle-of-confusion radius dimension, each coordinate in the angle dimension, and each coordinate in the distance dimension; or,
  • in the circle-of-confusion radius dimension, the blur kernel under the reference radius coordinate is rotated according to the proportional relationship between a third coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernel under the third coordinate, wherein the blur kernel under the reference radius coordinate includes: the blur kernel under each coordinate combination formed by the reference radius coordinate in the circle-of-confusion radius dimension, the reference angle coordinate in the angle dimension, and each coordinate in the distance dimension;
  • in the angle dimension, the blur kernel under the reference angle coordinate is scaled according to the proportional relationship between a fourth coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernel under the fourth coordinate, wherein the blur kernel under the reference angle coordinate includes: the blur kernel under each coordinate combination formed by the reference angle coordinate in the angle dimension, each coordinate in the circle-of-confusion radius dimension, and each coordinate in the distance dimension.
  • extracting the shape information of each second spot area in the spot collection image includes:
  • Shape information of the second light spot area is determined based on the coordinates of each pixel point in the second light spot area on the image coordinate system.
  • determining the shape information of the second light spot area based on the coordinates of each pixel point in the second light spot area on the image coordinate system includes:
  • At least one of the following is also included:
  • Each of the partial images is resized to a preset size.
  • an image processing device includes:
  • An acquisition module configured to acquire the first light spot area in the image to be processed, and determine the position of each pixel in the first light spot area in the image to be processed;
  • a determination module configured to determine the blur kernel of each pixel in a preconfigured blur kernel library according to the position of each pixel in the image to be processed, wherein the blur kernel is used to characterize the range of pixels involved when blurring the pixel;
  • the blur module is used to perform corresponding blur processing on each pixel according to the blur kernel of each pixel to obtain the target image.
  • each blur kernel in the blur kernel library has coordinates in three dimensions: distance, angle and radius of confusion circle;
  • when the acquisition module is used to determine the position of each pixel in the first light spot area in the image to be processed, it is configured to:
  • the blur module is used to:
  • the average blur brightness value of all pixels within the blur kernel range of the i-th pixel is determined as the target brightness value of the i-th pixel, where i is an integer greater than 0 and not greater than N, and N is the total number of pixels in the first light spot area.
  • the blur module is used to:
  • the row integration results of each row of pixels within the blur kernel range are summed, and the summation results are averaged to obtain the target brightness value of the i-th pixel.
  • the blur brightness value is the product of the original brightness value of the pixel and the weight of the pixel;
  • the average blur brightness value of all pixels within the blur kernel range of the pixel includes:
  • the blur kernel includes a contour function with a target pixel as the center of the coordinate circle, where the target pixel is the pixel processed by the blur kernel; and/or,
  • the blur kernel includes a row offset range relative to a target pixel point, and an offset amount of each row within the row offset range, where the target pixel point is a pixel point processed by the blur kernel.
  • a configuration module is also included for:
  • the blur kernel library is generated according to the blur kernel corresponding to each second light spot area and its position in the light spot image.
  • each blur kernel in the blur kernel library has coordinates in three dimensions: distance, angle and radius of confusion circle;
  • when the configuration module is used to acquire each second light spot area in the light spot collection image, it is configured to:
  • when the configuration module is used to generate the blur kernel library according to the blur kernel corresponding to each second light spot area and its position in the light spot image, it is configured to:
  • in the angle dimension, the blur kernel under the reference angle coordinate is rotated according to the proportional relationship between a first coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernel under the first coordinate, wherein the blur kernel under the reference angle coordinate includes: the blur kernel under each coordinate combination formed by the reference angle coordinate in the angle dimension, the reference radius coordinate in the circle-of-confusion radius dimension, and each coordinate in the distance dimension;
  • in the circle-of-confusion radius dimension, the blur kernel under the reference radius coordinate is scaled according to the proportional relationship between a second coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernel under the second coordinate, wherein the blur kernel under the reference radius coordinate includes: the blur kernel under each coordinate combination formed by the reference radius coordinate in the circle-of-confusion radius dimension, each coordinate in the angle dimension, and each coordinate in the distance dimension; or,
  • in the circle-of-confusion radius dimension, the blur kernel under the reference radius coordinate is rotated according to the proportional relationship between a third coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernel under the third coordinate, wherein the blur kernel under the reference radius coordinate includes: the blur kernel under each coordinate combination formed by the reference radius coordinate in the circle-of-confusion radius dimension, the reference angle coordinate in the angle dimension, and each coordinate in the distance dimension;
  • in the angle dimension, the blur kernel under the reference angle coordinate is scaled according to the proportional relationship between a fourth coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernel under the fourth coordinate, wherein the blur kernel under the reference angle coordinate includes: the blur kernel under each coordinate combination formed by the reference angle coordinate in the angle dimension, each coordinate in the circle-of-confusion radius dimension, and each coordinate in the distance dimension.
  • when the configuration module is used to extract the shape information of each second light spot area in the light spot collection image, it is configured to:
  • Shape information of the second light spot area is determined based on the coordinates of each pixel point in the second light spot area on the image coordinate system.
  • when the configuration module is used to determine the shape information of the second light spot area based on the coordinates, in the image coordinate system, of each pixel in the second light spot area, it is configured to:
  • the configuration module is further configured to perform at least one of the following before performing edge extraction processing on the second light spot area in each partial image:
  • Each of the partial images is resized to a preset size.
  • an electronic device includes a memory and a processor.
  • the memory is used to store computer instructions executable on the processor.
  • the processor is configured to implement the image processing method described in the first aspect when executing the computer instructions.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the method described in the first aspect is implemented.
  • since the blur kernel is used to characterize the range of pixels involved when a pixel is blurred, it can represent the shape and size of the circle of confusion during blur processing; and because the blur kernel library is configured with a blur kernel corresponding to each position, each pixel in the light spot area can be blurred according to its corresponding blur kernel. The processing is therefore highly targeted and exhibits a degree of variation between pixels, imitating the physical blur (bokeh) of professional cameras. If this method is applied to the camera program of a terminal device, the functions of the camera program can be enriched and the camera effect brought closer to that of a professional camera.
  • Figure 1 is a flow chart of an image processing method according to an exemplary embodiment of the present disclosure
  • Figure 2 is a schematic diagram of coordinate division in two dimensions of distance and angle on an image according to an exemplary embodiment of the present disclosure
  • Figure 3 is a schematic diagram of the process of searching for blur kernels in the blur kernel library according to location according to an exemplary embodiment of the present disclosure
  • Figure 4 is a schematic diagram of a heart-shaped blur kernel according to an exemplary embodiment of the present disclosure
  • Figure 5 is a schematic diagram of row integration of blur brightness values according to an exemplary embodiment of the present disclosure
  • Figure 6 is a flow chart of a method for configuring a blur kernel library according to an exemplary embodiment of the present disclosure
  • Figure 7 is a schematic diagram of extracting a light spot area from a light spot collection image according to an exemplary embodiment of the present disclosure
  • Figure 8 is a schematic structural diagram of an image processing device according to an exemplary embodiment of the present disclosure.
  • FIG. 9 is a structural block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
  • terms such as first, second, and third may be used in this disclosure to describe various information, but the information should not be limited by these terms; these terms are only used to distinguish information of the same type from each other.
  • for example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
  • depending on the context, the word "if" as used herein may be interpreted as "at the time of", "when", or "in response to determining".
  • At least one embodiment of the present disclosure provides an image processing method. Please refer to FIG. 1, which shows the flow of the method, including steps S101 to S103.
  • the method can be applied to a terminal device, for example, to an algorithm that simulates physical blur in a camera program of the terminal device.
  • the terminal device may have an image acquisition device such as a camera. These image acquisition devices can acquire images, and the camera program of the terminal device can control various parameters in the image acquisition process of the image acquisition device.
  • this method can be applied to the scenario in which the camera program of the terminal device captures images; that is, the method blurs and renders the images collected by the image acquisition device to obtain the image output by the camera program, which is the image the user obtains when taking pictures with the camera program.
  • step S101 a first light spot area in the image to be processed is obtained, and the position of each pixel in the first light spot area in the image to be processed is determined.
  • the image acquisition device can collect the original image.
  • the original image needs to be blurred and rendered to obtain the output image of the camera program; therefore, in this step the original image can be obtained as the image to be processed, and the image obtained after processing by the method provided in the present disclosure can be used as the output image of the camera program.
  • Spot recognition processing can be performed on the image to be processed, thereby obtaining one or more first spot areas in the image to be processed.
  • the pixels in the image to be processed whose brightness value is higher than the spot brightness threshold are determined as spot pixels, and then at least one connected domain composed of the spot pixels is determined as the first spot area.
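  • To make this thresholding-and-connected-domain step concrete, below is a minimal sketch in Python using OpenCV and NumPy; the brightness threshold and minimum-area values are hypothetical parameters chosen for illustration, not values specified by the disclosure:

```python
import cv2
import numpy as np

def find_spot_regions(image_bgr, spot_brightness_threshold=230, min_area=4):
    """Return the pixel coordinates of candidate light spot areas: connected
    domains of pixels brighter than the spot brightness threshold."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Pixels above the brightness threshold are treated as spot pixels.
    _, mask = cv2.threshold(gray, spot_brightness_threshold, 255, cv2.THRESH_BINARY)
    # Each connected domain of spot pixels is one candidate first light spot area.
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    spots = []
    for label in range(1, num_labels):  # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            spots.append(np.argwhere(labels == label))  # (row, col) coordinates
    return spots
```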
  • the position in the image to be processed of each pixel in the first light spot area, as determined in this step, is used in step S102 to search for a blur kernel in the preconfigured blur kernel library; therefore, the dimensions of the position determined in this step match the index dimensions of the blur kernels in the blur kernel library. For example, the blur kernels in the blur kernel library have three index dimensions: distance, angle, and circle-of-confusion radius; each blur kernel has coordinates in these three dimensions, and a blur kernel can be looked up through its coordinates in the three dimensions. Accordingly, in this step, the coordinates of each pixel in the first light spot area are determined in the three dimensions of distance, angle, and circle-of-confusion radius of the image to be processed.
  • the distance represents the distance between the pixel and the image center, and the angle represents the angle between the line connecting the pixel to the image center and the image reference angle line.
  • refer to Figure 2, which exemplarily shows one way of dividing the image along the two dimensions of distance and angle: the rays starting from the center of the image represent equal-angle divisions, while the concentric circles represent equal-distance divisions. The coordinate indexes in the angle dimension in the figure are 0 to 39, and the coordinate indexes in the distance dimension are 0 to 10.
  • as for the coordinate in the circle-of-confusion radius dimension, since the circle-of-confusion radius is related to the distance between the pixel and the focus plane, this coordinate can be determined from the depth information of the pixel.
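  • As an illustration of how such three-dimensional coordinates could be computed, the following sketch assumes the equal-angle/equal-distance division of Figure 2 (40 angle bins, 11 distance bins) and a hypothetical depth-to-radius mapping supplied by the caller:

```python
import math

NUM_ANGLE_BINS = 40     # assumption, matching the 0-39 angle indexes of Figure 2
NUM_DISTANCE_BINS = 11  # assumption, matching the 0-10 distance indexes of Figure 2

def pixel_coordinates(row, col, height, width, depth_to_radius, depth):
    """Map a pixel to (distance, angle, radius) indexes for the kernel library."""
    cy, cx = height / 2.0, width / 2.0
    dy, dx = row - cy, col - cx
    # Distance index: fraction of the maximum center-to-corner distance.
    max_dist = math.hypot(cy, cx)
    distance = min(int(math.hypot(dy, dx) / max_dist * NUM_DISTANCE_BINS),
                   NUM_DISTANCE_BINS - 1)
    # Angle index: angle between the pixel-to-center line and the reference line.
    theta = math.atan2(dy, dx) % (2 * math.pi)
    angle = int(theta / (2 * math.pi) * NUM_ANGLE_BINS) % NUM_ANGLE_BINS
    # Circle-of-confusion radius index from depth information (hypothetical mapping).
    radius = depth_to_radius(depth)
    return distance, angle, radius
```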
  • in step S102, according to the position of each pixel in the image to be processed, a blur kernel for each pixel is determined in a preconfigured blur kernel library, where the blur kernel is used to characterize the range of pixels involved in blurring the pixel.
  • the blur kernel library has a blur kernel corresponding to any position in the image, so the corresponding blur kernel can be found in the blur kernel library according to the position of each pixel.
  • Figure 3 shows the process of searching for blur kernels in the blur kernel library according to coordinates in the three dimensions of distance, angle (direction), and circle-of-confusion radius. It can be understood that, since the influence of the angle dimension on the shape of the first light spot area is centrosymmetric, only blur kernels within a 90° range need to be stored in the blur kernel library, and other angles can be converted into that range based on the symmetry relationship before searching. For example, for 135°, the blur kernel whose angle-dimension coordinate is 45° can be looked up in the library. This reduces the memory usage of the blur kernel library.
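  • A minimal sketch of such a lookup, assuming a library laid out as a mapping keyed by (distance, angle, radius) indexes with only a 90° angle range stored (the layout is an assumption for illustration):

```python
NUM_ANGLE_BINS = 40  # assumption: 40 equal-angle bins, as in Figure 2

def lookup_blur_kernel(library, distance, angle_bin, radius):
    """Fetch a blur kernel by (distance, angle, radius) indexes, folding the
    angle into the stored 90-degree range (10 of the 40 bins), so that e.g.
    135 degrees (bin 15) maps onto 45 degrees (bin 5)."""
    folded = angle_bin % (NUM_ANGLE_BINS // 4)  # symmetry across quadrants
    return library[(distance, folded, radius)]
```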
  • blur processing can use the gather method, in which the parameters of the pixels within a certain range around a given pixel are summed and the summation result is used to adjust that pixel's parameters to achieve blurring; the scatter method can also be used, in which the parameters of a given pixel are dispersed to the pixels within a certain range around it, and each pixel then adjusts its own parameters using the parameters dispersed to it by other pixels. In both cases, the blur kernel is used to represent the aforementioned "certain range" around a pixel.
  • if the blur kernel includes a contour function with the corresponding pixel as the coordinate origin, then in this step the pixels within the blur kernel can be determined according to the contour function.
  • exemplarily, the contour of a ring-shaped blur kernel can be represented by a contour function whose parameters are: r, the size of the drawing window where the blur kernel is located; i, the row offset in the blur kernel relative to the coordinate origin; s, the ratio of the inner ring radius to the outer ring radius of the ring; per-row tables giving the offsets of the left and right endpoints of the i-th offset row; and, via tables such as table_left_bias_minus[i], the offsets of the left and right endpoints of the internal blank of the i-th offset row.
  • for another example, the blur kernel includes the row offset range relative to the corresponding pixel, and the offset of each row within the row offset range; the blur kernel found in Figure 3 mentioned above is of this type.
  • the pixels in the blur kernel can be determined by searching for the above data in the blur kernel. For example, find the offset of each row of pixels according to the following index:
  • left_col_bias is the left offset and right_col_bias is the right offset; distance, direction, and cur_radius are respectively the coordinates in the three dimensions of distance, angle, and circle-of-confusion radius; cur_bias_row is the offset type, here denoting the outer contour (correspondingly, there are also internal blank contours).
  • the offsets of some rows within the row offset range include not only the outer endpoints of the row but also the endpoints of the row's internal blank; for example, the heart-shaped blur kernel shown in Figure 4, which is the blur kernel corresponding to the (0,0) pixel.
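  • The sketch below illustrates one possible row-offset representation of a blur kernel and how the covered pixels could be enumerated from it; the field names echo the indexes quoted above (left_col_bias, right_col_bias, and *_minus tables for the internal blank), but the concrete data structure is assumed for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RowOffsetKernel:
    """Blur kernel stored as per-row offsets relative to the target pixel."""
    row_range: Tuple[int, int]               # row offset range, e.g. (-5, 5)
    left_col_bias: List[int]                 # left endpoint offset of each row
    right_col_bias: List[int]                # right endpoint offset of each row
    left_col_bias_minus: List[int] = field(default_factory=list)   # internal blank, left
    right_col_bias_minus: List[int] = field(default_factory=list)  # internal blank, right

    def pixels(self, row, col):
        """Enumerate the absolute coordinates of the pixels covered by the
        kernel when it is centered on the target pixel (row, col)."""
        lo, hi = self.row_range
        for k, dr in enumerate(range(lo, hi + 1)):
            left, right = self.left_col_bias[k], self.right_col_bias[k]
            blank = (self.left_col_bias_minus[k], self.right_col_bias_minus[k]) \
                if self.left_col_bias_minus else None
            for dc in range(left, right + 1):
                if blank and blank[0] <= dc <= blank[1]:
                    continue  # skip the row's internal blank (e.g. the ring's hole)
                yield row + dr, col + dc
```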
  • step S103 corresponding blur processing is performed on each pixel according to the blur kernel of each pixel to obtain the target image.
  • scatter blurring is a forward calculation process, that is, the color of each pixel is distributed to adjacent pixels; viewed backward, the value of each pixel on the final result map is the accumulated contribution of adjacent pixels, which is the gather calculation. Whether scatter blur or gather blur is used, both are based on sliding-window processing in image space, and such a window is the blur kernel: each pixel is computed from the surrounding pixels according to the shape of the blur kernel.
  • exemplarily, the gather method is used for blur processing; that is, the following step is performed for each pixel: the average of the blur brightness values of all pixels within the blur kernel range of the i-th pixel is determined as the target brightness value of the i-th pixel. After all pixels are processed, the target image is obtained.
  • the blur brightness value may be the product of the original brightness value of the pixel and the weight of the pixel.
  • the average blur brightness value of all pixels within the blur kernel range of the pixel can be the ratio between a third total amount and a fourth total amount, where the third total amount is the sum of the blur brightness values of all pixels within the blur kernel range of the pixel, and the fourth total amount is the sum of the weights of all pixels within the blur kernel range of the pixel; there is a weight value for each pixel in the blur kernel.
  • exemplarily, the weight value of each pixel in the blur kernel can be set to the same value, so that the original brightness value of each pixel is multiplied by its weight in the blur kernel to obtain the blur brightness value of each pixel; the target brightness value val of each pixel can then be calculated with a formula of the form val = (Σ_{i=1..m} Σ_{j=1..n} P_ij) / (m·n), where P_ij is the blur brightness value of the pixel in the i-th row and j-th column, m is the number of rows of the blur kernel, and n is the number of columns of the blur kernel.
  • exemplarily, an integral map can be used to calculate the average blur brightness value of all pixels within the blur kernel range; that is, using the blur brightness value integral map of the image to be processed, row integration of the blur brightness values is performed for each row of pixels within the blur kernel range of the i-th pixel; the row integration results of each row within the blur kernel range are then summed, and the summation result is averaged to obtain the target brightness value of the i-th pixel.
  • Figure 5 illustrates the principle of row integration of blur brightness values: the table on the left is a statistical table of the blur brightness value of each pixel in the image to be processed, where the P value in each cell is the blur brightness value of the corresponding pixel; the integral value S of a pixel in the integral map is the sum of the blur brightness values P of all pixels from the first pixel of the row up to that pixel.
  • for example, S_00 = P_00, S_01 = S_00 + P_01, S_02 = S_01 + P_02, …, S_0n = S_0(n-1) + P_0n.
  • the sum of the blur brightness values of the pixels in the i-th row from column j-left to column j+right can then be obtained as S_i(j+right) - S_i(j-left-1), where S_i(j+right) is the integral value of the pixel in the i-th row and (j+right)-th column, and S_i(j-left-1) is the integral value of the pixel in the i-th row and (j-left-1)-th column.
  • the pixels at the endpoints of each row can be determined from the contour function of the blur kernel, or from the offset of each row within the row offset range of the blur kernel; the sum of the blur brightness values of each row is then calculated according to the above formula. This is a relatively simple calculation, and the calculation efficiency is greatly improved.
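  • A minimal sketch of this integral-map gather step, assuming per-row prefix sums over a blur brightness map P (original brightness times weight), a weight map W, and a row-offset kernel like the one sketched earlier; internal blanks are omitted for brevity:

```python
import numpy as np

def gather_target_brightness(P, W, kernel, row, col):
    """Average blur brightness within the kernel range of pixel (row, col),
    using per-row prefix sums (row integration). In practice the integral
    maps would be computed once per image, not once per pixel."""
    S = np.cumsum(P, axis=1)   # S[i, j] = P[i, 0] + ... + P[i, j]
    SW = np.cumsum(W, axis=1)
    total, total_w = 0.0, 0.0
    lo, hi = kernel.row_range
    for k, dr in enumerate(range(lo, hi + 1)):
        i = row + dr
        j_left = col + kernel.left_col_bias[k]
        j_right = col + kernel.right_col_bias[k]
        if j_right < j_left:
            continue  # empty kernel row
        # Row sum via the integral identity S[i, j_right] - S[i, j_left - 1].
        total += S[i, j_right] - (S[i, j_left - 1] if j_left > 0 else 0.0)
        total_w += SW[i, j_right] - (SW[i, j_left - 1] if j_left > 0 else 0.0)
    return total / total_w  # ratio of summed blur brightness to summed weights
```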
  • it can be understood that areas other than the first light spot area can be blurred according to blurring algorithms in the related art, or according to the same approach this method uses to blur the first light spot area.
  • since the blur kernel is used to characterize the range of pixels involved when a pixel is blurred, it can represent the shape and size of the circle of confusion during blur processing; and because the blur kernel library is configured with a blur kernel corresponding to each position, each pixel in the light spot area can be blurred according to its corresponding blur kernel. The processing is therefore highly targeted and exhibits a degree of variation between pixels, imitating the physical blur (bokeh) of professional cameras. If this method is applied to the camera program of a terminal device, the functions of the camera program can be enriched and the camera effect brought closer to that of a professional camera.
  • the present disclosure obtains the first light spot area in the image to be processed and determines the position of each pixel in the first light spot area in the image to be processed; according to the position of each pixel in the image to be processed, the blur kernel of each pixel is determined in the preconfigured blur kernel library; finally, corresponding blur processing (i.e., blur rendering processing) is performed on each pixel according to its blur kernel to obtain the blur processing result of each pixel.
  • since the blur kernel is used to represent the range of pixels involved when a pixel is blurred, it can represent the shape and size of the circle of confusion during blur processing, and the blur kernel library is configured with a blur kernel corresponding to each position; therefore each pixel in the first light spot area can be blurred according to its corresponding blur kernel, which is highly targeted and exhibits a degree of variation. This avoids blurring every pixel with a circle of confusion of the same shape and size (such as a circle), thereby improving the realism of the blurred rendering of light spots and the camera program's imitation of physical blur.
  • the blur kernel library may be pre-configured as shown in Figure 6, including steps S601 to S603.
  • step S601 each second light spot area in the light spot collection image is acquired, and the position of each second light spot area in the light spot collection image is determined.
  • the light spot collection image is an image collected in advance of a scene in which light spots exist. Since light spots are displayed most clearly by small, bright defocused points, which well reflect the circle-of-confusion shape of that pixel area, the professional camera lens to be simulated can be used to capture light spot collection images of scenes with bright bokeh points. It should be noted that, because the spot shape is affected by the aperture size and the fixed focus distance, the same aperture size can be used to capture all light spot collection images. In addition, a scene with evenly distributed light spots can be chosen (such as a scene with multiple point-like light sources evenly arranged in rows and columns), which makes it easier to explore the blur kernel shapes from the collected light spot images.
  • there may be one or more light spot collection images, which can be used as a sample set for configuring the blur kernel library.
  • Figure 7 shows the extraction process of the second light spot areas in one light spot collection image. All light spot collection images need to meet the requirements for configuring the blur kernel library: for example, the number of light spots in the image needs to reach a certain number, or it must be ensured that a second light spot area exists at every position in the image, or at certain positions in the image.
  • exemplarily, each blur kernel in the configured blur kernel library has coordinates in the three dimensions of distance, angle, and circle-of-confusion radius, and the light spot collection images collected in this step need to contain a second light spot area at every coordinate of the distance dimension, so that the second light spot area at each coordinate of the distance dimension can be obtained (if there are two or more second light spot areas at a certain coordinate, one of them can be obtained randomly or according to preset rules). For example, the second light spot area at each coordinate of the distance dimension can be obtained under the reference angle coordinate of the angle dimension (for example, 0°) and under the reference radius coordinate of the circle-of-confusion radius dimension (for example, the maximum radius).
  • step S602 the shape information of each second light spot area in the light spot image is extracted, and a corresponding blur kernel is generated based on the shape information of each second light spot area.
  • specifically, the light spot collection image can be segmented to obtain a partial image containing each second light spot area; edge extraction is then performed on the second light spot area in each partial image to obtain the coordinates, in the image coordinate system, of each pixel on the area contour; finally, the shape information of the second light spot area is determined based on the coordinates of each pixel of the second light spot area in the image coordinate system.
  • exemplarily, the contour function of the second light spot area can be fitted based on the coordinates of each pixel of the second light spot area in the image coordinate system, with the centroid of the second light spot area as the coordinate origin; for instance, the shape of the second light spot area can be divided into left and right half axes for contour spline-function fitting. And/or, the coordinates of each pixel of the second light spot area in the image coordinate system can be scanned to obtain the row offset range of the contour of the second light spot area relative to its centroid, and the offset of each row within the row offset range.
  • a binary image of the light spot shape can be drawn, and a blur kernel can be generated based on this.
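  • As a sketch of how such a binary spot mask could be scanned into a row-offset blur kernel (reusing the hypothetical RowOffsetKernel structure from the earlier sketch):

```python
import numpy as np

def kernel_from_mask(mask):
    """Scan a binary spot mask into per-row offsets relative to the centroid."""
    rows, cols = np.nonzero(mask)
    cy, cx = int(round(rows.mean())), int(round(cols.mean()))  # centroid
    r_min, r_max = int(rows.min()) - cy, int(rows.max()) - cy  # row offset range
    left, right = [], []
    for dr in range(r_min, r_max + 1):
        cols_in_row = np.nonzero(mask[cy + dr])[0]
        if cols_in_row.size == 0:
            left.append(0); right.append(-1)  # empty row: degenerate interval
        else:
            left.append(int(cols_in_row.min()) - cx)
            right.append(int(cols_in_row.max()) - cx)
    return RowOffsetKernel((r_min, r_max), left, right)
```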
  • exemplarily, before performing edge extraction on the second light spot area in each partial image, at least one of the following can be performed: increasing the brightness difference between the second light spot area and other areas in the partial image, for example by sharpening the partial image; adjusting the brightness of areas outside the second light spot area in the partial image to 0, for example using a brightness threshold to distinguish the second light spot area from other areas; and adjusting each partial image to a preset size. These three preprocessing methods can be used alone or in combination to improve the effect of the edge extraction processing.
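  • A minimal OpenCV sketch of these three optional preprocessing steps, assuming unsharp masking as one way to increase the brightness difference and a hypothetical brightness threshold and preset size:

```python
import cv2
import numpy as np

def preprocess_partial_image(partial, brightness_threshold=60, size=(64, 64)):
    """Apply the three optional preprocessing steps before edge extraction."""
    gray = cv2.cvtColor(partial, cv2.COLOR_BGR2GRAY)
    # 1. Increase the brightness difference (unsharp masking as one option).
    blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)
    # 2. Zero out everything outside the spot, using a brightness threshold.
    _, mask = cv2.threshold(sharpened, brightness_threshold, 255, cv2.THRESH_BINARY)
    spot_only = cv2.bitwise_and(sharpened, mask)
    # 3. Resize the partial image to the preset size.
    return cv2.resize(spot_only, size, interpolation=cv2.INTER_NEAREST)
```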
  • step S603 the blur kernel library is generated according to the blur kernel corresponding to each second light spot area and its position in the light spot image.
  • the shape information of each blur kernel and its position can be saved to the blur kernel library.
  • the left side shows a schematic diagram of the configured blur kernel library; the spot shape of the area closest to the image center, with a distance index of 0, is closest to an ideal circle, so the blur kernel with distance coordinate 0 can be defined as a circle.
  • as mentioned above, in step S601, under the reference angle coordinate of the angle dimension and the reference radius coordinate of the circle-of-confusion radius dimension, the second light spot area at each coordinate of the distance dimension in the light spot collection image is obtained; accordingly, in step S602, a blur kernel is generated at each coordinate of the distance dimension under the reference angle coordinate of the angle dimension and the reference radius coordinate of the circle-of-confusion radius dimension.
  • therefore, in this step, the blur kernel under the reference angle coordinate can first be rotated in the angle dimension (for example, within the 90° range) according to the proportional relationship between each first coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernel under the first coordinate, where the blur kernel under the reference angle coordinate includes: the blur kernel under each coordinate combination formed by the reference angle coordinate in the angle dimension, the reference radius coordinate in the circle-of-confusion radius dimension, and each coordinate in the distance dimension. Then, in the circle-of-confusion radius dimension, the blur kernel under the reference radius coordinate is scaled according to the proportional relationship between each second coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernel under the second coordinate, where the blur kernel under the reference radius coordinate includes: the blur kernel under each coordinate combination formed by the reference radius coordinate in the circle-of-confusion radius dimension, each coordinate in the angle dimension, and each coordinate in the distance dimension; or,
  • in the circle-of-confusion radius dimension, the blur kernel under the reference radius coordinate is first rotated according to the proportional relationship between each third coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernel under the third coordinate, where the blur kernel under the reference radius coordinate includes: the blur kernel under each coordinate combination formed by the reference radius coordinate in the circle-of-confusion radius dimension, the reference angle coordinate in the angle dimension, and each coordinate in the distance dimension; then, in the angle dimension, the blur kernel under the reference angle coordinate is scaled according to the proportional relationship between each fourth coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernel under the fourth coordinate, where the blur kernel under the reference angle coordinate includes: the blur kernel under each coordinate combination formed by the reference angle coordinate in the angle dimension, each coordinate in the circle-of-confusion radius dimension, and each coordinate in the distance dimension.
  • the reference angle coordinate is a reference coordinate in an angular dimension
  • the reference radius coordinate is a reference coordinate in a radius dimension of a circle of confusion.
  • the first coordinate other than the reference angle coordinate refers to each coordinate in the angle dimension other than the reference angle coordinate; the second coordinate other than the reference radius coordinate refers to each coordinate in the circle-of-confusion radius dimension other than the reference radius coordinate; the third coordinate other than the reference radius coordinate refers to each coordinate in the circle-of-confusion radius dimension other than the reference radius coordinate; and the fourth coordinate other than the reference angle coordinate refers to each coordinate in the angle dimension other than the reference angle coordinate.
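  • A sketch of this expansion step, rotating and scaling per-distance base kernel masks with scipy.ndimage and storing them via the kernel_from_mask helper sketched above; the bin counts and the bin-to-degree and bin-to-scale mappings are assumptions for illustration:

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def expand_kernel_library(base_masks, num_angle_bins=40, num_radius_bins=8):
    """Expand per-distance base kernel masks (captured at the reference angle
    and the reference, i.e. maximum, radius) over the angle and radius bins."""
    library = {}
    for distance, base in enumerate(base_masks):
        for a in range(num_angle_bins):
            # Rotate in proportion to the angle coordinate vs. the reference.
            angle_deg = a * (360.0 / num_angle_bins)
            rotated = rotate(base.astype(float), angle_deg, reshape=False, order=1)
            for r in range(num_radius_bins):
                # Scale in proportion to the radius coordinate vs. the reference.
                scale = (r + 1) / num_radius_bins
                scaled = zoom(rotated, scale, order=1)
                # Store as a row-offset kernel (see kernel_from_mask above).
                library[(distance, a, r)] = kernel_from_mask(scaled > 0.5)
    return library
```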
  • in this way, the blur kernels generated in step S602 are expanded in the angle dimension and the circle-of-confusion radius dimension, and a blur kernel library divided according to the three dimensions of distance, angle, and circle-of-confusion radius is obtained.
  • in the above embodiment, light spot collection images containing second light spot areas are collected, the second light spot areas in the images are segmented, and their shape information is extracted; the blur kernel library is then configured from these samples, making the blur kernels in the configured library real and effective. When this blur kernel library is used to process images, a more realistic blur processing result can be obtained.
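  • Putting the pieces together, a hypothetical end-to-end driver for the processing flow of Figure 1 might look like this; all helper names refer to the sketches above and are assumptions, not the disclosure's API:

```python
def process_image(image_bgr, P, W, library, depth_map, depth_to_radius):
    """Blur each pixel of each detected spot area with its looked-up kernel."""
    out = P.copy()
    h, w = P.shape
    for spot in find_spot_regions(image_bgr):                      # step S101
        for row, col in spot:
            d, a, r = pixel_coordinates(row, col, h, w,
                                        depth_to_radius, depth_map[row, col])
            kernel = lookup_blur_kernel(library, d, a, r)          # step S102
            out[row, col] = gather_target_brightness(P, W, kernel, row, col)  # step S103
    return out
```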
  • an image processing device is provided. Please refer to FIG. 8.
  • the device includes:
  • the acquisition module 801 is used to acquire the first light spot area in the image to be processed, and determine the position of each pixel in the first light spot area in the image to be processed;
  • a determination module 802, configured to determine the blur kernel of each pixel in a preconfigured blur kernel library according to the position of each pixel in the image to be processed, wherein the blur kernel is used to characterize the range of pixels involved when blurring the pixel;
  • the blur module 803 is used to perform corresponding blur processing on each pixel according to the blur kernel of each pixel to obtain the target image.
  • each blur kernel in the blur kernel library has coordinates in three dimensions: distance, angle and radius of confusion circle;
  • when the acquisition module is used to determine the position of each pixel in the first light spot area in the image to be processed, it is configured to:
  • the blur module is used to:
  • the average blur brightness value of all pixels within the blur kernel range of the i-th pixel is determined as the target brightness value of the i-th pixel, where i is an integer greater than 0 and not greater than N, and N is the total number of pixels in the first light spot area.
  • the blur module is used to:
  • the row integration results of each row of pixels within the blur kernel range are summed, and the summation results are averaged to obtain the target brightness value of the i-th pixel.
  • the blur brightness value is the product of the original brightness value of the pixel and the weight of the pixel;
  • the average blur brightness value of all pixels within the blur kernel range of the pixel includes:
  • the blur kernel includes a contour function with the target pixel point as the center of the coordinate circle, where the target pixel point is the pixel point processed by the blur kernel; and/or,
  • the blur kernel includes a row offset range relative to a target pixel point, and an offset amount of each row within the row offset range, where the target pixel point is a pixel point processed by the blur kernel.
  • a configuration module is also included for:
  • the blur kernel library is generated according to the blur kernel corresponding to each second light spot area and its position in the light spot image.
  • each blur kernel in the blur kernel library has coordinates in three dimensions: distance, angle and radius of confusion circle;
  • when the configuration module is used to acquire each second light spot area in the light spot collection image, it is configured to:
  • when the configuration module is used to generate the blur kernel library according to the blur kernel corresponding to each second light spot area and its position in the light spot image, it is configured to:
  • in the angle dimension, the blur kernel under the reference angle coordinate is rotated according to the proportional relationship between a first coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernel under the first coordinate, wherein the blur kernel under the reference angle coordinate includes: the blur kernel under each coordinate combination formed by the reference angle coordinate in the angle dimension, the reference radius coordinate in the circle-of-confusion radius dimension, and each coordinate in the distance dimension;
  • in the circle-of-confusion radius dimension, the blur kernel under the reference radius coordinate is scaled according to the proportional relationship between a second coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernel under the second coordinate, wherein the blur kernel under the reference radius coordinate includes: the blur kernel under each coordinate combination formed by the reference radius coordinate in the circle-of-confusion radius dimension, each coordinate in the angle dimension, and each coordinate in the distance dimension; or,
  • in the circle-of-confusion radius dimension, the blur kernel under the reference radius coordinate is rotated according to the proportional relationship between a third coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernel under the third coordinate, wherein the blur kernel under the reference radius coordinate includes: the blur kernel under each coordinate combination formed by the reference radius coordinate in the circle-of-confusion radius dimension, the reference angle coordinate in the angle dimension, and each coordinate in the distance dimension;
  • in the angle dimension, the blur kernel under the reference angle coordinate is scaled according to the proportional relationship between a fourth coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernel under the fourth coordinate, wherein the blur kernel under the reference angle coordinate includes: the blur kernel under each coordinate combination formed by the reference angle coordinate in the angle dimension, each coordinate in the circle-of-confusion radius dimension, and each coordinate in the distance dimension.
  • when the configuration module is used to extract the shape information of each second light spot area in the light spot collection image, it is configured to:
  • Shape information of the second light spot area is determined based on the coordinates of each pixel point in the second light spot area on the image coordinate system.
  • when the configuration module is used to determine the shape information of the second light spot area based on the coordinates, in the image coordinate system, of each pixel in the second light spot area, it is configured to:
  • the configuration module is further configured to perform at least one of the following before performing edge extraction processing on the second light spot area in each partial image:
  • Each of the partial images is resized to a preset size.
  • the device 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
  • the device 900 may include one or more of the following components: a processing component 902, a memory 904, a power supply component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and communications component 916.
  • Processing component 902 generally controls the overall operations of device 900, such as operations associated with display, phone calls, data communications, camera program operations, and recording operations.
  • the processing element 902 may include one or more processors 920 to execute instructions to complete all or part of the steps of the above method.
  • processing component 902 may include one or more modules that facilitate interaction between processing component 902 and other components.
  • processing component 902 may include a multimedia module to facilitate interaction between multimedia component 908 and processing component 902.
  • Memory 904 is configured to store various types of data to support operations at device 900 . Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 904 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power component 906 provides power to various components of device 900.
  • Power components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to device 900 .
  • Multimedia component 908 includes a screen that provides an output interface between the device 900 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or sliding operation, but also detect the duration and pressure associated with the touch or sliding operation.
  • multimedia component 908 includes a front-facing camera and/or a rear-facing camera.
  • the front camera and/or the rear camera may receive external multimedia data.
  • Each front-facing camera and rear-facing camera can be a fixed optical lens system or have a focal length and optical zoom capabilities.
  • Audio component 910 is configured to output and/or input audio signals.
  • audio component 910 includes a microphone (MIC) configured to receive external audio signals when device 900 is in operating modes, such as call mode, recording mode, and speech recognition mode. The received audio signals may be further stored in memory 904 or sent via communications component 916 .
  • audio component 910 also includes a speaker for outputting audio signals.
  • the I/O interface 912 provides an interface between the processing component 902 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to: Home button, Volume buttons, Start button, and Lock button.
  • Sensor component 914 includes one or more sensors for providing various aspects of status assessment for device 900.
  • for example, the sensor component 914 can detect the open/closed state of the device 900 and the relative positioning of components (such as the display and keypad of the device 900); the sensor component 914 can also detect a change in position of the device 900 or of a component of the device 900, the presence or absence of user contact with the device 900, the orientation or acceleration/deceleration of the device 900, and temperature changes of the device 900.
  • Sensor assembly 914 may also include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 916 is configured to facilitate wired or wireless communication between apparatus 900 and other devices.
  • the device 900 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G or 5G or a combination thereof.
  • the communication component 916 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 916 also includes a near field communication (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • apparatus 900 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above image processing method.
  • the present disclosure also provides a non-transitory computer-readable storage medium including instructions, such as the memory 904 including instructions.
  • the instructions may be executed by the processor 920 of the device 900 to complete the above image processing method.
  • for example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a first light spot area in an image to be processed, and determining the position, in the image to be processed, of each pixel in the first light spot area; according to the position of each pixel in the image to be processed, correspondingly determining the blur kernel of each pixel in a preconfigured blur kernel library, wherein the blur kernel is used to characterize the range of pixels involved when the pixel is subjected to blur processing; and performing corresponding blur processing on each pixel according to the blur kernel of each pixel to obtain a target image.

Description

Image processing method and apparatus, electronic device, and storage medium
Technical Field
The present disclosure relates to the technical field of image processing, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
In recent years, the functions of terminal devices have become increasingly rich, and the performance of each function has gradually improved. For example, the camera program of a terminal device can provide a variety of photo modes, giving it various functions of a professional camera to meet users' photography needs in various scenarios. However, there is still a certain gap between the camera program of a terminal device and a professional camera. Taking the physical blur of a professional camera as an example: when shooting, a professional camera keeps objects at the depth of the focused object sharp while blurring objects at other depths, thereby highlighting the subject, and out-of-focus point light sources form blurred light spots in the captured image. In the related art, the physical blur function still mainly relies on professional cameras.
Summary
To overcome the problems existing in the related art, embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium, so as to remedy the defects in the related art.
According to a first aspect of the embodiments of the present disclosure, there is provided an image processing method, the method including:
acquiring a first light spot area in an image to be processed, and determining the position, in the image to be processed, of each pixel in the first light spot area;
according to the position of each pixel in the image to be processed, correspondingly determining the blur kernel of each pixel in a preconfigured blur kernel library, wherein the blur kernel is used to characterize the range of pixels involved when the pixel is subjected to blur processing;
performing corresponding blur processing on each pixel according to the blur kernel of each pixel to obtain a target image.
In one embodiment, each blur kernel in the blur kernel library has coordinates in the three dimensions of distance, angle, and circle-of-confusion radius;
the determining the position, in the image to be processed, of each pixel in the first light spot area includes:
determining the coordinates of each pixel in the first light spot area in the three dimensions of distance, angle, and circle-of-confusion radius of the image to be processed, wherein the distance is used to characterize the distance between the pixel and the image center, and the angle is used to characterize the angle between the line connecting the pixel and the image center and the image reference angle line.
In one embodiment, the performing corresponding blur processing on each pixel according to the blur kernel of each pixel to obtain the blur processing result of each pixel includes:
determining the average of the blur brightness values of all pixels within the blur kernel range of the i-th pixel as the target brightness value of the i-th pixel, wherein i is an integer greater than 0 and not greater than N, and N is the total number of pixels in the first light spot area.
In one embodiment, the determining the average of the blur brightness values of all pixels within the blur kernel range of the i-th pixel as the target brightness value of the i-th pixel includes:
using the blur brightness value integral map of the image to be processed, performing row integration of the blur brightness values on each row of pixels within the blur kernel range of the i-th pixel;
summing the row integration results of each row of pixels within the blur kernel range, and averaging the summation result to obtain the target brightness value of the i-th pixel.
In one embodiment, the blur brightness value is the product of the original brightness value of the pixel and the weight of the pixel;
the average of the blur brightness values of all pixels within the blur kernel range of the pixel includes:
the ratio between a first total amount and a second total amount, wherein the first total amount is the sum of the blur brightness values of all pixels within the blur kernel range of the pixel, and the second total amount is the sum of the weights of all pixels within the blur kernel range of the pixel.
In one embodiment, the blur kernel includes a contour function with a target pixel as the coordinate origin, wherein the target pixel is the pixel processed by the blur kernel; and/or,
the blur kernel includes a row offset range relative to a target pixel, and an offset of each row within the row offset range, wherein the target pixel is the pixel processed by the blur kernel.
In one embodiment, the method further includes:
acquiring each second light spot area in a light spot collection image, and determining the position of each second light spot area in the light spot collection image;
extracting the shape information of each second light spot area in the light spot image, and generating a corresponding blur kernel according to the shape information of each second light spot area;
generating the blur kernel library according to the blur kernel corresponding to each second light spot area and its position in the light spot image.
In one embodiment, each blur kernel in the blur kernel library has coordinates in the three dimensions of distance, angle, and circle-of-confusion radius;
the acquiring each second light spot area in the light spot collection image includes:
under the reference angle coordinate of the angle dimension and under the reference radius coordinate of the circle-of-confusion radius dimension, acquiring the second light spot area at each coordinate of the distance dimension in the light spot collection image;
the generating the blur kernel library according to the blur kernel corresponding to each second light spot area and its position in the light spot image includes:
in the angle dimension, rotating the blur kernel under the reference angle coordinate according to the proportional relationship between a first coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernel under the first coordinate, wherein the blur kernel under the reference angle coordinate includes: the blur kernel under each coordinate combination formed by the reference angle coordinate in the angle dimension, the reference radius coordinate in the circle-of-confusion radius dimension, and each coordinate in the distance dimension;
in the circle-of-confusion radius dimension, scaling the blur kernel under the reference radius coordinate according to the proportional relationship between a second coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernel under the second coordinate, wherein the blur kernel under the reference radius coordinate includes: the blur kernel under each coordinate combination formed by the reference radius coordinate in the circle-of-confusion radius dimension, each coordinate in the angle dimension, and each coordinate in the distance dimension; or,
in the circle-of-confusion radius dimension, rotating the blur kernel under the reference radius coordinate according to the proportional relationship between a third coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernel under the third coordinate, wherein the blur kernel under the reference radius coordinate includes: the blur kernel under each coordinate combination formed by the reference radius coordinate in the circle-of-confusion radius dimension, the reference angle coordinate in the angle dimension, and each coordinate in the distance dimension;
in the angle dimension, scaling the blur kernel under the reference angle coordinate according to the proportional relationship between a fourth coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernel under the fourth coordinate, wherein the blur kernel under the reference angle coordinate includes: the blur kernel under each coordinate combination formed by the reference angle coordinate in the angle dimension, each coordinate in the circle-of-confusion radius dimension, and each coordinate in the distance dimension.
In one embodiment, the extracting the shape information of each second light spot area in the light spot collection image includes:
segmenting the light spot collection image to obtain a partial image containing each second light spot area;
performing edge extraction on the second light spot area in each partial image to obtain the coordinates, in the image coordinate system, of each pixel on the contour of the second light spot area;
determining the shape information of the second light spot area according to the coordinates, in the image coordinate system, of each pixel of the second light spot area.
In one embodiment, the determining the shape information of the second light spot area according to the coordinates, in the image coordinate system, of each pixel of the second light spot area includes:
fitting a contour function of the second light spot area according to the coordinates, in the image coordinate system, of each pixel of the second light spot area, with the centroid of the second light spot area as the coordinate origin; and/or,
scanning the coordinates, in the image coordinate system, of each pixel of the second light spot area to obtain the row offset range of the contour of the second light spot area relative to the centroid of the second light spot area, and the offset of each row within the row offset range.
In one embodiment, before the performing edge extraction on the light spot area in each partial image, the method further includes at least one of the following:
increasing the brightness difference between the second light spot area and other areas in the partial image;
adjusting the brightness of areas outside the second light spot area in the partial image to 0;
adjusting each of the partial images to a preset size.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, the apparatus including:
an acquisition module, configured to acquire a first light spot area in an image to be processed, and determine the position, in the image to be processed, of each pixel in the first light spot area;
a determination module, configured to correspondingly determine the blur kernel of each pixel in a preconfigured blur kernel library according to the position of each pixel in the image to be processed, wherein the blur kernel is used to characterize the range of pixels involved when the pixel is subjected to blur processing;
a blur module, configured to perform corresponding blur processing on each pixel according to the blur kernel of each pixel to obtain a target image.
In one embodiment, each blur kernel in the blur kernel library has coordinates in the three dimensions of distance, angle, and circle-of-confusion radius;
when the acquisition module is used to determine the position, in the image to be processed, of each pixel in the first light spot area, it is configured to:
determine the coordinates of each pixel in the first light spot area in the three dimensions of distance, angle, and circle-of-confusion radius of the image to be processed, wherein the distance is used to characterize the distance between the pixel and the image center, and the angle is used to characterize the angle between the line connecting the pixel and the image center and the image reference angle line.
In one embodiment, the blur module is configured to:
determine the average of the blur brightness values of all pixels within the blur kernel range of the i-th pixel as the target brightness value of the i-th pixel, wherein i is an integer greater than 0 and not greater than N, and N is the total number of pixels in the first light spot area.
In one embodiment, the blur module is configured to:
use the blur brightness value integral map of the image to be processed to perform row integration of the blur brightness values on each row of pixels within the blur kernel range of the i-th pixel;
sum the row integration results of each row of pixels within the blur kernel range, and average the summation result to obtain the target brightness value of the i-th pixel.
In one embodiment, the blur brightness value is the product of the original brightness value of the pixel and the weight of the pixel;
the average of the blur brightness values of all pixels within the blur kernel range of the pixel includes:
the ratio between a first total amount and a second total amount, wherein the first total amount is the sum of the blur brightness values of all pixels within the blur kernel range of the pixel, and the second total amount is the sum of the weights of all pixels within the blur kernel range of the pixel.
In one embodiment, the blur kernel includes a contour function with a target pixel as the coordinate origin, wherein the target pixel is the pixel processed by the blur kernel; and/or,
the blur kernel includes a row offset range relative to a target pixel, and an offset of each row within the row offset range, wherein the target pixel is the pixel processed by the blur kernel.
In an embodiment, the apparatus further includes a configuration module, configured to:
acquire each second light spot region in a light spot capture image, and determine the position of each second light spot region within the light spot capture image;
extract shape information of each second light spot region in the light spot capture image, and generate a corresponding blur kernel according to the shape information of each second light spot region;
generate the blur kernel library according to the blur kernel corresponding to each second light spot region and its position in the light spot capture image.
In an embodiment, each blur kernel in the blur kernel library has coordinates in three dimensions: distance, angle, and circle-of-confusion radius;
when acquiring each second light spot region in the light spot capture image, the configuration module is configured to:
acquire, at the reference angle coordinate of the angle dimension and at the reference radius coordinate of the circle-of-confusion-radius dimension, the second light spot region at each coordinate of the distance dimension in the light spot capture image;
when generating the blur kernel library according to the blur kernel corresponding to each second light spot region and its position in the light spot capture image, the configuration module is configured to:
in the angle dimension, rotate the blur kernels at the reference angle coordinate according to the proportional relationship between a first coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernels at the first coordinate, wherein the blur kernels at the reference angle coordinate include: the blur kernels at the coordinate combinations formed by the reference angle coordinate of the angle dimension, the reference radius coordinate of the circle-of-confusion-radius dimension, and each coordinate of the distance dimension;
in the circle-of-confusion-radius dimension, scale the blur kernels at the reference radius coordinate according to the proportional relationship between a second coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernels at the second coordinate, wherein the blur kernels at the reference radius coordinate include: the blur kernels at the coordinate combinations formed by the reference radius coordinate of the circle-of-confusion-radius dimension, each coordinate of the angle dimension, and each coordinate of the distance dimension; or,
in the circle-of-confusion-radius dimension, scale the blur kernels at the reference radius coordinate according to the proportional relationship between a third coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernels at the third coordinate, wherein the blur kernels at the reference radius coordinate include: the blur kernels at the coordinate combinations formed by the reference radius coordinate of the circle-of-confusion-radius dimension, the reference angle coordinate of the angle dimension, and each coordinate of the distance dimension;
in the angle dimension, rotate the blur kernels at the reference angle coordinate according to the proportional relationship between a fourth coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernels at the fourth coordinate, wherein the blur kernels at the reference angle coordinate include: the blur kernels at the coordinate combinations formed by the reference angle coordinate of the angle dimension, each coordinate of the circle-of-confusion-radius dimension, and each coordinate of the distance dimension.
In an embodiment, when extracting the shape information of each second light spot region in the light spot capture image, the configuration module is configured to:
segment the light spot capture image to obtain the partial image containing each second light spot region;
perform edge extraction on the second light spot region in each partial image to obtain the coordinates, in the image coordinate system, of each pixel on the contour of the second light spot region;
determine the shape information of the second light spot region according to the coordinates, in the image coordinate system, of each pixel on the second light spot region.
In an embodiment, when determining the shape information of the second light spot region according to the coordinates, in the image coordinate system, of each pixel on the second light spot region, the configuration module is configured to:
fit a contour function of the second light spot region according to those coordinates, with the centroid of the second light spot region as the coordinate origin; and/or,
scan those coordinates to obtain the row offset range of the contour of the second light spot region relative to its centroid, and the offset of each row within the row offset range.
In an embodiment, the configuration module is further configured to perform, before the edge extraction on the second light spot region in each partial image, at least one of the following:
increasing the brightness difference between the second light spot region and other regions in the partial image;
adjusting the brightness of regions other than the second light spot region in the partial image to 0;
resizing each partial image to a preset size.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, the electronic device including a memory and a processor, the memory being configured to store computer instructions executable on the processor, and the processor being configured to implement the image processing method of the first aspect when executing the computer instructions.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of the first aspect.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
In the image processing method provided by the present disclosure, since the blur kernel characterizes the range of pixels involved when a pixel undergoes blur processing, it can characterize the shape and size of the circle of confusion used in blurring, and the blur kernel library is configured with a blur kernel for every position. Each pixel in the light spot region can therefore be blurred according to its own blur kernel, in a targeted and differentiated way, so that the physical bokeh of a professional camera can be imitated. If this method is applied in the camera application of a terminal device, the camera application becomes more capable and its results come closer to the photographs of a professional camera.
Brief Description of the Drawings
The accompanying drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present invention and, together with the specification, serve to explain its principles.
Figure 1 is a flowchart of an image processing method according to an exemplary embodiment of the present disclosure;
Figure 2 is a schematic diagram of the coordinate division of an image in the two dimensions of distance and angle according to an exemplary embodiment of the present disclosure;
Figure 3 is a schematic diagram of the process of looking up a blur kernel in the blur kernel library by position according to an exemplary embodiment of the present disclosure;
Figure 4 is a schematic diagram of a heart-shaped blur kernel according to an exemplary embodiment of the present disclosure;
Figure 5 is a schematic diagram of the row integration of blur brightness values according to an exemplary embodiment of the present disclosure;
Figure 6 is a flowchart of a method for configuring a blur kernel library according to an exemplary embodiment of the present disclosure;
Figure 7 is a schematic diagram of extracting light spot regions from a light spot capture image according to an exemplary embodiment of the present disclosure;
Figure 8 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present disclosure;
Figure 9 is a structural block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terms used in the present disclosure are for the purpose of describing particular embodiments only and are not intended to limit the present disclosure. The singular forms "a/an", "said", and "the" used in the present disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present disclosure to describe various pieces of information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while", or "in response to determining".
When pictures are taken with a professional camera using a telephoto or large-aperture lens, the resulting picture has a shallow depth of field: the focused object and other objects at its depth remain sharp, while the foreground and background are blurred to varying degrees, which highlights the photographic subject. In the blurred background or foreground, point light sources, owing to their higher energy density, are often rendered as light spots on the imaging plane. Generally, the brighter a point light source is and the farther it is from the focal plane, the larger the radius of the resulting light spot.
Because of the portability and cost requirements of terminal devices such as smartphones, small camera modules are usually adopted, which makes it difficult for phone photography to produce pictures with a bokeh effect. Software algorithms are therefore introduced into the smartphone camera application to simulate physical bokeh, i.e., the captured raw image is processed by blur rendering. However, when light spots are present in the captured raw image, the blur rendering applies a circle of confusion of identical shape and size (for example, a circle) to every spot, so the rendered spots look unrealistic and the camera application imitates physical bokeh poorly.
In a first aspect, at least one embodiment of the present disclosure provides an image processing method. Referring to Figure 1, which shows the flow of the method, it includes steps S101 to S103.
The method may be applied to a terminal device, for example in the physical-bokeh simulation algorithm of the terminal device's camera application. The terminal device may have an image capture device such as a camera, which can capture images, and the camera application can control the parameters of the capture process. The method may be applied when the camera application takes a picture: the image captured by the image capture device is blur-rendered by this method to produce the image output by the camera application, i.e., the image the user obtains when taking a photo.
In step S101, a first light spot region in an image to be processed is acquired, and the position of each pixel in the first light spot region within the image to be processed is determined.
When the camera application of the terminal device is started and the user triggers its shooting function, the image capture device captures a raw image. This raw image needs blur rendering before becoming the camera application's output, so in this step the raw image can be taken as the image to be processed, and the image obtained after processing by the method of the present disclosure can serve as the output image of the camera application.
Light spot detection may be performed on the image to be processed to obtain one or more first light spot regions. For example, pixels whose brightness exceeds a light spot brightness threshold are identified as light spot pixels, and then at least one connected component formed by the light spot pixels is taken as a first light spot region.
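For illustration, this thresholding and connected-component step can be sketched in Python as follows (a minimal sketch assuming OpenCV and NumPy; the function name find_spot_regions, the threshold 230, and the minimum area 4 are illustrative assumptions, not values specified by the disclosure):

    import cv2
    import numpy as np

    def find_spot_regions(gray: np.ndarray, spot_thresh: int = 230, min_area: int = 4):
        """Return the label map and bounding boxes of candidate first light spot regions."""
        # Pixels brighter than the light spot brightness threshold become spot pixels.
        _, binary = cv2.threshold(gray, spot_thresh, 255, cv2.THRESH_BINARY)
        # Each connected component of spot pixels is one first light spot region.
        num, labels, stats, _ = cv2.connectedComponentsWithStats(binary.astype(np.uint8))
        boxes = [stats[k, :4] for k in range(1, num)
                 if stats[k, cv2.CC_STAT_AREA] >= min_area]
        return labels, boxes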
The positions of the pixels in the first light spot region determined in this step are used in step S102 to look up blur kernels in the preconfigured blur kernel library, so the dimensions of the pixel positions determined here should match the index dimensions of the blur kernels in the library. For example, the blur kernels in the library are indexed by the three dimensions of distance, angle, and circle-of-confusion radius; each blur kernel has coordinates in these three dimensions, through which it can be looked up. Accordingly, in this step the coordinates of each pixel in the first light spot region are determined in the three dimensions of distance, angle, and circle-of-confusion radius of the image to be processed.
The distance characterizes the distance between the pixel and the image center, and the angle characterizes the angle between the line connecting the pixel to the image center and the reference angle line of the image. Referring to Figure 2, which illustrates the division of an image in the two dimensions of distance and angle: rays starting from the image center represent equal-angle divisions of the image, while concentric circles represent equal-distance divisions; in the figure the coordinate indices in the angle dimension run from 0 to 39, and those in the distance dimension run from 0 to 10. As for the coordinate in the circle-of-confusion-radius dimension, since the radius of the circle of confusion depends on the pixel's distance from the focal plane, the pixel's coordinate in this dimension can be determined from its depth information.
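A minimal Python sketch of this coordinate mapping follows (the 40 angle bins and 11 distance rings mirror Figure 2; the function name pixel_position, the bin count n_radius, and the linear depth-to-radius mapping are assumptions, since the disclosure only states that the radius coordinate follows the pixel's depth information):

    import math

    def pixel_position(x, y, w, h, depth, n_angle=40, n_dist=11, n_radius=16):
        """Map an image pixel to (distance, angle, radius) bin indices."""
        cx, cy = w / 2.0, h / 2.0
        r = math.hypot(x - cx, y - cy)
        r_max = math.hypot(cx, cy)
        dist_idx = min(int(r / r_max * n_dist), n_dist - 1)
        # Angle between the line to the image center and the reference angle line.
        theta = math.degrees(math.atan2(y - cy, x - cx)) % 360.0
        angle_idx = min(int(theta / 360.0 * n_angle), n_angle - 1)
        radius_idx = min(int(depth * n_radius), n_radius - 1)  # depth assumed in [0, 1)
        return dist_idx, angle_idx, radius_idx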
In step S102, the blur kernel of each pixel is determined in a preconfigured blur kernel library according to the position of each pixel in the image to be processed, wherein the blur kernel characterizes the range of pixels involved when the pixel undergoes blur processing.
The blur kernel library holds a blur kernel for every position in the image, so the kernel corresponding to each pixel can be looked up by the pixel's position. For illustration, refer to Figure 3, which shows the process of looking up a blur kernel in the library by the coordinates in the three dimensions of distance, angle (direction), and circle-of-confusion radius. It should be understood that, since the effect of the angle dimension on the shape of the first light spot region is centrally symmetric, the library may store only the kernels within a 90° range, and other angles can be converted to angles within 90° by symmetry before the lookup; for example, 135° is looked up as the kernel whose angle coordinate is 45°. This reduces the memory footprint of the blur kernel library.
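A minimal lookup sketch with this symmetry folding (the nested layout of kernel_map is an assumption, mirroring the KernelMap[distance][direction][radius] indexing that appears later in this description):

    def lookup_kernel(kernel_map, dist_idx, angle_deg, radius_idx):
        """Fetch a blur kernel, folding the angle into [0, 90] by central symmetry."""
        a = angle_deg % 180.0
        if a > 90.0:
            a = 180.0 - a  # e.g. 135 degrees is looked up as the 45-degree kernel
        return kernel_map[dist_idx][int(round(a))][radius_idx]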
Blur processing may use the gather approach, in which the parameters of the pixels within a certain range around a pixel are summed and the sum is used to adjust that pixel's parameters to achieve the blur; or the scatter approach, in which a pixel's parameters are spread onto the pixels within a certain range around it, and each pixel then adjusts its parameters using the parameters that other pixels have scattered onto it.
The blur kernel characterizes the aforementioned range around a pixel. For example, the blur kernel may include a contour function whose coordinate origin is the corresponding pixel, in which case the pixels within the kernel can be determined from the contour function in this step. For example, the contour of a ring-shaped blur kernel can be characterized by the following contour function formulas:
[Contour-function formulas, reproduced in the original publication as images PCTCN2022098703-appb-000001 and PCTCN2022098703-appb-000002, defining the per-row outer-boundary offsets and inner-gap offsets of the ring kernel in terms of the quantities below.]
Here r is the size of the sliding window containing the blur kernel, i is the row offset within the blur kernel relative to the coordinate origin, and s is the ratio of the ring's inner radius to its outer radius; table_left_bias[|i|] is the offset of the left endpoint of the i-th offset row, table_right_bias[|i|] is the offset of the right endpoint of the i-th offset row, table_left_bias_minus[|i|] is the offset of the left endpoint of the inner gap of the i-th offset row, and table_right_bias_minus[|i|] is the offset of the right endpoint of the inner gap of the i-th offset row.
As another example, the blur kernel may include a row offset range relative to the corresponding pixel and an offset for each row within that range; the kernel looked up in Figure 3 above is of this type. In this case the pixels within the kernel can be determined in this step by looking up these data in the kernel, for example looking up the offsets of each row of pixels by the following indices:
Left_col_bias = KernelMap[distance][direction][cur_radius][cur_bias_row]
right_col_bias = KernelMap[distance][direction][cur_radius][cur_bias_row]
Here Left_col_bias is the left offset and right_col_bias is the right offset; distance, direction, and cur_radius are the coordinates in the distance, angle, and circle-of-confusion-radius dimensions, respectively; cur_bias_row is the offset kind, i.e., it indicates the outer contour, and correspondingly there is also an inner-gap contour.
When the blur kernel has a concave polygonal shape such as a heart or a five-pointed star, the offsets of some rows within the row offset range include not only the row's outer endpoints but also the endpoints of the row's inner gap. For example, the blur kernel shown in Figure 4 is the kernel corresponding to the pixel (0, 0); both rows y=-3 and y=-4 contain an inner gap, so the offsets of row y=-3 include not only the two outer endpoints x=-4 and x=4 but also the two inner-gap endpoints x=-1 and x=1.
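A minimal sketch of such a per-row offset representation, inner gaps included (the tuple layout (left, right, gap_left, gap_right) and the function name kernel_rows_to_spans are illustrative assumptions, not the storage format of the disclosure):

    def kernel_rows_to_spans(row_offsets):
        """Expand a per-row offset table into solid column spans per row."""
        spans = {}
        for row, ends in row_offsets.items():
            left, right = ends[0], ends[1]
            if len(ends) == 4:                       # concave row with an inner gap
                gap_left, gap_right = ends[2], ends[3]
                spans[row] = [(left, gap_left - 1), (gap_right + 1, right)]
            else:
                spans[row] = [(left, right)]
        return spans

    # Row y = -3 of the Figure 4 heart kernel: outer endpoints -4 and 4,
    # inner-gap endpoints -1 and 1, giving solid spans [-4, -2] and [2, 4].
    example = kernel_rows_to_spans({-3: (-4, 4, -1, 1), 0: (-5, 5)})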
In step S103, each pixel is blurred correspondingly according to its blur kernel, to obtain the target image.
Scatter blur is a forward computation: the color of each pixel is distributed to its neighboring pixels; viewed in reverse, the value of each pixel in the final image is the accumulated influence of its neighbors, which is gather blur. Whether scatter or gather, both are sliding-window operations in image space, and such a window is exactly the blur kernel: each pixel computes over its surrounding pixels according to the kernel's shape.
In one possible embodiment, the gather approach is used for the blur, i.e., for each pixel the following step is performed: the average of the blur brightness values of all pixels within the blur kernel range of the i-th pixel is determined as the target brightness value of the i-th pixel. Once every pixel in the image to be processed has its target brightness value, the target image is obtained.
For example, the blur brightness value may be the product of a pixel's original brightness value and that pixel's weight. On this basis, the average of the blur brightness values of all pixels within a pixel's blur kernel range may be a ratio of a third total to a fourth total, where the third total is the sum of the blur brightness values of all pixels within the pixel's blur kernel range, and the fourth total is the sum of the weights of all pixels within the pixel's blur kernel range. The blur kernel carries a weight for every pixel; to simplify the computation and lighten the terminal device's load, the weights of all pixels in the kernel may be set to the same value, so each pixel's original brightness value is multiplied by that uniform kernel weight to obtain its blur brightness value. The target brightness value val of each pixel can then be computed by the following formula:
$val = \frac{1}{m \times n} \sum_{i=1}^{m} \sum_{j=1}^{n} P_{ij}$
where $P_{ij}$ is the blur brightness value of the pixel at row i and column j, m is the number of rows of the blur kernel, and n is the number of columns of the blur kernel.
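A minimal gather sketch over per-row kernel spans (with the uniform weights described above, the weighted ratio reduces to a plain average; the boundary clamping and the function name gather_blur_pixel are added assumptions):

    import numpy as np

    def gather_blur_pixel(P: np.ndarray, y: int, x: int, spans: dict) -> float:
        """Average the blur brightness values inside one pixel's kernel range."""
        total, count = 0.0, 0
        h, w = P.shape
        for row, segs in spans.items():
            yy = y + row
            if not (0 <= yy < h):
                continue
            for left, right in segs:
                lo, hi = max(0, x + left), min(w - 1, x + right)
                if lo <= hi:
                    total += P[yy, lo:hi + 1].sum()
                    count += hi - lo + 1
        return total / count if count else float(P[y, x])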
To further improve the efficiency of the blur, the average of the blur brightness values of all pixels within the kernel range can be computed with an integral map: using the blur brightness value integral map of the image to be processed, row integration of the blur brightness values is performed over each row of pixels within the blur kernel range of the i-th pixel; the row integration results of the rows within the kernel range are then summed, and the sum is averaged to obtain the target brightness value of the i-th pixel.
Referring to Figure 5, which illustrates the principle of the row integration of blur brightness values: the table on the left records the blur brightness value of every pixel in the image to be processed, the P value in each cell being the blur brightness value of the corresponding pixel, while in the integral diagram the integral value S of a pixel is the sum of the blur brightness values P of all pixels from the first pixel of the row up to that pixel. For example, in the row integration diagram of the first row of pixels on the right of the figure, $S_{00}=P_{00}$, $S_{01}=S_{00}+P_{01}$, $S_{02}=S_{01}+P_{02}$, ..., $S_{0n}=S_{0(n-1)}+P_{0n}$. Therefore, the sum $S_k$ of the blur brightness values of the pixel at row i, column j, together with the left pixels to its left and right pixels to its right, can be computed by the following formula:
$S_k = S_{i(j+right)} - S_{i(j-left-1)}$
where $S_{i(j+right)}$ is the integral value of the pixel at row i, column j+right, and $S_{i(j-left-1)}$ is the integral value of the pixel at row i, column j-left-1.
The pixels at the endpoints of each row can be determined from the kernel's contour function, or from the per-row offsets within the kernel's row offset range, and the sum of each row's blur brightness values is then computed by the formula above. This computation is simple and greatly improves efficiency.
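A minimal sketch of the row-integral computation (NumPy assumed; the handling of spans that cross the left image edge is an added assumption):

    import numpy as np

    def row_integral(P: np.ndarray) -> np.ndarray:
        """Per-row prefix sums of the blur brightness map, as in Figure 5."""
        return np.cumsum(P, axis=1)

    def row_sum(S: np.ndarray, i: int, j: int, left: int, right: int) -> float:
        """Sum of row i from column j-left to j+right via S_k = S_i(j+right) - S_i(j-left-1)."""
        hi = S[i, j + right]
        lo = S[i, j - left - 1] if j - left - 1 >= 0 else 0.0
        return float(hi - lo)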
It should be understood that regions of the image to be processed other than the first light spot region may be blurred by a related-art bokeh algorithm, or blurred in the same way that this method blurs the first light spot region.
In the image processing method provided by the present disclosure, since the blur kernel characterizes the range of pixels involved when a pixel undergoes blur processing, it can characterize the shape and size of the circle of confusion used in blurring, and the blur kernel library is configured with a blur kernel for every position. Each pixel of the light spot region can therefore be blurred according to its own kernel, in a targeted and differentiated way, so the physical bokeh of a professional camera can be imitated. If the method is applied in the camera application of a terminal device, the application becomes more capable and its results come closer to the photographs of a professional camera.
Specifically, the present disclosure acquires the first light spot region in the image to be processed and determines the position of each pixel of that region within the image; according to each pixel's position, the corresponding blur kernel is determined in the preconfigured blur kernel library; finally, each pixel is blurred (i.e., blur-rendered) according to its kernel, obtaining each pixel's blur result. Because the kernel characterizes the pixel range involved in blurring, i.e., the shape and size of the circle of confusion, and the library holds a kernel for every position, each pixel of the first light spot region is blurred with its own kernel, in a targeted and differentiated way, avoiding blurring every pixel with a circle of confusion of identical shape and size (for example, a circle). This improves the realism of the rendered light spots and thus the camera application's imitation of physical bokeh.
In some embodiments of the present disclosure, the blur kernel library can be preconfigured in the manner shown in Figure 6, including steps S601 to S603.
In step S601, each second light spot region in a light spot capture image is acquired, and the position of each second light spot region within the light spot capture image is determined.
The light spot capture image is an image captured in advance of a scene containing light spots. Since light spots are displayed most clearly by small, bright defocused light points, which reflect the circle-of-confusion shape of that pixel region well, a professional camera lens to be emulated can be chosen and used to shoot light spot capture images in a scene with bright bokeh points. Note that because the spot shape is affected by the aperture size and the focus distance, all light spot capture images should be shot with the same aperture. In addition, a scene with uniformly distributed light spots may be chosen (for example, point light sources arranged evenly in multiple rows and columns), since such capture images make it easier to explore the pattern of blur kernel shapes.
There may be one or more light spot capture images, which serve as the sample set for configuring the blur kernel library; Figure 7 shows the extraction of the second light spot regions from one light spot capture image. All the capture images together must meet the requirements of configuring the library: for example, the number of light spots must reach a certain count, or it must be guaranteed that a second light spot region exists at every position, or at certain required positions, of the capture image.
In one possible embodiment, each blur kernel in the configured library has coordinates in the three dimensions of distance, angle, and circle-of-confusion radius, and the capture image collected in this step must contain a second light spot region at every coordinate of the distance dimension, so that the second light spot region at each distance coordinate can be acquired (if two or more second light spot regions lie at the same coordinate, one of them may be taken at random or by a preset rule). For example, at the reference angle coordinate of the angle dimension (e.g. 0°) and at the reference radius coordinate of the circle-of-confusion-radius dimension (e.g. the maximum radius), the light spot region at each coordinate of the distance dimension in the capture image may be acquired.
In step S602, the shape information of each second light spot region in the light spot capture image is extracted, and a corresponding blur kernel is generated from the shape information of each second light spot region.
Optionally, the light spot capture image is first segmented to obtain the partial image containing each second light spot region; next, edge extraction is performed on the second light spot region in each partial image to obtain the coordinates, in the image coordinate system, of each pixel on the region's contour; finally, the region's shape information is determined from those coordinates. For example, a contour function of the second light spot region may be fitted from the pixel coordinates with the region's centroid as the coordinate origin, e.g. by splitting the region's shape into left and right half-axes and fitting a spline contour to each; and/or the pixel coordinates may be scanned to obtain the row offset range of the region's contour relative to its centroid and the offset of each row within that range. With the shape information obtained, a binarized map of the spot shape can be drawn and used as the basis for generating the blur kernel.
It should be understood that, before the edge extraction on the second light spot region in each partial image, at least one of the following may be performed: increasing the brightness difference between the second light spot region and the other regions of the partial image, for example by sharpening the partial image; setting the brightness of the regions outside the second light spot region to 0, for example by separating the spot region from the rest with a brightness threshold; and resizing every partial image to a preset size. These three preprocessing measures can be used alone or in combination and improve the result of the edge extraction.
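A minimal sketch of this per-spot shape extraction (OpenCV and NumPy assumed; the threshold value is illustrative, and this simple scan records only each row's outer endpoints, not the inner gaps of concave shapes):

    import cv2
    import numpy as np

    def spot_shape_info(patch: np.ndarray, thresh: int = 128) -> dict:
        """Extract a second light spot region's shape as per-row offsets from its centroid."""
        # Zero the background and binarize the partial image.
        _, binary = cv2.threshold(patch, thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        cnt = max(contours, key=cv2.contourArea)
        m = cv2.moments(cnt)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid of the region
        mask = np.zeros_like(binary)
        cv2.drawContours(mask, [cnt], -1, 255, thickness=cv2.FILLED)
        offsets = {}
        for y in range(mask.shape[0]):
            cols = np.flatnonzero(mask[y])
            if cols.size:
                offsets[int(y - round(cy))] = (int(cols[0] - round(cx)),
                                               int(cols[-1] - round(cx)))
        return offsets  # row offset range and per-row left/right offsets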
In step S603, the blur kernel library is generated from the blur kernel corresponding to each second light spot region and its position in the light spot capture image.
The shape information and position of each blur kernel can be saved into the library. Referring to Figure 2, the left side of which shows a schematic of the configured library: in general, the spot shapes in the region closest to the image center, with distance index 0, are closest to an ideal circle, so the blur kernels with distance coordinate 0 may be defined as circular.
In one possible embodiment, step S601 acquires, at the reference angle coordinate of the angle dimension and the reference radius coordinate of the circle-of-confusion-radius dimension, the second light spot region at each coordinate of the distance dimension, and step S602 generates the blur kernels at each distance coordinate under the reference angle and reference radius coordinates. Then, first in the angle dimension (e.g. within the 90° range), the kernels at the reference angle coordinate are rotated according to the proportional relationship between a first coordinate other than the reference angle coordinate and the reference angle coordinate, yielding the kernels at the first coordinate, where the kernels at the reference angle coordinate include the kernels at the coordinate combinations formed by the reference angle coordinate of the angle dimension, the reference radius coordinate of the circle-of-confusion-radius dimension, and each coordinate of the distance dimension; then in the circle-of-confusion-radius dimension, the kernels at the reference radius coordinate are scaled according to the proportional relationship between a second coordinate other than the reference radius coordinate and the reference radius coordinate, yielding the kernels at the second coordinate, where the kernels at the reference radius coordinate include the kernels at the coordinate combinations formed by the reference radius coordinate of the circle-of-confusion-radius dimension, each coordinate of the angle dimension, and each coordinate of the distance dimension; or,
first in the circle-of-confusion-radius dimension, the kernels at the reference radius coordinate are scaled according to the proportional relationship between a third coordinate other than the reference radius coordinate and the reference radius coordinate, yielding the kernels at the third coordinate, where the kernels at the reference radius coordinate include the kernels at the coordinate combinations formed by the reference radius coordinate of the circle-of-confusion-radius dimension, the reference angle coordinate of the angle dimension, and each coordinate of the distance dimension; then in the angle dimension, the kernels at the reference angle coordinate are rotated according to the proportional relationship between a fourth coordinate other than the reference angle coordinate and the reference angle coordinate, yielding the kernels at the fourth coordinate, where the kernels at the reference angle coordinate include the kernels at the coordinate combinations formed by the reference angle coordinate of the angle dimension, each coordinate of the circle-of-confusion-radius dimension, and each coordinate of the distance dimension.
Here the reference angle coordinate is the reference coordinate of the angle dimension, and the reference radius coordinate is the reference coordinate of the circle-of-confusion-radius dimension. The first coordinate other than the reference angle coordinate is every other coordinate of the angle dimension besides the reference angle coordinate; the second coordinate other than the reference radius coordinate is every other coordinate of the circle-of-confusion-radius dimension besides the reference radius coordinate. The third coordinate other than the reference radius coordinate is every other coordinate of the circle-of-confusion-radius dimension besides the reference radius coordinate; the fourth coordinate other than the reference angle coordinate is every other coordinate of the angle dimension besides the reference angle coordinate.
The blur kernels generated in step S602 are thus extended in the angle and circle-of-confusion-radius dimensions, yielding a blur kernel library divided along the three dimensions of distance, angle, and circle-of-confusion radius.
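A minimal sketch of this rotation-and-scaling extension of a binary base kernel (the bilinear resampling, the re-binarization threshold, and the function name extend_kernel are added assumptions):

    import cv2
    import numpy as np

    def extend_kernel(base: np.ndarray, angle_deg: float, scale: float) -> np.ndarray:
        """Derive a kernel at a new angle/radius coordinate from a base kernel."""
        h, w = base.shape
        # Rotate in proportion to the target angle coordinate.
        M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
        rotated = cv2.warpAffine(base, M, (w, h), flags=cv2.INTER_LINEAR)
        # Scale in proportion to the target circle-of-confusion-radius coordinate.
        size = max(1, int(round(w * scale)))
        scaled = cv2.resize(rotated, (size, size), interpolation=cv2.INTER_LINEAR)
        return (scaled > 127).astype(np.uint8) * 255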
In this embodiment, a light spot capture image containing second light spot regions is collected, the second light spot regions are segmented out of it and their shape information is extracted, and the blur kernel library is configured from these samples, so that the kernels in the configured library are real and effective. When the image processing method shown in Figure 1 processes an image with this library, a highly realistic blur result can be obtained.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus. Referring to Figure 8, the apparatus includes:
an acquisition module 801, configured to acquire a first light spot region in an image to be processed, and determine the position of each pixel in the first light spot region within the image to be processed;
a determination module 802, configured to determine, according to the position of each pixel in the image to be processed, the blur kernel of each pixel in a preconfigured blur kernel library, wherein the blur kernel characterizes the range of pixels involved when the pixel undergoes blur processing;
a blur module 803, configured to perform the corresponding blur processing on each pixel according to its blur kernel, to obtain a target image.
In some embodiments of the present disclosure, each blur kernel in the blur kernel library has coordinates in three dimensions: distance, angle, and circle-of-confusion radius;
when determining the position of each pixel in the first light spot region within the image to be processed, the acquisition module is configured to:
determine the coordinates of each pixel in the first light spot region in the three dimensions of distance, angle, and circle-of-confusion radius of the image to be processed, wherein the distance characterizes the distance between the pixel and the image center, and the angle characterizes the angle between the line connecting the pixel to the image center and the reference angle line of the image.
In some embodiments of the present disclosure, the blur module is configured to:
determine the average of the blur brightness values of all pixels within the blur kernel range of the i-th pixel as the target brightness value of the i-th pixel, where i is an integer greater than 0 and not greater than N, and N is the total number of pixels in the first light spot region.
In some embodiments of the present disclosure, the blur module is configured to:
use the blur brightness value integral map of the image to be processed to perform row integration of the blur brightness values over each row of pixels within the blur kernel range of the i-th pixel;
sum the row integration results of the rows of pixels within the blur kernel range, and average the sum, to obtain the target brightness value of the i-th pixel.
In some embodiments of the present disclosure, the blur brightness value is the product of a pixel's original brightness value and that pixel's weight;
the average of the blur brightness values of all pixels within the blur kernel range of the pixel includes:
a ratio of a first total to a second total, wherein the first total is the sum of the blur brightness values of all pixels within the blur kernel range of the pixel, and the second total is the sum of the weights of all pixels within the blur kernel range of the pixel.
In some embodiments of the present disclosure, the blur kernel includes a contour function centered on a target pixel as its coordinate origin, wherein the target pixel is the pixel processed by the blur kernel; and/or,
the blur kernel includes a row offset range relative to the target pixel, and an offset for each row within the row offset range, wherein the target pixel is the pixel processed by the blur kernel.
In some embodiments of the present disclosure, the apparatus further includes a configuration module, configured to:
acquire each second light spot region in a light spot capture image, and determine the position of each second light spot region within the light spot capture image;
extract shape information of each second light spot region in the light spot capture image, and generate a corresponding blur kernel according to the shape information of each second light spot region;
generate the blur kernel library according to the blur kernel corresponding to each second light spot region and its position in the light spot capture image.
In some embodiments of the present disclosure, each blur kernel in the blur kernel library has coordinates in three dimensions: distance, angle, and circle-of-confusion radius;
when acquiring each second light spot region in the light spot capture image, the configuration module is configured to:
acquire, at the reference angle coordinate of the angle dimension and at the reference radius coordinate of the circle-of-confusion-radius dimension, the second light spot region at each coordinate of the distance dimension in the light spot capture image;
when generating the blur kernel library according to the blur kernel corresponding to each second light spot region and its position in the light spot capture image, the configuration module is configured to:
in the angle dimension, rotate the blur kernels at the reference angle coordinate according to the proportional relationship between a first coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernels at the first coordinate, wherein the blur kernels at the reference angle coordinate include: the blur kernels at the coordinate combinations formed by the reference angle coordinate of the angle dimension, the reference radius coordinate of the circle-of-confusion-radius dimension, and each coordinate of the distance dimension;
in the circle-of-confusion-radius dimension, scale the blur kernels at the reference radius coordinate according to the proportional relationship between a second coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernels at the second coordinate, wherein the blur kernels at the reference radius coordinate include: the blur kernels at the coordinate combinations formed by the reference radius coordinate of the circle-of-confusion-radius dimension, each coordinate of the angle dimension, and each coordinate of the distance dimension; or,
in the circle-of-confusion-radius dimension, scale the blur kernels at the reference radius coordinate according to the proportional relationship between a third coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernels at the third coordinate, wherein the blur kernels at the reference radius coordinate include: the blur kernels at the coordinate combinations formed by the reference radius coordinate of the circle-of-confusion-radius dimension, the reference angle coordinate of the angle dimension, and each coordinate of the distance dimension;
in the angle dimension, rotate the blur kernels at the reference angle coordinate according to the proportional relationship between a fourth coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernels at the fourth coordinate, wherein the blur kernels at the reference angle coordinate include: the blur kernels at the coordinate combinations formed by the reference angle coordinate of the angle dimension, each coordinate of the circle-of-confusion-radius dimension, and each coordinate of the distance dimension.
In some embodiments of the present disclosure, when extracting the shape information of each second light spot region in the light spot capture image, the configuration module is configured to:
segment the light spot capture image to obtain the partial image containing each second light spot region;
perform edge extraction on the second light spot region in each partial image to obtain the coordinates, in the image coordinate system, of each pixel on the contour of the second light spot region;
determine the shape information of the second light spot region according to the coordinates, in the image coordinate system, of each pixel on the second light spot region.
In some embodiments of the present disclosure, when determining the shape information of the second light spot region according to the coordinates, in the image coordinate system, of each pixel on the second light spot region, the configuration module is configured to:
fit a contour function of the second light spot region according to those coordinates, with the centroid of the second light spot region as the coordinate origin; and/or,
scan those coordinates to obtain the row offset range of the contour of the second light spot region relative to its centroid, and the offset of each row within the row offset range.
In some embodiments of the present disclosure, the configuration module is further configured to perform, before the edge extraction on the second light spot region in each partial image, at least one of the following:
increasing the brightness difference between the second light spot region and other regions in the partial image;
adjusting the brightness of regions other than the second light spot region in the partial image to 0;
resizing each partial image to a preset size.
Regarding the apparatuses in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method in the first aspect, and will not be elaborated here.
According to a third aspect of the embodiments of the present disclosure, referring to Figure 9, which shows an exemplary block diagram of an electronic device: the apparatus 900 may be, for example, a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Figure 9, the apparatus 900 may include one or more of the following components: a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 generally controls the overall operation of the apparatus 900, such as operations associated with display, telephone calls, data communication, camera application operation, and recording. The processing component 902 may include one or more processors 920 to execute instructions to complete all or some of the steps of the method described above. In addition, the processing component 902 may include one or more modules that facilitate interaction between the processing component 902 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support the operation of the apparatus 900. Examples of such data include instructions for any application or method operated on the apparatus 900, contact data, phone book data, messages, pictures, videos, and so on. The memory 904 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disc.
The power component 906 supplies power to the various components of the apparatus 900. It may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 900.
The multimedia component 908 includes a screen that provides an output interface between the apparatus 900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with it. In some embodiments, the multimedia component 908 includes a front camera and/or a rear camera. When the apparatus 900 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a microphone (MIC), which is configured to receive external audio signals when the apparatus 900 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signals may be further stored in the memory 904 or transmitted via the communication component 916. In some embodiments, the audio component 910 also includes a speaker for outputting audio signals.
The I/O interface 912 provides an interface between the processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessments of various aspects of the apparatus 900. For example, the sensor component 914 may detect the on/off state of the apparatus 900 and the relative positioning of components, such as the display and keypad of the apparatus 900; it may also detect a change in position of the apparatus 900 or one of its components, the presence or absence of user contact with the apparatus 900, the orientation or acceleration/deceleration of the apparatus 900, and temperature changes of the apparatus 900. The sensor component 914 may also include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, it may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the apparatus 900 and other devices. The apparatus 900 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G or 5G, or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 also includes a near field communication (NFC) module to facilitate short-range communication; for example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 900 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the image processing method described above.
In a fourth aspect, in an exemplary embodiment, the present disclosure also provides a non-transitory computer-readable storage medium including instructions, such as the memory 904 including instructions, which are executable by the processor 920 of the apparatus 900 to carry out the image processing method described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the present disclosure will readily occur to those skilled in the art after considering the specification and practicing the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (24)

  1. An image processing method, characterized in that the method includes:
    acquiring a first light spot region in an image to be processed, and determining the position of each pixel in the first light spot region within the image to be processed;
    determining, according to the position of each pixel in the image to be processed, the blur kernel of each pixel in a preconfigured blur kernel library, wherein the blur kernel characterizes the range of pixels involved when the pixel undergoes blur processing;
    performing the corresponding blur processing on each pixel according to its blur kernel, to obtain a target image.
  2. The image processing method according to claim 1, characterized in that each blur kernel in the blur kernel library has coordinates in three dimensions: distance, angle, and circle-of-confusion radius;
    the determining the position of each pixel in the first light spot region within the image to be processed includes:
    determining the coordinates of each pixel in the first light spot region in the three dimensions of distance, angle, and circle-of-confusion radius of the image to be processed, wherein the distance characterizes the distance between the pixel and the image center, and the angle characterizes the angle between the line connecting the pixel to the image center and the reference angle line of the image.
  3. The image processing method according to claim 1, characterized in that the performing the corresponding blur processing on each pixel according to its blur kernel includes:
    determining the average of the blur brightness values of all pixels within the blur kernel range of the i-th pixel as the target brightness value of the i-th pixel, where i is an integer greater than 0 and not greater than N, and N is the total number of pixels in the first light spot region.
  4. The image processing method according to claim 3, characterized in that the determining the average of the blur brightness values of all pixels within the blur kernel range of the i-th pixel as the target brightness value of the i-th pixel includes:
    using the blur brightness value integral map of the image to be processed, performing row integration of the blur brightness values over each row of pixels within the blur kernel range of the i-th pixel;
    summing the row integration results of the rows of pixels within the blur kernel range, and averaging the sum, to obtain the target brightness value of the i-th pixel.
  5. The image processing method according to claim 3, characterized in that the blur kernel has a weight for each pixel, and the blur brightness value is the product of a pixel's original brightness value and that pixel's weight;
    the average of the blur brightness values of all pixels within the blur kernel range of the pixel includes:
    a ratio of a first total to a second total, wherein the first total is the sum of the blur brightness values of all pixels within the blur kernel range of the pixel, and the second total is the sum of the weights of all pixels within the blur kernel range of the pixel.
  6. The image processing method according to any one of claims 1 to 5, characterized in that the blur kernel includes a contour function centered on a target pixel as its coordinate origin, wherein the target pixel is the pixel processed by the blur kernel; and/or,
    the blur kernel includes a row offset range relative to the target pixel, and an offset for each row within the row offset range, wherein the target pixel is the pixel processed by the blur kernel.
  7. The image processing method according to claim 1, characterized by further including:
    acquiring each second light spot region in a light spot capture image, and determining the position of each second light spot region within the light spot capture image;
    extracting shape information of each second light spot region in the light spot capture image, and generating a corresponding blur kernel according to the shape information of each second light spot region;
    generating the blur kernel library according to the blur kernel corresponding to each second light spot region and its position in the light spot capture image.
  8. The image processing method according to claim 7, characterized in that each blur kernel in the blur kernel library has coordinates in three dimensions: distance, angle, and circle-of-confusion radius;
    the acquiring each second light spot region in the light spot capture image includes:
    at the reference angle coordinate of the angle dimension and at the reference radius coordinate of the circle-of-confusion-radius dimension, acquiring the second light spot region at each coordinate of the distance dimension in the light spot capture image;
    the generating the blur kernel library according to the blur kernel corresponding to each second light spot region and its position in the light spot capture image includes:
    in the angle dimension, rotating the blur kernels at the reference angle coordinate according to the proportional relationship between a first coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernels at the first coordinate, wherein the blur kernels at the reference angle coordinate include: the blur kernels at the coordinate combinations formed by the reference angle coordinate of the angle dimension, the reference radius coordinate of the circle-of-confusion-radius dimension, and each coordinate of the distance dimension;
    in the circle-of-confusion-radius dimension, scaling the blur kernels at the reference radius coordinate according to the proportional relationship between a second coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernels at the second coordinate, wherein the blur kernels at the reference radius coordinate include: the blur kernels at the coordinate combinations formed by the reference radius coordinate of the circle-of-confusion-radius dimension, each coordinate of the angle dimension, and each coordinate of the distance dimension; or,
    in the circle-of-confusion-radius dimension, scaling the blur kernels at the reference radius coordinate according to the proportional relationship between a third coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernels at the third coordinate, wherein the blur kernels at the reference radius coordinate include: the blur kernels at the coordinate combinations formed by the reference radius coordinate of the circle-of-confusion-radius dimension, the reference angle coordinate of the angle dimension, and each coordinate of the distance dimension;
    in the angle dimension, rotating the blur kernels at the reference angle coordinate according to the proportional relationship between a fourth coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernels at the fourth coordinate, wherein the blur kernels at the reference angle coordinate include: the blur kernels at the coordinate combinations formed by the reference angle coordinate of the angle dimension, each coordinate of the circle-of-confusion-radius dimension, and each coordinate of the distance dimension.
  9. The image processing method according to claim 7, characterized in that the extracting shape information of each second light spot region in the light spot capture image includes:
    segmenting the light spot capture image to obtain a partial image containing each second light spot region;
    performing edge extraction on the second light spot region in each partial image to obtain the coordinates, in the image coordinate system, of each pixel on the contour of the second light spot region;
    determining the shape information of the second light spot region according to the coordinates, in the image coordinate system, of each pixel on the second light spot region.
  10. The image processing method according to claim 9, characterized in that the determining the shape information of the second light spot region according to the coordinates, in the image coordinate system, of each pixel on the second light spot region includes:
    fitting a contour function of the second light spot region according to those coordinates, with the centroid of the second light spot region as the coordinate origin; and/or,
    scanning those coordinates to obtain the row offset range of the contour of the second light spot region relative to the centroid of the second light spot region, and the offset of each row within the row offset range.
  11. The image processing method according to claim 9, characterized in that, before the performing edge extraction on the second light spot region in each partial image, the method further includes at least one of the following:
    increasing the brightness difference between the second light spot region and other regions in the partial image;
    adjusting the brightness of regions other than the second light spot region in the partial image to 0;
    resizing each partial image to a preset size.
  12. An image processing apparatus, characterized in that the apparatus includes:
    an acquisition module, configured to acquire a first light spot region in an image to be processed, and determine the position of each pixel in the first light spot region within the image to be processed;
    a determination module, configured to determine, according to the position of each pixel in the image to be processed, the blur kernel of each pixel in a preconfigured blur kernel library, wherein the blur kernel characterizes the range of pixels involved when the pixel undergoes blur processing;
    a blur module, configured to perform the corresponding blur processing on each pixel according to its blur kernel, to obtain a target image.
  13. The image processing apparatus according to claim 12, characterized in that each blur kernel in the blur kernel library has coordinates in three dimensions: distance, angle, and circle-of-confusion radius;
    when determining the position of each pixel in the first light spot region within the image to be processed, the acquisition module is configured to:
    determine the coordinates of each pixel in the first light spot region in the three dimensions of distance, angle, and circle-of-confusion radius of the image to be processed, wherein the distance characterizes the distance between the pixel and the image center, and the angle characterizes the angle between the line connecting the pixel to the image center and the reference angle line of the image.
  14. The image processing apparatus according to claim 12, characterized in that the blur module is configured to:
    determine the average of the blur brightness values of all pixels within the blur kernel range of the i-th pixel as the target brightness value of the i-th pixel, where i is an integer greater than 0 and not greater than N, and N is the total number of pixels in the first light spot region.
  15. The image processing apparatus according to claim 14, characterized in that the blur module is configured to:
    use the blur brightness value integral map of the image to be processed to perform row integration of the blur brightness values over each row of pixels within the blur kernel range of the i-th pixel;
    sum the row integration results of the rows of pixels within the blur kernel range, and average the sum, to obtain the target brightness value of the i-th pixel.
  16. The image processing apparatus according to claim 14, characterized in that the blur brightness value is the product of a pixel's original brightness value and that pixel's weight;
    the average of the blur brightness values of all pixels within the blur kernel range of the pixel includes:
    a ratio of a first total to a second total, wherein the first total is the sum of the blur brightness values of all pixels within the blur kernel range of the pixel, and the second total is the sum of the weights of all pixels within the blur kernel range of the pixel.
  17. The image processing apparatus according to any one of claims 12 to 16, characterized in that the blur kernel includes a contour function centered on a target pixel as its coordinate origin, wherein the target pixel is the pixel processed by the blur kernel; and/or,
    the blur kernel includes a row offset range relative to the target pixel, and an offset for each row within the row offset range, wherein the target pixel is the pixel processed by the blur kernel.
  18. The image processing apparatus according to claim 12, characterized by further including a configuration module, configured to:
    acquire each second light spot region in a light spot capture image, and determine the position of each second light spot region within the light spot capture image;
    extract shape information of each second light spot region in the light spot capture image, and generate a corresponding blur kernel according to the shape information of each second light spot region;
    generate the blur kernel library according to the blur kernel corresponding to each second light spot region and its position in the light spot capture image.
  19. The image processing apparatus according to claim 18, characterized in that each blur kernel in the blur kernel library has coordinates in three dimensions: distance, angle, and circle-of-confusion radius;
    when acquiring each second light spot region in the light spot capture image, the configuration module is configured to:
    acquire, at the reference angle coordinate of the angle dimension and at the reference radius coordinate of the circle-of-confusion-radius dimension, the second light spot region at each coordinate of the distance dimension in the light spot capture image;
    when generating the blur kernel library according to the blur kernel corresponding to each second light spot region and its position in the light spot capture image, the configuration module is configured to:
    in the angle dimension, rotate the blur kernels at the reference angle coordinate according to the proportional relationship between a first coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernels at the first coordinate, wherein the blur kernels at the reference angle coordinate include: the blur kernels at the coordinate combinations formed by the reference angle coordinate of the angle dimension, the reference radius coordinate of the circle-of-confusion-radius dimension, and each coordinate of the distance dimension;
    in the circle-of-confusion-radius dimension, scale the blur kernels at the reference radius coordinate according to the proportional relationship between a second coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernels at the second coordinate, wherein the blur kernels at the reference radius coordinate include: the blur kernels at the coordinate combinations formed by the reference radius coordinate of the circle-of-confusion-radius dimension, each coordinate of the angle dimension, and each coordinate of the distance dimension; or,
    in the circle-of-confusion-radius dimension, scale the blur kernels at the reference radius coordinate according to the proportional relationship between a third coordinate other than the reference radius coordinate and the reference radius coordinate, to obtain the blur kernels at the third coordinate, wherein the blur kernels at the reference radius coordinate include: the blur kernels at the coordinate combinations formed by the reference radius coordinate of the circle-of-confusion-radius dimension, the reference angle coordinate of the angle dimension, and each coordinate of the distance dimension;
    in the angle dimension, rotate the blur kernels at the reference angle coordinate according to the proportional relationship between a fourth coordinate other than the reference angle coordinate and the reference angle coordinate, to obtain the blur kernels at the fourth coordinate, wherein the blur kernels at the reference angle coordinate include: the blur kernels at the coordinate combinations formed by the reference angle coordinate of the angle dimension, each coordinate of the circle-of-confusion-radius dimension, and each coordinate of the distance dimension.
  20. The image processing apparatus according to claim 18, characterized in that, when extracting the shape information of each second light spot region in the light spot capture image, the configuration module is configured to:
    segment the light spot capture image to obtain the partial image containing each second light spot region;
    perform edge extraction on the second light spot region in each partial image to obtain the coordinates, in the image coordinate system, of each pixel on the contour of the second light spot region;
    determine the shape information of the second light spot region according to the coordinates, in the image coordinate system, of each pixel on the second light spot region.
  21. The image processing apparatus according to claim 20, characterized in that, when determining the shape information of the second light spot region according to the coordinates, in the image coordinate system, of each pixel on the second light spot region, the configuration module is configured to:
    fit a contour function of the second light spot region according to those coordinates, with the centroid of the second light spot region as the coordinate origin; and/or,
    scan those coordinates to obtain the row offset range of the contour of the second light spot region relative to the centroid of the second light spot region, and the offset of each row within the row offset range.
  22. The image processing apparatus according to claim 21, characterized in that the configuration module is further configured to perform, before the edge extraction on the second light spot region in each partial image, at least one of the following:
    increasing the brightness difference between the second light spot region and other regions in the partial image;
    adjusting the brightness of regions other than the second light spot region in the partial image to 0;
    resizing each partial image to a preset size.
  23. An electronic device, characterized in that the electronic device includes a memory and a processor, the memory being configured to store computer instructions executable on the processor, and the processor being configured to implement the image processing method according to any one of claims 1 to 11 when executing the computer instructions.
  24. A computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1 to 11.
PCT/CN2022/098703 2022-06-14 2022-06-14 图像处理方法、装置、电子设备和存储介质 WO2023240452A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/098703 WO2023240452A1 (zh) 2022-06-14 2022-06-14 Image processing method and apparatus, electronic device, and storage medium
CN202280004260.9A CN117642767A (zh) 2022-06-14 2022-06-14 Image processing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/098703 WO2023240452A1 (zh) 2022-06-14 2022-06-14 Image processing method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2023240452A1 (zh)

Family

ID=89192975

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/098703 WO2023240452A1 (zh) 2022-06-14 2022-06-14 图像处理方法、装置、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN117642767A (zh)
WO (1) WO2023240452A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160255323A1 (en) * 2015-02-26 2016-09-01 Dual Aperture International Co. Ltd. Multi-Aperture Depth Map Using Blur Kernels and Down-Sampling
CN106600559A (zh) * 2016-12-21 2017-04-26 东方网力科技股份有限公司 Blur kernel acquisition and image deblurring method and apparatus
CN112561777A (zh) * 2019-09-25 2021-03-26 北京迈格威科技有限公司 Method and apparatus for adding light spots to an image
CN113129207A (zh) * 2019-12-30 2021-07-16 武汉Tcl集团工业研究院有限公司 Picture background blurring method and apparatus, computer device, and storage medium
CN114155138A (zh) * 2020-09-07 2022-03-08 武汉Tcl集团工业研究院有限公司 Bokeh photo generation method, apparatus and device
CN114493988A (zh) * 2020-11-11 2022-05-13 武汉Tcl集团工业研究院有限公司 Image blurring method, image blurring apparatus and terminal device


Also Published As

Publication number Publication date
CN117642767A (zh) 2024-03-01

Similar Documents

Publication Publication Date Title
US11114130B2 (en) Method and device for processing video
CN108764091B (zh) Living body detection method and apparatus, electronic device, and storage medium
WO2022179026A1 (zh) Image processing method and apparatus, electronic device, and storage medium
CN109889724B (zh) Image blurring method and apparatus, electronic device, and readable storage medium
CN113205568B (zh) Image processing method and apparatus, electronic device, and storage medium
WO2020221012A1 (zh) Method for determining motion information of image feature points, task execution method, and device
EP3057304B1 (en) Method and apparatus for generating image filter
US11308692B2 (en) Method and device for processing image, and storage medium
US11030733B2 (en) Method, electronic device and storage medium for processing image
JP2016531362A (ja) Skin color adjustment method, skin color adjustment apparatus, program, and recording medium
WO2018120662A1 (zh) Photographing method, photographing apparatus, and terminal
CN108154466B (zh) Image processing method and apparatus
CN109784327B (zh) Bounding box determination method and apparatus, electronic device, and storage medium
JP7210089B2 (ja) Resource display method, apparatus, device, and computer program
US20220329729A1 (en) Photographing method, storage medium and electronic device
WO2022001648A1 (zh) Image processing method, apparatus, device, and medium
CN111968052A (zh) Image processing method, image processing apparatus, and storage medium
CN112508959A (zh) Video object segmentation method and apparatus, electronic device, and storage medium
WO2023240452A1 (zh) Image processing method and apparatus, electronic device, and storage medium
CN110012208B (zh) Photographing focusing method and apparatus, storage medium, and electronic device
CN114390189A (zh) Image processing method and apparatus, storage medium, and mobile terminal
WO2023231009A1 (zh) Focusing method and apparatus, and storage medium
WO2023245363A1 (zh) Image processing method and apparatus, electronic device, and storage medium
WO2023206475A1 (zh) Image processing method and apparatus, electronic device, and storage medium
CN116486039A (zh) Point cloud data processing method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202280004260.9

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22946151

Country of ref document: EP

Kind code of ref document: A1