CN116051391B - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
CN116051391B
Authority
CN
China
Prior art keywords
image
electronic device
background images
pixel
Prior art date
Legal status
Active
Application number
CN202211036085.XA
Other languages
Chinese (zh)
Other versions
CN116051391A (en)
Inventor
王宇
陈铎
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202211036085.XA
Publication of CN116051391A
Application granted
Publication of CN116051391B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/73: Deblurring; Sharpening
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/50: Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method and an electronic device. While the electronic device displays a shooting preview screen, in response to a first operation by a user, the electronic device acquires a first image and first shooting data, where the first shooting data includes a far-view defocus distance, a depth, and coordinates for each pixel of the first image. The electronic device segments the first image to obtain a target image and N first background images. The electronic device determines first point spread function (PSF) distribution data based on the first shooting data, where the first PSF distribution data is the blur-kernel data of each pixel of the first image. The electronic device deblurs the N first background images based on the first PSF distribution data to obtain N second background images. The electronic device then inputs the target image and the N second background images into a second neural network model for restoration processing to obtain a second image. Embodiments of the application can improve the sharpness of the image.

Description

Image processing method and electronic device
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an image processing method and an electronic device.
Background
During shooting, the electronic device needs to focus on a target object; the basic principle of focusing is to adjust the focal distance. Since the object distance is essentially fixed during shooting, focusing is performed by adjusting the image distance so that the focused target is rendered sharply. However, a shot scene can contain many things, each at a different object distance. If one object in the scene is brought into focus, things at other object distances are likely to be blurred: the focused target is sharp, but the background is blurred.
Disclosure of Invention
The embodiments of the application disclose an image processing method and an electronic device, which can improve the sharpness of an image.
In a first aspect, the present application provides an image processing method applied to an electronic device, the method including: while the electronic device displays a shooting preview screen, in response to a first operation by a user, the electronic device acquires a first image and first shooting data, where the first image is an image of the shooting preview screen captured by a camera, the first operation is the user tapping a shooting control in the shooting preview screen, and the first shooting data includes a far-view defocus distance, a depth, and coordinates for each pixel of the first image; the far-view defocus distance is the distance between the focal plane and the far view; the electronic device segments the first image to obtain a target image and N first background images, where N is a positive integer; the electronic device determines first point spread function (PSF) distribution data based on the first shooting data, where the first PSF distribution data is the blur-kernel data of each pixel of the first image; the electronic device deblurs the N first background images based on the first PSF distribution data to obtain N second background images; and the electronic device inputs the target image and the N second background images into a second neural network model for restoration processing to obtain a second image.
The depth is the distance between the camera and the object captured at each pixel, and the coordinates represent the position of each pixel in the image.
In the embodiments of the application, the electronic device can separate the image into regions and deblur each region separately, ensuring the accuracy and effect of deblurring, and then merge the deblurred regions back together, so that the defocus blur is removed after image processing. Because the electronic device segments the image before deblurring, each segmented region can be made sharper; in addition, the fusion performed by the second neural network model further deblurs the pixels at the region boundaries, so the resulting second image is sharper than the first image and of better quality.
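Before the specific implementations, the flow of the first aspect can be illustrated end to end with a minimal, runnable Python sketch. Everything in it is an assumption for illustration: depth-bin segmentation stands in for the first neural network model, one pillbox kernel per region stands in for the calibrated first PSF distribution data, frequency-domain Wiener deconvolution stands in for the deblurring step, and simple paste-back stands in for the second neural network model's restoration.

```python
import numpy as np

def pillbox_psf(radius: int) -> np.ndarray:
    # Uniform disc blur kernel of the given defocus radius (see the PSF term below).
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x * x + y * y <= radius * radius).astype(float)
    return k / k.sum()

def wiener_deconvolve(img: np.ndarray, psf: np.ndarray, nsr: float = 0.01) -> np.ndarray:
    # Frequency-domain Wiener deconvolution with a known blur kernel.
    padded = np.zeros(img.shape, dtype=float)
    kh, kw = psf.shape
    padded[:kh, :kw] = psf
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # centre kernel at origin
    H = np.fft.fft2(padded)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(H) / (np.abs(H) ** 2 + nsr)))

def extend_dof(img: np.ndarray, depth: np.ndarray, focus_plane: float,
               edges=(0, 5, 15, 30, np.inf)) -> np.ndarray:
    # Toy version of the claimed flow on a single-channel image: segment by
    # depth, deblur each background region with a kernel derived from its mean
    # defocus distance, then paste the regions back together.
    defocus = depth - focus_plane                  # far-view defocus distance D
    out = img.astype(float).copy()
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depth >= lo) & (depth < hi)        # one segmented region
        if not mask.any():
            continue
        # Toy metres-to-pixels mapping; radius capped so the kernel stays <= 50 px.
        r = int(round(min(abs(defocus[mask]).mean(), 25)))
        if r >= 1:                                 # r == 0: in-focus target image
            out[mask] = wiener_deconvolve(img.astype(float), pillbox_psf(r))[mask]
    return out
```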
In one possible implementation, the electronic device acquiring the first image and the first shooting data specifically includes:
in response to the first operation, the electronic device acquires the first image, a focus plane, and a depth map, where the focus plane is the plane that passes through the focal point and is perpendicular to the optical axis, and the depth map represents the distance between the camera and the object corresponding to each pixel in the first image; the electronic device determines the depth of each pixel in the first image based on the depth map; the electronic device determines the far-view defocus distance of each pixel in the first image based on the difference between the focus plane and the depth map; and the electronic device determines the coordinates of each pixel based on the position of that pixel in the first image. In this way, the electronic device can determine the depth, the far-view defocus distance, and the position from the depth map, which underpins the accuracy of the blur kernels determined later.
In a possible implementation, the electronic device segmenting the first image to obtain a target image and N first background images specifically includes: the electronic device inputs the first image into a first neural network model for processing to obtain the target image and the N first background images, where the first neural network model is a network model for semantic image segmentation; or the electronic device segments based on the depth of the first image to obtain the target image and the N first background images, where the pixels of each segmented image fall within a corresponding preset depth range. By segmenting with a neural network model for image segmentation, the electronic device can separate the different things in the first image. In general, the depths of the pixels belonging to the same object are close to each other, so the segmented images can be deblurred more accurately, laying a foundation for the subsequent deblurring of the background regions of the first image. The depth L distinguishes how far the photographed object is from the camera and has a specific computational relationship with the depth of field, so the electronic device can group pixels of different depths by range; the segmented images then make it easier to pre-sort the distribution of the first PSF distribution data, further improving the deblurring effect on the segmented images.
In a possible implementation, the electronic device determining the first PSF distribution data based on the first shooting data specifically includes: the electronic device determines the first PSF distribution data corresponding to the first image based on first mapping information and the first shooting data, where the first mapping information is a mapping between the far-view defocus distance, the depth, and the coordinates on one side and blur kernels on the other. The PSF distribution data is obtained through calibration experiments on the FOV, L, and D of the actual lens, so accurate mapping information between L, D, coordinates, and blur kernels can be formed. Because the mapping information is stored in the electronic device in advance, determining a blur kernel is both accurate and efficient, simplifying the device's computation and saving its processing resources and energy. In addition, each blur kernel has a position attribute. Since the field of view of each device's camera differs, the imaging effect also differs across pixels. For example, a pixel at the edge of the lens may be distorted to some extent by the field of view, making image edges blurrier than the center. Therefore, the mapping information establishes a correspondence between pixel coordinates and blur kernels, so that processing each pixel with the blur kernel for its position yields a sharper image. That is, blur kernels determined from the mapping information can also reduce the blur caused by lens distortion after deblurring.
In a possible implementation, the electronic device determining, based on the first mapping information and the first shooting data, the first PSF distribution data corresponding to the first image specifically includes: the electronic device determines a first blur kernel corresponding to each pixel based on the first mapping information and the first shooting data; or the electronic device divides the first image by a specific size into a plurality of pixel matrices, determines second shooting data for each pixel matrix, and determines, based on the first mapping information, a second blur kernel corresponding to the second shooting data of each pixel matrix, where the second shooting data is the mean of the first shooting data of the pixels in that pixel matrix; or the electronic device determines, based on the first shooting data, third shooting data corresponding to the target image and to each of the N first background images, and determines, based on the first mapping information, a third blur kernel corresponding to the third shooting data of the target image and of each first background image, where the third shooting data is the mean of the first shooting data of the pixels in the target image or in that first background image. If the electronic device divides the first image by a specific size into pixel matrices, the shooting data of the pixels within one matrix differ little, because the pixels are close together in the image. Determining one blur kernel from multiple pixels therefore preserves the accuracy of the shooting data and blur kernels while reducing the amount of computation in the deconvolution.
In a possible implementation, the electronic device deblurring the N first background images based on the first PSF distribution data to obtain N second background images specifically includes: when first blur kernels are determined, the electronic device deconvolves each pixel of the N first background images with its first blur kernel to obtain the N second background images; when second blur kernels are determined, the electronic device deconvolves each pixel matrix of the N first background images with its second blur kernel to obtain the N second background images; and when third blur kernels are determined, the electronic device deconvolves each of the N first background images with its third blur kernel to obtain the N second background images. The deconvolution results differ with the granularity of the blur kernels: deconvolving per pixel gives the finest, most accurate kernels and the best deblurring, but the lowest processing efficiency and the highest cost in processing resources and energy. Deconvolving per pixel matrix within a background image barely reduces the deblurring quality while markedly improving efficiency and reducing energy and resource consumption. Deconvolving a whole segmented background image with a single blur kernel lowers the deblurring quality somewhat, but markedly improves efficiency and saves processing resources and energy.
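A minimal sketch of the coarsest variant (one third blur kernel per segmented background image), assuming recent scikit-image and float images normalised to [0, 1]; the patent does not specify the deconvolution routine, so Richardson-Lucy is used here purely as a stand-in.

```python
from skimage.restoration import richardson_lucy

def deblur_backgrounds(backgrounds, kernels, iterations=30):
    # backgrounds: list of float images in [0, 1]; kernels: one PSF per image.
    # Each segmented background image is deconvolved with its own kernel.
    return [richardson_lucy(bg, k, num_iter=iterations)
            for bg, k in zip(backgrounds, kernels)]
```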
In a possible implementation, the electronic device inputting the target image and the N second background images into the second neural network model for restoration processing to obtain the second image specifically includes: the electronic device inputs the target image and the N second background images into the second neural network model, determines through the model the fusion weights at the boundaries between the target image and the N second background images, and determines the boundary pixels based on those fusion weights to obtain the second image. In this way, the electronic device repairs and blends the "seams" of the stitched images, ensuring a good result at the boundaries of the second image.
In a possible implementation, the electronic device determining, through the second neural network model, the fusion weights at the boundaries of the target image and the N second background images specifically includes: the electronic device determines, through the second neural network model, the depth at the boundary between each two adjacent images among the target image and the N second background images; and the electronic device determines, through the model, the fusion weight at each boundary based on the pixel depths there. The fusion weight is the proportion each of the two adjacent stitched images contributes at their boundary: the smaller the depth of an adjacent image, the larger its fusion weight; the larger the depth, the smaller its fusion weight. During fusion, the weights are thus determined by depth: a smaller depth means the object is closer to the camera and contributes more at the boundary, which ensures a better fusion and restoration result across the different background images.
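A minimal sketch of the stated rule (the nearer image gets the larger weight), assuming a simple inverse-depth weighting; in the method itself these weights are produced by the second neural network model.

```python
import numpy as np

def blend_seam(pix_a, pix_b, depth_a, depth_b, eps=1e-6):
    # Blend seam pixels of two adjacent segmented images; all inputs are
    # arrays over the seam region. Smaller depth -> larger fusion weight.
    w_a = 1.0 / (depth_a + eps)    # nearer image dominates the boundary
    w_b = 1.0 / (depth_b + eps)
    return (w_a * pix_a + w_b * pix_b) / (w_a + w_b)
```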
In a second aspect, the present application provides an electronic device including one or more processors and one or more memories. The one or more memories store computer program code including computer instructions that, when executed by the one or more processors, cause the electronic device to perform: while displaying a shooting preview screen, acquiring a first image and first shooting data in response to a first operation by a user, where the first image is an image of the shooting preview screen captured by a camera, the first operation is the user tapping a shooting control in the shooting preview screen, and the first shooting data includes a far-view defocus distance, a depth, and coordinates for each pixel of the first image, the far-view defocus distance being the distance between the focal plane and the far view; segmenting the first image to obtain a target image and N first background images, where N is a positive integer; determining first point spread function (PSF) distribution data based on the first shooting data, where the first PSF distribution data is the blur-kernel data of each pixel of the first image; deblurring the N first background images based on the first PSF distribution data to obtain N second background images; and inputting the target image and the N second background images into a second neural network model for restoration processing to obtain a second image.
The depth is the distance between the camera and the object captured at each pixel, and the coordinates represent the position of each pixel in the image.
In the embodiments of the application, the electronic device can separate the image into regions and deblur each region separately, ensuring the accuracy and effect of deblurring, and then merge the deblurred regions back together, so that the defocus blur is removed after image processing. Because the electronic device segments the image before deblurring, each segmented region can be made sharper; in addition, the fusion performed by the second neural network model further deblurs the pixels at the region boundaries, so the resulting second image is sharper than the first image and of better quality.
In one possible implementation, when acquiring the first image and the first shooting data, the electronic device specifically performs: in response to the first operation, acquiring the first image, a focus plane, and a depth map, where the focus plane is the plane that passes through the focal point and is perpendicular to the optical axis, and the depth map represents the distance between the camera and the object corresponding to each pixel in the first image; determining the depth of each pixel in the first image based on the depth map; determining the far-view defocus distance of each pixel in the first image based on the difference between the focus plane and the depth map; and determining the coordinates of each pixel based on its position in the first image. In this way, the electronic device can determine the depth, the far-view defocus distance, and the position from the depth map, which underpins the accuracy of the blur kernels determined later.
In a possible implementation, when segmenting the first image to obtain a target image and N first background images, the electronic device specifically performs: inputting the first image into a first neural network model for processing to obtain the target image and the N first background images, where the first neural network model is a network model for semantic image segmentation; or segmenting based on the depth of the first image to obtain the target image and the N first background images, where the pixels of each segmented image fall within a corresponding preset depth range. By segmenting with a neural network model for image segmentation, the electronic device can separate the different things in the first image. In general, the depths of the pixels belonging to the same object are close to each other, so the segmented images can be deblurred more accurately, laying a foundation for the subsequent deblurring of the background regions of the first image. The depth L distinguishes how far the photographed object is from the camera and has a specific computational relationship with the depth of field, so the electronic device can group pixels of different depths by range; the segmented images then make it easier to pre-sort the distribution of the first PSF distribution data, further improving the deblurring effect on the segmented images.
In one possible implementation, when determining the first PSF distribution data based on the first shooting data, the electronic device specifically performs: determining the first PSF distribution data corresponding to the first image based on first mapping information and the first shooting data, where the first mapping information is a mapping between the far-view defocus distance, the depth, and the coordinates on one side and blur kernels on the other. The PSF distribution data is obtained through calibration experiments on the FOV, L, and D of the actual lens, so accurate mapping information between L, D, coordinates, and blur kernels can be formed. Because the mapping information is stored in the electronic device in advance, determining a blur kernel is both accurate and efficient, simplifying the device's computation and saving its processing resources and energy. In addition, each blur kernel has a position attribute. Since the field of view of each device's camera differs, the imaging effect also differs across pixels. For example, a pixel at the edge of the lens may be distorted to some extent by the field of view, making image edges blurrier than the center. Therefore, the mapping information establishes a correspondence between pixel coordinates and blur kernels, so that processing each pixel with the blur kernel for its position yields a sharper image. That is, blur kernels determined from the mapping information can also reduce the blur caused by lens distortion after deblurring.
In a possible implementation, when determining, based on the first mapping information and the first shooting data, the first PSF distribution data corresponding to the first image, the electronic device specifically performs: determining a first blur kernel corresponding to each pixel based on the first mapping information and the first shooting data; or dividing the first image by a specific size into a plurality of pixel matrices, determining second shooting data for each pixel matrix, and determining, based on the first mapping information, a second blur kernel corresponding to the second shooting data of each pixel matrix, where the second shooting data is the mean of the first shooting data of the pixels in that pixel matrix; or determining, based on the first shooting data, third shooting data corresponding to the target image and to each of the N first background images, and determining, based on the first mapping information, a third blur kernel corresponding to the third shooting data of the target image and of each first background image, where the third shooting data is the mean of the first shooting data of the pixels in the target image or in that first background image. If the electronic device divides the first image by a specific size into pixel matrices, the shooting data of the pixels within one matrix differ little, because the pixels are close together in the image. Determining one blur kernel from multiple pixels therefore preserves the accuracy of the shooting data and blur kernels while reducing the amount of computation in the deconvolution.
In a possible implementation, when deblurring the N first background images based on the first PSF distribution data to obtain N second background images, the electronic device specifically performs: when first blur kernels are determined, deconvolving each pixel of the N first background images with its first blur kernel to obtain the N second background images; when second blur kernels are determined, deconvolving each pixel matrix of the N first background images with its second blur kernel to obtain the N second background images; and when third blur kernels are determined, deconvolving each of the N first background images with its third blur kernel to obtain the N second background images. The deconvolution results differ with the granularity of the blur kernels: deconvolving per pixel gives the finest, most accurate kernels and the best deblurring, but the lowest processing efficiency and the highest cost in processing resources and energy. Deconvolving per pixel matrix within a background image barely reduces the deblurring quality while markedly improving efficiency and reducing energy and resource consumption. Deconvolving a whole segmented background image with a single blur kernel lowers the deblurring quality somewhat, but markedly improves efficiency and saves processing resources and energy.
In a possible implementation, when inputting the target image and the N second background images into the second neural network model for restoration processing to obtain the second image, the electronic device specifically performs: inputting the target image and the N second background images into the second neural network model, determining through the model the fusion weights at the boundaries between the target image and the N second background images, and determining the boundary pixels based on those fusion weights to obtain the second image. In this way, the electronic device repairs and blends the "seams" of the stitched images, ensuring a good result at the boundaries of the second image.
In a possible implementation, when determining, through the second neural network model, the fusion weights at the boundaries of the target image and the N second background images, the electronic device specifically performs: determining, through the second neural network model, the depth at the boundary between each two adjacent images among the target image and the N second background images; and determining, through the model, the fusion weight at each boundary based on the pixel depths there. The fusion weight is the proportion each of the two adjacent stitched images contributes at their boundary: the smaller the depth of an adjacent image, the larger its fusion weight; the larger the depth, the smaller its fusion weight. During fusion, the weights are thus determined by depth: a smaller depth means the object is closer to the camera and contributes more at the boundary, which ensures a better fusion and restoration result across the different background images.
In a third aspect, the present application provides an electronic device including one or more processors and one or more memories. The one or more processors are coupled with the one or more memories, which store computer program code including computer instructions that, when executed by the one or more processors, cause the electronic device to perform the image processing method in any of the possible implementations of the above aspects.
In a fourth aspect, the present application provides an electronic device including one or more functional modules, which are used to perform the image processing method in any of the possible implementations of the above aspects.
In a fifth aspect, embodiments of the present application provide a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the image processing method in any one of the possible implementations of the above aspect.
In a sixth aspect, embodiments of the present application provide a computer program product which, when run on a computer, causes the computer to perform the image processing method in any one of the possible implementations of the above aspect.
Drawings
Fig. 1 is a schematic diagram of the depth of field of a camera lens according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a user interface provided by an embodiment of the present application;
Fig. 3 is a flowchart of a depth-of-field extension method based on depth segmentation according to an embodiment of the present application;
Fig. 4A to Fig. 4C are schematic diagrams of image segmentation results according to an embodiment of the present application;
Fig. 5 is a schematic diagram of an image effect according to an embodiment of the present application;
Fig. 6 is a flowchart of another depth-of-field extension method based on depth segmentation according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a software architecture of an electronic device according to an embodiment of the present application;
Fig. 8 is a schematic diagram of a hardware structure of an electronic device 100 according to an embodiment of the present application.
Detailed Description
In embodiments of the present application, the words "first," "second," and the like are used to distinguish between identical or similar items with substantially the same function and effect. For example, a first chip and a second chip are distinguished merely as different chips, with no ordering implied. Those skilled in the art will appreciate that "first," "second," and the like neither limit number or order of execution nor necessarily denote different items.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "for example" are used to denote examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or more advantageous than other embodiments or designs. Rather, such words are intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates three possible relationships; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of" the following items means any combination of those items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.
In order to facilitate the clear description of the technical solutions of the embodiments of the present application, the following simply describes some terms and techniques involved in the embodiments of the present application:
(1) Image segmentation is the technique and process of dividing an image into several specific regions with distinctive properties and extracting the objects of interest. It is a key step from image processing to image analysis. Existing image segmentation methods fall mainly into the following categories: threshold-based methods, region-based methods, edge-based methods, methods based on specific theories, and so on.
(2) Field of view (FOV): in optical engineering, the angle of view is also called the field of view, and its size determines the field of view of an optical instrument. The field angle, denoted FOV, relates to the focal length as follows: image height = EFL × tan(FOV/2), where EFL is the effective focal length and FOV is the field angle. With the camera lens as the vertex, the angle subtended by the two edges of the largest extent of the object that can be imaged through the lens is called the field angle. The larger the field angle, the larger the field of view and the smaller the optical magnification. Colloquially, a target beyond this angle will not be captured by the lens.
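As an illustrative numeric check of this relation (the lens values are assumed, not taken from the patent): for EFL = 4 mm and FOV = 80°,

$$\text{image height} = \mathrm{EFL} \times \tan\!\left(\frac{\mathrm{FOV}}{2}\right) = 4\,\mathrm{mm} \times \tan(40^{\circ}) \approx 4 \times 0.839 \approx 3.36\,\mathrm{mm}.$$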
(3) Depth of field (DoF) refers to the range of distances, in front of and behind the focused subject, within which the camera lens or other imager can form an acceptably sharp image. That is, it is the range before and after the focal point within which the image remains sharp after focusing is completed. The aperture, the lens, and the distance from the focal plane to the subject are the main factors affecting the depth of field.
When shooting, the process of adjusting the camera lens so that a scene at a certain distance from the camera images sharply is called focusing, and the point where that scene lies is called the focal point. Because "sharp" is not an absolute concept, scenes within a certain distance in front of (closer to the camera than) and behind the focal point can also image sharply; the sum of these front and rear ranges is the depth of field. Only scenes within this range can be captured sharply. After light enters the lens, an ideal lens would converge all rays to a single point, from which they spread out in a cone; that point of convergence is the focal point. Before and after the focal point the light converges and diverges, and the image of a point spreads into an enlarged circle, called the circle of confusion.
Illustratively, fig. 1 is a schematic diagram of the depth of field of a camera lens disclosed in the present application. The camera collects light through the lens and images it on the focal plane. The distance between the subject and the lens is the object distance, and the distance between the lens and the focal plane is the image distance; their sum is the shooting distance, i.e., the distance between the subject and the focal plane. After focusing, if a scene lies in the range from a nearer near point to a farther far point, its image on the focal plane is sharp. The distance between the near point and the far point is the depth of field, split into the front depth of field before the focal point and the rear depth of field behind it. The distance from the near point to the lens is the near-point distance, and the distance from the far point to the lens is the far-point distance. The range from the focal plane formed when focusing at the near point to the focal plane formed when focusing at the far point is called the depth of focus.
(4) The point spread function (PSF) is the light-field distribution of the output image of an optical system when the input object is a point light source, i.e., the spot formed by a point light source after passing through the system. Even in an ideal, aberration-free system, diffraction at the aperture prevents a point light source from converging to an infinitesimal point; it can only spread into a PSF spot. The smaller the PSF the better, ideally infinitesimal, so that point light sources at different positions are resolved best.
When restoring a defocus-blurred image, a commonly adopted model is the disc defocus model, which simulates the point spread function well on the basis of geometrical optics. When the image distance, focal length, and object distance of the imaging system do not satisfy the ideal relationship, a point light source does not image as a point but as a diffuse disc of uniform gray level. The point spread function can then be expressed as a circular spot of uniformly distributed gray level:

$$h(x, y) = \begin{cases} \dfrac{1}{\pi R^{2}}, & x^{2} + y^{2} \le R^{2} \\ 0, & \text{otherwise} \end{cases}$$

where R is the radius of the uniform circular spot, i.e., the defocus blur radius, which reflects the degree of blur of the imaging system.
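A short numerical sketch of the disc model: a point light source convolved with the disc PSF spreads into a uniform spot of radius R, and the PSF integrates to 1. The array sizes and radius are arbitrary illustration values.

```python
import numpy as np
from scipy.signal import fftconvolve

def disc_psf(R: int) -> np.ndarray:
    # Disc-model PSF: uniform gray level inside radius R, zero outside.
    y, x = np.mgrid[-R:R + 1, -R:R + 1]
    k = (x * x + y * y <= R * R).astype(float)
    return k / k.sum()                       # a PSF integrates to 1

point = np.zeros((65, 65))
point[32, 32] = 1.0                          # ideal point light source
spot = fftconvolve(point, disc_psf(8), mode="same")
assert np.isclose(spot.sum(), 1.0)           # energy is preserved in the blur spot
```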
In a shooting scene, the electronic device needs to focus on the shooting subject so that the subject in the captured picture is sharper. According to the focusing principle, whether a picture is sharp depends mainly on whether the focal length and object distance of the focused subject fall within the range of sharp imaging. However, the objects in the scene lie at uneven distances from the camera, so different objects require different focusing ranges; in the captured picture only the focused target is sharp, and the other objects are relatively blurred.
Illustratively, fig. 2 is a schematic diagram of a user interface disclosed in an embodiment of the present application. The electronic device displays a shooting preview interface as shown in fig. 2. The shooting preview interface may include a shooting screen 210, a camera-switching control 240, a shooting (recording) control 230, and an album 220. Wherein:
the conversion camera control 240 is configured to switch a camera for capturing an image between a front camera and a rear camera.
A photographing control 230 for causing the electronic device to photograph in response to a user's operation.
Album 220 for viewing the captured pictures and videos by a user.
The shooting preview interface may also include other controls, which are not limiting of the application.
When the user taps the shooting control 230, the electronic device may acquire the picture in the current preview box and treat it as the photograph captured by the electronic device.
As shown in fig. 2, the electronic device displays the shooting screen 210 of the camera. The person in the current shooting screen 210 is in focus, so the person appears sharp in the shooting screen 210 while the person's background is blurred: for example, the trees, the road, the street lamp, and the moon are blurred, and the light is also dim.
In the above shooting process, due to the limitation of focusing, things other than the focused object in the captured image may be blurred, so the background of the captured image is not sharp enough.
To address the above problem, an embodiment of the present application provides an image processing method. The electronic device may acquire a captured first image and the corresponding first shooting data. The electronic device then segments the first image to obtain a target image and N background images. Next, it obtains first PSF distribution data based on the first shooting data of the first image and deblurs the N first background images based on the first PSF distribution data to obtain N second background images. The N second background images and the target image are then input into a second neural network model for processing to obtain a second image, which is sharper than the first image. In this way, by deblurring according to the blur distribution, the electronic device can eliminate the focus blur caused by differing depths of field, improving the background sharpness of the image and ensuring the image effect.
Referring to fig. 3, fig. 3 is a flow chart of a depth-of-field extension method based on depth segmentation according to an embodiment of the present application. As shown in fig. 3, the method may include, but is not limited to, steps S301 to S305.
S301, the electronic device acquires a first image and first shooting data.
After the electronic device opens the camera application, it may display a shooting preview screen captured by the camera; for the shooting preview screen, refer to the user interface shown in fig. 2, which is not repeated here. The electronic device may acquire the first image in response to the user's shooting operation. For the specific tap operation, refer to the description of the user tapping the shooting control 230 in fig. 2, which is not repeated. The first image is the image, captured by the user, that the electronic device obtains from the camera module.
While acquiring the first image, the electronic device may acquire a depth map of the first image and compute the first shooting data based on it. The shooting data may include the far-view defocus distance, the depth, and the coordinates; that is, the first shooting data consists of the far-view defocus distance D, the depth L, and the coordinates of each pixel of the first image captured by the electronic device. The coordinates may be matrix coordinates (m, n) or polar coordinates (ρ, θ), where ρ is the distance from the center of the camera FOV and θ is the polar angle. It should be understood that the first image and the first shooting data are acquired in pairs. The depth L is the distance between the camera and the object captured at each pixel; the far-view defocus distance D is the difference between the focus plane and the depth map, and may be positive, negative, or zero; the coordinates represent the position of each pixel in the image. The focus plane is the plane that passes through the focal point and is perpendicular to the optical axis.
Specifically, during shooting, the electronic device may acquire a depth map of the first image, which represents the distance between the camera and the object corresponding to each pixel. The electronic device can read the depth L of each pixel directly from the depth map. The electronic device can obtain the focus plane during focusing of the first image, and determine the difference between the focus plane and the depth L as the far-view defocus distance D. The electronic device may determine the coordinates of each pixel based on its position in the first image.
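A minimal sketch of this computation, assuming a metric depth map and a scalar focus-plane distance; the variable names are illustrative, not from the patent.

```python
import numpy as np

def first_shooting_data(depth_map: np.ndarray, focus_plane: float):
    # Depth L comes straight from the depth map.
    L = depth_map
    # Far-view defocus distance D: difference between depth and the focus
    # plane (positive, negative, or zero).
    D = depth_map - focus_plane
    # Matrix coordinates (m, n) of every pixel.
    m, n = np.indices(depth_map.shape)
    # Polar coordinates (rho, theta) about the centre of the camera FOV.
    cy, cx = (depth_map.shape[0] - 1) / 2, (depth_map.shape[1] - 1) / 2
    rho = np.hypot(m - cy, n - cx)
    theta = np.arctan2(m - cy, n - cx)
    return L, D, (m, n), (rho, theta)
```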
S302, the electronic equipment divides the first image to obtain a target image and N first background images.
The electronic device may segment the first image into one target image and N first background images. The target image is the image of the focused target in the first image's picture; the N first background images are the background images formed by everything outside the region of the target image, and N is a positive integer. The target image may be the image the electronic device obtains by segmenting out the focused target.
In one possible implementation, the electronic device may segment the first image using image segmentation methods based on thresholds, regions, edges, clustering, graph theory, deep learning, and so on. The electronic device can input the first image into the first neural network model for processing to obtain the target image and the N first background images, where the first neural network model is a neural network model for image segmentation, e.g., a network model for semantic image segmentation.
Semantic image segmentation (e.g., PSPNet, the pyramid scene parsing network) is classification at the pixel level: pixels belonging to the same class are grouped into one class, so semantic segmentation understands the image from the pixel level. For example, if an image contains a person and a motorcycle, the pixels belonging to the person are grouped into one class, the pixels belonging to the motorcycle into another, and the background pixels into a third. In the embodiments of the application, the pixels grouped into one class form one image, yielding the target image and the N first background images.
Illustratively, the electronic device assigns, i.e., classifies, each pixel using the image's high-level semantic labels, which cover various object categories (e.g., people, animals, cars) as well as background categories (e.g., sky). The semantic segmentation task demands both high classification accuracy and high localization accuracy: the electronic device needs to locate object contours precisely and classify the regions inside the contours accurately, so that a specific object is cleanly separated from the background. The resulting sub-images (the target image and the N first background images) have maximal similarity within each sub-image and minimal similarity between sub-images.
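A minimal sketch of such pixel-level classification, using torchvision's pretrained DeepLabV3 purely as a stand-in for the first neural network model (the text above cites PSPNet; the actual model is not public):

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Off-the-shelf semantic segmentation network as a stand-in.
model = deeplabv3_resnet50(weights="DEFAULT").eval()

def segment_classes(img: torch.Tensor) -> torch.Tensor:
    # img: float tensor of shape (3, H, W), ImageNet-normalised.
    # Returns an (H, W) map of per-pixel class indices; pixels of the same
    # class would form one target image or background image.
    with torch.no_grad():
        logits = model(img.unsqueeze(0))["out"]   # (1, C, H, W)
    return logits.argmax(dim=1).squeeze(0)
```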
Fig. 4A to fig. 4C are schematic diagrams of image segmentation results according to embodiments of the present application. As shown in fig. 4A, in the shooting scene of fig. 2, the image acquired by the electronic device is as in fig. 4A: the focused person is relatively sharp, and the other background objects are blurred. The electronic device may segment fig. 4A to obtain fig. 4B. As shown in fig. 4B, the electronic device may divide the picture in fig. 4A into a target image a and first background images b, c1, c2, d, and e (5 background images).
In the above embodiment, the electronic device segments with a neural network model for image segmentation, so the different things in the first image can be separated. In general, the depths of the pixels belonging to the same object are close to each other, so the segmented images can be deblurred more accurately, laying a foundation for the subsequent deblurring of the background regions of the first image.
In another possible implementation, the electronic device may segment based on the depth L of the first image to obtain one target image and N first background images.
The electronic device stores preset depth ranges, which are the depth ranges used to partition the pixels in the first image. After determining the depth L of each pixel in the first image from the depth map, the electronic device may assign different pixels to the same or different regions according to the preset depth ranges; the pixels assigned to the same region form one segmented image (the target image or one of the N first background images).
Illustratively, the preset depth ranges comprise k ranges: 0-5 m, 5-15 m, 15-30 m, ..., and beyond 100 m. When the depth L is within the first range, 0-5 m, its pixel is assigned to one background image (or the target image); when the depth L is within the second range, 5-15 m, its pixel is assigned to another background image (or the target image); when the depth L is within the third range, 15-30 m, its pixel is assigned to yet another background image (or the target image); ...; and when the depth L is in the k-th range, i.e., greater than 100 m, its pixel is assigned to the last background image (or the target image). k is an integer greater than 1.
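A minimal sketch of this region assignment for a single-channel image; the bin edges between 30 m and 100 m are not given above and are assumed here for illustration.

```python
import numpy as np

# Bin edges for the preset depth ranges (metres); 60 m is an assumed edge.
EDGES = (0, 5, 15, 30, 60, 100, np.inf)

def split_by_depth(img: np.ndarray, depth: np.ndarray) -> list[np.ndarray]:
    # One masked copy of the image per preset depth range; pixels outside
    # the range are zeroed, mirroring the region assignment described above.
    regions = []
    for lo, hi in zip(EDGES[:-1], EDGES[1:]):
        mask = (depth >= lo) & (depth < hi)
        regions.append(np.where(mask, img, 0))
    return regions
```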
As shown in fig. 4C, the electronic device may divide the image into a target image and N first background images according to depth ranges, where the target image is g (g includes the focused person) and the first background images are the 7 images f, h1, h2, h3, l, s, and r.
In the above embodiment, the depth L distinguishes how far the photographed object is from the camera and has a specific computational relationship with the depth of field. Therefore, in this segmentation method, the electronic device may group pixels of different depths by range; the segmented images make it easier to pre-sort the distribution of the first PSF distribution data, further improving the deblurring effect on the segmented images.
S303, the electronic device determines first PSF distribution data based on the first shooting data.
The first PSF distribution data is the PSF distribution data of the first image, which comprises the blur-kernel data of the first image. A blur kernel is in fact a matrix: convolving a sharp image with a blur kernel blurs it, a blur kernel being one kind of convolution kernel. The size of a blur kernel can be expressed as x × y, where x ranges from 3 to 50 and y ranges from 3 to 50.
The electronic device may store mapping information between shooting data and PSF distribution data, i.e., the mapping between the far-view defocus distance D, the depth L, and the coordinates on one side and blur kernels on the other; the mapping information is a 4-dimensional mapping table. The electronic device may determine the blur kernel corresponding to the shooting data (D, L, (m, n)) (or (D, L, (ρ, θ))) based on the mapping information, obtaining the blur kernel for each pixel.
The PSF distribution data is obtained through calibration experiments on the FOV, L, and D of the actual lens, so accurate mapping information between L, D, coordinates, and blur kernels can be formed. Because the mapping information is stored in the electronic device in advance, determining a blur kernel is both accurate and efficient, simplifying the device's computation and saving its processing resources and energy.
In addition, each blur kernel has a position attribute. Since the field of view of each device's camera differs, the imaging effect also differs across pixels. For example, a pixel at the edge of the lens may be distorted to some extent by the field of view, making image edges blurrier than the center. Therefore, the mapping information establishes a correspondence between pixel coordinates and blur kernels, so that processing each pixel with the blur kernel for its position yields a sharper image. That is, blur kernels determined from the mapping information can also reduce the blur caused by lens distortion after deblurring.
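A minimal sketch of such a lookup, keyed on quantised (D, L, ρ) for brevity (the table above is 4-dimensional, also covering the angular coordinate); the bin steps, the fallback kernel, and the table contents are all hypothetical and would come from per-lens calibration.

```python
import numpy as np

# Quantised (D, L, rho) -> measured blur kernel; filled at calibration time.
PSF_TABLE: dict[tuple[int, int, int], np.ndarray] = {}

def lookup_kernel(D: float, L: float, rho: float,
                  d_step: float = 0.5, l_step: float = 1.0,
                  r_step: float = 50.0) -> np.ndarray:
    key = (round(D / d_step), round(L / l_step), round(rho / r_step))
    # Fall back to a small uniform kernel if the bin was never calibrated.
    return PSF_TABLE.get(key, np.full((3, 3), 1.0 / 9.0))
```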
In one possible implementation, the electronic device may determine a first blur kernel for each pixel in the first background images. In this case, the first PSF distribution data is the blur kernel corresponding to each pixel of the first background images. The electronic device can determine the first shooting data of each pixel and then determine each pixel's first blur kernel based on that data and the mapping relationship described above.
In another possible implementation, the electronic device divides the first image by a specific size into a plurality of pixel matrices and determines the second shooting data of each pixel matrix. The electronic device may then determine, based on the mapping information above, the second blur kernel corresponding to the second shooting data of each pixel matrix in the first image, where the second shooting data is the mean of the first shooting data of the pixels in that matrix. In this case, the first PSF distribution data is one second blur kernel per 10×10 pixels (assuming the specific size is 10×10), i.e., one blur kernel per 100 pixels. A pixel matrix includes one or more pixels.
Specifically, the electronic device divides a first image of size x × y by a specific size q × p to form an a × b array of pixel matrices, where a is the positive integer obtained by rounding up x/q and b is the positive integer obtained by rounding up y/p; x, y, q, and p are positive integers with x > q and y > p. Most of the resulting pixel matrices have the full size q × p. The electronic device may determine the second shooting data of each pixel matrix based on the first shooting data: that is, it may take the mean of the first shooting data of the pixels in each pixel matrix as that matrix's second shooting data. The electronic device may then determine each pixel matrix's second blur kernel from its second shooting data, using the mapping relationship described above.
For example, the electronic device may first divide the first image: if the first image has 1920 × 1080 pixels (x × y), the electronic device may divide it by a specific size of 10 × 10 (q × p) to form a smaller array of pixel matrices, e.g., a 192 × 108 (a × b) array. The electronic device then computes the second shooting data of each 10 × 10 pixel matrix: for example, it determines the first shooting data of the 100 pixels and computes their average to obtain that matrix's second shooting data. A corresponding second blur kernel can then be determined from the second shooting data; for the specific method, refer to the description above.
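A minimal sketch of computing the second shooting data as per-block means, using edge-cropping for partial blocks (where the scheme above rounds the block count up instead); the shapes follow the 1920 × 1080 example.

```python
import numpy as np

def block_means(data: np.ndarray, q: int = 10, p: int = 10) -> np.ndarray:
    # Mean of per-pixel shooting data (e.g. defocus distance D) over each
    # q x p pixel matrix; partial edge blocks are cropped for clarity.
    a, b = data.shape[0] // q, data.shape[1] // p
    blocks = data[:a * q, :b * p].reshape(a, q, b, p)
    return blocks.mean(axis=(1, 3))            # one value per pixel matrix

D = np.random.rand(1080, 1920)                 # illustrative per-pixel data
D_blocks = block_means(D)                      # shape (108, 192): rows x cols
```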
It should be noted that the specific size q×p may be 10×10, 20×20, 50×50, 100×100, etc.; q may range from 1 to the total number of pixel columns of the first image, and p from 1 to the total number of pixel rows.
Alternatively, the specific size may be adjusted to specific needs. The electronic device may perform intelligent analysis on the first background image to determine the specific size. Specifically, the electronic device identifies the screen content of the first background image, determines the importance level of that content, and determines the specific size based on the importance level: the higher the importance level, the smaller the specific size; the lower the importance level, the larger the specific size. In one possible case, the electronic device may store a correspondence between particular screen contents and importance levels; this correspondence may be set by the user or preset by default in the electronic device, without limitation. For example, if screen content 1 is a tree, the electronic device may determine that the importance level of the first background image is low and the specific size is 50×50; if screen content 2 is a person, the importance level of the first background image is determined to be high and the specific size is 10×10. In another possible case, after the electronic device determines the N first background images, it may display an option user interface for them, for example K importance-level options (e.g., important, secondary, unimportant, etc.) displayed on each first background image. The user may select one importance-level option for each first background image. Then, in response to the user's confirmation operation, the electronic device determines the specific size based on the selected importance-level options. The confirmation operation is the user clicking a confirm control after completing the importance-level selection.
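A minimal sketch of such a content-to-importance-to-size lookup is shown below; the concrete content classes, level names, and sizes are illustrative placeholders, not values fixed by the patent:

```python
# Hypothetical mapping from recognized screen content to an importance level,
# and from the level to the specific size q x p.
CONTENT_IMPORTANCE = {"person": "high", "vehicle": "high", "tree": "low"}
LEVEL_TO_SIZE = {"high": (10, 10), "medium": (20, 20), "low": (50, 50)}

def specific_size(screen_content: str) -> tuple[int, int]:
    level = CONTENT_IMPORTANCE.get(screen_content, "medium")
    return LEVEL_TO_SIZE[level]

print(specific_size("person"))  # (10, 10): important content, finer blur kernels
print(specific_size("tree"))    # (50, 50): unimportant content, coarser kernels
```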
In the above manner, the larger the specific size, the coarser the pixel-matrix granularity: the calculation is simpler and more efficient, but the sharpness improvement is poorer. Conversely, the smaller the specific size, the finer the granularity: the calculation is more complex and less efficient, but the sharpness improvement is better. Through this determination of the specific size, efficiency can be improved and computing resources saved while the sharpness effect is ensured.
In yet another possible implementation, the electronic device may determine a third blur kernel for the target image and each of the N first background images, i.e., one blur kernel per segmented image. This requires the segmentation result, so S302 described above needs to have been executed first. The electronic device determines the shooting data of each first background image in units of the segmented images. In this case, the first PSF distribution data is the blur kernel corresponding to each of the N first background images.
Specifically, the electronic device determines, based on the first shooting data, the third shooting data corresponding to the target image and to each of the N first background images, and determines, based on the first mapping information, the third blur kernels corresponding to the third shooting data of the target image and of the N first background images. The third shooting data is the mean of the first shooting data of the pixels in the target image or in the first background image.
Optionally, the electronic device determines the pixels of a first background image, takes the mean of the shooting data of all pixels of that background image as its third shooting data, and determines the third blur kernel from that shooting data.
Alternatively, the electronic device may divide the first image according to the specific size, determine the pixel matrices contained in a first background image, determine the second shooting data of those pixel matrices, take the mean of the second shooting data of the pixel matrices of that first background image as its third shooting data, and determine the third blur kernel from that shooting data. For the specific procedure, refer to the description above, which is not repeated.
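As a sketch, the third shooting data can be computed per segmented region from a label mask; the mask convention (0 for the target image, 1..N for the first background images) is an assumption of this example, not a convention stated in the patent:

```python
import numpy as np

def third_shot_data(first_shot_data: np.ndarray, region_mask: np.ndarray) -> dict[int, np.ndarray]:
    """Mean shooting data per segmented region.

    first_shot_data: (y, x, 3) per-pixel shooting data.
    region_mask: (y, x) integer labels; 0 marks the target image and
    1..N mark the N first background images.
    """
    result = {}
    for label in np.unique(region_mask):
        pixels = first_shot_data[region_mask == label]  # all pixels of one region
        result[int(label)] = pixels.mean(axis=0)        # third shooting data of that region
    return result
```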
In the above embodiments, the first shooting data is the shooting data of each pixel of the first image; the second shooting data is the shooting data of each pixel matrix of the first image; the third shooting data is the shooting data of the target image or of each first background image formed by segmenting the first image.
After the electronic device determines the first PSF distribution data, the first PSF distribution data may be stored.
In the above embodiment, when the electronic device divides the first image according to a specific size to obtain pixel matrices, the pixels within one matrix are close together in the image, so their shooting data differ little. Therefore, determining one blur kernel from multiple pixels preserves the accuracy of the shooting data and of the blur kernel, while the amount of convolution calculation is reduced.
S304, the electronic equipment performs deblurring processing on the N first background images based on the first PSF distribution data to obtain N second background images.
The electronic device may perform deblurring processing on the N first background images based on the first PSF distribution data to obtain N second background images. The deblurring processing may be a deconvolution based on the blur kernel, such as Wiener filtering or neural-network deconvolution.
Illustratively, the deblurring procedure for the i-th of the N first background images using the Wiener filter is described below:
Let the i-th first background image be $b_i$ and its corresponding blur kernel be $c_i$; the resulting i-th second background image $X_i$ can then be expressed, in the standard Wiener-deconvolution form consistent with the definitions below, as:

$$X_i = F^{-1}\!\left(\frac{\overline{F(c_i)}}{\left|F(c_i)\right|^{2} + \frac{1}{SNR(\omega)}}\, F(b_i)\right)$$

where $F$ is the Fourier transform, $F^{-1}$ is the inverse Fourier transform, $\overline{\,\cdot\,}$ denotes the complex conjugate, and $SNR(\omega)$ is the signal-to-noise ratio at frequency $\omega$.
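A minimal single-channel sketch of this Wiener deconvolution in NumPy follows; the constant scalar SNR (in place of a per-frequency SNR(ω), which would in practice be estimated from the image and noise power spectra), the zero-padding of the kernel, and the function name are assumptions of this sketch:

```python
import numpy as np

def wiener_deblur(b_i: np.ndarray, c_i: np.ndarray, snr: float = 100.0) -> np.ndarray:
    """Wiener deconvolution of background image b_i with blur kernel c_i."""
    # Zero-pad the kernel to image size.
    kernel = np.zeros_like(b_i, dtype=np.float64)
    kh, kw = c_i.shape
    kernel[:kh, :kw] = c_i
    # Center the kernel so the deconvolved image is not shifted.
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    B = np.fft.fft2(b_i)
    C = np.fft.fft2(kernel)
    # X = F^-1( conj(C) / (|C|^2 + 1/SNR) * B )
    X = np.conj(C) / (np.abs(C) ** 2 + 1.0 / snr) * B
    return np.real(np.fft.ifft2(X))
```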
In the above embodiment, the electronic device deblurs each background image separately. Because each region is deblurred with its own blur kernel, the deblurring within that region is more accurate, which ensures the sharpening effect.
In one possible implementation, when the first blur kernel has been determined, the electronic device performs deconvolution on each pixel of the N first background images based on its first blur kernel to obtain N second background images. That is, the electronic device deconvolves each pixel of the N first background images with its corresponding first blur kernel. In this case, the number of deconvolution calculations equals the number of pixels in the background images, and each calculated result is the pixel at the same position in the second background image.
In another possible implementation, when the second blur kernel has been determined, the electronic device performs deconvolution on each pixel matrix of the N first background images based on its second blur kernel to obtain N second background images. That is, the electronic device deconvolves each pixel matrix of the N first background images with that matrix's second blur kernel. Assuming the specific size of a pixel matrix is 10×10 and its second blur kernel is blur kernel 1, all 100 pixels of that matrix are deconvolved with this one blur kernel, yielding the 100 pixel values of the corresponding pixel matrix in the second background image. After all pixel matrices of a first background image have been deconvolved, the corresponding second background image is obtained.
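Building on the wiener_deblur sketch above, per-pixel-matrix deconvolution can be illustrated as follows; deconvolving the full image once per block is deliberately simple, and a real implementation would deconvolve once per distinct kernel and reuse the result:

```python
import numpy as np

def blockwise_deblur(image: np.ndarray, kernels: dict, q: int = 10, p: int = 10) -> np.ndarray:
    """Deblur each q x p pixel matrix with its own second blur kernel.

    kernels maps the matrix index (i, j) to that matrix's blur kernel.
    The full image is deconvolved with each block's kernel and only
    that block of the result is kept.
    """
    out = np.empty_like(image, dtype=np.float64)
    y, x = image.shape
    for i in range(-(-y // p)):
        for j in range(-(-x // q)):
            deblurred = wiener_deblur(image, kernels[(i, j)])
            out[i * p:(i + 1) * p, j * q:(j + 1) * q] = \
                deblurred[i * p:(i + 1) * p, j * q:(j + 1) * q]
    return out
```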
In yet another possible implementation, when the third blur kernel has been determined, the electronic device performs deconvolution on the N first background images based on their third blur kernels to obtain N second background images. That is, the electronic device deconvolves each of the N first background images with that image's third blur kernel: all pixels of a first background image are deconvolved with the corresponding third blur kernel, and the deconvolved pixels are mapped to the same positions to obtain the second background image.
Optionally, in the above three implementations, the electronic device may also perform deconvolution on the target image, so that a deconvolved, and thus clearer, target image is obtained.
In the above embodiments, the deconvolution results differ with the fineness of the blur kernels. Deconvolving each pixel gives the finest and most accurate blur kernels and the best deblurring effect, but the lowest processing efficiency and the greatest consumption of processing resources and energy. Deconvolving each pixel matrix of a background image hardly reduces the deblurring effect, while processing efficiency improves markedly and energy and resource consumption decrease. Deconvolving each segmented background image with a single blur kernel reduces the deblurring effect, but processing efficiency improves significantly, saving processing resources and energy.
S305, the electronic device inputs the target image and the N second background images into a second neural network model for restoration processing, and a second image is obtained.
The second neural network model is a network model for performing repair fusion on the target image and the N second background images.
In one possible implementation, the electronic device inputs the target image and the N second background images into the second neural network model, and the second neural network model performs the restoration processing. Because gaps exist between the second background images, i.e., the pixels cannot be seamlessly stitched back to the size of the first image, the electronic device may first determine the fusion weights at the junctions between the target image and the second background images, and perform fusion ("bridging") based on these weights. A fusion weight is the proportion each of two stitched images contributes at their junction; a junction is the pixels between two adjacent stitched images. Specifically, the pixels at the junction of two images (the junction of two second background images, or of the target image and a second background image) are calculated according to the two images' respective fusion weights. The smaller an image's depth, the larger its fusion weight relative to the adjacent image; conversely, the larger the depth, the smaller its fusion weight.
Illustratively, suppose the second background image 1 and the second background image 2 are adjacent and their junction needs to be fused; the second neural network calculates the respective fusion weights based on the depths of the second background images 1 and 2 at the junction. Assuming the depth of the second background image 1 is $L_1$ and that of the second background image 2 is $L_2$, the fusion weight of the second background image 1 can be determined as $\frac{L_2}{L_1+L_2}$, and the fusion weight of the second background image 2 as $\frac{L_1}{L_1+L_2}$.
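A sketch of this depth-based weighting, with the weights in the form reconstructed above, follows; the function name and blending line are illustrative:

```python
def fusion_weights(depth_1: float, depth_2: float) -> tuple[float, float]:
    """Depth-based fusion weights at the junction of two adjacent images.

    The nearer image (smaller depth) receives the larger weight:
    w1 = L2 / (L1 + L2), w2 = L1 / (L1 + L2).
    """
    total = depth_1 + depth_2
    return depth_2 / total, depth_1 / total

w1, w2 = fusion_weights(1.0, 3.0)
print(w1, w2)  # 0.75 0.25: the image at depth 1.0 dominates the junction pixels
# A junction pixel is then blended as: pixel = w1 * pixel_1 + w2 * pixel_2
```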
In this fusion-weight processing, the fused weight is determined by depth: the smaller the depth, the closer the object is to the camera, and the larger its proportion at the junction in the fused picture. This ensures a better fusion and restoration effect across the different background images.
Fig. 5 shows schematic image effects according to an embodiment of the present application. In Fig. 5, a is an image not processed by this embodiment: the focused target (the hand) in a is clear, while the background is blurred, i.e., out of focus. By contrast, b in Fig. 5 is the image after the processing described above: both the focused target and the background in b are clearer than in a, and the image quality is better.
In the above embodiment, the electronic device segments the image and deblurs the parts separately, ensuring the accuracy and effect of deblurring, and then merges the deblurred images, so that image blur caused by defocus is reduced after processing. Moreover, because each segmented region is deblurred with its own kernel, each region is sharper; and the fusion processing of the second neural network model further ensures that the pixels at the junctions are deblurred, so the second image as a whole is clearer than the first image and of better quality.
Fig. 6 is a flow chart of a depth of field extension method based on depth segmentation according to an embodiment of the present application. As shown in fig. 6, the method may include, but is not limited to, steps S601 to S608.
S601, the electronic device acquires a first image and first shooting data.
S602, the electronic equipment divides the first image to obtain a target image and N first background images.
The specific descriptions of S601 to S602 may refer to the specific descriptions of S301 to S302, and are not repeated.
S603, the electronic device identifies the main body categories of the N first background images.
During the segmentation of the first image described above, the electronic device may identify the subject category of the things contained in each segmented first background image. Specifically, when performing semantic segmentation on the first image, the electronic device needs to identify the segmentation subject of each part, for example a vehicle, a pedestrian, a tree, and so on. The electronic device may also determine, after segmentation is complete, the subject categories of the N first background images containing the segmented objects. Specifically, the electronic device performs semantic recognition on the N first background images to obtain the subject category of the shooting object of each first background image. A subject category is a classification of things, such as trees, people, vehicles, hills, buildings, etc.
Illustratively, if the subject in the first background image 1 is a vehicle, the electronic device may recognize the subject category of the first background image 1 as vehicle.
Optionally, the electronic device may also determine a subject class of the target image.
S604, the electronic device judges whether the subject category of each of the N first background images is the important category, the secondary category, or the common category.
The electronic device may preset three subject categories: the important category, the secondary category, and the common category, whose importance decreases in that order.
In one possible case, the important category and the secondary category may be predefined for the electronic device. The electronic device can check each frame of first background image (for example, through an intelligent system) and determine its current category; if a first background image belongs to neither the important category nor the secondary category, it is determined to be of the common category.
For example, for a traffic system (an intelligent system), the important category is vehicles (or license plates). Thus, when a vehicle (or license plate) is identified in a first background image, the electronic device determines that this first background image is of the important category. The secondary category is pedestrians, so when a pedestrian is identified in a first background image, the electronic device determines that it is of the secondary category. If neither a vehicle nor a pedestrian is included, the electronic device determines that the first background image is of the common category.
In another possible case, the user may select the options for the important category and the secondary category. The electronic device may then determine, based on the user's selection, whether each frame of first background image is of the important category, the secondary category, or the common category.
For example, if the user selects buildings as the important category and persons as the secondary category, the electronic device may classify first background images containing a building into the important category and those containing a person into the secondary category; the remaining first background images are classified into the common category.
In the case where it is judged that the frame of the first background image is of an important class, S605 is executed; in the case of the secondary category, S606 is performed; in the case of the normal category, S607 is performed.
S605, when the subject category is the important category, the electronic device determines the first blur kernel corresponding to the first shooting data of each pixel of the first background image, and performs deconvolution processing on each pixel of the N first background images based on the first blur kernels to obtain N second background images.
When the subject category is the important category, the electronic device may determine the first blur kernel corresponding to each pixel of the first background image. Then, the electronic device may perform deconvolution processing on each pixel of the N first background images based on the first blur kernels to obtain N second background images.
In S605, reference may be made to descriptions of related embodiments in S303 and S304, which are not repeated.
S606, when the subject category is the secondary category, the electronic device determines the second blur kernel corresponding to the second shooting data of each pixel matrix of the first background image, and performs deconvolution processing on each pixel matrix of the N first background images based on the second blur kernels to obtain N second background images.
When the subject category is the secondary category, the electronic device may divide the first background image by a specific size to form a plurality of pixel matrices, calculate the second shooting data of each pixel matrix based on the first shooting data of the pixels in it, and determine the second blur kernel of each pixel matrix. Then, the electronic device may perform deconvolution processing on each pixel matrix of the N first background images based on the second blur kernels to obtain N second background images.
For details of S606, refer to the descriptions of the related embodiments in S303 and S304, which are not repeated.
S607, when the subject category is the common category, the electronic device determines the third blur kernel corresponding to the third shooting data of each first background image, and performs deconvolution processing on the N first background images based on the third blur kernels to obtain N second background images.
When the subject category is the common category, the electronic device may determine, based on the first shooting data, the third shooting data corresponding to the N first background images, and determine, based on the first mapping information, the third blur kernels corresponding to the third shooting data of the N first background images; the third shooting data is the mean of the first shooting data of the pixels in the target image or in the first background image. Then, the electronic device may perform deconvolution processing on the N first background images based on the third blur kernels to obtain N second background images.
In S607, reference may be specifically made to descriptions of related embodiments in S303 and S304, which are not repeated.
In the above steps S604-S607, the sharpness or image quality achieved for the important, secondary, and common categories decreases in that order, so the electronic device selects the deconvolution scheme matching the corresponding requirement in order to guarantee the needed sharpness. The important-category calculation is complex and the least efficient, but improves image sharpness the most; the secondary-category calculation is simpler and more efficient, at some cost in sharpness; the common-category scheme improves sharpness less than the other two, but its calculation efficiency is the highest. Through this scheme, the electronic device can improve calculation efficiency and save computing resources and energy while ensuring the sharpness of the image.
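The dispatch of S604-S607 can be summarized in a small sketch; the category names and the returned descriptions are illustrative placeholders:

```python
def deconvolution_scheme(category: str) -> str:
    """Map the subject category of a first background image to the
    deconvolution scheme of S605-S607."""
    return {
        "important": "per-pixel (first blur kernel)",         # S605: best sharpness, slowest
        "secondary": "per-pixel-matrix (second blur kernel)",  # S606: balanced
    }.get(category, "per-image (third blur kernel)")           # S607: fastest

for c in ("important", "secondary", "common"):
    print(c, "->", deconvolution_scheme(c))
```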
S608, the electronic device inputs the target image and the N second background images into the second neural network model for restoration processing to obtain a second image.
The specific description of S608 may refer to the specific description of S305, which is not repeated.
Fig. 7 is a schematic software structure of an electronic device according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with distinct roles and branches. The layers communicate with each other through a software interface. In some embodiments, the system is divided into four layers, from top to bottom, an application layer, an application framework layer, runtime (run time) and system libraries, and a kernel layer, respectively.
The application layer may include a series of application packages.
As shown in fig. 7, the application package may include applications (also referred to as applications) such as cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (Application Programming Interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 7, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is for providing communication functions of the electronic device. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar and can be used to convey notification-type messages, which disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to give message alerts, and so on. The notification manager may also present notifications in the system's top status bar in the form of a chart or scrolling text, such as notifications of applications running in the background, or present notifications on the screen in the form of a dialog interface; for example, a text message is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, or an indicator light blinks.
The Runtime (run time) includes core libraries and virtual machines. Run time is responsible for scheduling and management of the system.
The core library consists of two parts: one part is the function that the programming language (e.g., java language) needs to call, and the other part is the core library of the system.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes the programming files (e.g., java files) of the application layer and the application framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface Manager (Surface Manager), media library (Media Libraries), three-dimensional graphics processing library (e.g., openGL ES), two-dimensional graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of two-Dimensional (2D) and three-Dimensional (3D) layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing 3D graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, a sensor driver and a virtual card driver.
The workflow of the electronic device software and hardware is illustrated below in connection with a shooting scenario.
The electronic device captures images through the camera driver; when the camera application receives a shooting operation from the user, the first image can be obtained. The camera application may then perform steps S301 to S305 described above. While acquiring the first image, the electronic device can also acquire the depth map of the first image through the camera, and may calculate the first PSF distribution data of the first image based on the depth map. Further, the electronic device may determine the positions of the individual pixels through the camera application.
The following describes the apparatus according to the embodiment of the present application.
Fig. 8 is a schematic hardware structure of an electronic device 100 according to an embodiment of the present application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (Universal Serial Bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (Subscriber Identification Module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (Application Processor, AP), a modem processor, a graphics processor (Graphics Processing unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a memory, a video codec, a digital signal processor (Digital Signal Processor, DSP), a baseband processor, and/or a Neural network processor (Neural-network Processing Unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
It is understood that an AE system may also be included in the processor 110. The AE system may be specifically provided in the ISP. AE systems may be used to enable automatic adjustment of exposure parameters. Alternatively, the AE system may also be integrated in other processor chips. The embodiment of the present application is not limited thereto.
In the embodiments provided by the present application, the electronic device 100 may perform the image processing method described above through the processor 110.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices 100, such as AR devices, etc.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), an Active-matrix Organic Light-Emitting Diode (AMOLED) or an Active-matrix Organic Light-Emitting Diode (Matrix Organic Light Emitting Diode), a flexible Light-Emitting Diode (Flex), a Mini LED, a Micro-OLED, a quantum dot Light-Emitting Diode (Quantum Dot Light Emitting Diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement acquisition functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image or video visible to naked eyes. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (Charge Coupled Device, CCD) or a complementary metal oxide semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image or video signal. The ISP outputs the digital image or video signal to the DSP for processing. The DSP converts the digital image or video signal into an image or video signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1. For example, in some embodiments, the electronic device 100 may acquire images at a plurality of exposure coefficients using the N cameras 193; in video post-processing, the electronic device 100 may then synthesize an HDR image from these images by HDR techniques. In the embodiment of the present application, the electronic device can acquire the first image through the camera 193.
The digital signal processor is used to process digital signals, and may process other digital signals in addition to digital image or video signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (Moving Picture Experts Group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a Neural-Network (NN) computing processor, and can rapidly process input information by referencing a biological Neural Network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image video playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with embodiments of the present application are fully or partially produced. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
Those of ordinary skill in the art will appreciate that implementing all or part of the above-described embodiment methods may be accomplished by a computer program that is stored on a computer readable storage medium and that, when executed, may comprise the steps of the above-described method embodiments. And the aforementioned storage medium includes: ROM or random access memory RAM, magnetic or optical disk, etc.

Claims (10)

1. An image processing method, wherein the method is applied to an electronic device, the method comprising:
under the condition that the electronic equipment displays a shooting preview picture, responding to a first operation of a user, the electronic equipment acquires a first image and first shooting data, wherein the first image is an image of the shooting preview picture acquired by a camera, the first operation is an operation of clicking a shooting control in the shooting preview picture by the user, and the first shooting data comprises a perspective defocus distance, a depth and coordinates of each pixel of the first image; the perspective defocus distance is the distance between the focus and the distant scene;
the electronic equipment divides the first image to obtain a target image and N first background images, wherein N is a positive integer;
The electronic equipment determines first Point Spread Function (PSF) distribution data based on the first shooting data, wherein the first PSF distribution data is fuzzy core data of each pixel of a first image;
the electronic equipment performs deblurring processing on the N first background images based on the first PSF distribution data to obtain N second background images;
and the electronic equipment inputs the target image and the N second background images into a second neural network model for restoration processing to obtain a second image.
2. The method of claim 1, wherein the electronic device obtains the first image and the first captured data, specifically comprising:
the electronic equipment responds to the first operation, and acquires the first image, a focusing plane and a depth map, wherein the focusing plane is a plane which passes through the focus and is perpendicular to the optical axis, and the depth map is used for representing the distance between the objects corresponding to the pixels in the first image and the camera;
the electronic device determines the depth of each pixel in the first image based on the depth map;
the electronic device determines a perspective defocus distance of each pixel in the first image based on a difference between the focus plane and the depth map;
The electronic device determines coordinates of each pixel based on where the pixel is located in the first image.
3. The method according to claim 1 or 2, wherein the electronic device segments the first image to obtain a target image and N first background images, and specifically includes:
the electronic equipment inputs the first image into a first neural network model for processing to obtain a target image and N first background images, wherein the first neural network model is a network model for image semantic segmentation; or
the electronic equipment performs segmentation based on the depth of the first image to obtain a target image and N first background images, wherein the pixels of each image in the segmented target image and N first background images are in a corresponding preset depth range.
4. A method according to any of claims 1-3, wherein the electronic device determines first PSF distribution data based on the first shot data, comprising in particular:
the electronic equipment determines first PSF distribution data corresponding to the first image based on first mapping information and the first shooting data, wherein the first mapping information is a mapping relation between a distant view defocusing distance, a depth and coordinates and a fuzzy core.
5. The method of claim 4, wherein the electronic device determines first PSF distribution data corresponding to the first image based on first mapping information and the first photographing data, specifically comprising:
the electronic equipment determines a first fuzzy core corresponding to each pixel based on the first mapping information and the first shooting data; or
the electronic equipment divides the first image according to a specific size to form a plurality of pixel matrixes, determines second shooting data of each pixel matrix, and determines a second fuzzy core corresponding to the second shooting data of each pixel matrix based on the first mapping information; the second shooting data are the average value of the first shooting data of each pixel in the pixel matrix; or
The electronic equipment determines third shooting data corresponding to the target image and the N first background images based on the first shooting data, and determines a third fuzzy core corresponding to the target image and the third shooting data of the N first background images based on the first mapping information; the third shooting data is the average value of the first shooting data of each pixel in the target image or the first background image.
6. The method of claim 5, wherein the electronic device deblurring the N first background images based on the first PSF distribution data to obtain N second background images, specifically comprising:
under the condition that the first blur kernel is determined, the electronic equipment carries out deconvolution processing on each pixel of N first background images based on the first blur kernel to obtain N second background images;
under the condition that the second blur kernel is determined, the electronic equipment carries out deconvolution processing on each pixel matrix of N first background images based on the second blur kernel to obtain N second background images;
and under the condition that the third blur kernel is determined, the electronic equipment carries out deconvolution processing on N first background images based on the third blur kernel to obtain N second background images.
7. The method according to any one of claims 1-6, wherein the electronic device inputs the target image and the N second background images into a second neural network model for repair processing, so as to obtain a second image, and specifically includes:
the electronic device inputs the target image and the N second background images into a second neural network model, determines fusion weights of intersections of the target image and the N second background images through the second neural network model, and determines pixels of the intersections based on the fusion weights to obtain the second image.
8. The method according to claim 7, wherein the electronic device determines, through the second neural network model, a fusion weight of the target image and the N second background image intersections, specifically comprising:
the electronic equipment determines the depth of the junction of two adjacent images in the target image and the N second background images through the second neural network model;
the electronic equipment determines fusion weights of the intersections based on pixel depths of the intersections through the second neural network model; the fusion weight is the proportion of the junction between two adjacent images spliced together to the two images; the fusion weight of adjacent images with smaller depth is larger; the larger the depth, the smaller the fusion weight of neighboring images.
9. An electronic device, comprising: one or more processors and one or more memories; the one or more processors being coupled with the one or more memories, the one or more memories being configured to store computer program code, the computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-8.
10. A computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-8.
CN202211036085.XA 2022-08-27 2022-08-27 Image processing method and electronic equipment Active CN116051391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211036085.XA CN116051391B (en) 2022-08-27 2022-08-27 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211036085.XA CN116051391B (en) 2022-08-27 2022-08-27 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN116051391A CN116051391A (en) 2023-05-02
CN116051391B true CN116051391B (en) 2023-09-22

Family

ID=86130273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211036085.XA Active CN116051391B (en) 2022-08-27 2022-08-27 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN116051391B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117522939B (en) * 2024-01-04 2024-03-19 电子科技大学 Monocular list Zhang Mohu image depth calculation method
CN117880630B (en) * 2024-03-13 2024-06-07 杭州星犀科技有限公司 Focusing depth acquisition method, focusing depth acquisition system and terminal

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330549A (en) * 2020-10-16 2021-02-05 西安工业大学 Blind deconvolution network-based blurred image blind restoration method and system
CN113326924A (en) * 2021-06-07 2021-08-31 太原理工大学 Depth neural network-based key target photometric positioning method in sparse image
CN114926351A (en) * 2022-04-12 2022-08-19 荣耀终端有限公司 Image processing method, electronic device, and computer storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Restoration Algorithms for Motion-Blurred Images; Ren Jinfan; China Master's Theses Full-text Database, Information Science and Technology; full text *

Also Published As

Publication number Publication date
CN116051391A (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN116051391B (en) Image processing method and electronic equipment
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
JP2022505115A (en) Image processing methods and equipment and devices
KR20230084486A (en) Segmentation for Image Effects
CN105247567B (en) A kind of image focusing device, method, system and non-transient program storage device again
CN110677621B (en) Camera calling method and device, storage medium and electronic equipment
CN112581379A (en) Image enhancement method and device
CN116048244B (en) Gaze point estimation method and related equipment
CN111597922A (en) Cell image recognition method, system, device, equipment and medium
CN110443766A (en) Image processing method, device, electronic equipment and readable storage medium storing program for executing
CN113538227B (en) Image processing method based on semantic segmentation and related equipment
CN109005367A (en) A kind of generation method of high dynamic range images, mobile terminal and storage medium
CN115359105B (en) Depth-of-field extended image generation method, device and storage medium
CN111105370B (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
CN113781370A (en) Image enhancement method and device and electronic equipment
CN115379208A (en) Camera evaluation method and device
CN116916151B (en) Shooting method, electronic device and storage medium
CN114372931A (en) Target object blurring method and device, storage medium and electronic equipment
CN104184936A (en) Image focusing processing method and system based on light field camera
CN105893578A (en) Method and device for selecting photos
CN110971813B (en) Focusing method and device, electronic equipment and storage medium
CN116245741B (en) Image processing method and related device
CN116668773B (en) Method for enhancing video image quality and electronic equipment
CN116051386B (en) Image processing method and related device
CN117014561B (en) Information fusion method, training method of variable learning and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant