CN117616777A - Image processing method, device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN117616777A
Application number: CN202280004273.6A
Authority: CN (China)
Prior art keywords: image, light spot area, determining, pixel point
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 尹双双, 董家旭, 饶强, 陈妹雅, 刘阳晨旭, 江浩
Current assignee: Beijing Xiaomi Mobile Software Co Ltd
Original assignee: Beijing Xiaomi Mobile Software Co Ltd
Application filed by Beijing Xiaomi Mobile Software Co Ltd


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise

Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a first image and a second image captured by an image acquisition device for the same scene, wherein the first image is a normally exposed image and the second image is an underexposed image; determining an effective light spot area among at least one first light spot area in the first image according to at least one second light spot area in the second image; and performing blurring rendering processing on the first image according to the effective light spot area.

Description

Image processing method, device, electronic equipment and storage medium

Technical Field
The present disclosure relates to the field of image processing technology, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
In recent years, the functions of terminal devices have become more and more abundant, and the performance of these functions has gradually improved. For example, the camera program of a terminal device can provide multiple photographing modes, giving the terminal device many capabilities of a professional camera and meeting users' photographing needs in various scenes. However, a certain gap remains between the camera program of a terminal device and a professional camera. Taking the physical bokeh of a professional camera as an example: during shooting, a professional camera keeps objects at the depth of the focused subject sharp while blurring objects at other depths, thereby highlighting the photographed subject. In the related art, photographing with a physical blurring (bokeh) function is mainly realized through the professional lens of a professional camera.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium that address the drawbacks of the related art.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, the method including:
acquiring a first image and a second image captured by an image acquisition device for the same scene, wherein the first image is a normally exposed image and the second image is an underexposed image;
determining an effective light spot area among at least one first light spot area in the first image according to at least one second light spot area in the second image;
and performing blurring rendering processing on the first image according to the effective light spot area.
In one embodiment, the determining the effective light spot area in the at least one first light spot area in the first image according to the at least one second light spot area in the second image includes:
determining each first light spot area, among the at least one first light spot area, that belongs to a first intersection as an effective light spot area, wherein the first intersection is the intersection of the at least one first light spot area and the at least one second light spot area.
In one embodiment, after determining the effective spot area in the at least one first spot area in the first image according to the at least one second spot area in the second image, the method further comprises:
determining a target pixel point of each effective light spot area, wherein the target pixel point is the pixel point with the highest color saturation in the effective light spot area;
and, in each effective light spot area, determining a color parameter of each pixel point according to the color parameter of the target pixel point, wherein the color parameter is used for performing blurring rendering processing on the first image.
In one embodiment, the determining, in each of the effective light spot areas, the color parameter of each pixel according to the color parameter of the target pixel includes:
in the case that the color parameter difference between each pixel point in the i-th effective light spot area and the target pixel point of the i-th effective light spot area is smaller than a preset difference threshold, adjusting the color parameter of each pixel point according to the color parameter of the target pixel point of the i-th effective light spot area, wherein i is an integer greater than 0 and not greater than N, and N is the total number of effective light spot areas in the first image;
and, in the case that the color parameter difference between at least one pixel point in the j-th effective light spot area and the target pixel point of the j-th effective light spot area is greater than the preset difference threshold, determining that the color parameter of each pixel point in the j-th effective light spot area remains unchanged, wherein j is an integer greater than 0 and not greater than N.
In one embodiment, after determining the effective spot area in the at least one first spot area in the first image according to the at least one second spot area in the second image, the method further comprises:
determining a target brightness parameter of each effective light spot area, wherein the target brightness parameter is the brightness parameter of the pixel point with the minimum brightness parameter in the effective light spot area;
and, in each effective light spot area, adjusting the brightness parameter of each pixel point according to the target brightness parameter, wherein the brightness parameter is used for performing blurring rendering processing on the first image.
In one embodiment, before the determining of the effective light spot area among the at least one first light spot area in the first image according to the at least one second light spot area in the second image, the method further comprises:
performing light spot detection on the first image and the second image respectively to obtain at least one first light spot area in the first image and at least one second light spot area in the second image.
In one embodiment, performing spot detection on the first image and the second image respectively to obtain at least one first spot area in the first image and at least one second spot area in the second image, including:
performing light spot detection on the first image and the second image respectively in the YUV domain to obtain at least one first light spot area in the first image and at least one second light spot area in the second image.
In one embodiment, the performing spot detection on the first image and the second image respectively to obtain at least one first spot area in the first image and at least one second spot area in the second image includes:
determining a pixel point with brightness higher than a first brightness threshold in the first image as a first light spot pixel point, and determining at least one connected domain composed of the first light spot pixel points as a first light spot area;
and determining a pixel point with brightness higher than a second brightness threshold in the second image as a second light spot pixel point, and determining at least one connected area composed of the second light spot pixel points as a second light spot area.
In one embodiment, the determining the at least one connected domain composed of the first light spot pixels as the first light spot region includes:
determining, as a first light spot area, at least one connected domain that is composed of first light spot pixel points and whose number of pixel points is within a preset number range;
and the determining of the at least one connected area composed of the second light spot pixel points as a second light spot area includes:
determining, as a second light spot area, at least one connected area that is composed of second light spot pixel points and whose number of pixel points is within a preset number range.
According to a second aspect of embodiments of the present disclosure, there is provided an image processing apparatus including:
an acquisition module, configured to acquire a first image and a second image captured by an image acquisition device for the same scene, wherein the first image is a normally exposed image and the second image is an underexposed image;
a determining module, configured to determine an effective light spot area among at least one first light spot area in the first image according to at least one second light spot area in the second image;
and a rendering module, configured to perform blurring rendering processing on the first image according to the effective light spot area.
In one embodiment, the determining module is specifically configured to:
determining each first light spot area, among the at least one first light spot area, that belongs to a first intersection as an effective light spot area, wherein the first intersection is the intersection of the at least one first light spot area and the at least one second light spot area.
In one embodiment, the apparatus further comprises a color module for:
after determining an effective light spot area among at least one first light spot area in the first image according to at least one second light spot area in the second image, determining a target pixel point of each effective light spot area, wherein the target pixel point is the pixel point with the highest color saturation in the effective light spot area;
and, in each effective light spot area, determining a color parameter of each pixel point according to the color parameter of the target pixel point, wherein the color parameter is used for performing blurring rendering processing on the first image.
In one embodiment, the color module is configured to, in each of the effective light spot areas, determine a color parameter of each pixel according to the color parameter of the target pixel, and specifically configured to:
in the case that the color parameter difference between each pixel point in the i-th effective light spot area and the target pixel point of the i-th effective light spot area is smaller than a preset difference threshold, adjusting the color parameter of each pixel point according to the color parameter of the target pixel point of the i-th effective light spot area, wherein i is an integer greater than 0 and not greater than N, and N is the total number of effective light spot areas in the first image;
and, in the case that the color parameter difference between at least one pixel point in the j-th effective light spot area and the target pixel point of the j-th effective light spot area is greater than the preset difference threshold, determining that the color parameter of each pixel point in the j-th effective light spot area remains unchanged, wherein j is an integer greater than 0 and not greater than N.
In one embodiment, the device further comprises a brightness module for:
after determining an effective light spot area in at least one first light spot area in the first image according to at least one second light spot area in the second image, determining a target brightness parameter of each effective light spot area, wherein the target brightness parameter is a brightness parameter of a pixel point with the minimum brightness parameter in the effective light spot area;
and, in each effective light spot area, adjusting the brightness parameter of each pixel point according to the target brightness parameter, wherein the brightness parameter is used for performing blurring rendering processing on the first image.
In one embodiment, the device further comprises a detection module for:
performing light spot detection on the first image and the second image respectively to obtain at least one first light spot area in the first image and at least one second light spot area in the second image.
In one embodiment, the detection module is specifically configured to:
perform light spot detection on the first image and the second image respectively in the YUV domain to obtain at least one first light spot area in the first image and at least one second light spot area in the second image.
In one embodiment, the detection module is specifically configured to:
determining a pixel point with brightness higher than a first brightness threshold in the first image as a first light spot pixel point, and determining at least one connected domain composed of the first light spot pixel points as a first light spot area;
and determining a pixel point with brightness higher than a second brightness threshold in the second image as a second light spot pixel point, and determining at least one connected area composed of the second light spot pixel points as a second light spot area.
In one embodiment, the detection module is configured to, when determining at least one connected domain formed by the first light spot pixels as the first light spot area, specifically:
determining, as a first light spot area, at least one connected domain that is composed of first light spot pixel points and whose number of pixel points is within a preset number range;
and the determining of the at least one connected area composed of the second light spot pixel points as a second light spot area includes:
determining, as a second light spot area, at least one connected area that is composed of second light spot pixel points and whose number of pixel points is within a preset number range.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising a memory and a processor, the memory being configured to store computer instructions executable on the processor, and the processor being configured to perform the image processing method according to the first aspect when executing the instructions.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
According to the image processing method provided by the present disclosure, since the first image is a normally exposed image and the second image is an underexposed image, areas falsely identified as light spots in the normally exposed image, such as light-colored objects, are not identified as light spot areas in the underexposed image. The effective light spot areas screened from the first light spot areas using the second light spot areas are therefore accurate, with the falsely identified first light spot areas removed, so that blurring rendering processing can be performed on the first image on the basis of the determined effective light spot areas, imitating the physical bokeh function of a professional camera. When the method is applied to the camera program of a terminal device, the functions of the camera program become richer and closer to the photographing effect of a professional camera.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flowchart of an image processing method shown in an exemplary embodiment of the present disclosure;
FIG. 2 is a flowchart of an image processing method shown in another exemplary embodiment of the present disclosure;
fig. 3 is a schematic structural view of an image processing apparatus shown in an exemplary embodiment of the present disclosure;
Fig. 4 is a block diagram of an electronic device shown in an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
When a professional camera is used to take pictures, a long-focus or large-aperture lens can produce pictures with a shallow depth of field: the focused object, and other objects at the same depth as the focused object, remain sharp, while the foreground and background are blurred to different degrees, achieving the effect of highlighting the photographed subject. Point-like light sources in the blurred background or foreground are often rendered as light spots in the imaging plane because of their higher power density. In general, the brighter the point-like light source and the farther it is from the focal plane, the larger the radius of the resulting spot.
At present, due to the portability and cost requirements of terminal devices such as smartphones, cameras of smaller size are often used, which makes it difficult for mobile phone photography to capture pictures with a bokeh effect. A software algorithm is therefore introduced into the camera program of the smartphone to simulate physical bokeh, that is, blurring rendering and other processing are performed on the captured original image. However, when light spots exist in the captured original image, the light spot areas are recognized inaccurately, and the blurring rendering result has poor brightness gradation and little color information.
In a first aspect, at least one embodiment of the present disclosure provides an image processing method; please refer to fig. 1, which illustrates the flow of the method, including steps S101 to S103.
The method can be applied to a terminal device, for example to an algorithm simulating physical bokeh in the camera program of the terminal device. The terminal device may have an image acquisition device such as a camera, which can capture images, and the camera program of the terminal device can control parameters of the image acquisition device during image capture. The method can be applied to the scenario of taking a picture with the camera program of the terminal device: the image captured by the image acquisition device is subjected to blurring rendering and other processing by the method, so as to obtain the image output by the camera program, that is, the image the user obtains when photographing with the camera program.
In step S101, a first image and a second image captured by an image acquisition device for the same scene are acquired, wherein the first image is a normally exposed image and the second image is an underexposed image.
When the camera program of the terminal device is started and the user triggers its shooting function, the image acquisition device can successively capture the first image and the second image of the same scene. The same scene is the scene the user aims at when photographing, that is, the real scene within the field of view of the image acquisition device. It can be understood that this step does not limit the acquisition order of the first image and the second image: the first image may be captured first and then the second image, the second image may be captured first and then the first image, or the two images may be captured simultaneously by different sub-cameras of the image acquisition device.
The first image is a normally exposed image, that is, an image captured with the normal (default) exposure when the camera program takes a picture; the second image is an underexposed image, that is, an image captured with an exposure smaller than the normal exposure. It can be understood that the ratio of the exposure used for the underexposed image to the exposure used for the normally exposed image may be set in advance, for example 80%, 75%, or 60%; the exposure amount can be controlled by controlling the exposure time.
In step S102, an effective light spot area is determined among at least one first light spot area in the first image according to at least one second light spot area in the second image.
The first light spot area in the first image and the second light spot area in the second image may be acquired in advance. That is, before step S102, spot detection may be performed on the first image and the second image, respectively, to obtain at least one first spot area in the first image and at least one second spot area in the second image.
The purpose of light spot detection is to detect spot areas in an image. Since the difference between a spot area and other areas is mainly reflected in brightness, spot detection can be performed on the first image and the second image in the YUV domain. If the first image and the second image are RGB images, they may be converted from the RGB domain to the YUV domain before detection, and spot detection may then be completed using the Y channel (i.e., the luminance channel) of each image.
Alternatively, spot detection may be performed using a brightness threshold, an energy function, deep learning, or the like. Taking the brightness-threshold method as an example, spot detection can be performed on the first image and the second image as follows: determining a pixel point with brightness higher than a first brightness threshold in the first image as a first light spot pixel point, and determining at least one connected domain composed of the first light spot pixel points as a first light spot area; and determining a pixel point with brightness higher than a second brightness threshold in the second image as a second light spot pixel point, and determining at least one connected area composed of the second light spot pixel points as a second light spot area.
The brightness value of a pixel point is its value in the Y channel. By traversing the brightness values of the pixels of the first image, pixels whose brightness value is higher than the first brightness threshold can be determined as first light spot pixels and the remaining pixels as non-spot pixels; by traversing the brightness values of the pixels of the second image, pixels whose brightness value is higher than the second brightness threshold can be determined as second light spot pixels and the remaining pixels as non-spot pixels.
The connected domains composed of first light spot pixels and of second light spot pixels can be determined using a four-connectivity or eight-connectivity criterion. Each independent connected domain in the first image may then be assigned a unique label value, such as a number; this label value is the label of the first light spot area determined from that connected domain. Likewise, each independent connected domain in the second image may be assigned a unique label value, which is the label of the second light spot area determined from that connected domain. Label values may be assigned to connected domains using a two-pass or seed-filling algorithm.
Because a light spot is formed by an out-of-focus point light source, a large-area light source outside the focal plane does not form a light spot. Therefore, when determining the first light spot areas and the second light spot areas, the area of each connected domain can additionally be measured, connected domains with excessive area can be excluded, and only connected domains whose area is within a reasonable range are determined as light spot areas. This prevents large-area light sources from being falsely identified as light spot areas and improves the accuracy of light spot detection. The area of a connected domain can be represented by the number of pixel points it contains. Accordingly, at least one connected domain that is composed of first light spot pixel points and whose number of pixel points is within a preset number range can be determined as a first light spot area, and at least one connected area that is composed of second light spot pixel points and whose number of pixel points is within a preset number range can be determined as a second light spot area. The preset number range may be set in advance, for example as being below a preset number threshold (e.g., 300).
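As an illustration of this thresholding and connected-domain filtering, the following minimal sketch uses OpenCV and NumPy (an assumption; the publication names no library). The function name and threshold argument are hypothetical, and the 300-pixel cap simply follows the example value above:

import cv2
import numpy as np

def detect_spot_areas(y, luma_threshold, max_pixels=300):
    # Brightness-threshold spot detection sketch on a Y (luminance) channel:
    # pixels brighter than the threshold are candidate spot pixels.
    mask = (y > luma_threshold).astype(np.uint8)

    # Label connected domains (8-connectivity) and keep only those whose
    # pixel count is within the preset range, excluding large-area sources.
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    spot_labels = [label for label in range(1, num_labels)  # label 0 is the background
                   if stats[label, cv2.CC_STAT_AREA] <= max_pixels]
    return labels, spot_labels

For an RGB/BGR input, the Y channel can be obtained first, e.g. y = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)[:, :, 0].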
The position of the image acquisition device may change between capturing the first image and capturing the second image, for example because of hand shake when the user holds the terminal device. Such a position change may cause the first image and the second image not to coincide exactly but to deviate slightly from each other. Therefore, before the effective light spot areas are determined, the first image and the second image may be aligned. The alignment can be completed using the Y channel in the YUV domain, that is, by aligning the Y channel of the first image with the Y channel of the second image. For example, the brightness of the second image is first raised by adjusting its Y-channel histogram, the first image and the brightened second image are then aligned using optical flow, and the alignment is finally applied according to the result; for example, if the brightened second image matches the first image when shifted 15 pixels up and 20 pixels to the right, then the second image can be shifted 15 pixels up and 20 pixels to the right to align it with the first image. Through this alignment, pixels at the same position in the first image and the second image can be guaranteed to correspond to the same point in the real scene, which improves the accuracy of screening the effective light spot areas.
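A minimal sketch of this alignment step, continuing the cv2/np imports above: histogram equalization stands in for the Y-channel histogram brightening, and the dense optical-flow field is reduced to a single global shift, which is a simplification of the alignment the text describes:

def align_underexposed(y_normal, y_under):
    # Brighten the underexposed Y channel (histogram equalization as a
    # stand-in for the Y-channel histogram adjustment described above).
    y_bright = cv2.equalizeHist(y_under)

    # Dense optical flow from the brightened frame to the normal frame,
    # reduced to one global displacement (median of the flow field).
    flow = cv2.calcOpticalFlowFarneback(y_bright, y_normal, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dx = float(np.median(flow[:, :, 0]))
    dy = float(np.median(flow[:, :, 1]))

    # Translate the underexposed frame by that shift to align it with EV0.
    h, w = y_under.shape
    m = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(y_under, m, (w, h))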
In a possible embodiment, each first light spot area, among the at least one first light spot area, that belongs to a first intersection may be determined as an effective light spot area, where the first intersection is the intersection of the at least one first light spot area and the at least one second light spot area. For example, the first intersection may be determined from the position coordinates of the first light spot areas in the first image and of the second light spot areas in the second image: a first light spot area and a second light spot area with the same position coordinates are added to the first intersection. As a further example, the first image labeled with the first light spot areas and the second image labeled with the second light spot areas may be superimposed, and each first light spot area that overlaps a second light spot area of the second image is determined as an effective light spot area.
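The intersection-based screening can then be sketched as a mask-overlap test on the aligned label maps (the function names continue the hypothetical helpers above):

def screen_effective_spots(labels1, spot_labels1, labels2, spot_labels2):
    # Binary mask covering all second light spot areas.
    mask2 = np.isin(labels2, spot_labels2)

    # A first light spot area is effective if, after alignment, it overlaps
    # some second light spot area (i.e., it belongs to the first intersection).
    return [label for label in spot_labels1
            if np.any((labels1 == label) & mask2)]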
In step S103, blurring rendering processing is performed on the first image according to the effective light spot area.
When performing blurring rendering processing on the first image, the depth information of the current imaging scene can first be computed through the multi-camera system of the terminal device or through a deep learning algorithm; the blur radius corresponding to each pixel outside the focal plane is then computed from its depth information; finally, the picture with the bokeh effect is generated according to the blur radius corresponding to each pixel.
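The publication does not spell out the rendering formula, so the sketch below assumes a simple model in which the blur radius grows linearly with distance from the focal plane (depth normalized to [0, 1]) and the output is blended from progressively blurred copies; this is an illustration, not the patent's rendering method:

def render_bokeh(image_bgr, depth, focus_depth, max_radius=15):
    # Blur radius grows with the pixel's distance from the focal plane.
    radius = np.clip(np.abs(depth - focus_depth) * max_radius, 0, max_radius)

    # Blend progressively blurred copies: pixels with a larger radius take
    # their value from a more strongly blurred version of the image.
    out = image_bgr.copy()
    for r in range(1, max_radius + 1, 2):
        k = 2 * r + 1
        blurred = cv2.GaussianBlur(image_bgr, (k, k), 0)
        out[radius >= r] = blurred[radius >= r]
    return out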
According to the image processing method provided by the present disclosure, since the first image is a normally exposed image and the second image is an underexposed image, areas falsely identified as light spots in the normally exposed image, such as light-colored objects, are not identified as light spot areas in the underexposed image. The effective light spot areas screened from the first light spot areas using the second light spot areas are therefore accurate, with the falsely identified first light spot areas removed, so that blurring rendering processing can be performed on the first image on the basis of the determined effective light spot areas, imitating the physical bokeh function of a professional camera. When the method is applied to the camera program of a terminal device, the functions of the camera program become richer and closer to the photographing effect of a professional camera.
Specifically, by acquiring a first image and a second image captured by an image acquisition device for the same scene, the present disclosure can perform light spot detection on the first image and the second image to obtain at least one first light spot area in the first image and at least one second light spot area in the second image. The second light spot areas can then be used to determine the effective light spot areas in the first image, that is, some or all of the first light spot areas are determined to be effective light spot areas, and finally blurring rendering processing can be performed on the first image according to the effective light spot areas. Because the first image is a normally exposed image and the second image is an underexposed image, areas falsely identified as light spots in the normally exposed image, such as light-colored objects, are not identified as light spot areas in the underexposed image, so the effective light spot areas screened from the first light spot areas using the second light spot areas are more accurate: the falsely identified first light spot areas are removed, a better blurring rendering effect is achieved, and non-spot areas are not rendered with a spot effect. When the method is applied to the camera program of a terminal device, the camera program can accurately identify the light spot areas in the image to be rendered and thus obtain a better blurring rendering effect.
In some embodiments of the present disclosure, color information in a spot area is easily lost, and color information in an overexposed spot area is even more easily lost: the U and V channels of an overexposed spot area may show a speckled distribution, that is, U and V channel values all close to 128. Color information in an effective light spot area of the first image may therefore be missing. After the effective light spot area is determined, among the at least one first light spot area in the first image, according to the at least one second light spot area in the second image, the color information within the effective light spot areas can be restored as follows. First, a target pixel point of each effective light spot area is determined, the target pixel point being the pixel point with the highest color saturation in the effective light spot area; such pixels mostly occur near the boundary of the effective light spot area, that is, in the halo. For example, the color saturation of each pixel point in each effective light spot area can be judged from its values in the U and V channels. Then, in each effective light spot area, the color parameter of each pixel point is determined according to the color parameter of the target pixel point, the color parameter being used for the blurring rendering processing of the first image.
For example, in the case that the color parameter difference between each pixel point in the i-th effective light spot area and the target pixel point of the i-th effective light spot area is smaller than a preset difference threshold, the color parameter of each pixel point is adjusted according to the color parameter of the target pixel point of the i-th effective light spot area, where i is an integer greater than 0 and not greater than N, and N is the total number of effective light spot areas in the first image. In this case the original color within the effective light spot area is a single color, namely the color of the target pixel point, so the color parameter of the target pixel point can be used to adjust the color parameter of each pixel point. In a specific adjustment, the U-channel value of each pixel point can be directly replaced with the U-channel value of the target pixel point, and the V-channel value of each pixel point with the V-channel value of the target pixel point; alternatively, for each pixel point, a random value between the pixel point's U-channel value and the target pixel point's U-channel value can replace the pixel point's U-channel value, and a random value between the pixel point's V-channel value and the target pixel point's V-channel value can replace the pixel point's V-channel value.
As another example, in the case that the color parameter difference between at least one pixel point in the j-th effective light spot area and the target pixel point of the j-th effective light spot area is greater than the preset difference threshold, the color parameter of each pixel point in the j-th effective light spot area is determined to remain unchanged, where j is an integer greater than 0 and not greater than N. In this case the original colors within the effective light spot area comprise at least two colors, and if the color parameter of the target pixel point were used to adjust the color parameters of all pixel points, some pixels in the effective light spot area would be adjusted to colors different from their original ones.
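A sketch of this color-recovery rule for a single effective light spot area follows; the saturation measure (U/V deviation from the neutral value 128) and the threshold value are assumptions consistent with the YUV-domain description above:

def restore_spot_color(yuv, region_mask, diff_threshold=20):
    u = yuv[:, :, 1].astype(np.int16)
    v = yuv[:, :, 2].astype(np.int16)

    # Target pixel: the most saturated pixel in the area, measured here as
    # the (U, V) deviation from the neutral chroma value 128 (an assumption).
    sat = np.abs(u - 128) + np.abs(v - 128)
    sat[~region_mask] = -1
    ty, tx = np.unravel_index(np.argmax(sat), sat.shape)

    # If every pixel's color differs from the target by less than the
    # threshold, the spot was a single color: propagate the target chroma.
    diff = np.abs(u - u[ty, tx]) + np.abs(v - v[ty, tx])
    if np.max(diff[region_mask]) < diff_threshold:
        yuv[region_mask, 1] = np.uint8(u[ty, tx])
        yuv[region_mask, 2] = np.uint8(v[ty, tx])
    # Otherwise at least two colors are present: leave the area unchanged.
    return yuv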
In this embodiment, by determining the target pixel point of each effective light spot area, the color information lost by pixel points within the effective light spot area can be restored, so that in the image obtained by blurring rendering according to the effective light spot areas, the colors within the spot areas remain true to life and color loss is avoided.
In some embodiments of the present disclosure, because the imaging system of a terminal device has low latitude (dynamic range), it is difficult for the brightness within an effective light spot area to show a gradation matching the actual brightness. After the effective light spot area is determined, among the at least one first light spot area in the first image, according to the at least one second light spot area in the second image, the brightness gradation within the effective light spot areas can therefore be enhanced as follows. First, a target brightness parameter of each effective light spot area is determined, the target brightness parameter being the brightness parameter of the pixel point with the minimum brightness parameter in the effective light spot area; for example, the brightness parameter of a pixel point is its value in the Y channel. Then, in each effective light spot area, the brightness parameter of each pixel point is adjusted according to the target brightness parameter, the brightness parameter being used for the blurring rendering processing of the first image. For example, since the human eye's perception of luminance is nonlinear, the brightness parameters of the pixel points can be adjusted by a formula characterizing a gamma curve, in which:
Y' is the brightness parameter adjustment result of the pixel point, Y is the original brightness parameter of the pixel point, and Y_min is the target brightness parameter.
In this embodiment, the energy response values of the pixel points in the Y channel are remapped according to the brightness parameters of the pixel points in the effective light spot area, enriching the brightness gradation, so that in the image obtained by blurring rendering according to the effective light spot areas, the brightness within the spot areas has a sense of realism and gradation.
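The gamma-curve formula itself is rendered as an image in the original publication, so the sketch below assumes one common gamma form consistent with the stated variables (Y, Y_min, and the adjusted Y'): the spot's luminance is normalized against its minimum brightness and raised to an illustrative gamma exponent. This is an assumed stand-in, not the patent's exact formula:

def remap_spot_luma(yuv, region_mask, gamma=0.6):
    y = yuv[:, :, 0].astype(np.float32)
    y_min = float(np.min(y[region_mask]))  # target brightness parameter

    # Normalize against the spot's minimum brightness, apply the assumed
    # gamma curve, and map back to 8-bit to widen the brightness gradation.
    t = np.clip((y[region_mask] - y_min) / max(255.0 - y_min, 1.0), 0.0, 1.0)
    y_new = y_min + (255.0 - y_min) * np.power(t, gamma)
    yuv[region_mask, 0] = np.clip(y_new, 0, 255).astype(np.uint8)
    return yuv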
Referring to fig. 2, a complete flow of the image processing method provided by the present disclosure is illustrated. As shown in the figure, the normally exposed frame EV0 and the underexposed frame EV- captured by the image acquisition device are first acquired; EV0 and EV- are each converted to the YUV domain; EV0 and EV- are then aligned; EV0 undergoes intensity-threshold detection to obtain the first light spot pixel points, followed by connected-domain detection to obtain the first light spot areas; EV- undergoes the same detection as EV0 to obtain the second light spot areas, which are used to screen the first light spot areas, that is, the intersection of the two is retained; it is then judged whether the remaining first light spot areas need color enhancement, and color enhancement is applied to overexposed (blown-out) light spots when they are present; finally, spot energy value remapping is performed on the remaining first light spot areas to improve their brightness gradation, yielding the image to be rendered, on which blurring rendering can finally be performed.
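Tying the hypothetical helpers above together, an end-to-end driver for this fig. 2 flow might look like the following sketch; the thresholds are illustrative and the depth map is assumed to be supplied externally (e.g., from the multi-camera system or a deep learning algorithm, per the text):

def process(ev0_bgr, evm_bgr, depth, focus_depth):
    # YUV conversion, then align the underexposed Y channel (EV-) to EV0.
    yuv0 = cv2.cvtColor(ev0_bgr, cv2.COLOR_BGR2YUV)
    yuvm = cv2.cvtColor(evm_bgr, cv2.COLOR_BGR2YUV)
    y_under = align_underexposed(yuv0[:, :, 0], yuvm[:, :, 0])

    # Intensity-threshold and connected-domain detection in both frames.
    labels0, spots0 = detect_spot_areas(yuv0[:, :, 0], luma_threshold=230)
    labelsm, spotsm = detect_spot_areas(y_under, luma_threshold=200)

    # Keep only EV0 spot areas confirmed by the underexposed frame.
    effective = screen_effective_spots(labels0, spots0, labelsm, spotsm)

    # Per-spot color enhancement and spot energy value remapping.
    for label in effective:
        mask = labels0 == label
        yuv0 = restore_spot_color(yuv0, mask)
        yuv0 = remap_spot_luma(yuv0, mask)

    # Finally, blurring rendering on the processed image.
    return render_bokeh(cv2.cvtColor(yuv0, cv2.COLOR_YUV2BGR), depth, focus_depth)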
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, referring to fig. 3, the apparatus includes:
an acquiring module 301, configured to acquire a first image and a second image captured by an image acquisition device for the same scene, wherein the first image is a normally exposed image and the second image is an underexposed image;
a determining module 302, configured to determine an effective light spot area among at least one first light spot area in the first image according to at least one second light spot area in the second image;
and a rendering module 303, configured to perform blurring rendering processing on the first image according to the effective light spot area.
In some embodiments of the disclosure, the determining module is specifically configured to:
determining each first light spot area, among the at least one first light spot area, that belongs to a first intersection as an effective light spot area, wherein the first intersection is the intersection of the at least one first light spot area and the at least one second light spot area.
In some embodiments of the present disclosure, a color module is further included for:
after determining an effective light spot area among at least one first light spot area in the first image according to at least one second light spot area in the second image, determining a target pixel point of each effective light spot area, wherein the target pixel point is the pixel point with the highest color saturation in the effective light spot area;
and, in each effective light spot area, determining a color parameter of each pixel point according to the color parameter of the target pixel point, wherein the color parameter is used for performing blurring rendering processing on the first image.
In some embodiments of the present disclosure, the color module is configured to, in each of the effective light spot areas, determine a color parameter of each pixel according to a color parameter of the target pixel, and specifically configured to:
in the case that the color parameter difference between each pixel point in the i-th effective light spot area and the target pixel point of the i-th effective light spot area is smaller than a preset difference threshold, adjusting the color parameter of each pixel point according to the color parameter of the target pixel point of the i-th effective light spot area, wherein i is an integer greater than 0 and not greater than N, and N is the total number of effective light spot areas in the first image;
and, in the case that the color parameter difference between at least one pixel point in the j-th effective light spot area and the target pixel point of the j-th effective light spot area is greater than the preset difference threshold, determining that the color parameter of each pixel point in the j-th effective light spot area remains unchanged, wherein j is an integer greater than 0 and not greater than N.
In some embodiments of the present disclosure, the apparatus further comprises a brightness module for:
after determining an effective light spot area among at least one first light spot area in the first image according to at least one second light spot area in the second image, determining a target brightness parameter of each effective light spot area, wherein the target brightness parameter is the brightness parameter of the pixel point with the minimum brightness parameter in the effective light spot area;
and, in each effective light spot area, adjusting the brightness parameter of each pixel point according to the target brightness parameter, wherein the brightness parameter is used for performing blurring rendering processing on the first image.
In some embodiments of the present disclosure, the apparatus further comprises a detection module for:
performing light spot detection on the first image and the second image respectively to obtain at least one first light spot area in the first image and at least one second light spot area in the second image.
In some embodiments of the present disclosure, the detection module is specifically configured to:
perform light spot detection on the first image and the second image respectively in the YUV domain to obtain at least one first light spot area in the first image and at least one second light spot area in the second image.
In some embodiments of the present disclosure, the detection module is specifically configured to:
determining a pixel point with brightness higher than a first brightness threshold in the first image as a first light spot pixel point, and determining at least one connected domain composed of the first light spot pixel points as a first light spot area;
and determining a pixel point with brightness higher than a second brightness threshold in the second image as a second light spot pixel point, and determining at least one connected area composed of the second light spot pixel points as a second light spot area.
In some embodiments of the present disclosure, the detection module is configured to, when determining at least one connected domain composed of the first light spot pixels as the first light spot region, specifically:
determining, as a first light spot area, at least one connected domain that is composed of first light spot pixel points and whose number of pixel points is within a preset number range;
and the determining of the at least one connected area composed of the second light spot pixel points as a second light spot area includes:
determining, as a second light spot area, at least one connected area that is composed of second light spot pixel points and whose number of pixel points is within a preset number range.
The specific manner in which the various modules perform the operations in relation to the apparatus of the above embodiments has been described in detail in relation to the embodiments of the method of the first aspect and will not be described in detail here.
In accordance with a third aspect of embodiments of the present disclosure, reference is made to fig. 4, which schematically illustrates a block diagram of an electronic device. For example, apparatus 400 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 4, apparatus 400 may include one or more of the following components: a processing component 402, a memory 404, a power supply component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operation of the apparatus 400, such as operations associated with display, telephone calls, data communications, camera program operations, and recording operations. The processing element 402 may include one or more processors 420 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
Memory 404 is configured to store various types of data to support operations at device 400. Examples of such data include instructions for any application or method operating on the apparatus 400, contact data, phonebook data, messages, pictures, videos, and the like. The memory 404 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 406 provides power to the various components of the device 400. The power components 406 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 400.
The multimedia component 408 includes a screen that provides an output interface between the apparatus 400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 408 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 400 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 further includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 414 includes one or more sensors for providing status assessments of various aspects of the apparatus 400. For example, the sensor assembly 414 may detect the on/off state of the apparatus 400 and the relative positioning of components, such as the display and keypad of the apparatus 400; the sensor assembly 414 may also detect a change in position of the apparatus 400 or a component of the apparatus 400, the presence or absence of user contact with the apparatus 400, the orientation or acceleration/deceleration of the apparatus 400, and a change in temperature of the apparatus 400. The sensor assembly 414 may also include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, or 5G, or a combination thereof. In one exemplary embodiment, the communication component 416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for performing the image processing method described above.
In a fourth aspect, the present disclosure also provides, in an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the apparatus 400 to perform the image processing method described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

  1. An image processing method, the method comprising:
    acquiring a first image and a second image captured by an image acquisition device for the same scene, wherein the first image is a normally exposed image and the second image is an underexposed image;
    determining an effective light spot area among at least one first light spot area in the first image according to at least one second light spot area in the second image;
    and performing blurring rendering processing on the first image according to the effective light spot area.
  2. The image processing method according to claim 1, wherein said determining an effective spot area in at least one first spot area in the first image from at least one second spot area in the second image comprises:
    determining each first light spot area, among the at least one first light spot area, that belongs to a first intersection as an effective light spot area, wherein the first intersection is the intersection of the at least one first light spot area and the at least one second light spot area.
  3. The image processing method according to claim 1, further comprising, after said determining an effective light spot area among at least one first light spot area in the first image according to at least one second light spot area in the second image:
    determining a target pixel point of each effective light spot area, wherein the target pixel point is the pixel point with the highest color saturation in the effective light spot area;
    and, in each effective light spot area, determining a color parameter of each pixel point according to the color parameter of the target pixel point, wherein the color parameter is used for performing blurring rendering processing on the first image.
  4. The image processing method according to claim 3, wherein the determining, in each effective light spot area, a color parameter of each pixel point according to the color parameter of the target pixel point comprises:
    in a case where the color parameter difference between each pixel point in an i-th effective light spot area and the target pixel point of the i-th effective light spot area is smaller than a preset difference threshold, adjusting the color parameter of each pixel point according to the color parameter of the target pixel point of the i-th effective light spot area, wherein i is an integer greater than 0 and not greater than N, and N is the total number of effective light spot areas in the first image; and
    in a case where the color parameter difference between at least one pixel point in a j-th effective light spot area and the target pixel point of the j-th effective light spot area is greater than the preset difference threshold, keeping the color parameter of each pixel point in the j-th effective light spot area unchanged, wherein j is an integer greater than 0 and not greater than N.
  5. The image processing method according to claim 1, further comprising, after the determining an effective light spot area among at least one first light spot area in the first image according to at least one second light spot area in the second image:
    determining a target brightness parameter of each effective light spot area, wherein the target brightness parameter is the brightness parameter of the pixel point with the minimum brightness parameter in the effective light spot area; and
    in each effective light spot area, adjusting the brightness parameter of each pixel point according to the target brightness parameter, wherein the brightness parameter is used for performing blurring rendering processing on the first image.
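Likewise, a sketch of claim 5, assuming the Y plane of a YUV image as the brightness parameter. The claim leaves the exact adjustment open, so a simple blend toward the area minimum (factor 0.5) is assumed here.

```python
import numpy as np

def flatten_spot_brightness(y_plane: np.ndarray, region: np.ndarray, weight: float = 0.5) -> None:
    """Blend the brightness of one effective light spot area toward the area minimum, in place.

    y_plane: HxW float luma plane; region: boolean mask of one effective light spot area.
    """
    target = float(y_plane[region].min())     # target brightness parameter: the area's minimum
    y_plane[region] = (1.0 - weight) * y_plane[region] + weight * target
```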
  6. The image processing method according to claim 1, further comprising, before the determining an effective light spot area among at least one first light spot area in the first image according to at least one second light spot area in the second image:
    performing light spot detection on the first image and the second image respectively to obtain the at least one first light spot area in the first image and the at least one second light spot area in the second image.
  7. The image processing method according to claim 6, wherein the performing light spot detection on the first image and the second image respectively to obtain at least one first light spot area in the first image and at least one second light spot area in the second image comprises:
    performing light spot detection on the first image and the second image respectively in the YUV domain to obtain the at least one first light spot area in the first image and the at least one second light spot area in the second image.
  8. The image processing method according to claim 6, wherein the performing light spot detection on the first image and the second image to obtain at least one first light spot area in the first image and at least one second light spot area in the second image comprises:
    determining pixel points whose brightness is higher than a first brightness threshold in the first image as first light spot pixel points, and determining at least one connected domain composed of the first light spot pixel points as a first light spot area; and
    determining pixel points whose brightness is higher than a second brightness threshold in the second image as second light spot pixel points, and determining at least one connected domain composed of the second light spot pixel points as a second light spot area.
  9. The image processing method according to claim 8, wherein the determining at least one connected domain composed of the first light spot pixel points as a first light spot area comprises:
    determining at least one connected domain, which is composed of the first light spot pixel points and whose number of pixel points is within a preset number range, as a first light spot area;
    and the determining at least one connected domain composed of the second light spot pixel points as a second light spot area comprises:
    determining at least one connected domain, which is composed of the second light spot pixel points and whose number of pixel points is within the preset number range, as a second light spot area.
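A sketch of the detection pipeline in claims 6-9 in a single function: convert to the YUV domain, threshold the luma plane, and keep only connected domains whose pixel count falls within a preset range. OpenCV is assumed; the threshold values and size range below are placeholders, not values from the disclosure.

```python
import cv2
import numpy as np

def detect_spot_areas(bgr: np.ndarray, lum_thresh: int, min_px: int, max_px: int) -> np.ndarray:
    """Return a uint8 binary mask (255 = spot pixel) of light spot areas in one image."""
    y_plane = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)[..., 0]                  # YUV domain, claim 7
    _, bright = cv2.threshold(y_plane, lum_thresh, 255, cv2.THRESH_BINARY)  # brightness threshold, claim 8
    num, labels, stats, _ = cv2.connectedComponentsWithStats(bright)
    mask = np.zeros_like(bright)
    for i in range(1, num):                                     # label 0 is the background
        if min_px <= stats[i, cv2.CC_STAT_AREA] <= max_px:      # preset number range, claim 9
            mask[labels == i] = 255
    return mask

# Usage under the two-exposure scheme of claim 1; thresholds are illustrative:
# first_mask  = detect_spot_areas(normal_bgr, 230, 5, 2000)
# second_mask = detect_spot_areas(under_bgr, 200, 5, 2000)
```

The size filter discards single-pixel noise and large washed-out regions, leaving compact bright blobs as candidate light spots.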
  10. An image processing apparatus, characterized in that the apparatus comprises:
    an acquisition module, configured to acquire a first image and a second image captured by an image acquisition device for the same scene, wherein the first image is a normally exposed image and the second image is an underexposed image;
    a determining module, configured to determine an effective light spot area among at least one first light spot area in the first image according to at least one second light spot area in the second image; and
    a rendering module, configured to perform blurring rendering processing on the first image according to the effective light spot area.
  11. The image processing apparatus according to claim 10, wherein the determining module is specifically configured to:
    determine, among the at least one first light spot area, a first light spot area belonging to a first intersection as an effective light spot area, wherein the first intersection is the intersection of the at least one first light spot area and the at least one second light spot area.
  12. The image processing apparatus according to claim 10, further comprising a color module configured to:
    after the effective light spot area is determined among the at least one first light spot area in the first image according to the at least one second light spot area in the second image, determine a target pixel point of each effective light spot area, wherein the target pixel point is the pixel point with the highest color saturation in the effective light spot area; and
    in each effective light spot area, determine a color parameter of each pixel point according to a color parameter of the target pixel point, wherein the color parameter is used for performing blurring rendering processing on the first image.
  13. The image processing apparatus according to claim 12, wherein the color module, when determining, in each effective light spot area, the color parameter of each pixel point according to the color parameter of the target pixel point, is specifically configured to:
    in a case where the color parameter difference between each pixel point in an i-th effective light spot area and the target pixel point of the i-th effective light spot area is smaller than a preset difference threshold, adjust the color parameter of each pixel point according to the color parameter of the target pixel point of the i-th effective light spot area, wherein i is an integer greater than 0 and not greater than N, and N is the total number of effective light spot areas in the first image; and
    in a case where the color parameter difference between at least one pixel point in a j-th effective light spot area and the target pixel point of the j-th effective light spot area is greater than the preset difference threshold, keep the color parameter of each pixel point in the j-th effective light spot area unchanged, wherein j is an integer greater than 0 and not greater than N.
  14. The image processing apparatus according to claim 10, further comprising a brightness module configured to:
    after the effective light spot area is determined among the at least one first light spot area in the first image according to the at least one second light spot area in the second image, determine a target brightness parameter of each effective light spot area, wherein the target brightness parameter is the brightness parameter of the pixel point with the minimum brightness parameter in the effective light spot area; and
    in each effective light spot area, adjust the brightness parameter of each pixel point according to the target brightness parameter, wherein the brightness parameter is used for performing blurring rendering processing on the first image.
  15. The image processing apparatus according to claim 10, further comprising a detection module configured to:
    perform light spot detection on the first image and the second image respectively to obtain the at least one first light spot area in the first image and the at least one second light spot area in the second image.
  16. The image processing apparatus according to claim 15, wherein the detection module is specifically configured to:
    perform light spot detection on the first image and the second image respectively in the YUV domain to obtain the at least one first light spot area in the first image and the at least one second light spot area in the second image.
  17. The image processing apparatus according to claim 15, wherein the detection module is specifically configured to:
    determine pixel points whose brightness is higher than a first brightness threshold in the first image as first light spot pixel points, and determine at least one connected domain composed of the first light spot pixel points as a first light spot area; and
    determine pixel points whose brightness is higher than a second brightness threshold in the second image as second light spot pixel points, and determine at least one connected domain composed of the second light spot pixel points as a second light spot area.
  18. The image processing apparatus according to claim 17, wherein the detection module, when determining at least one connected domain composed of the first light spot pixel points as a first light spot area, is specifically configured to:
    determine at least one connected domain, which is composed of the first light spot pixel points and whose number of pixel points is within a preset number range, as a first light spot area;
    and, when determining at least one connected domain composed of the second light spot pixel points as a second light spot area, is specifically configured to:
    determine at least one connected domain, which is composed of the second light spot pixel points and whose number of pixel points is within the preset number range, as a second light spot area.
  19. An electronic device, comprising a processor and a memory for storing computer instructions executable on the processor, wherein the processor is configured to execute the computer instructions to implement the image processing method of any one of claims 1 to 9.
  20. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the image processing method of any one of claims 1 to 9.

Applications Claiming Priority (1)

PCT/CN2022/098240 (WO2023236209A1), priority date 2022-06-10, filing date 2022-06-10: Image processing method and apparatus, electronic device, and storage medium

Publications (1)

CN117616777A, published 2024-02-27

Family ID: 89117464

Family Applications (1)

CN202280004273.6A (pending), filed 2022-06-10: Image processing method, device, electronic equipment and storage medium

Country Status (2)

CN: CN117616777A
WO: WO2023236209A1


Also Published As

WO2023236209A1, published 2023-12-14


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination