CN114500843A - Shooting method, shooting device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN114500843A
CN114500843A (application CN202210098796.3A)
Authority
CN
China
Prior art keywords
color
image
parameter
exposure
exposure value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210098796.3A
Other languages
Chinese (zh)
Inventor
孙少辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210098796.3A priority Critical patent/CN114500843A/en
Publication of CN114500843A publication Critical patent/CN114500843A/en
Pending legal-status Critical Current

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 — Control of cameras or camera modules
    • H04N 23/70 — Circuitry for compensating brightness variation in the scene
    • H04N 23/71 — Circuitry for evaluating the brightness variation
    • H04N 23/76 — Circuitry for compensating brightness variation in the scene by influencing the image signals

Abstract

The application discloses a shooting method, a shooting device, a storage medium and an electronic device, and relates to the technical field of digital image processing. First, a first image captured by a camera is acquired, and a target object region in the first image is divided into regions of different colors. Then, first exposure values corresponding to the color regions are determined, and a second exposure value for the first image is determined from the first exposure values. Finally, the camera is controlled to shoot based on the second exposure value, yielding a second image captured by the camera. Because the image is divided into color regions by color and the exposure value of the whole image is derived from the exposure value of each color region, the resulting exposure value accounts for its effect on each color in the image; shooting with this exposure value allows every color to be imaged accurately, so the colors of a multi-color image are rendered more faithfully.

Description

Shooting method, shooting device, storage medium and electronic equipment
Technical Field
The present application relates to the field of digital image processing technologies, and in particular, to a shooting method, an apparatus, a storage medium, and an electronic device.
Background
With the development of the modern internet and increasingly capable electronic devices, image-capture applications have become ever more widespread. However, compared with the real scene perceived by the naked eye, images captured by a device differ in color, brightness, and other respects. To improve image quality, electronic devices therefore generally apply processing such as exposure compensation to restore the colors of the image and improve imaging quality.
Disclosure of Invention
The application provides a shooting method, a shooting device, a storage medium and an electronic device, which can address the problems in the related art of inaccurate exposure and large differences between the colors of captured images and the actual scene.
In a first aspect, an embodiment of the present application provides a shooting method, where the method includes:
acquiring a first image acquired by a camera, and dividing a target object area in the first image into areas with different colors;
determining first exposure values corresponding to the color regions, and determining second exposure values of the first image according to the first exposure values;
and controlling the camera to shoot based on the second exposure value to obtain a second image captured by the camera.
In a second aspect, an embodiment of the present application provides a shooting device, including:
a region dividing module, configured to acquire a first image captured by a camera and divide a target object region in the first image into different color regions;
the exposure calculation module is used for determining first exposure values corresponding to the color regions and determining second exposure values of the first image according to the first exposure values;
and the image shooting module is used for controlling the camera to shoot based on the second exposure value so as to obtain a second image acquired by the camera.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the steps of the above-mentioned method.
In a fourth aspect, embodiments of the present application provide an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program being adapted to be loaded by the processor and to perform the steps of the above-mentioned method.
The beneficial effects brought by the technical scheme provided by some embodiments of the application at least comprise:
the application provides a shooting method, which comprises the steps of firstly, obtaining a first image collected by a camera, and dividing a target object area in the first image into areas with different colors; then determining first exposure values corresponding to the color regions, and determining second exposure values of the first image according to the first exposure values; and finally, controlling the camera to shoot based on the second exposure value to obtain a second image acquired by the camera. Because the image is divided into different color areas according to different colors, and the exposure value of the whole image is finally determined according to the exposure value corresponding to each color area, the influence on different colors in the image is considered by the determined exposure value, and the imaging of various colors in the image can be accurate by shooting the image based on the exposure value, so that the imaging color of the multi-color image is more real.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is an exemplary system architecture diagram of a shooting method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an algorithm for digital image processing according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a shooting method according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a shooting method according to another embodiment of the present application;
fig. 5 is a schematic diagram of a user terminal interaction provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of an algorithm for digital image processing according to another embodiment of the present application;
fig. 7 is a schematic flowchart of a photographing method according to another embodiment of the present application;
fig. 8 is a schematic flowchart illustrating a color parameter adjustment method according to an embodiment of the present disclosure;
fig. 9 is a block diagram of a shooting device according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the features and advantages of the present application more obvious and understandable, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
Referring to fig. 1, fig. 1 is a schematic diagram of an exemplary system architecture of a shooting method according to an embodiment of the present disclosure.
As shown in fig. 1, the system architecture may include an electronic device 101, a network 102, and a server 103. Network 102 is the medium used to provide communication links between electronic device 101 and server 103. Network 102 may include various types of wired or wireless communication links; for example, a wired communication link may include an optical fiber, a twisted pair, or a coaxial cable, and a wireless communication link may include a Bluetooth link, a Wireless-Fidelity (Wi-Fi) link, a microwave link, or the like.
The electronic device 101 may interact with the server 103 via the network 102 to receive messages from the server 103 or to send messages to the server 103, or the electronic device 101 may interact with the server 103 via the network 102 to receive messages or data sent by other users to the server 103. The electronic device 101 may be hardware or software. When the electronic device 101 is hardware, it may be a variety of electronic devices including, but not limited to, smart watches, smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the electronic device 101 is software, it may be installed in the electronic device listed above, and it may be implemented as multiple software or software modules (for example, for providing distributed services), or may be implemented as a single software or software module, and is not limited in this respect.
The server 103 may be a business server providing various services. The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 103 is software, it may be implemented as a plurality of software or software modules (for example, to provide distributed services), or may be implemented as a single software or software module, and is not limited in particular herein.
It should be understood that the number of electronic devices, networks, and servers in FIG. 1 is merely illustrative, and that any number of electronic devices, networks, and servers may be used, as desired for an implementation.
Referring to fig. 2, fig. 2 is a schematic diagram of an algorithm for digital image processing according to an embodiment of the present application.
With the development of modern electronic computers, image processing has gradually become digital: the image signal is converted into a digital signal, which makes images convenient to modify, store, and transmit. Because a digital imaging product inevitably differs from the human eye, the image a user obtains may deviate from the real scene the user perceives. Digital imaging products therefore process the captured image so that its content approaches the real scene, giving the user a more realistic experience. Users are generally most sensitive to color and brightness and most readily perceive changes in them, so the color and brightness of an image are key measures of how faithfully it reproduces the real scene. Optimizing and restoring color and brightness has consequently become an important field of digital image processing.
When an image is processed, an Image Signal Processing (ISP) algorithm is usually employed to post-process the signal output by the front-end image sensor; its main functions include linearity correction, noise removal, dead-pixel removal, interpolation, white balance, and automatic exposure control. As shown in fig. 2, when the ISP processes an image automatically, the imaging parameters are usually adjusted by combining Auto Exposure (AE), Auto Focus (AF), and Auto White Balance (AWB). Auto Exposure (AE) is an algorithmic mechanism by which the shooting device automatically adjusts the exposure, the exposure time, and the sensor gain according to the light intensity of the real scene, thereby controlling the brightness of the image. Auto Focus (AF) uses the principle of light reflection from the subject: a sensor on the shooting device receives the reflected light, which is processed to drive an electric focusing mechanism into focus. Auto White Balance (AWB) is a color processing algorithm integrated into the ISP pipeline; it restores the true colors of the image by estimating the illuminant of the imaging environment, and under different color temperatures the ISP's adjustment can eliminate the color cast caused by color temperature so that the imaging result approaches the visual habits of the human eye.
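To make the AE mechanism described above concrete, the following sketch (illustrative only and not from the patent; the function name, the 18% grey target, and the damping parameter are all assumptions) shows one iteration of a simple exposure feedback loop that nudges the exposure toward a mid-grey mean luminance:

```python
import numpy as np

def auto_exposure_step(frame, exposure, target_luma=0.18, gain=0.5):
    """One iteration of a simple AE feedback loop (illustrative only).

    frame:    HxWx3 float RGB image in [0, 1], captured at `exposure`
    exposure: current exposure value (arbitrary linear units)
    Returns an updated exposure that moves the mean luminance toward
    `target_luma` (the classic 18% grey target).
    """
    # Rec. 709 luma weights approximate perceived brightness.
    luma = frame @ np.array([0.2126, 0.7152, 0.0722])
    mean_luma = float(luma.mean())
    if mean_luma <= 0:
        return exposure * 2.0  # frame is black: open up aggressively
    # Multiplicative correction, damped by `gain` to avoid oscillation.
    correction = (target_luma / mean_luma) ** gain
    return exposure * correction
```

A real AE block would split the correction across exposure time and sensor gain and clamp both to hardware limits; this sketch only shows the feedback idea.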
Specifically, the ISP pipeline includes multiple image processing units, each processing the image from a different aspect. Referring to fig. 2, at the raw image stage: Black Level Correction (BLC) reduces the influence of dark current on the image signal by subtracting a reference dark-current signal from the captured image signal. Lens Shading Correction (LSC) eliminates the effect of vignetting on image brightness: the region of uniform brightness at the center of the image, which needs no correction, is taken as the reference, the rate at which the image darkens due to attenuation is calculated for each point, and the compensation factors for the corresponding R, G, and B channels are computed. Automatic white balance processing (AWB apply) adjusts the image according to the white balance parameters obtained from the AWB block of the 3A algorithms. Color interpolation (demosaicing) is required because, behind the color filter array, each pixel senses only one color; the values of the pixel's other two channels are reconstructed by interpolation from neighboring pixels.
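The two raw-stage corrections above can be sketched as follows. This is a simplified, hypothetical model (the 10-bit black/white levels and the radial falloff constant are assumptions, not values from the patent): BLC subtracts a reference level and renormalises, and LSC builds a radial gain map that is 1.0 at the image centre and rises toward the corners.

```python
import numpy as np

def black_level_correct(raw, black_level=64, white_level=1023):
    """Subtract the dark-current reference and renormalise (10-bit example)."""
    corrected = np.clip(raw.astype(np.float32) - black_level, 0, None)
    return corrected / (white_level - black_level)

def lens_shading_gain(h, w, falloff=0.3):
    """Radial gain map: 1.0 at the image centre, rising toward the corners
    to compensate vignetting (simplified cos^4-style falloff model)."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(ys - cy, xs - cx) / np.hypot(cy, cx)  # 0 centre, 1 corner
    return 1.0 + falloff * r ** 2  # stronger compensation at the edges
```

In a real ISP the shading map is calibrated per channel from a flat-field capture rather than modelled analytically.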
At the color image stage: Color Correction (CCM) mainly corrects the color errors caused by color crosstalk between adjacent color blocks of the filter array; generally, an image captured by the sensor is compared with a standard image to compute a correction matrix, which is the sensor's color correction matrix. Tone Mapping computes the average brightness of the current scene, selects an appropriate brightness range accordingly, and maps the whole scene into that range to obtain a correct result. Noise removal (Denoise) generally adopts a nonlinear denoising algorithm that, when sampling, considers not only the spatial distance between pixels but also their similarity, thereby preserving edges. At the color-component stage: a Color Space Matrix (CSM) converts the RGB image into a color space such as sRGB through a conversion matrix. Color adjustment (2D LUT) performs the main adjustment on the saturation (S) and value (V) dimensions of the HSV color space, with minor adjustment on the hue (H) dimension, to obtain an accurate three-dimensional color restoration or a preferred color style. Sharpening restores the image details lost during noise reduction.
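As a small illustration of the CCM step, the sketch below applies a 3x3 colour-correction matrix to a linear RGB image. The matrix values are invented for illustration; real matrices are calibrated per sensor against a standard colour chart, and their rows typically sum to 1 so that grey stays grey.

```python
import numpy as np

# Illustrative CCM (NOT a real sensor calibration): each row sums to 1.0,
# so neutral greys are preserved while off-diagonal terms undo crosstalk.
CCM = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])

def apply_ccm(rgb, ccm=CCM):
    """Apply a colour-correction matrix to an HxWx3 linear RGB image."""
    out = rgb @ ccm.T  # each output channel is a linear mix of the inputs
    return np.clip(out, 0.0, 1.0)
```

Applying the CCM before gamma (i.e. on linear data) matters: the matrix models a linear optical crosstalk, so mixing gamma-encoded channels would distort colours.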
Optionally, the exposure parameters output by AE for the shooting scene determine the initial brightness of the image and thus affect all subsequent processing. Therefore, to produce an accurate processing result, the AE algorithm can be adjusted so that the exposure parameters it outputs are more faithful; an image generated on that exposure basis can then be restored more accurately in the subsequent processing stages. With regard to overall image brightness, the average brightness of the whole scene is generally computed as the reference brightness for the exposure value, from which a uniform exposure parameter and weight coefficient are determined, yielding a good imaging result.
In the process by which AE outputs the exposure parameters and weight coefficients, the exposure value applied to the image is determined from the average brightness of the current scene or of a certain region. In a real scene, however, and especially in a multi-color scene containing many complex colors, different colors render unequally under the same brightness. Under a uniform exposure brightness, some colors expose well while others are overexposed or underexposed, appearing too bright or too dim and suffering color distortion; the overall imaging effect is then poor, and a faithful exposure result cannot be produced.
Therefore, the present application provides a shooting method to solve the above technical problem.
Referring to fig. 3, fig. 3 is a schematic flow chart of a shooting method according to an embodiment of the present disclosure. The execution subject of the embodiment of the application can be a shooting system, a server in the system, or any electronic device in the system. For convenience of description, a specific implementation procedure of the shooting method will be described below by taking as an example that the implementation subject is a processor in an electronic device. The photographing method as shown in fig. 3 may include at least:
s301, acquiring a first image acquired by a camera, and dividing a target object area in the first image into different color areas.
Optionally, for users of digital imaging products, the brightness and color of the output image are where differences from the real scene are most intuitively perceived, so exposure and color are important fields of development in digital image processing. In current image processing technology, to improve the exposure of an image, the exposure value of the current image is usually determined from the overall brightness of the scene during automatic exposure. When the exposure value is calculated with overall brightness as the reference, a good imaging result may be obtained if the color or brightness differences in the scene are small; in a multi-color scene, however, because colors render differently under the same exposure value, some colors in the image may be distorted, which greatly degrades the user experience.
Optionally, because different colors in an image may require different exposure values, and the exposure value affects how faithfully colors are imaged, determining an accurate exposure value is the basis of image processing. In a multi-color scene, the image can therefore be divided into regions by color, and the exposure value of the image determined from imaging parameters such as the brightness of each color region. That is, after an image is acquired through the camera, it can be divided into different color regions by color, so that when the exposure value is determined, the rendering of each color and the influence of its imaging parameters on the image exposure value are both taken into account.
Optionally, some multi-color scenes in the real world are representative scenes, such as an amusement park, a toy shop, or a flower scene. Such scenes have certain characteristics and contain typical shooting subjects: in an amusement park scene, the subjects are the amusement facilities; in a toy shop scene, the subjects are the various toys; in a flower scene, the subjects are flowers of different colors. The exposure value calculation can be adapted to the characteristics of the subject, so that exposure values are computed specifically for different multi-color scenes.
Further, when a user shoots a scene, the imaging of the shooting subject usually matters most. The multi-color scene type of the current scene can therefore be identified first, and the shooting subject and the corresponding exposure value calculation method determined from that scene type. If the current multi-color scene is a preset scene, an image captured by the camera is taken as the first image, the target object corresponding to the subject is identified in it, and the target object region in the first image is then divided into different color regions, so that color region division is performed for different target objects in different multi-color scenes. The method for identifying a preset scene and the types of preset scenes are not limited here and can be set according to the actual situation.
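The patent does not specify how the target object region is split into color regions; one plausible stand-in is coarse hue binning in HSV space, sketched below (all names, the number of bins, and the saturation threshold are assumptions for illustration):

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorised RGB -> (h, s, v) for an HxWx3 float image in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc = rgb.max(axis=-1)
    minc = rgb.min(axis=-1)
    delta = maxc - minc
    s = np.where(maxc > 0, delta / np.maximum(maxc, 1e-12), 0.0)
    # Hue: piecewise by dominant channel, result in [0, 1).
    safe = np.maximum(delta, 1e-12)
    h = np.zeros_like(maxc)
    h = np.where(maxc == r, ((g - b) / safe) % 6, h)
    h = np.where(maxc == g, (b - r) / safe + 2, h)
    h = np.where(maxc == b, (r - g) / safe + 4, h)
    return h / 6.0, s, maxc

def color_region_labels(rgb, n_bins=6, sat_thresh=0.2):
    """Label each pixel with a coarse hue bin; near-neutral pixels get -1."""
    h, s, _ = rgb_to_hsv(rgb)
    # Small epsilon guards against floats landing just below a bin edge.
    labels = np.floor(h * n_bins + 1e-6).astype(int) % n_bins
    return np.where(s < sat_thresh, -1, labels)
```

Per-region brightness statistics (for the first exposure values of step S302) can then be gathered by masking the image with `labels == k` for each bin `k`. A production system would more likely use a learned segmentation of the target object, but the binning above is enough to exercise the rest of the method.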
It can be understood that, during operation, the camera continuously captures images within its viewing range; these images serve only as the basis from which the camera obtains the environmental and imaging parameters of the current scene. When the user has not instructed the camera to shoot, the camera neither displays nor stores them as images matching a shooting request. In the present application, only when the user instructs the camera to shoot is the captured image taken as the second image corresponding to the user's shooting request; the acquired first image is thus a reference image pre-captured by the camera and is neither displayed nor stored.
S302, determining first exposure values corresponding to the color regions, and determining second exposure values of the first image according to the first exposure values.
Optionally, different colors under the same brightness may require different exposure values, and the exposure value of every color region affects the exposure of the image. To determine the image's exposure value from the exposure values of the color regions, after the target object region of the first image is divided into color regions, the first exposure value of each color region is determined from imaging parameters such as that region's brightness, and the second exposure value of the first image is then computed from those first exposure values. The resulting second exposure value is therefore not calculated with the overall brightness of the image as the reference, but with the colors of the target object as the dimensions considered, and shooting at this second exposure value will produce a more faithful and accurate image.
Optionally, when the second exposure value is calculated from the first exposure values of the color regions, different exposure weights may be preset for different region positions in the image: the weight of each region's first exposure value within the second exposure value is determined by the weight assigned to that region's position, and the second exposure value is computed from these weight coefficients. Alternatively, different exposure weights may be preset for different colors, and each first exposure value weighted according to the color of its region. The embodiments of the present application do not limit the specific algorithm or the preset rules for calculating the second exposure value from the first exposure values.
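Both weighting schemes described above reduce to a normalised weighted combination of the per-region first exposure values. A minimal sketch (the function name and the idea of feeding in either position-based or color-based weights are illustrative; the patent leaves the exact rule open):

```python
import numpy as np

def combine_exposure_values(first_evs, weights):
    """Combine per-region first exposure values into a single second
    exposure value via a normalised weighted sum (one reading of S302).

    `weights` may come from region positions or from per-color presets;
    they need not sum to 1, since they are normalised here.
    """
    evs = np.asarray(first_evs, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise so the weights sum to 1
    return float((evs * w).sum())
```

For example, weighting a third region twice as heavily as the other two shifts the combined exposure toward that region's first exposure value.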
And S303, controlling the camera to shoot based on the second exposure value, and acquiring a second image collected by the camera.
Optionally, the second exposure value of the first image, determined from the first exposure value of each color region, accounts for the contribution of each color region's exposure value to the exposure of the whole image, making the imaging of the image more faithful. After the second exposure value of the first image is determined, the camera can be controlled to shoot the current scene based on it; the captured image is then the most faithful one, and the second image captured by the camera can be obtained.
Optionally, the first image may contain objects other than the target object. When the proportion of these other objects is also large, for example around 30%, they occupy a conspicuous share of the image. For a good overall result, when shooting is performed based on the second exposure value, the exposure weight of the target object region is determined from the ratio of its area to that of the other object regions; the second exposure value and a third exposure value determined for the other object regions are each multiplied by their corresponding weight coefficients to obtain a fourth exposure value, and the camera is controlled to shoot based on the fourth exposure value to obtain the corresponding image.
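The area-proportional blending just described might look like the following. The 30% threshold comes from the text, but the exact blending rule is unspecified in the patent, so the function below is a guess: it keeps the second exposure value when the other objects are negligible and otherwise blends the second and third exposure values by area share.

```python
def blend_exposures(second_ev, third_ev, target_area, other_area,
                    threshold=0.3):
    """Blend the target-region exposure (second EV) with the exposure of
    the remaining objects (third EV) by area proportion.

    If the other objects cover less than `threshold` of the frame they
    are ignored; otherwise each EV is weighted by its region's area
    share to produce the fourth exposure value. Illustrative only.
    """
    total = target_area + other_area
    other_ratio = other_area / total
    if other_ratio < threshold:
        return second_ev  # other objects negligible: keep the second EV
    w_target = target_area / total
    return w_target * second_ev + (1 - w_target) * third_ev
```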
Optionally, after the second image captured by the camera is obtained, it can be displayed directly on the display of the device or stored directly in a preset memory; alternatively, the second image can first be processed for sharpness, color, and so on, and then displayed or stored after processing. The embodiments of the present application do not limit the subsequent processing or use of the second image.
The embodiment of the application provides a shooting method that first acquires a first image captured by a camera and divides a target object region in the first image into regions of different colors; then determines first exposure values corresponding to the color regions and determines a second exposure value of the first image from the first exposure values; and finally controls the camera to shoot based on the second exposure value to obtain a second image captured by the camera. Because brightness affects how colors render, adjusting the image with an exposure value derived from its overall brightness can distort some colors. In this application, the image is divided into color regions by color and the exposure value of the whole image is determined from the exposure value of each region, so the resulting exposure value accounts for its effect on each color; shooting at this exposure value allows every color in the image to be rendered accurately, making the colors of a multi-color image more faithful.
Referring to fig. 4, fig. 4 is a schematic flow chart of a shooting method according to another embodiment of the present application.
As shown in fig. 4, the photographing method may include at least:
s401, responding to a shooting instruction, and judging whether a target object is included in a view range corresponding to the camera.
Optionally, as the above embodiments show, when a multi-color scene is shot, exposure parameters and weight coefficients based on the overall brightness of the image may overexpose or underexpose some colors and distort them, so that the exposure brightness and colors of the image differ greatly from the real scene. Moreover, multi-color scenes in the real world are numerous and complex, and a user usually holds memory colors for the subject of a specific multi-color scene. A memory color is a color that people have come to know through long-term experience and hold in deep memory, so that their perception of it follows fixed rules and ingrained habits. For example, in a flower scene, people attach fixed colors to a familiar object such as a flower; when they shoot such a scene, they are particularly sensitive to differences between the brightness and color in the image and those they remember. If the exposure parameters for such a specific multi-color scene are derived from brightness alone, the user experience is therefore greatly affected.
Optionally, in order to perform exposure value calculation specific to the multi-color scene, the current shooting requirement of the user needs to be determined first. After the user generates a shooting requirement, the user may send a shooting instruction through a terminal provided with the camera; the shooting instruction can then be responded to, and it is further determined whether a target object corresponding to the multi-color scene is included in the viewing range of the camera.
Specifically, the subjects of different multi-color scenes may differ, and the required exposure and the exposure value calculation manner differ accordingly. Therefore, when determining whether a target object is included in the viewing range corresponding to the camera, an initial image collected by the camera within the viewing range may first be obtained, together with the target optimized scene set for the camera and the target object corresponding to that scene; it is then determined whether the initial image includes the target object. In this way, the exposure value calculation manner can be selected according to the target optimized scene and its corresponding target object, making the imaging of the image more realistic.
Optionally, a user may control the camera through a terminal provided with the camera. On the user terminal, the target optimized scene set for the camera may be a camera default, may be set by the user in advance, or may be selected by the user in the shooting instruction. Some multi-color scenes in the real world are representative scenes that usually have a specific shooting subject as the target object, so after the target optimized scene is determined, the corresponding target object can be determined from the scene: in an amusement park scene, the target objects are amusement park facilities; in a toy shop scene, the target objects are various toys; in a flower scene, the target objects are flowers of different colors.
Taking the flower scene as an example, when the user wants to shoot a flower scene, the user may enter the shooting interface through shooting software or a shooting icon. After entering the shooting interface, the user may set "flower scene" as the target optimized scene corresponding to the camera according to the specific scene. Once set, the camera determines from the flower scene that the target object within the viewing range is a flower. When the user needs to shoot and acquire an image, the user inputs a shooting instruction through the terminal; after receiving the shooting instruction, the terminal controls the camera to shoot, thereby obtaining the corresponding flower image.
Referring to fig. 5, fig. 5 is a schematic view of a user terminal interaction provided in an embodiment of the present application.
As shown in fig. 5, in the display interface 510 of the user terminal 500, the user 520 may enter the shooting interface 540 through the shooting software 530. In the shooting interface 540, after the user 520 clicks or selects the "flower" option 550 from multiple options such as "normal", "entertainment", "toy" and "flower", the flower scene is set as the target optimized scene for the user 520 and the target object is determined to be a flower; a prompt reading "flower scene" is then displayed in the shooting interface 540, and the subsequent steps of the shooting method are executed.
In a preferred embodiment, when determining whether the initial image includes the target object, the target optimized scene and the corresponding target object may be identified based on Artificial Intelligence (AI). Taking a flower scene as an example, a large number of flower image samples may be provided to a flower-scene neural network for learning until the network can identify flowers in a scene from flower features. The flower area ratio is then calculated from the area occupied by the flower region in the image; when the ratio reaches a preset ratio, the scene in the image is determined to be a flower scene and the flowers therein are determined to be the target object. It can be understood that the embodiment of the present application does not limit the specific AI network training method or the preset ratio, which may be set according to the actual situation. Since the target object is determined automatically according to the target optimized scene, the target object is located in the image, and it is judged whether the current viewing range contains the target object before the corresponding exposure value calculation manner is determined, the user does not need to manually set or select the object in the viewing range, which facilitates shooting and improves the user experience.
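The area-ratio decision described above can be sketched as follows. The segmentation mask, the function name and the 30% threshold are illustrative assumptions, since the embodiment leaves the network and the preset ratio unspecified.

```python
def is_target_scene(mask, preset_ratio=0.3):
    """Decide whether the viewing range contains the target object.

    mask: iterable of booleans, one per pixel, marking pixels that the
    (assumed) scene neural network attributed to the target object.
    preset_ratio: illustrative preset ratio; the text leaves it open.
    Returns True when the marked area reaches the preset ratio.
    """
    mask = list(mask)
    if not mask:
        return False
    ratio = sum(mask) / len(mask)  # True counts as 1
    return ratio >= preset_ratio
```

The caller would then switch to the scene-specific exposure value calculation only when this check passes.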
S402, when the target object is contained in the view range corresponding to the camera, a first image collected by the camera is obtained.
Optionally, when it is determined that the current viewing range includes the target object, it can be considered that the current scene is the preset scene the user wishes to photograph. The image collected by the camera may then be taken as the first image and obtained from the camera, so that the exposure parameters of the first image can be calculated and adjusted.
S403, dividing the target object area in the first image into different color areas.
Optionally, in a multi-color scene, the different colors of the target object affect the exposure value required by the whole image, and the exposure value in turn affects how each color is imaged. Therefore, in order to obtain the most appropriate exposure value of the first image while taking the exposure values of the colors of the target object into account, the image may be divided into different areas by color, and the exposure value of the image determined from imaging parameters such as the brightness of each color area. That is, after the image is collected by the camera, the first image may be divided into different color areas according to color, so that the exposure value of the first image can be determined in consideration of the color rendering of the different colors and of the influence of the imaging parameters on the image exposure value.
Optionally, the target object region of the first image may contain color regions of small area, for example 1% of the region, whose colors are considered inconspicuous and to have no serious interaction with the image exposure. When the target object region is divided, such regions need not be divided out as separate color regions; that is, their first exposure values are excluded from the calculation of the second exposure value. This reduces the amount of calculation and speeds up processing without affecting the calculation accuracy of the second exposure value or the imaging fidelity of the image.
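A minimal sketch of this division step, assuming an HSV input with hue normalized to [0, 1) and simple hue binning (the embodiment does not fix a segmentation method); regions below the area threshold are dropped as described above.

```python
import numpy as np

def divide_color_regions(hsv_image, num_bins=8, min_area_ratio=0.01):
    """Divide a target-object area into color regions by hue binning.

    hsv_image: (H, W, 3) array with the hue channel in [0, 1).
    Regions whose area ratio falls below min_area_ratio (the 1%
    threshold mentioned above) are ignored, reducing the amount of
    computation for the second exposure value.
    Returns {bin_index: boolean mask} for the retained color regions.
    """
    hue = hsv_image[..., 0]
    bins = (hue * num_bins).astype(int) % num_bins
    total = hue.size
    regions = {}
    for b in range(num_bins):
        mask = bins == b
        if mask.sum() / total >= min_area_ratio:
            regions[b] = mask
    return regions
```

Each retained mask would then feed the per-region imaging-parameter and first-exposure-value calculations of the following steps.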
S404, determining a first exposure value corresponding to each color area according to the imaging parameter corresponding to each color area.
Optionally, since the exposure value corresponding to each color region affects the exposure of the image, the exposure value of the image may be determined by combining the exposure values of the color regions; therefore, the exposure value corresponding to each color region needs to be determined first, after which the second exposure value of the first image can be calculated.
Optionally, since the exposure value is related not only to brightness but also to color rendering, an exposure value obtained from brightness alone may give some colors a poor imaging effect. Therefore, on the basis of considering brightness, the influence of various other imaging parameters on the exposure value may also be considered, and the exposure value calculated across multiple imaging dimensions to determine the most accurate exposure value for each color region; that is, the first exposure value corresponding to each color region is determined according to the imaging parameters corresponding to that region.
Specifically, the imaging parameters at least include a hue parameter. It is easy to understand that hue is the primary characteristic of color and the most accurate standard for distinguishing colors; in image processing, color is usually represented along the hue dimension, and when the imaging parameters of an image are quantized, the hue parameter becomes one of the important indexes among them, representing the expression of hue in the image. Therefore, when the first exposure value of each color region is determined according to its imaging parameters, it may specifically be determined according to the hue parameter among those imaging parameters, so that the first exposure value of each color region is calculated from the two dimensions of hue and brightness rather than from the brightness dimension alone, making the imaging more accurate and more faithful.
S405, determining a second exposure value of the first image according to the first exposure value corresponding to each color area and the imaging parameter corresponding to each color area.
Optionally, after the first exposure value of each color region is calculated, the second exposure value of the first image may be calculated from the first exposure values. Since the accuracy of the second exposure value, which is affected by imaging parameters such as the color and brightness of each color region, in turn affects imaging, the second exposure value may be determined from both the first exposure value and the imaging parameters corresponding to each color region, so that the camera can shoot a more ideal image with a second exposure value determined from the imaging parameters and first exposure values of the color regions.
Specifically, when calculating the second exposure value, the weight of each color region's influence on the image must be considered: a bright color region influences the exposure value of the image more than a light one, and a large-area region more than a small one. Besides the first exposure value of each color region, the other imaging parameters of each color region are therefore also important factors influencing the second exposure value, and the degree of influence of a region's imaging parameters on the second exposure value is expressed by the weight value corresponding to those imaging parameters. For example, if the first exposure value of a color region is 20 and the weight of the region is 0.4, multiplying the two gives 8, which is that region's contribution to the second exposure value.
Optionally, the imaging parameter of each color region may be any of several parameters that affect imaging, for example an area parameter, a saturation parameter, a brightness parameter, a hue parameter, a noise parameter, and the like, which is not limited in the embodiment of the present application. For convenience of description, the imaging parameters consisting of the area parameter, the hue parameter and the brightness parameter are selected as a preferred embodiment, and the subsequent exposure value calculation is described based on the area parameter, hue parameter and brightness parameter of each color region. It is easy to understand that the area parameter, hue parameter and brightness parameter of each color region each have a corresponding weight value. Denoting a color region by i, its first exposure value by T_i, its area parameter by A_i, its hue parameter by H_i and its brightness parameter by B_i, the weight value corresponding to A_i is

W_Ai = A_i / Σ_j A_j,

the weight value corresponding to H_i is

W_Hi = H_i / Σ_j H_j,

and the weight value corresponding to B_i is

W_Bi = B_i / Σ_j B_j.

The first weight exposure value of each color region can then be expressed as

E_i = T_i · W_Ai · W_Hi · W_Bi.

Optionally, to calculate the second exposure value, the weight value of each color region in the image needs to be calculated; the product of the weight values of the area parameter, hue parameter and brightness parameter of a color region may be taken as the weight value corresponding to that region, i.e. the imaging parameter weight product of each color region is

P_i = W_Ai · W_Hi · W_Bi.

Then, the second exposure value of the first image is determined from the first weight exposure value and the imaging parameter weight product of each color region: the corresponding values of all color regions are summed to obtain the sum of the first weight exposure values and the sum of the imaging parameter weight products, and the quotient of the two gives the second exposure value T of the first image:

T = Σ_i E_i / Σ_i P_i = Σ_i (T_i · W_Ai · W_Hi · W_Bi) / Σ_i (W_Ai · W_Hi · W_Bi).
The second exposure value obtained in this way is the most faithful and accurate exposure value for shooting the current scene.
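The weighted combination above can be sketched directly. Note that because the same normalizing sums appear in both numerator and denominator, the quotient is equivalent to a weighted average with weights A_i·H_i·B_i; the function and parameter names are illustrative.

```python
def second_exposure_value(T, A, H, B):
    """Second exposure value of the image from per-region parameters.

    T, A, H, B: equal-length lists holding each color region's first
    exposure value, area parameter, hue parameter and brightness
    parameter. Region i's weight is the product A[i]*H[i]*B[i]; the
    normalizing sums of the per-parameter weights cancel in the quotient.
    """
    weights = [a * h * b for a, h, b in zip(A, H, B)]
    numerator = sum(t * w for t, w in zip(T, weights))
    return numerator / sum(weights)
```

With equal weights this reduces to the plain average of the first exposure values; a region with a larger area, brighter hue or higher brightness pulls the result toward its own first exposure value.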
S406, controlling the camera to shoot based on the second exposure value, and obtaining a second image collected by the camera.
For step S406, please refer to the detailed description in step S303, which is not repeated herein.
In the embodiment of the application, a shooting method is provided in which, when the first image collected by the camera is obtained, the user can freely set a personalized target optimized scene, which makes it convenient to shoot the target scene with the camera device. When the first exposure value of each color area is calculated, it is calculated from the three dimensions of area, hue and brightness, and the influence of color on the exposure value is considered, so that multi-color imaging is more faithful. Further, when the second exposure value is calculated, the influence of each color in the image on the whole image is considered, and the image exposure value is likewise calculated across the multiple dimensions of the multiple colors, finally yielding the exposure that is optimal for the global imaging effect in the current scene. This second exposure value makes the shot picture more realistic, improves the color fidelity of the picture, and thus enhances the user experience.
Referring to fig. 6, fig. 6 is a schematic diagram of an algorithm for digital image processing according to another embodiment of the present application.
In the process of digital image processing, after the shooting parameters are output by Auto Exposure (AE), Auto Focus (AF) and Auto White Balance (AWB), collectively 3A, the camera collects an image based on those parameters, and the image signal processing (ISP) pipeline further processes the collected image; its main functions include noise removal, color restoration, white balance processing, shadow and dead pixel processing, and the like. Among these, the automatic white balance processing unit (AWB apply), the color correction unit (CCM) and the color adjustment unit (2DLUT) are mainly used for color restoration.
For a multi-color scene, brightness and color are the key indexes of image quality. When AE automatic exposure generates an exposure value based on brightness only, some colors show poor brightness and color rendering that differ greatly from the real scene; and since the brightness of a color affects its appearance, an image collected with such an exposure value cannot provide accurate brightness information to the automatic white balance processing unit (AWB apply) and the color restoration units (CCM color correction unit, 2DLUT color adjustment unit) in the ISP pipeline, which reduces the accuracy of the pipeline's color restoration.
Therefore, another embodiment of the present application provides a shooting method to solve the above technical problem.
Referring to fig. 7, fig. 7 is a schematic flowchart illustrating a photographing method according to another embodiment of the present application.
As shown in fig. 7, the photographing method may include at least:
S701, acquiring a first image collected by a camera, and dividing a target object area in the first image into different color areas.
S702, determining first exposure values corresponding to the color regions, and determining second exposure values of the first image according to the first exposure values.
S703, controlling the camera to shoot based on the second exposure value, and acquiring a second image collected by the camera.
For the steps S701 to S703, please refer to the detailed descriptions in steps S301 to S303, which are not repeated herein.
S704, adjusting the color parameters of each color area in the second image according to the second exposure value.
Optionally, as described in the foregoing embodiment, in a multi-color scene, to avoid the partial color distortion that an exposure value calculated from overall brightness may cause, the image is divided into different color regions and the second exposure value of the image is calculated from the first exposure value of each region. The first exposure value of each color region is obtained from its hue parameter as well as its brightness parameter, and the exposure weight of each color region is determined from its imaging parameters (area parameter, hue parameter and brightness parameter), so that the most accurate and reasonable second exposure value of the image is determined across the multiple dimensions of the different color regions, their areas, hues and brightness, making the image faithful and its colors accurate.
Furthermore, in order to finally present the most faithfully restored image to the user, after the image is acquired with the second exposure value, its color needs to be restored through the ISP pipeline. Since the second exposure value integrates information such as the hue parameters of the color regions, once the camera collects the second image based on this accurate second exposure value, the exposure deviation of the different color regions can be calculated from it in order to adjust the color parameters of each color region in the second image.
Specifically, when restoring color, the difference between the color in the second image and the real ideal color must first be known. The first exposure value of each color region describes the optimal exposure that region requires, and the second exposure value describes the optimal exposure the image requires, so the exposure deviation between them can provide data support for the adjustment parameters used in color restoration; that is, the relative difference between each first exposure value T_i and the second exposure value T can be recorded as a Brightness Ratio (BR) parameter. Accordingly, when the color parameters of the color regions in the second image are adjusted according to the second exposure value, the brightness ratio parameter of each color region may specifically be calculated from the first exposure value of that region and the second exposure value.
Optionally, the brightness ratio parameter of color region i is denoted BR_i. So that BR_i accurately expresses the ratio between the first exposure value and the second exposure value of the region, and the color parameter adjustment each region needs can later be measured precisely for its color restoration, the brightness ratio parameter quantizes the exposure degree of each color region as

BR_i = ln(T_i / T) / ln(1.03),

where T_i / T is the ratio between the first exposure value and the second exposure value and describes the brightness difference underlying the color parameter adjustment the region needs. Taking the natural logarithm yields a clear representation of magnitude; in ln 1.03, the value 1.03 is the multiplicative step between adjacent exposure levels, i.e. the brightness of one exposure level is 1.03 times that of the previous level, and dividing by ln 1.03 quantizes the deviation in AE exposure levels. The value of BR_i thus indicates which exposure level the color region sits at, which facilitates the subsequent quantization of its color parameters. It can be understood that the embodiment of the present application does not limit the exposure level coefficient, which may be 1.03 or another customized value.
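The brightness ratio of a region, expressed in exposure levels of the configurable 1.03 step, can be sketched as:

```python
import math

def brightness_ratio(first_ev, second_ev, level_step=1.03):
    """BR_i = ln(T_i / T) / ln(level_step): a region's exposure
    deviation in AE exposure levels, each level being level_step
    times brighter than the previous one (1.03 by default, but the
    coefficient is customizable as the text notes)."""
    return math.log(first_ev / second_ev) / math.log(level_step)
```

A region whose first exposure value equals the image's second exposure value has BR_i = 0; a region one step brighter has BR_i = 1.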
Optionally, after the brightness ratio parameter of each color region is determined, it is input to the relevant color adjusting units of the ISP pipeline so that each unit can adjust the corresponding color parameters. In the AWB automatic white balance stage of 3A, the three-primary-color parameters corresponding to the hue of each color region are acquired through the sensor, and the white balance of the image is corrected according to the calculated white balance gain value. Since white balance correction affects the brightness of the image and the hue of each color region to some extent, for accuracy the brightness ratio parameter may be adjusted based on the white balance gain value, so that the gain's influence on the brightness ratio parameter is offset and the subsequent color restoration of the image remains accurate.
Specifically, from the hue parameters of each color region acquired by the sensor, the corresponding three-primary-color components can be obtained directly. Automatic white balance (AWB) multiplies the three-primary-color data (R/Gr/Gb/B) acquired by the RAW sensor by the corresponding AWB gain values (gain_r / gain_g / gain_b). Based on the principle of three-primary-color mixing, hues H of different colors are formed by mixing the RGB primaries in different proportions, and the AWB gain value corresponding to the hue H of a color region is accordingly combined in the same weight proportions. The white balance gain value of each color region in the second image can therefore be obtained from the three-primary-color parameters corresponding to its hue as:

gain′ = gain_r·W_r + gain_g·W_g + gain_b·W_b,

where gain′ denotes the white balance gain value and W_r, W_g, W_b denote the weight proportions of the three primaries (red R, green G, blue B) in the hue of the current color region. Still further, the brightness ratio parameter of each color region in the second image may be adjusted according to its white balance gain value:

BR_i = BR_i · gain′.

For each color region, the brightness ratio parameter calculated after white balance correction is more accurate, making the subsequent color processing more realistic; the color parameters of each color region in the second image can therefore be adjusted based on these brightness ratio parameters.
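The gain combination and the rescaling of BR_i can be sketched as follows; the function and parameter names are illustrative.

```python
def awb_adjusted_brightness_ratio(br, awb_gains, hue_weights):
    """Offset the AWB gain's influence on a region's brightness ratio.

    awb_gains: (gain_r, gain_g, gain_b) AWB gain values.
    hue_weights: (W_r, W_g, W_b) mixing proportions of the primaries
    in the region's hue. Computes gain' = sum(gain_c * W_c) and
    returns BR_i * gain', as in the formulas above.
    """
    gain = sum(g * w for g, w in zip(awb_gains, hue_weights))
    return br * gain
```

When all channel gains are 1 (no white balance correction) the brightness ratio is returned unchanged, since the hue weights sum to 1.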
Then, when the color parameters of the color regions in the second image are adjusted based on the brightness ratio parameters, the three dimensions of hue (H), saturation (S) and value (V) among the color parameters are each adjusted based on the brightness ratio parameters; that is, at least one of the hue parameter, the saturation parameter and the brightness parameter among the color parameters of each color region in the second image is adjusted based on the brightness ratio parameters.
Referring to fig. 8, fig. 8 is a schematic flow chart illustrating a color parameter adjustment method according to an embodiment of the present disclosure. It will be appreciated that the steps of the flowchart all adjust the color parameters of the color regions in the second image based on the brightness ratio parameters. As shown in fig. 8, the color parameter adjustment method at least includes:
S801, adjusting the hue conversion matrix of each color area in the second image based on each brightness ratio parameter.
Optionally, in image color processing, the RGB three-primary-color values of different hues H are generally multiplied in the color correction unit (CCM) by the same 3×3 hue conversion matrix M to obtain the R′G′B′ color values corresponding to the real hue H′ of the actual scene. However, this conversion considers only the default conversion values corresponding to each hue and ignores the influence of brightness and exposure on different hues. In the embodiment of the present application, the brightness ratio parameter of each color region represents the brightness and exposure conditions of that region in the image, so after the brightness ratio parameters have been adjusted, the hue conversion matrix M of each color region may be adjusted based on them.
Specifically, when adjusting the hue conversion matrix M, a hue weight matrix may be generated from each brightness ratio parameter, and the hue conversion matrix of each color region in the second image adjusted according to it. The hue weight matrix is the diagonal matrix

W = diag(BR_r, BR_g, BR_b),

in which the brightness ratio weight of each primary is obtained from the weight proportion of that primary in the hue of the color region:

BR_c = BR_i · W_c, c ∈ {r, g, b}.

The adjusted hue conversion matrix M′ is then obtained as

M′ = W · M.

Finally, with the hue conversion matrix M′ adjusted based on the brightness ratio parameter, the hues of the different color regions can be converted accurately through a conversion matrix adapted to each of them.
S802, calculating hue parameters in the color parameters of the color areas in the second image based on the hue conversion matrix.
Optionally, after the adjusted hue conversion matrix M′ is determined, the R′G′B′ color values corresponding to the converted hue H′ can be calculated as

[R′, G′, B′]ᵀ = M′ · [R, G, B]ᵀ.

At this point the color of the scene has been corrected closer to the color of the real scene, the color of the image is restored, and the user experience is improved.
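The matrix adjustment and conversion can be sketched as follows. The exact per-primary weight is an assumption here (1 + BR_i·W_c, so that a zero brightness ratio leaves the CCM unchanged); the text gives only the general construction of the hue weight matrix.

```python
import numpy as np

def adjust_hue_matrix(ccm, br, hue_weights):
    """Weight the 3x3 CCM hue conversion matrix by a region's
    brightness ratio. The diagonal weight matrix with per-primary
    entries 1 + br * W_c is an assumed concrete form; a region with
    br == 0 keeps the default CCM.
    """
    weight_matrix = np.diag([1.0 + br * w for w in hue_weights])
    return weight_matrix @ np.asarray(ccm)

def convert_color(adjusted_ccm, rgb):
    """[R', G', B']^T = M' . [R, G, B]^T."""
    return adjusted_ccm @ np.asarray(rgb, dtype=float)
```

Each color region would use its own adjusted matrix, rather than the single shared CCM of the default pipeline.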
S803, a saturation parameter adjustment value corresponding to a saturation parameter in the color parameters of each color region in the second image is obtained, and a brightness parameter adjustment value corresponding to a brightness parameter in the color parameters of each color region in the second image is obtained.
Optionally, in the color space of an image, color is generally processed in all three dimensions, namely the Hue dimension (Hue, H), the Saturation dimension (Saturation, S) and the Value dimension (Value, V). Therefore, after the hue parameter of the image's color has been corrected, the saturation parameter and the brightness parameter of each color region still need to be adjusted. The color adjustment unit (2DLUT) performs the main adjustment of the saturation and brightness parameters (with corresponding minor adjustment of the hue parameter) to achieve an accurate restoration of all three color dimensions, or a preferred color style, for example a user-defined filter mode.
Optionally, the hue, saturation and brightness of a color can be obtained directly through the sensor. When the saturation parameter and the brightness parameter are adjusted, the saturation parameter adjustment value and the brightness parameter adjustment value of each color region may be obtained by table lookup and applied in image processing, so as to obtain an image that meets the user's brightness and color requirements. The embodiment of the present application does not limit the methods for obtaining the saturation parameter, the brightness parameter, the saturation parameter adjustment value or the brightness parameter adjustment value.
S804, generating an adjustment amount limiting parameter for each color region in the second image based on the brightness ratio parameter of each color region in the second image.
Optionally, in the adjustment process, both the saturation parameter adjustment value and the brightness parameter adjustment value need to satisfy certain limits, to avoid adjustment values that would make the saturation or brightness of a color region unreasonable and harm the overall effect of the image. The brightness ratio parameter of each color region may therefore be applied in the table-lookup mapping adjustment: with reference to the brightness and exposure degree of a region, the adjustment amounts of its saturation and brightness parameters are limited so as to prevent the color overflow caused by excessive adjustment. Denoting the saturation and brightness before adjustment by S and V, and after adjustment by S′ and V′, an adjustment amount limiting parameter λ_i can be generated from the brightness ratio parameter BR_i of each color region, so that the permitted adjustment range of a region follows its quantized exposure deviation.
S805, adjusting a saturation parameter of the color parameters of each color region in the second image based on each adjustment amount limiting parameter and each saturation parameter adjustment value of each color region in the second image, and adjusting a brightness parameter of the color parameters of each color region in the second image based on each adjustment amount limiting parameter and each brightness parameter adjustment value of each color region in the second image.
Optionally, the effective lookup-table adjustment range obtained based on each adjustment amount limiting parameter is:
[Formulas rendered as images in the original publication: min/max expressions bounding the effective adjustment ranges of the saturation parameter S' and the brightness parameter V']
wherein min takes the minimum value within the corresponding brackets, max takes the maximum value, and S' and V' are the final saturation parameter adjustment value and brightness parameter adjustment value determined from the adjustment range. By adjusting all three dimensions of the HSV color space, brightness and exposure information is used to optimize and improve color restoration as a whole, and the color of each color region is optimized according to that region's brightness and exposure, yielding a multi-color scene image with accurate brightness and true, vivid colors.
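The limiting mechanism above can be sketched as follows. Because the limiting parameter and the min/max range expressions appear only as images in the original publication, the bound `limit = k * ratio` used here is an assumed stand-in that shows the clamping idea, not the patented formula:

```python
def apply_limited_adjustment(s, v, s_adj, v_adj, ratio, k=0.2):
    """Clamp the lookup-table outputs (s_adj, v_adj) so the adjusted
    saturation S' and brightness V' cannot move away from the original
    (s, v) by more than a bound derived from the region's brightness
    ratio parameter. The bound 'limit = k * ratio' is an illustrative
    assumption; the actual formula is an image in the publication."""
    limit = k * ratio
    # Effective adjustment range: [value - limit, value + limit] ...
    s_prime = min(max(s_adj, s - limit), s + limit)
    v_prime = min(max(v_adj, v - limit), v + limit)
    # ... then clamped to the valid HSV component range [0, 1].
    return (min(max(s_prime, 0.0), 1.0),
            min(max(v_prime, 0.0), 1.0))
```

This reproduces the stated purpose of the adjustment amount limiting parameter: an over-aggressive table output is pulled back to the edge of the allowed range instead of causing color overflow.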
In an embodiment of the present application, a photographing method is provided. The second exposure value, determined from the area, hue, brightness, and first exposure value of each color region, is the most accurate exposure value for the second image, and color restoration is further performed based on it. First, a brightness ratio parameter is obtained from the first exposure value and the second exposure value, quantizing the exposure level of each color region. After the brightness ratio parameter is adjusted by a white balance gain value, the hue conversion matrix can subsequently be adjusted according to it, achieving accurate hue conversion for the different color regions; a more reasonable limiting range derived from the brightness ratio parameter is then used to optimize saturation and brightness. This ensures accurate color restoration in the image and yields an image in which both brightness and color are accurately reproduced, and the ISP color processing units (AWB/CCM/2DLUT) become finer and more accurate in color restoration and adjustment during optimization. The method improves color and brightness at the algorithm level, ties exposure brightness more closely to color restoration, and provides a new idea and scheme for the color processing and brightness presentation of multi-color scene images.
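The per-region flow summarized above can be sketched end to end. Every formula in this sketch is an assumption, since the publication presents its formulas as images: EV is treated as a log2 quantity, the white balance gain is applied multiplicatively, and the hue conversion matrix step is omitted:

```python
def restore_region_color(ev1, ev2, wb_gain, hsv):
    """Illustrative per-region color restoration flow:
    1) quantify the region's exposure level as a brightness ratio,
    2) refine the ratio with the region's white balance gain,
    3) use the ratio to steer the HSV color parameters.
    All formulas here are assumptions standing in for equations that
    appear only as images in the original publication."""
    ratio = 2.0 ** (ev1 - ev2)   # assumed: EVs are log2, ratio is linear
    ratio *= wb_gain             # assumed multiplicative refinement
    h, s, v = hsv
    # Hue conversion matrix step omitted; hue passes through unchanged.
    s_out = min(max(s * ratio, 0.0), 1.0)
    v_out = min(max(v * ratio, 0.0), 1.0)
    return h, s_out, v_out
```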
Referring to fig. 9, fig. 9 is a block diagram of a photographing apparatus according to an embodiment of the present disclosure. As shown in fig. 9, the photographing apparatus 900 includes:
the region dividing module 910 is configured to obtain a first image acquired by a camera, and divide a target object region in the first image into regions with different colors;
an exposure calculation module 920, configured to determine first exposure values corresponding to the color regions, and determine second exposure values of the first image according to the first exposure values;
and an image shooting module 930, configured to control the camera to shoot based on the second exposure value, so as to obtain a second image collected by the camera.
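The three modules above can be sketched as a skeleton; the interfaces of the injected components are illustrative assumptions, not an API from this publication:

```python
class PhotographingApparatus:
    """Skeleton mirroring the region dividing, exposure calculation,
    and image shooting modules described above (interfaces assumed)."""

    def __init__(self, region_divider, exposure_calculator, image_shooter):
        self.region_divider = region_divider          # module 910
        self.exposure_calculator = exposure_calculator  # module 920
        self.image_shooter = image_shooter            # module 930

    def shoot(self, first_image):
        # Divide the target object region of the first image by color.
        regions = self.region_divider(first_image)
        # Determine a first exposure value per color region,
        # then the second exposure value of the whole image.
        ev1s = [self.exposure_calculator.first_ev(r) for r in regions]
        ev2 = self.exposure_calculator.second_ev(regions, ev1s)
        # Capture the second image at the second exposure value.
        return self.image_shooter(ev2)
```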
Optionally, the area dividing module 910 is further configured to respond to the shooting instruction, and determine whether a target object is included in a view range corresponding to the camera; when the target object is contained in the view range corresponding to the camera, a first image collected by the camera is acquired.
Optionally, the exposure calculating module 920 is further configured to determine a first exposure value corresponding to each color region according to the imaging parameter corresponding to each color region.
Optionally, the exposure calculating module 920 is further configured to determine a second exposure value of the first image according to the first exposure value corresponding to each color region and the imaging parameter corresponding to each color region.
Optionally, the exposure calculating module 920 is further configured to calculate a first weighted exposure value of each color region according to a weighted value corresponding to each imaging parameter in each color region and a first exposure value corresponding to each color region; calculating the imaging parameter weight product of each color area according to the weight value corresponding to each imaging parameter in each color area; a second exposure value of the first image is determined based on the first weighted exposure value of each color region and the imaging parameter weighted product of each color region.
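One plausible reading of this weighted combination is a normalized weighted average: each region contributes its first exposure value scaled by its imaging parameter weight product, and the sum is divided by the total weight. The aggregation below is an assumption, since the patent describes the computation only at this level of detail:

```python
def second_exposure_value(region_weights, region_ev1s):
    """Combine per-region first exposure values into one second exposure
    value. region_weights holds, per region, the weight values of its
    imaging parameters; region_ev1s holds the first exposure values.
    The normalized weighted average below is an assumed reading."""
    weighted_evs = []   # first weighted exposure value of each region
    weight_prods = []   # imaging parameter weight product of each region
    for weights, ev1 in zip(region_weights, region_ev1s):
        prod = 1.0
        for w in weights:
            prod *= w
        weight_prods.append(prod)
        weighted_evs.append(prod * ev1)
    return sum(weighted_evs) / sum(weight_prods)
```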
Optionally, the photographing apparatus 900 further comprises: a color processing module, configured to adjust the color parameters of each color region in the second image according to the second exposure value.
Optionally, the color processing module is further configured to calculate a brightness ratio parameter corresponding to each color region in the second image according to the first exposure value and the second exposure value corresponding to each color region in the second image; and adjusting the color parameters of the color areas in the second image based on the brightness ratio parameters.
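A natural reading of the brightness ratio parameter, assuming exposure values on the usual log2 (APEX-style) scale, is the linear luminance ratio between the region's own first exposure value and the image-level second exposure value actually used for capture. This formula is an assumption, not stated explicitly in the publication:

```python
def brightness_ratio(ev1, ev2):
    """Brightness ratio parameter of a color region: how much brighter or
    darker the region's own exposure (ev1) is relative to the image-level
    exposure (ev2), as a linear factor. Assumes EVs are log2 quantities,
    so a difference of one EV doubles or halves the luminance."""
    return 2.0 ** (ev1 - ev2)
```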
Optionally, the color processing module is further configured to obtain a white balance gain value of each color region in the second image according to the three primary color parameters corresponding to the hues of each color region in the second image; and adjusting the brightness ratio parameter corresponding to each color area in the second image according to the white balance gain value of each color area in the second image.
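A gray-world style sketch of the white balance gain computation: the three primary color averages of a region yield gains that scale the R and B channels toward G. The patent does not give an explicit formula, so this standard AWB form is an assumption:

```python
def white_balance_gains(r_avg, g_avg, b_avg):
    """Per-region white balance gains from the three primary color
    parameters (channel averages): scale R and B so they match G.
    This gray-world form is assumed; the publication does not state
    the exact gain formula."""
    return (g_avg / r_avg, 1.0, g_avg / b_avg)  # (gain_R, gain_G, gain_B)
```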
Optionally, the color processing module is further configured to adjust at least one of a hue parameter, a saturation parameter, and a brightness parameter in the color parameters of each color region in the second image based on each luminance ratio parameter.
Optionally, the color processing module is further configured to adjust a hue conversion matrix of each color region in the second image based on each brightness ratio parameter; calculating hue parameters in the color parameters of each color area in the second image based on the hue conversion matrix; and/or acquiring a saturation parameter adjustment value corresponding to a saturation parameter in the color parameters of each color area in the second image; generating an adjustment quantity limiting parameter of each color area in the second image based on each brightness ratio parameter of each color area in the second image; adjusting saturation parameters in the color parameters of the color areas in the second image based on the adjustment quantity limiting parameters and the adjustment values of the saturation parameters of the color areas in the second image; and/or obtaining brightness parameter adjustment values corresponding to brightness parameters in the color parameters of the color areas in the second image; generating an adjustment quantity limiting parameter of each color area in the second image based on each brightness ratio parameter of each color area in the second image; and adjusting the brightness parameter in the color parameters of each color area in the second image based on each adjustment quantity limiting parameter and each brightness parameter adjustment value of each color area in the second image.
In an embodiment of the present application, there is provided a photographing apparatus including: a region dividing module, configured to acquire a first image collected by a camera and divide a target object region in the first image into regions of different colors; an exposure calculation module, configured to determine first exposure values corresponding to the color regions and determine a second exposure value of the first image according to the first exposure values; and an image shooting module, configured to control the camera to shoot based on the second exposure value and acquire a second image collected by the camera. Because brightness affects how colors render, adjusting an image with an exposure value derived only from its overall brightness can distort some colors in the image. Therefore, in the present application, the image is divided into regions by color, and the exposure value of the whole image is finally determined from the exposure value corresponding to each color region. An exposure value determined in this way accounts for its influence on the different colors in the image, so an image shot with it renders multiple colors accurately, making the imaged colors of a multi-color image truer.
Embodiments of the present application also provide a computer storage medium, which may store a plurality of instructions adapted to be loaded by a processor and to perform the steps of the method according to any of the above embodiments.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 10, the electronic device 1000 may include: at least one electronic device processor 1001, at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002. The communication bus 1002 enables communication connections among these components. The user interface 1003 may include a display screen (Display) and a camera (Camera); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
The electronic device processor 1001 may include one or more processing cores. The electronic device processor 1001 connects the various parts throughout the electronic device 1000 using various interfaces and lines, and performs the various functions of the electronic device 1000 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1005 and invoking data stored in the memory 1005. Optionally, the electronic device processor 1001 may be implemented in at least one hardware form among Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The electronic device processor 1001 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU renders and draws the content to be displayed on the display screen; the modem handles wireless communication. It is understood that the modem may also not be integrated into the electronic device processor 1001 but instead be implemented by a separate chip.
The memory 1005 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable medium. The memory 1005 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above method embodiments, and the like; the data storage area may store the data referred to in the above method embodiments. Optionally, the memory 1005 may also be at least one storage device located remotely from the electronic device processor 1001. As shown in fig. 10, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a shooting program.
In the electronic device 1000 shown in fig. 10, the user interface 1003 is mainly used as an interface for providing input for a user, and acquiring data input by the user; and the electronic device processor 1001 may be configured to call the shooting program stored in the memory 1005, and specifically perform the following operations:
acquiring a first image acquired by a camera, and dividing a target object area in the first image into areas with different colors;
determining first exposure values corresponding to the color regions, and determining second exposure values of the first image according to the first exposure values;
and controlling the camera to shoot based on the second exposure value, and acquiring a second image acquired by the camera.
In some embodiments, when the electronic device processor 1001 performs the acquiring of the first image acquired by the camera, the following steps are specifically performed: responding to a shooting instruction, and judging whether a view finding range corresponding to the camera contains a target object or not; when the target object is contained in the view range corresponding to the camera, a first image collected by the camera is acquired.
In some embodiments, when determining the first exposure value corresponding to each color region, the electronic device processor 1001 specifically performs the following steps: and determining the first exposure value corresponding to each color area according to the imaging parameter corresponding to each color area.
In some embodiments, the electronic device processor 1001, when performing the determining of the second exposure value of the first image according to each first exposure value, specifically performs the following steps: and determining a second exposure value of the first image according to the first exposure value corresponding to each color area and the imaging parameter corresponding to each color area.
In some embodiments, when the electronic device processor 1001 determines the second exposure value of the first image according to the first exposure value corresponding to each color region and the imaging parameter corresponding to each color region, the following steps are specifically performed: calculating a first weight exposure value of each color area according to the weight value corresponding to each imaging parameter in each color area and the first exposure value corresponding to each color area; calculating the imaging parameter weight product of each color area according to the weight value corresponding to each imaging parameter in each color area; a second exposure value of the first image is determined based on the first weighted exposure value of each color region and the imaging parameter weighted product of each color region.
In some embodiments, after the electronic device processor 1001 performs the step of acquiring the second image acquired by the camera, the following steps are further specifically performed: and adjusting the color parameters of each color area in the second image according to the second exposure value.
In some embodiments, when the electronic device processor 1001 performs the adjustment of the color parameter of each color region in the second image according to the second exposure value, the following steps are specifically performed: calculating a brightness ratio parameter corresponding to each color area in the second image according to the first exposure value and the second exposure value corresponding to each color area in the second image; and adjusting the color parameters of the color areas in the second image based on the brightness ratio parameters.
In some embodiments, after the electronic device processor 1001 calculates the luminance ratio parameter corresponding to each color region in the second image according to the first exposure value and the second exposure value corresponding to each color region in the second image, the following steps are further specifically performed: obtaining a white balance gain value of each color area in the second image according to the three-primary-color parameters corresponding to the hues of the color areas in the second image; and adjusting the brightness ratio parameter corresponding to each color area in the second image according to the white balance gain value of each color area in the second image.
In some embodiments, when the electronic device processor 1001 performs the adjustment of the color parameter of each color region in the second image based on each brightness ratio parameter, the following steps are specifically performed: and adjusting at least one of hue parameters, saturation parameters and brightness parameters in the color parameters of each color area in the second image based on each brightness ratio parameter.
In some embodiments, when the electronic device processor 1001 performs the adjustment of the color parameter of each color region in the second image based on each brightness ratio parameter, the following steps are specifically performed: adjusting a hue conversion matrix of each color area in the second image based on each brightness ratio parameter; calculating hue parameters in the color parameters of each color area in the second image based on the hue conversion matrix; and/or acquiring a saturation parameter adjustment value corresponding to a saturation parameter in the color parameters of each color area in the second image; generating an adjustment quantity limiting parameter of each color area in the second image based on each brightness ratio parameter of each color area in the second image; adjusting saturation parameters in the color parameters of the color areas in the second image based on the adjustment quantity limiting parameters and the adjustment values of the saturation parameters of the color areas in the second image; and/or obtaining brightness parameter adjustment values corresponding to brightness parameters in the color parameters of the color areas in the second image; generating an adjustment quantity limiting parameter of each color area in the second image based on each brightness ratio parameter of each color area in the second image; and adjusting the brightness parameter in the color parameters of each color area in the second image based on each adjustment quantity limiting parameter and each brightness parameter adjustment value of each color area in the second image.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In view of the above description of the shooting method, the shooting device, and the electronic apparatus provided in the present application, those skilled in the art will recognize that the invention is not limited to the above description.

Claims (13)

1. A photographing method, characterized in that the method comprises:
acquiring a first image acquired by a camera, and dividing a target object area in the first image into areas with different colors;
determining first exposure values corresponding to the color regions, and determining second exposure values of the first image according to the first exposure values;
and controlling the camera to shoot based on the second exposure value, and acquiring a second image acquired by the camera.
2. The method of claim 1, wherein the acquiring the first image captured by the camera comprises:
responding to a shooting instruction, and judging whether a view finding range corresponding to the camera contains a target object or not;
and when determining that the view range corresponding to the camera contains a target object, acquiring a first image acquired by the camera.
3. The method of claim 1, wherein determining the first exposure value for each color region comprises:
and determining the first exposure value corresponding to each color area according to the imaging parameter corresponding to each color area.
4. The method of claim 1, wherein determining a second exposure value for the first image from each first exposure value comprises:
and determining a second exposure value of the first image according to the first exposure value corresponding to each color area and the imaging parameter corresponding to each color area.
5. The method of claim 4, wherein determining a second exposure value for the first image based on the first exposure value for each color region and the imaging parameter for each color region comprises:
calculating a first weight exposure value of each color area according to the weight value corresponding to each imaging parameter in each color area and the first exposure value corresponding to each color area;
calculating the imaging parameter weight product of each color area according to the weight value corresponding to each imaging parameter in each color area;
and determining a second exposure value of the first image according to the first weight exposure value of each color area and the imaging parameter weight product of each color area.
6. The method of claim 1, wherein after acquiring the second image captured by the camera, further comprising:
and adjusting the color parameters of each color area in the second image according to the second exposure value.
7. The method according to claim 6, wherein the adjusting the color parameters of the color regions in the second image according to the second exposure value comprises:
calculating a brightness ratio parameter corresponding to each color area in the second image according to the first exposure value and the second exposure value corresponding to each color area in the second image;
and adjusting the color parameters of the color areas in the second image based on the brightness ratio parameters.
8. The method according to claim 7, wherein after calculating the luminance ratio parameter corresponding to each color region in the second image according to the first exposure value and the second exposure value corresponding to each color region in the second image, further comprising:
obtaining white balance gain values of the color areas in the second image according to the three-primary-color parameters corresponding to the hues of the color areas in the second image;
and adjusting the brightness ratio parameter corresponding to each color area in the second image according to the white balance gain value of each color area in the second image.
9. The method of claim 7, wherein the adjusting the color parameter of each color region in the second image based on each brightness ratio parameter comprises:
adjusting at least one of a hue parameter, a saturation parameter, and a lightness parameter in the color parameters of each color region in the second image based on each brightness ratio parameter.
10. The method according to claim 9, wherein the adjusting at least one of a hue parameter, a saturation parameter, and a brightness parameter of the color parameters of each color region in the second image based on each luminance ratio parameter comprises:
adjusting a hue conversion matrix of each color area in the second image based on each brightness ratio parameter;
calculating hue parameters in the color parameters of each color area in the second image based on the hue conversion matrix; and/or
Acquiring a saturation parameter adjustment value corresponding to a saturation parameter in the color parameters of each color area in the second image;
generating an adjustment amount limiting parameter of each color area in the second image based on each brightness ratio parameter of each color area in the second image;
adjusting saturation parameters in the color parameters of the color regions in the second image based on the adjustment quantity limiting parameters and the saturation parameter adjustment values of the color regions in the second image; and/or
Acquiring brightness parameter adjustment values corresponding to brightness parameters in the color parameters of the color areas in the second image;
generating an adjustment amount limiting parameter of each color area in the second image based on each brightness ratio parameter of each color area in the second image;
and adjusting the brightness parameter in the color parameters of each color area in the second image based on each adjustment quantity limiting parameter and each brightness parameter adjustment value of each color area in the second image.
11. A photographing apparatus, characterized in that the apparatus comprises:
the device comprises a region dividing module, a color region dividing module and a color region dividing module, wherein the region dividing module is used for acquiring a first image acquired by a camera and dividing a target object region in the first image into different color regions;
the exposure calculation module is used for determining first exposure values corresponding to the color regions and determining second exposure values of the first image according to the first exposure values;
and the image shooting module is used for controlling the camera to shoot based on the second exposure value to obtain a second image collected by the camera.
12. A computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the steps of the method according to any of claims 1 to 10.
13. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method according to any one of claims 1 to 10 when executing the program.
CN202210098796.3A 2022-01-27 2022-01-27 Shooting method, shooting device, storage medium and electronic equipment Pending CN114500843A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210098796.3A CN114500843A (en) 2022-01-27 2022-01-27 Shooting method, shooting device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114500843A true CN114500843A (en) 2022-05-13

Family

ID=81477457

Country Status (1)

Country Link
CN (1) CN114500843A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080266418A1 (en) * 2007-04-26 2008-10-30 Samsung Electronics Co., Ltd. Method and apparatus for generating image
CN101304489A (en) * 2008-06-20 2008-11-12 北京中星微电子有限公司 Automatic exposure method and apparatus
CN104349071A (en) * 2013-07-25 2015-02-11 奥林巴斯株式会社 Imaging device and imaging method
CN105227852A (en) * 2014-06-11 2016-01-06 南京理工大学 Based on the automatic explosion method of color weighting

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116074636A (en) * 2022-05-30 2023-05-05 荣耀终端有限公司 Shooting method and electronic equipment
CN116074636B (en) * 2022-05-30 2023-11-17 荣耀终端有限公司 Shooting method and electronic equipment
CN117082362A (en) * 2023-08-25 2023-11-17 山东中清智能科技股份有限公司 Underwater imaging method and device

Similar Documents

Publication Publication Date Title
CN110447051B (en) Perceptually preserving contrast and chroma of a reference scene
CN108174118B (en) Image processing method and device and electronic equipment
EP3542347B1 (en) Fast fourier color constancy
US8948545B2 (en) Compensating for sensor saturation and microlens modulation during light-field image processing
US8965120B2 (en) Image processing apparatus and method of controlling the same
US7023580B2 (en) System and method for digital image tone mapping using an adaptive sigmoidal function based on perceptual preference guidelines
US20100103268A1 (en) Digital camera
CN114500843A (en) Shooting method, shooting device, storage medium and electronic equipment
KR20150142038A (en) Reference image selection for motion ghost filtering
KR20210118233A (en) Apparatus and method for shooting and blending multiple images for high-quality flash photography using a mobile electronic device
WO2023098251A1 (en) Image processing method, device, and readable storage medium
CN109478316B (en) Real-time adaptive shadow and highlight enhancement
JP2018014646A (en) Image processing apparatus and image processing method
CN116113976A (en) Image processing method and device, computer readable medium and electronic equipment
CN110677557B (en) Image processing method, image processing device, storage medium and electronic equipment
US10275860B2 (en) Image processing device, image capturing device, image processing method, and program
US8164650B2 (en) Image processing apparatus and method thereof
US20170163852A1 (en) Method and electronic device for dynamically adjusting gamma parameter
WO2018175337A1 (en) Perceptually preserving scene-referred contrasts and chromaticities
Kwon et al. Radiance map construction based on spatial and intensity correlations between LE and SE images for HDR imaging
KR20230041648A (en) Multi-frame depth-based multi-camera relighting of images
CN110796689A (en) Video processing method, electronic equipment and storage medium
CN112995633A (en) Image white balance processing method and device, electronic equipment and storage medium
JP5050141B2 (en) Color image exposure evaluation method
Lakshmi et al. Analysis of tone mapping operators on high dynamic range images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination