CN115797160A - Image generation method and device - Google Patents


Info

Publication number
CN115797160A
CN115797160A (application CN202211535350.9A)
Authority
CN
China
Prior art keywords
region
image
target
value
average value
Prior art date
Legal status
Pending
Application number
CN202211535350.9A
Other languages
Chinese (zh)
Inventor
张翀宇
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202211535350.9A priority Critical patent/CN115797160A/en
Publication of CN115797160A publication Critical patent/CN115797160A/en
Pending legal-status Critical Current

Abstract

The application discloses an image generation method and an image generation device, belonging to the field of image capture. The method includes: dividing a first original image into a plurality of regions according to image depth information of the first original image among a plurality of original images, and determining a depth average value of each region of the first original image, where the first original image is the original image with the most image information among the plurality of original images, and the plurality of original images correspond to the same backlight scene; determining a luminance average value of each region in a target HDR image according to the depth average value of each region of the first original image; and generating the target HDR image according to the luminance average values of the regions in the target HDR image and the image information of the plurality of original images.

Description

Image generation method and device
Technical Field
The application belongs to the field of image shooting, and particularly relates to an image generation method and device, electronic equipment and a storage medium.
Background
At present, with the popularization of photography, more and more people take pictures, and recording good-looking scenes during travel and daily life has become very common. However, most users have no professional training in photography, and everyday shooting often involves scenes that are difficult for novices, such as a backlight scene, in which the subject to be photographed lies between the light source and the camera. In such scenes, a photographing device can capture more detail and present a richer picture through High Dynamic Range (HDR) imaging technology.
However, in practical applications, current HDR technology still has problems, such as tone inversion: the resulting photo looks wrong to the human eye, with dark areas appearing bright and bright areas appearing dark, which degrades the user experience.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image generation method and apparatus thereof, which can solve the problem of tone inversion when generating an HDR image.
In a first aspect, an embodiment of the present application provides an image generation method, the method including: dividing a first original image into a plurality of regions according to image depth information of the first original image among a plurality of original images, and determining a depth average value of each region of the first original image, where the first original image is the original image with the most image information among the plurality of original images, and the plurality of original images correspond to the same backlight scene; determining a luminance average value of each region in a target HDR image according to the depth average value of each region of the first original image; and generating the target HDR image according to the luminance average values of the regions in the target HDR image and the image information of the plurality of original images.
In a second aspect, an embodiment of the present application provides an image generation apparatus, the apparatus including: a dividing module, configured to divide a first original image into a plurality of regions according to image depth information of the first original image among a plurality of original images, and to determine a depth average value of each region of the first original image, where the first original image is the original image with the most image information among the plurality of original images, and the plurality of original images correspond to the same backlight scene; a first determining module, configured to determine a luminance average value of each region in a target HDR image according to the depth average value of each region of the first original image; and a generating module, configured to generate the target HDR image according to the luminance average values of the regions in the target HDR image and the image information of the plurality of original images.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, which is stored in a storage medium and executed by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, the first original image may be divided into a plurality of regions according to the depth information of the first original image, which has the most image information among a plurality of images of the same backlight scene; the luminance average value of each region in the target HDR image is then determined, and the target HDR image is generated based on those luminance average values and the image information of the plurality of original images. In this way, the luminance average value of each region in the target HDR image is determined by the depth of that region, so the tone of the target HDR image follows the image depth naturally as a whole, tone inversion is avoided during HDR image synthesis, and the user's photographing experience is improved.
Drawings
Fig. 1 is a flowchart of an image generation method provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of an image generating apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one type, and the number of objects is not limited; for example, the first object may be one object or multiple objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The following describes in detail the image generation method provided in the embodiments of the present application with reference to the accompanying drawings.
Please refer to fig. 1, which is a flowchart illustrating an image generating method according to an embodiment of the present application. As shown in fig. 1, the method may include steps S11 to S13, which will be described in detail below.
Step S11: divide a first original image into multiple regions according to image depth information of the first original image among the multiple original images, and determine a depth average value of each region of the first original image, where the first original image is the original image with the most image information among the multiple original images, and the multiple original images correspond to the same backlight scene.
In an example of this embodiment, after the user triggers a shot in HDR mode, the photographing device captures multiple frames of original images together with their image information; in this example, the multiple original images are images of the same scene used for synthesizing the HDR image. In HDR techniques, the exposure times of the original images differ, so the original images carry different image information. In one example, at least 3 original images of the same scene are acquired when capturing the HDR image: an underexposed image, a normally exposed image, and an overexposed image. The original image with the most image information is the one that shows the most content, generally the normally exposed original image. Moreover, the HDR image and the original images have the same size, and each element occupies the same position in all of them.
It should be noted that, although the example illustrates that 3 original images are required to be acquired when capturing an HDR image, it is understood by those skilled in the art that the present disclosure is not limited thereto, and the number of original images and the exposure amount thereof may be flexibly set according to actual situations.
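The description above does not formalize which frame has the "most image information". As a hedged illustration only, the following minimal Python sketch scores each bracketed frame by grayscale histogram entropy (an assumed proxy; a normally exposed frame typically scores highest) and returns the index of the best one:

```python
import numpy as np

def select_reference_frame(frames):
    """Return the index of the frame with the most image information.

    Grayscale histogram entropy is used here as an illustrative proxy
    (an assumption, not the patent's definition); underexposed and
    overexposed frames concentrate their histograms and score low.
    """
    def entropy(img):
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]  # drop empty bins before taking the log
        return float(-(p * np.log2(p)).sum())

    return max(range(len(frames)), key=lambda i: entropy(frames[i]))
```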
In an example of this embodiment, while the first original image is obtained, image information of the first original image, such as brightness information and depth information of each pixel point of the first original image, may also be obtained. The brightness information can be automatically acquired when the image is shot, and the depth information can be acquired by an algorithm for calculating the depth of the image.
In one example of this embodiment, dividing the first original image into a plurality of regions according to its image depth information includes: obtaining a depth map of the first original image according to the image depth information, identifying a portrait portion and a background portion in the depth map, dividing the background portion of the depth map into a plurality of depth background regions, and dividing the first original image into a portrait region and a plurality of background regions based on the positions of the portrait portion and the depth background regions in the first original image.
In one example of the present embodiment, image depth information of the first original image may be determined using a Correlation-based method or a Cost Volume-based method, and a depth map of the first original image may be further generated based on the depth information.
In one example of this embodiment, a portrait portion in the depth image may be identified using a preset algorithm, and the portrait portion may include all parts of the human body and parts of the clothing accessory on the body. After the portrait portion is identified, the rest of the depth map is taken as the background portion.
In one example of this embodiment, the portrait portion may also be identified in the first original image, and the portrait portion may be determined in the depth map according to the position of the portrait portion in the first original image.
In one example of the present embodiment, after determining the background portion in the depth image, the depth map may be divided into a plurality of depth background regions based on the depth of each pixel point. Specifically, in the case that the background is relatively complex, the background area with the depth within the threshold may be divided into one area based on a preset depth threshold, for example, the background area with the depth of 0-20cm is one depth background area, and the background area with the depth of 20-40cm is another depth background area. In one example, the large depth background region may be further divided into a plurality of small depth background regions having relatively uniform sizes.
In one example of this embodiment, after dividing the depth map into a plurality of depth background regions, the corresponding positions of the first original image may be correspondingly divided into a portrait region and a plurality of background regions based on the positions of the portrait portion and the depth background regions in the depth map.
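The division described above, a portrait region plus fixed-width depth bands for the background, can be sketched as follows. This is a minimal illustration: the portrait mask is assumed to come from a separate segmentation step that the text leaves unspecified, and the 20 cm band width mirrors the example thresholds:

```python
import numpy as np

def divide_regions(depth_map, portrait_mask, band=20.0):
    """Label each pixel with a region id.

    Region 0 is the portrait region (from an assumed external
    segmentation step). Background pixels are binned into fixed-width
    depth bands: 0-20 cm -> region 1, 20-40 cm -> region 2, etc.
    """
    labels = np.zeros(depth_map.shape, dtype=np.int32)
    bg = ~portrait_mask
    labels[bg] = (depth_map[bg] // band).astype(np.int32) + 1
    return labels

def region_averages(values, labels):
    """Per-region mean of `values` (e.g. depth or luminance)."""
    return {int(r): float(values[labels == r].mean())
            for r in np.unique(labels)}
```

The same `region_averages` helper serves both the depth averages of step S11 and the luminance averages used later.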
In an example of this embodiment, the average luminance values of the respective regions may be determined according to the luminance value of each pixel of the first original image in the acquired image information. Meanwhile, according to the depth value of each pixel, the depth average value of each area is determined.
In an example of this embodiment, when an original image is obtained, it may be determined in advance whether a scene corresponding to the original image is a backlight scene, specifically, the first original image may be divided into a portrait area and a plurality of background areas in advance, and then a luminance average value of the portrait area of the first original image and a luminance average value of each background area of the first original image are obtained; and under the condition that the brightness average value of the portrait area of the first original image is smaller than the brightness average values of the preset number of background areas, determining that the first original image is an image of a backlight scene.
In an example of this embodiment, before determining the average luminance value of each region of the target HDR image, it may be determined whether the scene in which the first original image is located is a backlight scene, specifically, a luminance value of each pixel of the portrait region of the first original image may be obtained, and an average value of the luminance values may be calculated as an average value of the luminance values of the portrait region of the first original image. Similarly, the brightness value of each pixel in any background region in the first original image may also be obtained, and the average brightness value of the region may also be calculated.
In the first example of this embodiment, after the luminance average value of each region is calculated, the luminance average value of the portrait region may be compared with the luminance average value of each background region to count how many background regions are brighter than the portrait region. When the luminance average value of the portrait region is smaller than the luminance average values of at least a preset number of background regions, the portrait region is darker than most of the background, that is, the first original image is an image of a backlight scene. The preset number may be set according to the total number of background regions. In one example, the first original image includes 10 background regions and the preset number may be set to 70% of that count, i.e. 7; when the luminance average value of the portrait region is smaller than the luminance average values of 7 or more background regions, the first original image may be determined to be an image of a backlit scene.
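A minimal sketch of this first backlight test, with the 70% ratio from the example taken as an assumed default:

```python
def is_backlit_by_region_brightness(portrait_avg, background_avgs, ratio=0.7):
    """Treat the scene as backlit when the portrait region is darker
    than at least `ratio` (e.g. 70%) of the background regions.
    The 0.7 default is the example value, not a fixed requirement."""
    needed = round(ratio * len(background_avgs))
    darker = sum(1 for b in background_avgs if portrait_avg < b)
    return darker >= needed
```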
In an example of this embodiment, to determine whether the scene corresponding to the original images is a backlight scene, the number of pixels whose luminance value equals the upper limit value among the pixels of the plurality of background regions of the first original image may also be counted; when that number is larger than a preset number, the first original image is determined to be an image of a backlight scene.
In an example of this embodiment, whether the scene in which the first original image is located is a backlight scene is determined, and the number of pixels whose luminance values reach an upper limit value may also be determined according to the luminance values of the respective pixels of all background areas of the first original image acquired at the time of shooting, and in an example, the upper limit of the luminance values may be 255. When the number of pixels with the brightness value of the upper limit value is greater than the preset number, it may be indicated that a light source is captured in the background region of the first original image, that is, the first original image is an image of a backlight scene.
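The second backlight test, counting background pixels saturated at the luminance upper limit, might look like the following sketch; `preset_count` is an illustrative stand-in for the "preset number" mentioned above:

```python
import numpy as np

def is_backlit_by_saturation(luminance, background_mask,
                             preset_count=100, upper=255):
    """Backlit if more than `preset_count` background pixels sit at
    the luminance upper limit (255 for 8-bit data), suggesting a
    light source is in frame. `preset_count` is an assumed value."""
    saturated = int(np.count_nonzero(luminance[background_mask] == upper))
    return saturated > preset_count
```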
Step S12, determining the average value of the luminance of each region in the target HDR image according to the average value of the depth of each region of the first original image.
In one example of the embodiment, determining the average value of the brightness of each region in the target HDR image according to the average value of the depth of each region of the first original image comprises determining a first region in a plurality of regions of the first original image, and taking the brightness value of each pixel point in the first region of the first original image as the brightness value of each pixel point in the first region of the target HDR image; the average value of the brightness of the first area of the target HDR image is determined, and the average value of the brightness of each second area in the target HDR image is determined according to the average value of the brightness of the first area of the target HDR image and the average value of the depth of each area of the first original image, wherein the second area is any area except the first area in the plurality of areas.
In one example of this embodiment, when generating the HDR image, at least one region in the first original image does not need to be adjusted in brightness values. The first region is a region where the luminance values of corresponding pixels of the first original image and the target HDR image are completely the same, that is, a region that does not need to be adjusted. Therefore, the luminance value of each pixel of the first region of the first original image may be taken as the luminance value of the corresponding each pixel of the first region of the target HDR image.
In one example of this embodiment, determining the average value of the luminance of each second region in the target HDR image according to the average value of the luminance of the first region of the target HDR image and the average value of the depth of each region of the first original image comprises: determining a luminance average value of a second region of the target HDR image according to a luminance average value of the first region of the target HDR image, a depth average value of the first region of the first original image, and a depth average value of the second region of the first original image, wherein the luminance average value of the second region of the target HDR image is a sum of a first product and a preset error value, the first product is a product of the luminance average value of the first region of the target HDR image and a first ratio, and the first ratio is a ratio of a square of the depth average value of the first region to a square of the depth average value of the second region.
In one example of the present embodiment, the first region may be a portrait region or a background region. After determining the luminance values of the respective pixels in the first region of the target HDR image, further determining an average luminance value of the first region of the target HDR image, and determining an average luminance value of other regions of the target HDR image according to a formula of luminance per unit area, specifically, since the total luminance I of the light sources in one image is fixed, determining an average luminance value of other second regions according to a formula (1) of luminance per unit area and a distance variable.
y = I / x²      (1)

y_n = y_1 · (x_1² / x_n²) + c      (2)

where y is the luminance per unit area, I is the total luminance of the light source, and x is the distance variable; y_n is the luminance average value of any second region, y_1 is the luminance average value of the first region, x_1 is the depth average value of the first region, x_n is the depth average value of that second region, and c is a preset error value covering the inaccuracy of the calculated distance values.
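Formula (2) translates directly into code; `y1`, `x1`, `xn`, and `c` are the quantities defined above, and the function name is illustrative:

```python
def second_region_luminance(y1, x1, xn, c=0.0):
    """Formula (2): y_n = y_1 * (x_1^2 / x_n^2) + c, where y_1 and x_1
    are the luminance and depth averages of the first (unadjusted)
    region, x_n is the depth average of a second region, and c is the
    preset error value."""
    return y1 * (x1 ** 2) / (xn ** 2) + c
```

A region twice as far from the light source as the first region thus receives a quarter of its luminance average, plus the error term.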
In one example of the present embodiment, the average value of the luminance of each region may be determined by a light luminance decay curve.
Step S13 is to generate a target HDR image according to the average luminance value of each region in the target HDR image and the image information of the plurality of original images.
In one example of this embodiment, generating a target HDR image according to a luminance average value of each region in the target HDR image and image information of a plurality of original images includes: determining the average brightness value of each area in the first original image according to the brightness value of each pixel of the first original image; determining the brightness value of each pixel in the target HDR image according to the brightness average value of each region in the target HDR image, the brightness average value of each region in the first original image and the brightness value of each pixel in the first original image; and generating the target HDR image according to the brightness value of each pixel in the target HDR image and the image information of the plurality of original images.
In one example of the present embodiment, determining the luminance value of each pixel in the target HDR image according to the luminance average value of each region in the target HDR image, the luminance average value of each region in the first original image, and the luminance value of each pixel of the first original image includes: determining the brightness value of the target pixel in the target HDR image according to the brightness value of the target pixel in the first original image, the brightness average value of the target region in the target HDR image and the brightness average value of the target region in the first original image, wherein the target region is the region where the target pixel is located, the brightness value of the target pixel in the target HDR image is the sum of the brightness value of the target pixel in the first original image and a first difference value, and the first difference value is the difference value of the brightness average value of the target region in the target HDR image and the brightness average value of the target region in the first original image.
In an example of this embodiment, after the luminance average value of each region in the target HDR image is determined, the luminance value of each pixel in the target HDR image may be determined. Specifically, for any pixel in the target HDR image, the adjustment value for that pixel is the difference between the luminance average value of its region in the target HDR image and the luminance average value of the same region in the first original image; the luminance value of the pixel in the target HDR image is then the luminance value of the corresponding pixel in the first original image plus this adjustment value.
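The per-pixel adjustment just described, shifting each pixel by the difference between its region's target average and its region's original average, can be sketched as:

```python
import numpy as np

def adjust_pixel_luminance(orig, labels, target_avgs):
    """Per-pixel luminance for the target HDR image:
    hdr(p) = orig(p) + (target average of p's region
                        - original average of p's region).
    `labels` assigns a region id to each pixel; `target_avgs` maps
    region id -> target HDR luminance average for that region."""
    out = orig.astype(np.float64)  # astype copies; the input stays intact
    for region, target in target_avgs.items():
        mask = labels == region
        out[mask] += target - orig[mask].mean()
    return out
```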
After determining the luminance value of each pixel in the target HDR image, the target HDR image may be synthesized based on the image information of the plurality of original images and the luminance values of the respective pixels in the target HDR image.
In this example, the first original image may be divided into a plurality of regions according to the depth information of the first original image, which has the most image information among the images of the same backlight scene; the luminance average value of each region in the target HDR image may then be determined, and the target HDR image generated based on those luminance average values and the image information of the plurality of original images. In this way, the luminance average value of each region in the target HDR image is determined by the depth of that region, so the tone of the target HDR image follows the image depth naturally as a whole, tone inversion is avoided during HDR image synthesis, and the user's photographing experience is improved.
The image generation method provided in the embodiments of the present application may be executed by an image generation apparatus. In the embodiments of the present application, the image generation apparatus is described by taking as an example the case where the image generation apparatus executes the image generation method.
Corresponding to the above embodiment, referring to fig. 2, an embodiment of the present application further provides an image generating apparatus 100, including: the dividing module 101 is configured to divide a first original image into multiple regions according to image depth information of the first original image in the multiple original images, and determine a depth average value of each region of the first original image, where the first original image is an original image with the most image information in the multiple original images, and the multiple original images correspond to the same backlight scene; a first determining module 102, configured to determine a luminance average value of each region in the target HDR image according to a depth average value of each region of the first original image; the generating module 103 is configured to generate the target HDR image according to the average luminance values of the regions in the target HDR image and the image information of the multiple original images.
Optionally, the first determining module includes a first determining submodule, configured to determine a first region in multiple regions of a first original image, and use a luminance value of each pixel in the first region of the first original image as a luminance value of each pixel in the first region in the target HDR image; and the second determining sub-module is used for determining the brightness average value of the first region of the target HDR image and determining the brightness average value of each second region in the target HDR image according to the brightness average value of the first region of the target HDR image and the depth average value of each region of the first original image, wherein the second region is any region except the first region in the multiple regions.
Optionally, the second determining submodule is specifically configured to: determining a brightness average value of a second area of the target HDR image according to the brightness average value of the first area, the depth average value of the first area and the depth average value of the second area of the target HDR image; the average value of the luminance of the second region of the target HDR image is a sum of a first product and a preset error value, the first product is a product of the average value of the luminance of the first region of the target HDR image and a first ratio, and the first ratio is a ratio of a square of the average value of the depth of the first region to a square of the average value of the depth of the second region.
Optionally, a generating module, comprising: the third determining submodule is used for determining the brightness average value of each area in the first original image according to the brightness value of each pixel of the first original image; a fourth determining submodule, configured to determine a luminance value of each pixel in the target HDR image according to the luminance average value of each region in the target HDR image, the luminance average value of each region in the first original image, and the luminance value of each pixel in the first original image; and the generation submodule is used for generating the target HDR image according to the brightness value of each pixel in the target HDR image and the image information of the plurality of original images.
Optionally, the fourth determining submodule is specifically configured to: determining the brightness value of the target pixel in the target HDR image according to the brightness value of the target pixel in the first original image, the brightness average value of the target area in the target HDR image and the brightness average value of the target area in the first original image; the target area is an area where the target pixel is located, the luminance value of the target pixel in the target HDR image is the sum of the luminance value of the target pixel in the first original image and a first difference value, and the first difference value is the difference value between the luminance average value of the target area in the target HDR image and the luminance average value of the target area in the first original image.
This example also provides an apparatus that may divide the first original image into a plurality of regions according to the depth information of the first original image, which has the most image information among the images of the same backlight scene; determine the luminance average value of each region in the target HDR image; and generate the target HDR image based on those luminance average values and the image information of the plurality of original images. In this way, the luminance average value of each region in the target HDR image is determined by the depth of that region, so the tone of the target HDR image follows the image depth naturally as a whole, tone inversion is avoided during HDR image synthesis, and the user's photographing experience is improved.
The image generation apparatus in the embodiment of the present application may be an electronic device, and may also be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
The image generation device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the present application.
The image generation device provided in the embodiment of the present application can implement each process implemented by the method embodiment, and is not described here again to avoid repetition.
Corresponding to the above embodiments, optionally, as shown in fig. 3, an electronic device 800 is further provided in the embodiment of the present application, and includes a processor 801 and a memory 802, where the memory 802 stores a program or an instruction that can be executed on the processor 801, and when the program or the instruction is executed by the processor 801, the steps of the embodiment of the image generation method are implemented, and the same technical effects can be achieved, and are not described herein again to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic diagram of a hardware structure of an electronic device implementing the embodiment of the present application.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 910 through a power management system, so that charging, discharging, and power consumption management functions are managed through the power management system. The electronic device structure shown in Fig. 4 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, which is not described in detail here.
The processor 910 is configured to divide a first original image into a plurality of regions according to image depth information of the first original image in the plurality of original images, and determine a depth average value of each region of the first original image, where the first original image is the original image with the most image information in the plurality of original images, and the plurality of original images correspond to the same backlight scene; determine the brightness average value of each region in the target HDR image according to the depth average value of each region of the first original image; and generate the target HDR image according to the brightness average value of each region in the target HDR image and the image information of the plurality of original images.
Optionally, determining the average brightness value of each region in the target HDR image according to the average depth value of each region of the first original image includes: determining a first region in a plurality of regions of the first original image, and using the brightness value of each pixel point in the first region of the first original image as the brightness value of each pixel point in the first region of the target HDR image; and determining a brightness average value of the first region of the target HDR image, and determining a brightness average value of each second region in the target HDR image according to the brightness average value of the first region of the target HDR image and the depth average value of each region of the first original image, where the second region is any region other than the first region in the plurality of regions.
Optionally, determining the average value of the luminance of each second region in the target HDR image according to the average value of the luminance of the first region of the target HDR image and the average value of the depth of each region of the first original image comprises: determining a brightness average value of a second region of the target HDR image according to the brightness average value of the first region of the target HDR image, the depth average value of the first region of the first original image and the depth average value of the second region of the first original image; the average brightness value of the second region of the target HDR image is a sum of a first product and a preset error value, the first product is a product of the average brightness value of the first region of the target HDR image and a first ratio, and the first ratio is a ratio of a square of a depth average value of the first region to a square of a depth average value of the second region.
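The relation stated above — the second region's luminance average equals the first region's luminance average scaled by the squared ratio of the depth averages, plus a preset error value — can be written out directly. The function and parameter names below are illustrative, not taken from the embodiment.

```python
def second_region_avg(first_lum_avg: float,
                      first_depth_avg: float,
                      second_depth_avg: float,
                      preset_error: float = 0.0) -> float:
    """Luminance average of a second region in the target HDR image:
    first product (first region's luminance average times the first
    ratio of squared depth averages) plus a preset error value."""
    first_ratio = (first_depth_avg ** 2) / (second_depth_avg ** 2)
    return first_lum_avg * first_ratio + preset_error

# A region twice as far away gets a quarter of the first region's
# luminance average before the error correction (inverse-square falloff).
avg = second_region_avg(first_lum_avg=200.0,
                        first_depth_avg=1.0,
                        second_depth_avg=2.0,
                        preset_error=5.0)
```

With these toy numbers the result is 200 × (1² / 2²) + 5 = 55, which is why farther regions come out darker and the overall tone follows scene depth rather than the per-frame exposure.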
Optionally, generating the target HDR image according to the average luminance values of the respective regions in the target HDR image and the image information of the plurality of original images includes: determining the average brightness value of each area in the first original image according to the brightness value of each pixel of the first original image; determining the brightness value of each pixel in the target HDR image according to the brightness average value of each region in the target HDR image, the brightness average value of each region in the first original image and the brightness value of each pixel in the first original image; and generating the target HDR image according to the brightness value of each pixel in the target HDR image and the image information of the original images.
Optionally, determining the luminance value of each pixel in the target HDR image according to the luminance average value of each region in the target HDR image, the luminance average value of each region in the first original image, and the luminance value of each pixel in the first original image, includes: determining the brightness value of the target pixel in the target HDR image according to the brightness value of the target pixel in the first original image, the brightness average value of the target area in the target HDR image and the brightness average value of the target area in the first original image; the target area is an area where the target pixel is located, the luminance value of the target pixel in the target HDR image is the sum of the luminance value of the target pixel in the first original image and a first difference value, and the first difference value is the difference value between the luminance average value of the target area in the target HDR image and the luminance average value of the target area in the first original image.
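The per-pixel step above reduces to adding, to each pixel of the first original image, the difference between its region's luminance average in the target HDR image and in the first original image. A minimal sketch, with assumed array shapes and a label map mapping each pixel to its region:

```python
import numpy as np

def target_pixel_luminance(orig_lum: np.ndarray,
                           labels: np.ndarray,
                           hdr_region_avgs: dict,
                           orig_region_avgs: dict) -> np.ndarray:
    """For each region, shift all of its pixels by the first difference
    value (target-HDR region average minus first-original region average)."""
    out = orig_lum.astype(float).copy()
    for region, hdr_avg in hdr_region_avgs.items():
        first_diff = hdr_avg - orig_region_avgs[region]
        out[labels == region] += first_diff
    return out

# 2x2 toy image: top row is region 0, bottom row is region 1.
orig = np.array([[100.0, 110.0],
                 [ 40.0,  50.0]])
labels = np.array([[0, 0],
                   [1, 1]])
hdr = target_pixel_luminance(orig, labels,
                             hdr_region_avgs={0: 120.0, 1: 70.0},
                             orig_region_avgs={0: 105.0, 1: 45.0})
```

Because only a constant offset is added per region, the luminance ordering of pixels inside a region is preserved — consistent with the stated goal of avoiding tone inversion during HDR synthesis.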
In this example, an electronic device is provided that may divide a first original image into a plurality of regions according to depth information of the first original image, which is the original image having the most image information among a plurality of original images of the same backlit scene, determine a luminance average value of each region in a target HDR image, and generate the target HDR image based on the luminance average value of each region in the target HDR image and the image information of the plurality of original images. In this way, the luminance average value of each region in the target HDR image can be determined according to the depth of each region, so that the tone of the target HDR image is determined naturally as a whole according to image depth, tone inversion is avoided during HDR image synthesis, and the user's photographing experience is improved.
It should be understood that, in the embodiment of the present application, the input unit 904 may include a Graphics Processing Unit (GPU) 9041 and a microphone 9042, and the graphics processing unit 9041 processes image data of a still picture or a video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also called a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 909 may be used to store software programs as well as various data. The memory 909 may mainly include a first storage area storing a program or an instruction and a second storage area storing data, where the first storage area may store an operating system and an application program or an instruction (such as a sound playing function or an image playing function) required for at least one function. Further, the memory 909 may include volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 909 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 910 may include one or more processing units; optionally, the processor 910 integrates an application processor, which primarily handles operations involving the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It is to be appreciated that the modem processor described above may not be integrated into processor 910.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image generation method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above image generation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
The embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the above embodiment of the image generation method, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method of generating an image, comprising:
dividing a first original image into a plurality of regions according to image depth information of the first original image in the plurality of original images, and determining a depth average value of each region of the first original image, wherein the first original image is an original image with the most image information in the plurality of original images, and the plurality of original images correspond to the same backlight scene;
determining the average value of brightness of each region in the target HDR image according to the average value of depth of each region of the first original image;
and generating the target HDR image according to the brightness average value of each region in the target HDR image and the image information of the original images.
2. The method of claim 1, wherein determining the average luminance value of each region in the target HDR image according to the average depth value of each region in the first original image comprises:
determining a first region in a plurality of regions of the first original image, and taking the brightness value of each pixel point in the first region of the first original image as the brightness value of each pixel point in the first region of the target HDR image;
determining a brightness average value of a first region of the target HDR image, and determining a brightness average value of each second region in the target HDR image according to the brightness average value of the first region of the target HDR image and a depth average value of each region of the first original image, wherein the second region is any region except the first region in the plurality of regions.
3. The method as claimed in claim 2, wherein the determining the average luminance value of each second region in the target HDR image according to the average luminance value of the first region of the target HDR image and the average depth value of each region of the first original image comprises:
determining a brightness average value of a second region of the target HDR image according to the brightness average value of the first region of the target HDR image, the depth average value of the first region of the first original image and the depth average value of the second region of the first original image;
wherein the average value of the luminance of the second region of the target HDR image is a sum of a first product and a preset error value, the first product is a product of the average value of the luminance of the first region of the target HDR image and a first ratio, and the first ratio is a ratio of a square of the average value of the depth of the first region and a square of the average value of the depth of the second region.
4. The method as claimed in claim 3, wherein the generating the target HDR image according to the luminance average of each region in the target HDR image and the image information of the plurality of original images comprises:
determining the average brightness value of each area in a first original image according to the brightness value of each pixel of the first original image;
determining the brightness value of each pixel in the target HDR image according to the brightness average value of each region in the target HDR image, the brightness average value of each region in the first original image and the brightness value of each pixel in the first original image;
generating the target HDR image according to the brightness value of each pixel in the target HDR image and the image information of the original images.
5. The method of claim 4, wherein determining the luminance value of each pixel in the target HDR image according to the average luminance value of each region in the target HDR image, the average luminance value of each region in the first original image, and the luminance value of each pixel in the first original image comprises:
determining a brightness value of a target pixel in a target HDR image according to the brightness value of the target pixel in the first original image, the brightness average value of a target area in the target HDR image and the brightness average value of the target area in the first original image;
wherein the target region is a region where the target pixel is located, the luminance value of the target pixel in the target HDR image is a sum of the luminance value of the target pixel in the first original image and a first difference value, and the first difference value is a difference value between a luminance average value of the target region in the target HDR image and a luminance average value of the target region in the first original image.
6. An image generation apparatus, comprising:
the image processing device comprises a dividing module, a calculating module and a calculating module, wherein the dividing module is used for dividing a first original image into a plurality of areas according to the image depth information of the first original image in the plurality of original images and determining the depth average value of each area of the first original image, the first original image is the original image with the most image information in the plurality of original images, and the plurality of original images correspond to the same backlight scene;
a first determining module, configured to determine a luminance average value of each region in a target HDR image according to a depth average value of each region of the first original image;
and the generation module is used for generating the target HDR image according to the brightness average value of each area in the target HDR image and the image information of the plurality of original images.
7. The apparatus of claim 6, wherein the first determining module comprises:
a first determining submodule, configured to determine a first region in multiple regions of the first original image, and use a luminance value of each pixel in the first region of the first original image as a luminance value of each pixel in the first region of the target HDR image;
and the second determining sub-module is used for determining the brightness average value of the first region of the target HDR image and determining the brightness average value of each second region in the target HDR image according to the brightness average value of the first region of the target HDR image and the depth average value of each region of the first original image, wherein the second region is any region except the first region in the plurality of regions.
8. The apparatus of claim 7, wherein the second determining submodule is specifically configured to:
determining a luminance average value of a second region of the target HDR image according to the luminance average value of the first region of the target HDR image, the depth average value of the first region of the first original image, and the depth average value of the second region of the first original image;
wherein the mean luminance value of the second region of the target HDR image is a sum of a first product and a preset error value, the first product is a product of the mean luminance value of the first region of the target HDR image and a first ratio, and the first ratio is a ratio of a square of the mean depth value of the first region to a square of the mean depth value of the second region.
9. The apparatus of claim 8, wherein the generating module comprises:
the third determining submodule is used for determining the brightness average value of each area in the first original image according to the brightness value of each pixel of the first original image;
a fourth determining sub-module, configured to determine a luminance value of each pixel in the target HDR image according to the average luminance value of each region in the target HDR image, the average luminance value of each region in the first original image, and the luminance value of each pixel in the first original image;
a generating sub-module, configured to generate the target HDR image according to the luminance values of the pixels in the target HDR image and the image information of the multiple original images.
10. The apparatus of claim 9, wherein the fourth determination submodule is specifically configured to:
determining a brightness value of a target pixel in a target HDR image according to the brightness value of the target pixel in the first original image, the brightness average value of a target area in the target HDR image and the brightness average value of the target area in the first original image;
wherein the target region is a region where the target pixel is located, the luminance value of the target pixel in the target HDR image is a sum of the luminance value of the target pixel in the first original image and a first difference value, and the first difference value is a difference value between a luminance average value of the target region in the target HDR image and a luminance average value of the target region in the first original image.
CN202211535350.9A 2022-11-30 2022-11-30 Image generation method and device Pending CN115797160A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211535350.9A CN115797160A (en) 2022-11-30 2022-11-30 Image generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211535350.9A CN115797160A (en) 2022-11-30 2022-11-30 Image generation method and device

Publications (1)

Publication Number Publication Date
CN115797160A true CN115797160A (en) 2023-03-14

Family

ID=85444755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211535350.9A Pending CN115797160A (en) 2022-11-30 2022-11-30 Image generation method and device

Country Status (1)

Country Link
CN (1) CN115797160A (en)

Similar Documents

Publication Publication Date Title
CN110675310B (en) Video processing method and device, electronic equipment and storage medium
CN111835982B (en) Image acquisition method, image acquisition device, electronic device, and storage medium
CN112422798A (en) Photographing method and device, electronic equipment and storage medium
CN113194256B (en) Shooting method, shooting device, electronic equipment and storage medium
CN114390197A (en) Shooting method and device, electronic equipment and readable storage medium
CN115439386A (en) Image fusion method and device, electronic equipment and storage medium
CN115797160A (en) Image generation method and device
CN112446848A (en) Image processing method and device and electronic equipment
CN114025237A (en) Video generation method and device and electronic equipment
CN113989387A (en) Camera shooting parameter adjusting method and device and electronic equipment
CN112399092A (en) Shooting method and device and electronic equipment
CN113489901B (en) Shooting method and device thereof
CN114143448B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112367470B (en) Image processing method and device and electronic equipment
CN112399091B (en) Image processing method and device and electronic equipment
CN113923367B (en) Shooting method and shooting device
CN111815531B (en) Image processing method, device, terminal equipment and computer readable storage medium
CN114979479A (en) Shooting method and device thereof
CN117793513A (en) Video processing method and device
CN116128844A (en) Image quality detection method, device, electronic equipment and medium
CN116320729A (en) Image processing method, device, electronic equipment and readable storage medium
CN116017146A (en) Image processing method and device
CN116320769A (en) Exposure adjustment method, device, electronic equipment and readable storage medium
CN116342992A (en) Image processing method and electronic device
CN114363507A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination