CN115066880A - Method for generating captured image and electronic device - Google Patents
- Publication number
- CN115066880A (application number CN202080090782.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- captured
- light source
- electronic device
- initial images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
All under H04N23/00 — Cameras or camera modules comprising electronic image sensors; control thereof (H04N — Pictorial communication, e.g. television):
- H04N23/45 — for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
- H04N23/741 — Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
- H04N23/80 — Camera processing pipelines; components thereof
- H04N23/951 — Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
Abstract
A method, comprising: capturing a plurality of first initial images within a first dynamic range, wherein the plurality of first initial images are captured at different imaging settings; generating an intermediate image of a second dynamic range from the plurality of first initial images, wherein the second dynamic range is higher than the first dynamic range; detecting a light source region from a light detection image to generate a light source map, wherein the light detection image is at least one of the plurality of first initial images and is suitable for detecting the light source region; and generating a captured image from the intermediate image and the light source map, wherein the position of the light source region of the captured image is determined from the light source map, and the color of the light source region of the captured image is determined from the intermediate image.
Description
Technical Field
The present application relates to a method of generating a captured image in an electronic device comprising a camera assembly, and to such an electronic device.
Background
Electronic devices such as smart phones and tablet terminals are widely used in daily life. Today, many electronic devices are equipped with camera assemblies to capture images. Some electronic devices are portable and easily carried so that their users can easily capture images using the camera assembly of the electronic device at any time and place.
However, if a user of the electronic device captures an image at night, the captured image may include one or more light source regions that are overexposed or surrounded by halos. After the image is captured, the overexposed light source regions and the halo regions in the captured image may be subjected to image correction processing.
However, even after such image correction processing, the quality of the light source regions in the captured image is sometimes unsatisfactory to the user, for example because their color does not suit the light sources, or because the bokeh effect introduced to the light source regions in the captured image is too strong. Therefore, a technique for improving the quality of light source regions in captured images is required.
Disclosure of Invention
The present application is directed to solving at least one of the above technical problems. Accordingly, there is a need to provide a method and an electronic device for generating a captured image.
According to the present application, a method of generating a captured image in an electronic device including a camera assembly may include:
capturing, by a camera assembly, a plurality of first initial images in a first dynamic range, wherein the plurality of first initial images are captured at different imaging settings;
generating an intermediate image in a second dynamic range from the plurality of first initial images, wherein the second dynamic range is higher than the first dynamic range;
detecting a light source region from a light detection image to generate a light source map, wherein the light detection image is at least one of the plurality of first initial images and is adapted to detect the light source region;
a captured image is generated from the intermediate image and the light source map, wherein the position of the light source region of the captured image is determined from the light source map, and the color of the light source region of the captured image is determined from the intermediate image.
In some embodiments, in the method, capturing the plurality of first initial images may comprise: capturing the plurality of first initial images at different exposure values.
In some embodiments, in the method, the light detection image may be the one of the plurality of first initial images captured at the lowest exposure value.
In some embodiments, in the method, the lowest exposure value may be low enough that light from regions other than the light source region is not detected in the first initial image.
In some embodiments, in the method, the lowest exposure value may be a negative value.
In some embodiments, in the method, the method may further include:
capturing a second initial image by the camera assembly; and
a depth map is computed from the plurality of first initial images and the second initial image.
In some embodiments, in the method, generating the captured image may further comprise: performing a bokeh rendering process to introduce a bokeh effect into the captured image according to the depth map.
In some embodiments, in the method, the plurality of first initial images may be captured by a first main camera of the camera assembly and the second initial image may be captured by a second main camera of the camera assembly.
In some embodiments, in the method, generating the captured image may further include: performing a segmentation process to introduce a bokeh effect into the captured image based on the intermediate image.
In some embodiments, in the method, the plurality of first initial images may be captured by a first main camera of the camera assembly on the back side of the electronic device, or by a sub-camera of the camera assembly on the front side of the electronic device.
According to the present application, an electronic device may include:
a camera assembly capable of capturing images in a first dynamic range; and
an image processor for:
-capturing a plurality of first initial images in a first dynamic range by a camera assembly, wherein the plurality of first initial images are captured at different imaging settings;
-generating an intermediate image in a second dynamic range from the plurality of first initial images, wherein the second dynamic range is higher than the first dynamic range;
-detecting light source regions from a light detection image to generate a light source map, wherein the light detection image is at least one of the plurality of first initial images and is adapted to detect the light source regions;
-generating a captured image from the intermediate image and the light source map, wherein the position of the light source area of the captured image is determined from the light source map and the color of the light source area of the captured image is determined from the intermediate image.
In some embodiments, in the electronic device, when capturing the plurality of first initial images, the plurality of first initial images may be captured at different exposure values.
In some embodiments, in the electronic device, the light detection image may be the one of the plurality of first initial images captured at the lowest exposure value.
In some embodiments, in the electronic device, the lowest exposure value may be low enough that light from regions other than the light source region is not detected in the first initial image.
In some embodiments, the lowest exposure value may be a negative value in the electronic device.
In some embodiments, in the electronic device, the image processor may be further configured to:
capturing a second initial image by the camera assembly; and
a depth map is computed from the plurality of first initial images and the second initial image.
In some embodiments, in the electronic device, when generating the captured image, a bokeh rendering process may also be performed to introduce a bokeh effect into the captured image according to the depth map.
In some embodiments, in the electronic device, the plurality of first initial images may be captured by a first primary camera of the camera assembly and the second initial image may be captured by a second primary camera of the camera assembly.
In some embodiments, in the electronic device, when generating the captured image, a bokeh effect may also be introduced into the captured image from the intermediate image by a segmentation process.
In some embodiments, in the electronic device, the plurality of first initial images may be captured by a first main camera of the camera assembly on the back side of the electronic device, or by a sub-camera of the camera assembly on the front side of the electronic device.
Drawings
These and/or other aspects and advantages of embodiments of the present application will become apparent and more readily appreciated from the following description, taken in conjunction with the accompanying drawings, in which:
fig. 1 shows a back view of an electronic device according to a first embodiment of the present application;
FIG. 2 shows a front view of an electronic device according to a first embodiment of the application;
FIG. 3 shows a block diagram of an electronic device according to a first embodiment of the application;
fig. 4 shows a schematic flow chart of an image capturing process performed by the electronic device according to the first embodiment of the present application;
FIG. 5 shows a back view of an electronic device according to a second embodiment of the present application; and
fig. 6 shows a schematic flow chart of an image capturing process performed by an electronic device according to a second embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings. Throughout the specification, the same or similar elements and elements having the same or similar functions are denoted by the same reference numerals. The embodiments described herein in connection with the drawings are illustrative, are intended to explain the present application, and should not be construed as limiting the present application.
[ first embodiment ]
Fig. 1 shows a back view of an electronic device according to a first embodiment of the present application. Fig. 2 shows a front view of an electronic device according to a first embodiment of the present application.
As shown in fig. 1 and 2, electronic device 10 may include a display 20 and a camera assembly 30. In the present embodiment, the camera assembly 30 includes a first main camera 32, a second main camera 34, and a sub-camera 36. The first and second primary cameras 32, 34 may capture images at the back of the electronic device 10, while the secondary camera 36 may capture images at the front of the electronic device 10. Accordingly, the first and second main cameras 32 and 34 are referred to as outer cameras, and the sub-camera 36 is referred to as an inner camera.
The first and second main cameras 32, 34 may have the same performance and/or characteristics, or the first and second main cameras 32, 34 may have different performance and/or characteristics. For example, in a case where the first main camera 32 and the second main camera 34 have different performances and/or characteristics from each other, the first main camera 32 may be equipped with a full-color image sensor, and the second main camera 34 may be equipped with a black-and-white image sensor. Further, the first main camera 32 may be a camera adapted to capture still images, and the second main camera 34 may be a camera adapted to capture moving images. Further, the first main camera 32 may be a camera equipped with a wide-angle lens, and the second main camera 34 may be a camera equipped with a telephoto lens.
In the present embodiment, the performance of the sub-camera 36 is lower than the performance of the first and second main cameras 32 and 34. However, the performance of the sub-camera 36 may also be the same as the performance of the first and second main cameras 32 and 34.
Although the electronic device 10 according to the present embodiment has three cameras, the number of cameras is not limited to three. For example, the electronic device 10 may have three, four, five, or more main cameras.
Fig. 3 shows a block diagram of an electronic device according to a first embodiment of the application. As shown in FIG. 3, in addition to display 20 and camera assembly 30, electronic device 10 may include a main processor 40, an image signal processor 42, a memory 44, a power circuit 46, and a communication circuit 48. Display 20, camera assembly 30, main processor 40, image signal processor 42, memory 44, power circuit 46, and communication circuit 48 are coupled together by bus 50.
The main processor 40 executes one or more programs stored in the memory 44. The main processor 40 implements various applications and data processing of the electronic device 10 by executing programs. The main processor 40 may be one or more computer processors. The main processor 40 is not limited to one CPU core, and may have a plurality of CPU cores. The main processor 40 may be the main CPU of the electronic device 10, an Image Processing Unit (IPU), or a DSP provided with the camera assembly 30.
In the present embodiment, the main processor 40 and the image signal processor 42 cooperate to acquire images captured by the camera assembly 30. That is, the main processor 40 and the image signal processor 42 are used to capture an image through the camera assembly 30 and perform various image processes on the captured image.
The memory 44 stores a program to be executed by the main processor 40 and various data. For example, data of the captured image is stored in the memory 44.
The memory 44 may be a high speed RAM memory or may be a non-volatile memory such as flash memory, disk memory, or the like.
The power supply circuit 46 may have a battery (not shown) such as a lithium-ion rechargeable battery and a Battery Management Unit (BMU) for managing the battery.
The communication circuit 48 is used to receive and transmit data to communicate with the Internet or other devices via wireless communication. The wireless communication may employ any communication standard or protocol, including but not limited to: Global System for Mobile communications (GSM), Code Division Multiple Access (CDMA), Long Term Evolution (LTE), LTE-Advanced, and the fifth-generation mobile communication system (5G).
The communication circuit 48 may include an antenna and Radio Frequency (RF) circuitry.
Fig. 4 shows a flowchart of an image capturing process performed by the electronic apparatus according to the present embodiment. In the present embodiment, the image capturing process is executed by the main processor 40 in cooperation with the image signal processor 42. Thus, the main processor 40 and the image signal processor 42 constitute an image processor in the present embodiment.
As shown in fig. 4, the electronic device 10 captures a plurality of first initial images within a first dynamic range through the first main camera 32 of the camera assembly 30 (step S10), and captures a second initial image within the first dynamic range through the second main camera 34 of the camera assembly 30 (step S12).
In the present embodiment, for example, the camera assembly 30 may have settings for auto focus, auto exposure, and auto white balance; that is, it has individual imaging settings such as ISO sensitivity, exposure time, and exposure value.
The imaging settings of camera assembly 30 may be automatically set by electronic device 10. In other words, the user may select automatic settings for the imaging settings of camera assembly 30. Alternatively, the imaging settings of camera assembly 30 may be preset by the user. In other words, the user can set each item of imaging settings of the camera assembly 30 in advance.
Further, in the present embodiment, when the electronic apparatus 10 captures a plurality of first initial images in step S10, the electronic apparatus 10 changes the exposure value of each first initial image to generate an intermediate image in the second dynamic range in a subsequent process. In the present embodiment, the first dynamic range corresponds to a low dynamic range, and the second dynamic range corresponds to a high dynamic range.
For example, in step S10, the electronic device 10 captures three first initial images with exposure values set to -1, 0, and +1, respectively. That is, assuming the electronic device 10 has set the exposure value of the camera assembly 30 to 0, the exposure value is increased by 1 and decreased by 1, respectively, to capture the two further first initial images.
However, the number of first initial images is optional. For example, the number of the first initial images captured in step S10 may be two, four, five, or the like.
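For illustration only, step S10 can be sketched as follows. The `Camera` class and its methods are hypothetical placeholders; the patent does not specify a camera API, and real device APIs (e.g. a phone camera HAL) differ.

```python
# Illustrative sketch of step S10 (exposure bracketing). The Camera object
# and its methods are hypothetical, not an API recited in the patent.
def capture_first_initial_images(camera, ev_values=(-1, 0, +1)):
    """Capture one first initial image per exposure value."""
    first_initial_images = []
    for ev in ev_values:
        camera.set_exposure_compensation(ev)   # hypothetical setter
        first_initial_images.append(camera.capture())  # hypothetical capture call
    return first_initial_images
```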
When capturing the second initial image in step S12, the electronic device 10 captures a single second initial image, which is used to calculate a depth map in subsequent processing. In the present embodiment, although the first initial images and the second initial image are all captured within the first dynamic range, the dynamic range of the first initial images may differ from that of the second initial image. In other words, the first dynamic range is simply the native dynamic range of the first main camera 32 and the second main camera 34.
In the present embodiment, the first initial image with an exposure value of 0 and the second initial image are captured simultaneously, because an exposure value of 0 is the imaging setting set by the electronic device 10. However, the capturing time of the first initial image with an exposure value of 0 and the capturing time of the second initial image may also differ.
Next, the electronic device 10 executes the high dynamic range generation process (step S14). More specifically, in the high dynamic range generation process, the electronic device 10 generates an intermediate image in the second dynamic range from the plurality of first initial images.
In the present embodiment, the electronic device 10 generates the intermediate image from the three first initial images captured at exposure values of -1, 0, and +1, respectively. The high dynamic range intermediate image may be generated by combining the plurality of low dynamic range first initial images.
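As one possible realization of step S14 (the patent does not prescribe a specific HDR algorithm), exposure fusion in the style of Mertens et al. can combine the bracketed frames without knowing the camera response curve; a minimal sketch using OpenCV:

```python
import cv2
import numpy as np

def generate_intermediate_image(first_initial_images):
    """Fuse LDR frames captured at EV -1, 0, +1 into one HDR-like image.

    first_initial_images: list of uint8 BGR frames from step S10.
    MergeMertens returns float32 values in roughly [0, 1]; the result is
    scaled back to 8 bits here only so it can be displayed or stored.
    """
    merger = cv2.createMergeMertens()
    fused = merger.process(first_initial_images)
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```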
Next, as shown in fig. 4, the electronic device 10 calculates a depth map from the intermediate image generated in step S14 and the second initial image captured in step S12 (step S16).
More specifically, the position of the first main camera 32 differs from that of the second main camera 34. Therefore, the viewpoint of the intermediate image, which is generated from the first initial images captured by the first main camera 32, differs from the viewpoint of the second initial image captured by the second main camera 34. Using the disparity between the intermediate image and the second initial image, the electronic device 10 may generate a depth map that indicates the distance between the electronic device 10 and the surfaces of objects appearing in the intermediate image and the second initial image.
For example, the depth map may be calculated by the main processor 40 and/or the image signal processor 42, or internally within a camera module of the camera assembly 30. Alternatively, a dedicated depth map calculation circuit may be placed in the electronic device 10.
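A sketch of step S16 using one conventional stereo approach (semi-global block matching), which the patent does not mandate; the focal length and baseline would come from camera calibration and are assumptions here:

```python
import cv2
import numpy as np

def compute_depth_map(left_gray, right_gray, focal_px, baseline_m):
    """Depth from stereo disparity between two rectified grayscale views
    (first and second main cameras). Parameter values are illustrative."""
    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = 0.1  # guard invalid pixels before dividing
    return focal_px * baseline_m / disparity  # Z = f * B / d
```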
Next, as shown in fig. 4, the electronic device 10 detects the light source region from the light detection image and generates a light source map indicating the position of the light source region in the intermediate image (step S18).
The light source map may be an image indicating the center of the light source or may be a set of one or more coordinates of the center of the light source on the captured image. The light source map may be temporarily stored in the memory 44.
The light detection image is at least one of the plurality of first initial images and is suitable for detecting the light source region. In this case, for example, the first initial image captured at an exposure value of -1 serves as the light detection image, because -1 is the lowest exposure value among the plurality of first initial images captured in step S10. In other words, the exposure value of -1 is low enough that light from regions other than the light source region is not detected in the first initial image.
In general, to detect the light source region in the light detection image, the exposure value (EV) is preferably negative. That is, the light detection image should be one of the first initial images captured at a negative exposure value. Here, the exposure value is determined by the shutter speed, the ISO sensitivity, and the aperture.
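For reference, the standard APEX relations connecting these quantities (general photographic background, not recited in the patent; the values -1, 0, +1 above are exposure compensation steps in stops) are

$$\mathrm{EV}_{100} = \log_2\!\frac{N^2}{t}, \qquad \mathrm{EV}_S = \mathrm{EV}_{100} + \log_2\!\frac{S}{100},$$

where N is the aperture f-number, t is the exposure time in seconds, and S is the ISO sensitivity. Each stop of negative compensation halves the captured light, which is why a sufficiently negative setting leaves only the light sources detectable.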
In the present embodiment, in order to generate the light source map, the electronic device 10 compares the luminance of a certain region in the light detection image with a threshold value, and if the luminance of the region is greater than the threshold value, the region is regarded as the light source region.
More specifically, the electronic device 10 compares the brightness of each region in the light detection image with the threshold. In other words, the electronic device 10 compares the brightness of each pixel in the light detection image with the threshold value to detect the light source region in the light detection image. The threshold value may be stored in the memory 44, set in a program executed by the main processor 40, or set in the image signal processor 42.
For example, in the case where the luminance of each pixel in the light detection image is expressed in 256 gradations, if the luminance value of a pixel is greater than 128, the electronic device 10 may determine that the pixel is in the light source region.
Incidentally, the threshold value is not necessarily set to a specific value, but may also be set to a percentage. For example, in the case where the maximum luminance is 100% and the minimum luminance is 0%, the threshold value may be set to 50% of the luminance.
Thereafter, the electronic device 10 detects the center of each region whose luminance is greater than the threshold value and takes that center as the center of a light source, thereby generating the light source map.
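A minimal sketch of step S18 under the 256-gradation example above, using thresholding and connected-component analysis (one straightforward way to find region centers; the patent does not specify the method):

```python
import cv2

def generate_light_source_map(light_detection_image, threshold=128):
    """Return a binary light-source mask and the light source centers."""
    gray = cv2.cvtColor(light_detection_image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; remaining centroids are light source centers.
    return mask, [tuple(c) for c in centroids[1:]]
```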
Further, in the present embodiment, although the electronic apparatus 10 detects the light source area in the light detection image after the depth map is calculated, the electronic apparatus 10 may detect the light source area in the light detection image before the depth map is calculated. In this case, step S18 is performed before step S16.
For example, the light source region may be detected from the light detection image by the main processor 40 and/or the image signal processor 42, or within the camera assembly 30 that captures the light detection image.
Next, as shown in fig. 4, the electronic device 10 performs image generation processing to generate the captured image (step S20). In the present embodiment, the image generation processing is performed based at least on the intermediate image generated in step S14, the light source map generated in step S18, and the depth map calculated in step S16. Of course, other images and/or other information may also be used in the image generation processing of step S20.
In the present embodiment, the image generation processing includes at least light source region adjustment processing that appropriately renders a light source region onto a captured image. In the light source region adjustment processing, the position of the light source region is determined from the light source map, and the color of the light source region is determined from the intermediate image.
Further, the image generation processing includes at least a bokeh rendering process based on the depth map. That is, after the intermediate image is generated from the plurality of first initial images, the bokeh rendering process is applied to the intermediate image to defocus its background according to the depth map. In other words, the background of the intermediate image is defocused and blurred by the bokeh rendering process.
In general, the colors of the high dynamic range intermediate image are more natural and accurate than those of each first initial image and the second initial image. On the other hand, if the bokeh rendering process were applied to the high dynamic range intermediate image alone, the bokeh effect would also be introduced into the light source region, making the light source region excessively blurred. Therefore, in the present embodiment, the position of the light source region is adjusted according to the light source map generated in step S18, and the color of the light source region is adjusted according to the color of the intermediate image.
When the image generation processing in step S20 is completed, the captured image that the user wants to capture has been generated. Of course, the image generation processing may include various processes other than the bokeh rendering process and the light source region adjustment process applied to the intermediate image, for example a demosaicing process, a noise reduction process, an auto exposure process, an auto focus process, an auto white balance process, a high dynamic range process, and so on.
For example, the image generation processing may be performed by the main processor 40 and/or the image signal processor 42. Alternatively, a bokeh rendering circuit for performing the bokeh rendering process may be placed in the electronic device 10; similarly, a light source region adjustment circuit for performing the light source region adjustment process may be placed in the electronic device 10.
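A sketch of the core of step S20 — bokeh rendering followed by the light source region adjustment — under two simplifying assumptions of ours, not the patent's: the bokeh is a single Gaussian blur weighted by normalized depth, and the light source map is available as a binary mask:

```python
import cv2
import numpy as np

def generate_captured_image(intermediate, depth_map, light_mask, ksize=21):
    """Blur the background according to depth, then restore light sources."""
    blurred = cv2.GaussianBlur(intermediate, (ksize, ksize), 0)
    weight = (depth_map / max(depth_map.max(), 1e-6))[..., None]  # 0=near, 1=far
    captured = (weight * blurred + (1.0 - weight) * intermediate).astype(np.uint8)
    # Light source region adjustment: position taken from the light source
    # map (mask), color taken from the HDR intermediate image, so the
    # light sources stay crisp instead of being over-blurred.
    captured[light_mask > 0] = intermediate[light_mask > 0]
    return captured
```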
Next, as shown in fig. 4, the electronic apparatus 10 outputs the captured image (step S22). For example, the electronic device 10 may display the captured image on the display 20. Further, the electronic device 10 may store the captured image in the memory 44.
After the electronic device 10 outputs the captured image in step S22, the image capturing process of the present embodiment is completed.
As described above, according to the electronic device 10 of the present embodiment, when the electronic device 10 generates a captured image, the position of the light source region in the captured image is determined from the light source map, and the color of the light source region in the captured image is determined from the high dynamic range intermediate image. Therefore, the light source region in the captured image looks more beautiful and natural, and the user can be satisfied with the quality of images captured by the electronic device 10 according to the present embodiment.
Furthermore, since one of the plurality of first initial images can be reused as the light detection image to detect the light source region, the camera assembly 30 needs neither an additional image capture nor additional mechanical elements. Therefore, the user's satisfaction with the captured image can be improved without any additional cost.
Although the above-described electronic device 10 varies the exposure value across the plurality of first initial images in step S10 in order to generate the high dynamic range intermediate image in step S14, the electronic device 10 may instead vary imaging settings other than the exposure value. For example, the electronic device 10 may vary the ISO sensitivity, the exposure time, and so on, across the plurality of first initial images captured in step S10.
In any case, the image among the first initial images that is most suitable for detecting the light source region and generating the light source map may be used as the light detection image in step S18. In other words, since the electronic device 10 captures a plurality of first initial images in order to generate the high dynamic range intermediate image anyway, the image most suitable for detecting the light source region can simply be selected from among them as the light detection image in step S18.
[ second embodiment ]
Although the electronic device 10 has two main cameras on the back surface thereof in the first embodiment of the present application, the electronic device 10 has one main camera on the back surface thereof in the second embodiment of the present application. Hereinafter, differences from the first embodiment will be explained.
Fig. 5 shows a back view of an electronic device 10 according to a second embodiment of the present application. Fig. 5 corresponds to fig. 1 in the first embodiment. The front view of the electronic device 10 according to the second embodiment is substantially the same as fig. 2 in the first embodiment.
As shown in fig. 5, the camera assembly 30 of the electronic device 10 according to the present embodiment includes one main camera 38 and no other main camera. That is, the camera assembly 30 has a single main camera 38 on the back of the electronic device 10 to capture images at the back of the electronic device 10.
As shown in fig. 2, the electronic device 10 of the second embodiment has the sub-camera 36, as in the first embodiment. That is, the camera assembly 30 of the electronic device 10 according to the second embodiment also includes the sub-camera 36 on the front surface of the electronic device 10 to capture images at the front.
Fig. 6 shows a schematic flow chart of an image capturing process performed by an electronic device according to a second embodiment of the present application. In the present embodiment, the image capturing process is performed by the main processor 40 in cooperation with the image signal processor 42. Accordingly, the main processor 40 and the image signal processor 42 constitute an image processor in this embodiment.
As shown in fig. 6, the electronic device 10 captures a plurality of first initial images within a first dynamic range by the main camera 38 of the camera assembly 30 (step S30), wherein the first dynamic range is lower than the second dynamic range. Step S30 in the second embodiment is substantially the same as step S10 in the first embodiment, except that a plurality of first initial images are captured by a single main camera 38.
On the other hand, since the camera assembly 30 of the electronic apparatus 10 of the second embodiment does not have the second main camera 34, the second initial image is not captured. In other words, step S12 of capturing the second initial image in the first embodiment is omitted in the second embodiment.
Next, as shown in fig. 6, the electronic apparatus 10 performs a high dynamic range generating process to generate a high dynamic range intermediate image from the plurality of low dynamic range first initial images (step S14). Step S14 in the second embodiment is substantially the same as the corresponding step in the first embodiment.
Next, as shown in fig. 6, the electronic device 10 detects the light source region from the light detection image and generates a light source map indicating the position of the light source region in the intermediate image (step S18). Step S18 in the second embodiment is substantially the same as the corresponding step in the first embodiment.
Next, as shown in fig. 6, the electronic apparatus 10 performs an image generation process to generate a captured image (step S32). In the present embodiment, the image generation processing is performed in accordance with at least the intermediate image generated in step S14 and the light source map generated in step S18. Of course, in the image generation process of step S32, other images and/or any information may be used to generate the captured image.
However, since the camera assembly 30 includes the single main camera 38, the depth map is not used in the image generation process according to the present embodiment. That is, step S16 of generating a depth map in the first embodiment is omitted in the second embodiment.
Therefore, the image generation processing according to the present embodiment includes a segmentation process instead of the bokeh rendering process. In the segmentation process, a bokeh effect is introduced into the captured image without the depth map calculated in step S16 of the first embodiment. For example, by analyzing and recognizing a human face in the intermediate image, a human silhouette is segmented, and a bokeh effect is introduced into the background of the human silhouette in the intermediate image.
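A corresponding sketch for the second embodiment, assuming the human silhouette is available as a binary mask from any off-the-shelf portrait segmentation model (the patent does not name one):

```python
import cv2

def segmentation_bokeh(intermediate, person_mask, ksize=21):
    """Blur everything, then paste the sharp human silhouette back."""
    blurred = cv2.GaussianBlur(intermediate, (ksize, ksize), 0)
    output = blurred.copy()
    output[person_mask > 0] = intermediate[person_mask > 0]  # keep subject sharp
    return output
```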
The image generation processing according to the present embodiment includes light source region adjustment processing, as in the first embodiment. That is, in the present embodiment, the position of the light source region is determined from the light source map, and the color of the light source region is determined from the intermediate image.
Next, as shown in fig. 6, the electronic device 10 outputs the captured image (step S22). Step S22 in the second embodiment is substantially the same as the corresponding step in the first embodiment.
As described above, according to the electronic device 10 of the present embodiment, when the electronic device 10 generates a captured image, the position of the light source region in the captured image is determined from the light source map, and the color of the light source region in the captured image is determined from the high dynamic range intermediate image. Therefore, the light source region in the captured image can look more beautiful and natural, and the user can be satisfied with the quality of images captured by the electronic device 10 according to the present embodiment.
Further, even though the electronic device 10 includes only the single main camera 38, the electronic device 10 can execute image generation processing including the light source region adjustment process. Thus, the present embodiment can be implemented with the single main camera 38 of the camera assembly 30 of the electronic device 10.
Similarly, the present embodiment can be implemented with the sub-camera 36 on the front of the electronic device 10. In other words, if the sub-camera 36 captures the plurality of first initial images, the image capturing process in fig. 6 can be performed on those images, so the quality of images captured by the sub-camera 36 can also be improved.
In the description of the embodiments of the present application, it should be understood that terms such as "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," and "counterclockwise" are to be understood as referring to the orientation or position as shown in the drawing figures being described or discussed. These relative terms are used only to simplify the description of the present application and do not indicate or imply that the referenced devices or elements must have a particular orientation or be constructed or function in a particular orientation. Accordingly, these terms should not be construed as limiting the present application.
Furthermore, terms such as "first" and "second" are used herein for descriptive purposes and are not intended to indicate or imply relative importance or importance, or to imply a number of the technical features indicated. Thus, a feature defined by "first" and "second" may include one or more of that feature. In the description of the present application, "plurality" means two or more, unless otherwise specified.
In the description of the embodiments of the present application, unless otherwise specified or limited, the terms "mounted," "connected," "coupled," and the like are used broadly and can be, for example, fixed connections, detachable connections, or integral connections; may also be mechanical or electrical connections; may be direct connections or indirect connections through intervening structures; or may be internal communications between two elements, as those skilled in the art can understand according to the specific situation.
In embodiments of the present application, unless stated or limited otherwise, a structure in which a first feature is "on" or "under" a second feature may include an embodiment in which the first feature is in direct contact with the second feature, and may also include an embodiment in which the first feature and the second feature are not in direct contact but are in contact via an additional feature formed between them. Furthermore, a first feature being "on," "above," or "on top of" a second feature may include the first feature being directly or obliquely "on," "above," or "on top of" the second feature, or may merely indicate that the first feature is at a higher elevation than the second feature; and a first feature being "under," "below," or "at the bottom of" a second feature may include the first feature being directly or obliquely "under," "below," or "at the bottom of" the second feature, or may merely indicate that the first feature is at a lower elevation than the second feature.
Various embodiments and examples are provided in the above description to achieve different configurations of the present application. Certain elements and arrangements are described above to simplify the present application. However, these elements and arrangements are merely examples and are not intended to limit the present application. Further, reference numerals and/or reference letters may be repeated in different examples of the application. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations. Further, examples of different processes and materials are provided in the present application. However, one skilled in the art will appreciate that other processes and/or materials may also be applied.
Reference throughout this specification to "one embodiment," "some embodiments," "an example embodiment," "an example," "a particular example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Thus, the appearances of the foregoing phrases in this specification are not necessarily all referring to the same embodiment or example of the application. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples.
Any processes or methods described in flow charts or otherwise described herein may be understood as including one or more modules, segments, or portions of code which embody executable instructions for implementing specific logical functions or steps in the process, and the scope of the preferred embodiments of the present application includes other embodiments in which functions may be implemented in an order different from that shown or discussed, including substantially the same order, or in reverse order, as would be understood by one of ordinary skill in the art.
The logic and/or steps described otherwise herein or shown in flowcharts, such as a particular sequence of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device (e.g., a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them). For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples of the computer-readable medium include, but are not limited to: an electronic connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Further, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other suitable medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that various portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented by software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, also in another embodiment, the steps or methods may be implemented by one or a combination of the following techniques known in the art: a discrete logic circuit having logic gate circuits for realizing a logic function of a data signal, an application specific integrated circuit having an appropriate combination of appropriate combinational logic gate circuits, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), and the like.
It will be understood by those skilled in the art that all or part of the steps in the above-described exemplary methods of the present application may be implemented by a program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed on a computer, performs one or a combination of the steps of the method embodiments of the application.
In addition, each functional unit in the embodiments of the present application may be integrated into one processing module, may exist physically on its own, or two or more units may be integrated into one processing module. The integrated module may be realized in the form of hardware or in the form of a software functional module. When the integrated module is implemented in the form of a software functional module and sold or used as a separate product, it may be stored in a computer-readable storage medium.
The storage medium may be a read-only memory, a magnetic disk, a CD, etc.
Although embodiments of the present application have been shown and described, it will be understood by those skilled in the art that these embodiments are illustrative and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations may be made herein without departing from the scope of the application.
Claims (20)
1. A method of generating a captured image in an electronic device, the electronic device including a camera assembly, the method comprising:
capturing, by the camera assembly, a plurality of first initial images within a first dynamic range, wherein the plurality of first initial images are captured at different imaging settings;
generating an intermediate image of a second dynamic range from the plurality of first initial images, wherein the second dynamic range is higher than the first dynamic range;
detecting a light source region from a light detection image to generate a light source map, wherein the light detection image is at least one image of the plurality of first initial images and is adapted to detect the light source region;
generating the captured image from the intermediate image and the light source map, wherein the position of the light source region of the captured image is determined from the light source map, and the color of the light source region of the captured image is determined from the intermediate image.
2. The method of claim 1, wherein capturing the plurality of first initial images comprises: capturing the plurality of first initial images at different exposure values.
3. The method of claim 2, wherein the light detection image is the one of the plurality of first initial images captured at the lowest exposure value.
4. The method according to claim 3, wherein the lowest exposure value is low enough that light from regions other than the light source region is not detected in the first initial image.
5. The method of claim 4, wherein the lowest exposure value is a negative value.
6. The method of any of claims 1-5, further comprising:
capturing a second initial image by the camera assembly; and
a depth map is computed from the plurality of first initial images and the second initial image.
7. The method of claim 6, wherein the generating the captured image further comprises: performing a bokeh rendering process to introduce a bokeh effect into the captured image according to the depth map.
8. The method of claim 7, wherein the plurality of first initial images are captured by a first main camera of the camera assembly and the second initial images are captured by a second main camera of the camera assembly.
9. The method according to any one of claims 1-5, wherein the generating the captured image further comprises: performing a segmentation process to introduce a bokeh effect into the captured image from the intermediate image.
10. The method of claim 9, wherein the plurality of first initial images are captured by a first primary camera of the camera assembly on a back side of the electronic device or by a secondary camera of the camera assembly on a front side of the electronic device.
11. An electronic device, comprising:
a camera assembly capable of capturing images in a first dynamic range; and
an image processor configured to:
capturing, by the camera assembly, a plurality of first initial images within the first dynamic range, wherein the plurality of first initial images are captured at different imaging settings;
generating an intermediate image of a second dynamic range from the plurality of first initial images, wherein the second dynamic range is higher than the first dynamic range;
detecting a light source region from a light detection image to generate a light source map, wherein the light detection image is at least one image of the plurality of first initial images and is adapted to detect the light source region;
generating the captured image from the intermediate image and the light source map, wherein the position of the light source region of the captured image is determined from the light source map, and the color of the light source region of the captured image is determined from the intermediate image.
12. The electronic device of claim 11, wherein when capturing the plurality of first initial images, the plurality of first initial images are captured at different exposure values.
13. The electronic device of claim 12, wherein the light detection image is one of the plurality of first initial images captured at a lowest exposure value.
14. The electronic device of claim 13, wherein the lowest exposure value is low enough that light from regions other than the light source region is not detected in the first initial image.
15. The electronic device of claim 14, wherein the lowest exposure value is a negative value.
16. The electronic device of any one of claims 11-15, the image processor further configured to:
capturing a second initial image by the camera assembly; and
a depth map is computed from the plurality of first initial images and the second initial image.
17. The electronic device of claim 16, wherein, when generating the captured image, the image processor further performs a bokeh rendering process to introduce a bokeh effect into the captured image according to the depth map.
18. The electronic device of claim 17, wherein the plurality of first initial images are captured by a first primary camera of the camera assembly and the second initial images are captured by a second primary camera of the camera assembly.
19. The electronic device according to any of claims 11-15, wherein, when generating the captured image, a bokeh effect is also introduced into the captured image from the intermediate image by a segmentation process.
20. The electronic device of claim 19, wherein the plurality of first initial images are captured by a first primary camera of the camera assembly on a back side of the electronic device or by a secondary camera of the camera assembly on a front side of the electronic device.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/074828 WO2021159295A1 (en) | 2020-02-12 | 2020-02-12 | Method of generating captured image and electrical device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115066880A true CN115066880A (en) | 2022-09-16 |
Family
ID=77291986
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202080090782.6A Pending CN115066880A (en) | 2020-02-12 | 2020-02-12 | Method for generating captured image and electronic device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115066880A (en) |
WO (1) | WO2021159295A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140192227A1 (en) * | 2013-01-07 | 2014-07-10 | GM Global Technology Operations LLC | Glaring reduction for dynamic rearview mirror |
US20150146014A1 (en) * | 2013-11-27 | 2015-05-28 | Aptina Imaging Corporation | Imaging systems and methods for location-specific image flare mitigation |
CN110177221A (en) * | 2019-06-25 | 2019-08-27 | 维沃移动通信有限公司 | The image pickup method and device of high dynamic range images |
US10554890B1 (en) * | 2019-02-18 | 2020-02-04 | Samsung Electronics Co., Ltd. | Apparatus and method for generating low-light images with improved bokeh using mobile electronic device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6109456B1 (en) * | 2015-06-30 | 2017-04-05 | オリンパス株式会社 | Image processing apparatus and imaging system |
CN106534677B (en) * | 2016-10-27 | 2019-12-17 | 成都西纬科技有限公司 | Image overexposure optimization method and device |
CN108088658A (en) * | 2016-11-23 | 2018-05-29 | 深圳华萤光电技术有限公司 | A kind of dazzle measuring method and its measuring system |
EP3462410A1 (en) * | 2017-09-29 | 2019-04-03 | Thomson Licensing | A user interface for manipulating light-field images |
2020
- 2020-02-12: CN application CN202080090782.6A filed; published as CN115066880A (status: pending)
- 2020-02-12: PCT application PCT/CN2020/074828 filed; published as WO2021159295A1 (application filing)
Also Published As
Publication number | Publication date |
---|---|
WO2021159295A1 (en) | 2021-08-19 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |