WO2022198525A1 - Method of improving stability of bokeh processing and electronic device - Google Patents

Method of improving stability of bokeh processing and electronic device Download PDF

Info

Publication number
WO2022198525A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
subject
area
successful
depth
Prior art date
Application number
PCT/CN2021/082832
Other languages
French (fr)
Inventor
Takuya Oi
Jun Luo
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd. filed Critical Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to PCT/CN2021/082832 priority Critical patent/WO2022198525A1/en
Priority to CN202180094935.9A priority patent/CN116917933A/en
Publication of WO2022198525A1 publication Critical patent/WO2022198525A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Definitions

  • the present disclosure relates to a method of improving a stability of bokeh processing, and an electronic device performing the method.
  • a subject such as a person in the foreground should be clearly displayed, while the background such as buildings should be blurred.
  • a bokeh intensity is set to be 0 in an area of the subject, and the bokeh intensity of the other areas increases as the distance from the subject increases.
  • the area of the subject is determined based on an autofocus area such as an AF rectangle obtained by an autofocus operation of the camera module.
  • the autofocus area R tends to drift or jump, as shown in FIG. 7.
  • the images with bokeh at times t1 and t3 are not appropriate because the autofocus area R is out of the area of a person P as the subject.
  • the autofocus area tends to be out of the area of the subject when the subject moves or light reflected from the subject changes, for example.
  • the present disclosure aims to solve at least one of the technical problems mentioned above. Accordingly, the present disclosure needs to provide a method of improving a stability of bokeh processing and an electronic device implementing such a method.
  • a method of improving a stability of bokeh processing includes acquiring an image, an autofocus area in the image and a depth map, performing subject segmentation processing on the image to acquire a subject area indicating a target subject, extracting, from the depth map, depth values in the subject area when the subject segmentation processing is successful, or depth values in the autofocus area when the subject segmentation processing is not successful and an autofocus operation is successful, determining a reference depth value based on the extracted depth values, and performing bokeh processing on the image based on the reference depth value.
  • the depth map corresponds to the image.
  • the subject segmentation processing separates an area of the target subject in the image from the other areas.
  • the acquiring an image, an autofocus area in the image and a depth map which corresponds to the image may be performed every time a new video frame is captured by a camera module provided with a mobile device.
  • the performing bokeh processing on the image based on the reference depth value may use a reference depth value of the previous video frame when neither the subject segmentation processing nor the autofocus operation is successful.
  • the performing bokeh processing on the image based on the reference depth value may include using the reference depth value as a depth value of the target subject.
  • the image may be a master image captured by a master camera module of a stereo camera module.
  • the autofocus area may be an AF rectangle.
  • the depth map may be generated based on a stereo image captured by a stereo camera module, the depth map may be a ToF depth map based on an image captured by a range sensor module, or the depth map may be estimated based on the image by Artificial Intelligence (AI).
  • the target subject may be a person, an animal or an object.
  • the subject segmentation processing may be determined to be successful when a shape of the subject area indicates the target subject.
  • the subject segmentation processing may be determined to be successful when at least a part of the subject area overlaps with the autofocus area.
  • the determining a reference depth value based on the extracted depth values may include calculating a representative value of the extracted depth values.
  • an electronic device includes a processor and a memory for storing instructions, wherein the instructions, when executed by the processor, cause the processor to perform the method according to the present disclosure.
  • a non-transitory computer-readable storage medium on which a computer program is stored, wherein the computer program is executed by a computer to implement the method according to the present disclosure, is provided.
  • FIG. 1 is a functional block diagram illustrating an example of a configuration of an electronic device according to an embodiment of the present disclosure.
  • FIG. 2 is a flowchart for generating a camera image with bokeh by the electronic device shown in FIG. 1.
  • FIG. 3A shows an example of a depth map (upper) and an example of a MAT image (lower).
  • FIG. 3B shows an example of a depth map (upper) and an example of a MAT image (lower).
  • FIG. 3C shows an example of a depth map (upper) and an example of a MAT image (lower).
  • FIG. 4 shows an example of time-series images with bokeh according to the present disclosure.
  • FIG. 5 shows an example of images with bokeh which contain a cow in the field as a subject.
  • FIG. 6 shows an example of images with bokeh which contain a car in front of a building as a subject.
  • FIG. 7 shows time-series images with bokeh according to the prior art.
  • FIG. 1 is a circuit diagram illustrating an example of a configuration of the electronic device 100 according to an embodiment of the present disclosure.
  • the electronic device 100 is a mobile device such as a smartphone in this embodiment, but may be other types of electronic devices equipped with one or more camera modules.
  • the electronic device 100 includes a stereo camera module 10, a range sensor module 20, and an image signal processor 30 that controls the stereo camera module 10 and the range sensor module 20.
  • the image signal processor 30 may perform image processing on an image acquired from the stereo camera module 10.
  • the stereo camera module 10 includes a master camera module 11 and a slave camera module 12 to be used for binocular stereo viewing, as shown in FIG. 1.
  • the master camera module 11 includes a first lens 11a that is capable of focusing on a subject, a first image sensor 11b that detects an image inputted via the first lens 11a, and a first image sensor driver 11c that drives the first image sensor 11b, as shown in FIG. 1.
  • the slave camera module 12 includes a second lens 12a that is capable of focusing on a subject, a second image sensor 12b that detects an image inputted via the second lens 12a, and a second image sensor driver 12c that drives the second image sensor 12b, as shown in FIG. 1.
  • the master camera module 11 captures a master camera image.
  • the slave camera module 12 captures a slave camera image.
  • the master camera image and the slave camera image may be a color image such as an RGB image, or a monochrome image.
  • a depth map can be generated based on the master camera image and the slave camera image by means of stereo matching. Specifically, an amount of parallax is calculated for each corresponding pixel of the stereo image (i.e., the master camera image and the slave camera image). The depth value is proportional to the amount of parallax.
  • the camera module 10 can shoot a video at a given frame rate.
  • the range sensor module 20 captures a depth map.
  • the range sensor module 20 is a ToF camera and captures a time of flight depth map (a ToF depth map) by emitting pulsed light toward a subject and detecting light reflected from the subject.
  • the ToF depth map indicates an actual distance between the electronic device 100 and the subject.
  • the range sensor module can be omitted.
  • the image signal processor (ISP) 30 controls the master camera module 11, the slave camera module 12 and the range sensor module 20.
  • the ISP 30 controls the camera module 10 to acquire an image and performs bokeh processing on the image to generate an image with bokeh.
  • the image with bokeh can be generated based on the master camera image (or the slave camera image) and the depth map (or the ToF depth map) .
  • the ISP 30 also acquires an autofocus area (e.g., an AF rectangle) in the captured image from the stereo camera module 10.
  • the autofocus area indicates an in-focus area.
  • the autofocus area is obtained by an autofocus operation of the camera module 10.
  • the electronic device 100 includes a global navigation satellite system (GNSS) module 40, a wireless communication module 41, a CODEC 42, a speaker 43, a microphone 44, a display module 45, an input module 46, an inertial measurement unit (IMU) 47, a main processor 48, and a memory 49.
  • the GNSS module 40 measures a current position of the electronic device 100.
  • the wireless communication module 41 performs wireless communications with the Internet.
  • the CODEC 42 bi-directionally performs encoding and decoding, using a predetermined encoding/decoding method.
  • the speaker 43 outputs a sound in accordance with sound data decoded by the CODEC 42.
  • the microphone 44 outputs sound data to the CODEC 42 based on inputted sound.
  • the display module 45 displays an image captured by the camera module 10.
  • the display module 45 may display the image in real-time.
  • the input module 46 inputs information via a user’s operation. For example, the input module 46 inputs an instruction to capture and store an image displayed on the display module 45.
  • the IMU 47 detects the angular velocity and the acceleration of the electronic device 100. The posture of the electronic device 100 can be determined from the measurement results of the IMU 47.
  • the main processor 48 controls the global navigation satellite system (GNSS) module 40, the wireless communication module 41, the CODEC 42, the speaker 43, the microphone 44, the display module 45, the input module 46, and the IMU 47.
  • the memory 49 stores data of the image, data of depth map, and a program which runs on the image signal processor 30 and/or the main processor 48.
  • the image signal processor 30 determines whether a new video frame is captured or not. If it is determined that a new video frame is captured, the process proceeds to the step S2.
  • the ISP 30 acquires an image, an autofocus area in the image, and a depth map which corresponds to the image.
  • the image, the autofocus area and the depth map are acquired from the camera module 10.
  • the image is a master camera image.
  • the image may be a slave camera image.
  • the autofocus area is typically a rectangular area, i.e., an AF rectangle.
  • the depth map may be generated from the master camera image and the slave camera image, may be the ToF depth map acquired from the range sensor module 20, or may be estimated from the image by Artificial Intelligence (AI).
  • the AI may estimate the depth map based on either the master camera image or the slave camera image.
  • Examples of the depth map are shown in the upper diagrams of FIGs. 3A to 3C.
  • the depth map is a greyscale image.
  • the brightness of an area in the depth map and the distance from the electronic device 100 to the area are inversely proportional.
  • the sign P indicates a person as the subject and the signs B indicate buildings in the background.
  • the ISP 30 performs subject segmentation processing on the image to acquire a subject area indicating a target subject.
  • the subject segmentation processing separates an area of the target subject from the other areas in the image.
  • a MAT image is generated by the segmentation processing. Examples of the MAT image are shown in the lower diagrams of FIGs. 3A to 3C.
  • the MAT image is a binary image.
  • the sign R1 indicates the subject area indicating a person as the subject.
  • the sign R2 indicates the autofocus area for reference.
  • the ISP 30 determines whether the segmentation processing is successful or not.
  • the MAT images in FIG. 3A and FIG. 3B show cases where the segmentation processing is successful.
  • FIG. 3C shows the case where the segmentation processing is not successful.
  • the ISP 30 determines that the subject segmentation processing is successful when a shape of the subject area R1 indicates a target subject such as a person, as shown in FIGs. 3A and 3B.
  • the ISP 30 may determine that the subject segmentation processing is successful when at least a part of the subject area R1 overlaps with the autofocus area R2 as shown in FIG. 3B.
  • the ISP 30 may determine that the subject segmentation processing is successful when the subject area R1 has not changed in position and/or size beyond a predetermined reference compared to a subject area in the previous frame.
  • the ISP 30 may execute the step S4 based on various other methods, such as training processes involving machine learning (ML) or artificial intelligence (AI).
  • If it is determined that the segmentation processing is successful, the process proceeds to the step S5; otherwise, the process proceeds to the step S8.
  • the ISP 30 extracts depth values in the subject area from the depth map. Specifically, depth values in an area corresponding to the subject area R1 are extracted from the depth map.
  • the ISP 30 determines a reference depth value based on the extracted depth values. Specifically, the ISP 30 calculates a representative value of the extracted depth values, such as a mean value, a median value, or a quartile value (e.g., a third quartile or a first quartile). After calculating the reference depth value, the ISP 30 stores it in the memory 49.
  • the ISP 30 performs bokeh processing on the image based on the reference depth value.
  • the ISP 30 uses the reference depth value as a depth value of the target subject. For example, a bokeh intensity is set to be 0 in the subject area, and the bokeh intensity of the other areas is increased as the distance from the subject increases.
  • After the step S7, the process returns to the step S1. In this way, the step S1 is performed every time a new video frame is captured by the camera module 10 of the mobile device 100.
  • when it is determined that the segmentation processing is not successful in the step S4, the steps S8 to S10 are executed as described below.
  • the ISP 30 determines whether the autofocus operation is successful or not.
  • the autofocus areas R2 in FIGs. 3A and 3C show cases where the autofocus operation is not successful.
  • the autofocus area R2 in FIG. 3B shows the case where the autofocus operation is successful.
  • the ISP 30 determines that the autofocus operation is successful when the autofocus area R2 has not changed in position beyond a predetermined reference compared to an autofocus area in the previous frame.
  • the ISP 30 may execute the step S8 based on various other methods, such as training processes involving machine learning (ML) or artificial intelligence (AI).
  • If it is determined that the autofocus operation is successful, the process proceeds to the step S9; otherwise, the process proceeds to the step S10.
  • the ISP 30 extracts depth values in the autofocus area from the depth map. Specifically, depth values in an area corresponding to the autofocus area R2 are extracted from the depth map.
  • the process proceeds to the step S6, and the reference depth value is calculated based on the depth values extracted from the autofocus area.
  • the ISP 30 reads a reference depth value of the previous video frame from the memory 49. After the step S10, the process proceeds to the step S7, and the bokeh processing is performed based on the read reference depth value.
  • At least one of the steps described above may be performed by the main processor 48.
  • FIG. 4 shows an example of time-series images obtained by the method described above.
  • a stable video can be obtained as shown in FIG. 4. That is to say, the buildings B in the background are stably displayed with blur while the sharpness of the person P as the subject is maintained over video frames, even when the autofocus area R2 is out of the area of the person P at times t1 and t3.
  • FIG. 5 shows an example of images with bokeh which contain a cow in the field.
  • FIG. 6 shows an example of images with bokeh which contain a car in front of a building.
  • the images I1 and I2 in FIG. 5 show examples of failure due to an unsuccessful autofocus operation (i.e., the autofocus area R2 is out of the cow C).
  • the cow C is not in focus; instead, the sky and the ground are in focus in the images I1 and I2, respectively.
  • an image I3 is generated by using a MAT image J1 in which the subject area R1 indicating the cow C is separated, and thus the cow C is focused appropriately.
  • the images I4 and I5 in FIG. 6 show examples of failure due to an unsuccessful autofocus operation (i.e., the autofocus area R2 is out of the vehicle V).
  • the vehicle V is not in focus; instead, the building and the person are in focus in the images I4 and I5, respectively.
  • an image I6 is generated by using a MAT image J2 in which the subject area R1 indicating the vehicle V is separated, and thus the vehicle V is focused appropriately.
  • appropriate bokeh processing can be performed on an image captured by the camera module 10 even if the autofocus operation is not successful, thereby improving the stability of bokeh processing in scenes such as portrait shooting.
  • a camera application installed on a smartphone can thus provide high-quality portrait video with stable bokeh in real time.
  • the terms “first” and “second” are used herein for purposes of description and are not intended to indicate or imply relative importance or significance or to imply the number of indicated technical features.
  • a feature defined as “first” or “second” may comprise one or more of this feature.
  • “a plurality of” means “two or more”, unless otherwise specified.
  • the terms “mounted”, “connected”, “coupled” and the like are used broadly, and may be, for example, fixed connections, detachable connections, or integral connections; may also be mechanical or electrical connections; may also be direct connections or indirect connections via intervening structures; and may also be inner communications of two elements, as can be understood by those skilled in the art according to specific situations.
  • a structure in which a first feature is “on” or “below” a second feature may include an embodiment in which the first feature is in direct contact with the second feature, and may also include an embodiment in which the first feature and the second feature are not in direct contact with each other, but are in contact via an additional feature formed therebetween.
  • a first feature “on”, “above” or “on top of” a second feature may include an embodiment in which the first feature is orthogonally or obliquely “on”, “above” or “on top of” the second feature, or just means that the first feature is at a height higher than that of the second feature; while a first feature “below”, “under” or “on bottom of” a second feature may include an embodiment in which the first feature is orthogonally or obliquely “below”, “under” or “on bottom of” the second feature, or just means that the first feature is at a height lower than that of the second feature.
  • Any process or method described in a flow chart or described herein in other ways may be understood to include one or more modules, segments or portions of codes of executable instructions for achieving specific logical functions or steps in the process, and the scope of a preferred embodiment of the present disclosure includes other implementations, in which it should be understood by those skilled in the art that functions may be implemented in a sequence other than the sequences shown or discussed, including in a substantially identical sequence or in an opposite sequence.
  • the logic and/or step described in other manners herein or shown in the flow chart may be specifically achieved in any computer readable medium to be used by the instructions execution system, device or equipment (such as a system based on computers, a system comprising processors or other systems capable of obtaining instructions from the instructions execution system, device and equipment executing the instructions) , or to be used in combination with the instructions execution system, device and equipment.
  • the computer readable medium may be any device adaptive for including, storing, communicating, propagating or transferring programs to be used by or in combination with the instruction execution system, device or equipment.
  • the computer readable medium comprises, but is not limited to: an electronic connection (an electronic device) with one or more wires, a portable computer enclosure (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber device, and a portable compact disk read-only memory (CD-ROM).
  • the computer readable medium may even be a paper or other appropriate medium capable of printing programs thereon, this is because, for example, the paper or other appropriate medium may be optically scanned and then edited, decrypted or processed with other appropriate methods when necessary to obtain the programs in an electric manner, and then the programs may be stored in the computer memories.
  • each part of the present disclosure may be realized by the hardware, software, firmware or their combination.
  • a plurality of steps or methods may be realized by the software or firmware stored in the memory and executed by the appropriate instructions execution system.
  • the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function of a data signal, an application-specific integrated circuit having an appropriate combination logic gate circuit, a programmable gate array (PGA) , a field programmable gate array (FPGA) , etc.
  • each function cell of the embodiments of the present disclosure may be integrated in a processing module, or these cells may be separate physical existence, or two or more cells are integrated in a processing module.
  • the integrated module may be realized in a form of hardware or in a form of software function modules. When the integrated module is realized in a form of software function module and is sold or used as a standalone product, the integrated module may be stored in a computer readable storage medium.
  • the storage medium mentioned above may be read-only memories, magnetic disks, CD, etc.
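The per-frame decision flow of the steps S1 to S10 described above can be sketched in code. The following is a minimal illustrative sketch, not the patented implementation; all function and variable names are assumptions introduced here, and the median is just one of the representative values the disclosure mentions.

```python
import numpy as np

def select_reference_depth(depth_map, subject_mask, af_rect,
                           segmentation_ok, autofocus_ok, prev_reference):
    """Pick the reference depth for one video frame (cf. steps S4 to S10).

    depth_map      -- 2-D array of depth values for the current frame
    subject_mask   -- boolean array, True inside the subject area R1
    af_rect        -- (x, y, w, h) autofocus rectangle R2
    prev_reference -- reference depth value stored for the previous frame
    """
    if segmentation_ok:
        # S5: extract depth values in the subject area
        values = depth_map[subject_mask]
    elif autofocus_ok:
        # S9: fall back to depth values in the autofocus area
        x, y, w, h = af_rect
        values = depth_map[y:y + h, x:x + w].ravel()
    else:
        # S10: reuse the reference depth value of the previous frame
        return prev_reference
    # S6: determine a representative value of the extracted depths
    return float(np.median(values))
```

The function would be called once per captured video frame (step S1), and its result fed to the bokeh processing of step S7.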

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

Disclosed is a method of improving a stability of bokeh processing. The method includes acquiring an image, an autofocus area in the image and a depth map, performing subject segmentation processing on the image to acquire a subject area indicating a target subject, extracting, from the depth map, depth values in the subject area when the subject segmentation processing is successful, or depth values in the autofocus area when the subject segmentation processing is not successful and an autofocus operation is successful, determining a reference depth value based on the extracted depth values, and performing bokeh processing on the image based on the reference depth value. The depth map corresponds to the image.

Description

METHOD OF IMPROVING STABILITY OF BOKEH PROCESSING AND ELECTRONIC DEVICE TECHNICAL FIELD
The present disclosure relates to a method of improving a stability of bokeh processing, and an electronic device performing the method.
BACKGROUND
In recent years, techniques of image processing in which an image with bokeh is generated artificially from an image captured by a camera module of a smart phone, are widely spread. In the image with bokeh, a subject such as a person in the foreground should be clearly displayed, while the background such as buildings should be blurred. For example, a bokeh intensity is set to be 0 in an area of the subject, and the bokeh intensity of the other areas increases as the distance from the subject increases.
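As an illustration of this intensity rule, a per-pixel bokeh-intensity map could be derived from a depth map and the subject's reference depth roughly as follows. This is a minimal sketch with assumed names and a simple linear intensity model; the disclosure does not prescribe a specific formula.

```python
import numpy as np

def bokeh_intensity(depth_map, subject_mask, reference_depth,
                    gain=0.1, max_intensity=1.0):
    """Blur strength per pixel: 0 on the subject, growing with the
    depth difference from the subject (illustrative linear model)."""
    intensity = np.clip(gain * np.abs(depth_map - reference_depth),
                        0.0, max_intensity)
    intensity[subject_mask] = 0.0   # the subject area stays sharp
    return intensity
```

The resulting map would then drive a spatially varying blur, e.g. by scaling a per-pixel blur radius; the choice of blur kernel is a separate design decision.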
The area of the subject is determined based on an autofocus area such as an AF rectangle obtained by an autofocus operation of the camera module. However, the autofocus area R tends to drift or jump, as shown in FIG. 7. The images with bokeh at times t1 and t3 are not appropriate because the autofocus area R is out of the area of a person P as the subject.
In practice, the autofocus area tends to be out of the area of the subject when the subject moves or light reflected from the subject changes, for example.
Although it is difficult to stabilize the autofocus operation, consumers have been eagerly expecting the development of a technique of stably generating images with bokeh.
SUMMARY
The present disclosure aims to solve at least one of the technical problems mentioned above. Accordingly, the present disclosure needs to provide a method of improving a stability of bokeh processing and an electronic device implementing such a method.
In accordance with the present disclosure, a method of improving a stability of bokeh processing is provided. The method includes acquiring an image, an autofocus area in the image and a depth map, performing subject segmentation processing on the image to acquire a subject area indicating a target subject, extracting, from the depth map, depth values in the subject area when the subject segmentation processing is successful, or depth values in the autofocus area when the subject segmentation processing is not successful and an autofocus operation is successful, determining a reference depth value based on the extracted depth values, and performing bokeh processing on the image based on the reference depth value. The depth map corresponds to the image. The subject segmentation processing separates an area of the target subject in the image from the other areas.
In some embodiments, the acquiring an image, an autofocus area in the image and a depth map which corresponds to the image may be performed every time a new video frame is captured by a camera module provided with a mobile device.
In some embodiments, the performing bokeh processing on the image based on the reference depth value may use a reference depth value of the previous video frame when neither the subject segmentation processing nor the autofocus operation is successful.
In some embodiments, the performing bokeh processing on the image based on the reference depth value may include using the reference depth value as a depth value of the target subject.
In some embodiments, the image may be a master image captured by a master camera module of a stereo camera module.
In some embodiments, the autofocus area may be an AF rectangle.
In some embodiments, the depth map may be generated based on a stereo image captured by a stereo camera module, the depth map may be a ToF depth map based on an image captured by a range sensor module, or the depth map may be estimated based on the image by Artificial Intelligence (AI).
In some embodiments, the target subject may be a person, an animal or an object.
In some embodiments, the subject segmentation processing may be determined to be successful when a shape of the subject area indicates the target subject.
In some embodiments, the subject segmentation processing may be determined to be successful when at least a part of the subject area overlaps with the autofocus area.
In some embodiments, the determining a reference depth value based on the extracted depth values may include calculating a representative value of the extracted depth values.
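For example, with NumPy and made-up depth samples (none of these numbers come from the disclosure), the candidate representative values can be compared:

```python
import numpy as np

# Depth values extracted from the subject area (illustrative numbers);
# a few background pixels may leak in near the mask boundary.
depths = np.array([2.0, 2.1, 2.2, 2.1, 2.0, 9.5, 9.8])

mean_value   = depths.mean()               # pulled upward by the outliers
median_value = np.median(depths)           # robust representative value
q1, q3 = np.percentile(depths, [25, 75])   # quartile-based alternatives
```

When segmentation bleed introduces outliers, the median tracks the subject better than the mean, which is one reason a median or quartile may be preferred as the representative value.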
In accordance with the present disclosure, an electronic device includes a processor and a memory for storing instructions, wherein the instructions, when executed by the processor, cause the processor to perform the method according to the present disclosure.
In accordance with the present disclosure, a non-transitory computer-readable storage medium, on which a computer program is stored, wherein the computer program is executed by a computer to implement the method according to the present disclosure, is provided.
BRIEF DESCRIPTION OF THE DRAWINGS
These and/or other aspects and advantages of embodiments of the present disclosure will become apparent and more readily appreciated from the following descriptions made with reference to the drawings below.
FIG. 1 is a functional block diagram illustrating an example of a configuration of an electronic device according to an embodiment of the present disclosure.
FIG. 2 is a flowchart for generating a camera image with bokeh by the electronic device shown in FIG. 1.
FIG. 3A shows an example of a depth map (upper) and an example of a MAT image (lower).
FIG. 3B shows an example of a depth map (upper) and an example of a MAT image (lower).
FIG. 3C shows an example of a depth map (upper) and an example of a MAT image (lower).
FIG. 4 shows an example of time-series images with bokeh according to the present disclosure.
FIG. 5 shows an example of images with bokeh which contain a cow in the field as a subject.
FIG. 6 shows an example of images with bokeh which contain a car in front of a building as a subject.
FIG. 7 shows time-series images with bokeh according to the prior art.
DETAILED DESCRIPTION
Embodiments of the present disclosure will be described in detail and examples of the embodiments will be illustrated in the accompanying drawings. The same or similar elements and elements having same or similar functions are denoted by like reference numerals throughout the descriptions. The embodiments described herein with reference to the drawings are explanatory and aim to illustrate the present disclosure, but shall not be construed to limit the present disclosure.
<Electronic device 100>
An electronic device 100 will be described with reference to FIG. 1. FIG. 1 is a functional block diagram illustrating an example of a configuration of the electronic device 100 according to an embodiment of the present disclosure.
The electronic device 100 is a mobile device such as a smartphone in this embodiment, but may be other types of electronic devices equipped with one or more camera modules.
As shown in FIG. 1, the electronic device 100 includes a stereo camera module 10, a range sensor module 20, and an image signal processor 30 that controls the stereo camera module 10 and the range sensor module 20. The image signal processor 30 may perform image processing on an image acquired from the stereo camera module 10.
The stereo camera module 10 includes a master camera module 11 and a slave camera module 12 to be used for binocular stereo viewing, as shown in FIG. 1.
The master camera module 11 includes a first lens 11a that is capable of focusing on a subject, a first image sensor 11b that detects an image inputted via the first lens 11a, and a first image sensor driver 11c that drives the first image sensor 11b, as shown in FIG. 1.
The slave camera module 12 includes a second lens 12a that is capable of focusing on a subject, a second image sensor 12b that detects an image inputted via the second lens 12a, and a second image sensor driver 12c that drives the second image sensor 12b, as shown in FIG. 1.
The master camera module 11 captures a master camera image. Similarly, the slave camera module 12 captures a slave camera image. The master camera image and the slave camera image may be a color image such as an RGB image, or a monochrome image.
A depth map can be generated from the master camera image and the slave camera image by means of a stereo matching technique. Specifically, an amount of parallax is calculated for each corresponding pixel of the stereo image (i.e., the master camera image and the slave camera image). The depth value is proportional to the amount of parallax.
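The geometry behind the stereo matching step above can be sketched as follows. This is a minimal illustration of the standard pinhole stereo model, not code from the disclosure; the function name and the focal-length and baseline values are illustrative assumptions. A larger parallax means a nearer point, which is why a depth-map value that grows with parallax encodes nearness, while the metric distance is inversely proportional to the parallax:

```python
def disparity_to_distance(disparity_px, focal_length_px, baseline_m):
    """Metric distance from a per-pixel parallax (disparity).

    Pinhole stereo model: distance = f * B / d, so the distance is
    inversely proportional to the amount of parallax.
    """
    if disparity_px <= 0:
        return float("inf")  # no parallax: point at infinity
    return focal_length_px * baseline_m / disparity_px
```

For example, with an assumed focal length of 1000 px and a 5 cm baseline, a 100 px parallax corresponds to a point 0.5 m away, and halving the parallax doubles the distance.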
The camera module 10 can shoot a video at a given frame rate.
The range sensor module 20 captures a depth map. For example, the range sensor module 20 is a ToF camera that captures a time-of-flight depth map (a ToF depth map) by emitting pulsed light toward a subject and detecting the light reflected from the subject. The ToF depth map indicates the actual distance between the electronic device 100 and the subject. The range sensor module 20 may be omitted.
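The time-of-flight principle described above can be sketched in a few lines. This is a generic illustration of the physics, not an implementation from the disclosure; the function name is an assumption. The emitted pulse travels to the subject and back, so the one-way distance is half the round-trip path length:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_time_s):
    """One-way distance from a round-trip time-of-flight measurement.

    The pulse covers the device-to-subject distance twice, hence the
    division by two.
    """
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0
```

A subject 3 m away returns the pulse after roughly 20 nanoseconds, which is the timing resolution a ToF sensor must resolve.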
The image signal processor (ISP) 30 controls the master camera module 11, the slave camera module 12 and the range sensor module 20. The ISP 30 controls the camera module 10 to acquire an image and performs bokeh processing on the image to generate an image with bokeh. The image with bokeh can be generated based on the master camera image (or the slave camera image) and the depth map (or the ToF depth map) .
The ISP 30 also acquires an autofocus area (e.g., an AF rectangle) in the captured image from the stereo camera module 10. The autofocus area indicates an in-focus area. The autofocus area is obtained by an autofocus operation of the camera module 10.
Furthermore, as shown in FIG. 1, the electronic device 100 includes a global navigation satellite system (GNSS) module 40, a wireless communication module 41, a CODEC 42, a speaker 43, a microphone 44, a display module 45, an input module 46, an inertial measurement unit (IMU) 47, a main processor 48, and a memory 49.
The GNSS module 40 measures a current position of the electronic device 100. The wireless communication module 41 performs wireless communications with the Internet. The CODEC 42 bi-directionally performs encoding and decoding, using a predetermined encoding/decoding method. The speaker 43 outputs a sound in accordance with sound data decoded by the CODEC 42. The microphone 44 outputs sound data to the CODEC 42 based on inputted sound.
The display module 45 displays an image captured by the camera module 10. The display module 45 may display the image in real-time.
The input module 46 inputs information via a user’s operation. For example, the input module 46 inputs an instruction to capture and store an image displayed on the display module 45.
The IMU 47 detects the angular velocity and the acceleration of the electronic device 100. The posture of the electronic device 100 can be determined from the measurement results of the IMU 47.
The main processor 48 controls the global navigation satellite system (GNSS) module 40, the wireless communication module 41, the CODEC 42, the speaker 43, the microphone 44, the display module 45, the input module 46, and the IMU 47.
The memory 49 stores data of the image, data of the depth map, and a program which runs on the image signal processor 30 and/or the main processor 48.
< Method of improving stability of bokeh processing >
A method of improving a stability of bokeh processing according to an embodiment of the present disclosure will be described with reference to FIG. 2.
In the step S1, the image signal processor 30 determines whether a new video frame is captured or not. If it is determined that a new video frame is captured, the process proceeds to the step S2.
In the step S2, the ISP 30 acquires an image, an autofocus area in the image, and a depth map which corresponds to the image. Specifically, the image, the autofocus area and the depth map are acquired from the camera module 10. The image is a master camera image. Alternatively, the image may be a slave camera image. The autofocus area is typically a rectangular area, i.e., an AF rectangle.
The depth map may be a depth map generated from the master camera image and the slave camera image, a ToF depth map acquired from the range sensor module 20, or a depth map estimated from the image by Artificial Intelligence (AI). The AI may estimate the depth map from either the master camera image or the slave camera image.
Examples of the depth map are shown in the upper diagrams of FIGs. 3A to 3C. The depth map is a greyscale image; the brightness of an area in the depth map is inversely proportional to the distance from the electronic device 100 to that area. In the examples, the sign P indicates a person as the subject and the signs B indicate buildings in the background.
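The inverse relation between depth-map brightness and distance can be illustrated as follows. The scale constant and the function name are illustrative assumptions, not part of the disclosure; only the stated proportionality (brighter pixel means nearer area) comes from the description above:

```python
def grey_to_distance(grey_value, k=255.0):
    """Map a depth-map brightness (0-255) to a relative distance.

    Brightness and distance are inversely proportional, i.e.
    grey = k / distance, hence distance = k / grey. The scale
    constant k is an illustrative choice.
    """
    if grey_value <= 0:
        return float("inf")  # fully dark pixel: farthest
    return k / grey_value
```

Under this assumed scale, a fully bright pixel (255) maps to a relative distance of 1.0, and a pixel five times darker maps to a distance five times larger.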
In the step S3, the ISP 30 performs subject segmentation processing on the image to acquire a subject area indicating a target subject. The subject segmentation processing separates an area of the target subject from the other areas in the image.
Specifically, a MAT image is generated by the segmentation processing. Examples of the MAT image are shown in a lower diagram of FIGs. 3A to 3C. The MAT image is a binary image. The sign R1 indicates the subject area indicating a person as the subject. The sign R2 indicates the autofocus area for reference.
In the step S4, the ISP 30 determines whether the segmentation processing is successful or not. The MAT images in FIGs. 3A and 3B show cases where the segmentation processing is successful. On the other hand, FIG. 3C shows a case where the segmentation processing is not successful.
For example, the ISP 30 determines that the subject segmentation processing is successful when a shape of the subject area R1 indicates a target subject such as a person, as shown in FIGs 3A and 3B.
Optionally, the ISP 30 may determine that the subject segmentation processing is successful when at least a part of the subject area R1 overlaps with the autofocus area R2 as shown in FIG. 3B.
Optionally, the ISP 30 may determine that the subject segmentation processing is successful when the subject area R1 has not changed in position and/or size beyond a predetermined reference compared to a subject area in the previous frame.
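Two of the success criteria above, the overlap check and the frame-to-frame stability check, can be sketched as follows. This is one possible reading of the step S4 heuristics: the pixel-grid mask representation, the rectangle convention (x0, y0, x1, y1 with the upper bounds exclusive), and the threshold value are all illustrative assumptions:

```python
def overlaps_autofocus(mask, af_rect):
    """Overlap criterion: True if any subject pixel of the binary MAT
    image falls inside the autofocus rectangle (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = af_rect
    return any(mask[y][x] for y in range(y0, y1) for x in range(x0, x1))

def stable_between_frames(prev_bbox, cur_bbox, max_shift=20):
    """Temporal criterion: the subject bounding box has not changed in
    position/size beyond a predetermined reference (here, max_shift
    pixels per coordinate) compared to the previous frame."""
    return all(abs(p - c) <= max_shift for p, c in zip(prev_bbox, cur_bbox))
```

In practice the ISP could require either criterion, or both, before declaring the segmentation successful; the disclosure leaves the exact combination open.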
The ISP 30 may execute the step S4 based on various other methods, such as methods involving trained machine learning (ML) or artificial intelligence (AI) models.
If it is determined that the segmentation processing is successful, the process proceeds to the step S5, otherwise proceeds to the step S8.
In the step S5, the ISP 30 extracts depth values in the subject area from the depth map. Specifically, depth values in an area corresponding to the subject area R1 are extracted from the depth map.
In the step S6, the ISP 30 determines a reference depth value based on the extracted depth values. Specifically, the ISP 30 calculates a representative value of the extracted depth values, such as a mean value, a median value, or a quartile value (e.g., the first or third quartile). After calculating the reference depth value, the ISP 30 stores it in the memory 49.
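The representative-value calculation of the step S6 can be sketched with the Python standard library; the function name and the method labels are illustrative, and the choice among mean, median, and quartiles is left open by the disclosure. A median or quartile is robust to stray background pixels that leak into the subject mask:

```python
from statistics import fmean, median, quantiles

def reference_depth(depth_values, method="median"):
    """Collapse the depth values extracted from the subject (or
    autofocus) area into a single reference depth value."""
    if method == "mean":
        return fmean(depth_values)
    if method == "median":
        return median(depth_values)
    if method in ("q1", "q3"):
        q = quantiles(depth_values, n=4)  # [Q1, Q2, Q3]
        return q[0] if method == "q1" else q[2]
    raise ValueError(f"unknown method: {method!r}")
```

For example, `reference_depth([1, 2, 3, 4, 5])` returns the median 3, while the "q3" method returns the third quartile 4.5 under the default exclusive quantile convention.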
In the step S7, the ISP 30 performs bokeh processing on the image based on the reference depth value. The ISP 30 uses the reference depth value as a depth value of the target subject. For example, the bokeh intensity is set to 0 in the subject area, and the bokeh intensity of the other areas is increased as their distance from the subject increases.
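The per-pixel intensity rule of the step S7 can be sketched as a blur-radius function. The linear ramp, the gain, and the cap are illustrative tuning assumptions; the disclosure only requires that the intensity be zero at the subject and grow with the depth difference:

```python
def bokeh_radius(pixel_depth, reference_depth, gain=2.0, max_radius=15):
    """Blur radius for one pixel: zero at the reference (subject) depth,
    growing with the depth difference, capped at max_radius."""
    return min(round(gain * abs(pixel_depth - reference_depth)), max_radius)
```

A pixel at the reference depth stays sharp (radius 0), a pixel 3 units behind the subject gets a radius-6 blur with the assumed gain, and far background pixels saturate at the cap.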
Appropriate bokeh processing can be performed by using the reference depth value even if the autofocus operation is not successful. After the step S7, the process returns to the step S1. In this way, the step S1 is performed every time a new video frame is captured by the camera module 10 provided in the electronic device 100.
When it is determined that the segmentation processing is not successful in the step S4, the steps S8 to S10 will be executed as described below.
In the step S8, the ISP 30 determines whether the autofocus operation is successful or not. The autofocus areas R2 in FIGs. 3A and 3C show the case where the autofocus operation is not successful. On the other hand, the autofocus area R2 in FIG. 3B shows the case where the autofocus operation is successful.
For example, the ISP 30 determines that the autofocus operation is successful when the autofocus area R2 has not changed in position beyond a predetermined reference compared to the autofocus area in the previous frame. The ISP 30 may execute the step S8 based on various other methods, such as methods involving trained machine learning (ML) or artificial intelligence (AI) models.
If it is determined that the autofocus operation is successful, the process proceeds to the step S9, otherwise proceeds to the step S10.
In the step S9, the ISP 30 extracts depth values in the autofocus area from the depth map. Specifically, depth values in an area corresponding to the autofocus area R2 are extracted from the depth map. After the step S9, the process proceeds to the step S6, and the reference depth value is calculated based on the depth values extracted from the autofocus area.
In the step S10, the ISP 30 reads a reference depth value of the previous video frame from the memory 49. After the step S10, the process proceeds to the step S7, and the bokeh processing is performed based on the read reference depth value.
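The fallback chain of the steps S4 to S10 can be summarized in one function: prefer depth values from the subject area, then from the autofocus area, and otherwise reuse the previous frame's stored reference depth. The flat boolean arguments and the use of the median are illustrative simplifications of the flowchart in FIG. 2:

```python
from statistics import median

def choose_reference_depth(segmentation_ok, autofocus_ok,
                           subject_depths, af_depths, previous_reference):
    """Sketch of the steps S4-S10 fallback chain for one video frame."""
    if segmentation_ok:        # S4 yes -> extract subject depths (S5), reduce (S6)
        return median(subject_depths)
    if autofocus_ok:           # S8 yes -> extract AF-area depths (S9), reduce (S6)
        return median(af_depths)
    return previous_reference  # S10: reuse the stored reference depth value
```

Because the last branch never returns without a value, the bokeh processing of the step S7 always receives a usable reference depth, which is the source of the temporal stability the method claims.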
Optionally, at least one of the steps described above may be performed by the main processor 48.
FIG. 4 shows an example of time-series images obtained by the method described above. A stable video can be obtained as shown in FIG. 4. That is to say, the buildings B in the background are stably displayed with blur while the sharpness of the person P as the subject is maintained across video frames, even when the autofocus area R2 is off the person P at times t1 and t3.
The subject is not limited to a person; it may be an animal, or an object such as a vehicle. Two examples are described here. FIG. 5 shows an example of images with bokeh which contain a cow in a field. FIG. 6 shows an example of images with bokeh which contain a car in front of a building.
The images I1 and I2 in FIG. 5 show examples of failure due to an unsuccessful autofocus operation (i.e., the autofocus area R2 is off the cow C). The cow C is not in focus; instead, the sky and the ground are in focus in the images I1 and I2, respectively. In contrast, an image I3 is generated by using a MAT image J1 in which the subject area R1 indicating the cow C is separated, and thus the cow C is focused appropriately.
Similarly, the images I4 and I5 in FIG. 6 show examples of failure due to an unsuccessful autofocus operation (i.e., the autofocus area R2 is off the vehicle V). The vehicle V is not in focus; instead, the building and the person are in focus in the images I4 and I5, respectively. In contrast, an image I6 is generated by using a MAT image J2 in which the subject area R1 indicating the vehicle V is separated, and thus the vehicle V is focused appropriately.
As described above, according to an embodiment of the present disclosure, appropriate bokeh processing can be performed on an image captured by the camera module 10 even if the autofocus operation is not successful, thereby improving the stability of bokeh processing in scenarios such as portrait shooting.
Even if the autofocus area deviates from the subject, stable images with bokeh can be generated by using the subject area together with the autofocus area, as long as the subject segmentation processing is successful.
Further, according to an embodiment of the present disclosure, when a user tries to shoot moving scenes in which the autofocus operation is not stable, it becomes possible for the user to shoot a subject and obtain a video in which the background is blurred while the sharpness of the subject is maintained.
For example, a camera application installed on a smartphone can provide high-quality portrait video with stable bokeh in real time.
In the description of embodiments of the present disclosure, it is to be understood that terms such as "central" , "longitudinal" , "transverse" , "length" , "width" , "thickness" , "upper" , "lower" , "front" , "rear" , "back" , "left" , "right" , "vertical" , "horizontal" , "top" , "bottom" , "inner" , "outer" , "clockwise" and "counterclockwise" should be construed to refer to the orientation or the position as described or as shown in the drawings in discussion. These relative terms are only used to simplify the description of the present disclosure, and do not indicate or imply that the device or element referred to must have a particular orientation, or must be constructed or operated in a particular orientation. Thus, these terms cannot be constructed to limit the present disclosure.
In addition, terms such as "first" and "second" are used herein for purposes of description and are not intended to indicate or imply relative importance or significance or to imply the number of indicated technical features. Thus, a feature defined as "first" and "second" may comprise one or more of this feature. In the description of the present disclosure, "a plurality of" means “two or more than two” , unless otherwise specified.
In the description of embodiments of the present disclosure, unless specified or limited otherwise, the terms "mounted" , "connected" , "coupled" and the like are used broadly, and may be, for example, fixed connections, detachable connections, or integral connections; may also be mechanical or electrical connections; may also be direct connections or indirect connections via intervening structures; may also be inner communications of two elements which can be understood by those skilled in the art according to specific situations.
In the embodiments of the present disclosure, unless specified or limited otherwise, a structure in which a first feature is "on" or "below" a second feature may include an embodiment in which the first feature is in direct contact with the second feature, and may also include an embodiment in which the first feature and the second feature are not in direct contact with each other, but are in contact via an additional feature formed therebetween. Furthermore, a first feature "on" , "above" or "on top of" a second feature may include an embodiment in which the first feature is orthogonally or obliquely "on" , "above" or "on top of" the second feature, or just means that the first feature is at a height higher than that of the second feature; while a first feature "below" , "under" or "on bottom of" a second feature may include an embodiment in which the first feature is orthogonally or obliquely "below" , "under" or "on bottom of" the second feature, or just means that the first feature is at a height lower than that of the second feature.
Various embodiments and examples are provided in the above description to implement different structures of the present disclosure. In order to simplify the present disclosure, certain elements and settings are described in the above. However, these elements and settings are only by way of example and are not intended to limit the present disclosure. In addition, reference numbers and/or reference letters may be repeated in different examples in the present disclosure. This repetition is for the purpose of simplification and clarity and does not refer to relations between different embodiments and/or settings. Furthermore, examples of different processes and materials are provided in the present disclosure. However, it would be appreciated by those skilled in the art that other processes and/or materials may also be applied.
Reference throughout this specification to "an embodiment" , "some embodiments" , "an exemplary embodiment" , "an example" , "a specific example" or "some examples" means that a particular feature, structure, material, or characteristics described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Thus, the appearances of the above phrases throughout this specification are not necessarily referring to the same embodiment or example of the present disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples.
Any process or method described in a flow chart or described herein in other ways may be understood to include one or more modules, segments or portions of codes of executable instructions for achieving specific logical functions or steps in the process, and the scope of a preferred embodiment of the present disclosure includes other implementations, in which it should be understood by those skilled in the art that functions may be implemented in a sequence other than the sequences shown or discussed, including in a substantially identical sequence or in an opposite sequence.
The logic and/or step described in other manners herein or shown in the flow chart, for example, a particular sequence table of executable instructions for realizing the logical function, may be specifically achieved in any computer readable medium to be used by the instructions execution system, device or equipment (such as a system based on computers, a system comprising processors or other systems capable of obtaining instructions from the instructions execution system, device and equipment executing the instructions) , or to be used in combination with the instructions execution system, device and equipment. As to the specification, "the computer readable medium" may be any device adaptive for including, storing, communicating, propagating or transferring programs to be used by or in combination with the instruction execution system, device or equipment. More specific examples of the computer readable medium comprise but are not limited to: an electronic connection (an electronic device) with one or more wires, a portable computer enclosure (a magnetic device) , a random access memory (RAM) , a read only memory (ROM) , an erasable programmable read-only memory (EPROM or a flash memory) , an optical fiber device and a portable compact disk read-only memory (CDROM) . In addition, the computer readable medium may even be a paper or other appropriate medium capable of printing programs thereon, this is because, for example, the paper or other appropriate medium may be optically scanned and then edited, decrypted or processed with other appropriate methods when necessary to obtain the programs in an electric manner, and then the programs may be stored in the computer memories.
It should be understood that each part of the present disclosure may be realized by the hardware, software, firmware or their combination. In the above embodiments, a plurality of steps or methods may be realized by the software or firmware stored in the memory and executed by the appropriate instructions execution system. For example, if it is realized by the hardware, likewise in another embodiment, the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function of a data signal, an application-specific integrated circuit having an  appropriate combination logic gate circuit, a programmable gate array (PGA) , a field programmable gate array (FPGA) , etc.
Those skilled in the art shall understand that all or parts of the steps in the above exemplifying method of the present disclosure may be achieved by commanding the related hardware with programs. The programs may be stored in a computer readable storage medium, and the programs comprise one or a combination of the steps in the method embodiments of the present disclosure when run on a computer.
In addition, each function cell of the embodiments of the present disclosure may be integrated in a processing module, or these cells may be separate physical existence, or two or more cells are integrated in a processing module. The integrated module may be realized in a form of hardware or in a form of software function modules. When the integrated module is realized in a form of software function module and is sold or used as a standalone product, the integrated module may be stored in a computer readable storage medium.
The storage medium mentioned above may be read-only memories, magnetic disks, CD, etc.
Although embodiments of the present disclosure have been shown and described, it would be appreciated by those skilled in the art that the embodiments are explanatory and cannot be construed to limit the present disclosure, and changes, modifications, alternatives and variations can be made in the embodiments without departing from the scope of the present disclosure.

Claims (13)

  1. A method of improving a stability of bokeh processing, the method comprising:
    acquiring an image, an autofocus area in the image and a depth map which corresponds to the image;
    performing subject segmentation processing on the image to acquire a subject area indicating a target subject;
    extracting, from the depth map, depth values in the subject area when the subject segmentation processing is successful, or depth values in the autofocus area when the subject segmentation processing is not successful and an autofocus operation is successful;
    determining a reference depth value based on the extracted depth values; and
    performing bokeh processing on the image based on the reference depth value.
  2. The method according to claim 1, wherein the acquiring an image, an autofocus area in the image and a depth map which corresponds to the image is performed every time a new video frame is captured by a camera module provided with a mobile device.
  3. The method according to claim 2, wherein the performing bokeh processing on the image based on the reference depth value uses a reference depth value of the previous video frame when both the subject segmentation processing and the autofocus operation are not successful.
  4. The method according to any one of claims 1 to 3, wherein the performing bokeh processing on the image based on the reference depth value comprises using the reference depth value as a depth value of the target subject.
  5. The method according to any one of claims 1 to 4, wherein the image is a master image captured by a master camera module of a stereo camera module.
  6. The method according to any one of claims 1 to 5, wherein the autofocus area is an AF rectangle.
  7. The method according to any one of claims 1 to 6, wherein the depth map is generated based on a stereo image captured by a stereo camera module, the depth map is a ToF depth map based on an image captured by a range sensor module, or the depth map is estimated based on the image by Artificial Intelligence (AI).
  8. The method according to any one of claims 1 to 7, wherein the target subject is a person, an animal or an object.
  9. The method according to any one of claims 1 to 8, wherein the subject segmentation processing is determined to be successful when a shape of the subject area indicates the target subject.
  10. The method according to any one of claims 1 to 8, wherein the subject segmentation processing is determined to be successful when at least a part of the subject area overlaps with the autofocus area.
  11. The method according to any one of claims 1 to 10, wherein the determining a reference depth value based on the extracted depth values comprises calculating a representative value of the extracted depth values.
  12. An electronic device for image processing, comprising a processor and a memory for storing instructions, wherein the instructions, when executed by the processor, cause the processor to perform the method according to any one of claims 1 to 11.
  13. A non-transitory computer-readable storage medium, on which a computer program is stored, wherein the computer program is executed by a computer to implement the method according to any one of claims 1 to 11.
PCT/CN2021/082832 2021-03-24 2021-03-24 Method of improving stability of bokeh processing and electronic device WO2022198525A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/082832 WO2022198525A1 (en) 2021-03-24 2021-03-24 Method of improving stability of bokeh processing and electronic device
CN202180094935.9A CN116917933A (en) 2021-03-24 2021-03-24 Method for improving stability of foreground processing and electronic equipment


Publications (1)

Publication Number Publication Date
WO2022198525A1 true WO2022198525A1 (en) 2022-09-29

Family

ID=83395024

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/082832 WO2022198525A1 (en) 2021-03-24 2021-03-24 Method of improving stability of bokeh processing and electronic device

Country Status (2)

Country Link
CN (1) CN116917933A (en)
WO (1) WO2022198525A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015196802A1 (en) * 2014-06-25 2015-12-30 华为技术有限公司 Photographing method and apparatus, and electronic device
CN107945105A (en) * 2017-11-30 2018-04-20 广东欧珀移动通信有限公司 Background blurring processing method, device and equipment
US20180122117A1 (en) * 2016-11-02 2018-05-03 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US20190073749A1 (en) * 2017-09-07 2019-03-07 Arcsoft (Hangzhou) Multimedia Technology Co., Ltd. Method and apparatus for image processing
US20200265565A1 (en) * 2019-02-20 2020-08-20 Samsung Electronics Co., Ltd. Electronic device applying bokeh effect to image and controlling method thereof


Also Published As

Publication number Publication date
CN116917933A (en) 2023-10-20


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21932153; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 202180094935.9; Country of ref document: CN)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 21932153; Country of ref document: EP; Kind code of ref document: A1)