CN117178286A - Bokeh processing method, electronic device and computer readable storage medium - Google Patents

Bokeh processing method, electronic device and computer readable storage medium

Info

Publication number
CN117178286A
Authority
CN
China
Prior art keywords
bokeh
image
size
camera parameters
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180096376.5A
Other languages
Chinese (zh)
Inventor
大井拓哉
罗俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN117178286A publication Critical patent/CN117178286A/en
Pending legal-status Critical Current

Links

Classifications

    • G06T5/70
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/958 Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N23/959 Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20004 Adaptive image processing
    • G06T2207/20012 Locally adaptive
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661 Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N23/662 Transmitting camera control signals through networks, e.g. control via the Internet by using master/slave camera arrangements for affecting the control of camera image capture, e.g. placing the camera in a desirable condition to capture a desired image

Abstract

A method for bokeh processing is disclosed. The method comprises the following steps: acquiring DSLR camera parameters including a focal length and an F value; acquiring an image, a focusing distance and a depth map corresponding to the image; obtaining a bokeh size of each pixel of the image based on the DSLR camera parameters, the focusing distance and the depth map to generate a bokeh size map; and performing bokeh processing on the image based on the bokeh size map.

Description

Bokeh processing method, electronic device and computer readable storage medium
Technical Field
The present disclosure relates to a method for bokeh processing, and in particular, to a method for accurately reproducing high quality bokeh equivalent to the bokeh created by a digital single-lens reflex (DSLR) camera, an electronic device performing the method, and a computer readable storage medium storing a program implementing the method.
Background
In recent years, techniques for artificially generating an image with bokeh have been widely used. In an image with bokeh, a subject such as a person should be displayed clearly, while the background, such as buildings or the sky, should be blurred. For example, in the region occupied by the subject, the bokeh size/intensity is set to 0, and the bokeh size of other regions increases with increasing distance from the subject.
The size/area of the image sensor of an electronic device such as a smartphone is smaller than that of a DSLR camera with a large sensor such as a 35 mm sensor. Because of the small sensor size, the bokeh generated based on an image captured with the electronic device is inevitably smaller than that of an image captured with a DSLR camera. Thus, an image with bokeh generated by the electronic device may look unnatural. To improve the quality of images with bokeh, it is known to use the depth values of a depth map to determine the bokeh size. However, even with this method, it is difficult to obtain an image having the same bokeh as an image captured with a DSLR camera.
Disclosure of Invention
The present disclosure is directed to solving at least one of the above-mentioned technical problems. Accordingly, the present disclosure needs to provide a method for bokeh processing and an electronic device implementing such a method.
According to the present disclosure, a method for bokeh processing includes: obtaining DSLR camera parameters including a focal length (F) and an F value (A); obtaining an image, a focusing distance (D), and a depth map corresponding to the image; obtaining a bokeh size of each pixel of the image based on the DSLR camera parameters, the focusing distance, and the depth map to generate a bokeh size map; and performing bokeh processing on the image based on the bokeh size map.
According to the present disclosure, an electronic device includes a processor and a memory for storing instructions. The instructions, when executed by a processor, cause the processor to perform a method according to the present disclosure.
According to the present disclosure, a computer-readable storage medium having a computer program stored thereon is provided. The computer program is executed by a computer to implement the method according to the present disclosure.
Drawings
These and/or other aspects and advantages of the embodiments of the present disclosure will become apparent from and more readily appreciated from the following description taken in conjunction with the accompanying drawings.
Fig. 1A is a functional block diagram showing a configuration of an electronic device according to an embodiment of the present disclosure.
Fig. 1B is a functional block diagram of an image signal processor in an electronic device according to an embodiment of the present disclosure.
Fig. 2 is a flow chart for generating an image with bokeh according to an embodiment of the present disclosure.
Fig. 3 is an example of a user interface for inputting DSLR camera parameters.
Fig. 4A shows an example of an image captured by a camera module.
Fig. 4B shows an example of a depth map corresponding to the image shown in fig. 4A.
Fig. 5 illustrates an example of a bokeh size map generated by using a method according to an embodiment of the present disclosure.
Fig. 6 illustrates an example of an image with bokeh generated by using a method according to an embodiment of the present disclosure.
Fig. 7 is a diagram for explaining the enhancement of the bokeh based on the DSLR camera parameters.
Fig. 8 is a diagram for explaining a change of the bokeh size within the depth of field (DoF) range.
Detailed Description
Embodiments of the present disclosure will be described in detail, and examples of the embodiments are illustrated in the accompanying drawings. Throughout the description, identical or similar elements and elements having identical or similar functions are denoted by identical reference numerals. The embodiments described herein with reference to the drawings are illustrative, intended to explain the present disclosure, and should not be construed as limiting the present disclosure.
< electronic device 100>
The electronic device 100 will be described with reference to fig. 1A. Fig. 1A is a functional block diagram showing an example of a configuration of an electronic device 100 according to an embodiment of the present disclosure.
The electronic device 100 is a mobile device such as a smart phone, tablet terminal, or mobile phone, but may be other types of electronic devices equipped with one or more camera modules.
As shown in fig. 1A, the electronic device 100 comprises a stereo camera module 10, a distance sensor module 20, an image signal processor 30, a global navigation satellite system (global navigation satellite system, GNSS) module 40, a wireless communication module 41, a codec 42, a speaker 43, a microphone 44, a display module 45, an input module 46, an inertial measurement unit (inertial measurement unit, IMU) 47, a main processor 48 and a memory 49.
The stereoscopic camera module 10 includes a master camera module 11 and a slave camera module 12 for binocular stereoscopic viewing, as shown in fig. 1A. The camera module 10 may capture video at a given frame rate.
The main camera module 11 includes a first lens 11A capable of focusing on an object, a first image sensor 11b detecting an image input via the first lens 11A, and a first image sensor driver 11c driving the first image sensor 11b, as shown in fig. 1A.
The slave camera module 12 includes a second lens 12a capable of focusing on a subject, a second image sensor 12b detecting an image input via the second lens 12a, and a second image sensor driver 12c driving the second image sensor 12b, as shown in fig. 1A.
The main camera module 11 captures a master camera image, and the slave camera module 12 captures a slave camera image. The master camera image and the slave camera image may be color images such as RGB images, or monochrome images.
By a stereo matching technique, a depth map may be generated based on the master camera image and the slave camera image. Specifically, the parallax amount is calculated for each corresponding pixel of the stereoscopic image pair (i.e., the master camera image and the slave camera image). The parallax amount increases as the depth decreases, so the depth value can be derived from the parallax. The depth map includes a depth value for each pixel in the image.
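As a minimal sketch of this step (not the patent's own implementation), the disparity computation and the disparity-to-depth conversion could look like the following, assuming rectified grayscale images and a known baseline and focal length in pixels:

```python
import cv2
import numpy as np

def depth_map_from_stereo(master_gray, slave_gray, focal_px, baseline_m):
    """Estimate a depth map (in metres) from a rectified stereo pair.

    master_gray / slave_gray: 8-bit grayscale master and slave images.
    focal_px: focal length expressed in pixels; baseline_m: distance
    between the two camera modules in metres. Both are assumptions
    about the calibration data available on the device.
    """
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(master_gray, slave_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan  # mask invalid matches
    # Depth is inversely proportional to disparity: Z = f * B / disparity.
    return focal_px * baseline_m / disparity
```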
The distance sensor module 20 captures a depth map. For example, the distance sensor module 20 is a time of flight (ToF) camera, and captures a ToF depth map by emitting pulsed light to a subject and detecting light reflected from the subject. The ToF depth map indicates the actual distance between the electronic apparatus 100 and the subject. Alternatively, the distance sensor module may be omitted.
The image signal processor (image signal processor, ISP) 30 controls the master camera module 11, the slave camera module 12, and the distance sensor module 20. The ISP 30 also performs image processing on images captured by the stereo camera module 10. Specifically, the ISP 30 acquires an original image from the camera module 10. The original image is the master camera image or the slave camera image. The ISP 30 performs bokeh processing on the original image to generate an image with bokeh. The image with bokeh may be generated based on the original image and a bokeh size map, which will be described later.
When the camera module 10 captures an image, the ISP 30 acquires an autofocus area in the captured image from the stereoscopic camera module 10. The autofocus area represents an in-focus area. For example, the autofocus area is displayed as an autofocus rectangle. The autofocus area is obtained by the autofocus operation of the camera module 10.
The ISP 30 acquires a focusing distance, or an object distance. The focusing distance is the distance between the subject and the camera module 10. More precisely, the focusing distance indicates the distance between the focusing plane and the principal point of the lens 11a (12a).
The ISP 30 may calculate the focusing distance based on the depth map and the autofocus area. For example, the focusing distance is acquired by calculating a representative value of the depth values in the autofocus area. The representative value may be the average, the median, or a quartile (e.g., the third quartile or the first quartile) of the depth values.
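A minimal sketch of this step, assuming the autofocus area is given as a pixel rectangle and the median is used as the representative value:

```python
import numpy as np

def focus_distance_from_af_area(depth_map, af_rect):
    """Representative depth value inside the autofocus rectangle.

    af_rect = (x, y, w, h) in pixel coordinates. The median is used
    here; the mean or a quartile would work equally well.
    """
    x, y, w, h = af_rect
    roi = depth_map[y:y + h, x:x + w]
    return float(np.nanmedian(roi))
```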
The GNSS module 40 measures the current position of the electronic device 100. The wireless communication module 41 performs wireless communication with the internet. The codec 42 bidirectionally performs encoding and decoding using a predetermined encoding/decoding method. The speaker 43 outputs sound based on the sound data decoded by the codec 42. The microphone 44 outputs sound data to the codec 42 based on the input sound.
The display module 45 displays various information such as an image captured by the camera module 10 in real time, a user interface (UI), and an image with bokeh generated by the ISP 30.
The input module 46 inputs information by operation of a user. The input module 46 is a touch panel, a keyboard, or the like. The input module 46 inputs instructions to capture and store images displayed on the display module 45. In addition, the input module 46 inputs DSLR camera parameters selected by the user (described below).
The IMU 47 detects the angular velocity and acceleration of the electronic device 100. The posture of the electronic device 100 can be determined from the measurement results of the IMU 47.
The main processor 48 controls a Global Navigation Satellite System (GNSS) module 40, a wireless communication module 41, a codec 42, a speaker 43, a microphone 44, a display module 45, an input module 46 and an IMU 47.
The memory 49 stores data of an image, data of a depth map, various camera parameters to be used in image processing, and programs running on the image signal processor 30 and/or the main processor 48.
Next, the ISP 30 is described in detail with reference to fig. 1B.
The ISP 30 includes a first acquisition unit 31, a second acquisition unit 32, an acquisition unit 33, and an execution unit 34.
The first acquisition unit 31 is configured to acquire DSLR camera parameters including a focal length and an F value. The DSLR camera parameters are camera parameters of a digital single-lens reflex camera. The camera parameters include the type of image sensor and the type of lens. The DSLR camera parameters are different from those of the camera module 10 mounted on the electronic device 100.
The DSLR camera parameters include at least a focal length and an F value. The DSLR camera parameters may also include the size and/or resolution of the DSLR image sensor. The DSLR image sensor is an image sensor of a DSLR camera.
The second acquisition unit 32 is used for acquiring an image, a focusing distance, and a depth map. The depth map corresponds to the acquired image.
The obtaining unit 33 is configured to obtain a bokeh size of each pixel of the image based on the DSLR camera parameters, the focusing distance, and the depth map, to generate a bokeh size map.
The execution unit 34 is configured to execute bokeh processing on the image based on the bokeh size map.
< bokeh processing method>
A bokeh processing method according to an embodiment of the present disclosure will be described with reference to the flowchart shown in fig. 2.
In step S1, the display module 45 displays a User Interface (UI) allowing the user to input DSLR camera parameters.
Fig. 3 shows an example of a UI displayed by the display module 45. The user may input DSLR camera parameters through the UI. In this example, the user may select the type of DSLR image sensor, i.e., "645", "35mm", or "APS-C". The user may also select the lens types, i.e. focal length (20 mm to 100 mm) and F-value (F1.4 to F11). After selecting the DSLR camera parameters, the user clicks the save button to save the selected parameters in memory 49.
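For illustration, the selected parameters could be held in a small structure such as the one below; the sensor dimensions are typical values for each format and are assumptions, not figures taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional

# Typical sensor sizes in millimetres (illustrative values only).
SENSOR_SIZE_MM = {
    "645": (56.0, 41.5),
    "35mm": (36.0, 24.0),
    "APS-C": (23.6, 15.6),
}

@dataclass
class DSLRCameraParams:
    sensor_type: str          # "645", "35mm" or "APS-C"
    focal_length_mm: float    # 20 mm to 100 mm in the example UI
    f_number: float           # F1.4 to F11 in the example UI
    resolution_px: Optional[int] = None  # optional pixel count of the DSLR sensor
```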
Alternatively, a UI allowing the user to select the resolution of the DSLR image sensor from the candidates may be displayed.
In step S2, the first acquisition unit 31 of the ISP 30 acquires DSLR camera parameters input by the user in step S1. Specifically, the first acquisition unit 31 reads the input DSLR camera parameters from the memory 49.
In step S3, the second acquisition unit 32 of the ISP 30 acquires an image (i.e., an original image), a focusing distance, and a depth map corresponding to the original image. Specifically, when a user takes a photo/video using the electronic device 100, the stereoscopic camera module 10 captures a master camera image and a slave camera image. The second acquisition unit 32 acquires a master camera image or a slave camera image as an original image.
By means of stereo matching techniques, the ISP 30 generates a depth map based on the master and slave camera images.
Alternatively, the depth map may be acquired from the distance sensor module 20.
The second acquisition unit 32 acquires an autofocus area from the camera module 10, and acquires a focus distance by calculating a representative value of a depth value in the autofocus area.
Alternatively, the second acquisition unit 32 may acquire the focusing distance directly from the camera module 10. In this case, the focusing distance is determined by the autofocus operation of the camera module 10.
Fig. 4A shows an example of an image captured by the camera module 10. The image includes three subjects S1, S2, and S3, which are placed on a table in the order of S1, S2, and S3 from the front. In the example shown in fig. 4A, the autofocus area R is on the subject S2.
Fig. 4B shows an example of a depth map corresponding to the acquired image shown in fig. 4A. The depth map is a gray scale image. For example, the brightness of an area in the depth map decreases as the distance from the electronic device 100 increases.
Next, in step S4, the obtaining unit 33 of the ISP 30 calculates a bokeh size for each pixel of the acquired image based on the DSLR camera parameters, the focusing distance, and the depth map. Thus, a bokeh size map is generated.
The bokeh size can be calculated by the following equation:
where C is the bokeh size, F is the focal length, A is the F value, D is the focusing distance, and d is the depth value of the corresponding pixel in the depth map.
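The equation itself is not reproduced in this text; the sketch below assumes the standard thin-lens circle-of-confusion relation C = F²·|d − D| / (A·d·(D − F)), which is consistent with the variables defined above but is an assumption, not a quotation of the patent's equation (metre units throughout):

```python
import numpy as np

def bokeh_size_map(depth_m, focus_dist_m, focal_length_m, f_number):
    """Per-pixel bokeh (circle-of-confusion) diameter on the virtual
    DSLR sensor plane, from the assumed thin-lens relation."""
    F, A, D, d = focal_length_m, f_number, focus_dist_m, depth_m
    return (F * F * np.abs(d - D)) / (A * d * (D - F))
```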
Fig. 5 shows an example of the bokeh size map generated in step S4. The bokeh size map is a grayscale image. The brightness of a region in the bokeh size map increases as the bokeh size of the region increases. In the example shown in fig. 5, the brightness of the area indicating the focused subject S2 is the lowest (i.e., the smallest bokeh size). On the other hand, the brightness of the area indicating the subject S1 is the highest (i.e., the largest bokeh size).
In step S5, the execution unit 34 of the ISP 30 executes bokeh processing on the original image based on the bokeh size map generated in step S4. Thus, an image with bokeh can be obtained.
The bokeh processing may be performed by applying a smoothing filter to the original image. The smoothing filter is a filter generated based on the bokeh size map. For example, the smoothing filter is a Gaussian filter having a standard deviation calculated based on the bokeh size in the bokeh size map. The bokeh size can be converted to the standard deviation by the following equation:
where σ is the standard deviation, C is the bokeh size, and pp is the pixel pitch of the DSLR image sensor.
The pixel pitch pp can be calculated by the following equation:
where S is the area of the DSLR image sensor and N_p is the number of pixels of the DSLR image sensor. Both S and N_p are DSLR camera parameters.
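A sketch of this conversion under two assumptions, pp = sqrt(S / N_p) and σ = C / pp (the bokeh diameter expressed in DSLR sensor pixels), since the equations themselves are not reproduced in this text:

```python
import numpy as np

def bokeh_to_sigma(bokeh_size_m, sensor_area_m2, num_pixels):
    """Convert bokeh size to a Gaussian standard deviation in pixels.

    pp = sqrt(S / N_p) is the pixel pitch of the DSLR image sensor;
    sigma = C / pp is an assumed scaling used here for illustration.
    """
    pp = np.sqrt(sensor_area_m2 / num_pixels)
    return bokeh_size_m / pp
```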
Specifically, the execution unit 34 generates a Gaussian kernel for each pixel in the image. The coefficients of the Gaussian kernel are determined based on the standard deviation σ. Then, for each pixel in the image, the execution unit 34 convolves the Gaussian kernel over the corresponding pixel.
For example, the size of the Gaussian kernel is 3×3, 8×8, or 16×16, but it is not limited to any particular size. The kernel size may be determined based on the standard deviation σ; for example, the kernel size increases with increasing standard deviation. The kernel size may also be determined based on the computing power of the ISP 30; for example, the kernel size increases with increasing computing power.
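A naive, per-pixel sketch of this spatially variant blur (a real implementation would quantise the σ values or vectorise the loop; the fixed kernel radius is an assumption):

```python
import numpy as np

def spatially_variant_blur(image, sigma_map, max_radius=8):
    """Blur each pixel with a Gaussian kernel whose standard deviation
    is taken from sigma_map (same height/width as the image)."""
    h, w = sigma_map.shape
    pad = ((max_radius, max_radius), (max_radius, max_radius), (0, 0))
    padded = np.pad(image.astype(np.float32), pad, mode="edge")
    out = np.empty_like(image, dtype=np.float32)
    ys, xs = np.mgrid[-max_radius:max_radius + 1, -max_radius:max_radius + 1]
    for y in range(h):
        for x in range(w):
            s = sigma_map[y, x]
            if not np.isfinite(s) or s < 1e-3:   # in-focus pixel: keep as is
                out[y, x] = image[y, x]
                continue
            kernel = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * s ** 2))
            kernel /= kernel.sum()
            patch = padded[y:y + 2 * max_radius + 1, x:x + 2 * max_radius + 1]
            out[y, x] = np.tensordot(kernel, patch, axes=([0, 1], [0, 1]))
    return out
```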
Fig. 6 shows an example of the image with bokeh generated in step S5. The focused subject S2 is clearly displayed, while the subjects S1 and S3 are largely blurred.
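Putting steps S3 to S5 together, an end-to-end sketch could look like this (the helper functions are the illustrative sketches given above, not the patent's API):

```python
def generate_bokeh_image(image, depth_m, af_rect, dslr: "DSLRCameraParams"):
    """End-to-end sketch: focus distance -> bokeh size map -> blur."""
    focus_m = focus_distance_from_af_area(depth_m, af_rect)               # step S3
    bokeh_m = bokeh_size_map(depth_m, focus_m,
                             dslr.focal_length_mm * 1e-3, dslr.f_number)  # step S4
    w_mm, h_mm = SENSOR_SIZE_MM[dslr.sensor_type]
    sensor_area_m2 = (w_mm * 1e-3) * (h_mm * 1e-3)
    sigma_map = bokeh_to_sigma(bokeh_m, sensor_area_m2, dslr.resolution_px)
    return spatially_variant_blur(image, sigma_map)                       # step S5
```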
Fig. 7 shows a case where the user P of the electronic apparatus 100 takes a picture or video of the subject person S, with flowers FG in the foreground and trees BG in the background. As shown in fig. 7, the bokeh size generated based on the DSLR camera parameters according to the above method (solid line) is larger, over the entire distance range, than the bokeh size generated based on the actual camera parameters of the electronic device 100 (broken line). Thus, large bokeh can be obtained for the flowers FG and the trees BG.
Further, as can be seen from fig. 7, the curve of the bokeh size in the vicinity of the subject person S is asymmetric in the distance direction. That is, the bokeh size in front of the subject person S is larger than that behind the subject person S. This can be understood from the equation for calculating the bokeh size.
It should be noted that there are a variety of bokeh processing methods. The bokeh processing in step S5 is not limited to the above method, as long as a bokeh size map is used.
The DSLR camera parameters may be acquired from preset parameters previously stored in the memory 49. In this case, the UI described above does not need to be displayed, that is, step S1 is not necessary.
Optionally, at least one of the steps described above may be performed by the main processor 48.
Alternatively, the bokeh size near the subject may be changed. Specifically, the bokeh size within the depth of field (DoF) may be changed to a value smaller than the calculated bokeh size. Typically, all the bokeh sizes within the DoF are changed to 0. In this case, the obtaining unit 33 calculates the bokeh size by the above equation, and changes the bokeh size within the DoF to a predetermined value (e.g., 0) smaller than the calculated bokeh size.
Fig. 8 shows the same situation as fig. 7 described above. As shown in fig. 8, the bokeh size within the DoF is set to 0. In other words, the bokeh size is set to 0 in a first range of distance Tf in front of the subject S and in a second range of distance Tr behind the subject S. The bokeh sizes in the first range and the second range are set to 0 so that the subject person S can be displayed more clearly.
The DoF can be calculated based on the focal length (F), the F value (A), the focusing distance (D), and the allowable circle of confusion (δ). Specifically, the DoF is calculated by the following equation:
DoF = T_f + T_r
where T_f is the front depth of field, T_r is the rear depth of field, δ is the allowable circle of confusion, F is the focal length, A is the F value, and D is the focusing distance.
The allowable circle of confusion δ is calculated by the following equations:
δ = Max{pp, adr}
adr = 1.22 × λ × A,
where δ is the allowable circle of confusion, Max is a function returning the larger of its two arguments, pp is the pixel pitch of the DSLR image sensor, adr is the Airy spot radius, λ is a representative wavelength (e.g., 530 nm), and A is the F value.
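An illustrative sketch of the DoF-based clamping described above; δ and the Airy spot radius follow the equations in the text, while the front and rear depth-of-field terms T_f and T_r use the standard thin-lens formulas, which are an assumption since the patent's equations (3) and (4) are not reproduced in this text:

```python
import numpy as np

def clamp_bokeh_in_dof(bokeh_map, depth_m, focal_m, f_number, focus_m,
                       pixel_pitch_m, wavelength_m=530e-9):
    """Set the bokeh size to 0 for pixels whose depth lies inside the DoF."""
    A, F, D = f_number, focal_m, focus_m
    adr = 1.22 * wavelength_m * A                 # Airy spot radius
    delta = max(pixel_pitch_m, adr)               # allowable circle of confusion
    # Standard thin-lens front/rear depth of field (assumed, not quoted).
    t_front = delta * A * D * (D - F) / (F * F + delta * A * (D - F))
    t_rear = delta * A * D * (D - F) / (F * F - delta * A * (D - F))
    out = bokeh_map.copy()
    out[(depth_m >= D - t_front) & (depth_m <= D + t_rear)] = 0.0
    return out
```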
As described above, according to the embodiments of the present disclosure, by performing bokeh processing using a bokeh size map generated based on DSLR camera parameters, high quality bokeh equivalent to the bokeh created by a DSLR camera can be accurately reproduced. In other words, it is possible to generate an image with natural and large bokeh.
For example, even when a picture or video is taken using a smartphone or the like with a relatively small image sensor, an image with large bokeh (equivalent to the bokeh created by a DSLR camera with a large image sensor) can be generated.
In describing embodiments of the present disclosure, it should be understood that terms such as "center," "longitudinal," "transverse," "length," "width," "thickness," "upper," "lower," "front," "rear," "back," "left," "right," "vertical," "horizontal," "top," "bottom," "interior," "exterior," "clockwise," and "counterclockwise" should be construed to refer to directions or locations as described or illustrated in the drawings in question. These related terms are only used to simplify the description of the present disclosure and do not indicate or imply that the devices or elements referred to must have a particular orientation or must be constructed or operated in a particular orientation. Accordingly, these terms should not be construed as limiting the present disclosure.
Furthermore, terms such as "first" and "second" are used herein for descriptive purposes and are not intended to indicate or imply relative importance or significance, or to imply the number of technical features indicated. Thus, features defined as "first" and "second" may include one or more of the features. In the description of the present disclosure, "a plurality" means "two or more than two" unless otherwise indicated.
In the description of the embodiments of the present disclosure, unless specified or limited otherwise, the terms "mounted," "connected," "coupled," and the like are used broadly and may be, for example, a fixed connection, a removable connection, or an integral connection; or may be a mechanical or electrical connection; or may be directly connected or indirectly connected through an intermediate structure; internal communication of two elements as would be understood by one of ordinary skill in the art depending on the particular situation is also possible.
In embodiments of the present disclosure, unless specified or limited otherwise, structures in which a first feature is "on" or "under" a second feature may include embodiments in which the first feature is in direct contact with the second feature, and may also include embodiments in which the first feature and the second feature are not in direct contact with each other, but are contacted by additional features formed therebetween. Furthermore, embodiments in which a first feature is "on", "above" or "on top of" a second feature may include embodiments in which the first feature is "on", "above" or "on top of" the second feature "orthogonally or obliquely to the first feature, or simply means that the first feature is at a height greater than the height of the second feature; while a first feature "under", "beneath" or "at the bottom of" a second feature "may include embodiments where the first feature is" under "," beneath "or" at the bottom of "the second feature" orthogonally or obliquely, or simply meaning that the first feature is at a lower elevation than the second feature.
Various embodiments and examples are provided in the above description to implement the different structures of the present disclosure. In order to simplify the present disclosure, certain elements and arrangements are described above. However, these elements and arrangements are merely examples and are not intended to limit the present disclosure. Further, in different examples of the present disclosure, reference numerals and/or letters may be repeated. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations. In addition, the present disclosure provides examples of different processes and materials. However, those skilled in the art will appreciate that other processes and/or materials may also be applied.
Reference throughout this specification to "an embodiment," "some embodiments," "an example embodiment," "an example," "a particular example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Thus, the appearances of the above-identified phrases in various places throughout this specification are not necessarily all referring to the same embodiment or example of the disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples.
Any process or method described in the flow diagrams or otherwise described herein may be understood as comprising one or more modules, segments, or portions of code comprising executable instructions for implementing specific logical functions or steps in the process, and the scope of the preferred embodiments of the present disclosure includes other implementations in which functions may be implemented in a different order (including in substantially the same order or in an opposite order) than shown or discussed, as would be understood by those skilled in the art.
Logic and/or steps (e.g., a particular sequence of executable instructions for performing a logic function) described elsewhere herein or shown in a flowchart may be embodied in any computer-readable medium to be used by or in connection with an instruction execution system, apparatus, or device (e.g., a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions). For the purposes of this description, a "computer-readable medium" can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples of the computer-readable medium include, but are not limited to: an electronic connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (random access memory, RAM), a read only memory (ROM), an erasable programmable read only memory (erasable programmable read-only memory, EPROM or flash memory), a fiber optic device, and a portable compact disc read only memory (compact disk read-only memory, CDROM). Furthermore, the computer readable medium may even be paper or another suitable medium upon which the program can be printed, as the paper or other suitable medium can be optically scanned, then compiled, decrypted or otherwise processed in a suitable manner when necessary, and the program can then be stored in a computer memory.
It should be understood that each portion of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, in another embodiment as well, the steps or methods may be implemented by one or a combination of the following techniques, which are known in the art: discrete logic circuits with logic gates for implementing logic functions for data signals, application specific integrated circuits with suitable combinational logic gates, programmable gate arrays (programmable gate array, PGA), field programmable gate arrays (field programmable gate array, FPGA), and the like.
Those skilled in the art will appreciate that all or part of the steps in the above-described exemplary methods of the present disclosure may be implemented using program command related hardware. These programs may be stored in a computer readable storage medium and when run on a computer comprise one or a combination of steps in the method embodiments of the present disclosure.
Furthermore, each functional unit of the embodiments of the present disclosure may be integrated in a processing module, or these units may be physically present alone, or two or more units are integrated in a processing module. The integrated modules may be implemented in hardware or in software functional modules. When the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, the integrated module may be stored in a computer-readable storage medium.
The storage medium may be a read-only memory, a magnetic disk, a Compact Disc (CD), or the like. The storage medium may be transitory or non-transitory.
Although embodiments of the present disclosure have been shown and described, it will be understood by those skilled in the art that these embodiments are illustrative and not to be construed as limiting the present disclosure, and that changes, modifications, substitutions, and alterations may be made in the embodiments without departing from the scope of the disclosure.

Claims (20)

1. A method for bokeh processing, comprising:
acquiring digital single-lens reflex (DSLR) camera parameters including a focal length and an F value;
acquiring an image, a focusing distance and a depth map corresponding to the image;
obtaining a bokeh size of each pixel of the image based on the DSLR camera parameters, the focusing distance and the depth map, to generate a bokeh size map; and
performing the bokeh processing on the image based on the bokeh size map.
2. The method of claim 1, wherein the DSLR camera parameters are obtained through a user interface allowing a user to input the DSLR camera parameters.
3. The method of claim 1, wherein the DSLR camera parameters are obtained from preset parameters.
4. A method according to any one of claims 1 to 3, wherein the focusing distance is a representative value of depth values in an autofocus area.
5. The method of any one of claims 1 to 4, wherein the bokeh size is calculated by equation (1):
where C is the bokeh size, F is the focal length, A is the F value, D is the focusing distance, and d is the depth value of the corresponding pixel in the depth map.
6. The method according to any one of claims 1 to 5, wherein a bokeh size within the depth of field (DoF) is changed to a predetermined value smaller than the calculated bokeh size.
7. The method of claim 6, wherein the predetermined value is 0.
8. The method according to claim 6 or 7, wherein the DoF is calculated based on the focal length, the F value, the focus distance, and an allowable circle of confusion.
9. The method of claim 8, wherein the DoF is calculated by equations (2), (3) and (4):
DoF = T_f + T_r … (2)
where T_f is the front depth of field, T_r is the rear depth of field, δ is the allowable circle of confusion, F is the focal length, A is the F value, and D is the focusing distance.
10. The method according to any one of claims 1 to 9, wherein the bokeh processing is performed by applying a smoothing filter generated based on the bokeh size map to the image.
11. The method of claim 10, wherein the smoothing filter is a Gaussian filter having a standard deviation calculated based on the bokeh size.
12. The method of claim 11, wherein the standard deviation is calculated by equation (5):
where σ is the standard deviation, C is the bokeh size, and pp is the pixel pitch of the DSLR image sensor.
13. The method of claim 11 or 12, wherein the size of the Gaussian filter is determined based on the standard deviation.
14. An electronic device for image processing, comprising:
a first acquisition unit configured to acquire digital single-lens reflex (DSLR) camera parameters including a focal length and an F value;
a second acquisition unit configured to acquire an image, a focusing distance, and a depth map corresponding to the image;
an obtaining unit configured to obtain a bokeh size of each pixel of the image based on the DSLR camera parameters, the focusing distance, and the depth map, to generate a bokeh size map; and
an execution unit configured to execute bokeh processing on the image based on the bokeh size map.
15. The electronic device of claim 14, wherein the first acquisition unit acquires the DSLR camera parameters through a user interface allowing a user to input the DSLR camera parameters.
16. The electronic device of claim 14, wherein the first obtaining unit obtains the DSLR camera parameters from preset parameters.
17. The electronic device according to any one of claims 14 to 16, wherein the obtaining unit calculates the bokeh size and changes the bokeh size within the depth of field (DoF) to a predetermined value smaller than the calculated bokeh size.
18. The electronic device of any of claims 14 to 17, wherein the execution unit is configured to perform the bokeh processing by applying a smoothing filter generated based on the bokeh size map to the image.
19. An electronic device for image processing comprising a processor and a memory for storing instructions, wherein the instructions, when executed by the processor, cause the processor to perform the method of any one of claims 1 to 13.
20. A computer readable storage medium having stored thereon a computer program, wherein the computer program is executed by a computer to implement the method of any of claims 1 to 13.
CN202180096376.5A 2021-04-08 2021-04-08 Bokeh processing method, electronic device and computer readable storage medium Pending CN117178286A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/086031 WO2022213332A1 (en) 2021-04-08 2021-04-08 Method for bokeh processing, electronic device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN117178286A (en) 2023-12-05

Family

ID=83544980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180096376.5A Pending CN117178286A (en) 2021-04-08 2021-04-08 Bokeh processing method, electronic device and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN117178286A (en)
WO (1) WO2022213332A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2012258467A1 (en) * 2012-12-03 2014-06-19 Canon Kabushiki Kaisha Bokeh amplification
CN103973962B (en) * 2013-02-06 2017-09-01 聚晶半导体股份有限公司 Image processing method and image collecting device
US9087405B2 (en) * 2013-12-16 2015-07-21 Google Inc. Depth map generation using bokeh detection
US10567641B1 (en) * 2015-01-19 2020-02-18 Devon Rueckner Gaze-directed photography
CN105989574A (en) * 2015-02-25 2016-10-05 光宝科技股份有限公司 Image processing device and image field-depth processing method

Also Published As

Publication number Publication date
WO2022213332A1 (en) 2022-10-13

Similar Documents

Publication Publication Date Title
US10997696B2 (en) Image processing method, apparatus and device
JP7145208B2 (en) Method and Apparatus and Storage Medium for Dual Camera Based Imaging
JP6911192B2 (en) Image processing methods, equipment and devices
CN107950018B (en) Image generation method and system, and computer readable medium
US10825146B2 (en) Method and device for image processing
US9998650B2 (en) Image processing apparatus and image pickup apparatus for adding blur in an image according to depth map
US9444991B2 (en) Robust layered light-field rendering
US9544574B2 (en) Selecting camera pairs for stereoscopic imaging
JP5230456B2 (en) Image processing apparatus and image processing method
US9992478B2 (en) Image processing apparatus, image pickup apparatus, image processing method, and non-transitory computer-readable storage medium for synthesizing images
US20110025830A1 (en) Methods, systems, and computer-readable storage media for generating stereoscopic content via depth map creation
US20160150215A1 (en) Method for performing multi-camera capturing control of an electronic device, and associated apparatus
KR20170005009A (en) Generation and use of a 3d radon image
KR102304784B1 (en) Double camera-based imaging method and apparatus
US9332195B2 (en) Image processing apparatus, imaging apparatus, and image processing method
US9619886B2 (en) Image processing apparatus, imaging apparatus, image processing method and program
CN110677621B (en) Camera calling method and device, storage medium and electronic equipment
US8922627B2 (en) Image processing device, image processing method and imaging device
US11282176B2 (en) Image refocusing
US9485407B2 (en) Method of capturing images and obtaining information of the images
US20150116546A1 (en) Image processing apparatus, imaging apparatus, and image processing method
US20160275657A1 (en) Imaging apparatus, image processing apparatus and method of processing image
US20230033956A1 (en) Estimating depth based on iris size
CN107547789B (en) Image acquisition device and method for photographing composition thereof
CN117178286A (en) Bokeh processing method, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination