CN107592455B - Shallow depth of field effect imaging method and device and electronic equipment

Info

Publication number: CN107592455B (granted publication of application CN201710819207.5A; first published as CN107592455A)
Authority: CN (China)
Legal status: Active
Prior art keywords: information, imaging, electromagnetic wave, depth, field
Inventor: 杜琳
Original and current assignee: Beijing Xiaomi Mobile Software Co Ltd
Language: Chinese (zh)

Landscapes

  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The disclosure relates to a shallow depth of field effect imaging method and device and electronic equipment. The method comprises: acquiring light ray information from reflected electromagnetic wave signals formed when imaging sub-regions in an image sensor reflect electromagnetic wave signals, and obtaining a first image of a scene to be photographed according to the light ray information; determining target depth of field information for the scene to be photographed; and adjusting the distribution density of the imaging sub-regions according to the target depth of field information to obtain a shallow depth of field effect image corresponding to the first image. The utilization rate of the imaging sub-regions is increased and the imaging quality is high. The imaging sub-regions deform under the irradiation of incident light, and the reflected electromagnetic wave signal changes with the deformation, which makes the light information of the incident light easy to determine.

Description

Shallow depth of field effect imaging method and device and electronic equipment
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a shallow depth-of-field effect imaging method and apparatus, and an electronic device.
Background
Depth of field (DoF) generally refers to the object distance range within which a camera lens forms a sharp image of a scene to be photographed. The area within this object distance range is called in-focus, and the area outside it is called out-of-focus. The in-focus area is imaged sharply, while the out-of-focus area may be imaged sharply or blurrily depending on whether the depth of field is deep or shallow. For example, with a deep depth of field, both the in-focus and out-of-focus areas are imaged sharply, which places high demands on the camera lens; with a shallow depth of field, the in-focus area is imaged sharply and the out-of-focus area is blurred.
There are generally two methods for acquiring an image with a shallow depth of field effect. One is to make the captured image locally sharp and locally blurred, for example with a sharp foreground and a blurred background, by adjusting parameters such as the aperture size of the camera lens, the physical focal length, and the focusing distance between the lens and the subject. The other is to process the captured picture with image processing software using a blurring algorithm, so that the processed image is locally blurred, producing an effect similar to lens blur.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a shallow depth of field effect imaging method and apparatus, and an electronic device.
According to a first aspect of the present disclosure, a shallow depth of field effect imaging method is provided, the method including:
acquiring a reflected electromagnetic wave signal, wherein the reflected electromagnetic wave signal is formed by the reflection of an electromagnetic wave signal by an imaging subarea in an image sensor; the image sensor comprises a plurality of imaging sub-regions, wherein the imaging sub-regions can deform under the irradiation of incident light;
determining light ray information of the incident light rays according to the reflected electromagnetic wave signals, and obtaining a first image of a scene to be shot according to the light ray information;
acquiring target depth of field information for a scene to be shot;
and adjusting the distribution density of the imaging sub-area according to the target depth of field information to obtain a shallow depth of field effect image corresponding to the first image.
Optionally, determining the light information of the incident light according to the reflected electromagnetic wave signal includes:
demodulating the reflected electromagnetic wave signal to obtain a first signal;
and recovering the light ray information of the incident light ray according to the first signal.
Optionally, the imaging sub-area comprises:
a photosensitive layer which senses the irradiation of incident light and deforms;
and the reflecting layer returns corresponding reflected electromagnetic wave signals and can deform corresponding to the photosensitive layer.
Optionally, determining the light information of the incident light according to the reflected electromagnetic wave signal includes:
sending the reflected electromagnetic wave signal to a monitoring model, wherein a training sample of the monitoring model comprises a data pair between a pre-obtained reflected electromagnetic wave signal and a deformation parameter of a photosensitive layer;
receiving deformation parameters of the photosensitive layer output by the monitoring model;
and determining the light ray information of the incident light ray according to the deformation parameters.
Optionally, the deformation properties of at least two of the imaging sub-regions are different;
and/or the electromagnetic wave signal reflection characteristics of at least two of the imaging sub-regions are different.
Optionally, the obtaining of the target depth information for the scene to be shot includes:
acquiring object point depth information of a scene to be shot;
acquiring focal plane information aiming at the scene to be shot;
and determining the depth of field information of the target according to the depth information of the object point and the focal plane information.
Optionally, the target depth information includes at least one of:
relative position information between depth information of at least some out-of-focus object points of the scene to be photographed and the focal plane;
out-of-focus blur degree information.
Optionally, the out-of-focus blur degree information includes circle-of-confusion distribution information of imaging sub-regions outside the focal plane.
Optionally, adjusting the distribution density of the imaging sub-area according to the target depth information includes:
adjusting the distribution density of the imaging sub-area in the direction perpendicular to the incident light according to the target depth of field information;
and/or adjusting the distribution density of the imaging sub-area in the direction parallel to the incident light according to the target depth information.
Optionally, adjusting the distribution density of the imaging sub-area according to the target depth information includes:
applying an external field to at least one of said imaging sub-regions;
and applying acting force to the imaging sub-area by using the external field to obtain a shallow depth of field effect image corresponding to the first image.
Optionally, the external field includes: at least one of a magnetic field, an electric field, and an optical field.
According to a second aspect of the present disclosure, there is provided a shallow depth of field effect imaging apparatus including:
an acquisition unit that acquires a reflected electromagnetic wave signal formed by reflection of an electromagnetic wave signal by an imaging sub-area in an image sensor; the image sensor comprises a plurality of imaging sub-regions, wherein the imaging sub-regions can deform under the irradiation of incident light;
the processing unit is used for determining light ray information of the incident light rays according to the reflected electromagnetic wave signals and acquiring a first image of a scene to be shot according to the light ray information;
a determining unit, configured to acquire target depth of field information for the scene to be photographed;
and the execution unit is used for adjusting the distribution density of the imaging sub-area according to the target depth of field information so as to obtain a shallow depth of field effect image corresponding to the first image.
Optionally, the processing unit includes:
a first processing subunit, configured to demodulate the reflected electromagnetic wave signal to obtain a first signal;
and the second processing subunit recovers the light ray information of the incident light ray according to the first signal.
Optionally, the imaging sub-area comprises:
a photosensitive layer which senses the irradiation of incident light and deforms;
and the reflecting layer returns corresponding reflected electromagnetic wave signals and can deform corresponding to the photosensitive layer.
Optionally, the processing unit includes:
the transmitting subunit is used for transmitting the reflected electromagnetic wave signal to a monitoring model, and a training sample of the monitoring model comprises a data pair between a pre-obtained reflected electromagnetic wave signal and a deformation parameter of the photosensitive layer;
the receiving subunit is used for receiving the deformation parameters of the photosensitive layer output by the monitoring model;
and the third processing subunit determines the light ray information of the incident light ray according to the deformation parameter.
Optionally, the deformation properties of at least two of the imaging sub-regions are different;
and/or the electromagnetic wave signal reflection characteristics of at least two of the imaging sub-regions are different.
Optionally, the determining unit includes:
the first determining subunit acquires object point depth information of a scene to be shot;
the second determining subunit acquires focal plane information aiming at the scene to be shot;
and the third determining subunit determines the target depth-of-field information according to the object point depth information and the focal plane information.
Optionally, the target depth information includes at least one of:
relative position information between depth information of at least some out-of-focus object points of the scene to be photographed and the focal plane;
out-of-focus blur degree information.
Optionally, the out-of-focus blur degree information includes circle-of-confusion distribution information of imaging sub-regions outside the focal plane.
Optionally, the execution unit includes:
the first execution subunit adjusts the distribution density of the imaging sub-regions in the direction perpendicular to the incident light according to the target depth of field information;
and/or the second execution subunit adjusts the distribution density of the imaging sub-area in the direction parallel to the incident light according to the target depth information.
Optionally, the execution unit includes:
a third execution subunit for applying an external field to at least one of the imaging sub-regions;
and the fourth execution subunit applies acting force to the imaging subarea by using the external field to obtain a shallow depth effect image corresponding to the first image.
Optionally, the external field includes: at least one of a magnetic field, an electric field, and an optical field.
According to a third aspect of the present disclosure, an electronic device is provided, the electronic device comprising:
a processor configured to implement the shallow depth of field effect imaging method described above.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the shallow depth of field effect imaging method described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the embodiments, the present disclosure determines the light information of the incident light by obtaining the reflected electromagnetic wave signal formed by the reflection of the electromagnetic wave signal by the imaging sub-area in the image sensor. The imaging sub-area can deform under the irradiation of incident light, and the reflected electromagnetic wave signal changes along with the deformation, so that the light information of the incident light is convenient to determine. In addition, a first image of a scene to be shot is obtained according to the light information, and the distribution density of the imaging sub-area can be adjusted according to the target depth of field information to obtain a shallow depth of field effect image corresponding to the first image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1a is a flow chart of a shallow depth of field effect imaging method according to an exemplary embodiment of the disclosure;
FIG. 1b is a schematic diagram of an exemplary embodiment of the present disclosure for capturing incident light;
FIG. 1c is a schematic illustration of a state of motion of an imaging sub-region of an exemplary embodiment of the present disclosure;
fig. 2a is a flowchart of a shallow depth of field effect imaging method according to another exemplary embodiment of the present disclosure;
FIG. 2b is a deformation mode diagram of a reflected electromagnetic wave signal according to an exemplary embodiment of the present disclosure;
FIG. 2c is a deformation mode diagram of a reflected electromagnetic wave signal according to another exemplary embodiment of the present disclosure;
FIG. 2d is a deformation mode diagram of a reflected electromagnetic wave signal according to yet another exemplary embodiment of the present disclosure;
FIG. 2e is a deformation mode diagram of a reflected electromagnetic wave signal according to yet another exemplary embodiment of the present disclosure;
fig. 3a is a flowchart of a shallow depth of field effect imaging method according to yet another exemplary embodiment of the present disclosure;
FIG. 3b is a schematic diagram of the operation of capturing incident light according to yet another exemplary embodiment of the present disclosure;
FIG. 3c is a schematic diagram of an operation of capturing incident light according to yet another exemplary embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a shallow depth of field effect imaging apparatus according to an exemplary embodiment of the disclosure;
FIG. 5 is a schematic diagram of a processing unit in an exemplary embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a processing unit according to another exemplary embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a determination unit in an exemplary embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a determination unit according to another exemplary embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an execution unit according to an exemplary embodiment of the disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described below do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as recited in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may, depending on the context, be interpreted as "when", "upon", or "in response to determining".
In order to obtain an image with a shallow depth of field effect during shooting by using an image capturing device such as a camera, the present disclosure provides a shallow depth of field effect imaging method as shown in fig. 1a, where the shallow depth of field effect imaging method may include the following steps:
in step 101, a reflected electromagnetic wave signal is acquired.
The image sensor may include a plurality of imaging sub-regions; the imaging sub-regions deform under the irradiation of incident light, and the reflected electromagnetic wave signal is formed when the imaging sub-regions in the image sensor reflect an electromagnetic wave signal. Specifically, as shown in fig. 1b, an imaging sub-region D may include a photosensitive layer D1 and a reflective layer D2. The photosensitive layer D1 receives the incident light H1 and deforms in correspondence with the light information of H1. The reflective layer D2 deforms together with the photosensitive layer D1 and reflects the reflected electromagnetic wave signal H2 corresponding to the incident light H1. A receiver I receives the reflected electromagnetic wave signal H2 for processing.
It should be noted that the deformation properties of at least two imaging sub-regions D are different, and/or the electromagnetic wave signal reflection characteristics of at least two imaging sub-regions D are different, so that the electromagnetic wave signals reflected by different imaging sub-regions D can be located and distinguished. The "and/or" above covers three cases. In the first case, the deformation properties of at least two imaging sub-regions D are different while their electromagnetic wave reflection characteristics are the same. In the second case, the electromagnetic wave reflection characteristics of at least two imaging sub-regions D are different while their deformation properties are the same. In the third case, both the deformation properties and the electromagnetic wave reflection characteristics of at least two imaging sub-regions D are different. In all three cases, the electromagnetic wave signals reflected by the imaging sub-regions D can be located and distinguished.
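As a rough illustration (not part of the patent text), the sub-region structure described above can be modeled as follows. All class and field names are hypothetical, and the linear deformation and reflection responses are simplifying assumptions:

```python
from dataclasses import dataclass

@dataclass
class ImagingSubregion:
    """Hypothetical model of one deformable imaging sub-region."""
    position: tuple              # (x, y) location on the sensor plane
    deformation_gain: float      # material-specific photo-deformation response
    reflection_signature: float  # distinguishing EM reflection characteristic

    def deform(self, light_intensity: float) -> float:
        # Assume deformation is proportional to the incident intensity.
        return self.deformation_gain * light_intensity

    def reflect(self, probe_signal: float, deformation: float) -> float:
        # The reflected signal is modulated by the deformation and tagged by
        # the sub-region's unique reflection signature, so different
        # sub-regions can be located and distinguished at the receiver.
        return probe_signal * self.reflection_signature * (1.0 + deformation)
```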
In step 102, light ray information of the incident light ray is determined according to the reflected electromagnetic wave signal, and a first image of a scene to be shot is obtained according to the light ray information.
It should be noted that the incident light in the present disclosure is converged to form an image by at least one lens, or by a reflector, which the present disclosure does not limit.
The first image is a captured image formed by the image sensor according to the light information, and reflects the unadjusted depth of field of the scene to be photographed. The light information may include at least one of the intensity, color, and polarization direction of the incident light. In one embodiment, the image sensor comprises a monitoring model trained on reflected electromagnetic wave signals and the corresponding deformation parameters of the photosensitive layer. To obtain the light information of the incident light, the reflected electromagnetic wave signal may be sent to the monitoring model, whose training samples comprise data pairs between previously obtained reflected electromagnetic wave signals and photosensitive-layer deformation parameters. The deformation parameters received from the model may then be used to determine the light information of the incident light.
The reflection parameters of the reflective layer and the deformation parameters of the photosensitive layer both change in response to the same incident light; they correspond to each other and change synchronously. Because photosensitive layers made of different photo-deformable materials respond to incident light with different deformation parameters, each photo-deformable material has its own photo-deformation function, from which the light information of the incident light can be calculated.
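A minimal sketch of this calculation, assuming a linear photo-deformation function with a per-material calibrated gain (both the linearity and the names are assumptions, not the patent's specification):

```python
def light_intensity_from_deformation(deformation: float,
                                     material_gain: float) -> float:
    """Invert an assumed linear photo-deformation function.

    Each photo-deformable material is assumed to have its own calibrated
    gain; a real material response would typically be nonlinear and would
    be characterized experimentally.
    """
    return deformation / material_gain
```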
In another embodiment, the reflected electromagnetic wave signal may be demodulated to obtain a first signal, and then the light information of the incident light may be recovered according to the first signal.
The depth of field generally represents the object distance range, relative to the focal plane, within which the scene to be photographed is imaged sharply. The distribution density of imaging sub-regions corresponding to the in-focus area of the image is greater than that corresponding to the area outside the in-focus area, so the in-focus portion of the target image is imaged more sharply than the out-of-focus portion. This visually produces the effect of a shallow depth-of-field image: sharp within the in-focus area and blurred outside it.
In step 103, target depth information for a scene to be photographed is acquired.
In an embodiment, the target depth of field information may include object point depth information and focal plane information. The relative positional relationship between the depth information of at least some out-of-focus object points of the scene to be photographed and the focal plane is obtained, and the distribution density of the imaging sub-regions is adjusted accordingly using that relative positional relationship.
In another embodiment, the target depth of field information may include out-of-focus blur degree information, where the out-of-focus blur degree information includes circle-of-confusion distribution information of imaging sub-regions outside the focal plane; the distribution density of the imaging sub-regions is then adjusted using that circle-of-confusion distribution information.
It should be noted that, in the above embodiments, the target depth of field information may be obtained from the light information by classical estimation methods, or by external means such as a depth sensor, radar, or a network connection, which the present disclosure does not limit.
In step 104, the distribution density of the imaging sub-area is adjusted according to the target depth of field information to obtain a shallow depth of field effect image corresponding to the first image.
As shown in fig. 1c, the distribution density of the imaging sub-regions may be adjusted by applying an external field E to at least one imaging sub-region D and using a control part F to control the force that the external field E exerts on the imaging sub-region D, so that the imaging sub-region D moves in directions parallel and/or perpendicular to the incident light according to the target depth of field information. It should be noted that the external field may include at least one of a magnetic field, an electric field, and an optical field, which the present disclosure does not limit.
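The patent does not specify how the field strength is chosen; a hypothetical proportional-control sketch (the control law and all names are assumed for illustration) might look like this:

```python
def field_strength_for_density(current_density: float,
                               target_density: float,
                               gain: float = 1.0) -> float:
    """Map a sub-region density error to an external field strength.

    Hypothetical proportional controller: the disclosure only states that an
    external field (magnetic, electric, or optical) exerts a force that moves
    sub-regions parallel and/or perpendicular to the incident light.
    """
    return gain * (target_density - current_density)
```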
During image acquisition, every imaging sub-region of the image sensor participates in acquisition. The imaging sub-regions deform under the irradiation of incident light, and the light information of the incident light can be conveniently determined from the resulting change in the reflected electromagnetic wave signal. In addition, the distribution density of the imaging sub-regions of the image sensor is adjusted to match the distribution density determined from the target depth of field information of the scene to be photographed. When the image of the scene is acquired with the adjusted image sensor, the sharpness of different areas of the image follows the distribution density of the imaging sub-regions: portions that need to be rendered sharply have more imaging sub-regions participating in acquisition and are imaged with higher definition, while portions that the target depth of field information does not require to be sharp are acquired with relatively fewer imaging sub-regions and appear blurred.
Two embodiments of the method for acquiring the target depth of field information are described below:
fig. 2a is a flowchart of a shallow depth of field effect imaging method according to another exemplary embodiment of the present disclosure. As shown in fig. 2a, the shallow depth effect imaging method may include the steps of:
in step 201, a reflected electromagnetic wave signal is acquired.
The image sensor may include a plurality of imaging sub-regions, the imaging sub-regions may deform under the irradiation of incident light, and the reflected electromagnetic wave signal is formed by the reflection of the electromagnetic wave signal by the imaging sub-regions in the image sensor. In particular, the imaging sub-region may include a photosensitive layer and a reflective layer. The photosensitive layer can be used for receiving incident light and generating deformation corresponding to the light information of the incident light. The reflecting layer can generate deformation corresponding to the photosensitive layer and reflect the reflected electromagnetic wave signal corresponding to the incident light. The receiver receives the reflected electromagnetic wave signal for processing.
The image sensor trains a monitoring model according to the reflected electromagnetic wave signals and the corresponding deformation parameters of the photosensitive layer. Specifically, when incident light irradiates the imaging sub-regions, the reflected electromagnetic wave signal returned by each imaging sub-region and the photosensitive-layer deformation parameters corresponding to that signal are collected to form a training sample. In this way, a large number of training samples can be recorded for incident light of different polarization directions, intensities, colors, and so on. From these training samples, a regression problem is formulated and the relationship between the reflected electromagnetic wave signals and the deformation parameters of the photosensitive layer is learned, yielding a simple rule that maps reflected electromagnetic wave signals to photosensitive-layer deformation parameters.
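The patent does not fix a model class for the monitoring model; as one hedged possibility, the mapping from reflected-signal features to deformation parameters could be fit by ordinary least squares. The feature layout, dimensions, and synthetic data below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: reflected-signal feature vectors paired with
# measured photosensitive-layer deformation parameters, as described above.
X = rng.random((200, 4))                          # reflected EM signal features
w_true = np.array([0.5, -1.2, 0.8, 2.0])          # unknown ground-truth mapping
y = X @ w_true + 0.01 * rng.standard_normal(200)  # deformation parameters

# Fit a linear monitoring model by least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_deformation(signal_features: np.ndarray) -> float:
    """Deformation parameter predicted from reflected-signal features."""
    return float(signal_features @ w)
```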
In step 202, the reflected electromagnetic wave signal is sent to a monitoring model, and a training sample of the monitoring model includes a data pair between the pre-obtained reflected electromagnetic wave signal and a deformation parameter of the photosensitive layer.
In step 203, light ray information of the incident light ray is determined according to the received deformation parameter, and a first image of the scene to be shot is obtained according to the light ray information.
The first image is a captured image formed by the image sensor according to the light information, and reflects the unadjusted depth of field of the scene to be photographed. In this embodiment, to obtain the light information of the incident light, the reflected electromagnetic wave signal is sent to the monitoring model, which outputs the photosensitive-layer deformation parameters corresponding to that signal; the light information of the incident light is then determined from the received deformation parameters. The light information may include at least one of the intensity, color, and polarization direction of the incident light. The deformations of the reflective layer and of the photosensitive layer are both responses to the same incident light, and the two sets of parameters correspond to each other and change synchronously. Because photosensitive layers made of different photo-deformable materials respond to incident light with different deformation parameters, each photo-deformable material has its own photo-deformation function, from which the light information of the incident light can be calculated.
It should be noted that, step 202 and step 203 may be replaced by: demodulating the reflected electromagnetic wave signal to obtain a first signal; and recovering the light ray information of the incident light ray according to the first signal.
The deformation of an imaging sub-region may include at least one of a change in shape, area, density, and smoothness. The deformation changes the reflection characteristic of the reflective layer, which may be described by channel parameters or scattering parameters; the present disclosure is not limited in this respect. The change in reflection characteristic alters the spectrum and amplitude of the reflected electromagnetic wave signal G. The reflected electromagnetic wave signal G is demodulated by a classical signal demodulation method to obtain the first signal, and the light information of the incident light is recovered from the demodulated first signal. When an imaging sub-region receives incident light, the reflected electromagnetic wave signal G may exhibit several common deformation modes, as shown in fig. 2b, fig. 2c, fig. 2d, and fig. 2e. After the reflective layer is deformed by the irradiation of incident light, the reflected electromagnetic wave signal G carries the light information of the incident light; demodulating G yields the first signal containing that information, from which the light information of the incident light can be recovered.
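As a sketch of one classical demodulation route consistent with this description (amplitude demodulation via envelope detection; the patent itself does not name a specific method):

```python
import numpy as np
from scipy.signal import hilbert

def demodulate_reflected_signal(g: np.ndarray) -> np.ndarray:
    """Recover the 'first signal' as the amplitude envelope of G.

    Assumes the reflective layer's deformation amplitude-modulates the probe
    carrier, so envelope detection recovers the light-dependent component.
    """
    analytic = hilbert(g)    # analytic signal via the Hilbert transform
    return np.abs(analytic)  # envelope, i.e. the demodulated first signal
```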
In step 204, object point depth information for the scene to be photographed is acquired.
The object point depth information may be obtained from the light information by classical depth estimation methods, for example binocular stereo vision, shading information, focus/zoom variation, or defocus (circle-of-confusion) information, or by external means such as a depth sensor, radar, or a network connection, which the present disclosure does not limit.
In step 205, focal plane information for the scene to be photographed is acquired.
In one embodiment, the focal plane may be determined according to region of interest (ROI) information. The region of interest may include, but is not limited to, one or more of the following: at least one area of the image sensor's preview image selected by the user, at least one area of the preview image gazed at by the user, and a region of interest obtained by automatic detection of the preview image by the imaging device. Determining the focal plane of the scene to be photographed from the region of interest makes the choice of focal plane better match actual user needs and better serve personalized applications.
In another embodiment, the focal plane of the scene to be photographed may be determined from the result of image analysis. For example, face recognition is performed on the preview image, and the focal plane of the detected face is taken as the focal plane of the scene to be photographed. As another example, moving-object recognition is performed on the preview image, and the focal plane of the area corresponding to the moving object is taken as the focal plane of the scene to be photographed. Determining the focal plane from the image analysis result of the preview image makes the determination more intelligent and improves its efficiency and generality.
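A hedged sketch of the face-recognition route, using OpenCV's Haar cascade detector as one possible implementation (the patent does not prescribe a detector; names are illustrative):

```python
import cv2

def face_focal_region(preview_bgr):
    """Return the (x, y, w, h) of the first detected face, or None.

    The detected region would then be mapped to a focal plane for the
    scene to be photographed, as described above.
    """
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None
```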
In step 206, the target depth of field information is determined from the object point depth information and the focal plane information.
In the present embodiment, the target depth of field information may be the distance of each object point depth position of the scene to be photographed relative to the focal plane position, where the object point depth positions and the focal plane position have been obtained in steps 204 and 205.
In step 207, the distribution density of the imaging sub-regions is adjusted according to the target depth information to obtain a shallow depth effect image corresponding to the first image.
In this embodiment, the distribution density of the imaging sub-regions is determined according to the target depth of field information; specifically, the target distribution density of the imaging sub-regions corresponds to the distance from each out-of-focus object point in the scene to be photographed to the focal plane. The distribution density of the imaging sub-regions expressing object points closer to the focal plane is greater than that expressing object points farther from the focal plane, so that imaging sharpness differs across distance intervals in the image. Object points closer to the focal plane are imaged more sharply, and object points farther away are imaged more blurrily, visually producing the shallow depth-of-field effect in which object points near the focal plane appear sharp and distant ones appear blurred.
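The exact density-versus-distance mapping is not specified in the patent; one simple monotone mapping consistent with the description, with an assumed hyperbolic falloff, is sketched below:

```python
def subregion_density(object_depth: float,
                      focal_depth: float,
                      base_density: float = 1.0,
                      falloff: float = 0.5) -> float:
    """Density decreases monotonically with distance from the focal plane.

    Object points near the focal plane are sampled by more sub-regions
    (imaged sharply); far points are sampled by fewer (imaged blurrily).
    The hyperbolic form and constants are illustrative assumptions.
    """
    return base_density / (1.0 + falloff * abs(object_depth - focal_depth))
```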
In the above embodiment, the manner of adjusting the distribution density of the imaging sub-regions may include: applying an external field to at least one of the imaging sub-regions, and applying a force to the imaging sub-region by using the external field to move the imaging sub-region in a direction parallel and/or perpendicular to the incident light according to the target depth information. It should be noted that the external field may include: at least one of a magnetic field, an electric field, and an optical field, which is not limited by the present disclosure.
Fig. 3a is a flowchart of a shallow depth of field effect imaging method according to yet another exemplary embodiment of the present disclosure. As shown in fig. 3a, the method may include the following steps:
in step 301, a reflected electromagnetic wave signal is acquired.
The image sensor may include a plurality of imaging sub-regions, the imaging sub-regions may deform under the irradiation of incident light, and the reflected electromagnetic wave signal is formed by the reflection of the electromagnetic wave signal by the imaging sub-regions in the image sensor. In particular, the imaging sub-region may include a photosensitive layer and a reflective layer. The photosensitive layer can be used for receiving incident light and generating deformation corresponding to the light information of the incident light. The reflecting layer can generate deformation corresponding to the photosensitive layer and reflect the reflected electromagnetic wave signal corresponding to the incident light. The receiver receives the reflected electromagnetic wave signal for processing.
In step 302, the reflected electromagnetic wave signal is sent to a monitoring model, and a training sample of the monitoring model includes a data pair between the pre-obtained reflected electromagnetic wave signal and a deformation parameter of the photosensitive layer.
In step 303, light ray information of the incident light ray is determined according to the received deformation parameter, and a first image of the scene to be shot is obtained according to the light ray information.
The first image is a captured image formed by the image sensor according to the light information, and reflects the unadjusted depth of field of the scene to be photographed. In this embodiment, the image sensor includes a monitoring model trained on reflected electromagnetic wave signals and the corresponding photosensitive-layer deformation parameters. To obtain the light information of the incident light, the reflected electromagnetic wave signal is sent to the monitoring model, which outputs the photosensitive-layer deformation parameters corresponding to that signal; the light information of the incident light is then determined from the received deformation parameters. The light information may include at least one of the intensity, color, and polarization direction of the incident light. The deformations of the reflective layer and of the photosensitive layer are both responses to the same incident light, and the two sets of parameters correspond to each other and change synchronously. Because photosensitive layers made of different photo-deformable materials respond to incident light with different deformation parameters, each photo-deformable material has its own photo-deformation function, from which the light information of the incident light can be calculated.
It should be noted that, steps 302 and 303 may be replaced by: demodulating the reflected electromagnetic wave signal to obtain a first signal; and recovering the light ray information of the incident light ray according to the first signal.
In step 304, object point depth information for the scene to be photographed is acquired.
The object point depth information may be obtained from the light information by classical depth estimation methods, for example binocular stereo vision, shading information, focus/zoom variation, or defocus (circle-of-confusion) information, or by external means such as a depth sensor, radar, or a network connection, which the present disclosure does not limit.
In step 305, out-of-focus blur degree information for the scene to be photographed is acquired.
The manner of acquiring the out-of-focus blur degree information is not limited; for example, it may be specified by the user, determined from depth information of the scene to be photographed, or preset by the imaging device.
In one embodiment, the out-of-focus blur degree information includes circle-of-confusion distribution information, at one or more imaging points of the image sensor, of at least some out-of-focus object points of the scene to be photographed. In this case, the imaging sub-region distribution density may be determined from the circle-of-confusion distribution information of at least some of the imaging points. The specific way of determining the circle-of-confusion distribution information is not limited and may include: determining circle-of-confusion information of at least one out-of-focus object point of the scene at at least one imaging point of the image sensor, and then determining circle-of-confusion information of at least some other imaging points according to the distances of at least some other out-of-focus object points of the scene from the focal plane and the already-determined circle-of-confusion information. For example, in fig. 3b, P is an object point on the focal plane and Q is an object point outside the focal plane; the circle-of-confusion diameter at the imaging point of the out-of-focus object point Q on the image sensor can be determined by calculation. Specifically, from the determined circle-of-confusion diameter of an object point's imaging point, the object distance of that object point, the known focal length of the lens, and the object distance of the focal plane, the virtual aperture value N desired by the user can be determined from the circle-of-confusion diameter formula. Then, combined with the object distances of one or more other object points in the scene to be photographed, the circle-of-confusion diameters of those object points at their corresponding imaging points on the image sensor are determined according to the following formula:
$$ d = \frac{f^{2}\,\lvert U_2 - U_1 \rvert}{N \cdot U_2 \cdot (U_1 - f)} $$
In the above equation, f denotes the focal length of the lens, U1 the object distance of the focal plane, U2 the object distance of the object point whose circle of confusion is to be calculated, N the virtual aperture value desired by the user, and d the diameter of the circle of confusion at the imaging point of the image sensor corresponding to U2.
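A worked example of this formula (the numbers are illustrative, not from the patent):

```python
def coc_diameter(f: float, u1: float, u2: float, n: float) -> float:
    """Circle-of-confusion diameter per the formula above.

    f: lens focal length, u1: object distance of the focal plane,
    u2: object distance of the evaluated object point, n: virtual aperture
    value (all lengths in consistent units, e.g. millimetres).
    """
    return (f ** 2) * abs(u2 - u1) / (n * u2 * (u1 - f))

# A 50 mm lens focused at 2 m, object point at 4 m, virtual aperture f/1.8:
print(coc_diameter(f=50.0, u1=2000.0, u2=4000.0, n=1.8))  # ~0.356 mm
```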
In step 306, the distribution density of the imaging sub-area is adjusted according to the out-of-focus blur degree information to obtain a shallow depth effect image corresponding to the first image.
After the circle-of-confusion diameter at each object point's corresponding imaging point is obtained, the distribution density of the imaging sub-regions is determined according to the circle-of-confusion distribution information. When the ranges of different circles of confusion partially overlap, the distribution density of imaging sub-regions in the overlapping region may be determined according to actual needs. As shown in fig. 3c, the circles of confusion of three out-of-focus object points at three imaging points of the image sensor are denoted A, B, and C, and their radii increase in that order. The distribution density of imaging sub-regions in an area with a small circle-of-confusion diameter is greater than that in an area with a large circle-of-confusion diameter. Since the three circles of confusion partially overlap, the distribution density in the different regions may be determined according to a rule, which may include but is not limited to a high-density priority rule: for example, the intersection of A with B or C takes the imaging sub-region distribution density corresponding to A, and the intersection of B with C takes the distribution density corresponding to B, visually producing the shallow depth-of-field effect. This scheme makes the setting of the out-of-focus blur degree more flexible.
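A sketch of the high-density priority rule, rasterizing per-circle densities and letting the highest density win wherever circles of confusion overlap (the region representation is an assumption):

```python
import numpy as np

def resolve_density_map(coc_regions, shape):
    """Build a sub-region density map with a high-density priority rule.

    coc_regions: iterable of ((cx, cy), radius, density) tuples, one per
    circle of confusion. In overlaps (e.g. A with B or C above), the
    highest density wins.
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    density = np.zeros(shape)
    for (cx, cy), radius, rho in coc_regions:
        mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
        density[mask] = np.maximum(density[mask], rho)
    return density
```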
In the above embodiment, the manner of adjusting the distribution density of the imaging sub-regions may include: applying an external field to at least one of the imaging sub-regions, and applying a force to the imaging sub-region by using the external field to move the imaging sub-region in a direction parallel and/or perpendicular to the incident light according to the target depth information. It should be noted that the external field may include: at least one of a magnetic field, an electric field, and an optical field, which is not limited by the present disclosure.
According to the above embodiments, the present disclosure further provides a shallow depth of field effect imaging apparatus applied to an image sensor. Fig. 4 is a schematic structural diagram of a shallow depth effect imaging apparatus according to an exemplary embodiment of the present disclosure, and as shown in fig. 4, the shallow depth effect imaging apparatus includes an acquisition unit 41, a processing unit 42, a determination unit 43, and an execution unit 44.
The acquisition unit 41 is configured to acquire the reflected electromagnetic wave signal. The reflected electromagnetic wave signals are formed by reflecting electromagnetic wave signals by imaging sub-regions in the image sensor, the image sensor comprises a plurality of imaging sub-regions, and the imaging sub-regions can deform under the irradiation of incident light.
The processing unit 42 is configured to determine ray information of the incident ray according to the reflected electromagnetic wave signal, and obtain a first image of a scene to be shot according to the ray information.
The determination unit 43 is configured to acquire target depth information for a scene to be photographed.
The execution unit 44 is configured to adjust the distribution density of the imaging sub-regions according to the target depth information to obtain a shallow depth effect image corresponding to the first image.
The present disclosure further provides a shallow depth of field effect imaging apparatus, fig. 5 is a schematic structural diagram of a processing unit according to an exemplary embodiment of the present disclosure, and as shown in fig. 5, on the basis of the foregoing embodiment shown in fig. 4, the processing unit 42 may include a transmitting subunit 421, a receiving subunit 422, and a third processing subunit 423. Wherein:
the transmitting subunit 421 is configured to transmit the reflected electromagnetic wave signal to a monitoring model, a training sample of which comprises a data pair between a previously obtained reflected electromagnetic wave signal and a deformation parameter of the photosensitive layer.
The receiving subunit 422 is configured to receive a deformation parameter of the photosensitive layer output by the monitoring model.
The third processing subunit 423 is configured to determine ray information of the incident ray according to the deformation parameter, and obtain a first image of the scene to be captured according to the ray information.
Fig. 6 is a schematic structural diagram of a processing unit according to another exemplary embodiment of the present disclosure. As shown in fig. 6, on the basis of the foregoing embodiment shown in fig. 4, the processing unit 42 may include a first processing subunit 424 and a second processing subunit 425. Wherein:
the first processing subunit 424 is configured to demodulate the reflected electromagnetic wave signal to obtain a first signal.
The second processing subunit 425 is configured to recover, according to the first signal, ray information of the incident ray, and obtain a first image of the scene to be captured according to the ray information.
Fig. 7 is a schematic structural diagram of a determination unit according to an exemplary embodiment of the present disclosure. As shown in fig. 7, on the basis of the foregoing embodiment shown in fig. 4, the determining unit 43 may include a first determining subunit 431, a second determining subunit 432, and a third determining subunit 433. Wherein:
the first determining subunit 431 is configured to acquire object point depth information for the scene to be photographed.
The second determining subunit 432 is configured to acquire focal plane information for the scene to be photographed.
The third determining subunit 433 is configured to determine the target depth of field information from the object point depth information and the focal plane information.
Fig. 8 is a schematic structural diagram of a determination unit according to another exemplary embodiment of the present disclosure. As shown in fig. 8, on the basis of the aforementioned embodiment shown in fig. 4, the determining unit 43 may include a first determining subunit 431 and a fourth determining subunit 434. Wherein:
the first determining subunit 431 is configured to acquire object point depth information for the scene to be photographed.
The fourth determining subunit 434 is configured to determine out-of-focus blur degree information from the light information.
Fig. 9 is a schematic structural diagram of an execution unit according to an exemplary embodiment of the disclosure. As shown in fig. 9, on the basis of the aforementioned embodiment shown in fig. 4, the execution unit 44 may include a first execution subunit 441, a second execution subunit 442, a third execution subunit 443, and a fourth execution subunit 444. Wherein:
the first execution subunit 441 is configured to adjust the distribution density of the imaging sub-regions in the direction perpendicular to the incident light ray according to the target depth of field information.
The second performing subunit 442 is configured to adjust the distribution density of the imaging sub-regions in a direction parallel to the incident light ray according to the target depth of field information.
The third execution subunit 443 is configured to apply an external field to at least one of the imaging sub-regions.
The fourth execution subunit 444 is configured to apply a force to the imaging sub-region using the external field to obtain a shallow depth effect image corresponding to the first image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
The present disclosure further proposes an electronic device, which may comprise a processor configured to implement the above-mentioned shallow depth effect imaging method.
In an exemplary embodiment, the present disclosure also provides a non-transitory computer-readable storage medium comprising instructions, for example a memory including instructions that, when executed by a processor of the electronic device, implement the shallow depth of field effect imaging method of the present disclosure. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (22)

1. A method of shallow depth effect imaging, comprising:
acquiring a reflected electromagnetic wave signal, wherein the reflected electromagnetic wave signal is formed by the reflection of an electromagnetic wave signal by an imaging subarea in an image sensor; the image sensor comprises a plurality of imaging sub-regions, wherein the imaging sub-regions can deform under the irradiation of incident light;
determining light ray information of the incident light rays according to the reflected electromagnetic wave signals, and obtaining a first image of a scene to be shot according to the light ray information;
acquiring target depth of field information for a scene to be shot;
adjusting the distribution density of the imaging sub-area according to the target depth of field information to obtain a shallow depth of field effect image corresponding to the first image;
wherein the deformation properties of at least two imaging sub-regions D are different, and/or the electromagnetic wave signal reflection characteristics of at least two imaging sub-regions D are different, so that the electromagnetic wave signals reflected by different imaging sub-regions D can be located and distinguished.
2. The shallow depth of field effect imaging method of claim 1, wherein determining ray information of the incident ray from the reflected electromagnetic wave signal comprises:
demodulating the reflected electromagnetic wave signal to obtain a first signal;
and recovering the light ray information of the incident light ray according to the first signal.
3. The shallow depth effect imaging method according to claim 1, wherein the imaging sub-region includes:
a photosensitive layer which senses the irradiation of incident light and deforms;
and the reflecting layer returns corresponding reflected electromagnetic wave signals and can deform corresponding to the photosensitive layer.
4. The shallow depth of field effect imaging method of claim 3, wherein determining the light ray information of the incident light according to the reflected electromagnetic wave signal comprises:
sending the reflected electromagnetic wave signal to a monitoring model, wherein the training samples of the monitoring model comprise data pairs of pre-obtained reflected electromagnetic wave signals and corresponding deformation parameters of the photosensitive layer;
receiving the deformation parameter of the photosensitive layer output by the monitoring model;
and determining the light ray information of the incident light according to the deformation parameter.
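Claim 4 leaves the monitoring model's class open. As a sketch, a linear least-squares regressor fitted on (reflected-signal feature, deformation parameter) pairs already has the claimed train-then-query shape; the data below is synthetic and the feature extraction is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic training pairs: one feature vector per reflected signal,
# one scalar deformation parameter of the photosensitive layer.
X = rng.standard_normal((500, 8))
true_w = rng.standard_normal(8)
y = X @ true_w + 0.01 * rng.standard_normal(500)

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit the "monitoring model"

def monitoring_model(features):
    # Second step of the claim: return the predicted deformation parameter.
    return features @ w

deformation = monitoring_model(X[0])
# The third step then maps the deformation parameter back to ray
# information via the photosensitive layer's light-to-deformation response.
```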
5. The shallow depth of field effect imaging method according to claim 1, wherein acquiring target depth of field information for the scene to be shot comprises:
acquiring object point depth information of the scene to be shot;
acquiring focal plane information for the scene to be shot;
and determining the target depth of field information according to the object point depth information and the focal plane information.
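A sketch of the combination step in claim 5: with per-object-point depth and a chosen focal plane, the target depth of field information can be expressed as each point's signed offset from the focal plane plus an in-focus/out-of-focus label. The tolerance parameter is an assumption for illustration.

```python
import numpy as np

def target_dof_info(depth_map, focal_depth, tolerance):
    """Combine object point depth with focal plane information."""
    signed_offset = depth_map - focal_depth   # relative position to focal plane
    return {
        "signed_offset_m": signed_offset,
        "out_of_focus": np.abs(signed_offset) > tolerance,
    }

depth_map = np.array([[1.8, 2.0, 2.1],
                      [2.6, 3.0, 3.5]])
info = target_dof_info(depth_map, focal_depth=2.0, tolerance=0.2)
```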
6. The shallow depth of field effect imaging method according to claim 1, wherein the target depth of field information comprises at least one of:
relative position information between depth information of at least some out-of-focus object points of the scene to be shot and the focal plane;
out-of-focus blur degree information.
7. The shallow depth of field effect imaging method of claim 6, wherein the out-of-focus blur degree information comprises circle-of-confusion distribution information of the imaging sub-regions outside the focal plane.
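The circle of confusion in claim 7 can be quantified with the standard thin-lens relation; the helper below uses illustrative lens numbers, since the claim fixes none.

```python
def coc_diameter(obj_dist_m, focus_dist_m, focal_len_m, f_number):
    """Thin-lens blur-circle diameter for an out-of-focus object point."""
    aperture = focal_len_m / f_number
    return aperture * focal_len_m * abs(obj_dist_m - focus_dist_m) / (
        obj_dist_m * (focus_dist_m - focal_len_m))

# A 50 mm f/1.8 lens focused at 2 m: an object at 3 m blurs to ~0.24 mm.
print(coc_diameter(3.0, 2.0, 0.050, 1.8))
```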
8. The shallow depth of field effect imaging method of claim 1, wherein adjusting the distribution density of the imaging sub-regions according to the target depth of field information comprises:
adjusting the distribution density of the imaging sub-regions in the direction perpendicular to the incident light according to the target depth of field information;
and/or adjusting the distribution density of the imaging sub-regions in the direction parallel to the incident light according to the target depth of field information.
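One way to read the perpendicular-direction adjustment of claim 8 is as a per-tile target density derived from the desired blur: the larger the blur circle over a tile, the sparser its sub-regions may be. The inverse-to-blur law below is an assumed illustration, not a rule stated in the patent.

```python
import numpy as np

def target_density(coc_map, base_density, min_density):
    # Sparser sampling where the blur circle is large (out of focus),
    # full density where it is small (in focus).
    density = base_density / (1.0 + coc_map / (coc_map.mean() + 1e-12))
    return np.maximum(density, min_density)

coc_map = np.tile(np.abs(np.linspace(-1.0, 1.0, 8)), (8, 1))
density = target_density(coc_map, base_density=100.0, min_density=20.0)
```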
9. The shallow depth of field effect imaging method of claim 1, wherein adjusting the distribution density of the imaging sub-regions according to the target depth of field information comprises:
applying an external field to at least one of said imaging sub-regions;
and applying a force to the imaging sub-regions by means of the external field to obtain a shallow depth of field effect image corresponding to the first image.
10. The shallow depth of field effect imaging method of claim 9, wherein the external field comprises: at least one of a magnetic field, an electric field, and an optical field.
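A sketch of the actuation step in claims 9 and 10: convert the gap between the current and target sub-region densities into per-sub-region force setpoints for the external-field controller. The linear field-to-force coupling is an assumption standing in for the real electromagnetic actuation physics.

```python
import numpy as np

def actuation_force(current_density, target_density, coupling=1e-6):
    # Positive values push sub-regions apart (density goes down),
    # negative values pull them together (density goes up).
    return coupling * (current_density - target_density)

current = np.full((8, 8), 100.0)    # sub-regions per unit area, current state
target = np.full((8, 8), 60.0)      # wanted after the shallow-DoF adjustment
force = actuation_force(current, target)   # fed to the field controller
```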
11. A shallow depth of field effect imaging apparatus, comprising:
an acquisition unit configured to acquire a reflected electromagnetic wave signal, wherein the reflected electromagnetic wave signal is formed by the reflection of an electromagnetic wave signal by an imaging sub-region in an image sensor, the image sensor comprising a plurality of imaging sub-regions, each of which can deform under the irradiation of incident light;
a processing unit configured to determine light ray information of the incident light according to the reflected electromagnetic wave signal, and to obtain a first image of a scene to be shot according to the light ray information;
a determining unit configured to acquire target depth of field information for the scene to be shot;
and an execution unit configured to adjust the distribution density of the imaging sub-regions according to the target depth of field information, so as to obtain a shallow depth of field effect image corresponding to the first image;
wherein deformation properties of at least two of the imaging sub-regions differ, and/or electromagnetic wave signal reflection characteristics of at least two of the imaging sub-regions differ, so that the electromagnetic wave signals reflected by different imaging sub-regions can be located and distinguished.
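Structurally, claims 11 to 20 mirror claims 1 to 10 as four cooperating units. A minimal wiring sketch, with unit internals stubbed out and every name hypothetical:

```python
class ShallowDofImagingApparatus:
    """Claim-11 skeleton: acquisition, processing, determining, execution."""

    def __init__(self, acquisition_unit, processing_unit,
                 determining_unit, execution_unit):
        self.acquisition_unit = acquisition_unit   # reflected EM signal in
        self.processing_unit = processing_unit     # ray info -> first image
        self.determining_unit = determining_unit   # target DoF info
        self.execution_unit = execution_unit       # density adjustment

    def capture(self, scene):
        reflected = self.acquisition_unit(scene)
        first_image = self.processing_unit(reflected)
        dof_info = self.determining_unit(scene)
        return self.execution_unit(first_image, dof_info)
```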
12. The shallow depth of field effect imaging apparatus according to claim 11, wherein the processing unit comprises:
a first processing subunit configured to demodulate the reflected electromagnetic wave signal to obtain a first signal;
and a second processing subunit configured to recover the light ray information of the incident light according to the first signal.
13. The shallow depth of field effect imaging apparatus according to claim 11, wherein the imaging sub-region comprises:
a photosensitive layer which senses the irradiation of incident light and deforms;
and a reflective layer which returns a corresponding reflected electromagnetic wave signal and deforms correspondingly with the photosensitive layer.
14. The shallow depth of field effect imaging apparatus according to claim 13, wherein the processing unit comprises:
a transmitting subunit configured to transmit the reflected electromagnetic wave signal to a monitoring model, wherein the training samples of the monitoring model comprise data pairs of pre-obtained reflected electromagnetic wave signals and corresponding deformation parameters of the photosensitive layer;
a receiving subunit configured to receive the deformation parameter of the photosensitive layer output by the monitoring model;
and a third processing subunit configured to determine the light ray information of the incident light according to the deformation parameter.
15. The shallow depth of field effect imaging apparatus according to claim 11, wherein the determining unit comprises:
a first determining subunit configured to acquire object point depth information of the scene to be shot;
a second determining subunit configured to acquire focal plane information for the scene to be shot;
and a third determining subunit configured to determine the target depth of field information according to the object point depth information and the focal plane information.
16. The shallow depth of field effect imaging apparatus according to claim 11, wherein the target depth of field information comprises at least one of:
relative position information between depth information of at least some out-of-focus object points of the scene to be shot and the focal plane;
out-of-focus blur degree information.
17. The shallow depth of field effect imaging apparatus of claim 16, wherein the out-of-focus blur degree information comprises circle-of-confusion distribution information of the imaging sub-regions outside the focal plane.
18. The shallow depth of field effect imaging apparatus according to claim 11, wherein the execution unit comprises:
a first execution subunit configured to adjust the distribution density of the imaging sub-regions in the direction perpendicular to the incident light according to the target depth of field information;
and/or a second execution subunit configured to adjust the distribution density of the imaging sub-regions in the direction parallel to the incident light according to the target depth of field information.
19. The shallow depth of field effect imaging apparatus according to claim 11, wherein the execution unit comprises:
a third execution subunit configured to apply an external field to at least one of the imaging sub-regions;
and a fourth execution subunit configured to apply a force to the imaging sub-regions by means of the external field, so as to obtain a shallow depth of field effect image corresponding to the first image.
20. The shallow depth of field effect imaging apparatus according to claim 19, wherein the external field comprises: at least one of a magnetic field, an electric field, and an optical field.
21. An electronic device, comprising:
a processor configured to implement the shallow depth of field effect imaging method of any one of claims 1-10.
22. A computer readable storage medium having computer instructions stored thereon which, when executed by a processor, implement the steps of the shallow depth of field effect imaging method of any one of claims 1 to 10.
CN201710819207.5A 2017-09-12 2017-09-12 Shallow depth of field effect imaging method and device and electronic equipment Active CN107592455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710819207.5A CN107592455B (en) 2017-09-12 2017-09-12 Shallow depth of field effect imaging method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN107592455A CN107592455A (en) 2018-01-16
CN107592455B (en) 2020-03-17

Family

ID=61050526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710819207.5A Active CN107592455B (en) 2017-09-12 2017-09-12 Shallow depth of field effect imaging method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN107592455B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108337434B (en) * 2018-03-27 2020-05-22 中国人民解放军国防科技大学 Out-of-focus virtual refocusing method for light field array camera
CN111835968B (en) * 2020-05-28 2022-02-08 北京迈格威科技有限公司 Image definition restoration method and device and image shooting method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104159038A (en) * 2014-08-26 2014-11-19 Beijing Zhigu Tech Service Co Ltd Method and device of imaging control of image with shallow depth of field effect as well as imaging equipment
CN104469147A (en) * 2014-11-20 2015-03-25 Beijing Zhigu Tech Service Co Ltd Light field collection control method and device and light field collection equipment
CN106161910A (en) * 2015-03-24 2016-11-23 Beijing Zhigu Ruituo Tech Co Ltd Image formation control method and device, imaging device
CN106161912A (en) * 2015-03-24 2016-11-23 Beijing Zhigu Ruituo Tech Co Ltd Focusing method and device, capture apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105472233B (en) * 2014-09-09 2019-01-18 北京智谷技术服务有限公司 Optical field acquisition control method and device, optical field acquisition equipment
CN104243823B (en) * 2014-09-15 2018-02-13 北京智谷技术服务有限公司 Optical field acquisition control method and device, optical field acquisition equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant