CN107592455A - Shallow depth-of-field effect imaging method, device and electronic device - Google Patents
- Publication number
- CN107592455A CN107592455A CN201710819207.5A CN201710819207A CN107592455A CN 107592455 A CN107592455 A CN 107592455A CN 201710819207 A CN201710819207 A CN 201710819207A CN 107592455 A CN107592455 A CN 107592455A
- Authority
- CN
- China
- Prior art keywords
- imaging
- district
- information
- electromagnetic wave
- wave signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The present disclosure relates to a shallow depth-of-field effect imaging method, device and electronic device. The method includes: obtaining a reflected electromagnetic wave signal formed when imaging sub-regions in an image sensor reflect an electromagnetic wave signal, determining light information of the incident light from the reflected signal, and obtaining a first image of the scene to be photographed according to the light information; determining target depth-of-field information for the scene to be photographed; and adjusting the distribution density of the imaging sub-regions according to the target depth-of-field information, to obtain a shallow depth-of-field effect image corresponding to the first image. This increases the utilization of the imaging sub-regions and yields high image quality. The imaging sub-regions deform under the irradiation of incident light, and the reflected electromagnetic wave signal changes accordingly, so the light information of the incident light is easy to determine.
Description
Technical field
The present disclosure relates to the field of electronic technology, and in particular to a shallow depth-of-field effect imaging method, device and electronic device.
Background
The depth of field (DoF) generally refers to the range of object distances within which a camera lens can form a sharp image of the scene to be photographed. The region within this range is called in-focus, and the region outside it is called out-of-focus. In-focus regions always image sharply, while out-of-focus regions may image sharply or blurrily depending on the depth of field: with a deep depth of field, both in-focus and out-of-focus regions image sharply, and obtaining such images places very high demands on the lens; with a shallow depth of field, in-focus regions image sharply while out-of-focus regions image blurrily.
There are generally two methods of obtaining a shallow depth-of-field image. One is to adjust parameters such as the aperture size of the lens, the physical focal length, and the focusing distance between the lens and the subject, so that part of the captured image is sharp and part is blurred, for example a sharp foreground against a blurred background. The other is to process an already-captured picture in image-processing software with a blurring algorithm, so that part of the processed image is blurred, achieving an effect similar to lens bokeh.
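The software-based approach can be illustrated with a minimal sketch (an illustration only, not the method proposed below): blur the whole frame once, then composite the sharp original back in wherever a mask marks the in-focus subject.

```python
import numpy as np

def box_blur(image, k):
    """Naive k-by-k box blur with edge padding (separable filters or an
    FFT would be faster; clarity is the point here)."""
    pad = k // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def software_bokeh(image, focus_mask, k=7):
    """Blur the whole frame, then composite the sharp original back in
    where focus_mask is 1 (in focus)."""
    blurred = box_blur(image, k)
    return focus_mask * image + (1.0 - focus_mask) * blurred

# Toy frame: a bright in-focus square on a dark background.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
mask = np.zeros((32, 32))
mask[10:22, 10:22] = 1.0   # region to keep sharp

out = software_bokeh(img, mask)
```

A real implementation would vary the blur radius with per-pixel depth; a single global radius keeps the sketch short.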
Summary of the invention
To overcome the problems in the related art, the present disclosure provides a shallow depth-of-field effect imaging method, device and electronic device.
According to a first aspect of the present disclosure, a shallow depth-of-field effect imaging method is proposed, the method including:
obtaining a reflected electromagnetic wave signal, the reflected electromagnetic wave signal being formed by imaging sub-regions in an image sensor reflecting an electromagnetic wave signal; wherein the image sensor includes a number of imaging sub-regions, and the imaging sub-regions can deform under the irradiation of incident light;
determining light information of the incident light according to the reflected electromagnetic wave signal, and obtaining a first image of the scene to be photographed according to the light information;
obtaining target depth-of-field information for the scene to be photographed;
adjusting the distribution density of the imaging sub-regions according to the target depth-of-field information, to obtain a shallow depth-of-field effect image corresponding to the first image.
Optionally, determining the light information of the incident light according to the reflected electromagnetic wave signal includes: demodulating the reflected electromagnetic wave signal to obtain a first signal; and recovering the light information of the incident light according to the first signal.
Optionally, each imaging sub-region includes: a photosensitive layer, which senses the irradiation of incident light and deforms; and a reflective layer, which returns a corresponding reflected electromagnetic wave signal and deforms together with the photosensitive layer.
Optionally, determining the light information of the incident light according to the reflected electromagnetic wave signal includes: sending the reflected electromagnetic wave signal to a monitoring model, the training samples of which include data pairs, obtained in advance, of reflected electromagnetic wave signals and deformation parameters of the photosensitive layer; receiving the deformation parameters of the photosensitive layer output by the monitoring model; and determining the light information of the incident light according to the deformation parameters.
Optionally, at least two imaging sub-regions differ in deformation properties, and/or at least two imaging sub-regions differ in electromagnetic wave reflection characteristics.
Optionally, obtaining the target depth-of-field information for the scene to be photographed includes: obtaining object-point depth information for the scene to be photographed; obtaining focal-plane information for the scene to be photographed; and determining the target depth-of-field information according to the object-point depth information and the focal-plane information.
Optionally, the target depth-of-field information includes at least one of: depth information of at least some out-of-focus object points of the scene to be photographed and their positions relative to the focal plane; and out-of-focus blur-degree information.
Optionally, the out-of-focus blur degree includes circle-of-confusion distribution information of the imaging sub-regions outside the focal plane.
Optionally, adjusting the distribution density of the imaging sub-regions according to the target depth-of-field information includes: adjusting the distribution density of the imaging sub-regions in the direction perpendicular to the incident light according to the target depth-of-field information; and/or adjusting the distribution density of the imaging sub-regions in the direction parallel to the incident light according to the target depth-of-field information.
Optionally, adjusting the distribution density of the imaging sub-regions according to the target depth-of-field information includes: applying an external field to at least one imaging sub-region; and using the external field to exert a force on the imaging sub-region, to obtain the shallow depth-of-field effect image corresponding to the first image.
Optionally, the external field includes at least one of: a magnetic field, an electric field, an optical field.
According to a second aspect of the present disclosure, a shallow depth-of-field effect imaging device is proposed, the device including:
an acquiring unit, which obtains a reflected electromagnetic wave signal formed by imaging sub-regions in an image sensor reflecting an electromagnetic wave signal, wherein the image sensor includes a number of imaging sub-regions that can deform under the irradiation of incident light;
a processing unit, which determines light information of the incident light according to the reflected electromagnetic wave signal and obtains a first image of the scene to be photographed according to the light information;
a determining unit, which obtains target depth-of-field information for the scene to be photographed; and
an execution unit, which adjusts the distribution density of the imaging sub-regions according to the target depth-of-field information, to obtain a shallow depth-of-field effect image corresponding to the first image.
Optionally, the processing unit includes: a first processing subunit, which demodulates the reflected electromagnetic wave signal to obtain a first signal; and a second processing subunit, which recovers the light information of the incident light according to the first signal.
Optionally, each imaging sub-region includes: a photosensitive layer, which senses the irradiation of incident light and deforms; and a reflective layer, which returns a corresponding reflected electromagnetic wave signal and deforms together with the photosensitive layer.
Optionally, the processing unit includes: a sending subunit, which sends the reflected electromagnetic wave signal to a monitoring model, the training samples of which include data pairs, obtained in advance, of reflected electromagnetic wave signals and deformation parameters of the photosensitive layer; a receiving subunit, which receives the deformation parameters of the photosensitive layer output by the monitoring model; and a third processing subunit, which determines the light information of the incident light according to the deformation parameters.
Optionally, at least two imaging sub-regions differ in deformation properties, and/or at least two imaging sub-regions differ in electromagnetic wave reflection characteristics.
Optionally, the determining unit includes: a first determining subunit, which obtains object-point depth information for the scene to be photographed; a second determining subunit, which obtains focal-plane information for the scene to be photographed; and a third determining subunit, which determines the target depth-of-field information according to the object-point depth information and the focal-plane information.
Optionally, the target depth-of-field information includes at least one of: depth information of at least some out-of-focus object points of the scene to be photographed and their positions relative to the focal plane; and out-of-focus blur-degree information.
Optionally, the out-of-focus blur degree includes circle-of-confusion distribution information of the imaging sub-regions outside the focal plane.
Optionally, the execution unit includes: a first execution subunit, which adjusts the distribution density of the imaging sub-regions in the direction perpendicular to the incident light according to the target depth-of-field information; and/or a second execution subunit, which adjusts the distribution density of the imaging sub-regions in the direction parallel to the incident light according to the target depth-of-field information.
Optionally, the execution unit includes: a third execution subunit, which applies an external field to at least one imaging sub-region; and a fourth execution subunit, which uses the external field to exert a force on the imaging sub-region, to obtain the shallow depth-of-field effect image corresponding to the first image.
Optionally, the external field includes at least one of: a magnetic field, an electric field, an optical field.
According to a third aspect of the present disclosure, an electronic device is proposed, the electronic device including: a processor configured to implement the above shallow depth-of-field effect imaging method.
According to a fourth aspect of the present disclosure, a computer-readable storage medium is proposed, on which computer instructions are stored; when executed by a processor, the instructions implement the steps of the above shallow depth-of-field effect imaging method.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
As can be seen from the above embodiments, the present disclosure determines the light information of the incident light from the reflected electromagnetic wave signal formed when the imaging sub-regions in the image sensor reflect an electromagnetic wave signal. The imaging sub-regions deform under the irradiation of incident light and the reflected electromagnetic wave signal changes accordingly, so the light information of the incident light is easy to determine. In addition, a first image of the scene to be photographed is obtained according to the light information, and the distribution density of the imaging sub-regions is adjusted according to the target depth-of-field information, to obtain a shallow depth-of-field effect image corresponding to the first image.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings here are incorporated into and form part of this specification; they show embodiments consistent with the present disclosure and, together with the specification, serve to explain its principles.
Fig. 1a is a flowchart of a shallow depth-of-field effect imaging method according to an exemplary embodiment of the present disclosure;
Fig. 1b is a schematic diagram of the principle of capturing incident light according to an exemplary embodiment of the present disclosure;
Fig. 1c is a schematic diagram of the motion state of an imaging sub-region according to an exemplary embodiment of the present disclosure;
Fig. 2a is a flowchart of a shallow depth-of-field effect imaging method according to another exemplary embodiment of the present disclosure;
Fig. 2b is a deformation-pattern diagram of the reflected electromagnetic wave signal according to an exemplary embodiment of the present disclosure;
Fig. 2c is a deformation-pattern diagram of the reflected electromagnetic wave signal according to another exemplary embodiment of the present disclosure;
Fig. 2d is a deformation-pattern diagram of the reflected electromagnetic wave signal according to a further exemplary embodiment of the present disclosure;
Fig. 2e is a deformation-pattern diagram of the reflected electromagnetic wave signal according to yet another exemplary embodiment of the present disclosure;
Fig. 3a is a flowchart of a shallow depth-of-field effect imaging method according to a further exemplary embodiment of the present disclosure;
Fig. 3b is a schematic diagram of the principle of capturing incident light according to a further exemplary embodiment of the present disclosure;
Fig. 3c is a schematic diagram of the principle of capturing incident light according to yet another exemplary embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of a shallow depth-of-field effect imaging device according to an exemplary embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of a processing unit according to an exemplary embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of a processing unit according to another exemplary embodiment of the present disclosure;
Fig. 7 is a schematic structural diagram of a determining unit according to an exemplary embodiment of the present disclosure;
Fig. 8 is a schematic structural diagram of a determining unit according to another exemplary embodiment of the present disclosure;
Fig. 9 is a schematic structural diagram of an execution unit according to an exemplary embodiment of the present disclosure.
Detailed description of the embodiments
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. In the following description, when referring to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of devices and methods consistent with some aspects of the present application as detailed in the appended claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the present application. The singular forms "a", "said" and "the" used in the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present application to describe various information, such information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
In order to obtain an image with a shallow depth-of-field effect when shooting with an imaging device such as a camera, the present disclosure proposes a shallow depth-of-field effect imaging method as shown in Fig. 1a, which may include the following steps:
In step 101, a reflected electromagnetic wave signal is obtained.
The image sensor may include a number of imaging sub-regions that can deform under the irradiation of incident light; the reflected electromagnetic wave signal is formed by these imaging sub-regions reflecting an electromagnetic wave signal. Specifically, as shown in Fig. 1b, an imaging sub-region D may include a photosensitive layer D1 and a reflective layer D2. The photosensitive layer D1 receives the incident light H1 and deforms in accordance with the light information of H1. The reflective layer D2 deforms together with the photosensitive layer D1 and reflects a reflected electromagnetic wave signal H2 corresponding to the incident light H1. A receiver I receives the reflected electromagnetic wave signal H2 for processing.
It should be noted that at least two imaging sub-regions D differ in deformation properties, and/or at least two imaging sub-regions D differ in electromagnetic wave reflection characteristics, so that the electromagnetic wave signals reflected by different imaging sub-regions D can be located and distinguished. Here, "and/or" covers three cases: at least two imaging sub-regions D differ in deformation properties while their reflection characteristics are the same; at least two imaging sub-regions D differ in reflection characteristics while their deformation properties are the same; or at least two imaging sub-regions D differ in both. In all three cases, the electromagnetic wave signals reflected by the imaging sub-regions D can be located and distinguished.
In step 102, the light information of the incident light is determined according to the reflected electromagnetic wave signal, and a first image of the scene to be photographed is obtained according to the light information.
It should be noted that the incident light involved in the present disclosure may be converged by at least one lens for imaging, or converged by a mirror for imaging; the present disclosure is not limited in this respect.
The first image is the captured image formed by the image sensor according to the light information, and reflects the scene to be photographed without any depth-of-field adjustment. The light information may include at least one of: the intensity, color, and polarization direction of the incident light. In one embodiment, the image sensor includes a monitoring model trained on reflected electromagnetic wave signals and the corresponding deformation parameters of the photosensitive layer. To obtain the light information of the incident light, the reflected electromagnetic wave signal may be sent to the monitoring model, whose training samples include data pairs, obtained in advance, of reflected electromagnetic wave signals and deformation parameters of the photosensitive layer. The received deformation parameters can then be used to determine the light information of the incident light.
The reflection parameters of the reflective layer and the deformation parameters of the photosensitive layer are changes caused by the same incident light; they are mutually corresponding, synchronized data. Photosensitive layers made of different photo-deformable materials deform differently for the same incident light, but every photo-deformable material has a corresponding photo-deformation function from which the light information of the incident light can be calculated.
In another embodiment, the reflected electromagnetic wave signal may first be demodulated to obtain a first signal, and the light information of the incident light may then be recovered from the first signal.
The depth of field generally characterizes the range of object distances over which the scene to be photographed images sharply relative to the focal plane. In the image, the distribution density of imaging sub-regions corresponding to in-focus regions is greater than the pixel density corresponding to out-of-focus regions, so that the in-focus parts of the target image are imaged relatively sharply compared with the out-of-focus parts, visually presenting a shallow depth-of-field effect in which the in-focus region is sharp and the out-of-focus region is blurred.
In step 103, target depth-of-field information for the scene to be photographed is obtained.
In one embodiment, the target depth-of-field information may include object-point depth information and focal-plane information: the depth information of at least some out-of-focus object points of the scene and their positions relative to the focal plane are obtained, and the distribution density of the imaging sub-regions is adjusted accordingly using that relative position relationship.
In another embodiment, the target depth-of-field information may include out-of-focus blur-degree information; the out-of-focus blur degree includes circle-of-confusion distribution information of the imaging sub-regions outside the focal plane, and the distribution density of the imaging sub-regions is adjusted according to that circle-of-confusion distribution information.
It should be noted that in the above embodiments the target depth-of-field information may be obtained from the light information by classical estimation methods, or obtained, for example, through a depth sensor, radar, or a network connection; the present disclosure is not limited in this respect.
In step 104, the distribution density of the imaging sub-regions is adjusted according to the target depth-of-field information, to obtain a shallow depth-of-field effect image corresponding to the first image.
As shown in Fig. 1c, adjusting the distribution density of the imaging sub-regions may include: applying an external field E to at least one imaging sub-region D, and using a control unit F to control the external field E to exert a force on the imaging sub-region D, so that the imaging sub-region D moves in directions parallel and/or perpendicular to the incident light according to the target depth-of-field information. It should be noted that the external field may include at least one of a magnetic field, an electric field, and an optical field; the present disclosure is not limited in this respect.
In the image acquisition process of the embodiments of the present disclosure, every imaging sub-region of the image sensor participates in image acquisition. The imaging sub-regions deform under the irradiation of incident light, so the light information of the incident light is easy to determine from the change in the reflected electromagnetic wave signal. In addition, the distribution density of the imaging sub-regions of the image sensor is adjusted according to the imaging sub-region distribution density required by the image, which is in turn determined from the target depth-of-field information of the scene to be photographed. When the image of the scene is obtained with the adjusted image sensor, the sharpness of different regions of the image shows a distribution corresponding to the imaging sub-region distribution density: the parts that need to be presented sharply have more imaging sub-regions participating in image acquisition and are therefore sharper, while the parts that do not need to be presented sharply participate with relatively few pixels and are blurrier. The above method improves image acquisition efficiency.
Two embodiments are now proposed for the ways of obtaining the target depth-of-field information:
Fig. 2a is a flowchart of a shallow depth-of-field effect imaging method according to another exemplary embodiment of the present disclosure. As shown in Fig. 2a, the method may include the following steps:
In step 201, a reflected electromagnetic wave signal is obtained.
The image sensor may include a number of imaging sub-regions that can deform under the irradiation of incident light; the reflected electromagnetic wave signal is formed by these imaging sub-regions reflecting an electromagnetic wave signal. Specifically, an imaging sub-region may include a photosensitive layer and a reflective layer. The photosensitive layer receives the incident light and deforms in accordance with its light information. The reflective layer deforms together with the photosensitive layer and reflects a reflected electromagnetic wave signal corresponding to the incident light. A receiver receives the reflected electromagnetic wave signal for processing.
The image sensor includes a monitoring model trained on reflected electromagnetic wave signals and the corresponding deformation parameters of the photosensitive layer. Specifically, when incident light irradiates the imaging sub-regions, the reflected electromagnetic wave signal returned by each imaging sub-region and the corresponding deformation parameters of its photosensitive layer are collected to form training samples. In this way, a large number of training samples can be recorded for different polarization directions, intensities, colors, and so on of the incident light. Based on these training samples, a large number of regression problems are generated automatically, and the relationship between the training samples and the model's performance is learned, yielding a simple rule that maps reflected electromagnetic wave signals to the deformation parameters of the photosensitive layer.
In step 202, the reflected electromagnetic wave signal is sent to the monitoring model, whose training samples include data pairs, obtained in advance, of reflected electromagnetic wave signals and deformation parameters of the photosensitive layer.
In step 203, the light information of the incident light is determined according to the received deformation parameters, and a first image of the scene to be photographed is obtained according to the light information.
The first image is the captured image formed by the image sensor according to the light information, and reflects the scene to be photographed without any depth-of-field adjustment. In the above embodiment, to obtain the light information of the incident light, the reflected electromagnetic wave signal may be sent to the monitoring model, which outputs the corresponding deformation parameters of the photosensitive layer according to the reflected electromagnetic wave signal. The light information of the incident light is then determined from the received deformation parameters. The light information may include at least one of: the intensity, color, and polarization direction of the incident light. The deformation parameters of the reflective layer and the photosensitive layer are changes caused by the same incident light; they are mutually corresponding, synchronized data. Photosensitive layers made of different photo-deformable materials deform differently for the same incident light, but every photo-deformable material has a corresponding photo-deformation function from which the light information of the incident light can be calculated.
It should be noted that steps 202 and 203 may be replaced by: demodulating the reflected electromagnetic wave signal to obtain a first signal, and recovering the light information of the incident light according to the first signal.
The deformation of an imaging sub-region may include at least one of a change in shape, area, density, or surface smoothness. Such deformation changes the reflection characteristics of the reflective layer, which can be described by channel parameters or scattering parameters; the present disclosure is not limited in this respect. Because of the change in reflection characteristics, the spectrum and amplitude characteristics of the reflected electromagnetic wave signal G change; the reflected electromagnetic wave signal G is demodulated with classical signal demodulation methods to obtain a first signal, and the light information of the incident light is recovered from the demodulated first signal. When an imaging sub-region receives incident light, the reflected electromagnetic wave signal G may exhibit several common deformation patterns, as shown in Figs. 2b, 2c, 2d and 2e. After the reflective layer deforms under the irradiation of incident light, the reflected electromagnetic wave signal G it reflects carries the light information of the incident light; demodulating G yields a first signal containing the incident-light information, so the first signal can be used to recover the light information of the incident light.
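The classical amplitude demodulation referred to above can be sketched as envelope detection: rectify the received carrier, then low-pass filter it. The sample rate, carrier frequency, and sinusoidal "light information" below are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def envelope_demodulate(signal, window):
    """Classical AM demodulation: rectify, then low-pass with a
    moving average to recover the slowly varying envelope."""
    rectified = np.abs(signal)
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode='same')

fs = 10_000                                          # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 10 * t)    # "light information"
carrier = np.sin(2 * np.pi * 1_000 * t)              # reflected carrier G
received = envelope * carrier                        # amplitude-modulated G

recovered = envelope_demodulate(received, window=50)
```

Up to a known constant scale factor (the mean of the rectified carrier), the recovered signal tracks the original envelope, which is the sense in which the first signal carries the incident-light information.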
In step 204, object-point depth information for the scene to be photographed is obtained.
The object-point depth information may be obtained from the light information by classical depth-estimation methods, for example binocular stereo vision, shading information, zoom/focus, or defocus (circle-of-confusion) information; it may also be obtained by external means such as a depth sensor, radar, or a network connection. The present disclosure is not limited in this respect.
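Of the classical methods listed, binocular stereo vision is the simplest to sketch: under the pinhole model, depth follows from disparity as z = f·B/d. The symbols and numbers below are illustrative, not taken from the disclosure:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo: depth = f * B / d, with focal length f and
    disparity d in pixels and camera baseline B in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point seen 50 px apart by two cameras 6 cm apart, f = 1000 px:
z = stereo_depth(50.0, 1000.0, 0.06)   # about 1.2 m
```

Larger disparities mean closer points, so a dense disparity map directly yields the per-object-point depth information this step requires.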
In step 205, focal-plane information for the scene to be photographed is obtained.
In one embodiment, the focal plane may be determined according to region-of-interest (ROI) information. The region of interest may include, but is not limited to, one or more of: at least one region of the preview image of the scene to be photographed selected by the user on the image sensor, at least one region of the preview image the user gazes at, and a region of interest obtained by the imaging device automatically detecting the preview image. This scheme determines the focal plane of the scene to be photographed according to the region of interest, so that the determination of the focal plane better matches actual user needs and better satisfies personalized application demands.
In another embodiment, the focal plane of scene to be taken the photograph can determine according to the result of graphical analysis, such as:To described pre-
Image of looking at carries out recognition of face, the focal plane of face is defined as according to recognition result described in scene to be taken the photograph focal plane.Again
Such as:Object identification is moved to the preview image, according to recognition result by the focal plane of the respective area of mobile object
It is defined as the focal plane of the scene to be taken the photograph.The program can determine that Jiao of scene to be taken the photograph puts down according to the image analysis result of preview image
Face so that the determination of the focal plane of scene to be taken the photograph is more intelligent, improves efficiency and universality that the focal plane determines.
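As a minimal sketch of turning an ROI into a focal plane, the fragment below takes a depth map and a rectangular ROI and uses the median depth inside the ROI as the focal-plane depth. The rectangular ROI layout and the median statistic are illustrative assumptions; the disclosure leaves both the ROI source (user selection, gaze, automatic detection) and the aggregation method open.

```python
import numpy as np

def focal_plane_from_roi(depth_map, roi):
    """Take the focal-plane depth as the median depth inside a rectangular
    region of interest. roi = (row0, row1, col0, col1); both the rectangle
    layout and the median statistic are assumptions for illustration."""
    r0, r1, c0, c1 = roi
    return float(np.median(depth_map[r0:r1, c0:c1]))
```

The median makes the estimate robust to a few outlier depth values inside the region, for example at the boundary of a detected face.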
In step 206, the target depth-of-field information is determined according to the object point depth information and the focal plane information.
In the present embodiment, the target depth-of-field information may be the distance of the depth position of each object point of the scene to be captured relative to the position of the focal plane, where the object point depth positions and the focal plane position are obtained in steps 204 and 205.
In step 207, the distribution density of the imaging sub-regions is adjusted according to the target depth-of-field information, to obtain a shallow depth-of-field effect image corresponding to the first image.
In the present embodiment, the distribution density of the imaging sub-regions is determined according to the target depth-of-field information. Specifically, the target distribution density of an imaging sub-region corresponds to the distance between the position of the associated out-of-focus object point and the position of the focal plane in the scene to be captured: the distribution density of imaging sub-regions expressing object points closer to the focal plane is greater than that of imaging sub-regions expressing object points farther from the focal plane, so that imaging sharpness differs across distance ranges in the image. Object points nearer the focal plane are imaged more sharply, and object points farther from the focal plane are imaged more blurrily, which visually presents a shallow depth-of-field image effect in which in-focus content appears sharper and out-of-focus content appears blurrier.
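The inverse relation between distance-from-focal-plane and sub-region density described above can be sketched as follows. The linear mapping and the density bounds `d_max`/`d_min` are illustrative choices: the text only requires the density to fall as distance from the focal plane grows.

```python
import numpy as np

def sub_region_density(depth_map, focal_depth, d_max=1.0, d_min=0.1):
    """Map each object point's distance from the focal plane to an imaging
    sub-region distribution density: on the focal plane -> d_max, farthest
    away -> d_min. Linear mapping and bounds are illustrative only."""
    dist = np.abs(np.asarray(depth_map, dtype=float) - focal_depth)
    span = dist.max() - dist.min()
    if span == 0.0:                       # whole scene lies on the focal plane
        return np.full_like(dist, d_max)
    norm = (dist - dist.min()) / span     # 0 at focus, 1 at the farthest point
    return d_max - norm * (d_max - d_min)
```

Regions of the sensor whose density comes out higher sample the corresponding object points more finely, producing the sharper in-focus rendering described above.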
In the above embodiments, the manner of adjusting the distribution density of the imaging sub-regions may include: applying an external field to at least one imaging sub-region, and exerting a force on the imaging sub-region by means of the external field, so that the imaging sub-region translates and/or moves along the direction of the incident light according to the target depth-of-field information. It should be noted that the external field may include at least one of a magnetic field, an electric field and a light field; the present disclosure is not limited in this respect.
Fig. 3 a are a kind of flow charts of Zooming method of disclosure another exemplary embodiment.As shown in Figure 3 a, the zoom
Method may comprise steps of:
In step 301, reflection electromagnetic wave signal is obtained.
Wherein, imaging sensor can include some imaging sub-districts, and the imaging sub-district can issue in the irradiation of incident ray
Raw deformation, above-mentioned reflection electromagnetic wave signal are reflected to form by the imaging sub-district in imaging sensor to electromagnetic wave signal.Specifically,
Imaging sub-district can include photosensitive layer and reflecting layer.Photosensitive layer can be used for receiving incident ray, and the light with incident ray occurs
Deformation corresponding to line information.Deformation corresponding with photosensitive layer can occur for reflecting layer, and reflect reflection electricity corresponding with incident ray
Magnetostatic wave signal.Receiver receives above-mentioned reflection electromagnetic wave signal to be handled.
In step 302, the reflected electromagnetic wave signal is sent to a supervised model whose training samples include previously obtained data pairs of reflected electromagnetic wave signals and photosensitive-layer deformation parameters.
In step 303, the ray information of the incident light is determined according to the received deformation parameter, and a first image of the scene to be captured is obtained according to the ray information.
The first image is the captured image formed by the image sensor according to the ray information, and it presents the scene to be captured with an as-yet unadjusted depth-of-field effect. In the above embodiment, the image sensor includes a supervised model trained on reflected electromagnetic wave signals and the deformation parameters of the corresponding photosensitive layers. To obtain the ray information of the incident light, the reflected electromagnetic wave signal is sent to the supervised model, which outputs the deformation parameter of the corresponding photosensitive layer; the ray information of the incident light is then determined according to the received deformation parameter. The ray information may include at least one of the intensity, color and polarization direction of the incident light. The deformation parameters of the reflective layer and of the photosensitive layer are changes caused by the same incident light, so they correspond to each other and are synchronous. Although photosensitive layers made of different photodeformable materials deform differently under the same incident light, each photodeformable material has a corresponding photo-deformation function from which the ray information of the incident light can be calculated.
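A minimal stand-in for such a supervised model is sketched below: a linear least-squares fit from a single reflected-signal feature (RMS amplitude) to the photosensitive-layer deformation parameter, trained on synthetic (signal, deformation) pairs. The RMS feature, the linear model class, and the assumed "carrier amplitude = 2 x deformation" relation are all illustrative; the disclosure does not specify the model or the training procedure.

```python
import numpy as np

def rms(sig):
    """Root-mean-square amplitude of a sampled signal."""
    return float(np.sqrt(np.mean(np.square(sig))))

t = np.linspace(0.0, 1.0, 1000)

# training pairs: deformation parameter k -> carrier with amplitude 2*k
# (an assumed relation, standing in for real measured data pairs)
train_deform = np.linspace(0.1, 1.0, 10)
train_feats = np.array(
    [rms(2.0 * k * np.sin(2.0 * np.pi * 50.0 * t)) for k in train_deform]
)

# fit deformation ~ a * feature + b by least squares
a, b = np.polyfit(train_feats, train_deform, 1)

def predict_deformation(signal_g):
    """Output the deformation parameter for a received reflected signal G."""
    return a * rms(signal_g) + b
```

Given a new reflected signal, the model outputs the deformation parameter, from which the ray information would then be computed via the material's photo-deformation function.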
It should be noted that steps 302 and 303 may be replaced by: demodulating the reflected electromagnetic wave signal to obtain a first signal, and recovering the ray information of the incident light according to the first signal.
In step 304, object point depth information of the scene to be captured is obtained.
The object point depth information may be obtained from the ray information using classical depth estimation methods, for example binocular stereo vision, shading information, zoom focusing, or defocus circle-of-confusion information; it may also be obtained by external means such as a depth sensor, radar, or a network connection. The present disclosure is not limited in this respect.
In step 305, out-of-focus blur degree information of the scene to be captured is obtained.
The manner of obtaining the out-of-focus blur degree information is unrestricted: for example, it may be specified by the user, determined from the depth information of the scene to be captured, or predefined by the imaging device.
In one embodiment, the out-of-focus blur degree information includes: circle-of-confusion distribution information, over at least some imaging points of the image sensor, of at least some out-of-focus object points of the scene to be captured. In this case, the imaging sub-region distribution density can be determined from the circle-of-confusion distribution information of the imaging points. The specific way of determining the circle-of-confusion distribution information is unrestricted and may include: determining circle-of-confusion information of at least one out-of-focus object point of the scene to be captured at at least one imaging point of the image sensor, and then determining the circle-of-confusion information of at least some other imaging points from the distances between at least some other out-of-focus object points of the scene and the focal plane, together with the circle-of-confusion information determined for the at least one imaging point. In Fig. 3b, P is an object point on the focal plane and Q is an object point off the focal plane; the circle-of-confusion diameter of the imaging point of the out-of-focus object point Q on the image sensor can be determined by calculation. Specifically, from the determined circle-of-confusion diameter of the imaging point corresponding to an object point, the object distance of that object point, the known focal length of the lens, and the object distance of the focal plane of the scene to be captured, the virtual aperture value N desired by the user can be derived from the circle-of-confusion diameter formula. Then, given the object distances of one or more other object points in the scene to be captured, the circle-of-confusion diameters of their corresponding imaging points on the image sensor can be determined by the following circle-of-confusion diameter formula:

d = f² · |U2 − U1| / (N · U2 · (U1 − f))

In the above formula, f denotes the focal length of the lens, U1 denotes the object distance of the focal plane, U2 denotes the object distance of the object point whose circle of confusion is to be calculated, N denotes the virtual aperture value desired by the user, and d denotes the circle-of-confusion diameter, on the image sensor, of the imaging point corresponding to the object point at U2.
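The calculation can be sketched directly from the standard thin-lens circle-of-confusion relation, using the symbols defined in the text (f, N, U1, U2, d); units are arbitrary but must be consistent.

```python
def coc_diameter(f, n, u1, u2):
    """Circle-of-confusion diameter d for an object point at distance u2,
    when a lens of focal length f is focused on the plane at u1 with
    virtual aperture value n (standard thin-lens relation; consistent
    units, e.g. metres)."""
    return (f ** 2 / n) * abs(u2 - u1) / (u2 * (u1 - f))

# usage: 50 mm lens at virtual aperture f/2, focused at 2 m, object at 4 m
d = coc_diameter(f=0.05, n=2.0, u1=2.0, u2=4.0)
```

Solving the same relation for N given a measured d at one imaging point yields the user's desired virtual aperture value, as described above.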
In step 306, the distribution density of the imaging sub-regions is adjusted according to the out-of-focus blur degree information, to obtain a shallow depth-of-field effect image corresponding to the first image.
Once the circle-of-confusion diameter of the imaging point corresponding to each object point has been obtained, the imaging sub-region distribution density can be determined from the circle-of-confusion distribution information. When the ranges of different circles of confusion overlap to some extent, the imaging sub-region distribution density of the overlapping region can be determined as actually needed. As shown in Fig. 3c, the circles of confusion of three out-of-focus object points at three imaging points of the image sensor are denoted A, B and C, with the radii of the three circles increasing in that order. The imaging sub-region distribution density in a region with a smaller circle-of-confusion diameter is greater than that in a region with a larger circle-of-confusion diameter. Since the three circles of confusion partly overlap, the imaging sub-region distribution density of the different regions can in this situation be determined according to a rule, which may include but is not limited to a higher-density-priority rule: for example, the imaging sub-region distribution density of the intersection of A with B or C is the distribution density a corresponding to A, and that of the intersection of B and C is the distribution density b corresponding to B. A shallow depth-of-field image effect is thereby presented visually. This scheme makes the setting of the out-of-focus blur degree more flexible.
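The higher-density-priority rule can be sketched as a pointwise maximum over per-circle density maps, where each array holds one circle's imaging sub-region density inside its extent and zero elsewhere (a layout assumed here for illustration):

```python
import numpy as np

def overlap_density(density_maps):
    """Higher-density-priority rule: where blur-circle regions overlap,
    keep the largest candidate density at each point. Each input array
    covers the sensor plane, holding one circle's density inside its
    extent and 0 elsewhere (an assumed layout)."""
    return np.maximum.reduce(list(density_maps))
```

Because a smaller circle of confusion carries the higher density, taking the maximum in the overlap of A with B or C keeps A's density, matching the rule above.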
In the above embodiments, the manner of adjusting the distribution density of the imaging sub-regions may include: applying an external field to at least one imaging sub-region, and exerting a force on the imaging sub-region by means of the external field, so that the imaging sub-region translates and/or moves along the direction of the incident light according to the target depth-of-field information. It should be noted that the external field may include at least one of a magnetic field, an electric field and a light field; the present disclosure is not limited in this respect.
On the basis of the above embodiments, the present disclosure further proposes a shallow depth-of-field effect imaging apparatus applied to an image sensor. Fig. 4 is a structural schematic diagram of a shallow depth-of-field effect imaging apparatus according to an exemplary embodiment of the present disclosure. As shown in Fig. 4, the apparatus includes an acquiring unit 41, a processing unit 42, a determining unit 43 and an execution unit 44.
The acquiring unit 41 is configured to obtain a reflected electromagnetic wave signal, the reflected electromagnetic wave signal being formed by reflection of an electromagnetic wave signal by imaging sub-regions in the image sensor, wherein the image sensor includes a number of imaging sub-regions that can deform under irradiation of incident light.
The processing unit 42 is configured to determine the ray information of the incident light according to the reflected electromagnetic wave signal, and to obtain a first image of the scene to be captured according to the ray information.
The determining unit 43 is configured to obtain target depth-of-field information for the scene to be captured.
The execution unit 44 is configured to adjust the distribution density of the imaging sub-regions according to the target depth-of-field information, to obtain a shallow depth-of-field effect image corresponding to the first image.
The present disclosure further proposes a shallow depth-of-field effect imaging apparatus. Fig. 5 is a structural schematic diagram of a processing unit according to an exemplary embodiment of the present disclosure. As shown in Fig. 5, on the basis of the embodiment of Fig. 4, the processing unit 42 may include a transmission sub-unit 421, a receiving sub-unit 422 and a third processing sub-unit 423. Wherein:
The transmission sub-unit 421 is configured to send the reflected electromagnetic wave signal to a supervised model whose training samples include previously obtained data pairs of reflected electromagnetic wave signals and photosensitive-layer deformation parameters.
The receiving sub-unit 422 is configured to receive the deformation parameter of the photosensitive layer output by the supervised model.
The third processing sub-unit 423 is configured to determine the ray information of the incident light according to the deformation parameter, and to obtain a first image of the scene to be captured according to the ray information.
Fig. 6 is a structural schematic diagram of a processing unit according to another exemplary embodiment of the present disclosure. As shown in Fig. 6, on the basis of the embodiment of Fig. 4, the processing unit 42 may include a first processing sub-unit 424 and a second processing sub-unit 425. Wherein:
The first processing sub-unit 424 is configured to demodulate the reflected electromagnetic wave signal to obtain a first signal.
The second processing sub-unit 425 is configured to recover the ray information of the incident light according to the first signal, and to obtain a first image of the scene to be captured according to the ray information.
Fig. 7 is a structural schematic diagram of a determining unit according to an exemplary embodiment of the present disclosure. As shown in Fig. 7, on the basis of the embodiment of Fig. 4, the determining unit 43 may include a first determination sub-unit 431, a second determination sub-unit 432 and a third determination sub-unit 433. Wherein:
The first determination sub-unit 431 is configured to obtain object point depth information of the scene to be captured.
The second determination sub-unit 432 is configured to obtain focal plane information of the scene to be captured.
The third determination sub-unit 433 is configured to determine the target depth-of-field information according to the object point depth information and the focal plane information.
Fig. 8 is a structural schematic diagram of a determining unit according to another exemplary embodiment of the present disclosure. As shown in Fig. 8, on the basis of the embodiment of Fig. 4, the determining unit 43 may include a first determination sub-unit 431 and a fourth determination sub-unit 434. Wherein:
The first determination sub-unit 431 is configured to obtain object point depth information of the scene to be captured.
The fourth determination sub-unit 434 is configured to determine the out-of-focus blur degree information according to the ray information.
Fig. 9 is a structural schematic diagram of an execution unit according to an exemplary embodiment of the present disclosure. As shown in Fig. 9, on the basis of the embodiment of Fig. 4, the execution unit 44 may include a first execution sub-unit 441, a second execution sub-unit 442, a third execution sub-unit 443 and a fourth execution sub-unit 444. Wherein:
The first execution sub-unit 441 is configured to adjust, according to the target depth-of-field information, the distribution density of the imaging sub-regions in a direction perpendicular to the incident light.
The second execution sub-unit 442 is configured to adjust, according to the target depth-of-field information, the distribution density of the imaging sub-regions in a direction parallel to the incident light.
The third execution sub-unit 443 is configured to apply an external field to at least one imaging sub-region.
The fourth execution sub-unit 444 is configured to exert a force on the imaging sub-region by means of the external field, to obtain a shallow depth-of-field effect image corresponding to the first image.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
Since the apparatus embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant parts. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present disclosure. Those of ordinary skill in the art can understand and implement the solution without creative effort.
The present disclosure further proposes an electronic device, which may include a processor configured to implement the above shallow depth-of-field effect imaging method.
In an exemplary embodiment, the present disclosure also provides a non-transitory computer-readable storage medium including instructions, for example a memory including instructions which, when executed by the processor of the device, implement the above shallow depth-of-field effect imaging method of the present disclosure. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the technical solutions disclosed herein. This application is intended to cover any variations, uses or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise constructions described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (24)
1. A shallow depth-of-field effect imaging method, characterized by comprising: obtaining a reflected electromagnetic wave signal, the reflected electromagnetic wave signal being formed by reflection of an electromagnetic wave signal by imaging sub-regions in an image sensor, wherein the image sensor comprises a number of imaging sub-regions, and the imaging sub-regions can deform under irradiation of incident light; determining ray information of the incident light according to the reflected electromagnetic wave signal, and obtaining a first image of a scene to be captured according to the ray information; obtaining target depth-of-field information for the scene to be captured; and adjusting a distribution density of the imaging sub-regions according to the target depth-of-field information, to obtain a shallow depth-of-field effect image corresponding to the first image.
2. The shallow depth-of-field effect imaging method according to claim 1, wherein determining the ray information of the incident light according to the reflected electromagnetic wave signal comprises: demodulating the reflected electromagnetic wave signal to obtain a first signal; and recovering the ray information of the incident light according to the first signal.
3. The shallow depth-of-field effect imaging method according to claim 1, wherein each imaging sub-region comprises: a photosensitive layer, which senses irradiation of the incident light and deforms; and a reflective layer, which returns a corresponding reflected electromagnetic wave signal and can deform correspondingly with the photosensitive layer.
4. The shallow depth-of-field effect imaging method according to claim 3, wherein determining the ray information of the incident light according to the reflected electromagnetic wave signal comprises: sending the reflected electromagnetic wave signal to a supervised model whose training samples include previously obtained data pairs of reflected electromagnetic wave signals and photosensitive-layer deformation parameters; receiving the deformation parameter of the photosensitive layer output by the supervised model; and determining the ray information of the incident light according to the deformation parameter.
5. The shallow depth-of-field effect imaging method according to claim 1, wherein: the deformation attributes of at least two of the imaging sub-regions differ; and/or the electromagnetic-wave reflection characteristics of at least two of the imaging sub-regions differ.
6. The shallow depth-of-field effect imaging method according to claim 1, wherein obtaining the target depth-of-field information for the scene to be captured comprises: obtaining object point depth information of the scene to be captured; obtaining focal plane information of the scene to be captured; and determining the target depth-of-field information according to the object point depth information and the focal plane information.
7. The shallow depth-of-field effect imaging method according to claim 1, wherein the target depth-of-field information comprises at least one of: depth information of at least some out-of-focus object points of the scene to be captured and their position relative to the focal plane; and out-of-focus blur degree information.
8. The shallow depth-of-field effect imaging method according to claim 7, wherein the out-of-focus blur degree comprises circle-of-confusion distribution information of the imaging sub-regions off the focal plane.
9. The shallow depth-of-field effect imaging method according to claim 1, wherein adjusting the distribution density of the imaging sub-regions according to the target depth-of-field information comprises: adjusting, according to the target depth-of-field information, the distribution density of the imaging sub-regions in a direction perpendicular to the incident light; and/or adjusting, according to the target depth-of-field information, the distribution density of the imaging sub-regions in a direction parallel to the incident light.
10. The shallow depth-of-field effect imaging method according to claim 1, wherein adjusting the distribution density of the imaging sub-regions according to the target depth-of-field information comprises: applying an external field to at least one of the imaging sub-regions; and exerting a force on the imaging sub-region by means of the external field, to obtain a shallow depth-of-field effect image corresponding to the first image.
11. The shallow depth-of-field effect imaging method according to claim 10, wherein the external field comprises at least one of a magnetic field, an electric field and a light field.
12. A shallow depth-of-field effect imaging apparatus, characterized by comprising: an acquiring unit, which obtains a reflected electromagnetic wave signal, the reflected electromagnetic wave signal being formed by reflection of an electromagnetic wave signal by imaging sub-regions in an image sensor, wherein the image sensor comprises a number of imaging sub-regions, and the imaging sub-regions can deform under irradiation of incident light; a processing unit, which determines ray information of the incident light according to the reflected electromagnetic wave signal and obtains a first image of a scene to be captured according to the ray information; a determining unit, which obtains target depth-of-field information for the scene to be captured; and an execution unit, which adjusts a distribution density of the imaging sub-regions according to the target depth-of-field information, to obtain a shallow depth-of-field effect image corresponding to the first image.
13. The shallow depth-of-field effect imaging apparatus according to claim 12, wherein the processing unit comprises: a first processing sub-unit, which demodulates the reflected electromagnetic wave signal to obtain a first signal; and a second processing sub-unit, which recovers the ray information of the incident light according to the first signal.
14. The shallow depth-of-field effect imaging apparatus according to claim 12, wherein each imaging sub-region comprises: a photosensitive layer, which senses irradiation of the incident light and deforms; and a reflective layer, which returns a corresponding reflected electromagnetic wave signal and can deform correspondingly with the photosensitive layer.
15. The shallow depth-of-field effect imaging apparatus according to claim 14, wherein the processing unit comprises: a transmission sub-unit, which sends the reflected electromagnetic wave signal to a supervised model whose training samples include previously obtained data pairs of reflected electromagnetic wave signals and photosensitive-layer deformation parameters; a receiving sub-unit, which receives the deformation parameter of the photosensitive layer output by the supervised model; and a third processing sub-unit, which determines the ray information of the incident light according to the deformation parameter.
16. The shallow depth-of-field effect imaging apparatus according to claim 12, wherein: the deformation attributes of at least two of the imaging sub-regions differ; and/or the electromagnetic-wave reflection characteristics of at least two of the imaging sub-regions differ.
17. The shallow depth-of-field effect imaging apparatus according to claim 12, wherein the determining unit comprises: a first determination sub-unit, which obtains object point depth information of the scene to be captured; a second determination sub-unit, which obtains focal plane information of the scene to be captured; and a third determination sub-unit, which determines the target depth-of-field information according to the object point depth information and the focal plane information.
18. The shallow depth-of-field effect imaging apparatus according to claim 12, wherein the target depth-of-field information comprises at least one of: depth information of at least some out-of-focus object points of the scene to be captured and their position relative to the focal plane; and out-of-focus blur degree information.
19. The shallow depth-of-field effect imaging apparatus according to claim 18, wherein the out-of-focus blur degree comprises circle-of-confusion distribution information of the imaging sub-regions off the focal plane.
20. The shallow depth-of-field effect imaging apparatus according to claim 12, wherein the execution unit comprises: a first execution sub-unit, which adjusts, according to the target depth-of-field information, the distribution density of the imaging sub-regions in a direction perpendicular to the incident light; and/or a second execution sub-unit, which adjusts, according to the target depth-of-field information, the distribution density of the imaging sub-regions in a direction parallel to the incident light.
21. The shallow depth-of-field effect imaging apparatus according to claim 12, wherein the execution unit comprises: a third execution sub-unit, which applies an external field to at least one of the imaging sub-regions; and a fourth execution sub-unit, which exerts a force on the imaging sub-region by means of the external field, to obtain a shallow depth-of-field effect image corresponding to the first image.
22. The shallow depth-of-field effect imaging apparatus according to claim 21, wherein the external field comprises at least one of a magnetic field, an electric field and a light field.
23. An electronic device, characterized by comprising: a processor, the processor being configured to implement the shallow depth-of-field effect imaging method according to any one of claims 1-11.
24. A computer-readable storage medium having computer instructions stored thereon, characterized in that the instructions, when executed by a processor, implement the steps of the shallow depth-of-field effect imaging method according to any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710819207.5A CN107592455B (en) | 2017-09-12 | 2017-09-12 | Shallow depth of field effect imaging method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107592455A true CN107592455A (en) | 2018-01-16 |
CN107592455B CN107592455B (en) | 2020-03-17 |
Family
ID=61050526
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710819207.5A Active CN107592455B (en) | 2017-09-12 | 2017-09-12 | Shallow depth of field effect imaging method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107592455B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108337434A (en) * | 2018-03-27 | 2018-07-27 | 中国人民解放军国防科技大学 | Out-of-focus virtual refocusing method for light field array camera |
CN111835968A (en) * | 2020-05-28 | 2020-10-27 | 北京迈格威科技有限公司 | Image definition restoration method and device and image shooting method and device |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104159038A (en) * | 2014-08-26 | 2014-11-19 | 北京智谷技术服务有限公司 | Method and device of imaging control of image with shallow depth of field effect as well as imaging equipment |
CN105472233A (en) * | 2014-09-09 | 2016-04-06 | 北京智谷技术服务有限公司 | Light field acquisition control method and device and light field acquisition equipment |
CN104243823A (en) * | 2014-09-15 | 2014-12-24 | 北京智谷技术服务有限公司 | Light field acquisition control method and device and light field acquisition device |
CN104469147A (en) * | 2014-11-20 | 2015-03-25 | 北京智谷技术服务有限公司 | Light field collection control method and device and light field collection equipment |
CN106161910A (en) * | 2015-03-24 | 2016-11-23 | 北京智谷睿拓技术服务有限公司 | Image formation control method and device, imaging device |
CN106161912A (en) * | 2015-03-24 | 2016-11-23 | 北京智谷睿拓技术服务有限公司 | Focusing method and device, capture apparatus |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108337434A (en) * | 2018-03-27 | 2018-07-27 | 中国人民解放军国防科技大学 | Out-of-focus virtual refocusing method for light field array camera |
CN108337434B (en) * | 2018-03-27 | 2020-05-22 | 中国人民解放军国防科技大学 | Out-of-focus virtual refocusing method for light field array camera |
CN111835968A (en) * | 2020-05-28 | 2020-10-27 | 北京迈格威科技有限公司 | Image definition restoration method and device and image shooting method and device |
CN111835968B (en) * | 2020-05-28 | 2022-02-08 | 北京迈格威科技有限公司 | Image definition restoration method and device and image shooting method and device |
Also Published As
Publication number | Publication date |
---|---|
CN107592455B (en) | 2020-03-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10269130B2 (en) | Methods and apparatus for control of light field capture object distance adjustment range via adjusting bending degree of sensor imaging zone | |
CN105659580B (en) | Automatic focusing method, device and electronic equipment | |
CN102812496B (en) | Blur function modeling for depth of field rendering | |
US20210227132A1 (en) | Method for tracking target in panoramic video, and panoramic camera | |
CN107452031B (en) | Virtual ray tracking method and light field dynamic refocusing display system | |
CN101713902A (en) | Fast camera auto-focus | |
WO2012104759A1 (en) | Method of recording an image and obtaining 3d information from the image, camera system | |
US9253415B2 (en) | Simulating tracking shots from image sequences | |
CN110488481A (en) | Microscope focusing method, microscope and related device | |
CN113079325B (en) | Method, apparatus, medium, and device for imaging billions of pixels under dim light conditions | |
CN109451240B (en) | Focusing method, focusing device, computer equipment and readable storage medium | |
JP7378219B2 (en) | Imaging device, image processing device, control method, and program | |
CN106204554A (en) | Depth of field information acquisition method based on multi-focus images, system and camera terminal | |
CN107592455A (en) | Shallow depth of field effect imaging method, device and electronic equipment | |
CN106469435B (en) | Image processing method, device and equipment | |
CN101557469B (en) | Image processing device and image processing method | |
US9995905B2 (en) | Method for creating a camera capture effect from user space in a camera capture system | |
CN111260687B (en) | Aerial video target tracking method based on semantic perception network and correlation filtering | |
Xue | Blind image deblurring: a review | |
CN105467741A (en) | Panoramic shooting method and terminal | |
CN110211155A (en) | Method for tracking target and relevant apparatus | |
Zhao et al. | Image aesthetics enhancement using composition-based saliency detection | |
CN114007056A (en) | Method and device for generating three-dimensional panoramic image | |
CN107682597B (en) | Imaging method, imaging device and electronic equipment | |
CN107483828B (en) | Zooming method, zooming device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||