CN110035218A - Image processing method, image processing apparatus and photographing device - Google Patents

Image processing method, image processing apparatus and photographing device (Download PDF)

Info

Publication number
CN110035218A
CN110035218A (application CN201810028792.1A)
Authority
CN
China
Prior art keywords
image
camera
depth of field
photographing device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810028792.1A
Other languages
Chinese (zh)
Other versions
CN110035218B (en)
Inventor
张磊
张熙
黄一宁
周蔚
胡昌启
李瑞华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201810028792.1A (granted as CN110035218B)
Priority to PCT/CN2018/113926 (published as WO2019137081A1)
Publication of CN110035218A
Application granted
Publication of CN110035218B
Legal status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224: Studio circuitry, devices or equipment related to virtual studio applications
    • H04N5/2226: Determination of depth image, e.g. for foreground/background separation

Abstract

The embodiments of this application disclose an image processing method, an image processing apparatus and a photographing device for improving user experience. The method of the embodiments includes: the photographing device controls, through a controller, a second camera to focus on the target position where a target object is located, the second camera having a second depth of field when focused on the target object; the photographing device determines a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold and the target object lies within the overlapping depth of field; the photographing device determines, according to the depth-of-field table of a first camera, a first position corresponding to the first depth of field, the first position being different from the target position; the photographing device controls, through the controller, the first camera to focus on the first position; the photographing device then obtains a first image and a second image; and the photographing device blurs the region other than the target object in the first image or the second image to obtain a target image.

Description

Image processing method, image processing apparatus and photographing device
Technical field
This application relates to the field of image application technologies, and in particular to an image processing method, an image processing apparatus and a photographing device.
Background art
In the imaging field, aperture size is an important indicator of an imaging lens. A large aperture not only increases image-plane illuminance and improves the image signal-to-noise ratio (SNR), but also produces a shallow depth of field, giving the captured image a bokeh effect in which the subject is sharp and the rest is blurred.
In thin and light consumer electronic products, a single lens cannot achieve a large-aperture bokeh effect because of size constraints. A common approach is to photograph the target object with a dual-lens parallel camera system: after focusing on the target object, the two images captured by the two lenses are acquired and converted into a common coordinate system; disparity is computed for the overlapping region of the two images; the distance from the photographed object to the cameras is calculated from the disparity, yielding a depth map of the photographed scene; and, according to the depth map, the image outside the plane of the target object is blurred, achieving the bokeh effect.
However, because the focal lengths of the two cameras are similar, the depths of field of the two images obtained when both cameras focus on and photograph the target object are also similar. That is, the two images show no obvious difference in how blurred the objects other than the target object (the foreground and background) appear. The depth map obtained from the two images is therefore imprecise, making it hard to segment the edge regions where the target object adjoins its foreground and background, or the hollow-out regions within it. As a result, the target object is often blurred, or regions other than the target object are left unblurred; the bokeh effect of the resulting photo is unsatisfactory, and the user experience is poor.
Summary of the invention
The embodiments of this application provide an image processing method, an image processing apparatus and a photographing device for capturing images with a better bokeh effect, thereby improving user experience.
In view of this, a first aspect of the embodiments of this application provides an image processing method applied to a photographing device. The photographing device includes a first camera, a second camera and a controller; the optical axes of the first camera and the second camera are parallel; the difference between the field of view of the first camera and the field of view of the second camera is less than a first threshold; and the difference between the focal length of the first camera and the focal length of the second camera is less than a second threshold. The method includes:

The photographing device drives the controller to move the second camera along its optical axis to focus on the target position where the target object is located; the second camera has a second depth of field when focused on the target position. The photographing device can determine a first depth of field according to the target position, that is, the depth of field the first camera needs to have, where the overlapping depth of field of the first depth of field and the second depth of field must be less than a third threshold and the target object must lie within the overlapping depth of field. The photographing device can then determine, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, that is, the position on which the first camera needs to focus, where the first position is different from the target position. The photographing device drives the controller to move the first camera along its optical axis to focus on the first position. The photographing device can then obtain the first image captured when the first camera is focused on the first position and the second image captured when the second camera is focused on the target position, and can blur the region other than the target object in the first image or the second image to obtain the target image.
In the embodiments of this application, the first camera and the second camera shoot with different depths of field, so the first image and the second image differ in how blurred the objects other than the target object (the foreground and background) appear. The target object can therefore be identified more accurately; in particular, the edge region where the target object adjoins its foreground and background and/or the hollow-out regions within it can be segmented effectively, and only the region other than the target object is blurred, yielding an image with a better bokeh effect and improving user experience.
With reference to the first aspect of the embodiments of this application, in a first implementation of the first aspect, the blurring by the photographing device of the region other than the target object in the first image or the second image to obtain the target image includes:

The photographing device computes the disparity information of the first image and the second image, and from the disparity information computes first depth information in the coordinate system of the first image. According to the first depth information, it can then determine the first region other than the target object in the first image, that is, distinguish the target object in the first image from the first region outside it, where the first region includes the edge region adjoining the target object in the first image and/or the hollow-out regions within the target object. The photographing device then blurs the first region in the first image to obtain the target image.
With reference to the first aspect of the embodiments of this application, in a second implementation of the first aspect, the blurring by the photographing device of the region other than the target object in the first image or the second image to obtain the target image includes:

The photographing device computes the disparity information of the first image and the second image, and from the disparity information computes second depth information in the coordinate system of the second image. According to the second depth information, it can then determine the second region other than the target object in the second image, that is, distinguish the target object in the second image from the second region outside it, where the second region includes the edge region adjoining the target object in the second image and/or the hollow-out regions within the target object. The photographing device then blurs the second region in the second image to obtain the target image.
With the solution provided by the embodiments of this application, the photographing device can select either the first image or the second image, compute the depth information of that image, determine the region other than the target object in it, and blur that region to obtain the target image. This provides multiple options for the embodiments of this application and enriches the realizability of the solution.
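The segmentation-and-blur step in both implementations can be sketched as follows. This is a minimal illustration, not the patent's implementation: it thresholds a depth map against the target object's depth and crudely flattens everything else, where a real device would apply a proper blur kernel; all names and values are hypothetical.

```python
def blur_background(image, depth, target_depth, tol, blur_value=0):
    """Keep pixels whose depth is within `tol` of the target object's depth;
    replace the rest (here: crudely flatten to `blur_value`).  A real
    implementation would apply e.g. a Gaussian blur instead of a constant.
    `image` and `depth` are equal-sized 2-D lists."""
    out = []
    for img_row, d_row in zip(image, depth):
        out.append([p if abs(d - target_depth) <= tol else blur_value
                    for p, d in zip(img_row, d_row)])
    return out

img   = [[10, 20, 30],
         [40, 50, 60]]
depth = [[1.0, 2.0, 2.1],
         [5.0, 2.0, 9.0]]
# Target object at depth 2.0; everything else is "blurred" to 0.
print(blur_background(img, depth, target_depth=2.0, tol=0.2))
# -> [[0, 20, 30], [0, 50, 0]]
```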
With reference to the first aspect of the embodiments of this application, or the first or second implementation of the first aspect, in a third implementation of the first aspect,
the fields of view of the first camera and the second camera are both greater than or equal to 60°.
The solution provided by the embodiments of this application ensures that the first camera and the second camera both have a sufficiently large field of view, so the images captured by the two cameras cover a relatively large area and the final target image can have sufficiently large coverage.
With reference to the first aspect of the embodiments of this application, or the first or second implementation of the first aspect, in a fourth implementation of the first aspect,
the minimum focus distances of the first camera and the second camera are both less than or equal to 20 cm.
The solution provided by the embodiments of this application ensures that both cameras can focus on scenes that are close enough, improving the practicability of the solution.
A second aspect of the embodiments of this application provides an image processing apparatus including a first camera and a second camera, where the optical axes of the first camera and the second camera are parallel, the difference between the field of view of the first camera and the field of view of the second camera is less than a first threshold, and the difference between the focal length of the first camera and the focal length of the second camera is less than a second threshold. The image processing apparatus further includes:
a control unit, configured to control the second camera to focus on the target position where the target object is located, where the second camera has a second depth of field when focused on the target object;

a first determination unit, configured to determine a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold and the target object lies within the overlapping depth of field;

a second determination unit, configured to determine, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, where the first position is different from the target position;

the control unit, further configured to control the first camera to focus on the first position;

an obtaining unit, configured to obtain a first image and a second image, where the first image is captured when the first camera is focused on the first position and the second image is captured when the second camera is focused on the target position; and

a blurring unit, configured to blur the region other than the target object in the first image or the second image to obtain a target image.
Optionally, the blurring unit includes:

a first computing module, configured to compute the disparity information of the first image and the second image;

a second computing module, configured to compute first depth information of the first image according to the disparity information;

a determining module, configured to determine, according to the first depth information, a first region other than the target object in the first image, where the first region includes the edge region adjoining the target object in the first image and/or the hollow-out regions within the target object; and

a blurring module, configured to blur the first region in the first image to obtain the target image.

Optionally, the second computing module may be further configured to compute second depth information of the second image according to the disparity information;

the determining module may be further configured to determine, according to the second depth information, a second region other than the target object in the second image, where the second region includes the edge region adjoining the target object in the second image and/or the hollow-out regions within the target object; and

the blurring module may be further configured to blur the second region in the second image to obtain the target image.
In the embodiments of this application, the control unit controls the second camera to focus on the target position where the target object is located, the second camera having a second depth of field. The first determination unit determines a first depth of field according to the target position, where the overlapping depth of field of the first and second depths of field is less than a third threshold and the target object lies within the overlapping depth of field. The second determination unit then determines, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, the first position being different from the target position, and the control unit controls the first camera to focus on the first position. The obtaining unit then obtains the first image captured when the first camera is focused on the first position and the second image captured when the second camera is focused on the target position, and the blurring unit blurs the region other than the target object in the first image or the second image to obtain the target image. It can be understood that because the first camera and the second camera shoot with different depths of field, the first image and the second image differ in how blurred the objects other than the target object (the foreground and background) appear. The target object can therefore be identified more accurately; in particular, the edge region where the target object adjoins its foreground and background and/or the hollow-out regions within it can be segmented effectively, and only the region other than the target object is blurred, yielding an image with a better bokeh effect and improving user experience.
A third aspect of the embodiments of this application provides a photographing device including a first camera and a second camera, where the optical axes of the first camera and the second camera are parallel, the difference between the field of view of the first camera and the field of view of the second camera is less than a first threshold, and the difference between the focal length of the first camera and the focal length of the second camera is less than a second threshold. The photographing device further includes:
a processor, a controller, a memory, a bus and an input/output interface;

the memory stores program code; and

the processor, when calling the program code in the memory, performs the following operations:
driving the controller to control the second camera to focus on the target position where the target object is located, where the second camera has a second depth of field when focused on the target object;

determining a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold and the target object lies within the overlapping depth of field;

determining, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, where the first position is different from the target position;

driving the controller to control the first camera to focus on the first position;

obtaining a first image and a second image, where the first image is captured when the first camera is focused on the first position and the second image is captured when the second camera is focused on the target position; and

blurring the region other than the target object in the first image or the second image to obtain a target image.
A fourth aspect of the embodiments of this application provides a computer-readable storage medium including instructions that, when run on a computer, cause the computer to perform some or all of the steps of the image processing method of the first aspect.
A fifth aspect of the embodiments of this application provides a computer program product including instructions that, when run on a computer, cause the computer to perform some or all of the steps of the image processing method of the first aspect.
As can be seen from the above technical solutions, the embodiments of this application have the following advantage:
In the embodiments of this application, the photographing device controls, through the controller, the second camera to focus on the target position where the target object is located, the second camera having a second depth of field. The photographing device determines a first depth of field according to the target position, where the overlapping depth of field of the first and second depths of field is less than a third threshold and the target object lies within the overlapping depth of field. The photographing device then determines, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, the first position being different from the target position, and controls the first camera through the controller to focus on the first position. The photographing device then obtains the first image captured when the first camera is focused on the first position and the second image captured when the second camera is focused on the target position, and blurs the region other than the target object in the first image or the second image to obtain the target image. It can be understood that because the first camera and the second camera shoot with different depths of field, the first image and the second image differ in how blurred the objects other than the target object (the foreground and background) appear. The target object can therefore be identified more accurately; in particular, the edge region where the target object adjoins its foreground and background and/or the hollow-out regions within it can be segmented effectively, and only the region other than the target object is blurred, yielding an image with a better bokeh effect and improving user experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of the parallax when two cameras shoot;
Fig. 2 is a depth map obtained by disparity computation in the prior art;
Fig. 3 is a schematic diagram of an embodiment of the image processing method of this application;
Fig. 4(a) is a schematic diagram of one arrangement of the two cameras of this application on the photographing device;
Fig. 4(b) is a schematic diagram of another arrangement of the two cameras of this application on the photographing device;
Fig. 5 is a schematic diagram of how the depth of field of a camera of this application changes with its focus position;
Fig. 6 is a schematic diagram of the overlapping depth of field in this application;
Fig. 7 is another schematic diagram of the overlapping depth of field in this application;
Fig. 8 is a depth map obtained by disparity computation in this application;
Fig. 9 is a schematic diagram of an embodiment of the image processing apparatus of this application;
Fig. 10 is a schematic diagram of another embodiment of the image processing apparatus of this application;
Fig. 11 is a schematic structural diagram of the photographing device of this application.
Detailed description of embodiments
The embodiments of this application provide an image processing method and a photographing device for capturing images with a more ideal bokeh effect, improving user experience.
The terms "first", "second", "third", "fourth" and the like (if any) in the description, claims and drawings of this application are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described herein can be implemented in an order other than that illustrated or described herein. In addition, the terms "include" and "have" and any variants of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device containing a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product or device.
The embodiments of this application can be applied to a photographing device including two cameras whose optical axes are parallel and whose fields of view and focal lengths are the same or similar. Because the optical axes are not coincident, that is, there is a distance between the two cameras, the images they capture exhibit parallax. Referring to Fig. 1, the two cameras A and B both have focal length f, and the target object to be photographed is at point P; its positions on the two imaging planes are P1 and P2 respectively. As can be seen, the distance from P1 to the left edge of camera A's imaging plane is L1, the distance from P2 to the left edge of camera B's imaging plane is L2, and L1 and L2 are unequal, so the images captured by camera A and camera B exhibit parallax. By the principle of similar triangles, the distance Z from point P to the plane of the two cameras can be calculated, and on this basis a depth map of the overlapping region of the scenes captured by the two cameras can be obtained.
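The similar-triangles relation described above reduces to Z = f * B / d for rectified cameras with parallel optical axes, where B is the baseline between the cameras and d = |L1 - L2| is the disparity. A minimal sketch under the pinhole-camera assumption; units and values are illustrative only:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by two parallel, rectified pinhole cameras:
    Z = f * B / d.
    focal_px     -- focal length in pixels (assumed equal for both cameras)
    baseline_m   -- distance between the two optical axes, in metres
    disparity_px -- |L1 - L2|, the horizontal shift of the point between images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: f = 1000 px, baseline 2 cm, disparity 10 px -> Z = 2 m
print(depth_from_disparity(1000, 0.02, 10.0))  # -> 2.0
```

Note the inverse relation: halving the disparity doubles the estimated depth, which is why small disparity errors matter most for distant points.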
The depth map may be as shown in Fig. 2. According to the depth map, objects in different planes of the photographed scene can be segmented. For example, both cameras focus on the plane of the human body to obtain the depth map of Fig. 2; the plane of the human body is segmented from its foreground and background according to the depth map, the region outside that plane is then blurred, and a bokeh image is finally obtained. However, because the focal lengths of the two cameras are similar, their depths of field when both focus on the human body are also similar; that is, the two images show no obvious difference in how blurred the parts other than the human body (the foreground and background) appear. The precision of the depth map finally obtained, shown in Fig. 2, is therefore poor: the edge region of the human body and the hollow-out regions between the fingers are indistinct and hard to segment, and the bokeh effect of the final photo is unsatisfactory.
To this end, the embodiments of this application provide an image processing method on the basis of a dual-camera system that can capture photos with a better bokeh effect.
For ease of understanding, the detailed procedure in the embodiments of this application is described below.

Referring to Fig. 3, an embodiment of the image processing method in the embodiments of this application includes the following steps.
301. The photographing device controls, through the controller, the second camera to focus on the target position where the target object is located.

In the embodiments of this application, the photographing device includes a first camera, a second camera and a controller, where the optical axes of the first camera and the second camera are parallel, the difference between the field of view of the first camera and the field of view of the second camera is less than a first threshold, and the difference between the focal length of the first camera and the focal length of the second camera is less than a second threshold. After the target object to be photographed is determined, the photographing device can determine the target position where the target object is located; the photographing device then drives the controller and, through the controller, controls the second camera to move along the direction of its optical axis and focus on the target position. It can be understood that when the second camera is focused on the target position, the second depth of field that the second camera has at that time can be determined.
It should be noted that the depth of field of a camera refers to the range of longitudinal distances of the photographed object within which the camera produces a sharp image. The two endpoints of the depth of field are the near point and the far point: the near point is the point in the depth of field closest to the camera, and the far point is the point in the depth of field farthest from it.
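The near and far points can be estimated from standard thin-lens optics. The patent does not give these formulas, so the following sketch uses the common hyperfocal-distance approximation as an illustrative assumption (all parameters are hypothetical example values):

```python
def depth_of_field(f_mm, N, c_mm, s_mm):
    """Approximate near and far depth-of-field limits for focus distance s,
    using the common hyperfocal approximation H = f^2 / (N * c):
        near = H*s / (H + s),   far = H*s / (H - s)   (far -> inf when s >= H)
    f_mm -- focal length, N -- f-number, c_mm -- circle of confusion,
    s_mm -- focus distance; all lengths in millimetres.
    """
    H = f_mm * f_mm / (N * c_mm)
    near = H * s_mm / (H + s_mm)
    far = float("inf") if s_mm >= H else H * s_mm / (H - s_mm)
    return near, far

# Example: 4 mm phone lens at f/2, c = 0.005 mm, focused at 500 mm
near, far = depth_of_field(4.0, 2.0, 0.005, 500.0)
print(round(near), round(far))  # near ~ 381 mm, far ~ 727 mm
```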
It should be noted that the fields of view of the first camera and the second camera may both be greater than or equal to 60°. With both cameras having a relatively large field of view, the overlapping region of the two cameras' shots is large, and a photo with a sufficiently large framing range can finally be obtained. Of course, the fields of view of the two cameras may also take other values, for example both greater than or equal to 50°, which is not specifically limited here.
In addition, the minimum focus distances of the first camera and the second camera may both be less than or equal to 20 cm; likewise, the minimum focus distances of the two cameras may take other values, for example both less than or equal to 30 cm, which is not specifically limited here.
It can be understood that the photographing device may be a terminal device such as a mobile phone or a tablet computer, and the first camera and the second camera may be arranged on the photographing device in various ways. Referring to Fig. 4(a) and Fig. 4(b), with a mobile phone as the photographing device, the two cameras may be arranged side by side as shown in Fig. 4(a), or one above the other as shown in Fig. 4(b). In addition, the two cameras may both be arranged on the back of the phone's display screen, or both on the same side as the display screen; that is, both may serve as front cameras or both as rear cameras, which is not specifically limited here. The distance between the two cameras is subject to practical application and is not limited here.
Optionally, the photographing device may include more cameras in addition to the first camera and the second camera.
302. The photographing device determines the first depth of field according to the target position.

After the second camera is focused on the target position, the photographing device can calculate the first depth of field, that is, the depth of field the first camera needs to have, where the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold and the target object lies within the overlapping depth of field.
It can be understood that the depth of field of a camera differs as its focus position differs. As shown in Fig. 5, the abscissa indicates the distance from the camera's focus position to the camera, and the ordinate indicates the distance from points within the camera's corresponding depth of field to the camera. It can be seen from Fig. 5 that the closer the focus position is to the camera, the smaller the camera's current depth-of-field range. The embodiments of this application require the first camera and the second camera to have different depths of field when shooting. The second camera focuses on the target position where the target object is located, so the target object is necessarily within the second depth of field, and it must also be ensured that the first depth of field covers the target object; the first depth of field and the second depth of field therefore have an overlapping range, namely the overlapping depth of field. Furthermore, the overlapping range of the first and second depths of field must not be too large, so the overlapping depth of field must satisfy the condition of being less than a third threshold; for example, the third threshold may be 10 cm or 20 cm, which is not specifically limited here.
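The two conditions on the overlapping depth of field (shorter than the third threshold, and containing the target object) amount to a simple interval test. A sketch with hypothetical distances in centimetres; this is an illustration of the stated constraints, not the patent's actual check:

```python
def overlap_ok(dof1, dof2, target, threshold):
    """Check the two conditions on the first and second depths of field:
    the overlap must be shorter than `threshold`, and the target distance
    must lie inside the overlap.  Each DoF is a (near, far) pair."""
    near = max(dof1[0], dof2[0])   # near point of the overlapping depth of field
    far = min(dof1[1], dof2[1])    # far point of the overlapping depth of field
    if near >= far:
        return False               # the two depths of field do not overlap at all
    return (far - near) < threshold and near <= target <= far

# DoF1 = 80..105 cm, DoF2 = 95..140 cm -> overlap is 95..105 cm (10 cm long)
print(overlap_ok((80, 105), (95, 140), target=100, threshold=20))  # True
print(overlap_ok((80, 105), (95, 140), target=100, threshold=5))   # False: overlap too long
```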
303. The photographing device determines a first position corresponding to the first depth of field.
After the photographing device has determined the first depth of field, it can determine the corresponding first position according to the depth-of-field table of the first camera. The first position is the position on which the first camera needs to focus, and it is different from the target position. It is understood that as a camera's focus position changes, its corresponding depth of field changes with it, and the correspondence between focus position and depth of field can be obtained from the camera's depth-of-field table; therefore the first position corresponding to the first depth of field can be calculated from that table.
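A minimal sketch of such a table lookup follows, assuming a calibrated table mapping focus distance to depth-of-field limits. The table values, function name, and the greedy selection rule are all hypothetical illustrations, not the patent's exact procedure:

```python
# Hypothetical depth-of-field table for the first camera:
# focus distance (cm) -> (near limit, far limit) in cm, from calibration.
DOF_TABLE = {
    30: (28, 33),
    50: (45, 57),
    80: (68, 98),
    120: (95, 165),
    200: (140, 350),
}

def first_position_for(required_near, required_far):
    """Pick the focus position whose depth of field covers the required
    interval with the smallest total range; None if no entry covers it."""
    candidates = [(far - near, focus)
                  for focus, (near, far) in DOF_TABLE.items()
                  if near <= required_near and far >= required_far]
    if not candidates:
        return None
    return min(candidates)[1]
```

In practice the table would be dense enough that a covering entry with a suitably small overlap always exists.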
It should be noted that the first position can differ from the target position in several specific ways, each described below:
One: the distance from the first position to the photographing device is less than the distance from the target position to the photographing device.
As shown in Fig. 6, the second camera focuses on the target position and has the second depth of field, while the first camera focuses on the first position and has the first depth of field, the first position being closer to the photographing device than the target position. In this case, the overlapping depth of field is the range between the far point of the first depth of field and the near point of the second depth of field. To keep the overlapping depth of field as small as possible while still ensuring that the first depth of field just covers the object, the far point of the first depth of field ideally coincides with the target position.
Two: the distance from the first position to the photographing device is greater than the distance from the target position to the photographing device.
As shown in Fig. 7, the second camera focuses on the target position and has the second depth of field, while the first camera focuses on the first position and has the first depth of field, the first position being farther from the photographing device than the target position. In this case, the overlapping depth of field is the range between the near point of the first depth of field and the far point of the second depth of field. To keep the overlapping depth of field as small as possible while still ensuring that the first depth of field just covers the object, the near point of the first depth of field ideally coincides with the target position.
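Both cases above reduce to the same interval computation: the overlapping depth of field is the length of the intersection of the two depth-of-field intervals. A small illustrative helper (hypothetical name, not from the patent) makes this concrete:

```python
def overlap_depth(dof1, dof2):
    """Length of the overlap between two depth-of-field intervals,
    each given as (near point, far point) in the same units.
    Returns 0.0 if the intervals do not overlap."""
    near = max(dof1[0], dof2[0])
    far = min(dof1[1], dof2[1])
    return max(0.0, far - near)
```

With the first depth of field nearer (case one), the overlap is the first far point minus the second near point; with it farther (case two), the second far point minus the first near point; the check against the third threshold is the same either way.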
304. The photographing device controls, via the controller, the first camera to focus on the first position.
After the photographing device has determined the first position, it can drive the controller, and through the controller control the first camera to move along the direction of its optical axis and focus on the first position.
305. The photographing device obtains a first image and a second image.
The photographing device can obtain the first image, taken when the first camera is focused on the first position, and the second image, taken when the second camera is focused on the target position. It is understood that after the first camera and the second camera are turned on, image preprocessing such as basic brightness and color processing can be performed; the processed images can then be sent to the display screen for framing, and finally the first image and the second image are captured.
It should be noted that when shooting with the photographing device, the user needs to hold it steady, and no object in the scene should move, so that the first image and the second image are two images taken of the same scene.
306. The photographing device blurs the region other than the object in the first image or the second image to obtain a target image.
The photographing device can calculate the disparity information between the first image and the second image, and from the disparity information calculate depth information. Optionally, this depth information may be first depth information in the coordinate system of the first image, or second depth information in the coordinate system of the second image. The photographing device can then determine, according to the first depth information, a first region other than the object in the first image, or determine, according to the second depth information, a second region other than the object in the second image. The first region includes the edge region adjoining the object in the first image and/or a hollowed-out region within the object, and the second region includes the edge region adjoining the object in the second image and/or a hollowed-out region within the object. The depth information can be represented as a depth map; Fig. 8 shows a depth map obtained by shooting and calculating with the method of this embodiment. According to the depth map, the photographing device can segment the object from the region other than the object, that is, distinguish the object from the rest of the scene. Taking a human body as the photographed object, it can be seen from Fig. 8 that the edge region of the body and the hollowed-out regions between the fingers are noticeably clearer than in the depth map of Fig. 2; that is, Fig. 8 is more accurate than Fig. 2, so the edge region of the body and the hollowed-out regions between the fingers can be segmented more effectively. On this basis, the region other than the object is blurred to obtain the target image, with the degree of blur increasing with distance from the plane in which the object lies.
Optionally, the photographing device may choose the first image and blur the first region other than the object in it to obtain the target image, or choose the second image and blur the second region other than the object in it to obtain the target image; which image is chosen for blurring is not limited here.
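The disparity-to-depth step and the depth-gated blur can be sketched as follows, assuming rectified images and using the standard stereo relation Z = f·B/d. This is a simplified illustration with hypothetical helper names and a naive box blur, not the patent's exact computation (which blurs progressively with distance from the object plane):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_mm):
    """Depth map from a disparity map via Z = f * B / d."""
    d = np.maximum(disparity.astype(np.float64), 1e-6)  # avoid divide-by-zero
    return focal_px * baseline_mm / d

def blur_background(image, depth, subject_depth, tolerance, strength=3):
    """Box-blur pixels whose depth differs from the subject's by more than
    `tolerance`; pixels within tolerance (the object) stay sharp."""
    pad = strength // 2
    padded = np.pad(image.astype(np.float64), pad, mode='edge')
    blurred = np.zeros(image.shape, dtype=np.float64)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = padded[i:i + strength, j:j + strength].mean()
    mask = np.abs(depth - subject_depth) > tolerance  # True = background
    return np.where(mask, blurred, image.astype(np.float64))
```

A real implementation would use a stereo-matching algorithm to obtain the disparity map and a spatially varying kernel so that regions farther from the object's plane are blurred more strongly.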
In the embodiment of the present application, the photographing device controls, via the controller, the second camera to focus on the target position where the object is located, the second camera having a second depth of field. The photographing device determines a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold and the object lies within the overlapping depth of field. The photographing device then determines, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, the first position being different from the target position, and controls the first camera, via the controller, to focus on the first position. The photographing device subsequently obtains the first image, taken when the first camera is focused on the first position, and the second image, taken when the second camera is focused on the target position, and blurs the region other than the object in the first image or the second image to obtain the target image. It is understood that because the first camera and the second camera shoot with different depths of field, the first image and the second image blur the content other than the object (foreground and background) to different degrees. The object can therefore be identified more accurately; in particular, the edge region where the object adjoins its foreground and background, and/or the hollowed-out regions within it, can be segmented effectively. Only the region other than the object is blurred, yielding an image with a better bokeh effect and improving the user experience.
The image processing method in the embodiment of the present application has been described above; the image processing apparatus in the embodiment of the present application is described below:
Referring to Fig. 9, one embodiment of the image processing apparatus in the embodiment of the present application includes a first camera and a second camera, where the optical axes of the first camera and the second camera are parallel, the difference between the field angle of the first camera and the field angle of the second camera is less than a first threshold, and the difference between the focal length of the first camera and the focal length of the second camera is less than a second threshold;
In addition, the image processing apparatus further includes:
a control unit 901, configured to control the second camera to focus on a target position where an object is located, where the second camera has a second depth of field when focused on the object;
a first determination unit 902, configured to determine a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold, and the object is located within the overlapping depth of field;
a second determination unit 903, configured to determine, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, where the first position is different from the target position;
the control unit 901 being further configured to control the first camera to focus on the first position;
an acquiring unit 904, configured to obtain a first image and a second image, the first image being the image taken when the first camera is focused on the first position, and the second image being the image taken when the second camera is focused on the target position;
a blur unit 905, configured to blur the region other than the object in the first image or the second image to obtain a target image.
It should be noted that the control unit 901 in the embodiment of the present application may include the controller; that is, the controller may be integrated in the control unit 901, which then focuses the first camera and the second camera through it. Alternatively, the control unit 901 and the controller may be two different units, in which case the control unit 901 drives the controller, and the controller in turn controls the focusing of the first camera and the second camera.
In the embodiment of the present application, the control unit 901 controls the second camera to focus on the target position where the object is located, the second camera having a second depth of field. The first determination unit 902 determines a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold and the object lies within the overlapping depth of field. The second determination unit 903 then determines, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, the first position being different from the target position, and the control unit 901 controls the first camera to focus on the first position. The acquiring unit 904 subsequently obtains the first image, taken when the first camera is focused on the first position, and the second image, taken when the second camera is focused on the target position, and the blur unit 905 blurs the region other than the object in the first image or the second image to obtain the target image. It is understood that because the first camera and the second camera shoot with different depths of field, the first image and the second image blur the content other than the object (foreground and background) to different degrees. The object can therefore be identified more accurately; in particular, the edge region where the object adjoins its foreground and background, and/or the hollowed-out regions within it, can be segmented effectively. Only the region other than the object is blurred, yielding an image with a better bokeh effect and improving the user experience.
Optionally, on the basis of the embodiment corresponding to Fig. 9, and referring to Fig. 10, in another embodiment of the image processing apparatus of the embodiment of the present application,
the blur unit 905 includes:
a first computing module 9051, configured to calculate the disparity information between the first image and the second image;
a second computing module 9052, configured to calculate the first depth information of the first image according to the disparity information;
a determining module 9053, configured to determine, according to the first depth information, a first region other than the object in the first image, the first region including the edge region adjoining the object in the first image and/or a hollowed-out region within the object;
a blurring module 9054, configured to blur the first region in the first image to obtain the target image.
Optionally,
the second computing module 9052 is further configured to calculate the second depth information of the second image according to the disparity information;
the determining module 9053 is further configured to determine, according to the second depth information, a second region other than the object in the second image, the second region including the edge region adjoining the object in the second image and/or a hollowed-out region within the object;
the blurring module 9054 is further configured to blur the second region in the second image to obtain the target image.
The image processing apparatus in the embodiment of the present application has been described above from the perspective of modular functional entities; the photographing device in the embodiment of the present application is described below from the perspective of hardware processing:
The embodiment of the present application also provides a photographing device. As shown in Fig. 11, for ease of description, only the parts relevant to the embodiment of the present application are illustrated; for specific technical details not disclosed, please refer to the method part of the embodiment of the present application. The photographing device may be a terminal device such as a mobile phone, a tablet computer, a personal digital assistant (PDA), or an in-vehicle computer; the following description takes a mobile phone as the photographing device:
Figure 11 shows a block diagram of part of the structure of a mobile phone related to the photographing device provided by the embodiment of the present application. Referring to Fig. 11, the mobile phone includes components such as a memory 1120, an input unit 1130, a display unit 1140, a controller 1150, a first camera 1160, a second camera 1170, a processor 1180, and a power supply 1190. Those skilled in the art will understand that the mobile phone structure shown in Fig. 11 does not constitute a limitation on the mobile phone, which may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components.
Each component of the mobile phone is introduced below with reference to Fig. 11:
The memory 1120 can be used to store software programs and modules; the processor 1180 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system and the application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area can store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 1120 may include high-speed random access memory, and may also include non-volatile memory such as at least one magnetic disk storage device or flash memory device, or other volatile solid-state storage devices.
The input unit 1130 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1130 may include a touch panel 1131 and other input devices 1132. The touch panel 1131, also called a touch screen, collects the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 1131 with a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connected device according to a preset program. Optionally, the touch panel 1131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1180, and receives and executes commands sent by the processor 1180. In addition, the touch panel 1131 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1131, the input unit 1130 may also include other input devices 1132, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, and a joystick.
The display unit 1140 can be used to display information input by the user or provided to the user, as well as the various menus of the mobile phone; in the embodiment of the present application it is mainly used to display the images taken by the cameras. The display unit 1140 may include a display panel 1141, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 1131 may cover the display panel 1141; when the touch panel 1131 detects a touch operation on or near it, it transmits the operation to the processor 1180 to determine the type of the touch event, and the processor 1180 then provides a corresponding visual output on the display panel 1141 according to the type of the touch event. Although in Fig. 11 the touch panel 1131 and the display panel 1141 implement the input and output functions of the mobile phone as two separate components, in some embodiments the touch panel 1131 and the display panel 1141 may be integrated to implement the input and output functions of the mobile phone.
The controller 1150 can be used to control the first camera and the second camera to move along the direction of their optical axes and perform focusing.
The first camera 1160 and the second camera 1170 can be used to shoot the scene to obtain the first image and the second image respectively, where the optical axes of the first camera 1160 and the second camera 1170 are parallel, the difference between the field angle of the first camera 1160 and the field angle of the second camera 1170 is less than a first threshold, and the difference between the focal length of the first camera 1160 and the focal length of the second camera 1170 is less than a second threshold.
The processor 1180 is the control center of the mobile phone. It uses various interfaces and lines to connect all parts of the whole phone, and performs the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 1120 and calling the data stored in the memory 1120, thereby monitoring the phone as a whole. Optionally, the processor 1180 may include one or more processing units; preferably, the processor 1180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It is understood that the modem processor may also not be integrated into the processor 1180.
The mobile phone also includes a power supply 1190 (such as a battery) that supplies power to all components. Preferably, the power supply can be logically connected to the processor 1180 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
In the embodiment of the present application, the processor 1180 is specifically configured to perform all or part of the actions performed by the photographing device in the embodiment shown in Fig. 3; details are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a division of logical functions, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Moreover, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of a given embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or replace some of the technical features with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (11)

1. An image processing method, applied to a photographing device, the photographing device comprising a first camera, a second camera, and a controller, wherein the optical axes of the first camera and the second camera are parallel, the difference between the field angle of the first camera and the field angle of the second camera is less than a first threshold, and the difference between the focal length of the first camera and the focal length of the second camera is less than a second threshold, characterized in that the method comprises:
the photographing device controlling, via the controller, the second camera to focus on a target position where an object is located, wherein the second camera has a second depth of field when focused on the object;
the photographing device determining a first depth of field according to the target position, wherein the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold, and the object is located within the overlapping depth of field;
the photographing device determining, according to a depth-of-field table of the first camera, a first position corresponding to the first depth of field, wherein the first position is different from the target position;
the photographing device controlling, via the controller, the first camera to focus on the first position;
the photographing device obtaining a first image and a second image, the first image being an image taken when the first camera is focused on the first position, and the second image being an image taken when the second camera is focused on the target position;
the photographing device blurring a region other than the object in the first image or the second image to obtain a target image.
2. The method according to claim 1, wherein the photographing device blurring the region other than the object in the first image or the second image to obtain the target image comprises:
the photographing device calculating disparity information between the first image and the second image;
the photographing device calculating first depth information of the first image according to the disparity information;
the photographing device determining, according to the first depth information, a first region other than the object in the first image, the first region comprising an edge region adjoining the object in the first image and/or a hollowed-out region within the object;
the photographing device blurring the first region in the first image to obtain the target image.
3. The method according to claim 1, wherein the photographing device blurring the region other than the object in the first image or the second image to obtain the target image comprises:
the photographing device calculating disparity information between the first image and the second image;
the photographing device calculating second depth information of the second image according to the disparity information;
the photographing device determining, according to the second depth information, a second region other than the object in the second image, the second region comprising an edge region adjoining the object in the second image and/or a hollowed-out region within the object;
the photographing device blurring the second region in the second image to obtain the target image.
4. The method according to any one of claims 1 to 3, wherein
the field angles of the first camera and the second camera are each greater than or equal to 60°.
5. The method according to any one of claims 1 to 3, wherein
the closest focusing distances of the first camera and the second camera are each less than or equal to 20 cm.
6. An image processing apparatus, comprising a first camera and a second camera, wherein the optical axes of the first camera and the second camera are parallel, the difference between the field angle of the first camera and the field angle of the second camera is less than a first threshold, and the difference between the focal length of the first camera and the focal length of the second camera is less than a second threshold, characterized in that the apparatus further comprises:
a control unit, configured to control the second camera to focus on a target position where an object is located, wherein the second camera has a second depth of field when focused on the object;
a first determination unit, configured to determine a first depth of field according to the target position, wherein the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold, and the object is located within the overlapping depth of field;
a second determination unit, configured to determine, according to a depth-of-field table of the first camera, a first position corresponding to the first depth of field, wherein the first position is different from the target position;
the control unit being further configured to control the first camera to focus on the first position;
an acquiring unit, configured to obtain a first image and a second image, the first image being an image taken when the first camera is focused on the first position, and the second image being an image taken when the second camera is focused on the target position;
a blur unit, configured to blur a region other than the object in the first image or the second image to obtain a target image.
7. The image processing apparatus according to claim 6, wherein the blur unit comprises:
a first computing module, configured to calculate disparity information between the first image and the second image;
a second computing module, configured to calculate first depth information of the first image according to the disparity information;
a determining module, configured to determine, according to the first depth information, a first region other than the object in the first image, the first region comprising an edge region adjoining the object in the first image and/or a hollowed-out region within the object;
a blurring module, configured to blur the first region in the first image to obtain the target image.
8. The image processing apparatus according to claim 6, wherein the blur unit comprises:
a first computing module, configured to calculate disparity information between the first image and the second image;
a second computing module, configured to calculate second depth information of the second image according to the disparity information;
a determining module, configured to determine, according to the second depth information, a second region other than the object in the second image, the second region comprising an edge region adjoining the object in the second image and/or a hollowed-out region within the object;
a blurring module, configured to blur the second region in the second image to obtain the target image.
9. A photographing device, comprising a first camera and a second camera, where an optical axis of the first camera is parallel to an optical axis of the second camera, a difference between a field angle of the first camera and a field angle of the second camera is less than a first threshold, and a difference between a focal length of the first camera and a focal length of the second camera is less than a second threshold, the photographing device further comprising:
a processor, a controller, a memory, a bus, and an input/output interface;
where program code is stored in the memory;
and the processor, when calling the program code in the memory, performs the following operations:
driving the controller to control the second camera to focus on a target position where an object is located, where the second camera has a second depth of field when focused on the object;
determining a first depth of field according to the target position, where an overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold, and the object is located within the overlapping depth of field;
determining, according to a depth-of-field table of the first camera, a first position corresponding to the first depth of field, where the first position is different from the target position;
driving the controller to control the first camera to focus on the first position;
obtaining a first image and a second image, where the first image is an image taken when the first camera is focused on the first position, and the second image is an image taken when the second camera is focused on the target position; and
performing blur processing on a region other than the object in the first image or the second image to obtain a target image.
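The depth-of-field reasoning in claim 9 can be illustrated with the standard thin-lens formulas: hyperfocal distance H = f²/(N·c) + f and the usual near/far limits of the depth of field. These formulas and every numeric parameter below are textbook assumptions for illustration, not values taken from the patent.

```python
# Sketch of the claim-9 focusing logic: the second camera focuses on the
# object; the first position is chosen so that the two depths of field
# overlap only narrowly, with the object inside the overlap.

def dof_limits(focus_m, focal_m, f_number, coc_m):
    """Near/far limits of the depth of field for a lens focused at focus_m."""
    hyperfocal = focal_m ** 2 / (f_number * coc_m) + focal_m  # H = f^2/(N*c) + f
    near = focus_m * (hyperfocal - focal_m) / (hyperfocal + focus_m - 2 * focal_m)
    if focus_m >= hyperfocal:
        far = float("inf")
    else:
        far = focus_m * (hyperfocal - focal_m) / (hyperfocal - focus_m)
    return near, far

def overlap_width(dof_a, dof_b):
    """Width of the overlapping depth of field (0.0 if the ranges are disjoint)."""
    return max(0.0, min(dof_a[1], dof_b[1]) - max(dof_a[0], dof_b[0]))

# Hypothetical 50 mm f/2 lenses with a 30 um circle of confusion.
# The second camera focuses on the object at 2.0 m; the first position
# (1.92 m) is closer, so the two depths of field barely overlap.
object_m = 2.0
second = dof_limits(object_m, 0.050, 2.0, 30e-6)
first = dof_limits(1.92, 0.050, 2.0, 30e-6)
shared = overlap_width(first, second)
in_overlap = max(first[0], second[0]) <= object_m <= min(first[1], second[1])
```

A narrow overlap containing the object means each camera keeps the object sharp while defocusing a different side of the scene, which is what lets the subsequent blur step separate the object from its surroundings.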
10. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 5.
11. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 5.
CN201810028792.1A 2018-01-11 2018-01-11 Image processing method, image processing device and photographing equipment Active CN110035218B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810028792.1A CN110035218B (en) 2018-01-11 2018-01-11 Image processing method, image processing device and photographing equipment
PCT/CN2018/113926 WO2019137081A1 (en) 2018-01-11 2018-11-05 Image processing method, image processing apparatus, and photographing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810028792.1A CN110035218B (en) 2018-01-11 2018-01-11 Image processing method, image processing device and photographing equipment

Publications (2)

Publication Number Publication Date
CN110035218A true CN110035218A (en) 2019-07-19
CN110035218B CN110035218B (en) 2021-06-15

Family

ID=67219302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810028792.1A Active CN110035218B (en) 2018-01-11 2018-01-11 Image processing method, image processing device and photographing equipment

Country Status (2)

Country Link
CN (1) CN110035218B (en)
WO (1) WO2019137081A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144404B (en) * 2019-12-06 2023-08-11 恒大恒驰新能源汽车科技(广东)有限公司 Method, apparatus, system, computer device and storage medium for detecting legacy object

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130169760A1 (en) * 2012-01-04 2013-07-04 Lloyd Watts Image Enhancement Methods And Systems
CN103763477A (en) * 2014-02-21 2014-04-30 上海果壳电子有限公司 Double-camera after-shooting focusing imaging device and method
CN104424640A (en) * 2013-09-06 2015-03-18 格科微电子(上海)有限公司 Method and device for carrying out blurring processing on images
CN105847674A (en) * 2016-03-25 2016-08-10 维沃移动通信有限公司 Preview image processing method based on mobile terminal, and mobile terminal therein
CN107087091A (en) * 2017-05-31 2017-08-22 广东欧珀移动通信有限公司 The casing assembly and electronic equipment of electronic equipment


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113079313A (en) * 2019-12-18 2021-07-06 佳能株式会社 Image processing apparatus, image pickup apparatus, image processing method, and storage medium
CN113079313B (en) * 2019-12-18 2022-09-06 佳能株式会社 Image processing apparatus, image pickup apparatus, image processing method, and storage medium
CN112585941A (en) * 2019-12-30 2021-03-30 深圳市大疆创新科技有限公司 Focusing method and device, shooting equipment, movable platform and storage medium
CN112469984A (en) * 2019-12-31 2021-03-09 深圳迈瑞生物医疗电子股份有限公司 Image analysis device and imaging method thereof
CN112469984B (en) * 2019-12-31 2024-04-09 深圳迈瑞生物医疗电子股份有限公司 Image analysis device and imaging method thereof
CN111246093A (en) * 2020-01-16 2020-06-05 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN112702530A (en) * 2020-12-29 2021-04-23 维沃移动通信(杭州)有限公司 Algorithm control method and electronic equipment
CN113688824A (en) * 2021-09-10 2021-11-23 福建汇川物联网技术科技股份有限公司 Information acquisition method and device for construction node and storage medium
CN113688824B (en) * 2021-09-10 2024-02-27 福建汇川物联网技术科技股份有限公司 Information acquisition method, device and storage medium for construction node
CN116051362A (en) * 2022-08-24 2023-05-02 荣耀终端有限公司 Image processing method and electronic equipment
CN116051362B (en) * 2022-08-24 2023-09-15 荣耀终端有限公司 Image processing method and electronic equipment

Also Published As

Publication number Publication date
WO2019137081A1 (en) 2019-07-18
CN110035218B (en) 2021-06-15

Similar Documents

Publication Publication Date Title
CN110035218A (en) A kind of image processing method, image processing apparatus and photographing device
JP7264993B2 (en) Method and apparatus for controlling interaction between virtual object and projectile, and computer program
CN111935393A (en) Shooting method, shooting device, electronic equipment and storage medium
US9392165B2 (en) Array camera, mobile terminal, and methods for operating the same
JP2022537614A (en) Multi-virtual character control method, device, and computer program
CN105933589A (en) Image processing method and terminal
US8913037B1 (en) Gesture recognition from depth and distortion analysis
US9081418B1 (en) Obtaining input from a virtual user interface
US20150009119A1 (en) Built-in design of camera system for imaging and gesture processing applications
US9268408B2 (en) Operating area determination method and system
CN112771438B (en) Depth sculpturing three-dimensional depth images using two-dimensional input selection
CN112771856B (en) Separable distortion parallax determination
CN109151329A (en) Photographic method, device, terminal and computer readable storage medium
CN105141942A (en) 3d image synthesizing method and device
CN111597922A (en) Cell image recognition method, system, device, equipment and medium
CN109726614A (en) 3D stereoscopic imaging method and device, readable storage medium storing program for executing, electronic equipment
CN111127541B (en) Method and device for determining vehicle size and storage medium
CN111339880A (en) Target detection method and device, electronic equipment and storage medium
CN113724309A (en) Image generation method, device, equipment and storage medium
CN105100557B (en) Portable electronic device and image extraction method
CN114115544B (en) Man-machine interaction method, three-dimensional display device and storage medium
CN110213407A (en) A kind of operating method of electronic device, electronic device and computer storage medium
EP3962062A1 (en) Photographing method and apparatus, electronic device, and storage medium
CN108550182A (en) A kind of three-dimensional modeling method and terminal
CN104156138B (en) Filming control method and imaging control device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant