CN105578026A - Photographing method and user terminal - Google Patents

Photographing method and user terminal

Info

Publication number
CN105578026A
CN105578026A (application CN201510404513.3A, granted as CN105578026B)
Authority
CN
China
Prior art keywords
camera
depth of field layer
user terminal
focusing position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510404513.3A
Other languages
Chinese (zh)
Other versions
CN105578026B (en)
Inventor
杨杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Original Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yulong Computer Telecommunication Scientific Shenzhen Co Ltd filed Critical Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority to CN201510404513.3A priority Critical patent/CN105578026B/en
Priority to PCT/CN2015/085884 priority patent/WO2017008353A1/en
Publication of CN105578026A publication Critical patent/CN105578026A/en
Application granted granted Critical
Publication of CN105578026B publication Critical patent/CN105578026B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/958Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N23/959Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics

Abstract

The embodiment of the invention discloses a photographing method and a user terminal. The method comprises the following steps: when an autofocus photographing instruction is received, the instruction is responded to by acquiring a first preview image through a first camera and a second preview image through a second camera; the object distance from each photographed subject contained in the two preview images to the user terminal is calculated from the two images; at least one depth-of-field layer is generated from the calculated object distances, each layer containing one photographed subject or multiple photographed subjects at the same object distance from the user terminal; a target focusing position corresponding to each depth-of-field layer is obtained from a preset correspondence between object distance and focusing position; and the first camera and/or the second camera focus on the target focusing position and take the photograph. The focusing position can thus be set automatically, and focusing and photographing performed without user intervention.

Description

Photographing method and user terminal
Technical field
The present invention relates to the field of electronic technology, and in particular to a photographing method and a user terminal.
Background art
The shoot-first, focus-later photographing mode is a highly engaging mode that is popular with users. It takes multiple photographs at different focusing positions and combines them into a picture in a special format; when the user taps such a picture, the display changes with the tap location to show the version focused at that point, so that wherever the user taps becomes sharp.
In practice, however, it is found that because the current shoot-first, focus-later mode needs to take multiple photographs at different focusing positions, the user must set the focusing positions manually before the user terminal can focus and shoot at them; the prior art cannot set focusing positions automatically and perform focused shooting.
Summary of the invention
The embodiment of the invention discloses a photographing method and a user terminal that can set the focusing position automatically and perform focused shooting.
The embodiment of the invention discloses a photographing method, the method comprising:
when an autofocus photographing instruction is received, responding to the instruction by acquiring a first preview image through a first camera and a second preview image through a second camera;
calculating, from the first preview image and the second preview image, the object distance from each photographed subject contained in the two images to the user terminal;
generating at least one depth-of-field layer from the calculated object distances, each depth-of-field layer containing one photographed subject or multiple photographed subjects at the same object distance from the user terminal;
obtaining, from a preset correspondence between object distance and focusing position, the target focusing position corresponding to each depth-of-field layer;
focusing on the target focusing position and taking the photograph through the first camera and/or the second camera.
The embodiment of the invention also discloses a user terminal, the user terminal comprising:
an acquisition module configured to, when an autofocus photographing instruction is received, respond to the instruction by acquiring a first preview image through a first camera and a second preview image through a second camera;
a computing module configured to calculate, from the first preview image and the second preview image, the object distance from each photographed subject contained in the two images to the user terminal;
a generation module configured to generate at least one depth-of-field layer from the calculated object distances, each layer containing one photographed subject or multiple photographed subjects at the same object distance from the user terminal;
the acquisition module being further configured to obtain, from a preset correspondence between object distance and focusing position, the target focusing position corresponding to each depth-of-field layer;
a focusing module configured to focus on the target focusing position and take the photograph through the first camera and/or the second camera.
In the embodiment of the present invention, when the user terminal receives an autofocus photographing instruction, it responds to the instruction by acquiring a first preview image through the first camera and a second preview image through the second camera; it calculates, from the two preview images, the object distance from each photographed subject contained in them to the user terminal; it generates at least one depth-of-field layer from the calculated object distances, each layer containing one subject of the two preview images or multiple subjects at the same object distance from the terminal; and it obtains, from the preset correspondence between object distance and focusing position, the target focusing position corresponding to each depth-of-field layer, then focuses on the target focusing position and shoots through the first camera and/or the second camera. It can be seen that the embodiment of the present invention can set the focusing position automatically and perform focused shooting.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a photographing method disclosed in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another photographing method disclosed in an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a user terminal disclosed in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of another user terminal disclosed in an embodiment of the present invention.
Detailed description
The technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The embodiment of the invention discloses a photographing method and a user terminal that can set the focusing position automatically and perform focused shooting. They are described in detail below.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a photographing method disclosed in an embodiment of the present invention. As shown in Fig. 1, the photographing method may comprise the following steps.
S101: when an autofocus photographing instruction is received, respond to the instruction by acquiring a first preview image through the first camera and a second preview image through the second camera.
In the embodiment of the present invention, when the user terminal receives an autofocus photographing instruction, it responds to the instruction by acquiring the first preview image through the first camera and the second preview image through the second camera. The user terminal may include, but is not limited to, smartphones, tablet computers, notebook computers, desktop computers, and other user terminals with camera capability. Its operating system may include, but is not limited to, Android, iOS, Symbian, BlackBerry, Windows, and so on; the embodiment of the present invention is not limited in this respect.
In the embodiment of the present invention, the user terminal is configured with two cameras, i.e., dual cameras. For convenience of description, the two cameras are called the "first camera" and the "second camera"; it should be noted that these names serve description only.
In the embodiment of the present invention, optionally, before the user terminal receives the autofocus photographing instruction, the user may select the autofocus mode on the terminal; after selecting it, the user can input the autofocus photographing instruction. Optionally, the focusing modes of the terminal may include, but are not limited to, a manual focus mode and an autofocus mode. The manual focus mode requires the user to set the focusing position manually, and the terminal focuses and shoots at the position so set. Optionally, the user sets one or more focusing positions on the preview image, and the terminal focuses and shoots accordingly.
S102: calculate, from the first preview image and the second preview image, the object distance from each photographed subject contained in the two images to the user terminal.
In the embodiment of the present invention, the user terminal calculates, from the first and second preview images it has obtained, the object distance of each photographed subject contained in the two images (i.e., the subject's distance from the user terminal). Specifically, the user terminal determines the subjects contained in the first and second preview images and then calculates each subject's object distance from the terminal. Optionally, the terminal may determine only the persons contained in the two preview images and calculate their object distances; the embodiment of the present invention is not limited in this respect. Since calculating a subject's object distance from the terminal is a known technique in the industry, it is not elaborated here. For example, if the first preview image contains person A and person B and the second preview image contains person A, the user terminal recognizes that the two preview images contain persons A and B and calculates the distance of each to the terminal.
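The patent treats dual-camera distance computation as known art. One common realization is stereo triangulation: for a rectified camera pair, a subject's object distance follows from the pixel disparity of the subject between the two preview images. The sketch below is illustrative only, not the patent's prescribed method; the focal length, baseline, and disparity values are hypothetical.

```python
def object_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Estimate a subject's object distance from stereo disparity.

    Assumes a rectified camera pair: focal length in pixels, baseline
    (distance between the two cameras) in metres, and the horizontal
    pixel shift of the subject between the two preview images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    # Similar triangles: Z = f * B / d
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 2 cm baseline, 10 px disparity -> 2 m
distance = object_distance(1000.0, 0.02, 10.0)
```

A smaller disparity means a more distant subject, which is why a zero disparity (subject at infinity, or a matching failure) is rejected above.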
S103: generate at least one depth-of-field layer from the calculated object distances, each layer containing one photographed subject or multiple photographed subjects at the same object distance from the user terminal.
In the embodiment of the present invention, after calculating the distances from the subjects contained in the first and second preview images to the user terminal, the terminal generates from the calculated object distances a depth map for the first preview image or for the second preview image. The depth map consists of at least one depth-of-field layer. When the depth map is generated for the first preview image, each of its layers contains one subject of the first preview image, or multiple subjects of the first preview image at the same object distance from the terminal; when it is generated for the second preview image, each layer likewise contains one subject of the second preview image or multiple equidistant subjects of it.
For example, if the first preview image contains persons A, B, and C and the second preview image contains person A, the user terminal can generate a depth map for the first preview image from the calculated object distances. The depth map contains all subjects of the first preview image (persons A, B, and C) and consists of at least one depth-of-field layer. If person A's object distance differs from person B's, while persons B and C are at the same distance from the terminal, the depth map consists of two layers: one containing person A, the other containing persons B and C.
It is found in practice that when the user terminal focuses on one subject and shoots, subjects at the same distance as that subject are also captured sharply. Therefore, a single focusing operation per depth-of-field layer suffices to capture all the subjects that the layer contains sharply.
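The grouping described above — one depth-of-field layer per distinct object distance, so that a single focusing operation covers every subject in the layer — can be sketched as follows. The subject names, distances, and the tolerance parameter are hypothetical.

```python
from collections import defaultdict

def build_depth_layers(subject_distances: dict[str, float],
                       tol_m: float = 0.1) -> list[list[str]]:
    """Group subjects into depth-of-field layers.

    Subjects whose object distances fall into the same tolerance bin of
    width `tol_m` share a layer; each layer can then be captured sharply
    with one focusing operation.
    """
    layers = defaultdict(list)
    for name, dist in sorted(subject_distances.items(), key=lambda kv: kv[1]):
        key = round(dist / tol_m)  # quantise the distance into bins
        layers[key].append(name)
    # Return layers ordered from nearest to farthest.
    return [layers[k] for k in sorted(layers)]

# Persons B and C share a layer because they are equidistant from the terminal.
layers = build_depth_layers({"A": 1.0, "B": 2.5, "C": 2.5})
# layers == [["A"], ["B", "C"]]
```

This mirrors the example in the text: persons B and C, being at the same object distance, end up in one layer, while person A occupies its own.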
S104: obtain, from the preset correspondence between object distance and focusing position, the target focusing position corresponding to the depth-of-field layer.
In the embodiment of the present invention, after generating the depth-of-field layers, the user terminal obtains the target focusing position corresponding to each layer from the preset correspondence between object distance and focusing position.
Generally, cameras fall into categories such as fixed-focus, autofocus, optically stabilized, and array cameras, of which fixed-focus and autofocus cameras are the most widely used. An autofocus camera usually consists of a lens, a motor, a color filter, a base, an image sensor, a motor driver chip, a circuit board, a connector, capacitors, and other peripheral components.
The components relevant to the embodiment of the present invention are mainly the lens and the motor. The lens collects light and projects it onto the surface of the image sensor; the position of the lens directly determines the sharpness of the collected image. When the lens is at the focal position corresponding to a subject, the image the terminal takes of that subject is sharpest. The motor drives the lens to the focusing position (i.e., the focal position corresponding to the subject) to achieve focusing, so that the lens projects a sharp image onto the image sensor.
In the embodiment of the present invention, the target focusing position is exactly the lens position at which the camera captures the subjects contained in a depth-of-field layer most sharply.
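The preset correspondence between object distance and focusing position can be realized as a calibration table mapping object distance to lens motor position, with interpolation between entries. The table values below are invented purely for illustration; real values would come from per-module factory calibration.

```python
import bisect

# Hypothetical calibration table: (object distance in metres, motor position
# in steps). Nearer subjects need the lens farther from the sensor.
FOCUS_TABLE = [(0.1, 520), (0.3, 430), (1.0, 300), (3.0, 150), (10.0, 40)]

def target_focus_position(object_distance_m: float) -> int:
    """Look up the lens motor position for a layer's object distance.

    Linearly interpolates between the nearest calibration entries and
    clamps to the table's endpoints outside its range.
    """
    dists = [d for d, _ in FOCUS_TABLE]
    if object_distance_m <= dists[0]:
        return FOCUS_TABLE[0][1]
    if object_distance_m >= dists[-1]:
        return FOCUS_TABLE[-1][1]
    i = bisect.bisect_right(dists, object_distance_m)
    (d0, p0), (d1, p1) = FOCUS_TABLE[i - 1], FOCUS_TABLE[i]
    t = (object_distance_m - d0) / (d1 - d0)
    return round(p0 + t * (p1 - p0))
```

With this table, a layer at 1 m maps to motor position 300, and a layer at 2 m interpolates to 225; the motor then drives the lens to that position to focus.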
As an optional implementation, the user terminal may obtain the target focusing position corresponding to the depth-of-field layer by the following steps:
11) from the generated depth-of-field layers, obtain a first target depth-of-field layer that contains a person;
12) obtain, from the preset correspondence between object distance and focusing position, the target focusing position corresponding to the first target depth-of-field layer.
In this implementation, the generated depth map may consist of many depth-of-field layers; the important layers can therefore be screened out, only the focusing positions corresponding to those layers obtained, and the lens moved to those positions to focus.
In practical applications, users usually photograph people; therefore, in this implementation, face recognition is used to identify whether a depth-of-field layer contains a face. If a layer contains a face, the user terminal judges that it contains a person and takes it as the first target depth-of-field layer.
As another optional implementation, the user terminal may obtain the target focusing position corresponding to the depth-of-field layer by the following steps:
21) from the generated depth-of-field layers, obtain a second target depth-of-field layer whose object distance is less than a preset distance;
22) obtain, from the preset correspondence between object distance and focusing position, the target focusing position corresponding to the second target depth-of-field layer.
In practical applications, users are often uninterested in subjects that are too far away; therefore, in this implementation, the user terminal can take from the generated layers those whose object distance is less than the preset distance as second target depth-of-field layers, and obtain the corresponding target focusing positions from the preset correspondence between object distance and focusing position.
As yet another optional implementation, the user terminal may obtain the target focusing position corresponding to the depth-of-field layer by the following steps:
31) from the generated depth-of-field layers, obtain a third target depth-of-field layer containing more than a preset number of photographed subjects;
32) obtain, from the preset correspondence between object distance and focusing position, the target focusing position corresponding to the third target depth-of-field layer.
In practical applications, to save shooting time, only the layers containing more subjects may be focused on and shot; therefore, in this implementation, the user terminal can take from the generated layers those containing more than the preset number of subjects as third target depth-of-field layers, and obtain the corresponding target focusing positions from the preset correspondence between object distance and focusing position.
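The three optional implementations above select target layers by a contained face, by a maximum object distance, or by a minimum subject count. The patent presents these as alternatives; the sketch below combines them with a logical OR purely for illustration, and the layer representation, thresholds, and subject names are hypothetical.

```python
def select_layers(layers: list[dict], max_dist: float = 5.0,
                  min_count: int = 2) -> list[dict]:
    """Keep a depth-of-field layer if it contains a face (strategy 1),
    lies nearer than a preset distance (strategy 2), or contains more
    than a preset number of subjects (strategy 3)."""
    kept = []
    for layer in layers:
        if (layer["has_face"]
                or layer["distance"] < max_dist
                or len(layer["subjects"]) > min_count):
            kept.append(layer)
    return kept

# Hypothetical layers: a near face, a lone distant subject, a distant group.
demo_layers = [
    {"distance": 2.0, "subjects": ["A"], "has_face": True},
    {"distance": 8.0, "subjects": ["B"], "has_face": False},
    {"distance": 6.0, "subjects": ["C", "D", "E"], "has_face": False},
]
selected = select_layers(demo_layers)
# The lone distant subject is filtered out; the other two layers are kept.
```

Only the kept layers are then mapped to target focusing positions, which reduces the number of focus-and-shoot cycles.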
S105: focus on the target focusing position and take the photograph through the first camera and/or the second camera.
In the embodiment of the present invention, the user terminal focuses on the target focusing position and takes the photograph through the first camera and/or the second camera as follows:
the user terminal moves the lens of the first camera and/or the lens of the second camera to the target focusing position, and shoots the picture through the corresponding camera.
In the embodiment of the present invention, when a single target focusing position is obtained, the user terminal moves the lens of the first camera to the target focusing position through the motor of the first camera to achieve focusing, and shoots the picture through the first camera after focusing; alternatively, it moves the lens of the second camera to the target focusing position through the motor of the second camera and shoots through the second camera after focusing. The embodiment of the present invention is not limited in this respect.
In the embodiment of the present invention, when multiple target focusing positions are obtained, the user terminal can move the lenses of the first and second cameras to different target focusing positions simultaneously, focus separately, and shoot pictures through both cameras after focusing, until the pictures corresponding to all target focusing positions have been taken. For example, given target focusing positions 1, 2, 3, and 4, the user terminal moves the lens of the first camera to position 1 through its motor, focuses, and shoots picture 1; at the same time, it moves the lens of the second camera to position 2 through its motor, focuses, and shoots picture 2. After the first camera has taken picture 1, the terminal continues to move its lens to position 3 through the motor, focuses, and shoots picture 3; after the second camera has taken picture 2, the terminal continues to move its lens to position 4 through the motor, focuses, and shoots picture 4. When multiple target focusing positions are obtained, this approach improves the picture capture speed and enhances the user experience.
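The parallel capture scheme in the example above — camera 1 handling positions 1 and 3 while camera 2 handles positions 2 and 4 — amounts to a round-robin assignment of target focusing positions to the two cameras, which can be sketched as:

```python
def schedule_shots(target_positions: list[int],
                   num_cameras: int = 2) -> list[list[int]]:
    """Distribute target focusing positions across cameras round-robin.

    Each camera receives a queue of positions to focus on and shoot in
    order, so both cameras work in parallel.
    """
    queues = [[] for _ in range(num_cameras)]
    for i, pos in enumerate(target_positions):
        queues[i % num_cameras].append(pos)
    return queues

# Four target positions -> camera 1 shoots positions 1 and 3 while
# camera 2 shoots positions 2 and 4, roughly halving total capture time.
plan = schedule_shots([1, 2, 3, 4])
# plan == [[1, 3], [2, 4]]
```

With an odd number of positions the first camera simply takes one extra shot; any more elaborate load balancing (e.g. by expected motor travel) is outside what the patent describes.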
In the method described in Fig. 1, when the user terminal receives an autofocus photographing instruction, it responds to the instruction by acquiring a first preview image through the first camera and a second preview image through the second camera; it calculates, from the two preview images, the object distance from each photographed subject contained in them to the user terminal; it generates at least one depth-of-field layer from the calculated object distances, each layer containing one subject of the two preview images or multiple subjects at the same object distance from the terminal; and it obtains, from the preset correspondence between object distance and focusing position, the target focusing position corresponding to each depth-of-field layer, then focuses on the target focusing position and shoots through the first camera and/or the second camera. It can be seen that the embodiment of the present invention can set the focusing position automatically and perform focused shooting, sparing the user from setting it manually and improving the user experience.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of another photographing method disclosed in an embodiment of the present invention. As shown in Fig. 2, the photographing method may comprise the following steps.
S201: the user terminal receives a camera start instruction.
In the embodiment of the present invention, when the user wishes to shoot, he or she sends a camera start instruction to the user terminal; on receiving it, the terminal starts the camera application. Optionally, the user may input the camera start instruction by operating a hardware button or an application icon provided by the terminal. Optionally, the instruction may also be sent by a third-party application, such as a picture-sharing or chat application; the embodiment of the present invention is not limited in this respect.
S202: the user terminal responds to the start instruction by starting the first camera and the second camera.
In the embodiment of the present invention, after receiving the camera start instruction, the user terminal responds to it by starting the first camera and the second camera.
In the embodiment of the present invention, the user terminal is configured with two cameras, i.e., dual cameras. For convenience of description, the two cameras are called the "first camera" and the "second camera"; it should be noted that these names serve description only.
S203: the user terminal judges whether the current focusing mode is the autofocus mode.
In the embodiment of the present invention, after starting the first and second cameras, the user terminal judges whether the current focusing mode is the autofocus mode.
In the embodiment of the present invention, the focusing modes of the user terminal may comprise a manual focus mode and an autofocus mode. The manual focus mode requires the user to set the focusing position manually, and the terminal focuses and shoots at the position so set. Optionally, the user sets one or more focusing positions on the preview image, and the terminal focuses and shoots accordingly.
In the embodiment of the present invention, when the user terminal judges that the current focusing mode is the autofocus mode, step S204 is performed; when it judges that the current mode is not the autofocus mode, the terminal detects whether the user inputs a manual focusing-position setting instruction.
In the embodiment of the present invention, providing multiple focusing modes helps the user set the focusing position better and offers more ways of setting it, which helps improve the user experience.
S204: if the user terminal judges that the current focusing mode is the autofocus mode, then when the terminal receives an autofocus photographing instruction, it responds to the instruction by acquiring a first preview image through the first camera and a second preview image through the second camera.
S205: the user terminal calculates, from the first preview image and the second preview image, the object distance from each photographed subject contained in the two images to the user terminal.
S206: the user terminal generates at least one depth-of-field layer from the calculated object distances, each layer containing one photographed subject or multiple photographed subjects at the same object distance from the terminal.
S207: the user terminal obtains, from the preset mapping between object distance and focusing position, the target focusing position corresponding to the depth-of-field layer.
S208: the user terminal focuses on the target focusing position and takes the photograph through the first camera and/or the second camera.
In the method described in Fig. 2, if the current focusing mode is the autofocus mode, then when the user terminal receives an autofocus photographing instruction it responds by acquiring a first preview image through the first camera and a second preview image through the second camera; it calculates, from the two preview images, the object distance from each photographed subject contained in them to the user terminal; it generates at least one depth-of-field layer from the calculated object distances, each layer containing one subject of the two preview images or multiple subjects at the same object distance from the terminal; and it obtains, from the preset correspondence between object distance and focusing position, the target focusing position corresponding to each layer, then focuses on it and shoots through the first camera and/or the second camera. It can be seen that the embodiment of the present invention can set the focusing position automatically and perform focused shooting, sparing the user from setting it manually and improving the user experience.
Refer to Fig. 3, Fig. 3 is the structural representation of a kind of user terminal disclosed in the embodiment of the present invention.Wherein, the user terminal shown in Fig. 3 can comprise acquisition module 301, computing module 302, generation module 303 and Focusing module 304.Wherein:
Acquisition module 301, for when receiving auto-focusing shooting instruction, responding described auto-focusing shooting instruction, obtaining the first preview image, and obtain the second preview image by second camera by the first camera.
In the embodiment of the present invention, when user terminal receives auto-focusing shooting instruction, the acquisition module 301 of user terminal, by this auto-focusing of response shooting instruction, obtains the first preview image by the first camera, and obtains the second preview image by second camera.Wherein, this user terminal can include but not limited to that smart mobile phone, panel computer, notebook computer, desktop computer etc. have the user terminal of camera function.The operating system of this user terminal can include but not limited to Android operation system, IOS, Symbian (Saipan) operating system, BlackBerry (blackberry, blueberry) operating system, Windows operating system etc., and the embodiment of the present invention does not limit.
In the embodiment of the present invention, this user terminal is configured with two cameras, and namely user terminal has dual camera.For convenience of describing, these two cameras are called " the first camera " and " second camera ", but what deserves to be explained is, " the first camera " and " second camera " are only for describing object.
In the embodiment of the present invention, optionally, before the user terminal receives the autofocus shooting instruction, the user may select the autofocus mode on the user terminal; having selected it, the user may then input the autofocus shooting instruction. Optionally, the focus modes of the user terminal may include, but are not limited to, a manual focus mode and an autofocus mode. In the manual focus mode the user sets the focus position manually, and the user terminal focuses and shoots at the position the user has set. Optionally, the user may set one or more focus positions on the preview image for the user terminal to focus and shoot at.
Computing module 302 is configured to calculate, from the first preview image and the second preview image, the object distance from the user terminal of each photographed subject included in the two images.
In the embodiment of the present invention, the computing module 302 calculates, from the first and second preview images obtained by the acquisition module 301, the object distance of each photographed subject included in them (that is, the subject's distance from the user terminal). Specifically, the computing module 302 first determines the subjects contained in the first and second preview images, and then calculates the object distance of each subject from the user terminal. Optionally, the computing module 302 may determine only the persons contained in the two preview images and calculate only their object distances; the embodiment of the present invention places no restriction on this. Calculating the object distance of a subject from the user terminal is a well-known technique in the industry, so the embodiment of the present invention does not elaborate on it. For example, if the first preview image contains person A and person B, and the second preview image contains person A, the computing module 302 recognizes that the two preview images contain person A and person B, and calculates the distance of each of them from the user terminal.
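As noted, the object-distance calculation itself is treated as known art. One common approach with a dual camera is stereo triangulation: the subject's disparity between the two preview images, together with the focal length and the camera baseline, yields its distance. A minimal sketch under the pinhole-stereo assumption (the function name and parameter values are illustrative, not taken from the patent):

```python
def object_distance(focal_length_px, baseline_m, disparity_px):
    """Estimate a subject's distance (in metres) from its stereo disparity.

    Assumes rectified cameras: focal length is given in pixels,
    baseline (distance between the two camera centres) in metres.
    """
    if disparity_px <= 0:
        # A subject must appear in both preview images to have a disparity.
        raise ValueError("subject must be visible in both preview images")
    return focal_length_px * baseline_m / disparity_px

# e.g. 1000 px focal length, 2 cm baseline, 10 px disparity -> 2 m away
```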
Generation module 303 is configured to generate at least one depth-of-field layer from the calculated object distances, each depth-of-field layer containing a single photographed subject or multiple photographed subjects at the same object distance from the user terminal.
In the embodiment of the present invention, after the computing module 302 has calculated the distance from the user terminal of each subject included in the first and second preview images, the generation module 303 generates, from the calculated object distances, a depth map for either the first preview image or the second preview image. The depth map is composed of at least one depth-of-field layer. When the generation module 303 generates the depth map for the first preview image, each depth-of-field layer contains either a single subject of the first preview image or multiple subjects of that image at the same object distance from the user terminal; the same holds when the depth map is generated for the second preview image.
For example, if the first preview image contains persons A, B and C and the second preview image contains person A, the generation module 303 may generate a depth map for the first preview image from the calculated object distances. The depth map covers all subjects of the first preview image (persons A, B and C) and is composed of at least one depth-of-field layer. If person A's object distance from the user terminal differs from person B's, while persons B and C are at the same object distance, then the depth map consists of two depth-of-field layers: one containing person A, the other containing persons B and C.
It has been found in practice that when the user terminal focuses on one subject and shoots, other subjects at the same distance are also captured sharply. Therefore, a single focusing operation per depth-of-field layer suffices to capture all subjects of that layer sharply.
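The grouping step above can be sketched as follows; the layer representation and the distance tolerance are assumptions for illustration, not the patent's data structures:

```python
def build_depth_layers(subjects, tolerance=0.05):
    """Group photographed subjects into depth-of-field layers.

    `subjects` is a list of (name, object_distance_m) pairs; subjects whose
    object distances match within `tolerance` metres share one layer.
    """
    layers = []
    for name, dist in sorted(subjects, key=lambda s: s[1]):
        if layers and abs(layers[-1]["distance"] - dist) <= tolerance:
            # Same distance from the terminal: join the previous layer.
            layers[-1]["subjects"].append(name)
        else:
            # New distance: open a new depth-of-field layer.
            layers.append({"distance": dist, "subjects": [name]})
    return layers
```

With the example of persons A, B and C (A at one distance, B and C at another), this produces two layers, matching the two-layer depth map described above.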
The acquisition module 301 is further configured to obtain, according to a preset correspondence between object distance and focus position, the target focus position corresponding to the depth-of-field layer.
In the embodiment of the present invention, after the generation module 303 has generated the depth-of-field layers, the acquisition module 301 obtains, according to the preset correspondence between object distance and focus position, the target focus position corresponding to each layer.
Generally, cameras are classified into fixed-focus, autofocus, optically stabilized, array cameras and so on, with fixed-focus and autofocus cameras being the most widely used. An autofocus camera usually consists of a lens, a motor, a color filter, a base, an image sensor, a motor driver chip, a circuit substrate, a connector, capacitors and other peripheral components.
The components relevant to the embodiment of the present invention are mainly the lens and the motor. The lens collects light and projects it onto the surface of the image sensor; the position of the lens directly determines the sharpness of the captured image. When the lens is at the focus position corresponding to a subject, the image of that subject captured by the user terminal is at its sharpest. The motor drives the lens to the focus position (that is, the focus position corresponding to the subject) to accomplish focusing, so that the lens projects a sharp image onto the image sensor.
In the embodiment of the present invention, the target focus position is the lens position at which the camera captures the subjects contained in a depth-of-field layer most sharply.
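A preset correspondence between object distance and focus position is typically a calibration table mapping distances to lens (motor) positions. A hedged sketch of such a lookup, with made-up calibration values and linear interpolation between entries:

```python
import bisect

# Hypothetical preset correspondence: object distance (m) -> lens position
# (motor steps). Real values would come from per-device calibration.
DISTANCE_TO_FOCUS = [(0.1, 900), (0.3, 700), (1.0, 400), (3.0, 200), (10.0, 50)]

def target_focus_position(object_distance_m):
    """Return the lens position for a depth layer's object distance,
    interpolating linearly between adjacent preset calibration points."""
    dists = [d for d, _ in DISTANCE_TO_FOCUS]
    i = bisect.bisect_left(dists, object_distance_m)
    if i == 0:
        return DISTANCE_TO_FOCUS[0][1]      # closer than the nearest entry
    if i == len(dists):
        return DISTANCE_TO_FOCUS[-1][1]     # farther than the farthest entry
    (d0, p0), (d1, p1) = DISTANCE_TO_FOCUS[i - 1], DISTANCE_TO_FOCUS[i]
    t = (object_distance_m - d0) / (d1 - d0)
    return p0 + t * (p1 - p0)
```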
As an optional implementation, the acquisition module 301 is specifically configured to:
obtain, from the generated depth-of-field layers, a first target depth-of-field layer containing a person;
obtain, according to the preset correspondence between object distance and focus position, the target focus position corresponding to the first target depth-of-field layer.
In this implementation, the generated depth map may consist of many depth-of-field layers. The important layers can therefore be screened out, so that only the focus positions corresponding to the important layers are obtained and the lens is moved to those positions for focusing.
In practical applications, users most often focus and shoot on persons. Therefore, in this implementation, the acquisition module 301 uses face recognition technology to identify whether a depth-of-field layer contains a face; if a layer contains a face, the acquisition module 301 judges that the layer contains a person and obtains that layer as the first target depth-of-field layer.
As an optional implementation, the acquisition module 301 is specifically configured to:
obtain, from the generated depth-of-field layers, a second target depth-of-field layer whose object distance is less than a preset distance;
obtain, according to the preset correspondence between object distance and focus position, the target focus position corresponding to the second target depth-of-field layer.
In practical applications, users are often uninterested in subjects that are too far away. Therefore, in this implementation, the acquisition module 301 may obtain, from the generated depth-of-field layers, the layers whose object distance is less than a preset distance as second target depth-of-field layers, and obtain their corresponding target focus positions according to the preset correspondence between object distance and focus position.
As an optional implementation, the acquisition module 301 is specifically configured to:
obtain, from the generated depth-of-field layers, a third target depth-of-field layer containing more than a preset number of photographed subjects;
obtain, according to the preset correspondence between object distance and focus position, the target focus position corresponding to the third target depth-of-field layer.
In practical applications, in order to save shooting time, focusing and shooting may be limited to the layers containing more subjects. Therefore, in this implementation, the acquisition module 301 may obtain, from the generated depth-of-field layers, the layers containing more than a preset number of subjects as third target depth-of-field layers, and obtain their corresponding target focus positions according to the preset correspondence between object distance and focus position.
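The three optional screening strategies (layers containing a person, layers closer than a preset distance, layers containing more than a preset number of subjects) can be combined in one filter. An illustrative sketch only; the layer field names are hypothetical:

```python
def select_target_layers(layers, max_distance=None, min_subjects=None,
                         require_person=False):
    """Filter depth-of-field layers per the three optional strategies.

    Each layer is a dict like {"distance": 3.5, "subjects": [...],
    "has_face": bool}. Passing None for a criterion disables it.
    """
    selected = []
    for layer in layers:
        if require_person and not layer.get("has_face"):
            continue  # first strategy: keep only layers containing a person
        if max_distance is not None and layer["distance"] >= max_distance:
            continue  # second strategy: drop layers that are too far away
        if min_subjects is not None and len(layer["subjects"]) <= min_subjects:
            continue  # third strategy: keep only subject-rich layers
        selected.append(layer)
    return selected
```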
Focusing module 304 is configured to focus to the target focus position through the first camera and/or the second camera and shoot.
In the embodiment of the present invention, the focusing module 304 is specifically configured to:
move the lens of the first camera and/or the lens of the second camera to the target focus position, and capture a picture through the first camera and/or the second camera.
In the embodiment of the present invention, when the acquisition module 301 obtains a single target focus position, the focusing module 304 moves the lens of the first camera to the target focus position through the motor of the first camera to accomplish focusing, and captures a picture through the first camera after focusing; alternatively, the focusing module 304 moves the lens of the second camera to the target focus position through the motor of the second camera and captures the picture through the second camera after focusing. The embodiment of the present invention places no restriction on which camera is used.
In the embodiment of the present invention, when the acquisition module 301 obtains multiple target focus positions, the focusing module 304 may move the lenses of the first camera and the second camera to different target focus positions simultaneously, focus them separately, and capture pictures through both cameras after focusing, until the pictures corresponding to all target focus positions have been taken. For example, given target focus positions 1, 2, 3 and 4, the focusing module 304 moves the lens of the first camera to target focus position 1 through its motor and captures picture 1 after focusing; at the same time, it moves the lens of the second camera to target focus position 2 through its motor and captures picture 2 after focusing. After the first camera has taken picture 1, the focusing module 304 continues by moving the first camera's lens to target focus position 3 through its motor and capturing picture 3 after focusing; after the second camera has taken picture 2, the focusing module 304 moves the second camera's lens to target focus position 4 through its motor and captures picture 4 after focusing. When multiple target focus positions are obtained, this parallel scheme increases the picture capture rate and improves the user experience.
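The alternating assignment in the example above amounts to distributing the target focus positions between the two cameras in round-robin fashion. A sketch under that assumption (camera names are illustrative):

```python
def schedule_captures(target_positions):
    """Alternate target focus positions between the two cameras so that
    each camera works through its own queue while the other shoots."""
    plan = {"first_camera": [], "second_camera": []}
    for i, pos in enumerate(target_positions):
        cam = "first_camera" if i % 2 == 0 else "second_camera"
        plan[cam].append(pos)
    return plan
```

With four target positions, the first camera shoots at positions 1 and 3 while the second camera shoots at positions 2 and 4, mirroring the example in the text.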
Referring also to Fig. 4, Fig. 4 is a structural diagram of another user terminal disclosed in an embodiment of the present invention. The user terminal shown in Fig. 4 is obtained by optimizing the user terminal shown in Fig. 3: in addition to all the modules of the user terminal of Fig. 3, it may further comprise a receiving module 305, a starting module 306 and a judging module 307, wherein:
Receiving module 305 is configured to receive a camera start instruction.
In the embodiment of the present invention, when the user wishes to shoot, the user sends a camera start instruction to the user terminal; after the receiving module 305 receives the camera start instruction, the camera application is started. Optionally, the user may input the camera start instruction by operating a hardware button or an application icon provided by the user terminal. Optionally, the camera start instruction may also be sent by a third-party application, for example a picture-sharing or chat application; the embodiment of the present invention places no restriction on this.
Starting module 306 is configured to respond to the start instruction by starting the first camera and the second camera.
In the embodiment of the present invention, after the receiving module 305 receives the camera start instruction, the starting module 306 responds to the instruction by starting the first camera and the second camera.
In the embodiment of the present invention, the user terminal is configured with two cameras, that is, the user terminal has a dual camera. For convenience of description, the two cameras are called the "first camera" and the "second camera", but it should be noted that these names serve descriptive purposes only.
Judging module 307 is configured to judge whether the current focus mode is the autofocus mode and, upon judging that it is, to trigger the acquisition module 301 to perform the step of, upon receiving an autofocus shooting instruction, responding to the instruction by obtaining the first preview image through the first camera and the second preview image through the second camera.
In the embodiment of the present invention, after the starting module 306 has started the first camera and the second camera, the judging module 307 judges whether the current focus mode is the autofocus mode.
In the embodiment of the present invention, the focus modes of the user terminal may include a manual focus mode and an autofocus mode. In the manual focus mode the user sets the focus position manually, and the user terminal focuses and shoots at the position the user has set. Optionally, the user may set one or more focus positions on the preview image for the user terminal to focus and shoot at.
When the judging module 307 judges that the current focus mode is not the autofocus mode, the user terminal detects whether the user has input a manual focus position setting instruction.
In the embodiment of the present invention, providing the user terminal with multiple focus modes helps the user set the focus position better and offers more ways of setting it, which helps improve the user experience.
In the user terminals described in Fig. 3 and Fig. 4, when the user terminal receives an autofocus shooting instruction, the acquisition module responds to the instruction by obtaining a first preview image through the first camera and a second preview image through the second camera; the computing module calculates, from the two preview images, the object distance from the user terminal of each photographed subject they contain; the generation module generates at least one depth-of-field layer from the calculated object distances, each layer containing a single subject or multiple subjects at the same object distance from the user terminal; the acquisition module obtains, according to a preset correspondence between object distance and focus position, the target focus position corresponding to each depth-of-field layer; and the focusing module focuses to the target focus position and shoots through the first camera and/or the second camera. As can be seen, the embodiment of the present invention sets the focus position and performs focusing and shooting automatically, sparing the user from setting the focus position manually and improving the user experience.
It should be noted that each of the above embodiments has its own emphasis; for parts not described in detail in one embodiment, refer to the related descriptions of the other embodiments. Furthermore, those skilled in the art should understand that the embodiments described in this specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
The steps of the methods of the embodiments of the present invention may be reordered, combined or deleted according to actual needs.
The modules of the user terminals of the embodiments of the present invention may be combined, divided or deleted according to actual needs.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods of the above embodiments may be completed by a program instructing the relevant hardware of the user terminal device. The program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc and the like.
The photographing method and user terminal disclosed in the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the above description of the embodiments is only meant to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art may, in accordance with the idea of the present invention, make changes to the specific implementations and the scope of application. In summary, this specification should not be construed as limiting the present invention.

Claims (10)

1. A photographing method, characterized in that the method comprises:
upon receiving an autofocus shooting instruction, responding to the autofocus shooting instruction by obtaining a first preview image through a first camera and a second preview image through a second camera;
calculating, from the first preview image and the second preview image, the object distance from a user terminal of each photographed subject included in the two images;
generating at least one depth-of-field layer from the calculated object distances, the depth-of-field layer containing a single photographed subject or multiple photographed subjects at the same object distance from the user terminal;
obtaining, according to a preset correspondence between object distance and focus position, a target focus position corresponding to the depth-of-field layer;
focusing to the target focus position through the first camera and/or the second camera and shooting.
2. The method according to claim 1, characterized in that obtaining, according to the preset correspondence between object distance and focus position, the target focus position corresponding to the depth-of-field layer comprises:
obtaining, from the generated depth-of-field layers, a first target depth-of-field layer containing a person;
obtaining, according to the preset correspondence between object distance and focus position, the target focus position corresponding to the first target depth-of-field layer.
3. The method according to claim 1, characterized in that obtaining, according to the preset correspondence between object distance and focus position, the target focus position corresponding to the depth-of-field layer comprises:
obtaining, from the generated depth-of-field layers, a second target depth-of-field layer whose object distance is less than a preset distance;
obtaining, according to the preset correspondence between object distance and focus position, the target focus position corresponding to the second target depth-of-field layer.
4. The method according to claim 1, characterized in that obtaining, according to the preset correspondence between object distance and focus position, the target focus position corresponding to the depth-of-field layer comprises:
obtaining, from the generated depth-of-field layers, a third target depth-of-field layer containing more than a preset number of photographed subjects;
obtaining, according to the preset correspondence between object distance and focus position, the target focus position corresponding to the third target depth-of-field layer.
5. The method according to any one of claims 1 to 4, characterized in that the method further comprises:
receiving a camera start instruction;
responding to the start instruction by starting the first camera and the second camera;
judging whether the current focus mode is an autofocus mode;
if the judgment result is yes, performing the step of, upon receiving an autofocus shooting instruction, responding to the autofocus shooting instruction by obtaining the first preview image through the first camera and the second preview image through the second camera.
6. A user terminal, characterized in that the user terminal comprises:
an acquisition module, configured to, upon receiving an autofocus shooting instruction, respond to the autofocus shooting instruction by obtaining a first preview image through a first camera and a second preview image through a second camera;
a computing module, configured to calculate, from the first preview image and the second preview image, the object distance from the user terminal of each photographed subject included in the two images;
a generation module, configured to generate at least one depth-of-field layer from the calculated object distances, the depth-of-field layer containing a single photographed subject or multiple photographed subjects at the same object distance from the user terminal;
the acquisition module being further configured to obtain, according to a preset correspondence between object distance and focus position, a target focus position corresponding to the depth-of-field layer;
a focusing module, configured to focus to the target focus position through the first camera and/or the second camera and shoot.
7. The user terminal according to claim 6, characterized in that the acquisition module is specifically configured to:
obtain, from the generated depth-of-field layers, a first target depth-of-field layer containing a person;
obtain, according to the preset correspondence between object distance and focus position, the target focus position corresponding to the first target depth-of-field layer.
8. The user terminal according to claim 6, characterized in that the acquisition module is specifically configured to:
obtain, from the generated depth-of-field layers, a second target depth-of-field layer whose object distance is less than a preset distance;
obtain, according to the preset correspondence between object distance and focus position, the target focus position corresponding to the second target depth-of-field layer.
9. The user terminal according to claim 6, characterized in that the acquisition module is specifically configured to:
obtain, from the generated depth-of-field layers, a third target depth-of-field layer containing more than a preset number of photographed subjects;
obtain, according to the preset correspondence between object distance and focus position, the target focus position corresponding to the third target depth-of-field layer.
10. The user terminal according to any one of claims 6 to 9, characterized in that the user terminal further comprises:
a receiving module, configured to receive a camera start instruction;
a starting module, configured to respond to the start instruction by starting the first camera and the second camera;
a judging module, configured to judge whether the current focus mode is an autofocus mode and, upon judging that it is, to trigger the acquisition module to perform the step of, upon receiving an autofocus shooting instruction, responding to the autofocus shooting instruction by obtaining the first preview image through the first camera and the second preview image through the second camera.
CN201510404513.3A 2015-07-10 2015-07-10 A kind of image pickup method and user terminal Active CN105578026B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510404513.3A CN105578026B (en) 2015-07-10 2015-07-10 A kind of image pickup method and user terminal
PCT/CN2015/085884 WO2017008353A1 (en) 2015-07-10 2015-07-31 Capturing method and user terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510404513.3A CN105578026B (en) 2015-07-10 2015-07-10 A kind of image pickup method and user terminal

Publications (2)

Publication Number Publication Date
CN105578026A true CN105578026A (en) 2016-05-11
CN105578026B CN105578026B (en) 2017-11-17

Family

ID=55887638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510404513.3A Active CN105578026B (en) 2015-07-10 2015-07-10 A kind of image pickup method and user terminal

Country Status (2)

Country Link
CN (1) CN105578026B (en)
WO (1) WO2017008353A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111209782B (en) * 2018-11-22 2024-04-16 中国银联股份有限公司 Recognition method and recognition system for abnormal lamp of equipment in machine room
CN112995496B (en) * 2019-12-18 2022-07-05 青岛海信移动通信技术股份有限公司 Video recording method and mobile terminal

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100277619A1 (en) * 2009-05-04 2010-11-04 Lawrence Scarff Dual Lens Digital Zoom
CN103246130A (en) * 2013-04-16 2013-08-14 广东欧珀移动通信有限公司 Focusing method and device
CN104243838A (en) * 2014-09-10 2014-12-24 广东欧珀移动通信有限公司 Photographing method and device based on flash lamp
CN104333700A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Image blurring method and image blurring device
CN104349063A (en) * 2014-10-27 2015-02-11 东莞宇龙通信科技有限公司 Method, device and terminal for controlling camera shooting
CN104639839A (en) * 2015-03-16 2015-05-20 深圳市欧珀通信软件有限公司 Method and device for shooting

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101621619A (en) * 2008-07-04 2010-01-06 华晶科技股份有限公司 Shooting method for simultaneously focusing various faces and digital image capture device thereof
US20130057655A1 (en) * 2011-09-02 2013-03-07 Wen-Yueh Su Image processing system and automatic focusing method
KR101888956B1 (en) * 2012-05-31 2018-08-17 엘지이노텍 주식회사 Camera module and auto-focusing method thereof
US9578224B2 (en) * 2012-09-10 2017-02-21 Nvidia Corporation System and method for enhanced monoimaging

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959555A (en) * 2016-06-02 2016-09-21 广东欧珀移动通信有限公司 Shooting mode automatic adjustment method and device and mobile terminal
CN105959555B (en) * 2016-06-02 2019-07-19 Oppo广东移动通信有限公司 Screening-mode automatic adjusting method, device and mobile terminal
CN106506958A (en) * 2016-11-15 2017-03-15 维沃移动通信有限公司 A kind of method shot using mobile terminal and mobile terminal
CN106506958B (en) * 2016-11-15 2020-04-10 维沃移动通信有限公司 Method for shooting by adopting mobile terminal and mobile terminal
CN108668069B (en) * 2017-03-27 2020-04-14 华为技术有限公司 Image background blurring method and device
WO2018176929A1 (en) * 2017-03-27 2018-10-04 华为技术有限公司 Image background blurring method and apparatus
CN108668069A (en) * 2017-03-27 2018-10-16 华为技术有限公司 A kind of image background weakening method and device
CN109246362A (en) * 2017-04-28 2019-01-18 中兴通讯股份有限公司 A kind of image processing method and mobile terminal
CN109246362B (en) * 2017-04-28 2021-03-16 中兴通讯股份有限公司 Image processing method and mobile terminal
CN107911619A (en) * 2017-12-28 2018-04-13 上海传英信息技术有限公司 The camera system and its image capture method of a kind of intelligent terminal
CN108632536A (en) * 2018-07-31 2018-10-09 Oppo广东移动通信有限公司 A kind of camera control method and device, terminal, storage medium
CN108632536B (en) * 2018-07-31 2020-10-30 Oppo广东移动通信有限公司 Camera control method and device, terminal and storage medium
CN109064390A (en) * 2018-08-01 2018-12-21 Oppo(重庆)智能科技有限公司 Image processing method, image processing apparatus and mobile terminal
CN109064390B (en) * 2018-08-01 2023-04-07 Oppo(重庆)智能科技有限公司 Image processing method, image processing device and mobile terminal
CN110418056A (en) * 2019-07-16 2019-11-05 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN110661970A (en) * 2019-09-03 2020-01-07 RealMe重庆移动通信有限公司 Photographing method and device, storage medium and electronic equipment
CN110661970B (en) * 2019-09-03 2021-08-24 RealMe重庆移动通信有限公司 Photographing method and device, storage medium and electronic equipment
CN111083375A (en) * 2019-12-27 2020-04-28 维沃移动通信有限公司 Focusing method and electronic equipment
CN111083375B (en) * 2019-12-27 2021-06-29 维沃移动通信有限公司 Focusing method and electronic equipment
CN111310567A (en) * 2020-01-16 2020-06-19 中国建设银行股份有限公司 Face recognition method and device under multi-person scene
CN111310567B (en) * 2020-01-16 2023-06-23 中国建设银行股份有限公司 Face recognition method and device in multi-person scene
CN111885307A (en) * 2020-07-30 2020-11-03 努比亚技术有限公司 Depth-of-field shooting method and device and computer readable storage medium
CN111885307B (en) * 2020-07-30 2022-07-22 努比亚技术有限公司 Depth-of-field shooting method and device and computer readable storage medium
WO2024007654A1 (en) * 2022-07-06 2024-01-11 惠州Tcl移动通信有限公司 Camera focusing method and apparatus, electronic device, and computer readable storage medium

Also Published As

Publication number Publication date
WO2017008353A1 (en) 2017-01-19
CN105578026B (en) 2017-11-17

Similar Documents

Publication Publication Date Title
CN105578026A (en) Photographing method and user terminal
CN103051841B (en) Exposure time control method and device
CN107950018B (en) Image generation method and system, and computer readable medium
CN105141832B (en) Focusing abnormality elimination method and mobile terminal
CN104410783A (en) Focusing method and terminal
CN104994274A (en) Rapid photographing method based on mobile terminal and mobile terminal
CN107787463B (en) Optimized focus stack capture
US10516819B2 (en) Electronic device with multiple lenses and lens switching method
CN105227838A (en) Image processing method and mobile terminal
CN104935698A (en) Photographing method of smart terminal, photographing device and smart phone
CN104811613A (en) Camera focusing method
CN106031148B (en) Imaging device, method of auto-focusing in an imaging device and corresponding computer program
EP3062513A1 (en) Video apparatus and photography method thereof
CN105007422A (en) Phase focusing method and user terminal
CN104994287A (en) Camera shooting method based on wide-angle camera and mobile terminal
CN103402058A (en) Shot image processing method and device
CN112352417B (en) Focusing method of shooting device, system and storage medium
CN104811612A (en) Terminal
CN104601879A (en) Focusing method
CN110830726B (en) Automatic focusing method, device, equipment and storage medium
CN105516609A (en) Shooting method and device
CN105120153A (en) Image photographing method and device
CN104994288A (en) Shooting method and user terminal
CN106161970B (en) Image pickup method, device and mobile terminal
CN104935709A (en) Method and device for achieving lens compatibility

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant