CN104092946B - Image capture method and image capture apparatus - Google Patents

Image capture method and image capture apparatus

Info

Publication number
CN104092946B
CN104092946B (application CN201410357259.1A)
Authority
CN
China
Prior art keywords
image
scene
target
information
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410357259.1A
Other languages
Chinese (zh)
Other versions
CN104092946A (en)
Inventor
周梁
于魁飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhigu Ruituo Technology Services Co Ltd
Original Assignee
Beijing Zhigu Ruituo Technology Services Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhigu Ruituo Technology Services Co Ltd
Priority to CN201410357259.1A
Publication of CN104092946A
Application granted
Publication of CN104092946B
Legal status: Active

Landscapes

  • Studio Devices (AREA)

Abstract

An image capture method and an image capture apparatus are provided in the embodiments of the present application. The method includes: sending first image information of a target scene and a desired focusing position to at least one external device; obtaining second image information related to the target scene, sent in response to the first image information and the desired focusing position; and capturing, according to the second image information, a target image of the target scene having a desired scene depth. By obtaining from at least one external device, based on information related to the target scene, information that can assist an image capture device in capturing an image with a desired scene depth, the method and apparatus of the embodiments of the present application extend, to a certain extent, the capability of image capture devices with limited hardware.

Description

Image capture method and image capture apparatus
Technical field
The present application relates to the field of image processing technologies, and in particular, to an image capture method and an image capture apparatus.
Background
To meet users' demand for taking pictures anytime and anywhere, camera modules are mounted on more and more portable devices. While the small size of a portable device offers convenience to the user, factors such as volume also limit the hardware capability of its camera module. Compared with professional capture equipment such as a single-lens reflex (SLR) camera, the image effects that a portable device can capture are limited. For example, because of the limitations of lens focal length and aperture, even after accurate focusing a portable device can hardly achieve the shallow depth-of-field imaging effect that professional capture equipment such as an SLR camera can achieve, and it also lacks a corresponding shallow depth-of-field preview effect before shooting.
Summary
An objective of the present application is to provide an image capture solution.
According to a first aspect of the present application, an image capture method is provided. The method includes:
sending first image information of a target scene and a desired focusing position to at least one external device;
obtaining second image information related to the target scene, sent in response to the first image information and the desired focusing position; and
capturing, according to the second image information, a target image of the target scene having a desired scene depth.
According to a second aspect of the present application, an image capture method is provided. The method includes:
sending first image information of a target scene to at least one external device;
obtaining second image information related to the target scene, sent in response to the first image information; and
capturing, according to the second image information and a desired focusing position, a target image of the target scene having a desired scene depth.
According to a third aspect of the present application, an image capture apparatus is provided. The apparatus includes:
a sending module, configured to send first image information of a target scene and a desired focusing position to at least one external device;
a first acquisition module, configured to obtain second image information related to the target scene, sent in response to the first image information and the desired focusing position; and
an acquisition module, configured to capture, according to the second image information, a target image of the target scene having a desired scene depth.
According to a fourth aspect of the present application, an image capture apparatus is provided. The apparatus includes:
a sending module, configured to send first image information of a target scene to at least one external device;
a first acquisition module, configured to obtain second image information related to the target scene, sent in response to the first image information; and
an acquisition module, configured to capture, according to the second image information and a desired focusing position, a target image of the target scene having a desired scene depth.
By obtaining from at least one external device, based on information related to the target scene, information that can assist an image capture device in capturing an image with a desired scene depth, the method and apparatus of the embodiments of the present application extend, to a certain extent, the capability of image capture devices with limited hardware.
Description of the drawings
Fig. 1 is a flowchart of the image capture method according to a first embodiment of the present application;
Fig. 2 is a flowchart of the image capture method according to a second embodiment of the present application;
Fig. 3 is a schematic structural diagram of a first implementation of the image capture apparatus according to the first embodiment of the present application;
Fig. 4 is a schematic structural diagram of a second implementation of the image capture apparatus according to the first embodiment of the present application;
Fig. 5 is a schematic structural diagram of the acquisition module in the image capture apparatus according to the first embodiment of the present application;
Fig. 6 is a schematic structural diagram of the processing unit of the acquisition module in the image capture apparatus according to the first embodiment of the present application;
Fig. 7 is a schematic structural diagram of a third implementation of the image capture apparatus according to the first embodiment of the present application;
Fig. 8 is a schematic structural diagram of a first implementation of the image capture apparatus according to the second embodiment of the present application;
Fig. 9 is a schematic structural diagram of a second implementation of the image capture apparatus according to the second embodiment of the present application;
Fig. 10 is a schematic structural diagram of the acquisition module in the image capture apparatus according to the second embodiment of the present application;
Fig. 11 is a schematic structural diagram of the processing unit of the acquisition module in the image capture apparatus according to the second embodiment of the present application;
Fig. 12 is a schematic structural diagram of a third implementation of the image capture apparatus according to the second embodiment of the present application;
Fig. 13 is a schematic structural diagram of a fourth implementation of the image capture apparatus according to the first embodiment of the present application;
Fig. 14 is a schematic structural diagram of a fourth implementation of the image capture apparatus according to the second embodiment of the present application.
Detailed description
The specific embodiments of the present application are described in further detail below with reference to the accompanying drawings (the same reference numerals in the drawings denote the same elements) and embodiments. The following embodiments are used to illustrate the present application, but not to limit the scope of the present application.
Those skilled in the art will understand that terms such as "first" and "second" in the present application are only used to distinguish different steps, devices, modules, and the like, and represent neither any particular technical meaning nor a necessary logical order between them.
The method and apparatus of the embodiments of the present application are proposed mainly for image capture devices that, owing to limitations of their own hardware (for example, the lens), cannot preview or capture images with the effect desired by the user before shooting. Such a device may be any device with an image capture function, for example, a mobile phone, a tablet computer, smart glasses, or another wearable device. In the embodiments of the present application, the desired image effect is in particular a shallower depth of field at the desired focusing position (an image with a shallower depth-of-field effect at the focusing position is referred to herein as a "shallow depth image").
As shown in Fig. 1, the image capture method of the first embodiment of the present application includes:
S120. Send first image information of a target scene and a desired focusing position to at least one external device.
In the method of this embodiment, if the image capture device cannot capture an image with the desired effect, it may, before capturing the image of the target scene, send the first image information of the target scene and the desired focusing position to at least one external device. The first image information is information related to the target scene and the device that the image capture device can obtain at its current capture position. For example, the first image information may include at least one of the following: current time information (including the season, the specific time, and the like), the capture position, current environment information (including weather, illumination, and the like), the orientation of the current device, and capture parameters of the current device (for example, the focal length).
For the image capture device, the external device is any device other than the image capture device itself, for example, another image capture device, a device without an image capture capability, or a server computer.
S140. Obtain second image information related to the target scene, sent in response to the first image information and the desired focusing position.
In the method of this embodiment, in response to receiving the first image information and the desired focusing position, the at least one external device can obtain, locally or from the Internet and according to the first image information and the desired focusing position, a large number of images of the target scene related to the first image information. These images include images whose focusing positions differ from the desired focusing position but whose capture position, angle, and the like are the same, and from these images enough information can be obtained to generate a shallow depth image at the desired focusing position. The second image information is such information, required for achieving the desired effect, and is used to assist the image capture device in capturing an image with the desired scene depth.
S160. Capture, according to the second image information, a target image of the target scene having the desired scene depth.
By obtaining from at least one external device, based on information related to the target scene, information that can assist the image capture device in capturing an image with the desired scene depth, the method of this embodiment of the present application extends, to a certain extent, the capability of image capture devices with limited hardware.
In a possible implementation of this embodiment, the first image information can be obtained by previewing the target scene. Accordingly, the method of this embodiment further includes:
obtaining a first preview image of the target scene.
The first image information related to the first preview image can be obtained by capturing the first preview image of the target scene with the image capture device. The first preview image, captured with the existing capture capability of the image capture device, has a first scene depth and a first focusing position. By previewing the image of the target scene, the user can determine the desired focusing position, for example by tapping a position on the preview image. Alternatively, the first focusing position is the desired focusing position. Accordingly, the first image information may include: the capture time of the first preview image, the capture position of the first preview image, the environment information when the first preview image was captured, the orientation of the device when the first preview image was captured, the capture parameters of the image capture device, and the like.
To capture the desired shallow depth image, in a possible implementation, the second image information may include: at least one piece of second scene depth information of the target scene at the desired focusing position, where at least one second scene depth corresponding to the at least one piece of second scene depth information is different from the first scene depth.
In this implementation, step S160 further includes:
obtaining a first image of the target scene, where the focusing position of the first image is the desired focusing position and the scene depth of the first image is the first scene depth.
That is, according to the first preview image and the desired focusing position, the image capture device can be used to capture a first image of the target scene at the desired focusing position, and the first image is obtained.
processing the first image according to the at least one piece of second scene depth information, to obtain the target image of the target scene having the desired scene depth.
After the first image is obtained, it is processed according to the second scene depth information obtained from the at least one external device, so that the target image at the desired focusing position and with the desired scene depth is obtained. Generating a corresponding image according to scene depth information and a focusing position is a mature technique in the art, and is not described in detail here.
In a possible implementation, the second scene depth information may be in the form of a depth map (DepthMap), where each pixel value of the depth map indicates the distance between a point in the target scene and the image capture device.
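By way of a non-limiting illustration only (an editorial sketch of one possible processing step, assuming a per-pixel Gaussian blur driven by the depth map; the patent does not prescribe this particular algorithm), a sharp first image can be re-rendered into a shallow depth image by blurring pixels whose depth differs from the depth at the focusing position:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_shallow_depth(image, depth_map, focus_xy, max_sigma=8.0, levels=6):
    """Blur an HxWx3 image per pixel according to |depth - depth at focus point| (illustrative only)."""
    focus_depth = depth_map[focus_xy[1], focus_xy[0]]        # depth at the desired focusing position (x, y)
    diff = np.abs(depth_map - focus_depth)
    diff = diff / (diff.max() + 1e-6)                        # 0 = on the focal plane, 1 = farthest from it
    # Pre-blur the image at a few discrete strengths, then pick one level per pixel.
    sigmas = np.linspace(0.0, max_sigma, levels)
    stack = [image if s == 0 else gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas]
    idx = np.clip((diff * (levels - 1)).round().astype(int), 0, levels - 1)
    out = np.empty_like(image)
    for k in range(levels):
        mask = idx == k
        out[mask] = stack[k][mask]
    return out
```

In such a sketch, a larger max_sigma roughly corresponds to simulating a wider aperture, that is, a shallower scene depth.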
In a possible implementation, at least one second preview image with a corresponding scene depth can be generated according to the at least one piece of second scene depth information, for the user to select the desired scene depth. Accordingly, the processing of the first image according to the at least one piece of second scene depth information to obtain the target image of the target scene having the desired scene depth further includes:
generating the at least one second preview image according to at least one scene depth corresponding to the at least one piece of second scene depth information; that is, generating at least one second preview image of the target scene, where each second preview image has the desired focusing position and its scene depth is one of the scene depths corresponding to the at least one piece of second scene depth information;
displaying the at least one second preview image of the target scene;
determining a target preview image from the at least one second preview image, where the scene depth of the target preview image is the desired scene depth; for example, the target preview image may be determined according to an instruction of the user; and
processing the first image according to the target preview image, to obtain the target image; that is, the first image is processed according to the scene depth of the target preview image, so that the processed image has the desired focusing position and the scene depth of the target preview image.
In addition, the method of this embodiment further includes the step of:
determining the desired focusing position.
The desired focusing position can be determined automatically or according to an instruction of the user, for example automatically by means of face recognition. The instruction of the user can be given in any appropriate manner, such as a touch instruction, a gesture instruction, or a voice instruction.
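By way of a non-limiting illustration of the automatic case only (the OpenCV-based routine below is an editorial assumption; the text mentions face recognition but prescribes no particular implementation), the desired focusing position could be set to the centre of the largest detected face in the preview image:

```python
import cv2

def auto_focus_position(preview_bgr):
    """Return (x, y) of the largest detected face centre, or None if no face is found (illustrative)."""
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                        # fall back to a touch/gesture/voice instruction
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])     # largest detected face
    return (x + w // 2, y + h // 2)
```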
After the target image is captured, the method of this embodiment further includes:
displaying the target image.
In addition, the present application further provides another image capture method. As shown in Fig. 2, the image capture method of the second embodiment of the present application includes the following steps:
S220. Send first image information of a target scene to at least one external device.
In the method of this embodiment, if the image capture device cannot capture an image with the desired effect, it may, before capturing the image of the target scene, send the first image information of the target scene to at least one external device. The first image information is information related to the target scene and the device that the image capture device can obtain at its current capture position. For example, the first image information may include at least one of the following: current time information (including the season, the specific time, and the like), the capture position, current environment information (including weather, illumination, and the like), the orientation of the current device, and capture parameters of the current device (for example, the focal length).
For the image capture device, the external device is any device other than the image capture device itself, for example, another image capture device, a device without an image capture capability, or a server computer.
S240. Obtain second image information related to the target scene, sent in response to the first image information.
In the method of this embodiment, in response to receiving the first image information, the at least one external device obtains, locally or from the Internet and according to the first image information, a large number of images of the target scene related to the first image information. These images include images with the same capture position and the same viewing angle but various focusing positions, and from these images enough information can be obtained to generate a shallow depth image. The second image information is such information, required for achieving the desired effect, and is used to assist the image capture device in capturing an image with the desired scene depth.
S260. Capture, according to the second image information and a desired focusing position, a target image of the target scene having the desired scene depth.
By obtaining from at least one external device, based on information related to the target scene, information that can assist the image capture device in capturing an image with the desired scene depth, the method of this embodiment of the present application extends, to a certain extent, the capability of image capture devices with limited hardware.
In a possible implementation of this embodiment, the first image information can be obtained by previewing the target scene. Accordingly, the method of this embodiment further includes:
obtaining a first preview image of the target scene.
The first image information related to the first preview image can be obtained by capturing the first preview image of the target scene with the image capture device. The first preview image, captured with the existing capture capability of the image capture device, has a first scene depth and a first focusing position. Accordingly, the first image information may include: the capture time of the first preview image, the capture position of the first preview image, the environment information when the first preview image was captured, the orientation of the device when the first preview image was captured, the capture parameters of the image capture device, and the like.
To capture the desired shallow depth image, in a possible implementation, the second image information may include: at least one second image related to the first preview image, where the at least one second image has at least one second focusing position different from the first focusing position. From these images, the depth information required to achieve different scene depths at different focusing positions can be computed. In this implementation, step S260 further includes:
obtaining, according to the at least one second image, at least one piece of second scene depth information of the target scene at the desired focusing position. Computing the scene depth at a given focusing position from multiple images is a mature technique in the art, and is not described in detail here.
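By way of a non-limiting illustration only (an editorial depth-from-focus sketch, under the assumption that the second images form a focus stack of the same view with known focus distances; the patent does not prescribe this particular computation), per-pixel scene depth can be estimated by finding, for each pixel, the image in the stack where local sharpness peaks:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(focus_stack, focus_distances_m, window=9):
    """Estimate a per-pixel depth map from grayscale images focused at known distances (illustrative).

    focus_stack: list of HxW float arrays of the same scene at different focusing positions.
    focus_distances_m: focus distance (in metres) of each image in the stack.
    """
    sharpness = []
    for img in focus_stack:
        # Local sharpness: smoothed squared Laplacian response.
        response = laplace(img.astype(np.float64)) ** 2
        sharpness.append(uniform_filter(response, size=window))
    sharpness = np.stack(sharpness, axis=0)            # (N, H, W)
    best = np.argmax(sharpness, axis=0)                # index of the sharpest image per pixel
    depth_map = np.asarray(focus_distances_m)[best]    # map index -> focus distance
    return depth_map
```

The resulting depth map can then serve as the second scene depth information used in the subsequent processing step.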
obtaining a third image of the target scene, where the focusing position of the third image is the desired focusing position and the scene depth of the third image is the first scene depth.
That is, according to the preview image and the desired focusing position, the image capture device is used to capture a third image of the target scene at the desired focusing position.
processing the third image according to the at least one piece of second scene depth information, to obtain the target image of the target scene having the desired scene depth.
After the third image is obtained, it is processed according to the at least one piece of second scene depth information, so that the target image at the desired focusing position and with the desired scene depth is obtained.
When the third image is processed according to the at least one piece of second scene depth information to obtain the target image of the target scene having the desired scene depth, at least one second preview image with a corresponding scene depth can be generated according to the at least one piece of second scene depth information, for the user to select the desired scene depth. Accordingly, the processing of the third image according to the at least one piece of second scene depth information to obtain the target image of the target scene having the desired scene depth further includes:
generating the at least one second preview image according to the scene depth corresponding to the at least one piece of second scene depth information; that is, generating at least one second preview image of the target scene, where each second preview image has the desired focusing position and its scene depth is one of the scene depths corresponding to the at least one piece of second scene depth information;
displaying the at least one second preview image of the target scene;
determining a target preview image from the at least one second preview image, where the scene depth of the target preview image is the desired scene depth; for example, the target preview image may be determined according to an instruction of the user; and
processing the third image according to the target preview image, to obtain the target image; that is, the third image is processed according to the scene depth of the target preview image, so that the processed image has the desired focusing position and the scene depth of the target preview image.
In addition, the method of this embodiment further includes:
determining the desired focusing position.
The desired focusing position can be determined automatically or according to an instruction of the user, for example automatically by means of face recognition. The instruction of the user can be given in any appropriate manner, such as a touch instruction, a gesture instruction, or a voice instruction.
After the target image is captured, the method of this embodiment further includes:
displaying the target image.
In conclusion, the methods of the first and second embodiments of the present application obtain, from a device external to the image capture device, the information necessary to capture an image with the desired effect; the desired image effect can therefore be obtained without adding extra hardware to the device, which extends the capability of the image capture device to a certain extent.
Those skilled in the art will understand that, in the above methods of the specific embodiments of the present application, the sequence numbers of the steps do not imply an execution order; the execution order of the steps should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the specific embodiments of the present application.
In addition, an embodiment of the present application further provides a computer-readable medium, including computer-readable instructions that, when executed, perform the operations of the steps of the method in the embodiment shown in Fig. 1 above.
An embodiment of the present application further provides another computer-readable medium, including computer-readable instructions that, when executed, perform the operations of the steps of the method in the embodiment shown in Fig. 2 above.
The present application further provides an image capture apparatus, which may belong, in part or in whole, to an image capture device. If the apparatus belongs only in part to the image capture device, it includes a communication module that communicates with the image capture device; the description of this communication module is omitted below. As shown in Fig. 3, the image capture apparatus 300 of the first embodiment of the present application includes:
a sending module 320, configured to send first image information of a target scene and a desired focusing position to at least one external device.
In the apparatus of this embodiment, if the image capture device cannot capture an image with the desired effect, the sending module 320 may, before the image of the target scene is captured, send the first image information of the target scene and the desired focusing position to at least one external device. The first image information is information related to the target scene and the device that the image capture device can obtain at its current capture position. For example, the first image information may include at least one of the following: current time information (including the season, the specific time, and the like), the capture position, current environment information (including weather, illumination, and the like), the orientation of the current device, and capture parameters of the current device (for example, the focal length).
For the image capture device, the external device is any device other than the image capture device itself, for example, another image capture device, a device without an image capture capability, or a server computer.
a first acquisition module 340, configured to obtain second image information related to the target scene, sent in response to the first image information and the desired focusing position.
In the apparatus of this embodiment, in response to receiving the first image information and the desired focusing position, the at least one external device can obtain, locally or from the Internet and according to the first image information and the desired focusing position, a large number of images of the target scene related to the first image information. These images include images whose focusing positions differ from the desired focusing position but whose capture position, angle, and the like are the same. Using these images, the apparatus 300 of this embodiment can obtain enough information to generate a shallow depth image at the desired focusing position. The second image information is such information, required for achieving the desired effect, and is used to assist the image capture device in capturing an image with the desired scene depth.
an acquisition module 360, configured to capture, according to the second image information, a target image of the target scene having the desired scene depth.
By obtaining from at least one external device, based on information related to the target scene, information that can assist the image capture device in capturing an image with the desired scene depth, the apparatus of this embodiment of the present application extends, to a certain extent, the capability of image capture devices with limited hardware.
In a possible implementation of this embodiment, the first image information can be obtained by previewing the target scene. Accordingly, as shown in Fig. 4, the apparatus 300 of this embodiment further includes:
a second acquisition module 310, configured to obtain a first preview image of the target scene.
The first image information related to the first preview image can be obtained by capturing the first preview image of the target scene with the image capture device. The first preview image, captured with the existing capture capability of the image capture device, has a first scene depth and a first focusing position. By previewing the image of the target scene, the user can determine the desired focusing position, for example by tapping a position on the preview image. Alternatively, the first focusing position is the desired focusing position. Accordingly, the first image information may include: the capture time of the first preview image, the capture position of the first preview image, the environment information when the first preview image was captured, the orientation of the device when the first preview image was captured, the capture parameters of the image capture device, and the like.
To capture the desired shallow depth image, in a possible implementation, the second image information may include: at least one piece of second scene depth information of the target scene at the desired focusing position, where at least one second scene depth corresponding to the at least one piece of second scene depth information is different from the first scene depth.
In this implementation, as shown in Fig. 5, the acquisition module 360 further includes:
an acquiring unit 362, configured to obtain a first image of the target scene, where the focusing position of the first image is the desired focusing position and the scene depth of the first image is the first scene depth; the acquiring unit may be the camera module of the image capture device, or may obtain the first image from the camera module of the image capture device.
That is, according to the first preview image and the desired focusing position, the image capture device is used to capture a first image of the target scene at the desired focusing position, and the first image is obtained.
a processing unit 364, configured to process the first image according to the at least one piece of second scene depth information, to obtain the target image of the target scene having the desired scene depth.
After the acquiring unit 362 obtains the first image, the processing unit 364 processes the first image according to the second scene depth information obtained from the at least one external device, so that the target image at the desired focusing position and with the desired scene depth is obtained. Generating a corresponding image according to scene depth information and a focusing position is a mature technique in the art, and is not described in detail here.
In a possible implementation, the second scene depth information may be in the form of a depth map, where each pixel value of the depth map indicates the distance between a point in the target scene and the image capture device.
In a possible implementation, the processing unit 364 can generate at least one second preview image with a corresponding scene depth according to the at least one piece of second scene depth information, for the user to select the desired scene depth. Accordingly, as shown in Fig. 6, the processing unit 364 may also include:
a generation subunit 3642, configured to generate the at least one second preview image according to at least one scene depth corresponding to the at least one piece of second scene depth information; that is, the generation subunit 3642 generates at least one second preview image of the target scene, where each second preview image has the desired focusing position and its scene depth is one of the scene depths corresponding to the at least one piece of second scene depth information;
a display subunit 3644, configured to display the at least one second preview image of the target scene; the display subunit 3644 may be the display of the image capture device;
a determination subunit 3646, configured to determine a target preview image from the at least one second preview image, where the scene depth of the target preview image is the desired scene depth; for example, the target preview image may be determined according to an instruction of the user; and
a processing subunit 3648, configured to process the first image according to the target preview image, to obtain the target image; the processing subunit 3648 processes the first image according to the scene depth of the target preview image, so that the processed image has the desired focusing position and the scene depth of the target preview image.
In addition, as shown in Fig. 7, the apparatus 300 of this embodiment may also include:
a determining module 311, configured to determine the desired focusing position.
The desired focusing position can be determined automatically or according to an instruction of the user, for example automatically by means of face recognition. The instruction of the user can be given in any appropriate manner, such as a touch instruction, a gesture instruction, or a voice instruction.
After the target image is captured, the target image may also be displayed by the display subunit 3644.
The present application further provides another image capture apparatus. As shown in Fig. 8, the image capture apparatus 800 of the second embodiment of the present application includes:
a sending module 820, configured to send first image information of a target scene to at least one external device.
In the apparatus of this embodiment, if the image capture device cannot capture an image with the desired effect, the sending module 820 may, before the image of the target scene is captured, send the first image information of the target scene to at least one external device. The first image information is information related to the target scene and the device that the image capture device can obtain at its current capture position. For example, the first image information may include at least one of the following: current time information (including the season, the specific time, and the like), the capture position, current environment information (including weather, illumination, and the like), the orientation of the current device, and capture parameters of the current device (for example, the focal length).
For the image capture device, the external device is any device other than the image capture device itself, for example, another image capture device, a device without an image capture capability, or a server computer.
a first acquisition module 840, configured to obtain second image information related to the target scene, sent in response to the first image information.
In the apparatus of this embodiment, in response to receiving the first image information, the at least one external device obtains, locally or from the Internet and according to the first image information, a large number of images of the target scene related to the first image information. These images include images with the same capture position and the same viewing angle but various focusing positions. Using these images, the apparatus 800 of this embodiment can obtain enough information to generate a shallow depth image. The second image information is such information, required for achieving the desired effect, and is used to assist the image capture device in capturing an image with the desired scene depth.
an acquisition module 860, configured to capture, according to the second image information and a desired focusing position, a target image of the target scene having the desired scene depth.
By obtaining from at least one external device, based on information related to the target scene, information that can assist the image capture device in capturing an image with the desired scene depth, the apparatus of this embodiment of the present application extends, to a certain extent, the capability of image capture devices with limited hardware.
In a possible implementation of this embodiment, the first image information can be obtained by previewing the target scene. Accordingly, as shown in Fig. 9, the apparatus 800 of this embodiment further includes:
a second acquisition module 810, configured to obtain a first preview image of the target scene.
The first image information related to the first preview image can be obtained by capturing the first preview image of the target scene with the image capture device. The preview image, captured with the existing capture capability of the image capture device, has a first scene depth and a first focusing position. Accordingly, the first image information may include: the capture time of the first preview image, the capture position of the first preview image, the environment information when the first preview image was captured, the orientation of the device when the first preview image was captured, the capture parameters of the image capture device, and the like.
To capture the desired shallow depth image, in a possible implementation, the second image information may include: at least one second image related to the first preview image, where the at least one second image has at least one second focusing position different from the first focusing position. From these images, the depth information required to achieve different scene depths at different focusing positions can be computed. In this implementation, as shown in Fig. 10, the acquisition module 860 may further include:
a first acquisition unit 862, configured to obtain, according to the at least one second image, at least one piece of second scene depth information of the target scene at the desired focusing position; computing the scene depth at a given focusing position from multiple images is a mature technique in the art and is not described in detail here;
a second acquisition unit 864, configured to obtain a third image of the target scene, where the focusing position of the third image is the desired focusing position and the scene depth of the third image is the first scene depth; the second acquisition unit 864 may be the camera module of the image capture device, or may obtain the third image from the camera module of the image capture device.
That is, according to the preview image and the desired focusing position, the image capture device is used to capture a third image of the target scene at the desired focusing position.
a processing unit 866, configured to process the third image according to the at least one piece of second scene depth information, to obtain the target image of the target scene having the desired scene depth.
After the second acquisition unit 864 obtains the third image, the processing unit 866 processes the third image according to the at least one piece of second scene depth information, so that the target image at the desired focusing position and with the desired scene depth is obtained.
In a possible implementation, the second scene depth information may be in the form of a depth map, where each pixel value of the depth map indicates the distance between a point in the target scene and the image capture device. The processing unit 866 can generate at least one second preview image with a corresponding scene depth according to the at least one piece of second scene depth information, for the user to select the desired scene depth. Accordingly, as shown in Fig. 11, the processing unit 866 further includes:
a generation subunit 8662, configured to generate the at least one second preview image according to the scene depth corresponding to the at least one piece of second scene depth information; that is, generating at least one second preview image of the target scene, where each second preview image has the desired focusing position and its scene depth is one of the scene depths corresponding to the at least one piece of second scene depth information;
a display subunit 8664, configured to display the at least one second preview image of the target scene; the display subunit 8664 may be the display of the image capture device;
a determination subunit 8666, configured to determine a target preview image from the at least one second preview image, where the scene depth of the target preview image is the desired scene depth; for example, the target preview image may be determined according to an instruction of the user; and
a processing subunit 8668, configured to process the third image according to the target preview image, to obtain the target image; the processing subunit 8668 processes the third image according to the scene depth of the target preview image, so that the processed image has the desired focusing position and the scene depth of the target preview image.
In addition, as shown in Fig. 12, the apparatus 800 of this embodiment further includes:
a determining module 811, configured to determine the desired focusing position.
The desired focusing position can be determined automatically or according to an instruction of the user, for example automatically by means of face recognition. The instruction of the user can be given in any appropriate manner, such as a touch instruction, a gesture instruction, or a voice instruction.
After the target image is captured, the apparatus of this embodiment may also display the target image through the display subunit 8664.
In conclusion, the apparatuses of the first and second embodiments of the present application obtain, from a device external to the image capture device, the information necessary to capture an image with the desired effect; the desired image effect can therefore be obtained without adding extra hardware to the device, which extends the capability of the image capture device to a certain extent.
Fig. 13 is a schematic structural diagram of an image capture apparatus 1300 provided by an embodiment of the present application. The specific embodiments of the present application do not limit the specific implementation of the image capture apparatus 1300. As shown in Fig. 13, the image capture apparatus 1300 may include:
a processor 1310, a communications interface 1320, a memory 1330, and a communication bus 1340, where:
the processor 1310, the communications interface 1320, and the memory 1330 communicate with one another through the communication bus 1340;
the communications interface 1320 is configured to communicate with a network element such as a client; and
the processor 1310 is configured to execute a program 1332, and may specifically implement the relevant functions of the image capture apparatus in the apparatus embodiment of Fig. 3 above.
Specifically, the program 1332 may include program code, and the program code includes computer operation instructions.
The processor 1310 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application. The program 1332 can specifically be used to cause the image capture apparatus 1300 to perform the following steps:
sending first image information of a target scene and a desired focusing position to at least one external device;
obtaining second image information related to the target scene, sent in response to the first image information and the desired focusing position; and
capturing, according to the second image information, a target image of the target scene having a desired scene depth.
For the specific implementation of the steps in the program 1332, reference may be made to the corresponding descriptions of the corresponding steps and units in the above embodiments, and details are not described here. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the devices and modules described above, reference may be made to the corresponding process descriptions in the foregoing method embodiments, and details are not described here again.
Fig. 14 is a schematic structural diagram of an image capture apparatus 1400 provided by an embodiment of the present application. The specific embodiments of the present application do not limit the specific implementation of the image capture apparatus 1400. As shown in Fig. 14, the image capture apparatus 1400 may include:
a processor 1410, a communications interface 1420, a memory 1430, and a communication bus 1440, where:
the processor 1410, the communications interface 1420, and the memory 1430 communicate with one another through the communication bus 1440;
the communications interface 1420 is configured to communicate with a network element such as a client; and
the processor 1410 is configured to execute a program 1432, and may specifically implement the relevant functions of the image capture apparatus in the apparatus embodiment of Fig. 8 above.
Specifically, the program 1432 may include program code, and the program code includes computer operation instructions.
The processor 1410 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application. The program 1432 can specifically be used to cause the image capture apparatus 1400 to perform the following steps:
sending first image information of a target scene to at least one external device;
obtaining second image information related to the target scene, sent in response to the first image information; and
capturing, according to the second image information and a desired focusing position, a target image of the target scene having a desired scene depth.
For the specific implementation of the steps in the program 1432, reference may be made to the corresponding descriptions of the corresponding steps and units in the above embodiments, and details are not described here. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the devices and modules described above, reference may be made to the corresponding process descriptions in the foregoing method embodiments, and details are not described here again.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the devices and modules described above, reference may be made to the corresponding descriptions in the foregoing apparatus embodiments, and details are not described here.
Although the subject matter described herein is provided in the general context of execution in conjunction with operating systems and application programs on computer systems, those skilled in the art will recognize that other implementations may also be performed in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures, and other types of structures that perform specific tasks or implement specific abstract data types. Those skilled in the art will understand that the subject matter described herein may be practiced with other computer system configurations, including handheld devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like, and may also be practiced in distributed computing environments where tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
A person of ordinary skill in the art may realize that the units and method steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the present application.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned computer-readable storage medium includes physical volatile and non-volatile, removable and non-removable media implemented in any manner or technology for storing information such as computer-readable instructions, data structures, program modules, or other data. The computer-readable storage medium specifically includes, but is not limited to, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory or other solid-state memory technology, a CD-ROM, a digital versatile disc (DVD), an HD-DVD, a Blu-ray disc or other optical storage, a magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the required information and that can be accessed by a computer.
The above embodiments are only used to illustrate the present application and are not intended to limit it. A person of ordinary skill in the relevant technical field may also make various changes and modifications without departing from the spirit and scope of the present application. Therefore, all equivalent technical solutions also belong to the scope of the present application, and the patent protection scope of the present application shall be defined by the claims.

Claims (37)

1. An image capture method, characterized in that the method comprises:
sending first image information of a target scene and a desired focusing position to at least one external device;
obtaining second image information related to the target scene, sent in response to the first image information and the desired focusing position, wherein the second image information is information for assisting in capturing a target image of the target scene having a desired scene depth; and
capturing the target image according to the second image information.
2. The method according to claim 1, characterized in that the method further comprises:
obtaining a first preview image of the target scene;
wherein the first image information is information related to the first preview image, and the first preview image has a first scene depth and a first focusing position.
3. The method according to claim 2, characterized in that the first focusing position is the same as the desired focusing position.
4. The method according to claim 2 or 3, characterized in that the second image information comprises: at least one piece of second scene depth information of the target scene at the desired focusing position.
5. The method according to claim 4, characterized in that capturing, according to the second image information, the target image of the target scene having the desired scene depth comprises:
obtaining a first image of the target scene, wherein the focusing position of the first image is the desired focusing position, and the scene depth of the first image is the first scene depth; and
processing the first image according to the at least one piece of second scene depth information, to obtain the target image of the target scene having the desired scene depth.
6. The method according to claim 5, characterized in that processing the first image according to the at least one piece of second scene depth information to obtain the target image of the target scene having the desired scene depth comprises:
displaying at least one second preview image of the target scene, the at least one second preview image having a scene depth corresponding to the at least one piece of second scene depth information and the desired focusing position;
determining a target preview image from the at least one second preview image, wherein the scene depth of the target preview image is the desired scene depth; and
processing the first image according to the target preview image, to obtain the target image.
7. The method according to claim 6, characterized in that processing the first image according to the at least one piece of second scene depth information to obtain the target image of the target scene having the desired scene depth further comprises:
generating the at least one second preview image according to the scene depth corresponding to the at least one piece of second scene depth information.
8. The method according to any one of claims 1 to 3, characterized in that the method further comprises: determining the desired focusing position.
9. The method according to claim 8, characterized in that, in determining the desired focusing position, the desired focusing position is determined according to a user instruction.
10. The method according to any one of claims 1 to 3, characterized in that the method further comprises: displaying the target image.
11. The method according to any one of claims 1 to 3, characterized in that the first image information comprises at least one of the following: a capture time of an image, a capture position of an image, environment information when an image is captured, an orientation of a device when an image is captured, and a capture parameter of a capture device.
12. An image capture method, characterized in that the method comprises:
sending first image information of a target scene to at least one external device;
obtaining second image information related to the target scene, sent in response to the first image information, wherein the second image information is information for assisting in capturing a target image of the target scene having a desired scene depth; and
capturing the target image according to the second image information and a desired focusing position.
13. The method according to claim 12, characterized in that the method further comprises:
obtaining a first preview image of the target scene;
wherein the first image information is information related to the first preview image, and the first preview image has a first scene depth and a first focusing position.
14. The method according to claim 13, characterized in that the second image information comprises: at least one second image related to the first preview image, the at least one second image having at least one second focusing position different from the first focusing position.
15. The method according to claim 14, characterized in that capturing, according to the second image information and the desired focusing position, the target image of the target scene having the desired scene depth comprises:
obtaining, according to the at least one second image, at least one piece of second scene depth information of the target scene at the desired focusing position;
obtaining a third image of the target scene, wherein the focusing position of the third image is the desired focusing position, and the scene depth of the third image is the first scene depth; and
processing the third image according to the at least one piece of second scene depth information, to obtain the target image of the target scene having the desired scene depth.
16. The method according to claim 15, wherein processing the third image according to the at least one second scene depth information to obtain the target image of the target scene with the desired scene depth comprises:
displaying at least one second preview image of the target scene, the at least one second preview image having the scene depth corresponding to the at least one second scene depth information and the desired focusing position;
determining a target preview image from the at least one second preview image, wherein the scene depth of the target preview image is the desired scene depth;
processing the third image according to the target preview image to obtain the target image.
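The preview-selection step of claim 16 can sit on top of that pipeline. The sketch below reuses the simulate_shallow_depth helper from the previous sketch, and pick stands in for whatever UI mechanism displays the second preview images and returns the user's choice; both are assumptions, not part of the claims.

```python
def choose_and_apply_preview(third_image, depth_map, focus_depth, candidate_strengths, pick):
    """Generate one second preview image per candidate scene depth, let the
    user pick one, and return that rendering as the target image."""
    previews = [
        simulate_shallow_depth(third_image, depth_map, focus_depth, strength=s)
        for s in candidate_strengths
    ]
    chosen_index = pick(previews)   # e.g. index returned by the display layer
    return previews[chosen_index]
```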
17. The method according to claim 16, wherein processing the third image according to the at least one second scene depth information to obtain the target image of the target scene with the desired scene depth further comprises:
generating the at least one second preview image according to the scene depth corresponding to the at least one second scene depth information.
18. The method according to any one of claims 12 to 17, wherein the method further comprises:
determining the desired focusing position.
19. The method according to claim 18, wherein, in determining the desired focusing position, the desired focusing position is determined according to a user instruction.
20. The method according to any one of claims 12 to 17, wherein the method further comprises: displaying the target image.
21. The method according to any one of claims 12 to 17, wherein the first image information comprises at least one of the following: an acquisition time of an image, an acquisition position of an image, environment information at the time of acquiring an image, an orientation of the device at the time of acquiring an image, and acquisition parameters of the acquisition device.
22. An image collecting device, wherein the device comprises:
a sending module, configured to send first image information of a target scene and a desired focusing position to at least one external device;
a first acquisition module, configured to obtain second image information related to the target scene that is sent in response to the first image information and the desired focusing position, the second image information being information for assisting acquisition of a target image of the target scene with a desired scene depth;
an acquisition module, configured to acquire the target image according to the second image information.
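A skeleton of the module structure recited in claim 22 might look as follows. The camera and network collaborators, and every method name on them, are hypothetical placeholders rather than anything the patent specifies.

```python
class ImageCollectingDevice:
    """Illustrative skeleton of the sending / first acquisition / acquisition modules."""

    def __init__(self, camera, network):
        self.camera = camera
        self.network = network

    def send(self, first_image_info, desired_focus):
        # sending module: first image information and the desired focusing
        # position go to at least one external device
        self.network.broadcast({"info": first_image_info, "focus": desired_focus})

    def obtain_second_info(self):
        # first acquisition module: second image information sent in response
        return self.network.collect_responses()

    def acquire(self, second_image_info):
        # acquisition module: capture the target image with the desired scene depth
        return self.camera.capture(second_image_info)
```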
23. The device according to claim 22, wherein the device further comprises:
a second acquisition module, configured to obtain a first preview image of the target scene;
wherein the first image information is information related to the first preview image, and the first preview image has a first scene depth and a first focusing position.
24. The device according to claim 23, wherein the acquisition module comprises:
an acquiring unit, configured to obtain a first image of the target scene, wherein the focusing position of the first image is the desired focusing position and the scene depth of the first image is the first scene depth;
a processing unit, configured to process the first image according to at least one second scene depth information of the target scene under the desired focusing position comprised in the second image information, to obtain the target image of the target scene with the desired scene depth.
25. The device according to claim 24, wherein the processing unit comprises:
a display subunit, configured to display at least one second preview image of the target scene, the at least one second preview image having the scene depth corresponding to the at least one second scene depth information and the desired focusing position;
a determination subunit, configured to determine a target preview image from the at least one second preview image, wherein the scene depth of the target preview image is the desired scene depth;
a processing subunit, configured to process the first image according to the target preview image to obtain the target image.
26. The device according to claim 25, wherein the processing unit further comprises:
a generation subunit, configured to generate the at least one second preview image according to the scene depth corresponding to the at least one second scene depth information.
27. The device according to any one of claims 22 to 26, wherein the device further comprises: a second determining module, configured to determine the desired focusing position.
28. The device according to claim 27, wherein the second determining module is configured to determine the desired focusing position according to a user instruction.
29. The device according to claim 26, wherein the display subunit is further configured to display the target image.
30. An image collecting device, wherein the device comprises:
a sending module, configured to send first image information of a target scene to at least one external device;
a first acquisition module, configured to obtain second image information related to the target scene that is sent in response to the first image information, the second image information being information for assisting acquisition of a target image of the target scene with a desired scene depth;
an acquisition module, configured to acquire the target image according to the second image information and a desired focusing position.
31. The device according to claim 30, wherein the device further comprises:
a second acquisition module, configured to obtain a first preview image of the target scene;
wherein the first image information is information related to the first preview image, and the first preview image has a first scene depth and a first focusing position.
32. The device according to claim 31, wherein the acquisition module comprises:
a first acquiring unit, configured to obtain at least one second scene depth information of the target scene under the desired focusing position according to at least one second image that is comprised in the second image information and related to the first preview image, the at least one second image having at least one second focusing position different from the first focusing position;
a second acquiring unit, configured to obtain a third image of the target scene, wherein the focusing position of the third image is the desired focusing position and the scene depth of the third image is the first scene depth;
a processing unit, configured to process the third image according to the at least one second scene depth information to obtain the target image of the target scene with the desired scene depth.
33. The device according to claim 32, wherein the processing unit comprises:
a display subunit, configured to display at least one second preview image of the target scene, the at least one second preview image having the scene depth corresponding to the at least one second scene depth information and the desired focusing position;
a determination subunit, configured to determine a target preview image from the at least one second preview image, wherein the scene depth of the target preview image is the desired scene depth;
a processing subunit, configured to process the third image according to the target preview image to obtain the target image.
34. The device according to claim 33, wherein the processing unit further comprises:
a generation subunit, configured to generate the at least one second preview image according to the scene depth corresponding to the at least one second scene depth information.
35. The device according to any one of claims 30 to 34, wherein the device further comprises: a determining module, configured to determine the desired focusing position.
36. The device according to claim 35, wherein the determining module determines the desired focusing position according to a user instruction.
37. The device according to claim 33, wherein the display subunit is further configured to display the target image.
CN201410357259.1A 2014-07-24 2014-07-24 Image-pickup method and image collecting device Active CN104092946B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410357259.1A CN104092946B (en) 2014-07-24 2014-07-24 Image-pickup method and image collecting device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410357259.1A CN104092946B (en) 2014-07-24 2014-07-24 Image-pickup method and image collecting device

Publications (2)

Publication Number Publication Date
CN104092946A CN104092946A (en) 2014-10-08
CN104092946B true CN104092946B (en) 2018-09-04

Family

ID=51640626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410357259.1A Active CN104092946B (en) 2014-07-24 2014-07-24 Image-pickup method and image collecting device

Country Status (1)

Country Link
CN (1) CN104092946B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104363377B (en) * 2014-11-28 2017-08-29 广东欧珀移动通信有限公司 Display methods, device and the terminal of focus frame
CN104796579B (en) * 2015-04-30 2018-12-14 联想(北京)有限公司 Information processing method and electronic equipment
CN112261295B (en) * 2020-10-22 2022-05-20 Oppo广东移动通信有限公司 Image processing method, device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103384998A (en) * 2011-03-31 2013-11-06 富士胶片株式会社 Imaging device, imaging method, program, and program storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040174434A1 (en) * 2002-12-18 2004-09-09 Walker Jay S. Systems and methods for suggesting meta-information to a camera user
JP5478215B2 (en) * 2009-11-25 2014-04-23 オリンパスイメージング株式会社 Image capturing apparatus and method for controlling image capturing apparatus
TWI532009B (en) * 2010-10-14 2016-05-01 華晶科技股份有限公司 Method and apparatus for generating image with shallow depth of field
CN102158648B (en) * 2011-01-27 2014-09-10 明基电通有限公司 Image capturing device and image processing method
CN103166945A (en) * 2011-12-14 2013-06-19 北京千橡网景科技发展有限公司 Picture processing method and system
CN102647449B (en) * 2012-03-20 2016-01-27 西安联客信息技术有限公司 Based on the intelligent photographic method of cloud service, device and mobile terminal
US20130342735A1 (en) * 2012-06-20 2013-12-26 Chen-Hung Chan Image processing method and image processing apparatus for performing defocus operation according to image alignment related information

Also Published As

Publication number Publication date
CN104092946A (en) 2014-10-08

Similar Documents

Publication Publication Date Title
JP6411505B2 (en) Method and apparatus for generating an omnifocal image
CN103973978B (en) It is a kind of to realize the method focused again and electronic equipment
CN106161939B (en) Photo shooting method and terminal
EP3063730B1 (en) Automated image cropping and sharing
US10165201B2 (en) Image processing method and apparatus and terminal device to obtain a group photo including photographer
JP2016521882A5 (en)
JP2012252507A5 (en)
CN109840946B (en) Virtual object display method and device
CN108174082B (en) Image shooting method and mobile terminal
CN111667420A (en) Image processing method and device
CN104092946B (en) Image-pickup method and image collecting device
CN105635568A (en) Image processing method in mobile terminal and mobile terminal
CN107133981B (en) Image processing method and device
JP6283329B2 (en) Augmented Reality Object Recognition Device
CN105678696B (en) A kind of information processing method and electronic equipment
CN114125226A (en) Image shooting method and device, electronic equipment and readable storage medium
CN104486553B (en) A kind of distant view photograph image pickup method and terminal
CN106470337A (en) For the method for the personalized omnirange video depth of field, device and computer program
CN109842791B (en) Image processing method and device
CN103826061B (en) Information processing method and electronic device
CN114285988B (en) Display method, display device, electronic equipment and storage medium
CN112508801B (en) Image processing method and computing device
CN108495038A (en) Image processing method, device, storage medium and electronic equipment
CN114241127A (en) Panoramic image generation method and device, electronic equipment and medium
CN114007056A (en) Method and device for generating three-dimensional panoramic image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant