WO2019137081A1 - Image processing method, image processing device, and photographing device - Google Patents

Image processing method, image processing device, and photographing device

Info

Publication number
WO2019137081A1
WO2019137081A1 (PCT/CN2018/113926)
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
target
depth
field
Prior art date
Application number
PCT/CN2018/113926
Other languages
English (en)
French (fr)
Inventor
张磊
张熙
黄一宁
周蔚
胡昌启
李瑞华
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2019137081A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/67: Focus control based on electronic image sensor signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/2224: Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H04N 5/2226: Determination of depth image, e.g. for foreground/background separation

Definitions

  • the present application relates to the field of image application technologies, and in particular, to an image processing method, an image processing device, and a photographing device.
  • the aperture size is an important indicator of an imaging lens: a large aperture not only increases image-plane illuminance and improves the image signal-to-noise ratio, but also produces a shallow depth of field, so that the captured image shows a sharp subject with the rest blurred (the bokeh effect).
  • the usual approach is to use two lenses to form a parallel dual-camera image acquisition system to photograph the target. After focusing on the target, the two images captured by the two lenses are acquired and transformed into a common coordinate system, parallax is computed over the overlapping region of the two images, and the distance from each object to the camera is calculated from the parallax. A depth map of the shooting scene is thus obtained, and the image outside the plane of the target object is then blurred according to the depth map, producing the bokeh effect.
  • however, because the focal lengths of the two cameras are similar, the depths of field of the two images obtained when both cameras focus on the target are also similar; that is, there is no obvious difference in how blurred the two images are with respect to objects other than the target (the foreground and background). The depth map computed from the two images is therefore of poor accuracy, and it is difficult to segment the edge region where the target meets the foreground and background, or the hollow regions within the target. As a result, the target is often blurred, or areas outside the target are left unblurred; the bokeh effect of the resulting photograph is unsatisfactory and the user experience is poor.
  • the embodiments of the present application provide an image processing method, an image processing device, and a photographing device, which are used to capture images with a better bokeh effect and improve the user experience.
  • the first aspect of the embodiments of the present application provides an image processing method applied to a photographing device, where the photographing device includes a first camera, a second camera, and a controller; the optical axes of the first camera and the second camera are parallel; the difference between the field of view of the first camera and the field of view of the second camera is less than a first threshold; and the difference between the focal length of the first camera and the focal length of the second camera is less than a second threshold. The method includes:
  • the photographing device drives the controller to move the second camera along its optical axis and focus on the target position where the target object is located; the second camera has a second depth of field when focused on the target position. The photographing device can then determine, according to the target position, a first depth of field, i.e., the depth of field that the first camera needs to have, where the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold and the target is located within the overlapping depth of field. The photographing device can then determine, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, i.e., the position on which the first camera needs to focus, where the first position is different from the target position. The photographing device drives the controller to move the first camera along its optical axis and focus on the first position, after which the photographing device can obtain a first image captured when the first camera focuses on the first position and a second image captured when the second camera focuses on the target position. The photographing device can then blur the area other than the target in the first image or the second image to obtain a target image.
  • in the embodiments of the present application, the depths of field of the first camera and the second camera during shooting are different, so the first image and the second image blur objects other than the target (foreground and background) to different degrees. The target can therefore be identified more accurately; in particular, the edge region where the target meets the foreground and background and/or the hollow regions within it can be segmented effectively, and only the area other than the target is blurred, yielding an image with a better bokeh effect and improving the user experience.
  • in a first implementation of the first aspect, the photographing device blurring the area other than the target in the first image or the second image to obtain the target image includes: the photographing device calculates the disparity information of the first image and the second image, computes from it first depth information registered to the coordinates of the first image, and then determines, according to the first depth information, a first area other than the target in the first image, i.e., distinguishes the target in the first image from the first area outside it, where the first area includes an edge region of the first image that meets the target and/or a hollow region within the target. The photographing device then blurs the first area of the first image to obtain the target image.
  • in a second implementation of the first aspect, the photographing device blurring the area other than the target in the first image or the second image to obtain the target image includes: the photographing device calculates the disparity information of the first image and the second image, computes from it second depth information registered to the coordinates of the second image, and then determines, according to the second depth information, a second area other than the target in the second image, i.e., distinguishes the target in the second image from the second area outside it, where the second area includes an edge region of the second image that meets the target and/or a hollow region within the target. The photographing device then blurs the second area of the second image to obtain the target image.
  • with the solutions provided in the embodiments of the present application, the photographing device may select either the first image or the second image, compute that image's depth information, determine the area other than the target in it, and blur that area to obtain the target image. This provides several options for the embodiments of the present application and enriches the realizability of the solution.
  • the fields of view of the first camera and the second camera are both greater than or equal to 60°.
  • this ensures that both the first camera and the second camera have a sufficiently large field of view, so the coverage of the images captured by the two cameras is relatively large and the resulting target image can have sufficiently large coverage.
  • the closest focusing distances of the first camera and the second camera are both less than or equal to 20 cm, which ensures that both cameras can focus on sufficiently close scenes.
  • a second aspect of the embodiments of the present application provides an image processing apparatus including a first camera and a second camera, where the optical axis of the first camera is parallel to that of the second camera, the difference between the field of view of the first camera and the field of view of the second camera is less than a first threshold, and the difference between the focal length of the first camera and the focal length of the second camera is less than a second threshold. The image processing apparatus further includes:
  • a control unit configured to control the second camera to focus on a target position where the target is located, where the second camera has a second depth of field when focused on the target;
  • a first determining unit configured to determine a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold, and the target is located in the overlapping depth of field;
  • a second determining unit configured to determine, according to the depth of field table of the first camera, a first position corresponding to the first depth of field, wherein the first position is different from the target position;
  • the control unit is further configured to control the first camera to focus on the first position
  • an acquiring unit configured to acquire a first image and a second image, the first image being an image captured when the first camera focuses on the first position and the second image being an image captured when the second camera focuses on the target position;
  • a blurring unit configured to perform blur processing on an area other than the target object in the first image or the second image to obtain a target image.
  • optionally, the blurring unit includes:
  • a first calculating module configured to calculate disparity information of the first image and the second image
  • a second calculating module configured to calculate first depth information of the first image according to the disparity information
  • a determining module configured to determine, according to the first depth information, a first area other than the target in the first image, where the first area includes an edge region of the first image that meets the target and/or a hollow area within the target;
  • a blurring module configured to perform blur processing on the first area in the first image to obtain the target image.
  • the second calculating module is further configured to calculate second depth information of the second image according to the disparity information
  • the determining module may be further configured to determine, according to the second depth information, a second area other than the target in the second image, where the second area includes an edge region of the second image that meets the target and/or a hollow area within the target;
  • the blurring module may be further configured to perform blurring processing on the second region in the second image to obtain the target image.
  • in the embodiments of the present application, the control unit controls the second camera to focus on the target position where the target is located, giving the second camera a second depth of field; the first determining unit determines a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than the third threshold and the target is located in the overlapping depth of field; the second determining unit then determines, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, the first position being different from the target position; the control unit controls the first camera to focus on the first position; the acquisition unit acquires the first image captured when the first camera focuses on the first position and the second image captured when the second camera focuses on the target position; and the blurring unit blurs the area other than the target in the first image or the second image to obtain the target image. Because the depths of field of the two cameras differ, the two images blur objects other than the target to different degrees, so the target can be identified more accurately; in particular, the edge region where the target meets the foreground and background and/or the hollow regions within it can be segmented effectively, and only the area other than the target is blurred, yielding an image with a better bokeh effect and improving the user experience.
  • a third aspect of the embodiments of the present application provides a photographing device including a first camera and a second camera, where the optical axis of the first camera is parallel to that of the second camera, the difference between the field of view of the first camera and the field of view of the second camera is less than the first threshold, and the difference between the focal length of the first camera and the focal length of the second camera is less than the second threshold. The photographing device further includes a processor, a controller, a memory, a bus, and an input/output interface; program code is stored in the memory.
  • the processor performs the following operations when calling the program code in the memory: driving the controller to control the second camera to focus on the target position where the target is located, where the second camera has a second depth of field when focused on the target; determining a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold and the target lies in the overlapping depth of field; determining, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, where the first position is different from the target position; driving the controller to control the first camera to focus on the first position; acquiring a first image captured when the first camera focuses on the first position and a second image captured when the second camera focuses on the target position; and blurring the area other than the target in the first image or the second image to obtain a target image.
  • a fourth aspect of the embodiments of the present application provides a computer-readable storage medium including instructions that, when run on a computer, cause the computer to perform some or all of the steps of the image processing method of the first aspect.
  • a fifth aspect of an embodiment of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform some or all of the steps in the image processing method of the first aspect.
  • it can be seen from the above technical solutions that the embodiments of the present application have the following advantages:
  • the photographing device controls, by means of the controller, the second camera to focus on the target position where the target object is located, giving the second camera a second depth of field; the photographing device determines a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than the third threshold and the target is located in the overlapping depth of field; the photographing device then determines, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, the first position being different from the target position; the photographing device controls the first camera, through the controller, to focus on the first position; the photographing device then acquires the first image captured when the first camera focuses on the first position and the second image captured when the second camera focuses on the target position; and the photographing device blurs the area other than the target in the first image or the second image to obtain a target image. Because the depths of field of the first camera and the second camera during shooting differ, the first image and the second image blur objects other than the target (foreground and background) to different degrees, so the target can be identified more accurately; in particular, the edge region where the target meets the foreground and background and/or the hollow regions within it can be segmented effectively, and only the area other than the target is blurred, yielding an image with a better bokeh effect and improving the user experience.
  • FIG. 1 is a schematic diagram of the parallax between two cameras during shooting;
  • FIG. 2 is a depth map calculated from parallax in the prior art;
  • FIG. 3 is a schematic diagram of an embodiment of an image processing method of the present application;
  • FIG. 4(a) is a schematic diagram of one arrangement of the two cameras on the photographing device of the present application;
  • FIG. 4(b) is a schematic diagram of another arrangement of the two cameras on the photographing device of the present application;
  • FIG. 5 is a schematic diagram of how a camera's depth of field changes as its focus position changes, according to the present application;
  • FIG. 6 is a schematic diagram of the overlapping depth of field of the present application;
  • FIG. 7 is another schematic diagram of the overlapping depth of field of the present application;
  • FIG. 8 is a depth map calculated from parallax according to the present application;
  • FIG. 9 is a schematic diagram of an embodiment of an image processing apparatus of the present application;
  • FIG. 10 is a schematic diagram of another embodiment of an image processing apparatus of the present application;
  • FIG. 11 is a schematic structural diagram of a photographing device of the present application.
  • the embodiments of the present application provide an image processing method and a photographing device, which are used to capture images with a better bokeh effect and improve the user experience.
  • the embodiments of the present application can be applied to a photographing device that includes two cameras whose optical axes are parallel and whose fields of view and focal lengths are the same or similar. Because the optical axes of the two cameras do not coincide, i.e., there is a distance between the two cameras, the images they capture exhibit parallax. Referring to FIG. 1, the two cameras are A and B, both with focal length f; the target to be photographed is at point P, and the target is imaged at positions P1 and P2 on the two imaging planes, respectively. The distance from P1 to the left edge of camera A's imaging plane is L1, and the distance from P2 to the left edge of camera B's imaging plane is L2; L1 and L2 are not equal, so the two images captured by cameras A and B exhibit parallax. By the principle of similar triangles, the distance Z from point P to the plane of the two cameras can be calculated, and on this basis a depth map of the overlapping region of the two cameras' shooting scenes can be obtained (a minimal triangulation sketch follows).
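As a concrete illustration of the triangulation just described (not part of the patent text; the function and variable names are illustrative), the following sketch computes metric depth from a disparity map using the standard stereo relation Z = f * B / d, where f is the focal length in pixels, B is the baseline between the optical centres, and d is the disparity:

```python
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_length_px: float,
                         baseline_m: float) -> np.ndarray:
    """Standard stereo triangulation: Z = f * B / d.

    disparity_px    -- per-pixel disparity map (pixels)
    focal_length_px -- focal length expressed in pixels
    baseline_m      -- distance between the two optical centres (metres)
    """
    # Mark invalid (non-positive) disparities so they do not divide by zero.
    d = np.where(disparity_px > 0, disparity_px, np.nan)
    return focal_length_px * baseline_m / d

# Example: f = 1200 px, baseline = 2 cm, disparity = 24 px -> Z = 1 m.
print(depth_from_disparity(np.array([24.0]), 1200.0, 0.02))  # [1.]
```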
  • the depth map may look like FIG. 2. According to the depth map, objects lying in different planes of the shooting scene can be segmented: for example, both cameras focus on the plane where the human body is located, the depth map shown in FIG. 2 is obtained, the plane of the human body is separated from the foreground and background, the area outside the human body's plane is blurred, and a bokeh image is finally obtained. However, because the focal lengths of the two cameras are similar, the depths of field of the two cameras when both focus on the human body are also similar; that is, there is no significant difference in how the two images blur the parts other than the human body (foreground and background). The depth map shown in FIG. 2 is therefore of poor accuracy: the edge region of the human body and the hollow regions between the fingers are not clear, this part of the scene is hard to segment, and the resulting bokeh effect is not ideal.
  • for this reason, the embodiments of the present application provide an image processing method based on the dual-camera shooting system, which can capture photographs with a better bokeh effect.
  • referring to FIG. 3, an embodiment of the image processing method in the embodiments of the present application includes:
  • 301. The photographing device controls, by means of the controller, the second camera to focus on the target position where the target is located.
  • in the embodiments of the present application, the photographing device includes a first camera, a second camera, and a controller, where the optical axes of the first camera and the second camera are parallel, the difference between the field of view of the first camera and the field of view of the second camera is less than the first threshold, and the difference between the focal length of the first camera and the focal length of the second camera is less than the second threshold. After determining the target to be photographed, the photographing device can determine the target position where the target is located, and then drive the controller, which controls the second camera to move along the optical axis and focus on the target position. It can be understood that once the second camera focuses on the target position, the second depth of field that the second camera then has can be determined.
  • it should be noted that the depth of field of a camera is the range of distances in front of and behind the focused subject within which the camera produces an acceptably sharp image. The two endpoints of the depth of field are the near point and the far point: the near point is the point within the depth of field closest to the camera, and the far point is the point within it farthest from the camera.
  • it should be noted that the fields of view of the first camera and the second camera may both be greater than or equal to 60°. With both cameras having relatively large fields of view, the overlapping region captured by the two cameras is larger, and a photograph with a sufficiently large framing range is finally obtained. Of course, the fields of view of the two cameras may take other values, for example both greater than or equal to 50°, which is not limited here.
  • in addition, the closest focusing distances of the first camera and the second camera may both be less than or equal to 20 cm; likewise, the closest focusing distances of the two cameras may take other values, for example both less than or equal to 30 cm, which is not limited here.
  • it can be understood that the photographing device may be a terminal device such as a mobile phone or a tablet computer, and the first camera and the second camera can be arranged on the photographing device in multiple ways. Referring to FIG. 4(a) and FIG. 4(b), taking a mobile phone as the photographing device as an example, the two cameras may be arranged side by side as shown in FIG. 4(a), or one above the other as shown in FIG. 4(b). Both cameras may be placed on the back of the phone's display or on the same side as the display; that is, both may serve as front cameras or both as rear cameras, which is not limited here. The distance between the two cameras is likewise determined by the actual application and is not limited here.
  • the camera device may include more cameras in addition to the first camera and the second camera.
  • 302. The photographing device determines the first depth of field according to the target position.
  • after the second camera focuses on the target position, the photographing device can calculate the first depth of field, i.e., the depth of field that the first camera needs to have, where the overlapping depth of field of the first depth of field and the second depth of field must be less than a third threshold and the target must lie within the overlapping depth of field.
  • it can be understood that the depth of field of a camera differs as its focus position differs. As shown in FIG. 5, the abscissa is the distance from the camera's focus position to the camera, and the ordinate is the distance from points within the corresponding depth of field to the camera. FIG. 5 shows that the closer the focus position is to the camera, the smaller the camera's current depth-of-field range. The embodiments of the present application require the first camera and the second camera to have different depths of field when shooting. The second camera focuses on the target position where the target is located, so the target must lie within the second depth of field; at the same time, the first depth of field must also cover the target, so the first depth of field and the second depth of field share a range of overlap, namely the overlapping depth of field. This overlap must not be too large, so the overlapping depth of field must satisfy the condition of being less than the third threshold; for example, the third threshold may be 10 cm or 20 cm, which is not limited here (an illustrative depth-of-field computation follows below).
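The patent obtains depths of field from the camera's depth-of-field table. Purely as a hedged illustration of how near and far limits relate to the focus position, the sketch below uses the standard hyperfocal-distance approximation from geometric optics; these formulas and the parameter values are general optics assumptions, not taken from the patent:

```python
def dof_limits(focus_dist_m: float, focal_len_m: float,
               f_number: float, coc_m: float) -> tuple[float, float]:
    """Near/far depth-of-field limits via the hyperfocal approximation.

    H    = f^2 / (N * c) + f          (hyperfocal distance)
    near = s (H - f) / (H + s - 2f)
    far  = s (H - f) / (H - s)        (infinite when s >= H)
    """
    H = focal_len_m ** 2 / (f_number * coc_m) + focal_len_m
    s = focus_dist_m
    near = s * (H - focal_len_m) / (H + s - 2 * focal_len_m)
    far = float("inf") if s >= H else s * (H - focal_len_m) / (H - s)
    return near, far

# Illustrative phone-like values: f = 4 mm, f/2.0, CoC = 3 um, focus at 1 m.
print(dof_limits(1.0, 0.004, 2.0, 0.000003))  # roughly (0.73 m, 1.60 m)
```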
  • 303. The photographing device determines the first position corresponding to the first depth of field.
  • after determining the first depth of field, the photographing device may determine the corresponding first position according to the depth-of-field table of the first camera; the first position is the position on which the first camera needs to focus, and it is different from the target position. It can be understood that because the camera's depth of field changes as its focus position changes, and the correspondence between focus position and depth of field can be read from the camera's depth-of-field table, the first position can be computed from the first depth of field.
  • it should be noted that there are several specific ways in which the first position can differ from the target position, described separately below:
  • 1. The distance from the first position to the photographing device is less than the distance from the target position to the photographing device.
  • as shown in FIG. 6, the second camera focuses on the target position and has the second depth of field, and the first camera focuses on the first position and has the first depth of field, the first position being closer to the photographing device than the target position. The overlapping depth of field here is the range between the far point of the first depth of field and the near point of the second depth of field. To keep the overlapping depth of field as small as possible, the ideal case is that the first depth of field just covers the target, i.e., the far point of the first depth of field coincides with the target position.
  • 2. The distance from the first position to the photographing device is greater than the distance from the target position to the photographing device.
  • as shown in FIG. 7, the second camera focuses on the target position and has the second depth of field, and the first camera focuses on the first position and has the first depth of field, the first position being farther from the photographing device than the target position. The overlapping depth of field here is the range between the near point of the first depth of field and the far point of the second depth of field. To keep the overlapping depth of field as small as possible, the ideal case is that the first depth of field just covers the target, i.e., the near point of the first depth of field coincides with the target position (an illustrative table-lookup sketch follows).
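A minimal sketch of choosing the first position from a depth-of-field table is given below. The table format, names, and numbers are hypothetical (the patent does not specify them); the logic implements case 1 described above: among focus positions closer than the target, pick one whose far point just covers the target while the overlap with the second depth of field stays below the third threshold:

```python
from typing import Optional

def pick_first_position(dof_table: dict[float, tuple[float, float]],
                        target_pos: float,
                        second_near: float,
                        max_overlap: float) -> Optional[float]:
    """Choose a focus position closer than the target (case 1) whose
    depth-of-field far point covers the target, keeping the overlap
    between the two depths of field below max_overlap.

    dof_table   -- {focus distance: (near limit, far limit)}, metres
    target_pos  -- distance of the target from the device, metres
    second_near -- near point of the second camera's depth of field
    max_overlap -- third threshold on the overlapping depth of field
    """
    best = None
    for focus, (near, far) in sorted(dof_table.items()):
        if focus >= target_pos:
            continue                      # case 1: focus closer than target
        overlap = far - second_near       # overlap of the two depths of field
        if far >= target_pos and 0 <= overlap < max_overlap:
            # Prefer the far point closest to the target (ideal: coincident).
            if best is None or far < best[1]:
                best = (focus, far)
    return best[0] if best else None

# Hypothetical table: the first DoF's far point should just cover a 1.0 m target.
table = {0.7: (0.60, 0.95), 0.8: (0.66, 1.02), 0.9: (0.72, 1.15)}
print(pick_first_position(table, target_pos=1.0, second_near=0.95,
                          max_overlap=0.10))  # -> 0.8
```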
  • 304. The photographing device controls, by means of the controller, the first camera to focus on the first position.
  • after determining the first position, the photographing device can drive the controller, which controls the first camera to move along the optical axis and focus on the first position.
  • 305. The photographing device acquires the first image and the second image.
  • the photographing device can obtain the first image captured when the first camera focuses on the first position and the second image captured when the second camera focuses on the target position. It can be understood that once the first camera and the second camera are turned on, real-time image processing can be performed, for example basic processing such as brightness and color; after processing, the images are sent to the display for framing, finally producing the first image and the second image.
  • it should be noted that the user needs to hold the photographing device steady while shooting, and no object in the scene should move, so that the first image and the second image are two images captured of the same scene.
  • 306. The photographing device blurs the area other than the target in the first image or the second image to obtain the target image.
  • the photographing device can calculate the disparity information of the first image and the second image and then compute depth information from the disparity information. Optionally, the depth information may be first depth information registered to the coordinates of the first image, or second depth information registered to the coordinates of the second image. The photographing device may then determine, according to the first depth information, the first area other than the target in the first image, or determine, according to the second depth information, the second area other than the target in the second image, where each such area includes an edge region of the image that meets the target and/or a hollow region within the target. The depth information can be expressed concretely as a depth map; FIG. 8 shows a depth map computed from images captured using the method of the embodiments of the present application. According to the depth map, the photographing device can segment the target from the area outside it. As FIG. 8 shows, with the human body as the target, the edge region of the body and the hollow regions between the fingers are clearly sharper than in the depth map of FIG. 2; that is, FIG. 8 is more accurate than FIG. 2, and these regions can be segmented more effectively. On this basis, the area other than the target is finally blurred to obtain a bokeh target image, with regions farther from the target's plane blurred more strongly.
  • optionally, the photographing device may select the first image and blur the first area other than the target on the basis of the first image to obtain the target image, or select the second image and blur the second area other than the target on the basis of the second image; which image is used for the blurring is not limited here (an end-to-end sketch of this pipeline follows).
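To make the whole flow concrete, here is a hedged end-to-end sketch using OpenCV. It approximates the idea rather than reproducing the patent's implementation: it computes a disparity map with semi-global block matching, triangulates depth, builds a target mask by thresholding depth around the focused plane, and blurs everything outside the mask with a single Gaussian kernel. The parameter values are illustrative and the inputs are assumed to be a rectified pair:

```python
import cv2
import numpy as np

def bokeh_from_stereo(img_first: np.ndarray, img_second: np.ndarray,
                      target_depth_m: float, tol_m: float,
                      focal_px: float, baseline_m: float) -> np.ndarray:
    """Sketch: disparity -> depth -> target mask -> background blur."""
    g1 = cv2.cvtColor(img_first, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img_second, cv2.COLOR_BGR2GRAY)

    # Disparity via semi-global block matching (values are in 1/16 pixel).
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=5)
    disparity = sgbm.compute(g1, g2).astype(np.float32) / 16.0

    # Triangulate: Z = f * B / d (invalid where disparity <= 0).
    with np.errstate(divide="ignore", invalid="ignore"):
        depth = focal_px * baseline_m / disparity
    depth[disparity <= 0] = np.inf

    # Target = pixels whose depth lies near the focused plane.
    mask = (np.abs(depth - target_depth_m) < tol_m).astype(np.uint8)

    blurred = cv2.GaussianBlur(img_first, (31, 31), 0)
    mask3 = cv2.merge([mask] * 3)
    return np.where(mask3 == 1, img_first, blurred)
```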
  • in the embodiments of the present application, the photographing device controls, by means of the controller, the second camera to focus on the target position where the target is located, giving the second camera the second depth of field; the photographing device determines the first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than the third threshold and the target lies in the overlapping depth of field; the photographing device then determines, according to the depth-of-field table of the first camera, the first position corresponding to the first depth of field, the first position being different from the target position; the photographing device controls the first camera, through the controller, to focus on the first position; the photographing device then acquires the first image captured when the first camera focuses on the first position and the second image captured when the second camera focuses on the target position; and the photographing device blurs the area other than the target in the first image or the second image to obtain the target image. Because the depths of field of the two cameras during shooting differ, the first image and the second image blur objects other than the target (foreground and background) to different degrees, so the target can be identified more accurately; in particular, the edge region where the target meets the foreground and background and/or the hollow regions within it can be segmented effectively, and only the area other than the target is blurred, yielding an image with a better bokeh effect and improving the user experience.
  • referring to FIG. 9, an embodiment of the image processing apparatus in the embodiments of the present application includes a first camera and a second camera, where the optical axes of the first camera and the second camera are parallel, the difference between the field of view of the first camera and the field of view of the second camera is less than the first threshold, and the difference between the focal length of the first camera and the focal length of the second camera is less than the second threshold.
  • in addition, the image processing apparatus further includes:
  • the control unit 901 is configured to control the second camera to focus on the target position where the target is located, wherein the second camera has a second depth of field when focusing on the target;
  • a first determining unit 902 configured to determine a first depth of field according to the target position, wherein an overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold, and the target is located in the overlapping depth of field;
  • the second determining unit 903 is configured to determine a first location corresponding to the first depth of field according to the depth of field table of the first camera, where the first location is different from the target location;
  • the control unit 901 is further configured to control the first camera to focus on the first position;
  • the acquiring unit 904 is configured to acquire the first image and the second image, where the first image is an image captured when the first camera focuses on the first position, and the second image is an image captured when the second camera focuses on the target position;
  • the blurring unit 905 is configured to perform blur processing on an area other than the target object in the first image or the second image to obtain a target image.
  • it should be noted that the control unit 901 in the embodiments of the present application may include a controller; that is, the first camera and the second camera may be driven to focus by a controller integrated in the control unit 901. Alternatively, the control unit 901 and the controller may be two different units, with the control unit 901 directing the controller and the controller controlling the first camera and the second camera to focus.
  • in the embodiments of the present application, the control unit 901 controls the second camera to focus on the target position where the target is located, giving the second camera the second depth of field; the first determining unit 902 determines the first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than the third threshold and the target lies in the overlapping depth of field; the second determining unit 903 then determines, according to the depth-of-field table of the first camera, the first position corresponding to the first depth of field, the first position being different from the target position; the control unit 901 controls the first camera to focus on the first position; the acquiring unit 904 then acquires the first image captured when the first camera focuses on the first position and the second image captured when the second camera focuses on the target position; and the blurring unit 905 blurs the area other than the target in the first image or the second image to obtain the target image. Because the depths of field of the two cameras differ during shooting, the first image and the second image blur objects other than the target to different degrees, so the target can be identified more accurately; in particular, the edge region where the target meets the foreground and background and/or the hollow regions within it can be segmented effectively, and only the area other than the target is blurred, yielding an image with a better bokeh effect and improving the user experience.
  • optionally, on the basis of the embodiment corresponding to FIG. 9 and referring to FIG. 10, in another embodiment of the image processing apparatus the blurring unit 905 includes:
  • a first calculating module 9051 configured to calculate disparity information of the first image and the second image
  • the second calculating module 9052 is configured to calculate first depth information of the first image according to the disparity information
  • a determining module 9053 configured to determine, according to the first depth information, a first region other than the target in the first image, where the first region includes an edge region in the first image that is in contact with the target and/or a hollow region in the target;
  • the blurring module 9054 is configured to perform blur processing on the first region in the first image to obtain a target image.
  • the second calculating module 9052 is further configured to calculate second depth information of the second image according to the disparity information
  • the determining module 9053 is further configured to determine, according to the second depth information, a second region other than the target in the second image, where the second region includes an edge region of the second image that meets the target and/or a hollow region within the target;
  • the blurring module 9054 is further configured to perform blur processing on the second region in the second image to obtain a target image.
  • the image processing apparatus in the embodiments of the present application has been described above from the perspective of modular functional entities; the photographing device in the embodiments of the present application is described below from the perspective of hardware processing:
  • the embodiments of the present application further provide a photographing device, as shown in FIG. 11. For convenience of description, only the parts relevant to the embodiments of the present application are shown; for technical details not disclosed, refer to the method part of the embodiments of the present application. The photographing device may be a terminal device such as a mobile phone, a tablet computer, a personal digital assistant (PDA), or an in-vehicle computer; the following takes a mobile phone as the photographing device as an example.
  • FIG. 11 is a block diagram of a partial structure of a mobile phone related to the photographing device provided by the embodiments of the present application. Referring to FIG. 11, the mobile phone includes: a memory 1120, an input unit 1130, a display unit 1140, a controller 1150, a first camera 1160, a second camera 1170, a processor 1180, and a power supply 1190. Those skilled in the art will appreciate that the structure shown in FIG. 11 does not constitute a limitation on the mobile phone, which may include more or fewer components than illustrated, combine certain components, or arrange components differently.
  • the memory 1120 can be used to store software programs and modules, and the processor 1180 executes the phone's various functional applications and data processing by running the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the applications required by at least one function (such as a sound-playback function or an image-playback function), while the data storage area may store data created through use of the phone (such as audio data or a phone book). In addition, the memory 1120 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device or flash memory device, or other volatile solid-state storage devices.
  • the input unit 1130 can be configured to receive input numeric or character information and to generate key-signal inputs related to user settings and function control of the phone. Specifically, the input unit 1130 may include a touch panel 1131 and other input devices 1132. The touch panel 1131, also called a touch screen, can collect the user's touch operations on or near it (such as operations performed on or near the touch panel 1131 with a finger, stylus, or any other suitable object or accessory) and drive the corresponding connected device according to a preset program. Optionally, the touch panel 1131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch and the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1180, and can receive and execute commands sent by the processor 1180. The touch panel 1131 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1131, the input unit 1130 may also include other input devices 1132, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys or a power key), a trackball, a mouse, a joystick, and the like.
  • the display unit 1140 can be used to display information input by the user or provided to the user, as well as the phone's various menus; in the embodiments of the present application it is mainly used to display the images captured by the cameras. The display unit 1140 may include a display panel 1141, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 1131 may cover the display panel 1141; when the touch panel 1131 detects a touch operation on or near it, it passes the event to the processor 1180 to determine its type, and the processor 1180 then provides the corresponding visual output on the display panel 1141 according to the type of the touch event. Although in FIG. 11 the touch panel 1131 and the display panel 1141 are shown as two independent components implementing the phone's input and output functions, in some embodiments the touch panel 1131 and the display panel 1141 may be integrated to implement the input and output functions of the phone.
  • the controller 1150 can be used to control the first camera and the second camera to move along their optical axes and focus.
  • the first camera 1160 and the second camera 1170 can be used to photograph a scene to obtain the first image and the second image, respectively, where the optical axis of the first camera 1160 is parallel to that of the second camera 1170, the difference between the field of view of the first camera 1160 and the field of view of the second camera 1170 is less than the first threshold, and the difference between the focal length of the first camera 1160 and the focal length of the second camera 1170 is less than the second threshold.
  • the processor 1180 is the control center of the phone: it connects the various parts of the entire phone using various interfaces and lines, and performs the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 1120 and invoking the data stored in the memory 1120, thereby monitoring the phone as a whole. Optionally, the processor 1180 may include one or more processing units; preferably, the processor 1180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 1180.
  • the phone also includes a power supply 1190 (such as a battery) that powers the various components. Preferably, the power supply can be logically coupled to the processor 1180 via a power management system, so that functions such as charging, discharging, and power-consumption management are managed through the power management system.
  • in the embodiments of the present application, the processor 1180 is specifically configured to perform all or part of the actions performed by the photographing device in the embodiment shown in FIG. 3; details are not described here again.
  • in the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division into units is only a division by logical function; in actual implementation there may be other divisions, such as combining multiple units or components or integrating them into another system, or omitting or not executing some features. Furthermore, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in another form.
  • the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • in addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable storage medium that includes a number of instructions causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application disclose an image processing method, an image processing device, and a photographing device, used to improve the user experience. The method of the embodiments of the present application includes: the photographing device controls, by means of a controller, a second camera to focus on a target position where a target object is located, where the second camera has a second depth of field when focused on the target; the photographing device determines a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold and the target lies in the overlapping depth of field; the photographing device determines, according to a depth-of-field table of the first camera, a first position corresponding to the first depth of field, where the first position is different from the target position; the photographing device controls, by means of the controller, the first camera to focus on the first position; the photographing device then acquires a first image and a second image; and the photographing device blurs the area other than the target in the first image or the second image to obtain a target image.

Description

Image processing method, image processing device, and photographing device
This application claims priority to Chinese Patent Application No. 201810028792.1, filed with the Chinese Patent Office on January 11, 2018 and entitled "Image processing method, image processing device, and photographing device", which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
The present application relates to the field of image application technologies, and in particular to an image processing method, an image processing device, and a photographing device.
BACKGROUND
In the imaging field, aperture size is an important indicator of an imaging lens. A large aperture not only increases image-plane illuminance and improves the image signal-to-noise ratio, but also produces a shallow depth of field, so that the captured image shows a sharp subject with the rest blurred, i.e., the bokeh effect.
In thin and light consumer electronic products, size constraints prevent a single lens from achieving a large-aperture bokeh effect. The usual approach is to use two lenses to form a parallel dual-camera image acquisition system to photograph the target. After focusing on the target, the two images captured by the two lenses are acquired and transformed into a common coordinate system, parallax is computed over the overlapping region of the two images, and the distance from the photographed object to the camera is calculated from the parallax, yielding a depth map of the shooting scene. The image outside the plane of the target is then blurred according to the depth map to achieve the bokeh effect.
However, because the focal lengths of the two cameras are similar, the depths of field of the two images obtained when both cameras focus on the target are also similar; that is, there is no obvious difference in how blurred the two images are with respect to objects other than the target (foreground and background). The depth map obtained from the two images is therefore of poor accuracy, and it is difficult to segment the edge region where the target meets the foreground and background or the hollow regions within the target. The target often ends up blurred, or areas outside the target remain unblurred; the bokeh effect of the resulting photograph is unsatisfactory, and the user experience is poor.
SUMMARY
Embodiments of the present application provide an image processing method, an image processing device, and a photographing device, used to capture images with a better bokeh effect and improve the user experience.
In view of this, a first aspect of the embodiments of the present application provides an image processing method applied to a photographing device, where the photographing device includes a first camera, a second camera, and a controller; the optical axes of the first camera and the second camera are parallel; the difference between the field of view of the first camera and the field of view of the second camera is less than a first threshold; and the difference between the focal length of the first camera and the focal length of the second camera is less than a second threshold. The method includes:
The photographing device drives the controller to move the second camera along its optical axis and focus on the target position where the target object is located, the second camera having a second depth of field when focused on the target position. The photographing device can determine, according to the target position, a first depth of field, i.e., the depth of field that the first camera needs to have, where it must be ensured that the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold and that the target lies in the overlapping depth of field. The photographing device then determines, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, i.e., the position on which the first camera needs to focus, where the first position is different from the target position. The photographing device drives the controller to move the first camera along its optical axis and focus on the first position, after which the photographing device can obtain a first image captured when the first camera focuses on the first position and a second image captured when the second camera focuses on the target position; the photographing device can then blur the area other than the target in the first image or the second image to obtain a target image.
In the embodiments of the present application, the depths of field of the first camera and the second camera during shooting are different, and the first image and the second image blur objects other than the target (foreground and background) to different degrees. The target can therefore be identified more accurately; in particular, the edge region where the target meets the foreground and background and/or the hollow regions within it can be segmented effectively, and only the area other than the target is blurred, yielding an image with a better bokeh effect and improving the user experience.
With reference to the first aspect of the embodiments of the present application, in a first implementation of the first aspect, the photographing device blurring the area other than the target in the first image or the second image to obtain a target image includes:
The photographing device calculates the disparity information of the first image and the second image and computes from it first depth information registered to the coordinates of the first image; it then determines, according to the first depth information, a first area other than the target in the first image, i.e., distinguishes the target in the first image from the first area outside it, where the first area includes an edge region of the first image that meets the target and/or a hollow region within the target; the photographing device then blurs the first area of the first image to obtain the target image.
With reference to the first aspect of the embodiments of the present application, in a second implementation of the first aspect, the photographing device blurring the area other than the target in the first image or the second image to obtain a target image includes:
The photographing device calculates the disparity information of the first image and the second image and computes from it second depth information registered to the coordinates of the second image; it then determines, according to the second depth information, a second area other than the target in the second image, i.e., distinguishes the target in the second image from the second area outside it, where the second area includes an edge region of the second image that meets the target and/or a hollow region within the target; the photographing device then blurs the second area of the second image to obtain the target image.
Through the solutions provided in the embodiments of the present application, the photographing device may select either the first image or the second image, compute that image's depth information, determine the area other than the target in it, and blur that area to obtain the target image. This provides several options for the embodiments of the present application and enriches the realizability of the solution.
With reference to the first aspect of the embodiments of the present application, the first implementation of the first aspect, or the second implementation of the first aspect, in a third implementation of the first aspect,
the fields of view of the first camera and the second camera are both greater than or equal to 60°.
Through the solutions provided in the embodiments of the present application, it can be ensured that both the first camera and the second camera have a sufficiently large field of view, the coverage of the images captured by the two cameras is relatively large, and the resulting target image can have sufficiently large coverage.
With reference to the first aspect of the embodiments of the present application, the first implementation of the first aspect, or the second implementation of the first aspect, in a fourth implementation of the first aspect,
the closest focusing distances of the first camera and the second camera are both less than or equal to 20 cm.
Through the solutions provided in the embodiments of the present application, it can be ensured that both cameras can focus on sufficiently close scenes, improving the practicality of the solution.
A second aspect of the embodiments of the present application provides an image processing apparatus including a first camera and a second camera, where the optical axis of the first camera is parallel to that of the second camera, the difference between the field of view of the first camera and the field of view of the second camera is less than a first threshold, and the difference between the focal length of the first camera and the focal length of the second camera is less than a second threshold. The image processing apparatus further includes:
a control unit configured to control the second camera to focus on a target position where the target is located, where the second camera has a second depth of field when focused on the target;
a first determining unit configured to determine a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold, and the target lies in the overlapping depth of field;
a second determining unit configured to determine, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, where the first position is different from the target position;
the control unit being further configured to control the first camera to focus on the first position;
an acquiring unit configured to acquire a first image and a second image, the first image being an image captured when the first camera focuses on the first position and the second image being an image captured when the second camera focuses on the target position;
a blurring unit configured to blur an area other than the target in the first image or the second image to obtain a target image.
Optionally, the blurring unit includes:
a first calculating module configured to calculate disparity information of the first image and the second image;
a second calculating module configured to calculate first depth information of the first image according to the disparity information;
a determining module configured to determine, according to the first depth information, a first area other than the target in the first image, the first area including an edge region of the first image that meets the target and/or a hollow region within the target;
a blurring module configured to blur the first area of the first image to obtain the target image.
Optionally, the second calculating module may be further configured to calculate second depth information of the second image according to the disparity information;
the determining module may be further configured to determine, according to the second depth information, a second area other than the target in the second image, the second area including an edge region of the second image that meets the target and/or a hollow region within the target;
the blurring module may be further configured to blur the second area of the second image to obtain the target image.
In the embodiments of the present application, the control unit controls the second camera to focus on the target position where the target is located, giving the second camera a second depth of field; the first determining unit determines a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than the third threshold and the target lies in the overlapping depth of field; the second determining unit then determines, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, the first position being different from the target position; the control unit controls the first camera to focus on the first position; the acquiring unit then acquires the first image captured when the first camera focuses on the first position and the second image captured when the second camera focuses on the target position; and the blurring unit blurs the area other than the target in the first image or the second image to obtain the target image. It can be understood that the depths of field of the first camera and the second camera during shooting are different, and the first image and the second image blur objects other than the target (foreground and background) to different degrees; the target can therefore be identified more accurately, in particular the edge region where the target meets the foreground and background and/or the hollow regions within it can be segmented effectively, and only the area other than the target is blurred, yielding an image with a better bokeh effect and improving the user experience.
A third aspect of the embodiments of the present application provides a photographing device including a first camera and a second camera, where the optical axis of the first camera is parallel to that of the second camera, the difference between the field of view of the first camera and the field of view of the second camera is less than a first threshold, and the difference between the focal length of the first camera and the focal length of the second camera is less than a second threshold. The photographing device further includes:
a processor, a controller, a memory, a bus, and an input/output interface;
program code stored in the memory;
the processor performing the following operations when calling the program code in the memory:
driving the controller to control the second camera to focus on a target position where the target is located, where the second camera has a second depth of field when focused on the target;
determining a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than a third threshold, and the target lies in the overlapping depth of field;
determining, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, where the first position is different from the target position;
driving the controller to control the first camera to focus on the first position;
acquiring a first image and a second image, the first image being an image captured when the first camera focuses on the first position and the second image being an image captured when the second camera focuses on the target position;
blurring an area other than the target in the first image or the second image to obtain a target image.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium including instructions that, when run on a computer, cause the computer to perform some or all of the steps of the image processing method of the first aspect.
A fifth aspect of the embodiments of the present application provides a computer program product containing instructions that, when run on a computer, cause the computer to perform some or all of the steps of the image processing method of the first aspect.
It can be seen from the above technical solutions that the embodiments of the present application have the following advantages:
In the embodiments of the present application, the photographing device controls, by means of the controller, the second camera to focus on the target position where the target object is located, giving the second camera a second depth of field; the photographing device determines a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is less than the third threshold and the target lies in the overlapping depth of field; the photographing device then determines, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, the first position being different from the target position; the photographing device controls, through the controller, the first camera to focus on the first position; the photographing device then acquires the first image captured when the first camera focuses on the first position and the second image captured when the second camera focuses on the target position; and the photographing device blurs the area other than the target in the first image or the second image to obtain a target image. It can be understood that the depths of field of the first camera and the second camera during shooting are different, and the first image and the second image blur objects other than the target (foreground and background) to different degrees; the target can therefore be identified more accurately, in particular the edge region where the target meets the foreground and background and/or the hollow regions within it can be segmented effectively, and only the area other than the target is blurred, yielding an image with a better bokeh effect and improving the user experience.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of the parallax between two cameras during shooting;
FIG. 2 is a depth map calculated from parallax in the prior art;
FIG. 3 is a schematic diagram of an embodiment of an image processing method of the present application;
FIG. 4(a) is a schematic diagram of one arrangement of the two cameras on the photographing device of the present application;
FIG. 4(b) is a schematic diagram of another arrangement of the two cameras on the photographing device of the present application;
FIG. 5 is a schematic diagram of how a camera's depth of field changes as its focus position changes, according to the present application;
FIG. 6 is a schematic diagram of the overlapping depth of field of the present application;
FIG. 7 is another schematic diagram of the overlapping depth of field of the present application;
FIG. 8 is a depth map calculated from parallax according to the present application;
FIG. 9 is a schematic diagram of an embodiment of an image processing apparatus of the present application;
FIG. 10 is a schematic diagram of another embodiment of an image processing apparatus of the present application;
FIG. 11 is a schematic structural diagram of a photographing device of the present application.
DETAILED DESCRIPTION
Embodiments of the present application provide an image processing method and a photographing device, used to capture images with a better bokeh effect and improve the user experience.
The terms "first", "second", "third", "fourth", and the like (if present) in the specification, claims, and accompanying drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described here. Furthermore, the terms "include" and "have" and any variants of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.
The embodiments of the present application can be applied to a photographing device that includes two cameras whose optical axes are parallel and whose fields of view and focal lengths are the same or similar. Because the optical axes of the two cameras do not coincide, i.e., there is a distance between the two cameras, the images they capture exhibit parallax. Referring to FIG. 1, the two cameras are A and B, both with focal length f; the target to be photographed is at point P, and the target is imaged at positions P1 and P2 on the two imaging planes, respectively. It can be seen that the distance from P1 to the left edge of camera A's imaging plane is L1 and the distance from P2 to the left edge of camera B's imaging plane is L2; L1 and L2 are not equal, so the two images captured by cameras A and B exhibit parallax. By the principle of similar triangles, the distance Z from point P to the plane of the two cameras can be calculated, and on this basis a depth map of the overlapping region of the two cameras' shooting scenes can further be obtained.
The depth map may look like FIG. 2. According to the depth map, objects lying in different planes of the shooting scene can be segmented: for example, both cameras focus on the plane where the human body is located, the depth map shown in FIG. 2 is obtained, the plane of the human body is separated from the foreground and background according to the depth map, the area outside the human body's plane is then blurred, and a bokeh image is finally obtained. However, because the focal lengths of the two cameras are similar, the depths of field of the two cameras when both focus on the human body are also similar; that is, there is no significant difference in how the two images blur the parts other than the human body (foreground and background). The resulting depth map shown in FIG. 2 is therefore of poor accuracy: the edge region of the human body and the hollow regions between the fingers are not clear, this part of the scene is hard to segment, and the bokeh effect of the resulting photograph is not ideal.
To this end, the embodiments of the present application provide an image processing method based on the dual-camera shooting system, which can capture photographs with a better bokeh effect.
For ease of understanding, the specific procedure in the embodiments of the present application is described below.
Referring to FIG. 3, an embodiment of the image processing method in the embodiments of the present application includes:
301. The photographing device controls, by means of the controller, the second camera to focus on the target position where the target is located.
In the embodiments of the present application, the photographing device includes a first camera, a second camera, and a controller, where the optical axes of the first camera and the second camera are parallel, the difference between the field of view of the first camera and the field of view of the second camera is less than the first threshold, and the difference between the focal length of the first camera and the focal length of the second camera is less than the second threshold. After determining the target to be photographed, the photographing device can determine the target position where the target is located, and then drive the controller, which controls the second camera to move along the optical axis and focus on the target position. It can be understood that once the second camera focuses on the target position, the second depth of field that the second camera then has can be determined.
It should be noted that the depth of field of a camera is the range of distances in front of and behind the focused subject within which the camera produces an acceptably sharp image. The two endpoints of the depth of field are the near point and the far point: the near point is the point within the depth of field closest to the camera, and the far point is the point within it farthest from the camera.
It should be noted that the fields of view of the first camera and the second camera may both be greater than or equal to 60°. With both cameras having relatively large fields of view, the overlapping region captured by the two cameras is larger, and a photograph with a sufficiently large framing range can finally be obtained. Of course, the fields of view of the two cameras may take other values, for example both greater than or equal to 50°, which is not limited here.
In addition, the closest focusing distances of the first camera and the second camera may both be less than or equal to 20 cm; likewise, the closest focusing distances of the two cameras may take other values, for example both less than or equal to 30 cm, which is not limited here.
It can be understood that the photographing device may be a terminal device such as a mobile phone or a tablet computer, and the first camera and the second camera can be arranged on the photographing device in multiple ways. Referring to FIG. 4(a) and FIG. 4(b), taking a mobile phone as the photographing device as an example, the two cameras may be arranged side by side as shown in FIG. 4(a) or one above the other as shown in FIG. 4(b). Both cameras may be placed on the back of the phone's display or on the same side as the display; that is, both may serve as front cameras or both as rear cameras, which is not limited here. The distance between the two cameras is likewise determined by the actual application and is not limited here.
Optionally, the photographing device may include more cameras in addition to the first camera and the second camera.
302. The photographing device determines the first depth of field according to the target position.
After the second camera focuses on the target position, the photographing device can calculate the first depth of field, i.e., the depth of field that the first camera needs to have, where the overlapping depth of field of the first depth of field and the second depth of field must be less than a third threshold and the target must lie in the overlapping depth of field.
It can be understood that the depth of field of a camera differs as its focus position differs. As shown in FIG. 5, the abscissa is the distance from the camera's focus position to the camera, and the ordinate is the distance from points within the corresponding depth of field to the camera. It can be seen from FIG. 5 that the closer the focus position is to the camera, the smaller the camera's current depth-of-field range. The embodiments of the present application require the first camera and the second camera to have different depths of field when shooting. The second camera focuses on the target position where the target is located, so the target must lie within the second depth of field; at the same time, the first depth of field must also cover the target, so the first depth of field and the second depth of field share a range of overlap, namely the overlapping depth of field. This overlap must not be too large, so the overlapping depth of field must satisfy the condition of being less than the third threshold; for example, the third threshold may be 10 cm or 20 cm, which is not limited here.
303. The photographing device determines the first position corresponding to the first depth of field.
After determining the first depth of field, the photographing device may determine the corresponding first position according to the depth-of-field table of the first camera; the first position is the position on which the first camera needs to focus, and it is different from the target position. It can be understood that because the camera's depth of field changes as its focus position changes, and the correspondence between focus position and depth of field can be read from the camera's depth-of-field table, the first position can be computed from the first depth of field.
It should be noted that there are several specific ways in which the first position can differ from the target position, described separately below:
1. The distance from the first position to the photographing device is less than the distance from the target position to the photographing device.
As shown in FIG. 6, the second camera focuses on the target position and has the second depth of field, and the first camera focuses on the first position and has the first depth of field, the first position being closer to the photographing device than the target position. The overlapping depth of field here is the range between the far point of the first depth of field and the near point of the second depth of field. To keep the overlapping depth of field as small as possible, the ideal case is that the first depth of field just covers the target, i.e., the far point of the first depth of field coincides with the target position.
2. The distance from the first position to the photographing device is greater than the distance from the target position to the photographing device.
As shown in FIG. 7, the second camera focuses on the target position and has the second depth of field, and the first camera focuses on the first position and has the first depth of field, the first position being farther from the photographing device than the target position. The overlapping depth of field here is the range between the near point of the first depth of field and the far point of the second depth of field. To keep the overlapping depth of field as small as possible, the ideal case is that the first depth of field just covers the target, i.e., the near point of the first depth of field coincides with the target position.
304. The photographing device controls, by means of the controller, the first camera to focus on the first position.
After determining the first position, the photographing device can drive the controller, which controls the first camera to move along the optical axis and focus on the first position.
305. The photographing device acquires the first image and the second image.
The photographing device can obtain the first image captured when the first camera focuses on the first position and the second image captured when the second camera focuses on the target position. It can be understood that once the first camera and the second camera are turned on, real-time image processing can be performed, for example basic processing such as brightness and color; after processing, the images are sent to the display for framing, finally producing the first image and the second image.
It should be noted that the user needs to hold the photographing device steady while shooting, and no object in the scene should move, so that the first image and the second image are two images captured of the same scene.
306、拍照设备对第一图像或第二图像中目标物以外的区域进行模糊处理得到目标图像。
拍照设备可以计算得到的第一图像及第二图像的视差信息,进一步根据视差信息可以计算得到深度信息,可选地,该深度信息可以是与第一图像坐标相同的第一深度信息,也可以是与第二图像坐标相同的第二深度信息,之后拍照设备可以根据第一深度信息确定第一图像中目标物以外的第一区域,或者也可以根据第二深度信息确定第二图像中目标物以外的第二区域,其中,第一区域包括第一图像中与目标物相接的边缘区域和/或目标物中的镂空区域,第二区域包括第二图像中与目标物相接的边缘区域和/或目标物中的镂空区域,该深度信息具体可以表现为深度图,如图8所示为采用本申请实施例方法进行拍摄并计算得到的深度图,拍照设备可以根据深度图对目标物及目标物以外的区域进行分割处理,也就是区分目标物与目标物以外的区域,从图8中可以看出,以人体为拍摄的目标物,人体的边缘区域及人体手指之间的镂空区域相对于图2的深度图明显更清晰,也就是图8相对于图2精确度更高,可以更有效地对人体的边缘区域及人体手指之间的镂空区域进行分割,在此基础上最后对目标物以外的区域进行模糊处理得到虚化的目标图像,并且距离目标物所在平面越远的区域模糊程度越大。
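As an illustration of step 306 (a sketch only: OpenCV and NumPy are dependencies we assume for convenience, and the depth bands and kernel sizes are arbitrary example values, not parameters given in the patent), the blur can be made to grow with each pixel's distance from the target plane:

    import cv2
    import numpy as np

    def blur_outside_target(image, depth_map, target_depth_m, tol_m=0.1):
        """Blur regions away from the target plane, more strongly with distance.

        image: H x W x 3 uint8 array; depth_map: H x W depths in metres.
        Pixels within tol_m of the target plane are left sharp.
        """
        result = image.copy()
        diff = np.abs(depth_map - target_depth_m)
        # One Gaussian kernel per depth band; farther bands get larger kernels.
        for lo, hi, ksize in ((tol_m, 0.5, 11), (0.5, 1.5, 21), (1.5, np.inf, 31)):
            mask = (diff > lo) & (diff <= hi)
            if mask.any():
                blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
                result[mask] = blurred[mask]
        return result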
Optionally, the photographing device may select the first image and blur the first region outside the target object on the basis of the first image to obtain the target image, or select the second image and blur the second region outside the target object on the basis of the second image to obtain the target image; which image is chosen for the blurring is not limited here.
In the embodiment of the present application, the photographing device controls, through the controller, the second camera to focus on the target position where the target object is located, the second camera thereby having a second depth of field. The photographing device determines a first depth of field according to the target position, wherein the overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold and the target object lies within the overlapping depth of field. The photographing device then determines, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, the first position being different from the target position, and controls the first camera through the controller to focus on the first position. Subsequently, the photographing device acquires a first image captured when the first camera is focused on the first position and a second image captured when the second camera is focused on the target position, and blurs the region outside the target object in the first image or the second image to obtain a target image. It can be understood that, because the first camera and the second camera shoot with different depths of field, the first image and the second image differ in how blurred the objects other than the target object (the foreground and the background) appear. The target object can therefore be identified more accurately; in particular, the edge region where the target object adjoins its foreground and background and/or the hollowed-out regions within it can be segmented effectively, so that only the region outside the target object is blurred, yielding an image with a more satisfactory bokeh effect and improving the user experience.
The image processing method in the embodiment of the present application has been described above; the image processing apparatus in the embodiment of the present application is described below:
Referring to FIG. 9, an embodiment of the image processing apparatus in the embodiment of the present application includes a first camera and a second camera, wherein the optical axes of the first camera and the second camera are parallel, the difference between the field of view of the first camera and that of the second camera is smaller than a first threshold, and the difference between the focal length of the first camera and that of the second camera is smaller than a second threshold;
in addition, the image processing apparatus further includes:
a control unit 901, configured to control the second camera to focus on the target position where the target object is located, wherein the second camera has a second depth of field when focused on the target object;
a first determining unit 902, configured to determine a first depth of field according to the target position, wherein the overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold, and the target object is located in the overlapping depth of field;
a second determining unit 903, configured to determine, according to the depth-of-field table of the first camera, a first position corresponding to the first depth of field, wherein the first position is different from the target position;
the control unit 901, further configured to control the first camera to focus on the first position;
an acquiring unit 904, configured to acquire a first image and a second image, the first image being an image captured when the first camera is focused on the first position, and the second image being an image captured when the second camera is focused on the target position;
a blurring unit 905, configured to blur the region outside the target object in the first image or the second image to obtain the target image.
It should be noted that the control unit 901 in the embodiment of the present application may include a controller, that is, a controller integrated in the control unit 901 may control the focusing of the first camera and the second camera; alternatively, the control unit 901 and the controller may be two separate units, with the control unit 901 controlling the controller and the controller in turn controlling the focusing of the first camera and the second camera.
In the embodiment of the present application, the control unit 901 controls the second camera to focus on the target position where the target object is located, the second camera thereby having a second depth of field; the first determining unit 902 determines a first depth of field according to the target position, wherein the overlapping depth of field of the first and second depths of field is smaller than a third threshold and the target object lies within the overlap; the second determining unit 903 then determines, according to the first camera's depth-of-field table, the first position corresponding to the first depth of field, the first position being different from the target position; the control unit 901 controls the first camera to focus on the first position; the acquiring unit 904 then acquires the first image captured when the first camera is focused on the first position and the second image captured when the second camera is focused on the target position; and the blurring unit 905 blurs the region outside the target object in the first image or the second image to obtain the target image. It can be understood that, because the first camera and the second camera shoot with different depths of field, the first image and the second image differ in how blurred the objects other than the target object (the foreground and the background) appear. The target object can therefore be identified more accurately; in particular, the edge region where the target object adjoins its foreground and background and/or the hollowed-out regions within it can be segmented effectively, so that only the region outside the target object is blurred, yielding an image with a more satisfactory bokeh effect and improving the user experience.
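The cooperation of units 901 to 905 just described maps naturally onto a single pipeline. The outline below is purely illustrative of that modular split: the camera and controller objects, their method names, and the two injected callables are all invented here and are not defined by the patent:

    def run_pipeline(controller, first_cam, second_cam, target_pos,
                     determine_first_dof, blur_outside_target):
        """Orchestrate units 901-905 (hypothetical interfaces throughout).

        determine_first_dof and blur_outside_target stand in for the first
        determining unit 902 and the blurring unit 905, respectively.
        """
        controller.focus(second_cam, target_pos)                 # control unit 901
        first_dof = determine_first_dof(target_pos)              # unit 902
        first_pos = first_cam.dof_table.position_for(first_dof)  # unit 903
        controller.focus(first_cam, first_pos)                   # control unit 901
        first_img = first_cam.capture()                          # acquiring unit 904
        second_img = second_cam.capture()
        return blur_outside_target(first_img, second_img)        # blurring unit 905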
Optionally, on the basis of the embodiment corresponding to FIG. 9, and referring to FIG. 10, in another embodiment of the image processing apparatus of the embodiment of the present application,
the blurring unit 905 includes:
a first calculating module 9051, configured to calculate disparity information of the first image and the second image;
a second calculating module 9052, configured to calculate first depth information of the first image according to the disparity information;
a determining module 9053, configured to determine, according to the first depth information, a first region outside the target object in the first image, the first region including the edge region adjoining the target object in the first image and/or the hollowed-out regions in the target object;
a blurring module 9054, configured to blur the first region in the first image to obtain the target image.
Optionally,
the second calculating module 9052 is further configured to calculate second depth information of the second image according to the disparity information;
the determining module 9053 is further configured to determine, according to the second depth information, a second region outside the target object in the second image, the second region including the edge region adjoining the target object in the second image and/or the hollowed-out regions in the target object;
the blurring module 9054 is further configured to blur the second region in the second image to obtain the target image.
The image processing apparatus in the embodiment of the present application has been described above from the perspective of modular functional entities; the photographing device in the embodiment of the present application is described below from the perspective of hardware processing:
An embodiment of the present application further provides a photographing device. As shown in FIG. 11, for ease of description only the parts relevant to the embodiment of the present application are shown; for specific technical details not disclosed, refer to the method part of the embodiment of the present application. The photographing device may be a terminal device such as a mobile phone, a tablet computer, a personal digital assistant (PDA), or an in-vehicle computer; a mobile phone is taken as the photographing device by way of example:
FIG. 11 is a block diagram of part of the structure of a mobile phone related to the photographing device provided by the embodiment of the present application. Referring to FIG. 11, the mobile phone includes components such as a memory 1120, an input unit 1130, a display unit 1140, a controller 1150, a first camera 1160, a second camera 1170, a processor 1180, and a power supply 1190. Those skilled in the art will understand that the phone structure shown in FIG. 11 does not constitute a limitation on the phone, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The components of the mobile phone are described in detail below with reference to FIG. 11:
The memory 1120 may be configured to store software programs and modules; the processor 1180 executes the various functional applications and data processing of the phone by running the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a program storage area and a data storage area, wherein the program storage area may store the operating system and the application programs required by at least one function (such as a sound-playing function and an image-playing function), and the data storage area may store data created through use of the phone (such as audio data and a phone book). In addition, the memory 1120 may include a high-speed random access memory and may also include a non-volatile memory, for example at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 1130 may be configured to receive input digit or character information and to generate key signal inputs related to user settings and function control of the phone. Specifically, the input unit 1130 may include a touch panel 1131 and other input devices 1132. The touch panel 1131, also called a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 1131 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch panel 1131 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch orientation and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 1180, and can receive and execute commands sent by the processor 1180. In addition, the touch panel 1131 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1131, the input unit 1130 may also include other input devices 1132, which may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, and a joystick.
The display unit 1140 may be configured to display information input by the user or provided to the user as well as the various menus of the phone; in the embodiment of the present application it is mainly used to display the images captured by the cameras. The display unit 1140 may include a display panel 1141, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 1131 may cover the display panel 1141; when the touch panel 1131 detects a touch operation on or near it, it transmits the operation to the processor 1180 to determine the type of the touch event, and the processor 1180 then provides a corresponding visual output on the display panel 1141 according to the type of the touch event. Although in FIG. 11 the touch panel 1131 and the display panel 1141 are two independent components implementing the input and output functions of the phone, in some embodiments the touch panel 1131 and the display panel 1141 may be integrated to implement the input and output functions of the phone.
The controller 1150 may be configured to control the first camera and the second camera to move along the direction of their optical axes and to focus.
The first camera 1160 and the second camera 1170 may be configured to photograph the scene to obtain the first image and the second image respectively, wherein the optical axes of the first camera 1160 and the second camera 1170 are parallel, the difference between the field of view of the first camera 1160 and that of the second camera 1170 is smaller than a first threshold, and the difference between the focal length of the first camera 1160 and that of the second camera 1170 is smaller than a second threshold.
The processor 1180 is the control center of the phone; it connects all parts of the entire phone through various interfaces and lines, and performs the various functions of the phone and processes data by running or executing the software programs and/or modules stored in the memory 1120 and invoking the data stored in the memory 1120, thereby monitoring the phone as a whole. Optionally, the processor 1180 may include one or more processing units; preferably, the processor 1180 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1180.
The phone also includes a power supply 1190 (such as a battery) that supplies power to the components; preferably, the power supply may be logically connected to the processor 1180 through a power management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power management system.
In the embodiment of the present application, the processor 1180 is specifically configured to perform all or some of the actions performed by the photographing device in the embodiment shown in FIG. 3, which are not described again here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described again here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division into units is merely a division by logical function, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are merely intended to illustrate the technical solutions of the present application rather than to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein, and such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (11)

  1. An image processing method, applied to a photographing device, the photographing device comprising a first camera, a second camera, and a controller, wherein an optical axis of the first camera is parallel to an optical axis of the second camera, a difference between a field of view of the first camera and a field of view of the second camera is smaller than a first threshold, and a difference between a focal length of the first camera and a focal length of the second camera is smaller than a second threshold, wherein the method comprises:
    controlling, by the photographing device through the controller, the second camera to focus on a target position where a target object is located, wherein the second camera has a second depth of field when focused on the target object;
    determining, by the photographing device, a first depth of field according to the target position, wherein an overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold, and the target object is located in the overlapping depth of field;
    determining, by the photographing device according to a depth-of-field table of the first camera, a first position corresponding to the first depth of field, wherein the first position is different from the target position;
    controlling, by the photographing device through the controller, the first camera to focus on the first position;
    acquiring, by the photographing device, a first image and a second image, the first image being an image captured when the first camera is focused on the first position, and the second image being an image captured when the second camera is focused on the target position; and
    blurring, by the photographing device, a region outside the target object in the first image or the second image to obtain a target image.
  2. The method according to claim 1, wherein the blurring, by the photographing device, a region outside the target object in the first image or the second image to obtain a target image comprises:
    calculating, by the photographing device, disparity information of the first image and the second image;
    calculating, by the photographing device, first depth information of the first image according to the disparity information;
    determining, by the photographing device according to the first depth information, a first region outside the target object in the first image, the first region comprising an edge region adjoining the target object in the first image and/or a hollowed-out region in the target object; and
    blurring, by the photographing device, the first region in the first image to obtain the target image.
  3. The method according to claim 1, wherein the blurring, by the photographing device, a region outside the target object in the first image or the second image to obtain a target image comprises:
    calculating, by the photographing device, disparity information of the first image and the second image;
    calculating, by the photographing device, second depth information of the second image according to the disparity information;
    determining, by the photographing device according to the second depth information, a second region outside the target object in the second image, the second region comprising an edge region adjoining the target object in the second image and/or a hollowed-out region in the target object; and
    blurring, by the photographing device, the second region in the second image to obtain the target image.
  4. The method according to any one of claims 1 to 3, wherein
    the fields of view of the first camera and the second camera are both greater than or equal to 60°.
  5. The method according to any one of claims 1 to 3, wherein
    the minimum focusing distances of the first camera and the second camera are both smaller than or equal to 20 cm.
  6. An image processing apparatus, comprising a first camera and a second camera, wherein an optical axis of the first camera is parallel to an optical axis of the second camera, a difference between a field of view of the first camera and a field of view of the second camera is smaller than a first threshold, and a difference between a focal length of the first camera and a focal length of the second camera is smaller than a second threshold, the apparatus further comprising:
    a control unit, configured to control the second camera to focus on a target position where a target object is located, wherein the second camera has a second depth of field when focused on the target object;
    a first determining unit, configured to determine a first depth of field according to the target position, wherein an overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold, and the target object is located in the overlapping depth of field;
    a second determining unit, configured to determine, according to a depth-of-field table of the first camera, a first position corresponding to the first depth of field, wherein the first position is different from the target position;
    the control unit being further configured to control the first camera to focus on the first position;
    an acquiring unit, configured to acquire a first image and a second image, the first image being an image captured when the first camera is focused on the first position, and the second image being an image captured when the second camera is focused on the target position; and
    a blurring unit, configured to blur a region outside the target object in the first image or the second image to obtain a target image.
  7. The image processing apparatus according to claim 6, wherein the blurring unit comprises:
    a first calculating module, configured to calculate disparity information of the first image and the second image;
    a second calculating module, configured to calculate first depth information of the first image according to the disparity information;
    a determining module, configured to determine, according to the first depth information, a first region outside the target object in the first image, the first region comprising an edge region adjoining the target object in the first image and/or a hollowed-out region in the target object; and
    a blurring module, configured to blur the first region in the first image to obtain the target image.
  8. The image processing apparatus according to claim 6, wherein the blurring unit comprises:
    a first calculating module, configured to calculate disparity information of the first image and the second image;
    a second calculating module, configured to calculate second depth information of the second image according to the disparity information;
    a determining module, configured to determine, according to the second depth information, a second region outside the target object in the second image, the second region comprising an edge region adjoining the target object in the second image and/or a hollowed-out region in the target object; and
    a blurring module, configured to blur the second region in the second image to obtain the target image.
  9. A photographing device, comprising a first camera and a second camera, wherein an optical axis of the first camera is parallel to an optical axis of the second camera, a difference between a field of view of the first camera and a field of view of the second camera is smaller than a first threshold, and a difference between a focal length of the first camera and a focal length of the second camera is smaller than a second threshold, the photographing device further comprising:
    a processor, a controller, a memory, a bus, and an input/output interface, wherein
    the memory stores program code; and
    when invoking the program code in the memory, the processor performs the following operations:
    driving the controller to control the second camera to focus on a target position where a target object is located, wherein the second camera has a second depth of field when focused on the target object;
    determining a first depth of field according to the target position, wherein an overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold, and the target object is located in the overlapping depth of field;
    determining, according to a depth-of-field table of the first camera, a first position corresponding to the first depth of field, wherein the first position is different from the target position;
    driving the controller to control the first camera to focus on the first position;
    acquiring a first image and a second image, the first image being an image captured when the first camera is focused on the first position, and the second image being an image captured when the second camera is focused on the target position; and
    blurring a region outside the target object in the first image or the second image to obtain a target image.
  10. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 5.
  11. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 5.
PCT/CN2018/113926 2018-01-11 2018-11-05 Image processing method, image processing apparatus, and photographing device WO2019137081A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810028792.1A CN110035218B (zh) 2018-01-11 2018-01-11 Image processing method, image processing apparatus, and photographing device
CN201810028792.1 2018-01-11

Publications (1)

Publication Number Publication Date
WO2019137081A1 true WO2019137081A1 (zh) 2019-07-18

Family

ID=67219302

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/113926 WO2019137081A1 (zh) 2018-01-11 2018-11-05 Image processing method, image processing apparatus, and photographing device

Country Status (2)

Country Link
CN (1) CN110035218B (zh)
WO (1) WO2019137081A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144404A (zh) * 2019-12-06 2020-05-12 恒大新能源汽车科技(广东)有限公司 Method, apparatus, and system for detecting left-behind objects, computer device, and storage medium
CN114677425A (zh) * 2022-03-17 2022-06-28 北京小马慧行科技有限公司 Method and apparatus for determining the depth of field of an object

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7409604B2 (ja) * 2019-12-18 2024-01-09 キヤノン株式会社 Image processing device, imaging device, image processing method, program, and recording medium
CN112585941A (zh) * 2019-12-30 2021-03-30 深圳市大疆创新科技有限公司 Focusing method and apparatus, photographing device, movable platform, and storage medium
CN112469984B (zh) * 2019-12-31 2024-04-09 深圳迈瑞生物医疗电子股份有限公司 Image analysis device and imaging method thereof
CN111246093B (zh) * 2020-01-16 2021-07-20 Oppo广东移动通信有限公司 Image processing method and apparatus, storage medium, and electronic device
CN112702530B (zh) * 2020-12-29 2023-04-25 维沃移动通信(杭州)有限公司 Algorithm control method and electronic device
CN113688824B (zh) * 2021-09-10 2024-02-27 福建汇川物联网技术科技股份有限公司 Information collection method and apparatus for construction nodes, and storage medium
CN116051362B (zh) * 2022-08-24 2023-09-15 荣耀终端有限公司 Image processing method and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130169760A1 (en) * 2012-01-04 2013-07-04 Lloyd Watts Image Enhancement Methods And Systems
CN103763477A (zh) * 2014-02-21 2014-04-30 上海果壳电子有限公司 Dual-camera post-capture refocusing imaging apparatus and method
CN104424640A (zh) * 2013-09-06 2015-03-18 格科微电子(上海)有限公司 Method and apparatus for blurring an image
CN105847674A (zh) * 2016-03-25 2016-08-10 维沃移动通信有限公司 Preview image processing method based on a mobile terminal, and mobile terminal
CN107087091A (zh) * 2017-05-31 2017-08-22 广东欧珀移动通信有限公司 Housing assembly of electronic device and electronic device


Also Published As

Publication number Publication date
CN110035218B (zh) 2021-06-15
CN110035218A (zh) 2019-07-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18900447

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18900447

Country of ref document: EP

Kind code of ref document: A1