CN110035218B - Image processing method, image processing device and photographing equipment
- Publication number
- CN110035218B (application CN201810028792A)
- Authority: CN (China)
- Prior art keywords: image, camera, depth, field, target
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/2224—Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
- H04N5/2226—Determination of depth image, e.g. for foreground/background separation
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Studio Devices (AREA)
Abstract
The embodiment of the application discloses an image processing method, an image processing device and a photographing device, which are used for improving user experience. The method in the embodiment of the application comprises the following steps: the photographing device controls, through the controller, the second camera to focus to a target position where a target object is located, the second camera having a second depth of field when focused to the target position; the photographing device determines a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold and the target object is located within the overlapping depth of field; the photographing device determines, according to a depth of field table of the first camera, a first position corresponding to the first depth of field, the first position being different from the target position; the photographing device controls, through the controller, the first camera to focus to the first position; the photographing device then acquires a first image and a second image, and performs blurring processing on the region other than the target object in the first image or the second image to obtain a target image.
Description
Technical Field
The present application relates to the field of image application technologies, and in particular, to an image processing method, an image processing apparatus, and a photographing device.
Background
In the imaging field, aperture size is an important index of an imaging lens. A large aperture not only increases image-plane illuminance and improves the image signal-to-noise ratio, but also produces a shallow depth of field, so that the captured image has a blurring effect in which the main subject is sharp and the rest of the scene is blurred.
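As background, a standard thin-lens approximation (general optics, not part of the original disclosure) makes the aperture dependence explicit: with f-number N, focal length f, permissible circle of confusion c and focus distance s, the total depth of field is approximately

$$\text{DoF} \approx \frac{2 N c s^{2}}{f^{2}} \qquad \left(s \ll H,\ H \approx \frac{f^{2}}{N c}\right),$$

so a larger aperture (smaller N) directly yields a shallower depth of field at the same focus distance.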
In thin and light consumer electronics products, a single lens cannot achieve a large-aperture blurring effect because of the limited size. A common approach is to use two lenses to form a parallel dual-camera image acquisition system to shoot a target object: after focusing on the target object is completed, the two images respectively captured by the two lenses are acquired and converted into a common coordinate system, parallax calculation is performed on the overlapping area of the two images, and the distance from the photographed object to the camera is calculated from the parallax. A depth map of the shooting scene is thereby obtained, and blurring processing can then be performed, according to the depth map, on the image outside the plane where the target object is located, achieving the blurring effect.
However, since the focal lengths of the two cameras are close, the depths of field of the two images obtained when the two cameras focus on the target object are also close; that is, the two images show no obvious difference in the degree of blur of objects (foreground and background) other than the target object. The accuracy of the depth map obtained from the two images is therefore poor: the edge area where the target object meets its foreground and background, and any hollow areas within it, are not easily segmented. The target object is often blurred by mistake, or areas other than the target object are left unblurred, so the blurring effect of the captured picture is not ideal and the user experience is poor.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing apparatus and a photographing device, which are used for capturing an image with a more ideal blurring effect and improving user experience.
In view of this, a first aspect of the embodiments of the present application provides an image processing method applied to a photographing apparatus, where the photographing apparatus includes a first camera, a second camera and a controller, an optical axis of the first camera is parallel to an optical axis of the second camera, a difference between a field angle of the first camera and a field angle of the second camera is smaller than a first threshold, and a difference between a focal length of the first camera and a focal length of the second camera is smaller than a second threshold, the method including:
the photographing device drives the controller to move the second camera along its optical axis and focus it to the target position where a target object is located; the second camera has a second depth of field when focused to the target position. The photographing device can then determine a first depth of field according to the target position, namely the depth of field that the first camera needs to have, ensuring that the overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold and that the target object is located within the overlapping depth of field. Next, the photographing device can determine, according to a depth of field table of the first camera, a first position corresponding to the first depth of field, namely the position to which the first camera needs to focus, where the first position is different from the target position. The photographing device drives the controller to move the first camera along its optical axis and focus it to the first position. The photographing device can then acquire a first image captured when the first camera is focused to the first position and a second image captured when the second camera is focused to the target position, and perform blurring processing on the region other than the target object in the first image or the second image to obtain a target image.
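As an illustration only, the flow above can be condensed into the following sketch; every name in it is hypothetical and chosen for readability, not taken from the patent:

```python
# Hypothetical outline of the claimed capture flow; all names are illustrative.
def capture_with_blurring(device, target):
    target_pos = device.locate(target)                      # target position (focus distance)
    device.controller.focus(device.cam2, target_pos)        # second camera focuses on the target
    dof2 = device.cam2.depth_of_field_at(target_pos)        # second depth of field
    dof1 = device.plan_first_dof(target_pos, dof2)          # overlap < third threshold, target inside
    first_pos = device.cam1.dof_table.focus_position(dof1)  # first position (!= target_pos)
    device.controller.focus(device.cam1, first_pos)         # first camera focuses to first position
    img1 = device.cam1.capture()                            # first image
    img2 = device.cam2.capture()                            # second image
    return device.blur_outside(target, img1, img2)          # blur the region other than the target
```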
In the embodiment of the application, the first camera and the second camera have different depths of field when shooting, so the first image and the second image differ in the degree of blur of objects (foreground and background) other than the target object. The target object can therefore be identified more accurately; in particular, the edge area where the target object meets its foreground and background and/or the hollow areas within it can be segmented effectively, and only the area other than the target object is blurred, so that an image with a more ideal blurring effect is obtained and user experience is improved.
With reference to the first aspect of the embodiment of the present application, in a first implementation manner of the first aspect of the embodiment of the present application, the performing, by the photographing apparatus, a blurring process on a region other than the target object in the first image or the second image to obtain a target image includes:
the photographing device calculates parallax information of the first image and the second image and, from the parallax information, computes first depth information having the same coordinates as the first image. A first region in the first image other than the target object can then be determined according to the first depth information, i.e. the first region is distinguished from the target object, where the first region includes an edge region in the first image connected to the target object and/or a hollow region in the target object. The photographing device then performs blurring processing on the first region in the first image to obtain the target image.
With reference to the first aspect of the embodiment of the present application, in a second implementation manner of the first aspect of the embodiment of the present application, the performing, by the photographing apparatus, a blurring process on a region other than the target object in the first image or the second image to obtain a target image includes:
the photographing device calculates parallax information of the first image and the second image and, from the parallax information, computes second depth information having the same coordinates as the second image. A second region in the second image other than the target object can then be determined according to the second depth information, i.e. the second region is distinguished from the target object, where the second region includes an edge region in the second image connected to the target object and/or a hollow region in the target object. The photographing device then performs blurring processing on the second region in the second image to obtain the target image.
According to the scheme provided by the embodiment of the application, the photographing device may select either of the first image and the second image, calculate the depth information of that image, determine the region other than the target object in it, and perform blurring processing on that region to finally obtain the target image.
With reference to the first aspect of the embodiments of the present application, or the first or second implementation manner of the first aspect, in a third implementation manner of the first aspect of the embodiments of the present application,
the field angles of the first camera and the second camera are both greater than or equal to 60°.
The scheme provided by the embodiment of the application ensures that the first camera and the second camera both have sufficiently large field angles, so the coverage of the images captured by the two cameras is relatively large and the finally obtained target image can cover a sufficiently large range.
With reference to the first aspect of the embodiments of the present application, or the first or second implementation manner of the first aspect, in a fourth implementation manner of the first aspect of the embodiments of the present application,
the closest focusing distance between the first camera and the second camera is less than or equal to 20 cm.
The scheme provided by the embodiment of the application guarantees that both cameras can focus on scenes sufficiently close to the device, which improves the practicability of the scheme.
A second aspect of the embodiments of the present application provides an image processing apparatus, including a first camera and a second camera, where an optical axis of the first camera is parallel to an optical axis of the second camera, a difference between a field angle of the first camera and a field angle of the second camera is smaller than a first threshold, and a difference between a focal length of the first camera and a focal length of the second camera is smaller than a second threshold, the image processing apparatus further including:
the control unit is used for controlling the second camera to focus to a target position where a target object is located, wherein the second camera has a second depth of field when focusing to the target object;
a first determining unit, configured to determine a first depth of field according to the target position, wherein an overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold, and the target is located in the overlapping depth of field;
- a second determining unit, configured to determine a first position corresponding to the first depth of field according to a depth of field table of the first camera, where the first position is different from the target position;
the control unit is further used for controlling the first camera to focus to the first position;
the acquisition unit is used for acquiring a first image and a second image, wherein the first image is an image shot when the first camera is focused to the first position, and the second image is an image shot when the second camera is focused to the target position;
and the blurring unit is used for blurring the region except the target object in the first image or the second image to obtain a target image.
Optionally, the blurring unit comprises:
the first calculation module is used for calculating parallax information of the first image and the second image;
the second calculation module is used for calculating first depth information of the first image according to the parallax information;
- a determining module, configured to determine, according to the first depth information, a first region in the first image other than the target object, where the first region includes an edge region in the first image connected to the target object and/or a hollow region in the target object;
and the blurring module is used for blurring the first area in the first image to obtain the target image.
Optionally, the second calculating module may be further configured to calculate second depth information of the second image according to the disparity information;
the determining module may be further configured to determine, according to the second depth information, a second region in the second image, where the second region is other than the target object, and the second region includes an edge region in the second image, where the edge region is connected to the target object, and/or a hollow region in the target object;
the blurring module may be further configured to perform blurring processing on the second region in the second image to obtain the target image.
In the embodiment of the present application, the control unit controls the second camera to focus to the target position where the target object is located, the second camera having a second depth of field. The first determining unit determines a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold and the target object is located within the overlapping depth of field. The second determining unit then determines a first position corresponding to the first depth of field according to the depth of field table of the first camera, the first position being different from the target position, and the control unit controls the first camera to focus to the first position. The obtaining unit then obtains a first image captured when the first camera is focused to the first position and a second image captured when the second camera is focused to the target position, and the blurring unit performs blurring processing on the region other than the target object in the first image or the second image to obtain the target image. It can be understood that the first camera and the second camera have different depths of field when shooting, so the first image and the second image differ in the degree of blur of objects (foreground and background) other than the target object; the target object can be identified more accurately, the edge region where it meets its foreground and background and/or the hollow regions within it can be segmented effectively, and only the region other than the target object is blurred, yielding an image with a more ideal blurring effect and improving user experience.
The third aspect of the embodiments of the present application provides a photographing apparatus, including a first camera and a second camera, where an optical axis of the first camera is parallel to an optical axis of the second camera, a difference between a field angle of the first camera and a field angle of the second camera is smaller than a first threshold, and a difference between a focal length of the first camera and a focal length of the second camera is smaller than a second threshold, and the photographing apparatus further includes:
the system comprises a processor, a controller, a memory, a bus and an input/output interface;
the memory has program code stored therein;
when the processor calls the program code in the memory, the following operations are performed:
driving the controller to control the second camera to focus to a target position where a target object is located, wherein the second camera has a second depth of field when focusing to the target object;
determining a first depth of field according to the target position, wherein the overlapped depth of the first depth of field and the second depth of field is smaller than a third threshold value, and the target is located in the overlapped depth of field;
determining a first position corresponding to the first depth of field according to a depth of field table of the first camera, wherein the first position is different from the target position;
driving the controller to control the first camera to focus to the first position;
acquiring a first image and a second image, wherein the first image is an image shot when the first camera is focused to the first position, and the second image is an image shot when the second camera is focused to the target position;
and performing blurring processing on the region other than the target object in the first image or the second image to obtain a target image.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform some or all of the steps in the image processing method according to the first aspect.
A fifth aspect of embodiments of the present application provides a computer program product containing instructions, which when run on a computer, causes the computer to perform some or all of the steps of the image processing method according to the first aspect.
According to the technical scheme, the embodiment of the application has the following advantages:
in the embodiment of the application, the photographing device controls, through the controller, the second camera to focus to the target position where the target object is located, the second camera having a second depth of field. The photographing device determines a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold and the target object is located within the overlapping depth of field. The photographing device then determines, according to a depth of field table of the first camera, a first position corresponding to the first depth of field, the first position being different from the target position, and controls the first camera, through the controller, to focus to the first position. The photographing device then acquires a first image captured when the first camera is focused to the first position and a second image captured when the second camera is focused to the target position, and performs blurring processing on the region other than the target object in the first image or the second image to obtain the target image. It can be understood that the depths of field of the first camera and the second camera differ when shooting, so the first image and the second image differ in the degree of blur of objects (foreground and background) other than the target object. The target object can therefore be identified more accurately; in particular, the edge region where the target object meets its foreground and background and/or the hollow regions within it can be segmented effectively, and only the region other than the target object is blurred, so that an image with a more ideal blurring effect is obtained and user experience is improved.
Drawings
FIG. 1 is a schematic view of parallax when two cameras shoot;
FIG. 2 is a depth map calculated from disparity according to the prior art;
FIG. 3 is a schematic diagram of an embodiment of an image processing method of the present application;
fig. 4(a) is a schematic view of the arrangement of two cameras on the photographing apparatus according to the present application;
fig. 4(b) is another schematic diagram of the position arrangement of two cameras on the photographing device according to the present application;
FIG. 5 is a schematic view of the present application showing the change in depth of field of a camera as the camera's position of focus changes;
FIG. 6 is a schematic view of the overlapping depth of field of the present application;
FIG. 7 is another schematic view of the overlapping depth of field of the present application;
FIG. 8 is a depth map obtained by the present application according to disparity calculation;
FIG. 9 is a schematic diagram of an embodiment of an image processing apparatus according to the present application;
FIG. 10 is a schematic diagram of another embodiment of an image processing apparatus according to the present application;
fig. 11 is a schematic structural diagram of the photographing apparatus of the present application.
Detailed Description
The embodiment of the application provides an image processing method and a photographing device, which are used for photographing to obtain an image with a more ideal blurring effect and improving user experience.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the application can be applied to a photographing device comprising two cameras whose optical axes are parallel and whose field angles and focal lengths are the same or similar. Because the optical axes of the two cameras do not coincide, i.e. there is a distance between the two cameras, the images obtained by the two cameras have parallax. Referring to fig. 1, the two cameras are A and B, both with focal length f, the target object to be photographed is at point P, and the positions of the target object on the two imaging planes are P1 and P2 respectively. The distance from point P1 to the left edge of the imaging plane of camera A is L1, the distance from point P2 to the left edge of the imaging plane of camera B is L2, and L1 and L2 are not equal, so parallax exists between the images obtained by camera A and camera B. The distance Z from point P to the plane of the two cameras can be calculated from this parallax, and on that basis a depth map of the overlapping area of the scenes captured by the two cameras can further be obtained.
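Fig. 1 itself is not reproduced here, but the triangulation it illustrates is the standard pinhole-stereo result. Writing b for the baseline distance between the two optical axes (an assumed symbol, since the figure is unavailable), the parallax of point P and its depth are

$$d = \lvert L_1 - L_2 \rvert, \qquad Z = \frac{f\, b}{d},$$

which follows from similar triangles between the baseline and the two image-plane projections.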
The depth map may be as shown in fig. 2, and objects located in different planes of the shooting scene can be segmented according to it: for example, the two cameras focus on the plane where a human body is located to obtain the depth map shown in fig. 2, the plane where the human body is located is segmented from its foreground and background according to the depth map, and the region outside that plane is then blurred to obtain a blurred image. However, since the focal lengths of the two cameras are close, the depths of field when the two cameras focus on the human body are also close; that is, the degrees of blur of the two images with respect to the parts (foreground and background) outside the human body show no obvious difference. The depth map finally obtained, as shown in fig. 2, therefore has poor accuracy: the edge region of the human body and the hollow regions between the fingers are not clear, these areas are hard to segment, and the resulting blurring effect is not ideal.
Therefore, the embodiment of the application provides an image processing method on the basis of a double-camera shooting system, and a picture with a more ideal blurring effect can be shot.
For the sake of understanding, the following describes a specific flow in the embodiments of the present application:
referring to fig. 3, an embodiment of an image processing method in the embodiment of the present application includes:
301. the photographing device controls the second camera to focus to the target position of the target object through the controller.
In the embodiment of the application, the photographing apparatus includes a first camera, a second camera and a controller, where optical axes of the first camera and the second camera are parallel, a difference between a field angle of the first camera and a field angle of the second camera is smaller than a first threshold, and a difference between a focal length of the first camera and a focal length of the second camera is smaller than a second threshold.
The depth of field of a camera refers to the front-to-back distance range, measured from the camera, within which a photographed subject yields a sharp image. The two end points of the depth of field are the depth-of-field near point and the depth-of-field far point: the near point is the point within the depth of field closest to the camera, and the far point is the point within the depth of field farthest from the camera.
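For reference, the near and far points can be approximated with the standard hyperfocal-distance formulas (general optics, stated here as an aid and not quoted from the patent). With hyperfocal distance H ≈ f²/(Nc) and focus distance s ≫ f:

$$d_{\text{near}} \approx \frac{H s}{H + s}, \qquad d_{\text{far}} \approx \frac{H s}{H - s} \quad (s < H),$$

and d_far grows without bound as s approaches H.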
It should be noted that the field angles of the first camera and the second camera may both be greater than or equal to 60°. When both cameras have relatively large field angles, the overlapping area of the two cameras is large, and a picture with a sufficiently large viewing range can finally be obtained. Of course, the field angles of the two cameras may also take other values, for example both greater than or equal to 50°, which is not limited here.
In addition, the closest focus distances of the first camera and the second camera may be both less than or equal to 20cm, and similarly, the closest focus distances of the two cameras may also be other values, for example, both less than or equal to 30cm, which is not limited herein.
It can be understood that the photographing apparatus may be a terminal device such as a mobile phone or a tablet computer, and the first camera and the second camera may be arranged on the photographing apparatus in various ways. Referring to fig. 4(a) and fig. 4(b), taking a mobile phone as the photographing apparatus as an example, the two cameras may be arranged side by side as shown in fig. 4(a), or one above the other as shown in fig. 4(b). In addition, the two cameras may both be arranged on the back of the display screen of the mobile phone, or in the same plane as the display screen; that is, both may be front cameras or both may be rear cameras, which is not limited here. The distance between the two cameras is likewise subject to practical application and is not limited here.
Optionally, the photographing apparatus may further include more cameras in addition to the first camera and the second camera.
302. The photographing device determines a first depth of field according to the target position.
After the second camera focuses on the target position, the photographing apparatus may calculate a first depth of field, that is, the depth of field that the first camera needs to have, where an overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold, and the target object is located in the overlapping depth of field.
It can be understood that different focusing positions of a camera correspond to different depths of field. As shown in fig. 5, the abscissa represents the distance from the focusing position to the camera, and the ordinate represents the distance from the camera to the points within the corresponding depth of field; as can be seen from fig. 5, the closer the focusing position is to the camera, the smaller the current depth of field of the camera. The embodiment of the application requires that the depths of field of the first camera and the second camera differ when shooting. The second camera focuses on the target position where the target object is located, so the target object must be within the second depth of field; it must also be ensured that the first depth of field covers the target object, so the first depth of field and the second depth of field have an overlapping range. At the same time, the overlapping range of the first depth of field and the second depth of field cannot be too large: the overlapping depth of field must be smaller than the third threshold, which may be, for example, 10 cm or 20 cm, and is not limited here.
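A minimal numerical sketch of this constraint follows; it is illustrative only, using the hyperfocal approximation above, and every parameter value (focal length, f-number, circle of confusion, distances) is an assumed example, not a value from the patent:

```python
# Illustrative depth-of-field overlap check; all parameter values are assumed.

def depth_of_field(s_mm, f_mm=4.0, n=2.0, coc_mm=0.003):
    """Approximate (near, far) depth-of-field limits for focus distance s_mm."""
    h = f_mm * f_mm / (n * coc_mm) + f_mm          # hyperfocal distance
    near = h * s_mm / (h + s_mm)
    far = h * s_mm / (h - s_mm) if s_mm < h else float("inf")
    return near, far

target_mm = 300.0                                  # target position: 30 cm
dof2 = depth_of_field(target_mm)                   # second DoF, ~(269.7, 338.0)
dof1 = depth_of_field(270.0)                       # candidate first position, ~(245.2, 300.4)

lo, hi = max(dof1[0], dof2[0]), min(dof1[1], dof2[1])   # overlapping depth of field
third_threshold_mm = 100.0                         # e.g. 10 cm, as in the text
ok = (hi - lo) < third_threshold_mm and lo <= target_mm <= hi   # True here
```

With these example numbers the overlap is roughly 30 mm and contains the target, satisfying both conditions of step 302.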
303. The photographing device determines a first position corresponding to the first depth of field.
After the photographing device determines the first depth of field, a first position corresponding to the first depth of field can be determined according to a depth of field table of the first camera; the first position is the position to which the first camera needs to focus, and it is different from the target position.
It should be noted that the first position can differ from the target position in two specific ways, which are described separately below; a table-lookup sketch follows the two cases:
firstly, the distance from the first position to the photographing device is smaller than the distance from the target position to the photographing device.
As shown in fig. 6, the second camera focuses on the target position and has a second depth of field, the first camera focuses on the first position and has a first depth of field, the first position is closer to the photographing apparatus relative to the target position, and the overlapped depth of field at this time is a distance range from a far point of the first depth of field to a near point of the second depth of field.
And secondly, the distance from the first position to the photographing device is greater than the distance from the target position to the photographing device.
As shown in fig. 7, the second camera focuses on the target position and has a second depth of field, the first camera focuses on the first position and has a first depth of field, the first position is farther from the photographing apparatus relative to the target position, and the overlapped depth of field at this time is a distance range from a near point of the first depth of field to a far point of the second depth of field.
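The table lookup of step 303 can be sketched as follows. The table format and its values are hypothetical (the patent does not specify how the depth of field table is stored); the entries reuse the example optics above, and either of the two cases can be returned depending on which entry satisfies the constraints:

```python
# Hypothetical depth of field table lookup; format and values are assumed.
# Each entry: focus position (mm) -> (DoF near point, DoF far point) in mm.
DOF_TABLE = {
    200: (186.1, 216.2),
    270: (245.2, 300.4),
    300: (269.7, 338.0),
    350: (309.4, 402.8),
}

def first_position(table, target_pos, dof2, threshold_mm):
    """Pick a focus position != target_pos whose depth of field keeps the target
    inside an overlap with dof2 that is shorter than threshold_mm."""
    for pos, (near, far) in sorted(table.items()):
        if pos == target_pos:
            continue
        lo, hi = max(near, dof2[0]), min(far, dof2[1])    # overlapping depth of field
        if lo <= target_pos <= hi and (hi - lo) < threshold_mm:
            return pos
    return None

print(first_position(DOF_TABLE, 300, DOF_TABLE[300], 100.0))   # -> 270 (first case)
```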
304. The photographing device controls the first camera to focus to a first position through the controller.
After the photographing apparatus determines the first position, it may drive the controller to move the first camera along the optical axis and focus it to the first position.
305. The photographing device acquires a first image and a second image.
The photographing device can acquire a first image captured when the first camera is focused to the first position and a second image captured when the second camera is focused to the target position. It can be understood that the first camera and the second camera can be started to perform real-time image processing, such as basic luminance and color processing, and the processed images can be sent to the display screen for framing, so that the first image and the second image are finally obtained.
When shooting with the photographing device, the user needs to hold the device steady and no object should move in the shooting scene, so that the first image and the second image are two images captured of the same scene.
306. The photographing device performs blurring processing on the region other than the target object in the first image or the second image to obtain a target image.
The photographing device may calculate parallax information of the first image and the second image and then compute depth information from the parallax information. Optionally, the depth information may be first depth information having the same coordinates as the first image, or second depth information having the same coordinates as the second image. The photographing device may then determine, according to the first depth information, a first region in the first image other than the target object, or determine, according to the second depth information, a second region in the second image other than the target object, where the first region includes an edge region in the first image connected to the target object and/or a hollow region in the target object, and the second region likewise includes an edge region in the second image connected to the target object and/or a hollow region in the target object. The depth information may specifically be represented as a depth map. Fig. 8 shows a depth map captured and calculated using the method of the embodiment of the present application; the photographing device can segment the target object from the region outside it according to this depth map, that is, distinguish the target object from the region outside it. As can be seen from fig. 8, taking the human body as the photographed target object, the edge region of the human body and the hollow regions between the fingers are obviously clearer than in the depth map of fig. 2; that is, fig. 8 is more accurate than fig. 2, and these regions can be segmented more effectively. On this basis, blurring processing is finally performed on the region outside the target object to obtain a blurred target image, where regions farther from the plane of the target object are blurred more strongly.
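A minimal sketch of step 306 is given below. It is illustrative only: the patent does not prescribe a particular stereo matcher or blur kernel, so OpenCV's semi-global block matching is used as one possible choice, the input images are assumed to be rectified, and a single blur strength is applied for brevity whereas the text grades the blur with distance from the target plane:

```python
# Illustrative sketch of step 306 with OpenCV; matcher choice, kernel size
# and disparity tolerance are assumptions, not taken from the patent.
import cv2
import numpy as np

def blur_outside_target(img1_gray, img2_gray, img1_color, target_disp, tol=8.0):
    # Parallax (disparity) information between the first and second images.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disp = matcher.compute(img1_gray, img2_gray).astype(np.float32) / 16.0

    # Region other than the target: pixels whose disparity departs from the
    # target plane's, which also separates edge and hollow regions.
    outside = np.abs(disp - target_disp) > tol

    # Blur the whole frame, then restore the target pixels.
    blurred = cv2.GaussianBlur(img1_color, (31, 31), 0)
    out = img1_color.copy()
    out[outside] = blurred[outside]
    return out
```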
Alternatively, the photographing apparatus may select the first image and perform blurring processing on the first region other than the target object within it to obtain the target image, or select the second image and perform blurring processing on the second region other than the target object within it; which image is chosen for the blurring processing is not limited here.
In the embodiment of the application, the photographing device controls, through the controller, the second camera to focus to the target position where the target object is located, the second camera having a second depth of field. The photographing device determines a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold and the target object is located within the overlapping depth of field. The photographing device then determines, according to the depth of field table of the first camera, a first position corresponding to the first depth of field, the first position being different from the target position, and controls the first camera, through the controller, to focus to the first position. The photographing device then acquires a first image captured when the first camera is focused to the first position and a second image captured when the second camera is focused to the target position, and performs blurring processing on the region other than the target object in the first image or the second image to obtain the target image. It can be understood that the depths of field of the first camera and the second camera differ when shooting, so the first image and the second image differ in the degree of blur of objects (foreground and background) other than the target object. The target object can therefore be identified more accurately; in particular, the edge region where the target object meets its foreground and background and/or the hollow regions within it can be segmented effectively, and only the region other than the target object is blurred, so that an image with a more ideal blurring effect is obtained and user experience is improved.
The image processing method in the embodiment of the present application is described above, and the image processing apparatus in the embodiment of the present application is described below:
referring to fig. 9, an embodiment of an image processing apparatus in the embodiment of the present application includes a first camera and a second camera, optical axes of the first camera and the second camera are parallel, a difference between a field angle of the first camera and a field angle of the second camera is smaller than a first threshold, and a difference between a focal length of the first camera and a focal length of the second camera is smaller than a second threshold;
further, the image processing apparatus further includes:
the control unit 901 is configured to control the second camera to focus to a target position where the target object is located, where the second camera has a second depth of field when focusing to the target object;
a first determining unit 902, configured to determine a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold and the target object is located within the overlapping depth of field;
a second determining unit 903, configured to determine a first position corresponding to the first depth of field according to a depth of field table of the first camera, where the first position is different from the target position;
the control unit 901 is further configured to control the first camera to focus to a first position;
the acquiring unit 904 is configured to acquire a first image and a second image, where the first image is an image captured when the first camera is focused to a first position, and the second image is an image captured when the second camera is focused to a target position;
the blurring unit 905 is configured to perform blurring processing on a region other than the target object in the first image or the second image to obtain a target image.
It should be noted that the control unit 901 in this embodiment of the application may include the controller, that is, a controller integrated in the control unit 901 may directly control the focusing of the first camera and the second camera; alternatively, the control unit 901 and the controller may be two different units, with the control unit 901 driving the controller to control the focusing of the first camera and the second camera.
In this embodiment, the control unit 901 controls the second camera to focus to the target position where the target object is located, the second camera having a second depth of field. The first determining unit 902 determines a first depth of field according to the target position, where the overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold and the target object is located within the overlapping depth of field. The second determining unit 903 then determines a first position corresponding to the first depth of field according to the depth of field table of the first camera, the first position being different from the target position, and the control unit 901 controls the first camera to focus to the first position. The obtaining unit 904 then obtains a first image captured when the first camera is focused to the first position and a second image captured when the second camera is focused to the target position, and the blurring unit 905 performs blurring processing on the region other than the target object in the first image or the second image to obtain the target image. It can be understood that the depths of field of the first camera and the second camera differ when shooting, so the first image and the second image differ in the degree of blur of objects (foreground and background) other than the target object; the target object can be identified more accurately, the edge region where it meets its foreground and background and/or the hollow regions within it can be segmented effectively, and only the region other than the target object is blurred, yielding an image with a more ideal blurring effect and improving user experience.
Alternatively, referring to fig. 10 on the basis of the embodiment corresponding to fig. 9, in another embodiment of the image processing apparatus according to the embodiment of the present application,
the blurring unit 905 includes:
the first calculation module 9051 is configured to calculate parallax information of the first image and the second image;
the second calculating module 9052 is configured to calculate first depth information of the first image according to the parallax information;
the determining module 9053 is configured to determine a first region, other than the target object, in the first image according to the first depth information, where the first region includes an edge region, which is connected to the target object, in the first image and/or a hollow region in the target object;
and the blurring module 9054 is configured to perform blurring processing on the first region in the first image to obtain a target image.
Alternatively,
the second calculating module 9052 is further configured to calculate second depth information of the second image according to the parallax information;
the determining module 9053 is further configured to determine a second region, other than the target object, in the second image according to the second depth information, where the second region includes an edge region, connected to the target object, in the second image and/or a hollow region in the target object;
the blurring module 9054 is further configured to perform blurring processing on the second region in the second image to obtain a target image.
The image processing apparatus in the embodiment of the present application is described above from the perspective of the modular functional entity, and the photographing device in the embodiment of the present application is described below from the perspective of hardware processing:
as shown in fig. 11, for convenience of description, only the parts related to the embodiments of the present application are shown, and details of the technology are not disclosed, please refer to the method part of the embodiments of the present application. The photographing device may be a terminal device including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a vehicle-mounted computer, and the like, taking the photographing device as the mobile phone as an example:
fig. 11 is a block diagram illustrating a partial structure of a mobile phone related to a photographing apparatus provided in an embodiment of the present application. Referring to fig. 11, the cellular phone includes: memory 1120, input unit 1130, display unit 1140, controller 1150, first camera 1160, second camera 1170, processor 1180, and power supply 1190. Those skilled in the art will appreciate that the handset configuration shown in fig. 11 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 11:
the memory 1120 may be used to store software programs and modules, and the processor 1180 may execute various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1120 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 1130 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 1130 may include a touch panel 1131 and other input devices 1132. Touch panel 1131, also referred to as a touch screen, can collect touch operations of a user on or near the touch panel 1131 (for example, operations of the user on or near touch panel 1131 by using any suitable object or accessory such as a finger or a stylus pen), and drive corresponding connection devices according to a preset program. Alternatively, the touch panel 1131 may include two parts, namely, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1180, and can receive and execute commands sent by the processor 1180. In addition, the touch panel 1131 can be implemented by using various types, such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 1130 may include other input devices 1132 in addition to the touch panel 1131. In particular, other input devices 1132 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1140 may be used to display information input by the user or information provided to the user and various menus of the mobile phone, and in the embodiment of the present invention, is mainly used to display an image captured by the camera. The Display unit 1140 may include a Display panel 1141, and optionally, the Display panel 1141 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1131 can cover the display panel 1141, and when the touch panel 1131 detects a touch operation on or near the touch panel, the touch panel is transmitted to the processor 1180 to determine the type of the touch event, and then the processor 1180 provides a corresponding visual output on the display panel 1141 according to the type of the touch event. Although in fig. 11, the touch panel 1131 and the display panel 1141 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1131 and the display panel 1141 may be integrated to implement the input and output functions of the mobile phone.
The controller 1150 may be used to control the first camera and the second camera to move along the direction of the optical axis and perform focusing.
The first camera 1160 and the second camera 1170 may be configured to capture a scene to obtain a first image and a second image, respectively, where optical axes of the first camera 1160 and the second camera 1170 are parallel, a difference between a field angle of the first camera 1160 and a field angle of the second camera 1170 is smaller than a first threshold, and a difference between a focal length of the first camera 1160 and a focal length of the second camera 1170 is smaller than a second threshold.
The processor 1180 is a control center of the mobile phone, and is connected to various parts of the whole mobile phone through various interfaces and lines, and executes various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1120 and calling data stored in the memory 1120, thereby performing overall monitoring of the mobile phone. Optionally, processor 1180 may include one or more processing units; preferably, the processor 1180 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated within processor 1180.
The phone also includes a power supply 1190 (e.g., a battery) for powering the various components. Preferably, the power supply may be logically connected to the processor 1180 via a power management system, so that charging, discharging, and power consumption are managed through the power management system.
In this embodiment of the application, the processor 1180 is specifically configured to execute all or part of the actions performed by the photographing apparatus in the embodiment shown in fig. 3, and details are not described here again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (10)
1. An image processing method is applied to a photographing device, the photographing device comprises a first camera, a second camera and a controller, the optical axes of the first camera and the second camera are parallel, the difference value between the field angle of the first camera and the field angle of the second camera is smaller than a first threshold value, and the difference value between the focal length of the first camera and the focal length of the second camera is smaller than a second threshold value, and the method comprises the following steps:
the photographing equipment controls the second camera to focus to a target position where a target object is located through the controller, wherein the second camera has a second depth of field when focusing to the target object;
the photographing device determines a first depth of field according to the position of the target, wherein the overlapped depth of field of the first depth of field and the second depth of field is smaller than a third threshold, and the target is located in the overlapped depth of field;
the photographing equipment determines a first position corresponding to the first depth of field according to a depth of field table of the first camera, wherein the first position is different from the target position;
the photographing equipment controls the first camera to focus to the first position through the controller;
the photographing equipment acquires a first image and a second image, wherein the first image is an image shot when the first camera is focused to the first position, and the second image is an image shot when the second camera is focused to the target position;
and the photographing equipment calculates the parallax information of the first image and the second image, and carries out fuzzy processing on the region except the target object in the first image or the second image according to the parallax information to obtain a target image.
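To make the focus-selection logic of claim 1 concrete, the following Python sketch is offered as an illustration only: it derives near and far depth-of-field limits from standard thin-lens formulas, then scans a hypothetical depth-of-field table for a first focus position whose depth of field overlaps the second camera's by less than the third threshold while still covering the target. The function names, table layout and all numeric parameters are assumptions, not the implementation disclosed by the patent.

```python
# Illustrative sketch only: thin-lens depth-of-field overlap and focus selection.

def dof_limits(f_mm, N, coc_mm, focus_mm):
    """Near/far depth-of-field limits (mm) when focused at focus_mm."""
    H = f_mm ** 2 / (N * coc_mm) + f_mm                 # hyperfocal distance
    near = H * focus_mm / (H + focus_mm - f_mm)
    den = H - focus_mm + f_mm
    far = float("inf") if den <= 0 else H * focus_mm / den
    return near, far

def overlap(a, b):
    """Intersection of two (near, far) intervals, or None if disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo < hi else None

def pick_first_position(dof_table, second_dof, target_mm, third_threshold_mm):
    """Return a focus position from the hypothetical depth-of-field table whose
    depth of field overlaps second_dof by less than the threshold while the
    target still lies inside the overlap, mirroring the selection in claim 1."""
    for position_mm, first_dof in dof_table:
        if position_mm == target_mm:                    # claim 1: first position
            continue                                    # differs from target
        ov = overlap(first_dof, second_dof)
        if ov and ov[0] <= target_mm <= ov[1] and ov[1] - ov[0] < third_threshold_mm:
            return position_mm
    return None

# Example with assumed optics: f = 4 mm lens at f/1.8, circle of confusion 0.002 mm.
table = [(d, dof_limits(4.0, 1.8, 0.002, d)) for d in (450, 550, 600, 700)]
second_dof = dof_limits(4.0, 1.8, 0.002, 500)           # second camera at 500 mm
print(pick_first_position(table, second_dof, 500, 100))  # -> 450
```

With these assumed values the second camera's depth of field spans roughly 450–563 mm, and focusing the first camera at 450 mm yields an overlap of about 50 mm that still contains the 500 mm target, so 450 is selected.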
2. The method according to claim 1, wherein the photographing device performing blurring processing, according to the parallax information, on the region other than the target object in the first image or the second image to obtain the target image comprises:
the photographing device calculates first depth information of the first image according to the parallax information;
the photographing device determines, according to the first depth information, a first region other than the target object in the first image, wherein the first region comprises an edge region adjoining the target object in the first image and/or a hollow region within the target object; and
the photographing device performs blurring processing on the first region in the first image to obtain the target image.
3. The method according to claim 1, wherein the photographing device performing blurring processing, according to the parallax information, on the region other than the target object in the first image or the second image to obtain the target image comprises:
the photographing device calculates second depth information of the second image according to the parallax information;
the photographing device determines, according to the second depth information, a second region other than the target object in the second image, wherein the second region comprises an edge region adjoining the target object in the second image and/or a hollow region within the target object; and
the photographing device performs blurring processing on the second region in the second image to obtain the target image.
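As a rough, non-authoritative illustration of the depth-based blurring in claims 2 and 3, the sketch below estimates parallax with OpenCV's semi-global block matcher, converts it to depth via Z = f·B/d, and blurs every pixel whose depth falls outside an assumed band around the target. The focal length, baseline, depth band and blur kernel are placeholder values, and real handling of edge regions and hollow regions would be more involved than a single depth threshold.

```python
import cv2
import numpy as np

def blur_background(img_ref, img_other, focal_px, baseline_mm,
                    target_depth_mm, band_mm=150.0):
    """Keep the target's depth band sharp and blur the rest (illustrative)."""
    gray_l = cv2.cvtColor(img_ref, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_other, cv2.COLOR_BGR2GRAY)

    # Parallax (disparity) between the two views; SGBM returns fixed-point
    # values scaled by 16, so divide to get pixels.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5)
    disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0

    # Depth from disparity: Z = f * B / d, valid only where disparity > 0.
    with np.errstate(divide="ignore"):
        depth_mm = np.where(disparity > 0,
                            focal_px * baseline_mm / disparity, np.inf)

    # Pixels outside the target's depth band (e.g. background adjoining the
    # subject's edge, or holes seen through the subject) get blurred.
    keep = np.abs(depth_mm - target_depth_mm) < band_mm
    blurred = cv2.GaussianBlur(img_ref, (31, 31), 0)
    return np.where(keep[..., None], img_ref, blurred)
```

A caller would pass the two captured frames plus calibration-derived values, e.g. `blur_background(first_img, second_img, focal_px=1400.0, baseline_mm=12.0, target_depth_mm=500.0)`; all of these numbers are invented for the example.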
4. The method according to any one of claims 1 to 3, wherein the field angles of the first camera and the second camera are each greater than or equal to 60 degrees.
5. The method according to any one of claims 1 to 3, wherein the closest focusing distance of each of the first camera and the second camera is less than or equal to 20 cm.
6. An image processing apparatus, comprising a first camera and a second camera, wherein an optical axis of the first camera is parallel to an optical axis of the second camera, a difference between a field angle of the first camera and a field angle of the second camera is smaller than a first threshold, and a difference between a focal length of the first camera and a focal length of the second camera is smaller than a second threshold, the apparatus further comprising:
a control unit, configured to control the second camera to focus to a target position where a target object is located, wherein the second camera has a second depth of field when focused on the target object;
a first determining unit, configured to determine a first depth of field according to the target position, wherein an overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold, and the target object is located within the overlapping depth of field;
a second determining unit, configured to determine a first position corresponding to the first depth of field according to a depth-of-field table of the first camera, wherein the first position is different from the target position;
wherein the control unit is further configured to control the first camera to focus to the first position;
an acquisition unit, configured to acquire a first image and a second image, wherein the first image is an image captured when the first camera is focused to the first position, and the second image is an image captured when the second camera is focused to the target position; and
a blurring unit, configured to calculate parallax information of the first image and the second image, and to perform blurring processing, according to the parallax information, on a region other than the target object in the first image or the second image to obtain a target image.
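Purely as a reading aid, the functional units recited in claims 6 to 8 could be organized as in the hypothetical Python skeleton below; every class and method name here is invented, and the method bodies are stubs rather than the patented logic.

```python
class ImageProcessingApparatus:
    """Hypothetical skeleton mirroring the unit split of claims 6-8."""

    def __init__(self, first_camera, second_camera, controller):
        self.first_camera = first_camera
        self.second_camera = second_camera
        self.controller = controller

    # control unit: drives either camera to a focus position
    def focus(self, camera, position):
        self.controller.focus(camera, position)

    # first determining unit: first depth of field from the target position
    def determine_first_dof(self, target_position):
        raise NotImplementedError

    # second determining unit: first position from the depth-of-field table
    def determine_first_position(self, first_dof):
        raise NotImplementedError

    # acquisition unit: one image from each camera
    def acquire(self):
        return self.first_camera.capture(), self.second_camera.capture()

    # blurring unit: parallax -> depth -> blur regions outside the target
    def blur_to_target_image(self, first_image, second_image):
        raise NotImplementedError
```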
7. The image processing apparatus according to claim 6, wherein the blurring unit comprises:
a first calculation module, configured to calculate the parallax information of the first image and the second image;
a second calculation module, configured to calculate first depth information of the first image according to the parallax information;
a determining module, configured to determine, according to the first depth information, a first region other than the target object in the first image, wherein the first region comprises an edge region adjoining the target object in the first image and/or a hollow region within the target object; and
a blurring module, configured to perform blurring processing on the first region in the first image to obtain the target image.
8. The image processing apparatus according to claim 6, wherein the blurring unit comprises:
a first calculation module, configured to calculate the parallax information of the first image and the second image;
a second calculation module, configured to calculate second depth information of the second image according to the parallax information;
a determining module, configured to determine, according to the second depth information, a second region other than the target object in the second image, wherein the second region comprises an edge region adjoining the target object in the second image and/or a hollow region within the target object; and
a blurring module, configured to perform blurring processing on the second region in the second image to obtain the target image.
9. A photographing device, comprising a first camera and a second camera, wherein an optical axis of the first camera is parallel to an optical axis of the second camera, a difference between a field angle of the first camera and a field angle of the second camera is smaller than a first threshold, and a difference between a focal length of the first camera and a focal length of the second camera is smaller than a second threshold, characterized in that the photographing device further comprises:
a processor, a controller, a memory, a bus and an input/output interface, wherein
the memory stores program code; and
when the processor invokes the program code in the memory, the following operations are performed:
driving the controller to control the second camera to focus to a target position where a target object is located, wherein the second camera has a second depth of field when focused on the target object;
determining a first depth of field according to the target position, wherein an overlapping depth of field of the first depth of field and the second depth of field is smaller than a third threshold, and the target object is located within the overlapping depth of field;
determining, according to a depth-of-field table of the first camera, a first position corresponding to the first depth of field, wherein the first position is different from the target position;
driving the controller to control the first camera to focus to the first position;
acquiring a first image and a second image, wherein the first image is an image captured when the first camera is focused to the first position, and the second image is an image captured when the second camera is focused to the target position; and
calculating parallax information of the first image and the second image, and performing blurring processing, according to the parallax information, on a region other than the target object in the first image or the second image to obtain a target image.
10. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1 to 5.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810028792.1A CN110035218B (en) | 2018-01-11 | 2018-01-11 | Image processing method, image processing device and photographing equipment |
PCT/CN2018/113926 WO2019137081A1 (en) | 2018-01-11 | 2018-11-05 | Image processing method, image processing apparatus, and photographing device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110035218A CN110035218A (en) | 2019-07-19 |
CN110035218B true CN110035218B (en) | 2021-06-15 |
Family ID: 67219302
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112561914A (en) * | 2019-09-10 | 2021-03-26 | 阿里巴巴集团控股有限公司 | Image processing method, system, computing device and storage medium |
CN111144404B (en) * | 2019-12-06 | 2023-08-11 | 恒大恒驰新能源汽车科技(广东)有限公司 | Method, apparatus, system, computer device and storage medium for detecting legacy object |
JP7409604B2 (en) * | 2019-12-18 | 2024-01-09 | キヤノン株式会社 | Image processing device, imaging device, image processing method, program and recording medium |
WO2021134179A1 (en) * | 2019-12-30 | 2021-07-08 | 深圳市大疆创新科技有限公司 | Focusing method and apparatus, photographing device, movable platform and storage medium |
CN112469984B (en) * | 2019-12-31 | 2024-04-09 | 深圳迈瑞生物医疗电子股份有限公司 | Image analysis device and imaging method thereof |
CN111246093B (en) * | 2020-01-16 | 2021-07-20 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN112702530B (en) * | 2020-12-29 | 2023-04-25 | 维沃移动通信(杭州)有限公司 | Algorithm control method and electronic equipment |
CN113688824B (en) * | 2021-09-10 | 2024-02-27 | 福建汇川物联网技术科技股份有限公司 | Information acquisition method, device and storage medium for construction node |
CN114677425A (en) * | 2022-03-17 | 2022-06-28 | 北京小马慧行科技有限公司 | Method and device for determining depth of field of object |
CN116051362B (en) * | 2022-08-24 | 2023-09-15 | 荣耀终端有限公司 | Image processing method and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103763477A (en) * | 2014-02-21 | 2014-04-30 | 上海果壳电子有限公司 | Double-camera after-shooting focusing imaging device and method |
CN104424640A (en) * | 2013-09-06 | 2015-03-18 | 格科微电子(上海)有限公司 | Method and device for carrying out blurring processing on images |
CN105847674A (en) * | 2016-03-25 | 2016-08-10 | 维沃移动通信有限公司 | Preview image processing method based on mobile terminal, and mobile terminal therein |
CN107087091A (en) * | 2017-05-31 | 2017-08-22 | 广东欧珀移动通信有限公司 | The casing assembly and electronic equipment of electronic equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130169760A1 (en) * | 2012-01-04 | 2013-07-04 | Lloyd Watts | Image Enhancement Methods And Systems |
Also Published As
Publication number | Publication date |
---|---|
CN110035218A (en) | 2019-07-19 |
WO2019137081A1 (en) | 2019-07-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110035218B (en) | Image processing method, image processing device and photographing equipment | |
CN110675420B (en) | Image processing method and electronic equipment | |
CN113132618B (en) | Auxiliary photographing method and device, terminal equipment and storage medium | |
WO2022000992A1 (en) | Photographing method and apparatus, electronic device, and storage medium | |
CN107507239B (en) | A kind of image partition method and mobile terminal | |
WO2013146269A1 (en) | Image capturing device, image processing method, and program | |
US20130321347A1 (en) | Virtual touch device without pointer | |
JP6016226B2 (en) | Length measuring device, length measuring method, program | |
CN103345301A (en) | Depth information acquisition method and device | |
CN103297696A (en) | Photographing method, photographing device and photographing terminal | |
WO2023173668A1 (en) | Input recognition method in virtual scene, device and storage medium | |
CN113194253B (en) | Shooting method and device for removing reflection of image and electronic equipment | |
US20140354784A1 (en) | Shooting method for three dimensional modeling and electronic device supporting the same | |
CN109726614A (en) | 3D stereoscopic imaging method and device, readable storage medium storing program for executing, electronic equipment | |
CN111127541B (en) | Method and device for determining vehicle size and storage medium | |
WO2022156672A1 (en) | Photographing method and apparatus, electronic device and readable storage medium | |
CN109947243B (en) | Intelligent electronic equipment gesture capturing and recognizing technology based on touch hand detection | |
CN108550182B (en) | Three-dimensional modeling method and terminal | |
CN109993059B (en) | Binocular vision and object recognition technology based on single camera on intelligent electronic equipment | |
CN109960406B (en) | Intelligent electronic equipment gesture capturing and recognizing technology based on action between fingers of two hands | |
EP3962062A1 (en) | Photographing method and apparatus, electronic device, and storage medium | |
CN112468722B (en) | Shooting method, device, equipment and storage medium | |
CN113192145B (en) | Equipment calibration method and device, electronic equipment and storage medium | |
CN110599602B (en) | AR model training method and device, electronic equipment and storage medium | |
CN112529770A (en) | Image processing method, image processing device, electronic equipment and readable storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |